Updates from: 11/09/2022 02:09:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Aad Sspr Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/aad-sspr-technical-profile.md
Previously updated : 06/23/2020 Last updated : 11/08/2022
This technical profile:
- Uses the Azure AD SSPR service to generate and send a code to an email address, and then verifies the code. - Validates an email address via a verification code. - ## Protocol The **Name** attribute of the **Protocol** element needs to be set to `Proprietary`. The **handler** attribute must contain the fully qualified name of the protocol handler assembly that is used by Azure AD B2C:
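The excerpt ends where the article shows the Protocol XML. As a rough sketch only (the handler value below is a placeholder, not copied from the article), a proprietary **Protocol** element follows this shape:

```xml
<TechnicalProfile Id="AadSspr-SendCode">
  <DisplayName>Send verification code via Azure AD SSPR</DisplayName>
  <!-- Placeholder handler: use the exact assembly name given in the article -->
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.{SsprProtocolProviderClassName}, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</TechnicalProfile>
```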
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 06/27/2022 Last updated : 11/08/2022
Azure Active Directory B2C [user flows and custom policies](user-flow-overview.m
## Terms for features in public preview - We encourage you to use public preview features for evaluation purposes only.+ - [Service level agreements (SLAs)](https://azure.microsoft.com/support/legal/sla/active-directory-b2c) don't apply to public preview features.+ - Support requests for public preview features can be submitted through regular support channels. ## User flows
The following table summarizes the Security Assertion Markup Language (SAML) app
| - | :--: | -- | | [MFA using time-based one-time password (TOTP) with authenticator apps](multi-factor-authentication.md#verification-methods) | GA | Users can use any authenticator app that supports TOTP verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app).| | [Phone factor authentication](phone-factor-technical-profile.md) | GA | |
-| [Azure AD MFA authentication](multi-factor-auth-technical-profile.md) | Preview | |
+| [Azure AD MFA authentication](multi-factor-auth-technical-profile.md) | GA | |
| [One-time password](one-time-password-technical-profile.md) | GA | | | [Azure Active Directory](active-directory-technical-profile.md) as local directory | GA | | | [Predicate validations](predicates.md) | GA | For example, password complexity. |
active-directory-b2c Multi Factor Auth Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-auth-technical-profile.md
Previously updated : 12/09/2021 Last updated : 11/08/2022
Azure Active Directory B2C (Azure AD B2C) provides support for verifying a phone number by using a verification code, or verifying a Time-based One-time Password (TOTP) code. ## Protocol
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md
Previously updated : 11/01/2022 Last updated : 11/08/2022
Modern authentication is supported for the Microsoft Office 2013 clients and lat
This article shows you how to use app passwords for legacy applications that don't support multi-factor authentication prompts. >[!NOTE]
-> App passwords don't work with Conditional Access based multi-factor authentication policies and modern authentication. App passwords only work with legacy authentication protocols such as IMAP and SMTP.
+>App passwords don't work for accounts that are required to use modern authentication.
## Overview and considerations
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
The filter for devices API is available in Microsoft Graph v1.0 endpoint and can
The following device attributes can be used with the filter for devices condition in Conditional Access.
+> [!NOTE]
+> Azure AD uses device authentication to evaluate device filter rules. For a device that isn't registered with Azure AD, all device properties are considered null values, and the device attributes can't be determined because the device doesn't exist in the directory. The best way to target policies for unregistered devices is to use a negative operator, because the configured filter rule then applies. If you were to use a positive operator, the filter rule would apply only when a device exists in the directory and the configured rule matches the attribute on the device.
+ | Supported device attributes | Supported operators | Supported values | Example | | | | | | | deviceId | Equals, NotEquals, In, NotIn | A valid deviceId that is a GUID | (device.deviceid -eq "498c4de7-1aee-4ded-8d5d-000000000000") |
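For example, building on the note above, a filter rule that uses a negative operator (and therefore also applies to devices that aren't registered in the directory) could look like the following; the GUID is the sample value from the table:

```
device.deviceid -ne "498c4de7-1aee-4ded-8d5d-000000000000"
```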
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
Organizations can require that an approved client app is used to access selecte
To apply this grant control, the device must be registered in Azure AD, which requires using a broker app. The broker app can be Microsoft Authenticator for iOS, or either Microsoft Authenticator or Microsoft Company Portal for Android devices. If a broker app isn't installed on the device when the user attempts to authenticate, the user is redirected to the appropriate app store to install the required broker app.
-The following client apps support this setting:
+The following client apps support this setting. This list isn't exhaustive and is subject to change:
- Microsoft Azure Information Protection - Microsoft Bookings
To apply this grant control, Conditional Access requires that the device is regi
Applications must have the Intune SDK with policy assurance implemented and must meet certain other requirements to support this setting. Developers who are implementing applications with the Intune SDK can find more information on these requirements in the [SDK documentation](/mem/intune/developer/app-sdk-get-started).
-The following client apps are confirmed to support this setting:
+The following client apps are confirmed to support this setting. This list isn't exhaustive and is subject to change:
- Microsoft Cortana - Microsoft Edge
The following client apps are confirmed to support this setting:
- MultiLine for Intune - Nine Mail - Email and Calendar - Notate for Intune-- Yammer (iOS and iPadOS)
+- Yammer (Android, iOS, and iPadOS)
This list isn't all-encompassing. If your app isn't in this list, check with the application vendor to confirm support.
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md
Previously updated : 08/05/2022 Last updated : 11/07/2022
Risk-based policies require access to [Identity Protection](../identity-protecti
Other products and features that may interact with Conditional Access policies require appropriate licensing for those products and features.
+When licenses required for Conditional Access expire, policies aren't automatically disabled or deleted so customers can migrate away from Conditional Access policies without a sudden change in their security posture. Remaining policies can be viewed and deleted, but no longer updated.
+
+[Security defaults](../fundamentals/concept-fundamentals-security-defaults.md) help protect against identity-related attacks and are available for all customers.
+ ## Next steps - [Building a Conditional Access policy piece by piece](concept-conditional-access-policies.md)
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
Last updated 01/07/2022
-+ +
+# Customer intent: As a tenant administrator, I want to set up MFA requirement for B2B guest users to protect my apps and resources.
# Tutorial: Enforce multi-factor authentication for B2B guest users
-When collaborating with external B2B guest users, it's a good idea to protect your apps with multi-factor authentication (MFA) policies. Then external users will need more than just a user name and password to access your resources. In Azure Active Directory (Azure AD), you can accomplish this goal with a Conditional Access policy that requires MFA for access. MFA policies can be enforced at the tenant, app, or individual guest user level, the same way that they are enabled for members of your own organization. The resource tenant is always responsible for Azure AD Multi-Factor Authentication for users, even if the guest user's organization has Multi-Factor Authentication capabilities.
+When collaborating with external B2B guest users, it's a good idea to protect your apps with multi-factor authentication (MFA) policies. Then external users will need more than just a user name and password to access your resources. In Azure Active Directory (Azure AD), you can accomplish this goal with a Conditional Access policy that requires MFA for access. MFA policies can be enforced at the tenant, app, or individual guest user level, the same way that they're enabled for members of your own organization. The resource tenant is always responsible for Azure AD Multi-Factor Authentication for users, even if the guest user's organization has Multi-Factor Authentication capabilities.
Example:
-![Diagram showing a guest user signing into a company's apps](media/tutorial-mfa/aad-b2b-mfa-example.png)
+ 1. An admin or employee at Company A invites a guest user to use a cloud or on-premises application that is configured to require MFA for access. 1. The guest user signs in with their own work, school, or social identity.
Example:
In this tutorial, you will: > [!div class="checklist"]
+>
> - Test the sign-in experience before MFA setup. > - Create a Conditional Access policy that requires MFA for access to a cloud app in your environment. In this tutorial, we'll use the Microsoft Azure Management app to illustrate the process. > - Use the What If tool to simulate MFA sign-in.
If you don't have an Azure subscription, create a [free account](https://azure
To complete the scenario in this tutorial, you need: -- **Access to Azure AD Premium edition**, which includes Conditional Access policy capabilities. To enforce MFA, you need to create an Azure AD Conditional Access policy. Note that MFA policies are always enforced at your organization, regardless of whether the partner has MFA capabilities.
+- **Access to Azure AD Premium edition**, which includes Conditional Access policy capabilities. To enforce MFA, you need to create an Azure AD Conditional Access policy. MFA policies are always enforced at your organization, regardless of whether the partner has MFA capabilities.
- **A valid external email account** that you can add to your tenant directory as a guest user and use to sign in. If you don't know how to create a guest account, see [Add a B2B guest user in the Azure portal](add-users-administrator.md). ## Create a test guest user in Azure AD
To complete the scenario in this tutorial, you need:
1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator. 1. In the Azure portal, select **Azure Active Directory**. 1. In the left menu, under **Manage**, select **Users**.
-1. Select **New guest user**.
+1. Select **New user**, and then select **Invite external user**.
- ![Screenshot showing where to select the New guest user option](media/tutorial-mfa/tutorial-mfa-user-3.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-new-user.png" alt-text="Screenshot showing where to select the new guest user option.":::
1. Under **Identity**, enter the email address of the external user. Optionally, include a name and welcome message.
- ![Screenshot showing where to enter the guest invitation message](media/tutorial-mfa/tutorial-mfa-user-4.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-new-user-identity.png" alt-text="Screenshot showing where to enter the guest email.":::
1. Select **Invite** to automatically send the invitation to the guest user. A **Successfully invited user** message appears. 1. After you send the invitation, the user account is automatically added to the directory as a guest.
To complete the scenario in this tutorial, you need:
## Test the sign-in experience before MFA setup 1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
-1. Note that you're able to access the Azure portal using just your sign-in credentials. No additional authentication is required.
+1. You should be able to access the Azure portal using only your sign-in credentials. No other authentication is required.
1. Sign out. ## Create a Conditional Access policy that requires MFA
To complete the scenario in this tutorial, you need:
1. On the **Conditional Access** page, in the toolbar on the top, select **New policy**. 1. On the **New** page, in the **Name** textbox, type **Require MFA for B2B portal access**. 1. In the **Assignments** section, choose the link under **Users and groups**.
-1. On the **Users and groups** page, choose **Select users and groups**, and then choose **All guest and external users**.
+1. On the **Users and groups** page, choose **Select users and groups**, and then choose **Guest or external users**. You can assign the policy to different [external user types](authentication-conditional-access.md#assigning-conditional-access-policies-to-external-user-types-preview), built-in [directory roles](../conditional-access/concept-conditional-access-users-groups.md#include-users), or users and groups.
+
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-user-access.png" alt-text="Screenshot showing selecting all guest users.":::
- ![Screenshot showing selecting all guest users](media/tutorial-mfa/tutorial-mfa-policy-6.png)
1. In the **Assignments** section, choose the link under **Cloud apps or actions**. 1. Choose **Select apps**, and then choose the link under **Select**.
- ![Screenshot showing the Cloud apps page and the Select option](media/tutorial-mfa/tutorial-mfa-policy-10.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-app-access.png" alt-text="Screenshot showing the Cloud apps page and the Select option." lightbox="media/tutorial-mfa/tutorial-mfa-app-access.png":::
-1. On the **Select** page, choose **Microsoft Azure Management**, and then choose **Select**.
+1. On the **Select** page, choose **Microsoft Azure Management**, and then choose **Select**.
- ![Screenshot that highlights the Microsoft Azure Management option.](media/tutorial-mfa/tutorial-mfa-policy-11.png)
+1. On the **New** page, in the **Access controls** section, choose the link under **Grant**.
+1. On the **Grant** page, choose **Grant access**, select the **Require multi-factor authentication** check box, and then choose **Select**.
-1. On the **New** page, in the **Access controls** section, choose the link under **Grant**.
-1. On the **Grant** page, choose **Grant access**, select the **Require multi-factor authentication** check box, and then choose **Select**.
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-grant-access.png" alt-text="Screenshot showing the Require multi-factor authentication option.":::
- ![Screenshot showing the Require multi-factor authentication option](media/tutorial-mfa/tutorial-mfa-policy-13.png)
-1. Under **Enable policy**, select **On**.
+1. Under **Enable policy**, select **On**.
- ![Screenshot showing the Enable policy option set to On](media/tutorial-mfa/tutorial-mfa-policy-14.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-enable-policy.png" alt-text="Screenshot showing the Enable policy option set to On.":::
-1. Select **Create**.
+1. Select **Create**.
## Use the What If option to simulate sign-in 1. On the **Conditional Access | Policies** page, select **What If**.
- ![Screenshot that highlights where to select the What if option on the Conditional Access - Policies page.](media/tutorial-mfa/tutorial-mfa-whatif-1.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-what-if.png" alt-text="Screenshot that highlights where to select the What if option on the Conditional Access - Policies page.":::
1. Select the link under **User**. 1. In the search box, type the name of your test guest user. Choose the user in the search results, and then choose **Select**.
- ![Screenshot showing a guest user selected](media/tutorial-mfa/tutorial-mfa-whatif-2.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-what-if-user.png" alt-text="Screenshot showing a guest user selected.":::
-1. Select the link under **Cloud apps, actions, or authentication content**.
-. Choose **Select apps**, and then choose the link under **Select**.
+1. Select the link under **Cloud apps, actions, or authentication content**. Choose **Select apps**, and then choose the link under **Select**.
- ![Screenshot showing the Microsoft Azure Management app selected](media/tutorial-mfa/tutorial-mfa-whatif-3.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-what-if-app.png" alt-text="Screenshot showing the Microsoft Azure Management app selected." lightbox="media/tutorial-mfa/tutorial-mfa-what-if-app.png":::
1. On the **Cloud apps** page, in the applications list, choose **Microsoft Azure Management**, and then choose **Select**. 1. Choose **What If**, and verify that your new policy appears under **Evaluation results** on the **Policies that will apply** tab.
- ![Screenshot showing where to select the What if option](media/tutorial-mfa/tutorial-mfa-whatif-4.png)
+ :::image type="content" source="media/tutorial-mfa/tutorial-mfa-whatif-4.png" alt-text="Screenshot showing the results of the What If evaluation.":::
## Test your Conditional Access policy 1. Use your test user name and password to sign in to your [Azure portal](https://portal.azure.com/).
-1. You should see a request for additional authentication methods. Note that it could take some time for the policy to take effect.
+1. You should see a request for additional authentication methods. It can take some time for the policy to take effect.
- ![Screenshot showing the More information required message](media/tutorial-mfa/mfa-required.png)
+ :::image type="content" source="media/tutorial-mfa/mfa-required.PNG" alt-text="Screenshot showing the More information required message.":::
> [!NOTE] > You also can configure [cross-tenant access settings](cross-tenant-access-overview.md) to trust the MFA from the Azure AD home tenant. This allows external Azure AD users to use the MFA registered in their own tenant rather than register in the resource tenant.
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
-+
+# Customer intent: As a tenant administrator, I want to update the sign-in information for a guest user.
# Reset redemption status for a guest user (Preview)
-After a guest user has redeemed your invitation for B2B collaboration, there might be times when you'll need to update their sign-in information, for example when:
+In this article, you'll learn how to update the [guest user's](user-properties.md) sign-in information after they've redeemed your invitation for B2B collaboration. There might be times when you'll need to update their sign-in information, for example when:
- The user wants to sign in using a different email and identity provider - The account for the user in their home tenant has been deleted and re-created - The user has moved to a different company, but they still need the same access to your resources - The user's responsibilities have been passed along to another user
-To manage these scenarios previously, you had to manually delete the guest user's account from your directory and reinvite the user. Now you can use PowerShell or the Microsoft Graph invitation API to reset the user's redemption status and reinvite the user while keeping the user's object ID, group memberships, and app assignments. When the user redeems the new invitation, the UPN of the user doesn't change, but the user's sign-in name changes to the new email. Then the user can sign in using the new email or an email you've added to the `otherMails` property of the user object.
+To manage these scenarios previously, you had to manually delete the guest user's account from your directory and reinvite the user. Now you can use the Azure portal, PowerShell, or the Microsoft Graph invitation API to reset the user's redemption status and reinvite the user while keeping the user's object ID, group memberships, and app assignments. When the user redeems the new invitation, the [UPN](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname) of the user doesn't change, but the user's sign-in name changes to the new email. Then the user can sign in using the new email or an email you've added to the `otherMails` property of the user object.
## Use the Azure portal to reset redemption status
To manage these scenarios previously, you had to manually delete the guest user
1. Select **Users**. 1. In the list, select the user's name to open their user profile. 1. If the user wants to sign in using a different email:
- - Select the **Properties** tab.
- - Select the **Edit** icon next to **Contact information**.
+ - Select **Edit properties**.
+ - Select the **Contact Information** tab.
- Next to **Email**, type the new email. - Update **Other emails** to also include the new email. - Select the **Save** button at the bottom of the page.
-1. In the **Overview** tab, under **My Feed**, select **B2B collaboration**.
- ![new user profile page displaying the B2B Collaboration tile](./media/reset-redemption-status/user-profile-b2b-collaboration.png)
-1. Under **Redemption status**, next to **Reset invitation status? (Preview)**, select **Yes**.
-1. Select **Yes** to confirm.
+1. On the **Overview** tab, under **My Feed**, select the **Manage (resend invitation / reset status)** link in the **B2B collaboration** tile.
+
+ :::image type="content" source="media/reset-redemption-status/user-profile-b2b-collaboration.png" alt-text="Screenshot of the guest user's profile overview." lightbox="media/reset-redemption-status/user-profile-b2b-collaboration.png":::
+1. In the **Manage invitations** pane, under **Redemption status**, set **Reset invitation status? (Preview)** to **Yes**.
+1. Select **Yes** to confirm.
## Use PowerShell or Microsoft Graph API to reset redemption status
New-MgInvitation `
### Use Microsoft Graph API to reset redemption status
-Using the [Microsoft Graph invitation API](/graph/api/resources/invitation), set the `resetRedemption` property to `true` and specify the new email address in the `invitedUserEmailAddress` property.
+To use the [Microsoft Graph invitation API](/graph/api/resources/invitation), set the `resetRedemption` property to `true` and specify the new email address in the `invitedUserEmailAddress` property.
```json POST https://graph.microsoft.com/beta/invitations
ContentType: application/json
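For context, a complete request generally carries the properties named in the prose above; this is a sketch with placeholder values, not text copied from the article:

```json
POST https://graph.microsoft.com/beta/invitations
Content-Type: application/json

{
  "invitedUserEmailAddress": "new.email@contoso.com",
  "inviteRedirectUrl": "https://myapps.microsoft.com",
  "sendInvitationMessage": true,
  "resetRedemption": true,
  "invitedUser": {
    "id": "<guest user object ID>"
  }
}
```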
- [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell) - [Properties of an Azure AD B2B guest user](user-properties.md)
+- [B2B for Azure AD integrated apps](configure-saas-apps.md)
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Last updated 08/30/2022
tags: active-directory -+
By default, SharePoint Online and OneDrive have their own set of external user o
If you're notified that you don't have permissions to invite users, verify that your user account is authorized to invite external users under Azure Active Directory > User settings > External users > Manage external collaboration settings:
-![Screenshot showing the External Users settings.](media/troubleshoot/external-user-settings.png)
If you've recently modified these settings or assigned the Guest Inviter role to a user, there might be a 15-60 minute delay before the changes take effect.
Let's say you inadvertently invite a guest user with an email address that match
## Next steps
-[Get support for B2B collaboration](../fundamentals/active-directory-troubleshooting-support-howto.md)
+- [Get support for B2B collaboration](../fundamentals/active-directory-troubleshooting-support-howto.md)
+- [Use audit logs and access reviews](auditing-and-reporting.md)
active-directory User Flow Customize Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-customize-language.md
Previously updated : 03/02/2021 Last updated : 11/02/2022 +
+# Customer intent: As a tenant administrator, I want to modify the user flow language, when the users are signing up via the self-service sign-up user flow.
# Language customization in Azure Active Directory
-Language customization in Azure Active Directory (Azure AD) allows your user flow to accommodate different languages to suit your user's needs. Microsoft provides the translations for [36 languages](#supported-languages). Even if your experience is provided for only a single language, you can customize the attribute names on the attribute collection page.
+Language customization in Azure Active Directory (Azure AD) allows your user flow to accommodate different languages to suit your user's needs. Microsoft provides the translations for [36 languages](#supported-languages). In this article, you'll learn how to customize the attribute names on the [attribute collection page](self-service-sign-up-user-flow.md#select-the-layout-of-the-attribute-collection-form), even if your experience is provided for only a single language.
## How language customization works
-By default, language customization is enabled for users signing up to ensure a consistent sign up experience. You can use languages to modify the strings displayed to users as part of the attribute collection process during sign up.
+By default, language customization is enabled for users signing up to ensure a consistent sign-up experience. You can use languages to modify the strings displayed to users as part of the attribute collection process during sign-up. If you're using [custom user attributes](user-flow-add-custom-attributes.md), you need to provide your [own translations](#customize-your-strings).
-> [!NOTE]
-> If you're using custom user attributes, you need to provide your own translations. For more information, see [Customize your strings](#customize-your-strings).
-
-## Customize your strings
+## Customize your strings
Language customization enables you to customize any string in your user flow.
Language customization enables you to customize any string in your user flow.
4. Select **Languages**. 5. On the **Languages** page for the user flow, select the language that you want to customize. 6. Expand **Attribute collection page**.
-7. Select **Download defaults** (or **Download overrides** if you have previously edited this language).
+7. Select **Download defaults** (or **Download overrides** if you've previously edited this language).
These steps give you a JSON file that you can use to start editing your strings.
+ :::image type="content" source="media/user-flow-customize-language/language-customization-download-defaults.png" alt-text="Screenshot of downloading the default language customization json file." lightbox="media/user-flow-customize-language/language-customization-download-defaults.png":::
+ ### Change any string on the page 1. Open the JSON file downloaded from previous instructions in a JSON editor. 1. Find the element that you want to change. You can find `StringId` for the string you're looking for, or look for the `Value` attribute that you want to change. 1. Update the `Value` attribute with what you want displayed.
-1. For every string that you want to change, change `Override` to `true`.
-1. Save the file and upload your changes. (You can find the upload control in the same place as where you downloaded the JSON file.)
+1. For every string that you want to change, change `Override` to `true`. If the `Override` value isn't changed to `true`, the entry is ignored.
+1. Save the file and [upload your changes](#upload-your-changes).
-> [!IMPORTANT]
-> If you need to override a string, make sure to set the `Override` value to `true`. If the value isn't changed, the entry is ignored.
+ :::image type="content" source="media/user-flow-customize-language/language-customization-upload-override.png" alt-text="Screenshot of uploading the language customization json file.":::
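As a minimal illustration of one edited entry (the exact wrapper structure and `StringId` values come from the defaults file you download, so the values below are placeholders):

```json
{
  "StringId": "AttributeField_Email",
  "Override": true,
  "Value": "Work email address"
}
```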
### Change extension attributes
Replace `<ExtensionAttributeValue>` with the new string to be displayed.
### Provide a list of values by using LocalizedCollections
-If you want to provide a set list of values for responses, you need to create a `LocalizedCollections` attribute. `LocalizedCollections` is an array of `Name` and `Value` pairs. The order for the items will be the order they are displayed. To add `LocalizedCollections`, use the following format:
+If you want to provide a set list of values for responses, you need to create a `LocalizedCollections` attribute. `LocalizedCollections` is an array of `Name` and `Value` pairs. The order for the items will be the order they're displayed. To add `LocalizedCollections`, use the following format:
```JSON {
If you want to provide a set list of values for responses, you need to create a
### Upload your changes 1. After you complete the changes to your JSON file, go back to your tenant.
-1. Select **User flows** and click the user flow that you want to enable for translations.
+1. Select **User flows** and select the user flow that you want to enable for translations.
1. Select **Languages**. 1. Select the language that you want to translate to. 1. Select **Attribute collection page**. 1. Select the folder icon, and select the JSON file to upload.
+1. The changes are saved to your user flow automatically and you'll find the override under the **Configured** tab.
+1. To remove or download your customized override file, select the language and expand the **Attribute collection page**.
-The changes are saved to your user flow automatically.
+ :::image type="content" source="media/user-flow-customize-language/language-customization-remove-download-overrides.png" alt-text="Screenshot of removing or downloading the language customization json file.":::
## Additional information
Microsoft provides the `ui_locales` OIDC parameter to social logins. But some so
### Browser behavior
-Chrome and Firefox both request for their set language. If it's a supported language, it's displayed before the default. Microsoft Edge currently does not request a language and goes straight to the default language.
+Chrome and Firefox both request for their set language. If it's a supported language, it's displayed before the default. Microsoft Edge currently doesn't request a language and goes straight to the default language.
## Supported languages
Azure AD includes support for the following languages. User flow languages are p
| Vietnamese | vi | ![X indicating no.](./media/user-flow-customize-language/no.png) | ![Green check mark.](./media/user-flow-customize-language/yes.png) | | Chinese - Simplified | zh-hans | ![Green check mark.](./media/user-flow-customize-language/yes.png) | ![Green check mark.](./media/user-flow-customize-language/yes.png) | | Chinese - Traditional | zh-hant | ![Green check mark.](./media/user-flow-customize-language/yes.png) | ![Green check mark.](./media/user-flow-customize-language/yes.png) |++
+## Next steps
+
+- [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md)
+- [Define custom attributes for user flows](user-flow-add-custom-attributes.md)
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
In order to make sure users are getting access to the right access packages, you
1. If you would like to include a syntax check for text answers to questions, you can also specify a custom regex pattern. :::image type="content" source="media/entitlement-management-access-package-approval-policy/add-regex-localization.png" alt-text="Screenshot of the add regex localization policy." lightbox="media/entitlement-management-access-package-approval-policy/add-regex-localization.png":::
+ If you would like to include a syntax check for text answers to questions, you can also specify a custom regex pattern.
1. To require requestors to answer this question when requesting access to an access package, select the check box under **Required**. 1. Fill out the remaining tabs (for example, Lifecycle) based on your needs.
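As a small illustration (not taken from the article), a custom regex pattern that accepts only a nine-digit numeric answer, such as an employee number, could be:

```
^[0-9]{9}$
```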
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The following table shows the scheduling (trigger) relevant attributes and the m
|Attribute|Type|Supported in HR Inbound Provisioning|Support in Azure AD Connect Cloud Sync|Support in Azure AD Connect Sync|
|--|--|--|--|--|
|employeeHireDate|DateTimeOffset|Yes|Yes|Yes|
-|employeeLeaveDateTime|DateTimeOffset|Yes|Yes|Not currently|
+|employeeLeaveDateTime|DateTimeOffset|Yes|Yes|Yes|
> [!NOTE] > Manually setting the employeeLeaveDateTime for cloud-only users requires special permissions. For more information, see: [Configure the employeeLeaveDateTime property for a user](/graph/tutorial-lifecycle-workflows-set-employeeleavedatetime)
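For cloud-only users, the note above links to the Microsoft Graph tutorial; as a rough sketch (assuming the beta endpoint and a consented permission such as User-LifeCycleInfo.ReadWrite.All), the property is set with a PATCH along these lines:

```http
PATCH https://graph.microsoft.com/beta/users/{user-id}
Content-Type: application/json

{
  "employeeLeaveDateTime": "2022-12-31T00:00:00Z"
}
```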
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Common task parameters are the non-unique parameters contained in every task. Wh
|Parameter |Definition | ||| |category | A read-only string that identifies the category or categories of the task. Automatically determined when the taskDefinitionID is chosen. |
-|taskDefinitionId | A string referencing a taskDefinition which determines which task to run. |
+|taskDefinitionId | A string referencing a taskDefinition that determines which task to run. |
|isEnabled | A boolean value that denotes whether the task is set to run or not. If set to "true" then the task will run. Defaults to true. | |displayName | A unique string that identifies the task. | |description | A string that describes the purpose of the task for administrative use. (Optional) |
-|executionSequence | An integer that is read-only which states in what order the task will run in a workflow. For more information about executionSequence and workflow order, see: [Configure Scope](understanding-lifecycle-workflows.md#configure-scope). |
+|executionSequence | A read-only integer that states in what order the task will run in a workflow. For more information about executionSequence and workflow order, see: [Configure Scope](understanding-lifecycle-workflows.md#configure-scope). |
|continueOnError | A boolean value that determines if the failure of this task stops the subsequent workflows from running. | |arguments | Contains unique parameters relevant for the given task. |
Below is each specific task, and detailed information such as parameters and pre
Lifecycle Workflows allow you to automate the sending of welcome emails to new hires in your organization. You're able to customize the task name and description for this task in the Azure portal. The Azure AD prerequisite to run the **Send welcome email to new hire** task is:
For Microsoft Graph the parameters for the **Send welcome email to new hire** ta
```
+### Send onboarding reminder email
++
+Lifecycle Workflows allow you to automate the sending of onboarding reminder emails to managers of new hires in your organization. You're able to customize the task name and description for this task in the Azure portal.
++
+The Azure AD prerequisites to run the **Send onboarding reminder email** task are:
+
+- A populated manager attribute for the user.
+- A populated manager's mail attribute for the user.
++
+For Microsoft Graph the parameters for the **Send onboarding reminder email** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner |
+|displayName | Send onboarding reminder email (Customizable by user) |
+|description | Send onboarding reminder email to user's manager (Customizable by user) |
+|taskDefinitionId | 3C860712-2D37-42A4-928F-5C93935D26A1 |
+++
+```Example for usage within the workflow
+{
+ "category": "joiner",
+ "continueOnError": true,
+    "description": "Send onboarding reminder email to user's manager",
+ "displayName": "Send onboarding reminder email",
+ "isEnabled": true,
+ "taskDefinitionId": "3C860712-2D37-42A4-928F-5C93935D26A1",
+ "arguments": []
+}
+
+```
+ ### Generate Temporary Access Pass and send via email to user's manager
-When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Pass(TAP) and have it sent to the new user's manager.
+When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Pass (TAP), and have it sent to the new user's manager.
-With this task in the Azure portal, you're able to give the task a name and description. You must also set the following:
+With this task in the Azure portal, you're able to give the task a name and description. You must also set:
**Activation duration**- How long the password is active. **One time use**- If the password is one use only.
For Microsoft Graph the parameters for the **Add user to groups** task are as fo
|Parameter |Definition | |||
-|category | joiner,leaver |
+|category | joiner, leaver |
|displayName | AddUserToGroup (Customizable by user) | |description | Add user to groups (Customizable by user) | |taskDefinitionId | 22085229-5809-45e8-97fd-270d28d66910 |
-|arguments | Argument contains a name parameter that is the "groupID", and a value parameter which is the group ID of the group you are adding the user to. |
+|arguments | Argument contains a name parameter that is the "groupID", and a value parameter that is the group ID of the group you're adding the user to. |
```Example for usage within the workflow
For Microsoft Graph the parameters for the **Add user to teams** task are as fol
|Parameter |Definition | |||
-|category | joiner,leaver |
+|category | joiner, leaver |
|displayName | AddUserToTeam (Customizable by user) | |description | Add user to teams (Customizable by user) | |taskDefinitionId | e440ed8d-25a1-4618-84ce-091ed5be5594 |
-|argument | Argument contains a name parameter that is the "teamID", and a value parameter which is the team ID of the existing team you are adding a user to. |
+|argument | Argument contains a name parameter that is the "teamID", and a value parameter that is the team ID of the existing team you're adding a user to. |
For Microsoft Graph the parameters for the **Enable user account** task are as f
|Parameter |Definition | |||
-|category | joiner,leaver |
+|category | joiner, leaver |
|displayName | EnableUserAccount (Customizable by user) | |description | Enable user account (Customizable by user) | |taskDefinitionId | 6fc52c9d-398b-4305-9763-15f42c1676fc |
For Microsoft Graph the parameters for the **Run a Custom Task Extension** task
|Parameter |Definition | |||
-|category | joiner,leaver |
+|category | joiner, leaver |
|displayName | Run a Custom Task Extension (Customizable by user) | |description | Run a Custom Task Extension to call-out to an external system. (Customizable by user) | |taskDefinitionId | "d79d1fcc-16be-490c-a865-f4533b1639ee |
-|argument | Argument contains a name parameter that is the "LogicAppURL", and a value parameter which is the Logic App HTTP trigger. |
+|argument | Argument contains a name parameter that is the "LogicAppURL", and a value parameter that is the Logic App HTTP trigger. |
For Microsoft Graph the parameters for the **Disable user account** task are as
|Parameter |Definition | |||
-|category | joiner,leaver |
+|category | joiner, leaver |
|displayName | DisableUserAccount (Customizable by user) | |description | Disable user account (Customizable by user) | |taskDefinitionId | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 |
For Microsoft Graph the parameters for the **Remove user from selected groups**
|displayName | Remove user from selected groups (Customizable by user) | |description | Remove user from membership of selected Azure AD groups (Customizable by user) | |taskDefinitionId | 1953a66c-751c-45e5-8bfe-01462c70da3c |
-|argument | Argument contains a name parameter that is the "groupID", and a value parameter which is the group Id(s) of the group or groups you are removing the user from. |
+|argument | Argument contains a name parameter that is the "groupID", and a value parameter that is the group Id(s) of the group or groups you're removing the user from. |
For Microsoft Graph the parameters for the **Remove user from selected groups**
### Remove users from all groups
-Allows users to be removed from every cloud-only group they are a member of. Dynamic and Privileged Access Groups not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from every cloud-only group they're a member of. Dynamic and Privileged Access Groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal.
For Microsoft Graph the parameters for the **Remove User from Teams** task are a
|Parameter |Definition | |||
-|category | joiner,leaver |
+|category | joiner, leaver |
|displayName | Remove user from selected Teams (Customizable by user) | |description | Remove user from membership of selected Teams (Customizable by user) | |taskDefinitionId | 06aa7acb-01af-4824-8899-b14e5ed788d6 |
-|arguments | Argument contains a name parameter that is the "teamID", and a value parameter which is the Teams ID of the Teams you are removing the user from. |
+|arguments | Argument contains a name parameter that is the "teamID", and a value parameter that is the Teams ID of the Teams you're removing the user from. |
```Example for usage within the workflow
For Microsoft Graph the parameters for the **Remove User from Teams** task are a
### Remove users from all teams
-Allows users to be removed from every static team they are a member of. You're able to customize the task name and description for this task in the Azure portal.
+Allows users to be removed from every static team they're a member of. You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/remove-user-all-team-task.png" alt-text="Screenshot of Workflows task: remove user from all teams."::: For Microsoft Graph the parameters for the **Remove users from all teams** task are as follows:
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
To configure Staged Rollout, follow these steps:
1. On the *Azure AD Connect* page, under the *Staged rollout of cloud authentication*, select the **Enable staged rollout for managed user sign-in** link.
-1. On the *Enable staged rollout feature* page, select the options you want to enable: [Password Hash Sync](./whatis-phs.md), [Pass-through authentication](./how-to-connect-pta.md), [Seamless single sign-on](./how-to-connect-sso.md), or [Certificate-based Authentication (Preview)](../authentication/active-directory-certificate-based-authentication-get-started.md). For example, if you want to enable **Password Hash Sync** and **Seamless single sign-on**, slide both controls to **On**.
+1. On the *Enable staged rollout feature* page, select the options you want to enable: [Password Hash Sync](./whatis-phs.md), [Pass-through authentication](./how-to-connect-pta.md), [Seamless single sign-on](./how-to-connect-sso.md), or [Certificate-based Authentication](../authentication/active-directory-certificate-based-authentication-get-started.md). For example, if you want to enable **Password Hash Sync** and **Seamless single sign-on**, slide both controls to **On**.
1. Add groups to the features you selected. For example, *pass-through authentication* and *seamless SSO*. To avoid a time-out, ensure that the security groups contain no more than 200 members initially.
active-directory Amazon Business Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-business-tutorial.md
Previously updated : 06/16/2021 Last updated : 11/08/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Amazon Business SSO
-1. In a different web browser window, sign in to your Amazon Business company site as an administrator.
+1. To automate the configuration within Amazon Business, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, clicking **Set up Amazon Business** directs you to the Amazon Business Single Sign-On application. From there, provide the admin credentials to sign in to Amazon Business Single Sign-On. The browser extension automatically configures the application for you and automates steps 3-17.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up Amazon Business manually, in a different web browser window, sign in to your Amazon Business company site as an administrator.
1. Click on the **User Profile** and select **Business Settings**.
active-directory Invision Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/invision-provisioning-tutorial.md
na Previously updated : 06/25/2020 Last updated : 11/08/2022
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* An [InVision Enterprise account](https://www.invisionapp.com/enterprise) with SSO enabled.
+* An [InVision Enterprise account](https://www.invisionapp.com/) with SSO enabled.
* A user account in InVision with Admin permissions. ## Step 1. Plan your provisioning deployment
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure InVision to support provisioning with Azure AD
-1. Sign in to your [InVision Enterprise account](https://www.invisionapp.com/enterprise) as an Admin or Owner. Open the **Team Settings** drawer on the bottom left and select **Settings**.
+1. Sign in to your [InVision Enterprise account](https://www.invisionapp.com/) as an Admin or Owner. Open the **Team Settings** drawer on the bottom left and select **Settings**.
![SCIM setup configuration](./media/invision-provisioning-tutorial/invision-scim-settings.png)
active-directory Juno Journey Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/juno-journey-provisioning-tutorial.md
Previously updated : 04/16/2020 Last updated : 11/08/2022
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A [Juno Journey tenant](https://www.junojourney.com/getstarted).
+* A [Juno Journey tenant](https://app.junojourney.com/login).
* A user account in Juno Journey with Admin permissions. ## Step 1. Plan your provisioning deployment
active-directory Mobilexpense Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mobilexpense-tutorial.md
Previously updated : 06/29/2022 Last updated : 11/08/2022 # Tutorial: Azure AD SSO integration with Mobile Xpense
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<sub-domain>.mobilexpense.com/<customername>` > [!NOTE]
- > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Mobile Xpense Client support team](https://www.mobilexpense.net/contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign-on URL. Contact [Mobile Xpense Client support team](https://www.mobilexpense.com/contact) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Mobile Xpense SSO
-To configure single sign-on on **Mobile Xpense** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Mobile Xpense support team](https://www.mobilexpense.net/contact). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Mobile Xpense** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Mobile Xpense support team](https://www.mobilexpense.com/contact). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Mobile Xpense test user
-In this section, you create a user called Britta Simon in Mobile Xpense. Work with [Mobile Xpense support team](https://www.mobilexpense.net/contact) to add the users in the Mobile Xpense platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Mobile Xpense. Work with [Mobile Xpense support team](https://www.mobilexpense.com/contact) to add the users in the Mobile Xpense platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
Before you start using the plug-in, you must configure it. Select the plug-in, s
The following image shows the configuration screen in both Jira and Confluence:
-![Plug-in configuration screen](./media/ms-confluence-jira-plugin-adminguide/jira.png)
+![Plug-in configuration screen](./media/jiramicrosoft-tutorial/jira-configure-addon.png)
* **Metadata URL**: The URL to get federation metadata from Azure AD.
The following image shows the configuration screen in both Jira and Confluence:
* **Enable Single Signout**: The selection to make if you want to sign out from Azure AD when a user signs out from Jira or Confluence.
+* Select the **Force Azure Login** checkbox if you wish to sign in through Azure AD credentials only.
+
+* Select the **Enable Use of Application Proxy** checkbox if you have configured your on-premises Atlassian application in an App Proxy setup.
+
+  * For App Proxy setup, follow the steps in the [Azure AD App Proxy documentation](../app-proxy/what-is-application-proxy.md).
+ ## Troubleshooting * **You're getting multiple certificate errors**: Sign in to Azure AD and remove the multiple certificates that are available against the app. Ensure that only one certificate is present.
active-directory Predictixpricereporting Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/predictixpricereporting-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<companyname-pricing>.predictix.com/sso/request` > [!NOTE]
- > These values are placeholders. Update these values with the actual Identifier and Sign on URL. Contact the [Predictix Price Reporting support team](https://www.infor.com/company/customer-center/) to get the values. You can also refer to the patterns shown in the **Basic SAML Configuration** dialog box in the Azure portal.
+ > These values are placeholders. Update these values with the actual Identifier and Sign on URL. Contact the [Predictix Price Reporting support team](https://www.infor.com/customer-center) to get the values. You can also refer to the patterns shown in the **Basic SAML Configuration** dialog box in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select the **Download** link next to **Certificate (Base64)**, per your requirements, and save the certificate on your computer:
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Predictix Price Reporting SSO
-To configure single sign-on on the Predictix Price Reporting side, you need to send the certificate that you downloaded and the URLs that you copied from the Azure portal to the [Predictix Price Reporting support team](https://www.infor.com/company/customer-center/). This team ensures the SAML SSO connection is set properly on both sides.
+To configure single sign-on on the Predictix Price Reporting side, you need to send the certificate that you downloaded and the URLs that you copied from the Azure portal to the [Predictix Price Reporting support team](https://www.infor.com/customer-center). This team ensures the SAML SSO connection is set properly on both sides.
### Create a Predictix Price Reporting test user
-Next, you need to create a user named Britta Simon in Predictix Price Reporting. Work with the [Predictix Price Reporting support team](https://www.infor.com/company/customer-center/) to add users. Users need to be created and activated before you use single sign-on.
+Next, you need to create a user named Britta Simon in Predictix Price Reporting. Work with the [Predictix Price Reporting support team](https://www.infor.com/customer-center) to add users. Users need to be created and activated before you use single sign-on.
## Test SSO
active-directory Zendesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Zendesk to support provisioning with Azure AD
-1. Log in to [Admin Center](https://support.zendesk.com/hc/en-us/articles/4408839227290#topic_hfg_dyz_1hb), click **Apps and integrations** in the sidebar, then select **APIs > Zendesk APIs**.
+1. Log in to [Admin Center](https://support.zendesk.com/hc/en-us/articles/4581766374554#topic_hfg_dyz_1hb), click **Apps and integrations** in the sidebar, then select **APIs > Zendesk APIs**.
1. Click the **Settings** tab, and make sure Token Access is **enabled**. 1. Click the **Add API token** button to the right of **Active API Tokens**.The token is generated and displayed. 1. Enter an **API token description**.
aks Command Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/command-invoke.md
az aks command invoke \
``` The above runs `kubectl apply -f deployment.yaml configmap.yaml -n default` on the *myAKSCluster* cluster in *myResourceGroup*. The `deployment.yaml` and `configmap.yaml` files used by that command are part of the current directory on the development computer where `az aks command invoke` was run.++
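For reference, the full invocation that the excerpt truncates would look roughly like this, using the same resource names and files mentioned above:

```azurecli
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl apply -f deployment.yaml configmap.yaml -n default" \
  --file deployment.yaml \
  --file configmap.yaml
```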
+## Troubleshooting
+
+The following link describes the most common issues with `az aks command invoke` and how to fix them:
+
+https://learn.microsoft.com/troubleshoot/azure/azure-kubernetes/resolve-az-aks-command-invoke-failures
+
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
Title: Use a Public Load Balancer
+ Title: Use a public load balancer
description: Learn how to use a public load balancer with a Standard SKU to expose your services with Azure Kubernetes Service (AKS).
#Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
-# Use a public Standard Load Balancer in Azure Kubernetes Service (AKS)
+# Use a public standard load balancer in Azure Kubernetes Service (AKS)
-The Azure Load Balancer is on L4 of the Open Systems Interconnection (OSI) model that supports both inbound and outbound scenarios. It distributes inbound flows that arrive at the load balancer's front end to the backend pool instances.
+The [Azure Load Balancer][az-lb] operates at layer 4 of the Open Systems Interconnection (OSI) model and supports both inbound and outbound scenarios. It distributes inbound flows that arrive at the load balancer's front end to the backend pool instances.
-A **public** Load Balancer when integrated with AKS serves two purposes:
+A **public** load balancer integrated with AKS serves two purposes:
-1. To provide outbound connections to the cluster nodes inside the AKS virtual network. It achieves this objective by translating the nodes private IP address to a public IP address that is part of its *Outbound Pool*.
+1. To provide outbound connections to the cluster nodes inside the AKS virtual network. To do this, it translates the nodes' private IP addresses to a public IP address that is part of its *Outbound Pool*.
2. To provide access to applications via Kubernetes services of type `LoadBalancer`. With it, you can easily scale your applications and create highly available services. An **internal (or private)** load balancer is used where only private IPs are allowed as frontend. Internal load balancers are used to load balance traffic inside a virtual network. A load balancer frontend can also be accessed from an on-premises network in a hybrid scenario.
-This document covers the integration with Public Load balancer. For internal Load Balancer integration, see the [AKS Internal Load balancer documentation](internal-lb.md).
+This document covers integration with a public load balancer. For internal load balancer integration, see the [AKS internal load balancer documentation](internal-lb.md).
## Before you begin
-Azure Load Balancer is available in two SKUs - *Basic* and *Standard*. By default, *Standard* SKU is used when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended Load Balancer SKU for AKS.
+Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. By default, *Standard* SKU is used when you create an AKS cluster. The *Standard* SKU gives you access to added functionality, such as a larger backend pool, [multiple node pools](use-multiple-node-pools.md), [Availability Zones](availability-zones.md), and is [secure by default][azure-lb]. It's the recommended load balancer SKU for AKS.
For more information on the *Basic* and *Standard* SKUs, see [Azure load balancer SKU comparison][azure-lb-comparison].
-This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer and walks through how to use and configure some of the capabilities and features of the load balancer. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes you have an AKS cluster with the *Standard* SKU Azure Load Balancer. If you need an AKS cluster, create one [using the Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal].
> [!IMPORTANT]
-> If you prefer not to leverage the Azure Load Balancer to provide outbound connection and instead have your own gateway, firewall or proxy for that purpose you can skip the creation of the load balancer outbound pool and respective frontend IP by using [**Outbound type as UserDefinedRouting (UDR)**](egress-outboundtype.md). The Outbound type defines the egress method for a cluster and it defaults to type: load balancer.
+> If you prefer not to leverage the Azure Load Balancer to provide outbound connection and instead have your own gateway, firewall, or proxy for that purpose, you can skip the creation of the load balancer outbound pool and respective frontend IP by using [**outbound type as UserDefinedRouting (UDR)**](egress-outboundtype.md). The outbound type defines the egress method for a cluster and defaults to type `loadBalancer`.
## Use the public standard load balancer
-After creating an AKS cluster with Outbound Type: Load Balancer (default), the cluster is ready to use the load balancer to expose services as well.
+After creating an AKS cluster with outbound type `LoadBalancer` (default), the cluster is ready to use the load balancer to expose services.
-For that you can create a public Service of type `LoadBalancer` as shown in the following example. Start by creating a service manifest named `public-svc.yaml`:
+To do this, you can create a public service of type `LoadBalancer`. Start by creating a service manifest named `public-svc.yaml`.
```yaml apiVersion: v1
spec:
app: public-app ```
-Deploy the public service manifest by using [kubectl apply][kubectl-apply] and specify the name of your YAML manifest:
+Deploy the public service manifest by using [kubectl apply][kubectl-apply] and specify the name of your YAML manifest.
```azurecli-interactive kubectl apply -f public-svc.yaml ```
-The Azure Load Balancer will be configured with a new public IP that will front this new service. Since the Azure Load Balancer can have multiple Frontend IPs, each new service deployed will get a new dedicated frontend IP to be uniquely accessed.
+The Azure Load Balancer will be configured with a new public IP that will front this new service. Since the Azure Load Balancer can have multiple frontend IPs, each new service deployed will get a new dedicated frontend IP to be uniquely accessed.
-You can confirm your service is created and the load balancer is configured by running for example:
+You can use the following command to confirm your service is created and the load balancer is configured.
```azurecli-interactive kubectl get service public-svc
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S
default public-svc LoadBalancer 10.0.39.110 52.156.88.187 80:32068/TCP 52s ```
-When you view the service details, the public IP address created for this service on the load balancer is shown in the *EXTERNAL-IP* column. It may take a minute or two for the IP address to change from *\<pending\>* to an actual public IP address, as shown in the above example.
+When you view the service details, the public IP address created for this service on the load balancer is shown in the *EXTERNAL-IP* column. It may take a few minutes for the IP address to change from *\<pending\>* to an actual public IP address.
## Configure the public standard load balancer
-When using the Standard SKU public load balancer, there's a set of options that can be customized at creation time or by updating the cluster. These options allow you to customize the Load Balancer to meet your workloads needs and should be reviewed accordingly. With the Standard load balancer you can:
+When using the standard public load balancer, there's a set of options you can customize at creation time or by updating the cluster. These options allow you to customize the load balancer to meet your workload's needs and should be reviewed accordingly. With the standard load balancer, you can:
-* Set or scale the number of Managed Outbound IPs
-* Bring your own custom [Outbound IPs or Outbound IP Prefix](#provide-your-own-outbound-public-ips-or-prefixes)
+* Set or scale the number of managed outbound IPs
+* Bring your own custom [outbound IPs or outbound IP prefix](#provide-your-own-outbound-public-ips-or-prefixes)
* Customize the number of allocated outbound ports to each node of the cluster * Configure the timeout setting for idle connections > [!IMPORTANT]
-> Only one outbound IP option (managed IPs, bring your own IP, or IP Prefix) can be used at a given time.
+> Only one outbound IP option (managed IPs, bring your own IP, or IP prefix) can be used at a given time.
### Scale the number of managed outbound public IPs
-Azure Load Balancer provides outbound connectivity from a virtual network in addition to inbound. Outbound rules make it simple to configure public Standard Load Balancer's outbound network address translation.
+Azure Load Balancer provides outbound connectivity from a virtual network in addition to inbound. Outbound rules make it simple to configure network address translation for the public standard load balancer.
-Like all Load Balancer rules, outbound rules follow the same familiar syntax as load balancing and inbound NAT rules:
+Like all load balancer rules, outbound rules follow the same syntax as load balancing and inbound NAT rules:
***frontend IPs + parameters + backend pool***
-An outbound rule configures outbound NAT for all virtual machines identified by the backend pool to be translated to the frontend. And parameters provide additional fine grained control over the outbound NAT algorithm.
+An outbound rule configures outbound NAT for all virtual machines identified by the backend pool to be translated to the frontend. Parameters provide additional control over the outbound NAT algorithm.
-While an outbound rule can be used with just a single public IP address, outbound rules ease the configuration burden for scaling outbound NAT. You can use multiple IP addresses to plan for large-scale scenarios and you can use outbound rules to mitigate SNAT exhaustion prone patterns. Each additional IP address provided by a frontend provides 64k ephemeral ports for Load Balancer to use as SNAT ports.
+While an outbound rule can be used with a single public IP address, outbound rules ease the configuration burden for scaling outbound NAT. You can use multiple IP addresses to plan for large-scale scenarios, and you can use outbound rules to mitigate SNAT exhaustion prone patterns. Each additional IP address provided by a frontend provides 64k ephemeral ports for the load balancer to use as SNAT ports.
When using a *Standard* SKU load balancer with managed outbound public IPs, which are created by default, you can scale the number of managed outbound public IPs using the **`load-balancer-managed-outbound-ip-count`** parameter.
-To update an existing cluster, run the following command. This parameter can also be set at cluster create-time to have multiple managed outbound public IPs.
+To update an existing cluster, run the following command. This parameter can also be set at cluster creation time to have multiple managed outbound public IPs.
```azurecli-interactive az aks update \
az aks update \
--load-balancer-managed-outbound-ip-count 2 ```
-The above example sets the number of managed outbound public IPs to *2* for the *myAKSCluster* cluster in *myResourceGroup*.
+The above example sets the number of managed outbound public IPs to *2* for the *myAKSCluster* cluster in *myResourceGroup*.
-You can also use the **`load-balancer-managed-ip-count`** parameter to set the initial number of managed outbound public IPs when creating your cluster by appending the **`--load-balancer-managed-outbound-ip-count`** parameter and setting it to your desired value. The default number of managed outbound public IPs is 1.
+You can also set the initial number of managed outbound public IPs when creating your cluster by appending the **`--load-balancer-managed-outbound-ip-count`** parameter and setting it to your desired value. The default number of managed outbound public IPs is *1*.
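A minimal sketch of setting this value at creation time follows; the resource group and cluster names are placeholders, and any other parameters your cluster needs would be added as usual:

```azurecli-interactive
# Create a cluster that starts with two managed outbound public IPs.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --load-balancer-managed-outbound-ip-count 2 \
    --generate-ssh-keys
```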
### Provide your own outbound public IPs or prefixes
-When you use a *Standard* SKU load balancer, by default the AKS cluster automatically creates a public IP in the AKS-managed infrastructure resource group and assigns it to the load balancer outbound pool.
+When you use a *Standard* SKU load balancer, the AKS cluster automatically creates a public IP in the AKS-managed infrastructure resource group and assigns it to the load balancer outbound pool by default.
-A public IP created by AKS is considered an AKS managed resource. This means the lifecycle of that public IP is intended to be managed by AKS and requires no user action directly on the public IP resource. Alternatively, you can assign your own custom public IP or public IP prefix at cluster creation time. Your custom IPs can also be updated on an existing cluster's load balancer properties.
+A public IP created by AKS is considered an AKS-managed resource. This means the lifecycle of that public IP is intended to be managed by AKS and requires no user action directly on the public IP resource. Alternatively, you can assign your own custom public IP or public IP prefix at cluster creation time. Your custom IPs can also be updated on an existing cluster's load balancer properties.
Requirements for using your own public IP or prefix: -- Custom public IP addresses must be created and owned by the user. Managed public IP addresses created by AKS cannot be reused as a bring your own custom IP as it can cause management conflicts.-- You must ensure the AKS cluster identity (Service Principal or Managed Identity) has permissions to access the outbound IP. As per the [required public IP permissions list](kubernetes-service-principal.md#networking).-- Make sure you meet the [pre-requisites and constraints](../virtual-network/ip-services/public-ip-address-prefix.md#limitations) necessary to configure Outbound IPs or Outbound IP prefixes.
+* Custom public IP addresses must be created and owned by the user. Managed public IP addresses created by AKS cannot be reused as a bring-your-own custom IP because doing so can cause management conflicts.
+* You must ensure the AKS cluster identity (Service Principal or Managed Identity) has permissions to access the outbound IP, as per the [required public IP permissions list](kubernetes-service-principal.md#networking).
+* Make sure you meet the [pre-requisites and constraints](../virtual-network/ip-services/public-ip-address-prefix.md#limitations) necessary to configure outbound IPs or outbound IP prefixes.
#### Update the cluster with your own outbound public IP
az aks update \
#### Update the cluster with your own outbound public IP prefix
-You can also use public IP prefixes for egress with your *Standard* SKU load balancer. The following example uses the [az network public-ip prefix show][az-network-public-ip-prefix-show] command to list the IDs of your public IP prefixes:
+You can also use public IP prefixes for egress with your *Standard* SKU load balancer. The following example uses the [az network public-ip prefix show][az-network-public-ip-prefix-show] command to list the IDs of your public IP prefixes.
```azurecli-interactive az network public-ip prefix show --resource-group myResourceGroup --name myPublicIPPrefix --query id -o tsv
az aks update \
#### Create the cluster with your own public IP or prefixes
-You may wish to bring your own IP addresses or IP prefixes for egress at cluster creation time to support scenarios like adding egress endpoints to an allowlist. Append the same parameters shown above to your cluster creation step to define your own public IPs and IP prefixes at the start of a cluster's lifecycle.
+You can bring your own IP addresses or IP prefixes for egress at cluster creation time to support scenarios like adding egress endpoints to an allowlist. Append the same parameters shown above to your cluster creation step to define your own public IPs and IP prefixes at the start of a cluster's lifecycle.
Use the *az aks create* command with the *load-balancer-outbound-ips* parameter to create a new cluster with your public IPs at the start.
az aks create \
### Configure the allocated outbound ports > [!IMPORTANT]
-> If you have applications on your cluster which can establish a large number of connections to small set of destinations, for example many instances of a frontend application connecting to a database, you may have a scenario very susceptible to encounter SNAT port exhaustion. SNAT port exhaustion happens when an application runs out of outbound ports to use to establish a connection to another application or host. If you have a scenario where you may encounter SNAT port exhaustion, it is highly recommended that you increase the allocated outbound ports and outbound frontend IPs on the load balancer to prevent SNAT port exhaustion. See below for information on how to properly calculate outbound ports and outbound frontend IP values.
+> If you have applications on your cluster that can establish a large number of connections to a small set of destinations, such as many instances of a frontend application connecting to a database, you may have a scenario that's very susceptible to SNAT port exhaustion. SNAT port exhaustion happens when an application runs out of outbound ports to use to establish a connection to another application or host. If you have a scenario where you may encounter SNAT port exhaustion, we highly recommend that you increase the allocated outbound ports and outbound frontend IPs on the load balancer. See below for information on how to properly calculate outbound ports and outbound frontend IP values.
-By default, AKS sets *AllocatedOutboundPorts* on its load balancer to `0`, which enables [automatic outbound port assignment based on backend pool size][azure-lb-outbound-preallocatedports] when creating a cluster. For example, if a cluster has 50 or fewer nodes, 1024 ports are allocated to each node. As the number of nodes in the cluster is increased, fewer ports will be available per node. To show the *AllocatedOutboundPorts* value for the AKS cluster load balancer, use `az network lb outbound-rule list`. For example:
+By default, AKS sets *AllocatedOutboundPorts* on its load balancer to `0`, which enables [automatic outbound port assignment based on backend pool size][azure-lb-outbound-preallocatedports] when creating a cluster. For example, if a cluster has 50 or fewer nodes, 1024 ports are allocated to each node. As the number of nodes in the cluster is increased, fewer ports will be available per node. To show the *AllocatedOutboundPorts* value for the AKS cluster load balancer, use `az network lb outbound-rule list`.
```azurecli-interactive NODE_RG=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv) az network lb outbound-rule list --resource-group $NODE_RG --lb-name kubernetes -o table ```
-The following example output shows that automatic outbound port assignment based on backend pool size is enabled for the cluster:
+The following example output shows that automatic outbound port assignment based on backend pool size is enabled for the cluster.
```console AllocatedOutboundPorts EnableTcpReset IdleTimeoutInMinutes Name Protocol ProvisioningState ResourceGroup
AllocatedOutboundPorts EnableTcpReset IdleTimeoutInMinutes Name
0 True 30 aksOutboundRule All Succeeded MC_myResourceGroup_myAKSCluster_eastus ```
-To configure a specific value for *AllocatedOutboundPorts* and outbound IP address when creating or updating a cluster, use `load-balancer-outbound-ports` and either `load-balancer-managed-outbound-ip-count`, `load-balancer-outbound-ips`, or `load-balancer-outbound-ip-prefixes`. Before setting a specific value or increasing an existing value for either for outbound ports and outbound IP address, you must calculate the appropriate number of outbound ports and IP address. Use the following equation for this calculation rounded to the nearest integer: `64,000 ports per IP / <outbound ports per node> * <number of outbound IPs> = <maximum number of nodes in the cluster>`.
+To configure a specific value for *AllocatedOutboundPorts* and outbound IP address when creating or updating a cluster, use `load-balancer-outbound-ports` and either `load-balancer-managed-outbound-ip-count`, `load-balancer-outbound-ips`, or `load-balancer-outbound-ip-prefixes`. Before setting a specific value or increasing an existing value for either outbound ports or outbound IP addresses, you must calculate the appropriate number of outbound ports and IP addresses. Use the following equation for this calculation rounded to the nearest integer: `64,000 ports per IP / <outbound ports per node> * <number of outbound IPs> = <maximum number of nodes in the cluster>`.
When calculating the number of outbound ports and IPs and setting the values, remember:+ * The number of outbound ports is fixed per node based on the value you set. * The value for outbound ports must be a multiple of 8. * Adding more IPs does not add more ports to any node. It provides capacity for more nodes in the cluster. * You must account for nodes that may be added as part of upgrades, including the count of nodes specified via [maxSurge values][maxsurge]. The following examples show how the number of outbound ports and IP addresses are affected by the values you set:-- If the default values are used and the cluster has 48 nodes, each node will have 1024 ports available.-- If the default values are used and the cluster scales from 48 to 52 nodes, each node will be updated from 1024 ports available to 512 ports available.-- If outbound ports is set to 1,000 and outbound IP count is set to 2, then the cluster can support a maximum of 128 nodes: `64,000 ports per IP / 1,000 ports per node * 2 IPs = 128 nodes`.-- If outbound ports is set to 1,000 and outbound IP count is set to 7, then the cluster can support a maximum of 448 nodes: `64,000 ports per IP / 1,000 ports per node * 7 IPs = 448 nodes`.-- If outbound ports is set to 4,000 and outbound IP count is set to 2, then the cluster can support a maximum of 32 nodes: `64,000 ports per IP / 4,000 ports per node * 2 IPs = 32 nodes`.-- If outbound ports is set to 4,000 and outbound IP count is set to 7, then the cluster can support a maximum of 112 nodes: `64,000 ports per IP / 4,000 ports per node * 7 IPs = 112 nodes`.+
+* If the default values are used and the cluster has 48 nodes, each node will have 1024 ports available.
+* If the default values are used and the cluster scales from 48 to 52 nodes, each node will be updated from 1024 ports available to 512 ports available.
+* If outbound ports is set to 1,000 and outbound IP count is set to 2, then the cluster can support a maximum of 128 nodes: `64,000 ports per IP / 1,000 ports per node * 2 IPs = 128 nodes`.
+* If outbound ports is set to 1,000 and outbound IP count is set to 7, then the cluster can support a maximum of 448 nodes: `64,000 ports per IP / 1,000 ports per node * 7 IPs = 448 nodes`.
+* If outbound ports is set to 4,000 and outbound IP count is set to 2, then the cluster can support a maximum of 32 nodes: `64,000 ports per IP / 4,000 ports per node * 2 IPs = 32 nodes`.
+* If outbound ports is set to 4,000 and outbound IP count is set to 7, then the cluster can support a maximum of 112 nodes: `64,000 ports per IP / 4,000 ports per node * 7 IPs = 112 nodes`.
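For instance, the last example above could be applied to an existing cluster with an update like the following sketch; the resource group and cluster names are placeholders, and the right values depend on the calculation for your own node count:

```azurecli-interactive
# 64,000 ports per IP / 4,000 ports per node * 7 IPs = up to 112 nodes.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --load-balancer-managed-outbound-ip-count 7 \
    --load-balancer-outbound-ports 4000
```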
> [!IMPORTANT]
-> After calculating the number outbound ports and IPs, verify you have additional outbound port capacity to handle node surge during upgrades. It is critical to allocate sufficient excess ports for additional nodes needed for upgrade and other operations. AKS defaults to one buffer node for upgrade operations. If using [maxSurge values][maxsurge], multiply the outbound ports per node by your maxSurge value to determine the number of ports required. For example if you calculated you needed 4000 ports per node with 7 IP address on a cluster with a maximum of 100 nodes and a max surge of 2:
+> After calculating the number of outbound ports and IPs, verify you have additional outbound port capacity to handle node surge during upgrades. It is critical to allocate sufficient excess ports for additional nodes needed for upgrade and other operations. AKS defaults to one buffer node for upgrade operations. If using [maxSurge values][maxsurge], multiply the outbound ports per node by your maxSurge value to determine the number of ports required. For example, if you calculated you needed 4000 ports per node with 7 IP addresses on a cluster with a maximum of 100 nodes and a max surge of 2:
+>
> * 2 surge nodes * 4000 ports per node = 8000 ports needed for node surge during upgrades. > * 100 nodes * 4000 ports per node = 400,000 ports required for your cluster. > * 7 IPs * 64000 ports per IP = 448,000 ports available for your cluster.
az aks update \
### Configure the load balancer idle timeout
-When SNAT port resources are exhausted, outbound flows fail until existing flows release SNAT ports. Load Balancer reclaims SNAT ports when the flow closes and the AKS-configured load balancer uses a 30-minute idle timeout for reclaiming SNAT ports from idle flows.
-You can also use transport (for example, **`TCP keepalives`**) or **`application-layer keepalives`** to refresh an idle flow and reset this idle timeout if necessary. You can configure this timeout following the below example:
+When SNAT port resources are exhausted, outbound flows fail until existing flows release SNAT ports. Load balancer reclaims SNAT ports when the flow closes and the AKS-configured load balancer uses a 30-minute idle timeout for reclaiming SNAT ports from idle flows.
+
+You can also use transport-layer keepalives (for example, **`TCP keepalives`**) or **`application-layer keepalives`** to refresh an idle flow and reset this idle timeout if necessary. You can configure this timeout following the example below.
```azurecli-interactive az aks update \
az aks update \
--load-balancer-idle-timeout 4 ```
-If you expect to have numerous short lived connections, and no connections that are long lived and might have long times of idle, like leveraging `kubectl proxy` or `kubectl port-forward` consider using a low timeout value such as 4 minutes. Also, when using TCP keepalives, it's sufficient to enable them on one side of the connection. For example, it's sufficient to enable them on the server side only to reset the idle timer of the flow and it's not necessary for both sides to start TCP keepalives. Similar concepts exist for application layer, including database client-server configurations. Check the server side for what options exist for application-specific keepalives.
+If you expect to have numerous short-lived connections and no long-lived connections that might sit idle for long periods, such as when leveraging `kubectl proxy` or `kubectl port-forward`, consider using a low timeout value such as 4 minutes. When using TCP keepalives, it's sufficient to enable them on one side of the connection. For example, it's sufficient to enable them on the server side only to reset the idle timer of the flow. It's not necessary for both sides to start TCP keepalives. Similar concepts exist for the application layer, including database client-server configurations. Check the server side for what options exist for application-specific keepalives.
> [!IMPORTANT]
-> AKS enables TCP Reset on idle by default and recommends you keep this configuration on and leverage it for more predictable application behavior on your scenarios.
+>
+> AKS enables *TCP Reset* on idle by default. We recommend you keep this configuration on and leverage it for more predictable application behavior on your scenarios.
+>
> TCP RST is only sent during TCP connection in ESTABLISHED state. Read more about it [here](../load-balancer/load-balancer-tcp-reset.md).
-When setting *IdleTimeoutInMinutes* to a different value than the default of 30 minutes, consider how long your workloads will need an outbound connection. Also consider the default timeout value for a *Standard* SKU load balancer used outside of AKS is 4 minutes. An *IdleTimeoutInMinutes* value that more accurately reflects your specific AKS workload can help decrease SNAT exhaustion caused by tying up connections no longer being used.
+When setting *IdleTimeoutInMinutes* to a different value than the default of 30 minutes, consider how long your workloads will need an outbound connection. Also consider that the default timeout value for a *Standard* SKU load balancer used outside of AKS is 4 minutes. An *IdleTimeoutInMinutes* value that more accurately reflects your specific AKS workload can help decrease SNAT exhaustion caused by tying up connections no longer being used.
> [!WARNING]
-> Altering the values for *AllocatedOutboundPorts* and *IdleTimeoutInMinutes* may significantly change the behavior of the outbound rule for your load balancer and should not be done lightly, without understanding the tradeoffs and your application's connection patterns, check the [SNAT Troubleshooting section below][troubleshoot-snat] and review the [Load Balancer outbound rules][azure-lb-outbound-rules-overview] and [outbound connections in Azure][azure-lb-outbound-connections] before updating these values to fully understand the impact of your changes.
+> Altering the values for *AllocatedOutboundPorts* and *IdleTimeoutInMinutes* may significantly change the behavior of the outbound rule for your load balancer and should not be done lightly. Check the [SNAT Troubleshooting section below][troubleshoot-snat] and review the [Load Balancer outbound rules][azure-lb-outbound-rules-overview] and [outbound connections in Azure][azure-lb-outbound-connections] before updating these values to fully understand the impact of your changes.
## Restrict inbound traffic to specific IP ranges
-The following manifest uses *loadBalancerSourceRanges* to specify a new IP range for inbound external traffic:
+The following manifest uses *loadBalancerSourceRanges* to specify a new IP range for inbound external traffic.
```yaml apiVersion: v1
spec:
This example updates the rule to allow inbound external traffic only from the `MY_EXTERNAL_IP_RANGE` range. If you replace `MY_EXTERNAL_IP_RANGE` with the internal subnet IP address, traffic is restricted to only cluster internal IPs. If traffic is restricted to cluster internal IPs, clients outside your Kubernetes cluster won't be able to access the load balancer. > [!NOTE]
-> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a Network Security Group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
+> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
## Maintain the client's IP on inbound connections
-By default, a service of type `LoadBalancer` [in Kubernetes](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer) and in AKS won't persist the client's IP address on the connection to the pod. The source IP on the packet that's delivered to the pod will be the private IP of the node. To maintain the clientΓÇÖs IP address, you must set `service.spec.externalTrafficPolicy` to `local` in the service definition. The following manifest shows an example:
+By default, a service of type `LoadBalancer` [in Kubernetes](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer) and in AKS won't persist the client's IP address on the connection to the pod. The source IP on the packet that's delivered to the pod will be the private IP of the node. To maintain the client's IP address, you must set `service.spec.externalTrafficPolicy` to `Local` in the service definition. The following manifest shows an example.
```yaml apiVersion: v1
Below is a list of annotations supported for Kubernetes services with type `Load
| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group). | `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by comma. | `service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout` | TCP idle timeouts in minutes | Specify the time, in minutes, for TCP connection idle timeouts to occur on the load balancer. Default and minimum value is 4. Maximum value is 30. Must be an integer.
-|`service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset` | `true` | Disable `enableTcpReset` for SLB. Deprecated in Kubernetes 1.18 and removed in 1.20.
-
+|`service.beta.kubernetes.io/azure-load-balancer-disable-tcp-reset` | `true` | Disable `enableTcpReset` for SLB. Deprecated in Kubernetes 1.18 and removed in 1.20.
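As an illustration, the following sketch applies one of these annotations to a service; the service name, selector, and timeout value are placeholders:

```azurecli-interactive
# Set a 10-minute TCP idle timeout on the rules created for this service.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: public-svc
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout: "10"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: public-app
EOF
```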
## Troubleshooting SNAT
-If you know that you're starting many outbound TCP or UDP connections to the same destination IP address and port, and you observe failing outbound connections or are advised by support that you're exhausting SNAT ports (preallocated ephemeral ports used by PAT), you have several general mitigation options. Review these options and decide what is available and best for your scenario. It's possible that one or more can help manage this scenario. For detailed information, review the [Outbound Connections Troubleshooting Guide](../load-balancer/troubleshoot-outbound-connection.md).
+If you know that you're starting many outbound TCP or UDP connections to the same destination IP address and port, and you observe failing outbound connections or are advised by support that you're exhausting SNAT ports (preallocated ephemeral ports used by PAT), you have several general mitigation options. Review these options and decide what's best for your scenario. It's possible that one or more can help manage this scenario. For detailed information, review the [outbound connections troubleshooting guide](../load-balancer/troubleshoot-outbound-connection.md).
Frequently, the root cause of SNAT exhaustion is an anti-pattern in how outbound connectivity is established or managed, or configurable timers changed from their default values. Review this section carefully. ### Steps
-1. Check if your connections remain idle for a long time and rely on the default idle timeout for releasing that port. If so the default timeout of 30 min might need to be reduced for your scenario.
+
+1. Check if your connections remain idle for a long time and rely on the default idle timeout for releasing that port. If so, the default timeout of 30 minutes might need to be reduced for your scenario.
2. Investigate how your application is creating outbound connectivity (for example, code review or packet capture).
-3. Determine if this activity is expected behavior or whether the application is misbehaving. Use [metrics](../load-balancer/load-balancer-standard-diagnostics.md) and [logs](../load-balancer/monitor-load-balancer.md) in Azure Monitor to substantiate your findings. Use "Failed" category for SNAT Connections metric for example.
+3. Determine if this activity is expected behavior or whether the application is misbehaving. Use [metrics](../load-balancer/load-balancer-standard-diagnostics.md) and [logs](../load-balancer/monitor-load-balancer.md) in Azure Monitor to substantiate your findings. For example, use the "Failed" category for the SNAT connections metric (a hedged CLI sketch follows this list).
4. Evaluate if appropriate [patterns](#design-patterns) are followed.
-5. Evaluate if SNAT port exhaustion should be mitigated with [additional Outbound IP addresses + additional Allocated Outbound Ports](#configure-the-allocated-outbound-ports) .
+5. Evaluate if SNAT port exhaustion should be mitigated with [additional outbound IP addresses + additional allocated outbound ports](#configure-the-allocated-outbound-ports).
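The following is a hedged sketch of checking that metric from the CLI; the metric and dimension names are assumptions to verify against the Load Balancer metrics reference, and the resource names are placeholders:

```azurecli-interactive
# Find the AKS-managed load balancer and query its SNAT connection metric for failures.
NODE_RG=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
LB_ID=$(az network lb show --resource-group $NODE_RG --name kubernetes --query id -o tsv)
az monitor metrics list \
    --resource $LB_ID \
    --metric SnatConnectionCount \
    --filter "ConnectionState eq 'failed'" \
    --interval PT1H
```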
### Design patterns+ Always take advantage of connection reuse and connection pooling whenever possible. These patterns will avoid resource exhaustion problems and result in predictable behavior. Primitives for these patterns can be found in many development libraries and frameworks. -- Atomic requests (one request per connection) are generally not a good design choice. Such anti-pattern limits scale, reduces performance, and decreases reliability. Instead, reuse HTTP/S connections to reduce the numbers of connections and associated SNAT ports. The application scale will increase and performance improve because of reduced handshakes, overhead, and cryptographic operation cost when using TLS.-- If you're using out of cluster/custom DNS, or custom upstream servers on coreDNS have in mind that DNS can introduce many individual flows at volume when the client isn't caching the DNS resolvers result. Make sure to customize coreDNS first instead of using custom DNS servers, and define a good caching value.-- UDP flows (for example DNS lookups) allocate SNAT ports for the duration of the idle timeout. The longer the idle timeout, the higher the pressure on SNAT ports. Use short idle timeout (for example 4 minutes).
+* Atomic requests (one request per connection) are generally not a good design choice. Such an anti-pattern limits scale, reduces performance, and decreases reliability. Instead, reuse HTTP/S connections to reduce the number of connections and associated SNAT ports. The application scale will increase and performance will improve because of reduced handshakes, overhead, and cryptographic operation cost when using TLS.
+* If you're using out-of-cluster/custom DNS, or custom upstream servers on coreDNS, keep in mind that DNS can introduce many individual flows at volume when the client isn't caching the DNS resolver's result. Make sure to customize coreDNS first instead of using custom DNS servers, and define a good caching value.
+* UDP flows (for example, DNS lookups) allocate SNAT ports for the duration of the idle timeout. The longer the idle timeout, the higher the pressure on SNAT ports. Use a short idle timeout (for example, 4 minutes).
Use connection pools to shape your connection volume.-- Never silently abandon a TCP flow and rely on TCP timers to clean up flow. If you don't let TCP explicitly close the connection, state remains allocated at intermediate systems and endpoints and makes SNAT ports unavailable for other connections. This pattern can trigger application failures and SNAT exhaustion.-- Don't change OS-level TCP close related timer values without expert knowledge of impact. While the TCP stack will recover, your application performance can be negatively affected when the endpoints of a connection have mismatched expectations. Wishing to change timers is usually a sign of an underlying design problem. Review following recommendations.
+* Never silently abandon a TCP flow and rely on TCP timers to clean up flow. If you don't let TCP explicitly close the connection, state remains allocated at intermediate systems and endpoints and makes SNAT ports unavailable for other connections. This pattern can trigger application failures and SNAT exhaustion.
+* Don't change OS-level TCP close-related timer values without expert knowledge of the impact. While the TCP stack will recover, your application performance can be negatively affected when the endpoints of a connection have mismatched expectations. Wishing to change timers is usually a sign of an underlying design problem. Review the following recommendations.
-## Moving from a basic SKU load balancer to standard SKU
+## Moving from a *Basic SKU* load balancer to *Standard* SKU
-If you have an existing cluster with the Basic SKU Load Balancer, there are important behavioral differences to note when migrating to use a cluster with the Standard SKU Load Balancer.
+If you have an existing cluster with the *Basic* SKU load balancer, there are important behavioral differences to note when migrating to use a cluster with the *Standard* SKU load balancer.
-For example, making blue/green deployments to migrate clusters is a common practice given the `load-balancer-sku` type of a cluster can only be defined at cluster create time. However, *Basic SKU* Load Balancers use *Basic SKU* IP Addresses, which aren't compatible with *Standard SKU* Load Balancers as they require *Standard SKU* IP Addresses. When migrating clusters to upgrade Load Balancer SKUs, a new IP address with a compatible IP Address SKU will be required.
+For example, making blue/green deployments to migrate clusters is a common practice given the `load-balancer-sku` type of a cluster can only be defined at cluster create time. However, *Basic* SKU load balancers use *Basic* SKU IP addresses, which aren't compatible with *Standard* SKU load balancers as they require *Standard* SKU IP addresses. When migrating clusters to upgrade load balancer SKUs, a new IP address with a compatible IP address SKU will be required.
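For example, a *Standard* SKU public IP that the new cluster can use might be created with a sketch like the following; the resource names are placeholders:

```azurecli-interactive
# Create a statically allocated Standard SKU public IP for the new cluster.
az network public-ip create \
    --resource-group myResourceGroup \
    --name myStandardPublicIP \
    --sku Standard \
    --allocation-method Static
```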
-For more considerations on how to migrate clusters, visit [our documentation on migration considerations](aks-migration.md) to view a list of important topics to consider when migrating. The below limitations are also important behavioral differences to note when using Standard SKU Load Balancers in AKS.
+For more considerations on how to migrate clusters, visit [our documentation on migration considerations](aks-migration.md) to view a list of important topics to consider when migrating. The below limitations are also important behavioral differences to note when using *Standard* SKU load balancers in AKS.
## Limitations The following limitations apply when you create and manage AKS clusters that support a load balancer with the *Standard* SKU: * At least one public IP or IP prefix is required for allowing egress traffic from the AKS cluster. The public IP or IP prefix is also required to maintain connectivity between the control plane and agent nodes and to maintain compatibility with previous versions of AKS. You have the following options for specifying public IPs or IP prefixes with a *Standard* SKU load balancer:
- * Provide your own public IPs.
- * Provide your own public IP prefixes.
- * Specify a number up to 100 to allow the AKS cluster to create that many *Standard* SKU public IPs in the same resource group created as the AKS cluster, which is usually named with *MC_* at the beginning. AKS assigns the public IP to the *Standard* SKU load balancer. By default, one public IP will automatically be created in the same resource group as the AKS cluster, if no public IP, public IP prefix, or number of IPs is specified. You also must allow public addresses and avoid creating any Azure Policy that bans IP creation.
+ * Provide your own public IPs.
+ * Provide your own public IP prefixes.
+ * Specify a number up to 100 to allow the AKS cluster to create that many *Standard* SKU public IPs in the same resource group created as the AKS cluster, which is usually named with *MC_* at the beginning. AKS assigns the public IP to the *Standard* SKU load balancer. By default, one public IP will automatically be created in the same resource group as the AKS cluster, if no public IP, public IP prefix, or number of IPs is specified. You also must allow public addresses and avoid creating any Azure policy that bans IP creation.
* A public IP created by AKS cannot be reused as a custom bring your own public IP address. All custom IP addresses must be created and managed by the user. * Defining the load balancer SKU can only be done when you create an AKS cluster. You can't change the load balancer SKU after an AKS cluster has been created.
-* You can only use one type of load balancer SKU (Basic or Standard) in a single cluster.
-* *Standard* SKU Load Balancers only support *Standard* SKU IP Addresses.
+* You can only use one type of load balancer SKU (*Basic* or *Standard*) in a single cluster.
+* *Standard* SKU load balancers only support *Standard* SKU IP addresses.
## Next steps Learn more about Kubernetes services at the [Kubernetes services documentation][kubernetes-services].
-Learn more about using Internal Load Balancer for Inbound traffic at the [AKS Internal Load Balancer documentation](internal-lb.md).
+Learn more about using internal load balancer for inbound traffic at the [AKS internal load balancer documentation](internal-lb.md).
<!-- LINKS - External --> [kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
Learn more about using Internal Load Balancer for Inbound traffic at the [AKS In
[use-multiple-node-pools]: use-multiple-node-pools.md [troubleshoot-snat]: #troubleshooting-snat [service-tags]: ../virtual-network/network-security-groups-overview.md#service-tags
-[maxsurge]: upgrade-cluster.md#customize-node-surge-upgrade
+[maxsurge]: upgrade-cluster.md#customize-node-surge-upgrade
+[az-lb]: ../load-balancer/load-balancer-overview.md
aks Operator Best Practices Advanced Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-advanced-scheduler.md
Let's assume:
Again, let's assume: 1. You have a two-node cluster: *node1* and *node2*.
-1. You upgrade then node pool.
+1. You upgrade the node pool.
1. An additional node is created: *node3*. 1. The taints from *node1* are applied to *node3*. 1. *node1* is deleted.
Alternatively, you can use node selectors. For example, you label nodes to indic
Unlike tolerations, pods without a matching node selector can still be scheduled on labeled nodes. This behavior allows unused resources on the nodes to be consumed, but prioritizes pods that define the matching node selector.
-Let's look at an example of nodes with a high amount of memory. These nodes prioritize pods that request a high amount of memory. To ensure the resources don't sit idle, they also allow other pods to run. The follow example command adds a node pool with the label *hardware=highmem* to the *myAKSCluster* in the *myResourceGroup*. All nodes in that node pool will have this label.
+Let's look at an example of nodes with a high amount of memory. These nodes prioritize pods that request a high amount of memory. To ensure the resources don't sit idle, they also allow other pods to run. The following example command adds a node pool with the label *hardware=highmem* to the *myAKSCluster* in the *myResourceGroup*. All nodes in that node pool will have this label.
```azurecli-interactive az aks nodepool add \
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Last updated 12/17/2020
Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. ItΓÇÖs important you apply the latest security releases, or upgrade to get the latest features. This article shows you how to check for, configure, and apply upgrades to your AKS cluster.
-For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade].
+For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without doing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
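As a quick illustration of the node pool path, a single node pool's node image can be upgraded on its own with a sketch like the following; the resource group, cluster, and node pool names are placeholders:

```azurecli-interactive
# Upgrade only the node image of one node pool, without a cluster-wide Kubernetes upgrade.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-image-only
```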
## Before you begin
-### [Azure CLI](#tab/azure-cli)
-
-This article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-This tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
default 1.18.10 1.19.3
If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
+### [Azure portal](#tab/azure-portal)
+
+To check which Kubernetes releases are available for your cluster:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your AKS cluster.
+3. Under **Settings**, select **Cluster configuration**.
+4. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+5. In **Kubernetes version**, select the version to check for available upgrades.
+
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when no upgrades are available.
+ ## Customize node surge upgrade
Name Location KubernetesVersion ProvisioningState Fqdn
myAKSCluster eastus 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io ```
+### [Azure portal](#tab/azure-portal)
+
+You can also manually upgrade your cluster in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your AKS cluster.
+3. Under **Settings**, select **Cluster configuration**.
+4. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+5. In **Kubernetes version**, select your desired version and then select **Save**.
+
+It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+
+To confirm that the upgrade was successful, navigate to your AKS cluster in the Azure portal. On the **Overview** page, select the **Kubernetes version**.
+ ## View the upgrade events
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[upgrade-cluster]: #upgrade-an-aks-cluster [planned-maintenance]: planned-maintenance.md [aks-auto-upgrade]: auto-upgrade-cluster.md
+[specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
Enabling GMSA with Windows Server nodes on AKS requires:
* Permissions to configure GMSA on Active Directory Domain Service or on-prem Active Directory. * The domain controller must have Active Directory Web Services enabled and must be reachable on port 9389 by the AKS cluster.
+> [!NOTE]
+> Microsoft also provides a purpose-built PowerShell module to configure gMSA on AKS. You can find more information on the module and how to use it in the article [gMSA on Azure Kubernetes Service](/virtualization/windowscontainers/manage-containers/gmsa-aks-ps-module).
+ ## Configure GMSA on Active Directory domain controller To use GMSA with AKS, you need both GMSA and a standard domain user credential to access the GMSA credential configured on your domain controller. To configure GMSA on your domain controller, see [Getting Started with Group Managed Service Accounts][gmsa-getting-started]. For the standard domain user credential, you can use an existing user or create a new one, as long as it has access to the GMSA credential.
To verify GMSA is working and configured correctly, open a web browser to the ex
### No authentication is prompted when loading the page
-If the page loads, but you are not prompted to authenticate, use `kubelet logs POD_NAME` to display the logs of your pod and verify you see *IIS with authentication is ready*.
+If the page loads, but you are not prompted to authenticate, use `kubectl logs POD_NAME` to display the logs of your pod and verify you see *IIS with authentication is ready*.
+
+> [!NOTE]
+> Windows containers don't show logs in kubectl by default. To enable Windows containers to show logs, you need to embed the Log Monitor tool in your Windows image. More information is available [here](https://github.com/microsoft/windows-container-tools).
### Connection timeout when trying to load the page
aks Use Windows Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-hpc.md
NAME READY STATUS RESTARTS AGE
privileged-daemonset-12345 1/1 Running 0 2m13s ```
-Use `kubctl log` to view the logs of the pod and verify the pod has administrator rights:
+Use `kubectl logs` to view the logs of the pod and verify the pod has administrator rights:
```output $ kubectl logs privileged-daemonset-12345 --namespace kube-system
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
In this article, you deployed a Kubernetes cluster and configured it to use a wo
[az-identity-create]: /cli/azure/identity#az-identity-create [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create
-[workload-identity-migration]: workload-identity-migration-sidecar.md
+[workload-identity-migration]: workload-identity-migrate-from-pod-identity.md
[azure-identity-libraries]: ../active-directory/develop/reference-v2-libraries.md
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
+
+ Title: Modernize your Azure Kubernetes Service (AKS) application to use workload identity
+description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity.
++ Last updated : 11/3/2022++
+# Modernize application authentication with workload identity
+
+This article focuses on pod-managed identity migration to Azure Active Directory (Azure AD) workload identity (preview) for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application.
++
+## Before you begin
+
+- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+## Migration scenarios
+
+This section explains the migration options available depending on what version of the Azure Identity SDK is installed.
+
+For either scenario, you need to have the federated trust set up before you update your application to use the workload identity. The following are the minimum steps required:
+
+- [Create a managed identity](#create-a-managed-identity) credential.
+- Associate the managed identity with the Kubernetes service account already used for the pod-managed identity, or [create a new Kubernetes service account](#create-kubernetes-service-account) and then associate it with the managed identity.
+- [Establish a federated trust relationship](#establish-federated-identity-credential-trust) between the managed identity and Azure AD.
+
+### Migrate from latest version
+
+If your cluster is already using the latest version of the Azure Identity SDK, perform the following steps to complete the authentication configuration:
+
+- Deploy workload identity in parallel to where the trust is set up. You can restart your application deployment to begin using the workload identity, where it injects the OIDC annotations into the application automatically.
+- After verifying the application is able to authenticate successfully, you can [remove the pod-managed identity](#remove-pod-managed-identity) annotations from your application and then remove the pod-managed identity add-on.
+
+### Migrate from older version
+
+If your cluster isn't using the latest version of the Azure Identity SDK, you have two options:
+
+- You can use a migration sidecar that we provide, which converts the IMDS transactions your application makes over to [OpenID Connect][openid-connect-overview] (OIDC). The migration sidecar isn't intended to be a long-term solution, but a way to get up and running quickly on workload identity. Running the migration sidecar within your application proxies the application IMDS transactions over to OIDC. Perform the following steps:
+
+ - [Deploy the workload with migration sidecar](#deploy-the-workload-with-migration-sidecar) to proxy the application IMDS transactions.
+ - Once you verify the authentication transactions are completing successfully, you can [remove the pod-managed identity](#remove-pod-managed-identity) annotations from your application and then remove the pod-managed identity add-on.
+
+- Rewrite your application to support the latest version of the [Azure Identity][azure-identity-supported-versions] client library. Afterwards, perform the following steps:
+
+ - Restart your application deployment to begin authenticating using the workload identity.
+ - Once you verify the authentication transactions are completing successfully, you can [remove the pod-managed identity](#remove-pod-managed-identity) annotations from your application and then remove the pod-managed identity add-on.
+
+## Create a managed identity
+
+If you don't have a managed identity created and assigned to your pod, perform the following steps to create one and grant it the necessary permissions to storage, Key Vault, or any other Azure resources your application needs to access.
+
+1. Use the Azure CLI [az account set][az-account-set] command to set a specific subscription to be the current active subscription. Then use the [az identity create][az-identity-create] command to create a managed identity.
+
+ ```azurecli
+ az account set --subscription "subscriptionID"
+ ```
+
+ ```azurecli
+ az identity create --name "userAssignedIdentityName" --resource-group "resourceGroupName" --location "location" --subscription "subscriptionID"
+ ```
+
+ ```bash
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "resourceGroupName" --name "userAssignedIdentityName" --query 'clientId' -otsv)"
+ ```
+
+2. Grant the managed identity the permissions required to access the resources it needs in Azure.
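+
+ For example, a role assignment granting the identity read access to a storage account might look like the following; the role and scope shown are placeholders for whatever resources your application uses:
+
+ ```azurecli
+ az role assignment create \
+ --assignee "${USER_ASSIGNED_CLIENT_ID}" \
+ --role "Storage Blob Data Reader" \
+ --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+ ```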
+
+3. To get the OIDC Issuer URL and save it to an environment variable, run the following command. Replace the default values for the cluster name and the resource group name.
+
+ ```bash
+ export AKS_OIDC_ISSUER="$(az aks show -n myAKSCluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv)"
+ ```
+
+## Create Kubernetes service account
+
+If you don't have a dedicated Kubernetes service account created for this application, perform the following steps to create and then annotate it with the client ID of the managed identity created in the previous step. Use the [az aks get-credentials][az-aks-get-credentials] command and replace the values for the cluster name and the resource group name.
+
+```azurecli
+az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}"
+```
+
+Copy and paste the following multi-line input in the Azure CLI.
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ annotations:
+ azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
+ labels:
+ azure.workload.identity/use: "true"
+ name: ${SERVICE_ACCOUNT_NAME}
+ namespace: ${SERVICE_ACCOUNT_NAMESPACE}
+EOF
+```
+
+The following output shows successful creation of the service account:
+
+```output
+serviceaccount/workload-identity-sa created
+```
+
+## Establish federated identity credential trust
+
+Use the [az identity federated-credential create][az-identity-federated-credential-create] command to create the federated identity credential between the managed identity, the service account issuer, and the subject. Replace the values `resourceGroupName`, `userAssignedIdentityName`, `federatedIdentityName`, `serviceAccountNamespace`, and `serviceAccountName`.
+
+```azurecli
+az identity federated-credential create --name federatedIdentityName --identity-name userAssignedIdentityName --resource-group resourceGroupName --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}
+```
+
+> [!NOTE]
+> It takes a few seconds for the federated identity credential to be propagated after being initially added. If a token request is made immediately after adding the federated identity credential, it might lead to failure for a couple of minutes as the cache is populated in the directory with old data. To avoid this issue, you can add a slight delay after adding the federated identity credential.
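+
+For example, a deployment script might simply pause before the first token request; the duration here is arbitrary:
+
+```bash
+# Allow time for the federated identity credential to propagate before requesting tokens.
+sleep 60
+```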
+
+## Deploy the workload with migration sidecar
+
+If your application is using managed identity and still relies on IMDS to get an access token, you can use the workload identity migration sidecar to start migrating to workload identity. The sidecar is a migration aid only; in the long term, you should modify your application's code to use the latest Azure Identity SDKs that support client assertion.
+
+To use the migration sidecar, add the following [annotation][pod-annotations] values to your pod specification when you update or deploy the workload (a sample patch follows the list):
+
+* `azure.workload.identity/inject-proxy-sidecar` - value is `true` or `false`
+* `azure.workload.identity/proxy-sidecar-port` - value is the desired port for the proxy sidecar. The default value is `8080`.
+
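+For example, assuming your workload is a deployment named `httpbin-deployment` (a placeholder), you could add the annotations to its pod template with a patch such as:
+
+```bash
+# Add the migration sidecar annotations to the deployment's pod template.
+kubectl patch deployment httpbin-deployment --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"azure.workload.identity/inject-proxy-sidecar":"true","azure.workload.identity/proxy-sidecar-port":"8080"}}}}}'
+```
+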
+When a pod with the above annotations is created, the Azure Workload Identity mutating webhook automatically injects the init-container and proxy sidecar to the pod spec.
+
+The running webhook adds the init container and proxy sidecar to the pod deployment. The following is an example of the mutated pod spec:
+
+```yml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: httpbin-pod
+ labels:
+ app: httpbin
+spec:
+ serviceAccountName: workload-identity-sa
+ initContainers:
+ - name: init-networking
+ image: mcr.microsoft.com/oss/azure/workload-identity/proxy-init:v0.13.0
+ securityContext:
+ capabilities:
+ add:
+ - NET_ADMIN
+ drop:
+ - ALL
+ privileged: true
+ runAsUser: 0
+ env:
+ - name: PROXY_PORT
+ value: "8080"
+ containers:
+ - name: nginx
+ image: nginx:alpine
+ ports:
+ - containerPort: 80
+ - name: proxy
+ image: mcr.microsoft.com/oss/azure/workload-identity/proxy:v0.13.0
+ ports:
+ - containerPort: 8080
+```
+
+This behavior applies to any configuration where a pod is created. After updating or deploying your application, verify that the pod is in a running state by using the [kubectl describe pod][kubectl-describe] command. Replace the value `podName` with the name of your deployed pod.
+
+```bash
+kubectl describe pods podName
+```
+
+To verify that the pod is passing IMDS transactions, use the [kubectl logs][kubelet-logs] command. Replace the value `podName` with the name of your deployed pod:
+
+```bash
+kubectl logs podName
+```
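+
+If your pod runs multiple containers, you can target the sidecar's logs directly. This assumes the proxy container is named `proxy`, as in the mutated spec shown earlier:
+
+```bash
+kubectl logs podName -c proxy
+```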
+
+The following log output shows successful communication through the proxy sidecar. Verify that the logs show a token is successfully acquired and that the GET operation succeeds.
+
+```output
+I0926 00:29:29.968723 1 proxy.go:97] proxy "msg"="starting the proxy server" "port"=8080 "userAgent"="azure-workload-identity/proxy/v0.13.0-12-gc8527f3 (linux/amd64) c8527f3/2022-09-26-00:19"
+I0926 00:29:29.972496 1 proxy.go:173] proxy "msg"="received readyz request" "method"="GET" "uri"="/readyz"
+I0926 00:29:30.936769 1 proxy.go:107] proxy "msg"="received token request" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>"
+I0926 00:29:31.101998 1 proxy.go:129] proxy "msg"="successfully acquired token" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>"
+```
+
+## Remove pod-managed identity
+
+After you've completed your testing and the application is successfully able to get a token using the proxy sidecar, you can remove the Azure AD pod-managed identity mapping for the pod from your cluster, and then remove the identity.
+
+1. Run the [az aks pod-identity delete][az-aks-pod-identity-delete] command to remove the identity from your pod. This should only be done after all pods in the namespace using the pod-managed identity mapping have migrated to use the sidecar.
+
+ ```azurecli
+ az aks pod-identity delete --name podIdentityName --namespace podIdentityNamespace --resource-group myResourceGroup --cluster-name myAKSCluster
+ ```
+
+## Next steps
+
+This article showed you how to set up your pod to authenticate by using a workload identity as a migration option. For more information about Azure AD workload identity (preview), see the [Overview][workload-identity-overview] article.
+
+<!-- INTERNAL LINKS -->
+[pod-annotations]: workload-identity-overview.md#pod-annotations
+[az-identity-create]: /cli/azure/identity#az-identity-create
+[az-account-set]: /cli/azure/account#az-account-set
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[workload-identity-overview]: workload-identity-overview.md
+[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create
+[az-aks-pod-identity-delete]: /cli/azure/aks/pod-identity#az-aks-pod-identity-delete
+[azure-identity-supported-versions]: workload-identity-overview.md#dependencies
+[azure-identity-libraries]: ../active-directory/develop/reference-v2-libraries.md
+[openid-connect-overview]: ../active-directory/develop/v2-protocols-oidc.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+
+<!-- EXTERNAL LINKS -->
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubelet-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
The following table summarizes our migration or deployment recommendations for w
|Scenario |Description | |||
-| New or existing cluster deployment [runs a supported version](#dependencies) of Azure Identity client library | No migration steps are required.<br> Sample deployment resources:<br> - [Deploy and configure workload identity on a new cluster][deploy-configure-workload-identity-new-cluster]<br> - [Tutorial: Use a workload identity with an application on AKS][tutorial-use-workload-identity] |
-| New or existing cluster deployment [runs an unsupported version](#dependencies) of Azure Identity client library| Update container image to use a supported version of the Azure Identity SDK, or use the [migration sidecar][workload-identity-migration-sidecar]. |
+| New or existing cluster deployment [runs a supported version][azure-identity-libraries] of Azure Identity client library | No migration steps are required.<br> Sample deployment resources:<br> - [Deploy and configure workload identity on a new cluster][deploy-configure-workload-identity-new-cluster]<br> - [Tutorial: Use a workload identity with an application on AKS][tutorial-use-workload-identity] |
+| New or existing cluster deployment runs an unsupported version of Azure Identity client library| Update container image to use a supported version of the Azure Identity SDK, or use the [migration sidecar][workload-identity-migration-sidecar]. |
## Next steps
The following table summarizes our migration or deployment recommendations for w
[openid-connect-overview]: ../active-directory/develop/v2-protocols-oidc.md [deploy-configure-workload-identity-new-cluster]: workload-identity-deploy-cluster.md [tutorial-use-workload-identity]: ./learn/tutorial-kubernetes-workload-identity.md
-[workload-identity-migration-sidecar]: workload-identity-migration-sidecar.md
+[workload-identity-migration-sidecar]: workload-identity-migrate-from-pod-identity.md
[dotnet-azure-identity-client-library]: /dotnet/api/overview/azure/identity-readme [java-azure-identity-client-library]: /java/api/overview/azure/identity-readme [javascript-azure-identity-client-library]: /javascript/api/overview/azure/identity-readme
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
The feature consists of two parts, management and runtime:
For public preview the following limitations exist: - Authorizations feature is not supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.
+- Authorizations feature is not supported in National Clouds.
+- Authorizations feature is not supported on self-hosted gateways.
- Supported identity providers can be found in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository. - Maximum configured number of authorization providers per API Management instance: 1,000 - Maximum configured number of authorizations per authorization provider: 10,000 - Maximum configured number of access policies per authorization: 100 - Maximum requests per minute per service: 250 - Authorization code PKCE flow with code challenge isn't supported.-- Authorizations feature isn't supported on self-hosted gateways. - API documentation is not available yet. Please see [this](https://github.com/Azure/APIManagement-Authorizations) GitHub repository with samples. ### Authorization providers
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
Title: Import a GraphQL API to Azure API Management using the portal | Microsoft Docs
+ Title: Import a GraphQL API to Azure API Management | Microsoft Docs
-description: Learn how to add an existing GraphQL service as an API in Azure API Management. Manage the API and enable queries to pass through to the GraphQL endpoint.
+description: Learn how to add an existing GraphQL service as an API in Azure API Management using the Azure portal, Azure CLI, or Azure PowerShell. Manage the API and enable queries to pass through to the GraphQL endpoint.
Previously updated : 05/19/2022 Last updated : 10/27/2022
If you want to import a GraphQL schema and set up field resolvers using REST or
- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md). - A GraphQL API.
+- Azure CLI
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+
+- Azure PowerShell
+ [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
## Add a GraphQL API
-1. Navigate to your API Management instance.
-1. From the side navigation menu, under the **APIs** section, select **APIs**.
+#### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, select **APIs** > **+ Add API**.
1. Under **Define a new API**, select the **GraphQL** icon. :::image type="content" source="media/graphql-api/import-graphql-api.png" alt-text="Screenshot of selecting GraphQL icon from list of APIs.":::
If you want to import a GraphQL schema and set up field resolvers using REST or
1. After the API is created, browse the schema on the **Design** tab, in the **Frontend** section. :::image type="content" source="media/graphql-api/explore-schema.png" alt-text="Screenshot of exploring the GraphQL schema in the portal.":::
+#### [Azure CLI](#tab/cli)
+
+The following example uses the [az apim api import](/cli/azure/apim/api#az-apim-api-import) command to import a GraphQL passthrough API from the specified URL to an API Management instance named *apim-hello-world*.
+
+```azurecli-interactive
+# API Management service-specific details
+APIMServiceName="apim-hello-world"
+ResourceGroupName="myResourceGroup"
+
+# API-specific details
+APIId="my-graphql-api"
+APIPath="myapi"
+DisplayName="MyGraphQLAPI"
+SpecificationFormat="GraphQL"
+SpecificationURL="<GraphQL backend endpoint>"
+
+# Import API
+az apim api import --path $APIPath --resource-group $ResourceGroupName \
+ --service-name $APIMServiceName --api-id $APIId \
+ --display-name $DisplayName --specification-format $SpecificationFormat --specification-url $SpecificationURL
+```
+
+After importing the API, if needed, you can update the settings by using the [az apim api update](/cli/azure/apim/api#az-apim-api-update) command.
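+
+For example, the following command changes only the display name; the new value is a placeholder:
+
+```azurecli-interactive
+az apim api update --resource-group $ResourceGroupName \
+    --service-name $APIMServiceName --api-id $APIId \
+    --display-name "MyRenamedGraphQLAPI"
+```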
++
+#### [PowerShell](#tab/powershell)
+
+The following example uses the [Import-AzApiManagementApi](/powershell/module/az.apimanagement/import-azapimanagementapi) Azure PowerShell cmdlet to import a GraphQL passthrough API from the specified URL to an API Management instance named *apim-hello-world*.
+
+```powershell-interactive
+# API Management service-specific details
+$apimServiceName = "apim-hello-world"
+$resourceGroupName = "myResourceGroup"
+
+# API-specific details
+$apiId = "my-graphql-api"
+$apiPath = "myapi"
+$specificationFormat = "GraphQL"
+$specificationUrl = "<GraphQL backend endpoint>"
+
+# Get context of the API Management instance.
+$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
+
+# Import API
+Import-AzApiManagementApi -Context $context -ApiId $apiId -SpecificationFormat $specificationFormat -SpecificationUrl $specificationUrl -Path $apiPath
+```
+
+After importing the API, if needed, you can update the settings by using the [Set-AzApiManagementApi](/powershell/module/az.apimanagement/set-azapimanagementapi) cmdlet.
+++ [!INCLUDE [api-management-graphql-test.md](../../includes/api-management-graphql-test.md)] [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
api-management Import Api From Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-oas.md
Title: Import an OpenAPI specification using the Azure portal | Microsoft Docs
-description: Learn how to import an OpenAPI specification with API Management, and then test the API in the Azure and Developer portals.
+ Title: Import an OpenAPI specification to Azure API Management | Microsoft Docs
+description: Learn how to import an OpenAPI specification to an API Management instance using the Azure portal, Azure CLI, or Azure PowerShell. Then, test the API in the Azure portal.
- -- Previously updated : 04/20/2020+ Last updated : 10/26/2022 + # Import an OpenAPI specification
-This article shows how to import an "OpenAPI specification" back-end API residing at https://conferenceapi.azurewebsites.net?format=json. This back-end API is provided by Microsoft and hosted on Azure. The article also shows how to test the APIM API.
+This article shows how to import an "OpenAPI specification" backend API residing at `https://conferenceapi.azurewebsites.net?format=json`. This backend API is provided by Microsoft and hosted on Azure. The article also shows how to test the APIM API.
In this article, you learn how to:- > [!div class="checklist"]
-> * Import an "OpenAPI specification" back-end API
+> * Import an OpenAPI specification using the Azure portal, Azure CLI, or Azure PowerShell
> * Test the API in the Azure portal
+> [!NOTE]
+> API import limitations are documented in [API import restrictions and known issues](api-management-api-import-restrictions.md).
+ ## Prerequisites
-Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
+* An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+* Azure CLI
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-## <a name="create-api"> </a>Import and publish a back-end API
-1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu.
-2. Select **OpenAPI specification** from the **Add a new API** list.
+* Azure PowerShell
+ [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
- ![OpenAPI specification](./media/import-api-from-oas/oas-api.png)
-3. Enter API settings. You can set the values during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
-4. Select **Create**.
+## <a name="create-api"> </a>Import a backend API
-> [!NOTE]
-> The API import limitations are documented in [another article](api-management-api-import-restrictions.md).
+#### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, select **APIs** > **+ Add API**.
+1. Under **Create from definition**, select **OpenAPI**.
+
+ :::image type="content" source="media/import-api-from-oas/oas-api.png" alt-text="Screenshot of creating an API from an OpenAPI specification in the portal." border="false":::
+1. Enter API settings. You can set the values during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
+
+#### [Azure CLI](#tab/cli)
+
+The following example uses the [az apim api import](/cli/azure/apim/api#az-apim-api-import) command to import an OpenAPI specification from the specified URL to an API Management instance named *apim-hello-world*. To import using a path to a specification instead of a URL, use the `--specification-path` parameter.
+
+```azurecli-interactive
+# API Management service-specific details
+APIMServiceName="apim-hello-world"
+ResourceGroupName="myResourceGroup"
+
+# API-specific details
+APIId="demo-conference-api"
+APIPath="conference"
+SpecificationFormat="OpenAPI"
+SpecificationURL="https://conferenceapi.azurewebsites.net/?format=json"
+
+# Import API
+az apim api import --path $APIPath --resource-group $ResourceGroupName \
+ --service-name $APIMServiceName --api-id $APIId \
+ --specification-format $SpecificationFormat --specification-url $SpecificationURL
+```
+
+After importing the API, if needed, you can update the settings by using the [az apim api update](/cli/azure/apim/api#az-apim-api-update) command.
+
+#### [PowerShell](#tab/powershell)
+
+The following example uses the [Import-AzApiManagementApi](/powershell/module/az.apimanagement/import-azapimanagementapi) Azure PowerShell cmdlet to import an OpenAPI specification from the specified URL to an API Management instance named *apim-hello-world*. To import using a path to a specification instead of a URL, use the `-SpecificationPath` parameter.
+
+```powershell-interactive
+# API Management service-specific details
+$apimServiceName = "apim-hello-world"
+$resourceGroupName = "myResourceGroup"
+
+# API-specific details
+$apiId = "demo-conference-api"
+$apiPath = "conference"
+$specificationFormat = "OpenAPI"
+$specificationUrl = "https://conferenceapi.azurewebsites.net/?format=json"
+
+# Get context of the API Management instance.
+$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
+
+# Import API
+Import-AzApiManagementApi -Context $context -ApiId $apiId -SpecificationFormat $specificationFormat -SpecificationUrl $specificationUrl -Path $apiPath
+```
+
+After importing the API, if needed, you can update the settings by using the [Set-AzApiManagementApi](/powershell/module/az.apimanagement/set-azapimanagementapi) cmdlet.
+++ [!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)] ## Next steps > [!div class="nextstepaction"]
-> [Transform and protect a published API](transform-api.md)
+> * [Create and publish a product](api-management-howto-add-products.md)
+> * [Transform and protect a published API](transform-api.md)
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
Title: Import SOAP API to Azure API Management using the portal | Microsoft Docs
-description: Learn how to import a SOAP API to Azure API Management as a WSDL specification. Then, test the API in the Azure portal.
+ Title: Import SOAP API to Azure API Management | Microsoft Docs
+description: Learn how to import a SOAP API to Azure API Management as a WSDL specification using the Azure portal, Azure CLI, or Azure PowerShell. Then, test the API in the Azure portal.
Previously updated : 03/01/2022 Last updated : 10/26/2022
In this article, you learn how to:
## Prerequisites
-Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
+* An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+* Azure CLI
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
-## <a name="create-api"> </a>Import and publish a backend API
-1. From the left menu, under the **APIs** section, select **APIs** > **+ Add API**.
+* Azure PowerShell
+ [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
++
+
+## <a name="create-api"> </a>Import a backend API
+
+#### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, select **APIs** > **+ Add API**.
1. Under **Create from definition**, select **WSDL**. ![SOAP API](./media/import-soap-api/wsdl-api.png)
Complete the following quickstart: [Create an Azure API Management instance](get
1. In **Import method**, **SOAP pass-through** is selected by default. With this selection, the API is exposed as SOAP, and API consumers have to use SOAP rules. If you want to "restify" the API, follow the steps in [Import a SOAP API and convert it to REST](restify-soap-api.md).
- ![Create SOAP API from WDL specification](./media/import-soap-api/pass-through.png)
+ ![Create SOAP API from WSDL specification](./media/import-soap-api/pass-through.png)
1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**. 1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. For more information about API settings, see [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial. 1. Select **Create**.
-### Test the new API in the portal
+#### [Azure CLI](#tab/cli)
+
+The following example uses the [az apim api import](/cli/azure/apim/api#az-apim-api-import) command to import a WSDL specification from the specified URL to an API Management instance named *apim-hello-world*. To import using a path to a specification instead of a URL, use the `--specification-path` parameter.
+
+For this example WSDL, the service name is *OrdersAPI*, and one of the available endpoints (interfaces) is *basic*.
+
+```azurecli-interactive
+# API Management service-specific details
+APIMServiceName="apim-hello-world"
+ResourceGroupName="myResourceGroup"
+
+# API-specific details
+APIId="order-api"
+APIPath="order"
+SpecificationFormat="Wsdl"
+SpecificationURL="https://fazioapisoap.azurewebsites.net/FazioService.svc?singleWsdl"
+WsdlServiceName="OrdersAPI"
+WsdlEndpointName="basic"
-Operations can be called directly from the portal, which provides a convenient way to view and test the operations of an API.
+# Import API
+az apim api import --path $APIPath --resource-group $ResourceGroupName \
+ --service-name $APIMServiceName --api-id $APIId \
+ --specification-format $SpecificationFormat --specification-url $SpecificationURL \
+ --wsdl-service-name $WsdlServiceName --wsdl-endpoint-name $WsdlEndpointName
+```
-1. Select the API you created in the previous step.
-2. Press the **Test** tab.
-3. Select some operation.
+#### [PowerShell](#tab/powershell)
- The page displays fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you're an administrator already, so the key is filled in automatically.
-1. Press **Send**.
+The following example uses the [Import-AzApiManagementApi](/powershell/module/az.apimanagement/import-azapimanagementapi) Azure PowerShell cmdlet to import a WSDL specification from the specified URL to an API Management instance named *apim-hello-world*. To import using a path to a specification instead of a URL, use the `-SpecificationPath` parameter.
+
+For this example WSDL, the service name is *OrdersAPI*, and one of the available endpoints (interfaces) is *basic*.
+
+```powershell-interactive
+# API Management service-specific details
+$apimServiceName = "apim-hello-world"
+$resourceGroupName = "myResourceGroup"
+
+# API-specific details
+$apiId = "orders-api"
+$apiPath = "orders"
+$specificationFormat = "Wsdl"
+$specificationUrl = "https://fazioapisoap.azurewebsites.net/FazioService.svc?singleWsdl"
+$wsdlServiceName = "OrdersAPI"
+$wsdlEndpointName = "basic"
+
+# Get context of the API Management instance.
+$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
+
+# Import API
+Import-AzApiManagementApi -Context $context -ApiId $apiId -SpecificationFormat $specificationFormat -SpecificationUrl $specificationUrl -Path $apiPath -WsdlServiceName $wsdlServiceName -WsdlEndpointName $wsdlEndpointName
+```
++
- When the test is successful, the backend responds with **200 OK** and some data.
## Wildcard SOAP action
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
Title: Import a WebSocket API using the Azure portal | Microsoft Docs
+ Title: Import a WebSocket API to Azure API Management | Microsoft Docs
description: Learn how API Management supports WebSocket, add a WebSocket API, and WebSocket limitations. Previously updated : 11/2/2021 Last updated : 10/27/2022 # Import a WebSocket API
-With API Management's WebSocket API solution, you can now manage, protect, observe, and expose both WebSocket and REST APIs with API Management and provide a central hub for discovering and consuming all APIs. API publishers can quickly add a WebSocket API in API Management via:
-* A simple gesture in the Azure portal, and
-* The Management API and Azure Resource Manager.
+With API Management's WebSocket API solution, API publishers can quickly add a WebSocket API in API Management via the Azure portal, Azure CLI, Azure PowerShell, and other Azure tools.
You can secure WebSocket APIs by applying existing access control policies, like [JWT validation](./api-management-access-restriction-policies.md#ValidateJWT). You can also test WebSocket APIs using the API test consoles in both Azure portal and developer portal. Building on existing observability capabilities, API Management provides metrics and logs for monitoring and troubleshooting WebSocket APIs.
In this article, you will:
- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md). - A WebSocket API.
+- Azure CLI
## WebSocket passthrough
Per the [WebSocket protocol](https://tools.ietf.org/html/rfc6455), when a client
## Add a WebSocket API
-1. Navigate to your API Management instance.
-1. From the side navigation menu, under the **APIs** section, select **APIs**.
-1. Under **Define a new API**, select the **WebSocket** icon.
+#### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, select **APIs** > **+ Add API**.
+1. Under **Define a new API**, select **WebSocket**.
1. In the dialog box, select **Full** and complete the required form fields. | Field | Description |
Per the [WebSocket protocol](https://tools.ietf.org/html/rfc6455), when a client
1. Click **Create**. ++ ## Test your WebSocket API 1. Navigate to your WebSocket API.
Below are the current restrictions of WebSocket support in API Management:
* WebSocket APIs are not supported yet in the Consumption tier. * WebSocket APIs are not supported yet in the [self-hosted gateway](./self-hosted-gateway-overview.md). * 200 active connections limit per unit.
-* Websockets APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
+* WebSocket APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
* Currently, the [set-header](api-management-transformation-policies.md#SetHTTPheader) policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests. * During the TLS handshake with a WebSocket backend, API Management validates that the server certificate is trusted and that its subject name matches the hostname. With HTTP APIs, API Management validates that the certificate is trusted but doesnΓÇÖt validate that hostname and subject match.
The following policies are not supported by and cannot be applied to the onHands
* Validate status code > [!NOTE]
-> If you applied the policies at higher scopes (i.e., global or product) and they were inherited by a WebSocket API through the policy, they will be skipped at run time.
+> If you applied the policies at higher scopes (i.e., global or product) and they were inherited by a WebSocket API through the policy, they will be skipped at runtime.
[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
## 2 - Create a web app in Azure
-To host your application in Azure, you need to create Azure App Service web app in Azure. You can create a web app using the [Azure portal](https://portal.azure.com/), [VS Code](https://code.visualstudio.com/) using the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
+To host your application in Azure, you need to create an Azure App Service web app. You can create a web app using the [Azure portal](https://portal.azure.com/), [VS Code](https://code.visualstudio.com/) with the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
### [Azure CLI](#tab/azure-cli)
Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
> [Add user sign-in to a Python web app](../active-directory/develop/quickstart-v2-python-webapp.md) > [!div class="nextstepaction"]
-> [Tutorial: Run Python app in custom container](./tutorial-custom-container.md)
+> [Tutorial: Run Python app in custom container](./tutorial-custom-container.md)
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
Run the following commands in your terminal to clone the sample repo and set up
```bash git clone https://github.com/Azure-Samples/Passwordless-Connections-for-Java-Apps
-cd Passwordless-Connections-for-Java-Apps/Tomcat/checklist/
+cd Passwordless-Connections-for-Java-Apps/Tomcat/
``` ## Create an Azure Postgres DB
-Follow these steps to create an Azure Database for Postgres Single Server in your subscription. The Spring Boot app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
+Follow these steps to create an Azure Database for Postgres in your subscription. The sample app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
1. Sign into the Azure CLI, and optionally set your subscription if you have more than one connected to your login credentials.
Follow these steps to create an Azure Database for Postgres Single Server in you
az group create --name $RESOURCE_GROUP --location $LOCATION ```
-1. Create an Azure Postgres Database server. The server is created with an administrator account, but it won't be used as we'll use the Azure Active Directory (Azure AD) admin account to perform administrative tasks.
+1. Create an Azure Postgres Database server. The server is created with an administrator account, but it won't be used because we'll use the Azure Active Directory (Azure AD) admin account to perform administrative tasks.
+
+ ### [Flexible Server](#tab/flexible)
+
+ ```azurecli-interactive
+ POSTGRESQL_ADMIN_USER=azureuser
+ # PostgreSQL admin access rights won't be used because Azure AD authentication is leveraged to administer the database.
+ POSTGRESQL_ADMIN_PASSWORD=<admin-password>
+ POSTGRESQL_HOST=<postgresql-host-name>
+
+ # Create a PostgreSQL server.
+ az postgres flexible-server create \
+ --resource-group $RESOURCE_GROUP \
+ --name $POSTGRESQL_HOST \
+ --location $LOCATION \
+ --admin-user $POSTGRESQL_ADMIN_USER \
+ --admin-password $POSTGRESQL_ADMIN_PASSWORD \
+ --public-network-access 0.0.0.0 \
+ --sku-name Standard_D2s_v3
+ ```
+
+ ### [Single Server](#tab/single)
```azurecli-interactive POSTGRESQL_ADMIN_USER=azureuser
- # PostgreSQL admin access rights won't be used as Azure AD authentication is leveraged to administer the database.
+ # PostgreSQL admin access rights won't be used because Azure AD authentication is leveraged to administer the database.
POSTGRESQL_ADMIN_PASSWORD=<admin-password> POSTGRESQL_HOST=<postgresql-host-name>
Follow these steps to create an Azure Database for Postgres Single Server in you
1. Create a database for the application.
+ ### [Flexible Server](#tab/flexible)
+
+ ```azurecli-interactive
+ DATABASE_NAME=checklist
+
+ az postgres flexible-server db create \
+ --resource-group $RESOURCE_GROUP \
+ --server-name $POSTGRESQL_HOST \
+ --database-name $DATABASE_NAME
+ ```
+
+ ### [Single Server](#tab/single)
+ ```azurecli-interactive DATABASE_NAME=checklist
Follow these steps to create an Azure Database for Postgres Single Server in you
Follow these steps to build a WAR file and deploy to Azure App Service on Tomcat using a WAR packaging.
-The changes you made in *application.properties* also apply to the managed identity, so the only thing to do is to remove the existing application settings in App Service.
-
-1. The sample app contains a *pom-war.xml* file that can generate the WAR file. Run the following command to build the app.
+1. The sample app contains a *pom.xml* file that can generate the WAR file. Run the following command to build the app.
```bash
- mvn clean package -f pom-war.xml
+ mvn clean package -f pom.xml
``` 1. Create an Azure App Service resource on Linux using Tomcat 9.0. ```azurecli-interactive
+ APPSERVICE_PLAN=<app-service-plan>
+ APPSERVICE_NAME=<app-service-name>
# Create an App Service plan az appservice plan create \ --resource-group $RESOURCE_GROUP \
The changes you made in *application.properties* also apply to the managed ident
## Connect Postgres Database with identity connectivity
-Next, connect your app to an Postgres Database Single Server with a system-assigned managed identity using Service Connector. To do this, run the [az webapp connection create](/cli/azure/webapp/connection/create#az-webapp-connection-create-postgres) command.
+Next, connect your app to a Postgres Database with a system-assigned managed identity using Service Connector.
+
+### [Flexible Server](#tab/flexible)
+
+To do this, run the [az webapp connection create](/cli/azure/webapp/connection/create#az-webapp-connection-create-postgres-flexible) command.
+
+```azurecli-interactive
+az webapp connection create postgres-flexible \
+ --resource-group $RESOURCE_GROUP \
+ --name $APPSERVICE_NAME \
+ --target-resource-group $RESOURCE_GROUP \
+ --server $POSTGRESQL_HOST \
+ --database $DATABASE_NAME \
+ --system-identity
+```
+
+### [Single Server](#tab/single)
+
+To do this, run the [az webapp connection create](/cli/azure/webapp/connection/create#az-webapp-connection-create-postgres) command.
```azurecli-interactive az webapp connection create postgres \
az webapp connection create postgres \
--system-identity ``` + This command creates a connection between your web app and your PostgreSQL server, and manages authentication through a system-assigned managed identity. ## View sample web app
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
az network application-gateway start -n <appgw_name> -g <rg_name>
**Message:** Status code of the backend's HTTP response did not match the probe setting. Expected:{HTTPStatusCode0} Received:{HTTPStatusCode1}.
-**Cause:** After the TCP connection has been established and a TLS handshake is done (if TLS is enabled), Application Gateway will send the probe as an HTTP GET request to the backend server. As described earlier, the default probe will be to `<protocol>://127.0.0.1:<port>/`, and it considers response status codes in the rage 200 through 399 as Healthy. If the server returns any other status code, it will be marked as Unhealthy with this message.
+**Cause:** After the TCP connection has been established and a TLS handshake is done (if TLS is enabled), Application Gateway will send the probe as an HTTP GET request to the backend server. As described earlier, the default probe will be to `<protocol>://127.0.0.1:<port>/`, and it considers response status codes in the range 200 through 399 as Healthy. If the server returns any other status code, it will be marked as Unhealthy with this message.
**Solution:** Depending on the backend server's response code, you can take the following steps. A few of the common status codes are listed here:
This behavior can occur for one or more of the following reasons:
## Next steps
-Learn more about [Application Gateway diagnostics and logging](./application-gateway-diagnostics.md).
+Learn more about [Application Gateway diagnostics and logging](./application-gateway-diagnostics.md).
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
Mutual authentication, or client authentication, allows for the Application Gate
Application Gateway supports certificate-based mutual authentication where you can upload a trusted client CA certificate(s) to the Application Gateway, and the gateway will use that certificate to authenticate the client sending a request to the gateway. With the rise in IoT use cases and increased security requirements across industries, mutual authentication provides a way for you to manage and control which clients can talk to your Application Gateway.
-To configure mutual authentication, a trusted client CA certificate is required to be uploaded as part of the client authentication portion of an SSL profile. The SSL profile will then need to be associated to a listener in order to complete configuration of mutual authentication. There must always be a root CA certificate in the client certificate that you upload. You can upload a certificate chain as well, but the chain must include a root CA certificate in addition to as many intermediate CA certificates as you'd like.
+To configure mutual authentication, a trusted client CA certificate is required to be uploaded as part of the client authentication portion of an SSL profile. The SSL profile will then need to be associated to a listener in order to complete configuration of mutual authentication. There must always be a root CA certificate in the client certificate that you upload. You can upload a certificate chain as well, but the chain must include a root CA certificate in addition to as many intermediate CA certificates as you'd like. The maximum size of each uploaded file must be 25 KB or less.
For example, if your client certificate contains a root CA certificate, multiple intermediate CA certificates, and a leaf certificate, make sure that the root CA certificate and all the intermediate CA certificates are uploaded onto Application Gateway in one file. For more information on how to extract a trusted client CA certificate, see [how to extract trusted client CA certificates](./mutual-authentication-certificate-management.md).
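
For example, if your root and intermediate CA certificates are in separate PEM files (the file names here are placeholders), you can combine them into a single file before uploading and confirm it stays within the size limit:

```bash
# Combine the intermediate and root CA certificates into one PEM file for upload.
cat intermediate-ca.pem root-ca.pem > trusted-client-ca-chain.pem

# Check the size of the combined file (it must be 25 KB or less).
ls -lh trusted-client-ca-chain.pem
```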
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
Title: Form Recognizer overview
+ Title: What is Azure Form Recognizer
description: Machine-learning based OCR and intelligent document processing understanding service to automate extraction of text, table and structure, and key-value pairs from your forms and documents.
Previously updated : 10/31/2022 Last updated : 11/08/2022 recommendations: false
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
Below claims are generated and included in the attestation token by the service
- **tcbinfohash**: SHA256 value of the TCB Info collateral - **x-ms-sgx-report-data**: SGX enclave report data field (usually SHA256 hash of x-ms-sgx-ehd)
+Below claims will appear only in the attestation token generated for Intel® Xeon® Scalable processor-based server platforms. The claims will not appear if the SGX enclave is not configured with [Key Separation and Sharing Support](https://github.com/openenclave/openenclave/issues/3054). The claim definitions can be found [here](https://github.com/openenclave/openenclave/issues/3054).
+
+- **x-ms-sgx-config-id**
+- **x-ms-sgx-config-svn**
+- **x-ms-sgx-isv-extended-product-id**
+- **x-ms-sgx-isv-family-id**
+ Below claims are considered deprecated but are fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names. Deprecated claim | Recommended claim
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
Previously updated : 10/20/2022- Last updated : 11/7/2022+
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Cosmos DB](../cosmos-db/high-availability.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DNS: Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure DNS: Azure DNS Private Resolver](../dns/dns-private-resolver-get-started-portal.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| [Azure Public IP](../virtual-network/ip-services/public-ip-addresses.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) | | [Azure Site Recovery](migrate-recovery-services-vault.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure SQL](/azure/azure-sql/database/high-availability-sla) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md
This article describes the data that Azure Arc-enabled data services transmits to Microsoft.
-Azure Arc-enabled data services doesn't store any customer data.
+Neither Azure Arc-enabled data services nor any of the applicable data services store any customer data. This applies to Azure Arc-enabled SQL Managed Instance, Azure Arc-enabled PostgreSQL, and Azure Arc-enabled SQL Server.
## Related products
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
Set-AzRedisCache -Name "CacheName" -ResourceGroupName "ResourceGroupName" -Redis
## Next steps -- To learn more about Azure Cache for Redis versions, see lin[Set Redis version for Azure Cache for Redis](cache-how-to-version.md)
+- To learn more about Azure Cache for Redis versions, see [Set Redis version for Azure Cache for Redis](cache-how-to-version.md)
- To learn more about Redis 6 features, see [Diving Into Redis 6.0 by Redis](https://redis.com/blog/diving-into-redis-6/)-- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
+- To learn more about Azure Cache for Redis features: [Azure Cache for Redis Premium service tiers](cache-overview.md#service-tiers)
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
Android Studio will take a few seconds to build the application. After the build
:::image type="content" source="media/quick-android-map/quickstart-android-map.png" alt-text="A screenshot showing Azure Maps in an Android application.":::
+> [!TIP]
+> By default, Android reloads the activity when the orientation changes or the keyboard is hidden. This resets the map state (the map reloads, which resets the view and reloads data to its initial state). To prevent this, add the following to the manifest: `android:configChanges="orientation|keyboardHidden"`. This stops the activity from reloading and instead calls `onConfigurationChanged()` when the orientation changes or the keyboard is hidden.
+ ## Clean up resources >[!WARNING]
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
Title: Manage the Azure Monitor agent
-description: Options for managing the Azure Monitor agent on Azure virtual machines and Azure Arc-enabled servers.
+ Title: Manage Azure Monitor Agent
+description: Options for managing Azure Monitor Agent on Azure virtual machines and Azure Arc-enabled servers.
-# Manage the Azure Monitor agent
+# Manage Azure Monitor Agent
-This article provides the different options currently available to install, uninstall, and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets, and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect. Installing, upgrading, or uninstalling the Azure Monitor agent won't require you to restart your server.
+This article provides the different options currently available to install, uninstall, and update the [Azure Monitor agent](azure-monitor-agent-overview.md). This agent extension can be installed on Azure virtual machines, scale sets, and Azure Arc-enabled servers. It also lists the options to create [associations with data collection rules](data-collection-rule-azure-monitor-agent.md) that define which data the agent should collect. Installing, upgrading, or uninstalling Azure Monitor Agent won't require you to restart your server.
## Virtual machine extension details
-The Azure Monitor agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. You can install it by using any of the methods to install virtual machine extensions including the methods described in this article.
+Azure Monitor Agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. You can install it by using any of the methods to install virtual machine extensions including the methods described in this article.
| Property | Windows | Linux | |:|:|:|
View [Azure Monitor agent extension versions](./azure-monitor-agent-extension-ve
## Prerequisites
-The following prerequisites must be met prior to installing the Azure Monitor agent.
+The following prerequisites must be met prior to installing Azure Monitor Agent.
- **Permissions**: For methods other than using the Azure portal, you must have the following role assignments to install the agent:
The following prerequisites must be met prior to installing the Azure Monitor ag
| Any role that includes the action *Microsoft.Resources/deployments/** | <ul><li>Subscription and/or</li><li>Resource group and/or </li></ul> | To deploy Azure Resource Manager templates | - **Non-Azure**: To install the agent on physical servers and virtual machines hosted *outside* of Azure (that is, on-premises) or in other clouds, you must [install the Azure Arc Connected Machine agent](../../azure-arc/servers/agent-overview.md) first, at no added cost. - **Authentication**: [Managed identity](../../active-directory/managed-identities-azure-resources/overview.md) must be enabled on Azure virtual machines. Both user-assigned and system-assigned managed identities are supported.
- - **User-assigned**: This managed identity is recommended for large-scale deployments, configurable via [built-in Azure policies](#use-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, which means it's more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to the Azure Monitor agent via extension settings:
+ - **User-assigned**: This managed identity is recommended for large-scale deployments, configurable via [built-in Azure policies](#use-azure-policy). You can create a user-assigned managed identity once and share it across multiple VMs, which means it's more scalable than a system-assigned managed identity. If you use a user-assigned managed identity, you must pass the managed identity details to Azure Monitor Agent via extension settings:
```json {
The following prerequisites must be met prior to installing the Azure Monitor ag
We recommend that you use `mi_res_id` as the `identifier-name`. The following sample commands only show usage with `mi_res_id` for the sake of brevity. For more information on `mi_res_id`, `object_id`, and `client_id`, see the [Managed identity documentation](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http). - **System-assigned**: This managed identity is suited for initial testing or small deployments. When used at scale, for example, for all VMs in a subscription, it results in a substantial number of identities created (and deleted) in Azure Active Directory. To avoid this churn of identities, use user-assigned managed identities instead. *For Azure Arc-enabled servers, system-assigned managed identity is enabled automatically* as soon as you install the Azure Arc agent. It's the only supported type for Azure Arc-enabled servers. - **Not required for Azure Arc-enabled servers**: The system identity is enabled automatically if the agent is installed via [creating and assigning a data collection rule by using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).-- **Networking**: If you use network firewalls, the [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. The virtual machine must also have access to the following HTTPS endpoints:
+- **Networking**: If you use network firewalls, the [Azure Resource Manager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. The virtual machine must also have access to the following HTTPS endpoints:
- global.handler.control.monitor.azure.com - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.azure.com)
The following prerequisites must be met prior to installing the Azure Monitor ag
(If you use private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)). > [!NOTE]
-> This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed. *The Azure Monitor agents can't function without being associated with data collection rules.*
+> This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed. *Azure Monitor Agent can't function without being associated with data collection rules.*
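For example, once the agent is installed, a rule can be associated with a machine from PowerShell. The following is a minimal sketch only, assuming the Az.Monitor module and an existing data collection rule; the `New-AzDataCollectionRuleAssociation` parameter names can differ between module versions, and the resource IDs are placeholders.

```powershell
# Illustrative sketch: associate an existing data collection rule (DCR) with a VM
# so that Azure Monitor Agent starts collecting the data defined by the rule.
# <vm-resource-id> and <dcr-resource-id> are placeholders you must supply.
New-AzDataCollectionRuleAssociation `
    -TargetResourceId "<vm-resource-id>" `
    -AssociationName "myVmDcrAssociation" `
    -RuleId "<dcr-resource-id>"
```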
-## Use the Azure portal
+## Install
-Follow these instructions to use the Azure portal.
+#### [Portal](#tab/azure-portal)
-### Install
+To install Azure Monitor Agent by using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This process creates the rule, associates it to the selected resources, and installs Azure Monitor Agent on them if it's not already installed.
-To install the Azure Monitor agent by using the Azure portal, follow the process to [create a data collection rule](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association) in the Azure portal. This process creates the rule, associates it to the selected resources, and installs the Azure Monitor agent on them if it's not already installed.
+#### [PowerShell](#tab/azure-powershell)
-### Uninstall
+You can install Azure Monitor Agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension.
-To uninstall the Azure Monitor agent by using the Azure portal, go to your virtual machine, scale set, or Azure Arc-enabled server. Select the **Extensions** tab and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Uninstall**.
-
-### Update
-
-To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
-
-We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Go to your virtual machine or scale set, select the **Extensions** tab and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Enable automatic upgrade**.
+### Install on Azure virtual machines
-## Use Resource Manager templates
+Use the following PowerShell commands to install Azure Monitor Agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method.
-Follow these instructions to use Azure Resource Manager templates.
+#### User-assigned managed identity
-### Install
+- Windows
+ ```powershell
+ Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+ ```
-You can use Resource Manager templates to install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers and to create an association with data collection rules. You must create any data collection rule prior to creating the association.
+- Linux
+ ```powershell
+ Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+ ```
-Get sample templates for installing the agent and creating the association from the following resources:
+#### System-assigned managed identity
-- [Template to install Azure Monitor agent (Azure and Azure Arc)](../agents/resource-manager-agent.md#azure-monitor-agent)-- [Template to create association with data collection rule](./resource-manager-data-collection-rules.md)
+- Windows
+ ```powershell
+ Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true
+ ```
-Install the templates by using [any deployment method for Resource Manager templates](../../azure-resource-manager/templates/deploy-powershell.md), such as the following commands.
+- Linux
+ ```powershell
+ Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true
+ ```
-# [PowerShell](#tab/ARMAgentPowerShell)
+### Install on Azure Arc-enabled servers
-```powershell
-New-AzResourceGroupDeployment -ResourceGroupName "<resource-group-name>" -TemplateFile "<template-filename.json>" -TemplateParameterFile "<parameter-filename.json>"
-```
+Use the following PowerShell commands to install Azure Monitor Agent on Azure Arc-enabled servers.
-# [CLI](#tab/ARMAgentCLI)
+- Windows
+ ```powershell
+ New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -EnableAutomaticUpgrade
+ ```
-```azurecli
-az deployment group create --resource-group "<resource-group-name>" --template-file "<path-to-template>" --parameters "@<parameter-filename.json>"
-```
-
+- Linux
+ ```powershell
+ New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -EnableAutomaticUpgrade
+ ```
-## Use PowerShell
+#### [Azure CLI](#tab/azure-cli)
-You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension.
+You can install Azure Monitor Agent on Azure virtual machines and on Azure Arc-enabled servers by using the Azure CLI command for adding a virtual machine extension.
### Install on Azure virtual machines
-Use the following PowerShell commands to install the Azure Monitor agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method.
+Use the following CLI commands to install Azure Monitor Agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method.
#### User-assigned managed identity
-# [Windows](#tab/PowerShellWindows)
-
-```powershell
-Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
-```
-# [Linux](#tab/PowerShellLinux)
+- Windows
+ ```azurecli
+ az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+ ```
-```powershell
-Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true -SettingString '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
-```
-
+- Linux
+ ```azurecli
+ az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
+ ```
#### System-assigned managed identity
-# [Windows](#tab/PowerShellWindows)
-
-```powershell
-Set-AzVMExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true
-```
-# [Linux](#tab/PowerShellLinux)
+- Windows
+ ```azurecli
+ az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true
+ ```
-```powershell
-Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true
-```
--
-### Uninstall on Azure virtual machines
+- Linux
+ ```azurecli
+ az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true
+ ```
-Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure virtual machines.
+### Install on Azure Arc-enabled servers
-# [Windows](#tab/PowerShellWindows)
+Use the following CLI commands to install Azure Monitor Agent on Azure Arc-enabled servers.
-```powershell
-Remove-AzVMExtension -Name AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
-```
-# [Linux](#tab/PowerShellLinux)
+- Windows
+ ```azurecli
+ az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+ ```
-```powershell
-Remove-AzVMExtension -Name AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
-```
-
+- Linux
+ ```azurecli
+ az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+ ```
-### Update on Azure virtual machines
+#### [Resource Manager template](#tab/azure-resource-manager)
-To perform a one-time update of the agent, you must first uninstall the existing agent version, then install the new version as described.
+You can use Resource Manager templates to install Azure Monitor Agent on Azure virtual machines and on Azure Arc-enabled servers and to create an association with data collection rules. You must create any data collection rule prior to creating the association.
-We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following PowerShell commands.
+Get sample templates for installing the agent and creating the association from the following resources:
-# [Windows](#tab/PowerShellWindows)
+- [Template to install Azure Monitor agent (Azure and Azure Arc)](../agents/resource-manager-agent.md#azure-monitor-agent)
+- [Template to create association with data collection rule](./resource-manager-data-collection-rules.md)
-```powershell
-Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorWindowsAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
-```
+Install the templates by using [any deployment method for Resource Manager templates](../../azure-resource-manager/templates/deploy-powershell.md), such as the following commands.
-# [Linux](#tab/PowerShellLinux)
+- PowerShell
+ ```powershell
+ New-AzResourceGroupDeployment -ResourceGroupName "<resource-group-name>" -TemplateFile "<template-filename.json>" -TemplateParameterFile "<parameter-filename.json>"
+ ```
-```powershell
-Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorLinuxAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
-```
-
-
+- Azure CLI
+ ```azurecli
+ az deployment group create --resource-group "<resource-group-name>" --template-file "<path-to-template>" --parameters "@<parameter-filename.json>"
+ ```
-### Install on Azure Arc-enabled servers
+
-Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers.
+## Uninstall
-# [Windows](#tab/PowerShellWindowsArc)
+#### [Portal](#tab/azure-portal)
-```powershell
-New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -EnableAutomaticUpgrade
-```
-# [Linux](#tab/PowerShellLinuxArc)
+To uninstall Azure Monitor Agent by using the Azure portal, go to your virtual machine, scale set, or Azure Arc-enabled server. Select the **Extensions** tab and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Uninstall**.
-```powershell
-New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -EnableAutomaticUpgrade
-```
-
+#### [PowerShell](#tab/azure-powershell)
-### Uninstall on Azure Arc-enabled servers
+### Uninstall on Azure virtual machines
-Use the following PowerShell commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
+Use the following PowerShell commands to uninstall Azure Monitor Agent on Azure virtual machines.
-# [Windows](#tab/PowerShellWindowsArc)
+- Windows
+ ```powershell
+ Remove-AzVMExtension -Name AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
+ ```
-```powershell
-Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorWindowsAgent
-```
+- Linux
+ ```powershell
+ Remove-AzVMExtension -Name AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name>
+ ```
-# [Linux](#tab/PowerShellLinuxArc)
+### Uninstall on Azure Arc-enabled servers
-```powershell
-Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorLinuxAgent
-```
-
+Use the following PowerShell commands to uninstall Azure Monitor Agent on Azure Arc-enabled servers.
-### Upgrade on Azure Arc-enabled servers
+- Windows
+ ```powershell
+ Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorWindowsAgent
+ ```
-To perform a one-time upgrade of the agent, use the following PowerShell commands.
+- Linux
+ ```powershell
+ Remove-AzConnectedMachineExtension -MachineName <arc-server-name> -ResourceGroupName <resource-group-name> -Name AzureMonitorLinuxAgent
+ ```
-# [Windows](#tab/PowerShellWindowsArc)
+#### [Azure CLI](#tab/azure-cli)
-```powershell
-$target = @{"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent" = @{"targetVersion"=<target-version-number>}}
-Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
-```
+### Uninstall on Azure virtual machines
-# [Linux](#tab/PowerShellLinuxArc)
+Use the following CLI commands to uninstall Azure Monitor Agent on Azure virtual machines.
-```powershell
-$target = @{"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent" = @{"targetVersion"=<target-version-number>}}
-Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineName <arc-server-name> -ExtensionTarget $target
-```
-
+- Windows
+ ```azurecli
+ az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorWindowsAgent
+ ```
-We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#manage-automatic-extension-upgrade) feature by using the following PowerShell commands.
+- Linux
+ ```azurecli
+ az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorLinuxAgent
+ ```
-# [Windows](#tab/PowerShellWindowsArc)
+### Uninstall on Azure Arc-enabled servers
-```powershell
-Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorWindowsAgent -EnableAutomaticUpgrade
-```
+Use the following CLI commands to uninstall Azure Monitor Agent on Azure Arc-enabled servers.
-# [Linux](#tab/PowerShellLinuxArc)
+- Windows
+ ```azurecli
+ az connectedmachine extension delete --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
+ ```
-```powershell
-Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorLinuxAgent -EnableAutomaticUpgrade
-```
-
+- Linux
+ ```azurecli
+ az connectedmachine extension delete --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
+ ```
-## Use the Azure CLI
+#### [Resource Manager template](#tab/azure-resource-manager)
-You can install the Azure Monitor agent on Azure virtual machines and on Azure Arc-enabled servers by using the Azure CLI command for adding a virtual machine extension.
+N/A
-### Install on Azure virtual machines
+
-Use the following CLI commands to install the Azure Monitor agent on Azure virtual machines. Choose the appropriate command based on your chosen authentication method.
+## Update
-#### User-assigned managed identity
+#### [Portal](#tab/azure-portal)
-# [Windows](#tab/CLIWindows)
+To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described.
-```azurecli
-az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
-```
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature. Go to your virtual machine or scale set, select the **Extensions** tab and select **AzureMonitorWindowsAgent** or **AzureMonitorLinuxAgent**. In the dialog that opens, select **Enable automatic upgrade**.
-# [Linux](#tab/CLILinux)
+#### [PowerShell](#tab/azure-powershell)
-```azurecli
-az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true --settings '{"authentication":{"managedIdentity":{"identifier-name":"mi_res_id","identifier-value":"/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<my-user-assigned-identity>"}}}'
-```
-
+### Update on Azure virtual machines
-#### System-assigned managed identity
+To perform a one-time update of the agent, you must first uninstall the existing agent version, then install the new version as described.
-# [Windows](#tab/CLIWindows)
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following PowerShell commands.
-```azurecli
-az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true
-```
+- Windows
+ ```powershell
+ Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorWindowsAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
+ ```
-# [Linux](#tab/CLILinux)
+- Linux
+ ```powershell
+ Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Publisher Microsoft.Azure.Monitor -ExtensionType AzureMonitorLinuxAgent -TypeHandlerVersion <version-number> -Location <location> -EnableAutomaticUpgrade $true
+ ```
-```azurecli
-az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true
-```
-
+### Update on Azure Arc-enabled servers
-### Uninstall on Azure virtual machines
+To perform a one-time upgrade of the agent, use the following PowerShell commands.
-Use the following CLI commands to uninstall the Azure Monitor agent on Azure virtual machines.
+- Windows
+ ```powershell
+ $target = @{"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent" = @{"targetVersion"="<target-version-number>"}}
+ Update-AzConnectedExtension -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -ExtensionTarget $target
+ ```
-# [Windows](#tab/CLIWindows)
+- Linux
+ ```powershell
+ $target = @{"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent" = @{"targetVersion"="<target-version-number>"}}
+ Update-AzConnectedExtension -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -ExtensionTarget $target
+ ```
-```azurecli
-az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> -name AzureMonitorWindowsAgent
-```
+We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#manage-automatic-extension-upgrade) feature by using the following PowerShell commands.
-# [Linux](#tab/CLILinux)
+- Windows
+ ```powershell
+ Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorWindowsAgent -EnableAutomaticUpgrade
+ ```
+- Linux
+ ```powershell
+ Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AzureMonitorLinuxAgent -EnableAutomaticUpgrade
+ ```
-```azurecli
-az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> -name AzureMonitorLinuxAgent
-```
-
+#### [Azure CLI](#tab/azure-cli)
### Update on Azure virtual machines
To perform a one-time update of the agent, you must first uninstall the existing
We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature by using the following CLI commands.
-# [Windows](#tab/CLIWindows)
+- Windows
+ ```azurecli
+ az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+ ```
-```azurecli
-az vm extension set -name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
-```
-# [Linux](#tab/CLILinux)
+- Linux
+ ```azurecli
+ az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+ ```
-```azurecli
-az vm extension set -name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true
-```
--
-### Install on Azure Arc-enabled servers
-
-Use the following CLI commands to install the Azure Monitor agent on Azure Arc-enabled servers.
-
-# [Windows](#tab/CLIWindowsArc)
-
-```azurecli
-az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
-```
-
-# [Linux](#tab/CLILinuxArc)
-
-```azurecli
-az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --type AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
-```
--
-### Uninstall on Azure Arc-enabled servers
-
-Use the following CLI commands to uninstall the Azure Monitor agent on Azure Arc-enabled servers.
-
-# [Windows](#tab/CLIWindowsArc)
-
-```azurecli
-az connectedmachine extension delete --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
-```
-# [Linux](#tab/CLILinuxArc)
-
-```azurecli
-az connectedmachine extension delete --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name>
-```
--
-### Upgrade on Azure Arc-enabled servers
+### Update on Azure Arc-enabled servers
To perform a one-time upgrade of the agent, use the following CLI commands.
-# [Windows](#tab/CLIWindowsArc)
+- Windows
+ ```azurecli
+ az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
+ ```
-```azurecli
-az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
-```
-
-# [Linux](#tab/CLILinuxArc)
-
-```azurecli
-az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
-```
-
+- Linux
+ ```azurecli
+ az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Monitor.AzureMonitorLinuxAgent\":{\"targetVersion\":\"<target-version-number>\"}}" --machine-name <arc-server-name> --resource-group <resource-group-name>
+ ```
We recommend that you enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#manage-automatic-extension-upgrade) feature by using the following CLI commands.
-# [Windows](#tab/CLIWindowsArc)
+- Windows
+ ```azurecli
+ az connectedmachine extension update --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+ ```
+
+- Linux
+ ```azurecli
+ az connectedmachine extension update --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
+ ```
-```azurecli
-az connectedmachine extension update --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
-```
+#### [Resource Manager template](#tab/azure-resource-manager)
-# [Linux](#tab/CLILinuxArc)
+N/A
-```azurecli
-az connectedmachine extension update --name AzureMonitorLinuxAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
-```
## Use Azure Policy
az connectedmachine extension update --name AzureMonitorLinuxAgent --machine-nam
Use the following policies and policy initiatives to automatically install the agent and associate it with a data collection rule every time you create a virtual machine, scale set, or Azure Arc-enabled server. > [!NOTE]
-> As per Microsoft Identity best practices, policies for installing the Azure Monitor agent on virtual machines and scale sets rely on user-assigned managed identity. This option is the more scalable and resilient managed identity for these resources.
+> As per Microsoft Identity best practices, policies for installing Azure Monitor Agent on virtual machines and scale sets rely on user-assigned managed identity. This option is the more scalable and resilient managed identity for these resources.
> For Azure Arc-enabled servers, policies rely on system-assigned managed identity as the only supported option today. ### Built-in policy initiatives
Policy initiatives for Windows and Linux virtual machines, scale sets consist of
- (Optional) Create and assign a built-in user-assigned managed identity, per subscription, per region. [Learn more](../../active-directory/managed-identities-azure-resources/how-to-assign-managed-identity-via-azure-policy.md#policy-definition-and-details). - `Bring Your Own User-Assigned Identity`: If set to `true`, it creates the built-in user-assigned managed identity in the predefined resource group and assigns it to all machines that the policy is applied to. If set to `false`, you can instead use an existing user-assigned identity that *you must assign* to the machines beforehand.-- Install the Azure Monitor agent extension on the machine, and configure it to use user-assigned identity as specified by the following parameters.
+- Install Azure Monitor Agent extension on the machine, and configure it to use user-assigned identity as specified by the following parameters.
- `Bring Your Own User-Assigned Managed Identity`: If set to `false`, it configures the agent to use the built-in user-assigned managed identity created by the preceding policy. If set to `true`, it configures the agent to use an existing user-assigned identity that *you must assign* to the machines in scope beforehand. - `User-Assigned Managed Identity Name`: If you use your own identity (selected `true`), specify the name of the identity that's assigned to the machines. - `User-Assigned Managed Identity Resource Group`: If you use your own identity (selected `true`), specify the resource group where the identity exists.
Policy initiatives for Windows and Linux virtual machines, scale sets consist of
- Create and deploy the association to link the machine to specified data collection rule. - `Data Collection Rule Resource Id`: The Azure Resource Manager resourceId of the rule you want to associate via this policy to all machines the policy is applied to.
- ![Partial screenshot from the Azure Policy Definitions page that shows two built-in policy initiatives for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
+ ![Partial screenshot from the Azure Policy Definitions page that shows two built-in policy initiatives for configuring Azure Monitor Agent.](media/azure-monitor-agent-install/built-in-ama-dcr-initiatives.png)
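To assign one of these initiatives from the command line instead of the portal, a hedged PowerShell sketch follows. The display-name filter, the `dcrResourceId` parameter name, and all placeholder values are assumptions for illustration; check the initiative definition in the portal for the exact names, and note that managed-identity parameters for `New-AzPolicyAssignment` differ across Az.Resources versions.

```powershell
# Illustrative sketch: assign a built-in Azure Monitor Agent initiative at subscription scope.
# The identity created here is used by the assignment's deployIfNotExists remediation
# deployments; it's separate from the managed identity used by the agent itself.
$initiative = Get-AzPolicySetDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -like '*Azure Monitor Agent*Windows*' }

New-AzPolicyAssignment `
    -Name 'deploy-ama-windows' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicySetDefinition $initiative `
    -Location '<region>' `
    -IdentityType 'SystemAssigned' `
    -PolicyParameterObject @{ dcrResourceId = '<dcr-resource-id>' }
```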
#### Known issues
Policy initiatives for Windows and Linux virtual machines, scale sets consist of
You can choose to use the individual policies from the preceding policy initiative to perform a single action at scale. For example, if you *only* want to automatically install the agent, use the second agent installation policy from the initiative, as shown.
-![Partial screenshot from the Azure Policy Definitions page that shows policies contained within the initiative for configuring the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
+![Partial screenshot from the Azure Policy Definitions page that shows policies contained within the initiative for configuring Azure Monitor Agent.](media/azure-monitor-agent-install/built-in-ama-dcr-policy.png)
### Remediation
-The initiatives or policies will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can configure the Azure Monitor agent for any resources that were already created.
+The initiatives or policies will apply to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can configure Azure Monitor Agent for any resources that were already created.
When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. For information on the remediation, see [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md).
-![Screenshot that shows initiative remediation for the Azure Monitor agent.](media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png)
+![Screenshot that shows initiative remediation for Azure Monitor Agent.](media/azure-monitor-agent-install/built-in-ama-dcr-remediation.png)
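A remediation task can also be created from PowerShell. The following is a hedged sketch that assumes the Az.PolicyInsights module; the assignment ID and the definition reference ID are placeholders, and for an initiative assignment you typically need `-PolicyDefinitionReferenceId` to target one policy within the set.

```powershell
# Illustrative sketch: remediate machines that existed before the policy assignment was created.
Start-AzPolicyRemediation `
    -Name 'remediate-ama' `
    -PolicyAssignmentId '/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/deploy-ama-windows' `
    -PolicyDefinitionReferenceId '<definition-reference-id-within-the-initiative>'
```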
## Next steps
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Create and run custom availability tests using Azure Functions
-description: This doc will cover how to create an Azure Function with TrackAvailability() that will run periodically according to the configuration given in TimerTrigger function. The results of this test will be sent to your Application Insights resource, where you will be able to query for and alert on the availability results data. Customized tests will allow you to write more complex availability tests than is possible using the portal UI, monitor an app inside of your Azure VNET, change the endpoint address, or create an availability test if it's not available in your region.
+ Title: Create and run custom availability tests by using Azure Functions
+description: This article explains how to create an Azure function with TrackAvailability() that will run periodically according to the configuration given in a TimerTrigger function.
Last updated 05/06/2021 ms.devlang: csharp
-# Create and run custom availability tests using Azure Functions
+# Create and run custom availability tests by using Azure Functions
-This article will cover how to create an Azure Function with TrackAvailability() that will run periodically according to the configuration given in TimerTrigger function with your own business logic. The results of this test will be sent to your Application Insights resource, where you will be able to query for and alert on the availability results data. This allows you to create customized tests similar to what you can do via [Availability Monitoring](./monitor-web-app-availability.md) in the portal. Customized tests will allow you to write more complex availability tests than is possible using the portal UI, monitor an app inside of your Azure VNET, change the endpoint address, or create an availability test even if this feature is not available in your region.
+This article explains how to create an Azure function with `TrackAvailability()` that will run periodically according to the configuration given in the `TimerTrigger` function with your own business logic. The results of this test will be sent to your Application Insights resource, where you can query for and alert on the availability results data. Then you can create customized tests similar to what you can do via [availability monitoring](./monitor-web-app-availability.md) in the Azure portal. By using customized tests, you can:
+
+- Write more complex availability tests than is possible by using the portal UI.
+- Monitor an app inside of your Azure virtual network.
+- Change the endpoint address.
+- Create an availability test even if this feature isn't available in your region.
> [!NOTE]
-> This example is designed solely to show you the mechanics of how the TrackAvailability() API call works within an Azure Function. Not how to write the underlying HTTP Test code/business logic that would be required to turn this into a fully functional availability test. By default if you walk through this example you will be creating a basic availability HTTP GET test.
+> This example is designed solely to show you the mechanics of how the `TrackAvailability()` API call works within an Azure function. It doesn't show you how to write the underlying HTTP test code or business logic that's required to turn this example into a fully functional availability test. By default, if you walk through this example, you'll be creating a basic availability HTTP GET test.
+>
> To follow these instructions, you must use the [dedicated plan](../../azure-functions/dedicated-plan.md) to allow editing code in App Service Editor. ## Create a timer trigger function 1. Create an Azure Functions resource.
- - If you already have an Application Insights Resource:
- - By default Azure Functions creates an Application Insights resource but if you would like to use one of your already created resources you will need to specify that during creation.
+ - If you already have an Application Insights resource:
+
+ - By default, Azure Functions creates an Application Insights resource. But if you want to use a resource you created previously, you must specify that during creation.
- Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
- - On the **Monitoring** tab, select the Application Insights dropdown box then type or select the name of your resource.
- :::image type="content" source="media/availability-azure-functions/app-insights-resource.png" alt-text="On the monitoring tab select your existing Application Insights resource.":::
- - If you do not have an Application Insights Resource created yet for your timer triggered function:
- - By default when you are creating your Azure Functions application it will create an Application Insights resource for you. Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app).
+
+ On the **Monitoring** tab, select the **Application Insights** dropdown box and then enter or select the name of your resource.
+
+ :::image type="content" source="media/availability-azure-functions/app-insights-resource.png" alt-text="Screenshot that shows selecting your existing Application Insights resource on the Monitoring tab.":::
+
+ - If you don't have an Application Insights resource created yet for your timer-triggered function:
+ - By default, when you're creating your Azure Functions application, it will create an Application Insights resource for you. Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app).
+
> [!NOTE]
- > You can host your functions on a Consumption, Premium, or App Service plan. If you are testing behind a V-Net or testing non public endpoints then you will need to use the premium plan in place of the consumption. Select your plan on the **Hosting** tab. Please ensure the latest .NET version is selected when creating the Function App.
-2. Create a timer trigger function.
+ > You can host your functions on a Consumption, Premium, or App Service plan. If you're testing behind a virtual network or testing nonpublic endpoints, you'll need to use the Premium plan in place of the Consumption plan. Select your plan on the **Hosting** tab. Ensure the latest .NET version is selected when you create the function app.
+1. Create a timer trigger function.
1. In your function app, select the **Functions** tab.
- 1. Select **Add** and in the Add function tab select the follow configurations:
- 1. Development environment: *Develop in portal*
- 1. Select a template: *Timer trigger*
- 1. Select **Add** to create the Timer trigger function.
+ 1. Select **Add**. On the **Add function** pane, select the following configurations:
+ 1. **Development environment**: **Develop in portal**
+ 1. **Select a template**: **Timer trigger**
+ 1. Select **Add** to create the timer trigger function.
- :::image type="content" source="media/availability-azure-functions/add-function.png" alt-text="Screenshot of how to add a timer trigger function to your function app." lightbox="media/availability-azure-functions/add-function.png":::
+ :::image type="content" source="media/availability-azure-functions/add-function.png" alt-text="Screenshot that shows how to add a timer trigger function to your function app." lightbox="media/availability-azure-functions/add-function.png":::
## Add and edit code in the App Service Editor
-Navigate to your deployed function app and under *Development Tools* select the **App Service Editor** tab.
+Go to your deployed function app, and under **Development Tools**, select the **App Service Editor** tab.
-To create a new file, right click under your timer trigger function (for example "TimerTrigger1") and select **New File**. Then type the name of the file and press enter.
+To create a new file, right-click under your timer trigger function (for example, **TimerTrigger1**) and select **New File**. Then enter the name of the file and select **Enter**.
-1. Create a new file called "function.proj" and paste the following code:
+1. Create a new file called **function.proj** and paste the following code:
```xml <Project Sdk="Microsoft.NET.Sdk">
To create a new file, right click under your timer trigger function (for example
</Project> ```
- :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot of function.proj in App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
+ :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot that shows function.proj in the App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
-2. Create a new file called "runAvailabilityTest.csx" and paste the following code:
+1. Create a new file called **runAvailabilityTest.csx** and paste the following code:
```csharp using System.Net.Http;
To create a new file, right click under your timer trigger function (for example
} ```
-3. Copy the code below into the run.csx file (this will replace the pre-existing code):
+1. Copy the following code into the **run.csx** file. (You'll replace the preexisting code.)
```csharp #load "runAvailabilityTest.csx"
To create a new file, right click under your timer trigger function (for example
## Check availability
-To make sure everything is working, you can look at the graph in the Availability tab of your Application Insights resource.
+To make sure everything is working, look at the graph on the **Availability** tab of your Application Insights resource.
> [!NOTE]
-> Tests created with TrackAvailability() will appear with **CUSTOM** next to the test name.
+> Tests created with `TrackAvailability()` will appear with **CUSTOM** next to the test name.
- :::image type="content" source="media/availability-azure-functions/availability-custom.png" alt-text="Availability tab with successful results." lightbox="media/availability-azure-functions/availability-custom.png":::
+ :::image type="content" source="media/availability-azure-functions/availability-custom.png" alt-text="Screenshot that shows the Availability tab with successful results." lightbox="media/availability-azure-functions/availability-custom.png":::
-To see the end-to-end transaction details, select **Successful** or **Failed** under drill into, then select a sample. You can also get to the end-to-end transaction details by selecting a data point on the graph.
+To see the end-to-end transaction details, under **Drill into**, select **Successful** or **Failed**. Then select a sample. You can also get to the end-to-end transaction details by selecting a data point on the graph.
-## Query in Logs (Analytics)
+## Query in Log Analytics
-You can use Logs(analytics) to view you availability results, dependencies, and more. To learn more about Logs, visit [Log query overview](../logs/log-query-overview.md).
+You can use Log Analytics to view your availability results, dependencies, and more. To learn more about Log Analytics, see [Log query overview](../logs/log-query-overview.md).
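As a hedged sketch only: for a workspace-based Application Insights resource, where availability data lands in the `AppAvailabilityResults` table, you could also pull recent results from a script by using the Az.OperationalInsights module. The workspace ID is a placeholder.

```powershell
# Illustrative sketch: query recent custom availability results from the Log Analytics
# workspace behind a workspace-based Application Insights resource.
$results = Invoke-AzOperationalInsightsQuery `
    -WorkspaceId '<log-analytics-workspace-guid>' `
    -Query 'AppAvailabilityResults | where TimeGenerated > ago(1d) | project TimeGenerated, Name, Success, DurationMs, Location'

$results.Results | Format-Table
```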
## Next steps
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.4.2.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.2/applicationinsights-agent-3.4.2.jar) file.
+Download the [applicationinsights-agent-3.4.3.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.3/applicationinsights-agent-3.4.3.jar) file.
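If you script the download, for example in a build pipeline, a minimal PowerShell sketch is:

```powershell
# Illustrative sketch: download the Application Insights Java agent jar from the release URL above.
Invoke-WebRequest `
    -Uri 'https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.3/applicationinsights-agent-3.4.3.jar' `
    -OutFile 'applicationinsights-agent-3.4.3.jar'
```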
> [!WARNING] >
Download the [applicationinsights-agent-3.4.2.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` to your application's JVM args.
+Add `-javaagent:"path/to/applicationinsights-agent-3.4.3.jar"` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Add `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` to your applicati
APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview> ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.2.jar` with the following content:
+ - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.3.jar` with the following content:
```json
{
  "connectionString": "Copy connection string from Application Insights Resource Overview"
}
```
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.3.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.2.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.3.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.3.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.2.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.3.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.2.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.3.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.2.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.3.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.2</version>
+ <version>3.4.3</version>
</dependency> ```
as the JVM argument enablement, with the following differences below.
By default, when enabling Application Insights Java programmatically, the configuration file `applicationinsights.json` will be read from the classpath (`src/main/resources`, `src/test/resources`).
-From 3.4.2, you can configure the name of a JSON file in the classpath with the `applicationinsights.runtime-attach.configuration.classpath.file` system property.
+From 3.4.3, you can configure the name of a JSON file in the classpath with the `applicationinsights.runtime-attach.configuration.classpath.file` system property.
For example, with `-Dapplicationinsights.runtime-attach.configuration.classpath.file=applicationinsights-dev.json`, Application Insights will use `applicationinsights-dev.json` file for configuration. > [!NOTE]
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Read the Spring Boot documentation [here](../app/java-in-process-agent.md).
If you installed Tomcat via `apt-get` or `yum`, then you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.2.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.3.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.2.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), then you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.2.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.3.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.3.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, then modify that file and a
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.2.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.3.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.2.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.3.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, just modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.3.jar` to `CATALINA_OPTS`.
### Running Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.3.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.3.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.2.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.3.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.3.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.2.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.3.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.2.jar
+-javaagent:path/to/applicationinsights-agent-3.4.3.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.2.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.3.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.2.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.3.jar
</jvm-options> ... </java-config>
Java and Process Management > Process definition > Java Virtual Machine
``` In "Generic JVM arguments" add the following JVM argument: ```--javaagent:path/to/applicationinsights-agent-3.4.2.jar
+-javaagent:path/to/applicationinsights-agent-3.4.3.jar
``` After that, save and restart the application server.
After that, save and restart the application server.
Create a new file `jvm.options` in the server directory (for example `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.2.jar
+-javaagent:path/to/applicationinsights-agent-3.4.3.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
You'll find more information and configuration options in the following sections
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.2.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.3.jar`.
You can specify your own configuration file path by using one of these two options:

* `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable
* `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.3.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.3.jar` is located.
```json {
Starting from 3.4.2, you can capture the log markers for Logback and Log4j 2:
### Additional log attributes for Logback (preview)
-Starting from 3.4.2, you can capture `FileName`, `ClassName`, `MethodName`, and `LineNumber`, for Logback:
+Starting from 3.4.3, you can capture `FileName`, `ClassName`, `MethodName`, and `LineNumber`, for Logback:
```json {
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`.
* `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.2.jar` is located.
+`applicationinsights-agent-3.4.3.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
auto-instrumentation which is provided by the 3.x Java agent.
| 2.x dependency | Action | Remarks |
|-|--|--|
-| `applicationinsights-core` | Update the version to `3.4.2` or later | |
-| `applicationinsights-web` | Update the version to `3.4.2` or later, and remove the Application Insights web filter your `web.xml` file. | |
-| `applicationinsights-web-auto` | Replace with `3.4.2` or later of `applicationinsights-web` | |
+| `applicationinsights-core` | Update the version to `3.4.3` or later | |
+| `applicationinsights-web` | Update the version to `3.4.3` or later, and remove the Application Insights web filter from your `web.xml` file. | |
+| `applicationinsights-web-auto` | Replace with `3.4.3` or later of `applicationinsights-web` | |
| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 1.2 is auto-instrumented in the 3.x Java agent. |
| `applicationinsights-logging-log4j2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 2 is auto-instrumented in the 3.x Java agent. |
| `applicationinsights-logging-logback` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is auto-instrumented in the 3.x Java agent. |
-| `applicationinsights-spring-boot-starter` | Replace with `3.4.2` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`, see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. |
+| `applicationinsights-spring-boot-starter` | Replace with `3.4.3` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`. See the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. |
## Step 2: Add the 3.x Java agent

Add the 3.x Java agent to your JVM command-line args, for example

```
--javaagent:path/to/applicationinsights-agent-3.4.2.jar
+-javaagent:path/to/applicationinsights-agent-3.4.3.jar
```

If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
# System performance counters in Application Insights
-Windows provides a wide variety of [performance counters](/windows/desktop/perfctrs/about-performance-counters) such as processor, memory, and disk usage statistics. You can also define your own performance counters. Performance counters collection is supported as long as your application is running under IIS on an on-premises host, or virtual machine to which you have administrative access. Though applications running as Azure Web Apps don't have direct access to performance counters, a subset of available counters is collected by Application Insights.
+Windows provides a variety of [performance counters](/windows/desktop/perfctrs/about-performance-counters), such as those used to gather processor, memory, and disk usage statistics. You can also define your own performance counters.
+
+Performance counters collection is supported if your application is running under IIS on an on-premises host or is a virtual machine to which you have administrative access. Although applications running as Azure Web Apps don't have direct access to performance counters, a subset of available counters is collected by Application Insights.
## Prerequisites
net localgroup "Performance Monitor Users" /add "IIS APPPOOL\NameOfYourPool"
## View counters
-The Metrics pane shows the default set of performance counters.
+The **Metrics** pane shows the default set of performance counters.
+
+![Screenshot that shows performance counters reported in Application Insights.](./media/performance-counters/performance-counters.png)
-![Performance counters reported in Application Insights](./media/performance-counters/performance-counters.png)
+Current default counters for ASP.NET web applications:
-The current default counters for ASP.NET web applications are:
- % Process\\Processor Time
- % Process\\Processor Time Normalized
- Memory\\Available Bytes
The current default counters for ASP.NET web applications are:
- ASP.NET Applications\\Requests In Application Queue
- Processor(_Total)\\% Processor Time
-The current default counters collected for ASP.NET Core web applications are:
+Current default counters collected for ASP.NET Core web applications:
- % Process\\Processor Time
- % Process\\Processor Time Normalized
- Memory\\Available Bytes
If the performance counter you want isn't included in the list of metrics, you c
Get-Counter -ListSet *
```
- (For more information, see [`Get-Counter`](/powershell/module/microsoft.powershell.diagnostics/get-counter).)
+ For more information, see [`Get-Counter`](/powershell/module/microsoft.powershell.diagnostics/get-counter).
-2. Open ApplicationInsights.config.
+1. Open `ApplicationInsights.config`.
If you added Application Insights to your app during development:
1. Edit `ApplicationInsights.config` in your project.
1. Redeploy it to your servers.
-3. Edit the performance collector directive:
+1. Edit the performance collector directive:
```xml
If the performance counter you want isn't included in the list of metrics, you c
```

> [!NOTE]
-> ASP.NET Core applications do not have `ApplicationInsights.config`, and hence the above method is not valid for ASP.NET Core Applications.
+> ASP.NET Core applications don't have `ApplicationInsights.config`, so the preceding method isn't valid for ASP.NET Core applications.
-You can capture both standard counters and counters you've implemented yourself. `\Objects\Processes` is an example of a standard counter that is available on all Windows systems. `\Sales(photo)\# Items Sold` is an example of a custom counter that might be implemented in a web service.
+You can capture both standard counters and counters you've implemented yourself. `\Objects\Processes` is an example of a standard counter that's available on all Windows systems. `\Sales(photo)\# Items Sold` is an example of a custom counter that might be implemented in a web service.
-The format is `\Category(instance)\Counter"`, or for categories that don't have instances, just `\Category\Counter`.
+The format is `\Category(instance)\Counter`, or for categories that don't have instances, just `\Category\Counter`.
-`ReportAs` is required for counter names that don't match `[a-zA-Z()/-_ \.]+` - that is, they contain characters that aren't in the following sets: letters, round brackets, forward slash, hyphen, underscore, space, dot.
+The `ReportAs` parameter is required for counter names that don't match `[a-zA-Z()/-_ \.]+`. That is, they contain characters that aren't in the following sets: letters, round brackets, forward slash, hyphen, underscore, space, and dot.
-If you specify an instance, it will be collected as a dimension "CounterInstanceName" of the reported metric.
+If you specify an instance, it will be collected as a dimension `CounterInstanceName` of the reported metric.
-### Collecting performance counters in code for ASP.NET Web Applications or .NET/.NET Core Console Applications
-To collect system performance counters and send them to Application Insights, you can adapt the snippet below:
+### Collect performance counters in code for ASP.NET web applications or .NET/.NET Core console applications
+To collect system performance counters and send them to Application Insights, you can adapt the following snippet:
```csharp
var perfCollectorModule = new PerformanceCollectorModule();
To collect system performance counters and send them to Application Insights, yo
perfCollectorModule.Initialize(TelemetryConfiguration.Active);
```
-Or you can do the same thing with custom metrics you created:
+Or you can do the same thing with custom metrics that you created:
```csharp
var perfCollectorModule = new PerformanceCollectorModule();
Or you can do the same thing with custom metrics you created:
perfCollectorModule.Initialize(TelemetryConfiguration.Active);
```
-### Collecting performance counters in code for ASP.NET Core Web Applications
+### Collect performance counters in code for ASP.NET Core web applications
-Modify `ConfigureServices` method in your `Startup.cs` class as below.
+Modify the `ConfigureServices` method in your `Startup.cs` class:
```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector;
}
```
-## Performance counters in Analytics
-You can search and display performance counter reports in [Analytics](../logs/log-query-overview.md).
+## Performance counters in Log Analytics
+You can search and display performance counter reports in [Log Analytics](../logs/log-query-overview.md).
-The **performanceCounters** schema exposes the `category`, `counter` name, and `instance` name of each performance counter. In the telemetry for each application, you'll see only the counters for that application. For example, to see what counters are available:
+The **performanceCounters** schema exposes the `category`, `counter` name, and `instance` name of each performance counter. In the telemetry for each application, you'll see only the counters for that application. For example, to see what counters are available:
-![Performance counters in Application Insights analytics](./media/performance-counters/analytics-performance-counters.png)
+![Screenshot that shows performance counters in Application Insights analytics.](./media/performance-counters/analytics-performance-counters.png)
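A query along these lines can list the collected counters. This is a minimal sketch that assumes the classic **performanceCounters** schema (`category`, `counter`, `instance`) described above; adjust the column names if your workspace uses a different schema.

```Kusto
// Sketch: list the counters reported for this app, grouped by category and instance.
performanceCounters
| summarize count() by category, instance, counter
```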
-('Instance' here refers to the performance counter instance, not the role, or server machine instance. The performance counter instance name typically segments counters such as processor time by the name of the process or application.)
+Here, `Instance` refers to the performance counter instance, not the role or server machine instance. The performance counter instance name typically segments counters, such as processor time, by the name of the process or application.
To get a chart of available memory over the recent period:
-![Memory timechart in Application Insights analytics](./media/performance-counters/analytics-available-memory.png)
+![Screenshot that shows a memory time chart in Application Insights analytics.](./media/performance-counters/analytics-available-memory.png)
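A minimal sketch of such a query follows; it assumes the default Memory\\Available Bytes counter is being collected.

```Kusto
// Sketch: chart average available memory over the last day.
// Assumes the default Memory\Available Bytes counter is collected.
performanceCounters
| where timestamp > ago(1d)
| where counter == "Available Bytes"
| summarize avg(value) by bin(timestamp, 10m)
| render timechart
```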
Like other telemetry, **performanceCounters** also has a column `cloud_RoleInstance` that indicates the identity of the host server instance on which your app is running. For example, to compare the performance of your app on the different machines:
-![Performance segmented by role instance in Application Insights analytics](./media/performance-counters/analytics-metrics-role-instance.png)
+![Screenshot that shows performance segmented by role instance in Application Insights analytics.](./media/performance-counters/analytics-metrics-role-instance.png)
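A sketch of such a comparison follows; the counter name is illustrative and can be replaced with any counter you collect.

```Kusto
// Sketch: compare average processor time across role instances.
// The counter name here is illustrative.
performanceCounters
| where timestamp > ago(1d)
| where counter == "% Processor Time Normalized"
| summarize avg(value) by bin(timestamp, 15m), cloud_RoleInstance
| render timechart
```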
## ASP.NET and Application Insights counts
-*What's the difference between the Exception rate and Exceptions metrics?*
+The next sections discuss ASP.NET and Application Insights counts.
-* `Exception rate` is a system performance counter. The CLR counts all the handled and unhandled exceptions that are thrown, and divides the total in a sampling interval by the length of the interval. The Application Insights SDK collects this result and sends it to the portal.
+### What's the difference between the Exception rate and Exceptions metrics?
-* `Exceptions` is a count of the TrackException reports received by the portal in the sampling interval of the chart. It includes only the handled exceptions where you have written TrackException calls in your code, and doesn't include all [unhandled exceptions](./asp-net-exceptions.md).
+* `Exception rate`: The Exception rate is a system performance counter. The CLR counts all the handled and unhandled exceptions that are thrown and divides the total in a sampling interval by the length of the interval. The Application Insights SDK collects this result and sends it to the portal.
+* `Exceptions`: The Exceptions metric is a count of the `TrackException` reports received by the portal in the sampling interval of the chart. It includes only the handled exceptions where you've written `TrackException` calls in your code. It doesn't include all [unhandled exceptions](./asp-net-exceptions.md).
-## Performance counters for applications running in Azure Web Apps and Windows Containers on Azure App Service
+## Performance counters for applications running in Azure Web Apps and Windows containers on Azure App Service
-Both ASP.NET and ASP.NET Core applications deployed to Azure Web Apps run in a special sandbox environment. Applications deployed to Azure App Service can utilize a [Windows container](../../app-service/quickstart-custom-container.md?pivots=container-windows&tabs=dotnet) or be hosted in a sandbox environment. If the application is deployed in a Windows Container, all standard performance counters are available in the container image.
+Both ASP.NET and ASP.NET Core applications deployed to Azure Web Apps run in a special sandbox environment. Applications deployed to Azure App Service can utilize a [Windows container](../../app-service/quickstart-custom-container.md?pivots=container-windows&tabs=dotnet) or be hosted in a sandbox environment. If the application is deployed in a Windows container, all standard performance counters are available in the container image.
-The sandbox environment doesn't allow direct access to system performance counters. However, a limited subset of counters is exposed as environment variables as described [here](https://github.com/projectkudu/kudu/wiki/Perf-Counters-exposed-as-environment-variables). Only a subset of counters is available in this environment, and the full list can be found [here](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/WEB/Src/PerformanceCollector/PerformanceCollector/Implementation/WebAppPerformanceCollector/CounterFactory.cs).
+The sandbox environment doesn't allow direct access to system performance counters. However, a limited subset of counters is exposed as environment variables as described in [Perf Counters exposed as environment variables](https://github.com/projectkudu/kudu/wiki/Perf-Counters-exposed-as-environment-variables). Only a subset of counters is available in this environment. For the full list, see [Perf Counters exposed as environment variables](https://github.com/microsoft/ApplicationInsights-dotnet/blob/main/WEB/Src/PerformanceCollector/PerformanceCollector/Implementation/WebAppPerformanceCollector/CounterFactory.cs).
-The Application Insights SDK for [ASP.NET](https://nuget.org/packages/Microsoft.ApplicationInsights.Web) and [ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) detects if code is deployed to a Web App or a non-Windows container. The detection determines whether it collects performance counters in a sandbox environment or utilizing the standard collection mechanism when hosted on a Windows Container or Virtual Machine.
+The Application Insights SDK for [ASP.NET](https://nuget.org/packages/Microsoft.ApplicationInsights.Web) and [ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) detects if code is deployed to a web app or a non-Windows container. The detection determines whether it collects performance counters in a sandbox environment or utilizes the standard collection mechanism when hosted on a Windows container or virtual machine.
## Performance counters in ASP.NET Core applications
Support for performance counters in ASP.NET Core is limited:
* [SDK](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) versions 2.4.1 and later collect performance counters if the application is running in Azure Web Apps (Windows).
* SDK versions 2.7.1 and later collect performance counters if the application is running in Windows and targets `NETSTANDARD2.0` or later.
-* For applications targeting the .NET Framework, all versions of the SDK support performance counters.
-* SDK Versions 2.8.0 and later support cpu/memory counter in Linux. No other counter is supported in Linux. The recommended way to get system counters in Linux (and other non-Windows environments) is by using [EventCounters](eventcounters.md)
+* For applications that target the .NET Framework, all versions of the SDK support performance counters.
+* SDK versions 2.8.0 and later support the CPU/Memory counter in Linux. No other counter is supported in Linux. To get system counters in Linux (and other non-Windows environments), use [EventCounters](eventcounters.md).
## Alerts
-Like other metrics, you can [set an alert](../alerts/alerts-log.md) to warn you if a performance counter goes outside a limit you specify. Open the Alerts pane and select Add Alert.
+Like other metrics, you can [set an alert](../alerts/alerts-log.md) to warn you if a performance counter goes outside a limit you specify. To set an alert, open the **Alerts** pane and select **Add Alert**.
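As an illustration, the query behind such a log alert rule might look like the following sketch; the counter name and threshold are placeholders to adapt to your own limits.

```Kusto
// Sketch: evaluate on a schedule and alert when average processor time
// exceeds an example threshold of 80 percent.
performanceCounters
| where counter == "% Processor Time Normalized"
| summarize AvgValue = avg(value) by bin(timestamp, 5m)
| where AvgValue > 80
```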
## <a name="next"></a>Next steps

* [Dependency tracking](./asp-net-dependencies.md)
-* [Exception tracking](./asp-net-exceptions.md)
+* [Exception tracking](./asp-net-exceptions.md)
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
Title: 'Application Insights: languages, platforms, and integrations | Microsoft Docs'
-description: Languages, platforms, and integrations available for Application Insights
+ Title: 'Application Insights: Languages, platforms, and integrations | Microsoft Docs'
+description: Languages, platforms, and integrations that are available for Application Insights.
Last updated 10/24/2022
## Supported platforms and frameworks
-### Azure Service Integration (Portal Enablement, ARM Deployments)
-* [Azure VM and Azure virtual machine scale sets](./azure-vm-vmss-apps.md)
+Supported platforms and frameworks are listed here.
+
+### Azure service integration (portal enablement, Azure Resource Manager deployments)
+* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md)
* [Azure App Service](./azure-web-apps.md)
* [Azure Functions](../../azure-functions/functions-monitoring.md)
* [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles
* [ASP.NET Core](./asp-net-core.md)
* [Node.js](./nodejs.md)
* [Python](./opencensus-python.md)
-* [JavaScript - Web](./javascript.md)
+* [JavaScript - web](./javascript.md)
* [React](./javascript-react-plugin.md)
* [React Native](./javascript-react-native-plugin.md)
* [Angular](./javascript-angular-plugin.md)
* [iOS](../app/mobile-center-quickstart.md) (App Center)

> [!NOTE]
-> OpenTelemetry-based instrumentation is available in PREVIEW state for [C#, Node.js, and Python](opentelemetry-enable.md). Please review the limitations noted at the beginning of each langauge's official documentation. Those who require a full-feature experience should use the existing Application Insights SDKs.
+> OpenTelemetry-based instrumentation is available in preview for [C#, Node.js, and Python](opentelemetry-enable.md). Review the limitations noted at the beginning of each language's official documentation. If you require a full-feature experience, use the existing Application Insights SDKs.
## Logging frameworks

* [ILogger](./ilogger.md)
* [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
* [Log4J, Logback, or java.util.logging](./java-in-process-agent.md#autocollected-logs)
-* [LogStash plugin](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights)
+* [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights)
* [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)

## Export and data analysis
* [Power BI for workspace-based resources](../logs/log-powerbi.md)

## Unsupported SDKs
-Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when using the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
+Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when you use the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
azure-monitor Resources Roles Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resources-roles-access-control.md
Title: Resources, roles and access control in Azure Application Insights | Microsoft Docs
+ Title: Resources, roles, and access control in Application Insights | Microsoft Docs
description: Owners, contributors and readers of your organization's insights. Last updated 02/14/2019
# Resources, roles, and access control in Application Insights
-You can control who has read and update access to your data in Azure [Application Insights][start], by using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
+You can control who has read and update access to your data in [Application Insights][start] by using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
> [!IMPORTANT]
-> Assign access to users in the **resource group or subscription** to which your application resource belongs - not in the resource itself. Assign the **Application Insights component contributor** role. This ensures uniform control of access to web tests and alerts along with your application resource. [Learn more](#access).
-
+> Assign access to users in the resource group or subscription to which your application resource belongs, not in the resource itself. Assign the Application Insights Component Contributor role. This role ensures uniform control of access to web tests and alerts along with your application resource. [Learn more](#access).
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-## Resources, groups and subscriptions
+## Resources, groups, and subscriptions
-First, some definitions:
+First, let's define some terms:
-* **Resource** - An instance of a Microsoft Azure service. Your Application Insights resource collects, analyzes and displays the telemetry data sent from your application. Other types of Azure resources include web apps, databases, and VMs.
+* **Resource**: An instance of an Azure service. Your Application Insights resource collects, analyzes, and displays the telemetry data sent from your application. Other types of Azure resources include web apps, databases, and VMs.
- To see your resources, open the [Azure portal][portal], sign in, and click All Resources. To find a resource, type part of its name in the filter field.
+ To see your resources, open the [Azure portal][portal], sign in, and select **All resources**. To find a resource, enter part of its name in the filter field.
- ![List of Azure resources](./media/resources-roles-access-control/10-browse.png)
+ ![Screenshot that shows a list of Azure resources.](./media/resources-roles-access-control/10-browse.png)
<a name="resource-group"></a>
-* [**Resource group**][group] - Every resource belongs to one group. A group is a convenient way to manage related resources, particularly for access control. For example, into one resource group you could put a Web App, an Application Insights resource to monitor the app, and a Storage resource to keep exported data.
-
-* [**Subscription**](https://portal.azure.com) - To use Application Insights or other Azure resources, you sign in to an Azure subscription. Every resource group belongs to one Azure subscription, where you choose your price package. If it's an organization subscription, the owner may choose the members and their access permissions.
-* [**Microsoft account**][account] - The username and password that you use to sign in to Microsoft Azure subscriptions, XBox Live, Outlook.com, and other Microsoft services.
+* [Resource group][group]: Every resource belongs to one group. A group is a convenient way to manage related resources, particularly for access control. For example, in one resource group you could put a web app, an Application Insights resource to monitor the app, and an Azure Storage resource to keep exported data.
+* [Subscription](https://portal.azure.com): To use Application Insights or other Azure resources, you sign in to an Azure subscription. Every resource group belongs to one Azure subscription, where you choose your price package. If it's an organization subscription, the owner can choose the members and their access permissions.
+* [Microsoft account][account]: The username and password that you use to sign in to Azure subscriptions, Xbox Live, Outlook.com, and other Microsoft services.
## <a name="access"></a> Control access in the resource group
-It's important to understand that in addition to the resource you created for your application, there are also separate hidden resources for alerts and web tests. They are attached to the same [resource group](#resource-group) as your Application Insights resource. You might also have put other Azure services in there, such as websites or storage.
+Along with the resource you created for your application, there are also separate hidden resources for alerts and web tests. They're attached to the same [resource group](#resource-group) as your Application Insights resource. You might also have put other Azure services in there, such as websites or storage.
-## To provide access to another user
+## Provide access to another user
You must have Owner rights to the subscription or the resource group.
-The user must have a [Microsoft Account][account], or access to their [organizational Microsoft Account](../../active-directory/fundamentals/sign-up-organization.md). You can provide access to individuals, and also to user groups defined in Azure Active Directory.
+The user must have a [Microsoft account][account] or access to their [organizational Microsoft account](../../active-directory/fundamentals/sign-up-organization.md). You can provide access to individuals and also to user groups defined in Azure Active Directory.
-#### Navigate to resource group or directly to the resource itself
+#### Go to a resource group or directly to the resource itself
-1. Assign the Contributor role to the Role Based Access Control.
+Assign the Contributor role by using Azure RBAC.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+For detailed steps, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
#### Select a role
-Where applicable we link to the associated official reference documentation.
+Where applicable, the link connects to the associated official reference documentation.
| Role | In the resource group |
| --- | --- |
| [Owner](../../role-based-access-control/built-in-roles.md#owner) |Can change anything, including user access. |
| [Contributor](../../role-based-access-control/built-in-roles.md#contributor) |Can edit anything, including all resources. |
-| [Application Insights Component contributor](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor) |Can edit Application Insights resources. |
+| [Application Insights Component Contributor](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor) |Can edit Application Insights resources. |
| [Reader](../../role-based-access-control/built-in-roles.md#reader) |Can view but not change anything. |
-| [Application Insights Snapshot Debugger](../../role-based-access-control/built-in-roles.md#application-insights-snapshot-debugger) | Gives the user permission to use Application Insights Snapshot Debugger features. This role is included in neither the Owner nor Contributor roles. |
+| [Application Insights Snapshot Debugger](../../role-based-access-control/built-in-roles.md#application-insights-snapshot-debugger) | Gives the user permission to use Application Insights Snapshot Debugger features. This role isn't included in the Owner or Contributor roles. |
| Azure Service Deploy Release Management Contributor | Contributor role for services deploying through Azure Service Deploy. |
-| [Data Purger](../../role-based-access-control/built-in-roles.md#data-purger) | Special role for purging personal data. See our [guidance for personal data](../logs/personal-data-mgmt.md) for more information. |
-| ExpressRoute Administrator | Can create delete and manage express routes.|
-| [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor) | Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs; reading storage account keys to be able to configure collection of logs from Azure Storage; creating and configuring Automation accounts; adding solutions; and configuring Azure diagnostics on all Azure resources. |
-| [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#log-analytics-reader) | Log Analytics Reader can view and search all monitoring data as well as and view monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources. |
+| [Data Purger](../../role-based-access-control/built-in-roles.md#data-purger) | Special role for purging personal data. For more information, see [Manage personal data in Log Analytics and Application Insights](../logs/personal-data-mgmt.md). |
+| Azure ExpressRoute administrator | Can create, delete, and manage express routes.|
+| [Log Analytics Contributor](../../role-based-access-control/built-in-roles.md#log-analytics-contributor) | Log Analytics Contributor can read all monitoring data and edit monitoring settings. Editing monitoring settings includes adding the VM extension to VMs, reading storage account keys to be able to configure collection of logs from Azure Storage, creating and configuring Automation accounts, adding solutions, and configuring Azure diagnostics on all Azure resources. |
+| [Log Analytics Reader](../../role-based-access-control/built-in-roles.md#log-analytics-reader) | Log Analytics Reader can view and search all monitoring data and view monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources. |
| masterreader | Allows a user to view everything but not make changes. |
| [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) | Can read all monitoring data and update monitoring settings.|
| [Monitoring Metrics Publisher](../../role-based-access-control/built-in-roles.md#monitoring-metrics-publisher) | Enables publishing metrics against Azure resources. |
| [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) | Can read all monitoring data. |
-| Resource Policy Contributor (Preview) | Backfilled users from EA, with rights to create/modify resource policy, create support ticket and read resource/hierarchy. |
-| [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) | Allows a user to manage access for other users to Azure resources.|
-| [Website Contributor](../../role-based-access-control/built-in-roles.md#website-contributor) | Lets you manage websites (not web plans), but not access to them..|
+| Resource Policy Contributor (preview) | Backfilled users from Enterprise Agreements, with rights to create/modify resource policy, create support tickets, and read resource/hierarchy. |
+| [User Access administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) | Allows a user to manage access for other users to Azure resources.|
+| [Website Contributor](../../role-based-access-control/built-in-roles.md#website-contributor) | Lets you manage websites (not web plans) but not access to them.|
-'Editing' includes creating, deleting and updating:
+Editing includes creating, deleting, and updating:
* Resources
* Web tests
Where applicable we link to the associated official reference documentation.
#### Select the user
-If the user you want isn't in the directory, you can invite anyone with a Microsoft account.
-(If they use services like Outlook.com, OneDrive, Windows Phone, or XBox Live, they have a Microsoft account.)
+If the user you want isn't in the directory, you can invite anyone with a Microsoft account. If they use services like Outlook.com, OneDrive, Windows Phone, or Xbox Live, they have a Microsoft account.
## Related content
-* [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md)
+See the article [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).
## PowerShell query to determine role membership
-Since certain roles can be linked to notifications and e-mail alerts it can be helpful to be able to generate a list of users who belong to a given role. To help with generating these types of lists we offer the following sample queries that can be adjusted to fit your specific needs:
+Because certain roles can be linked to notifications and email alerts, it can be helpful to be able to generate a list of users who belong to a given role. To help with generating these types of lists, the following sample queries can be adjusted to fit your specific needs.
### Query entire subscription for Admin roles + Contributor roles
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Title: Azure Application Insights Agent overview | Microsoft Docs
-description: An overview of Application Insights Agent. Monitor website performance without redeploying the website. Works with ASP.NET web apps hosted on-premises, in VMs, or on Azure.
+ Title: Application Insights Agent overview | Microsoft Docs
+description: Learn how to use Application Insights Agent to monitor website performance without redeploying the website. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure.
Last updated 09/16/2019
# Deploy Azure Monitor Application Insights Agent for on-premises servers > [!IMPORTANT]
-> This guidance is recommended for On-Premises and non-Azure cloud deployments of Application Insights Agent. Here's the recommended approach for [Azure virtual machine and virtual machine scale set deployments](./azure-vm-vmss-apps.md).
+> This guidance is recommended for on-premises and non-Azure cloud deployments of Application Insights Agent. We recommend a [different deployment approach for Azure virtual machines and Azure virtual machine scale sets](./azure-vm-vmss-apps.md).
Application Insights Agent (formerly named Status Monitor V2) is a PowerShell module published to the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.ApplicationMonitor). It replaces Status Monitor.
For a complete list of supported auto-instrumentation scenarios, see [Supported
## PowerShell Gallery
-Application Insights Agent is located here: https://www.powershellgallery.com/packages/Az.ApplicationMonitor.
-
-![PowerShell Gallery](https://img.shields.io/powershellgallery/v/Az.ApplicationMonitor.svg?color=Blue&label=Current%20Version&logo=PowerShell&style=for-the-badge)
+Application Insights Agent is located in the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.ApplicationMonitor).
+![PowerShell Gallery icon.](https://img.shields.io/powershellgallery/v/Az.ApplicationMonitor.svg?color=Blue&label=Current%20Version&logo=PowerShell&style=for-the-badge)
## Instructions
-- See the [getting started instructions](status-monitor-v2-get-started.md) to get a start with concise code samples.
-- See the [detailed instructions](status-monitor-v2-detailed-instructions.md) for a deep dive on how to get started.
+- To get started with concise code samples, see [Get started](status-monitor-v2-get-started.md).
+- For a deep dive on how to get started, see the [detailed instructions](status-monitor-v2-detailed-instructions.md).
## PowerShell API reference

- [Disable-ApplicationInsightsMonitoring](./status-monitor-v2-api-reference.md#disable-applicationinsightsmonitoring)
Application Insights Agent is located here: https://www.powershellgallery.com/pa
## FAQ
-- Does Application Insights Agent support proxy installations?
+This section provides answers to common questions.
- *Yes*. There are multiple ways to download Application Insights Agent.
-If your computer has internet access, you can onboard to the PowerShell Gallery by using `-Proxy` parameters.
-You can also manually download the module and either install it on your computer or use it directly.
-Each of these options is described in the [detailed instructions](status-monitor-v2-detailed-instructions.md).
+### Does Application Insights Agent support proxy installations?
-- Does Status Monitor v2 support ASP.NET Core applications?
+Yes. There are multiple ways to download Application Insights Agent:
- *Yes*. Starting from [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1), ASP.NET Core applications hosted in IIS are supported.
+- If your computer has internet access, you can onboard to the PowerShell Gallery by using `-Proxy` parameters.
+- You can also manually download the module and either install it on your computer or use it directly.
+
+Each of these options is described in the [detailed instructions](status-monitor-v2-detailed-instructions.md).
-- How do I verify that the enablement succeeded?
+### Does Status Monitor v2 support ASP.NET Core applications?
- - The [Get-ApplicationInsightsMonitoringStatus](./status-monitor-v2-api-reference.md#get-applicationinsightsmonitoringstatus) cmdlet can be used to verify that enablement succeeded.
- - We recommend you use [Live Metrics](./live-stream.md) to quickly determine if your app is sending telemetry.
+ Yes. Starting from [Application Insights Agent 2.0.0-beta1](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/2.0.0-beta1), ASP.NET Core applications hosted in IIS are supported.
+### How do I verify that the enablement succeeded?
+
+ - You can use the [Get-ApplicationInsightsMonitoringStatus](./status-monitor-v2-api-reference.md#get-applicationinsightsmonitoringstatus) cmdlet to verify that enablement succeeded.
+ - Use [Live Metrics](./live-stream.md) to quickly determine if your app is sending telemetry.
- You can also use [Log Analytics](../logs/log-analytics-tutorial.md) to list all the cloud roles currently sending telemetry:
+
```Kusto
union * | summarize count() by cloud_RoleName, cloud_RoleInstance
```
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
## Release notes
+The release note updates are listed here.
### 2.0.0-beta3
-- Update ApplicationInsights .NET/.NET Core SDK to 2.20.1-redfield.
-- Enable SQL query collection.
+- Updated the Application Insights .NET/.NET Core SDK to 2.20.1-redfield
+- Enabled SQL query collection
### 2.0.0-beta2
-- Updated ApplicationInsights .NET/.NET Core SDK to 2.18.1-redfield.
+Updated the Application Insights .NET/.NET Core SDK to 2.18.1-redfield
### 2.0.0-beta1
-- Added ASP.NET Core Auto-Instrumentation feature.
+Added the ASP.NET Core auto-instrumentation feature
## Next steps
View your telemetry:
* [Explore metrics](../essentials/metrics-charts.md) to monitor performance and usage.
* [Search events and logs](./diagnostic-search.md) to diagnose problems.
-* [Use Analytics](../logs/log-query-overview.md) for more advanced queries.
+* [Use Log Analytics](../logs/log-query-overview.md) for more advanced queries.
* [Create dashboards](./overview-dashboard.md).

Add more telemetry:
* [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.
-* [Add web client telemetry](./javascript.md) to see exceptions from web page code and to enable trace calls.
-* [Add the Application Insights SDK to your code](./asp-net.md) so you can insert trace and log calls.
+* [Add web client telemetry](./javascript.md) to see exceptions from webpage code and to enable trace calls.
+* [Add the Application Insights SDK to your code](./asp-net.md) so that you can insert trace and log calls.
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
na Previously updated : 05/09/2022 Last updated : 11/08/2022
The differences between each of the metrics are summarized in the following tabl
| Analyze | [Metrics Explorer](metrics-charts.md) | [Metrics Explorer](metrics-charts.md) | PromQL<br>Grafana dashboards |
| Alert | [metrics alert rule](../alerts/tutorial-metric-alert.md) | [metrics alert rule](../alerts/tutorial-metric-alert.md) | [Prometheus alert rule](../essentials/prometheus-rule-groups.md) |
| Visualize | [Workbooks](../visualize/workbooks-overview.md)<br>[Azure dashboards](../app/tutorial-app-dashboards.md)<br>[Grafana](../visualize/grafana-plugin.md) | [Workbooks](../visualize/workbooks-overview.md)<br>[Azure dashboards](../app/tutorial-app-dashboards.md)<br>[Grafana](../visualize/grafana-plugin.md) | [Grafana](../../managed-grafan) |
-| Retrieve | [Azure CLI](/cli/azure/monitor/metrics)<br>[Azure PowerShell cmdlets](/powershell/module/az.monitor)<br>[REST API](./rest-api-walkthrough.md) or client library<br>[.NET](/dotnet/api/overview/azure/Monitor.Query-readme)<br>[Java](/jav) |
+| Retrieve | [Azure CLI](/cli/azure/monitor/metrics)<br>[Azure PowerShell cmdlets](/powershell/module/az.monitor)<br>[REST API](./rest-api-walkthrough.md) or client library<br>[.NET](/dotnet/api/overview/azure/Monitor.Query-readme)<br>[Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery)<br>[Java](/jav) |
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
The only requirement to enable Azure Monitor managed service for Prometheus is t
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.

## Rules and alerts
-Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alert rules and recording rules using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus. For your AKS cluster, you can use a [set of predefined Prometheus alert rules](../containers/container-insights-metric-alerts.md).
-
+Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboards or by other rules. Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](/azure/azure-monitor/alerts/action-groups) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types. For your AKS cluster, a set of [predefined Prometheus alert rules](/azure/azure-monitor/containers/container-insights-metric-alerts) and [recording rules](/azure/azure-monitor/essentials/prometheus-metrics-scrape-default#recording-rules) is provided to help you get started quickly.
## Limitations

See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance-related service limits for Azure Monitor managed service for Prometheus.

- Private Links aren't supported for Prometheus metrics collection into an Azure Monitor workspace.
- Azure Monitor managed service for Prometheus is only supported in public clouds.
- Metrics addon doesn't work on AKS clusters configured with HTTP proxy.
Following are links to Prometheus documentation.
- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).
- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/overview.md
Title: Overview
description: This site describes the REST API created to make the data collected by Azure Log Analytics easily available. Previously updated : 11/29/2021 Last updated : 11/08/2022

# Azure Monitor Log Analytics API Overview
To try the API without writing any code, you can use:
- Your favorite client such as [Fiddler](https://www.telerik.com/fiddler) or [Postman](https://www.getpostman.com/) to manually generate queries with a user interface.
- [cURL](https://curl.haxx.se/) from the command line, and then pipe the output into [jsonlint](https://github.com/zaach/jsonlint) to get readable JSON.
-Instead of calling the REST API directly, you can also use the Azure Monitor Query SDK. The SDK contains idiomatic client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), and [Python](/python/api/overview/azure/monitor-query-readme). Each client library is a wrapper around the REST API that allows you to retrieve log data from the workspace.
+Instead of calling the REST API directly, you can also use the Azure Monitor Query SDK. The SDK contains idiomatic client libraries for the following ecosystems:
+
+- [.NET](/dotnet/api/overview/azure/Monitor.Query-readme)
+- [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery)
+- [Java](/java/api/overview/azure/monitor-query-readme)
+- [JavaScript](/javascript/api/overview/azure/monitor-query-readme)
+- [Python](/python/api/overview/azure/monitor-query-readme)
+
+Each client library is a wrapper around the REST API that allows you to retrieve log data from the workspace.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
Title: Azure Monitor Logs pricing details
+ Title: Azure Monitor Logs cost calculations and options
description: Cost details for data stored in a Log Analytics workspace in Azure Monitor, including commitment tiers and data size calculation.
Last updated 03/24/2022
ms.reviwer: dalek git
-# Azure Monitor Logs pricing details
+# Azure Monitor Logs cost calculations and options
The most significant charges for most Azure Monitor implementations will typically be ingestion and retention of data in your Log Analytics workspaces. Several features in Azure Monitor don't have a direct cost but add to the workspace data that's collected. This article describes how data charges are calculated for your Log Analytics workspaces and Application Insights resources and the different configuration options that affect your costs.
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
description: Learn the basics of Azure Monitor Logs, which is used for advanced
documentationcenter: '' na Previously updated : 01/27/2022 Last updated : 11/08/2022
The following table describes some of the ways that you can use Azure Monitor Lo
| **Alert** | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
| **Visualize** | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|
| **Get insights** | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. |
-| **Retrieve** | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](https://dev.loganalytics.io/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
+| **Retrieve** | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](https://dev.loganalytics.io/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
| **Export** | Configure [automated export of log data](./logs-data-export.md) to an Azure storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). |

![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png)
azure-monitor Log Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md
Previously updated : 06/22/2022 Last updated : 11/08/2022
Areas in Azure Monitor where you will use queries include the following:
- [Logic Apps](../logs/logicapp-flow-connector.md). Use the results of a log query in an automated workflow using Logic Apps.
- [PowerShell](/powershell/module/az.operationalinsights/invoke-azoperationalinsightsquery). Use the results of a log query in a PowerShell script from a command line or an Azure Automation runbook that uses Invoke-AzOperationalInsightsQuery.
- [Azure Monitor Logs API](https://dev.loganalytics.io). Retrieve log data from the workspace from any REST API client. The API request includes a query that is run against Azure Monitor to determine the data to retrieve.
-- Azure Monitor Query SDK. Retrieve log data from the workspace via an idiomatic client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).
+- Azure Monitor Query SDK. Retrieve log data from the workspace via an idiomatic client library for the following ecosystems:
+ - [.NET](/dotnet/api/overview/azure/Monitor.Query-readme)
+ - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery)
+ - [Java](/java/api/overview/azure/monitor-query-readme)
+ - [JavaScript](/javascript/api/overview/azure/monitor-query-readme)
+ - [Python](/python/api/overview/azure/monitor-query-readme)
## Getting started

The best way to get started learning to write log queries using KQL is to work through the available tutorials and samples.
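As a first taste, a starter query might look like the following sketch; the table name assumes Application Insights data in the workspace and will differ for other data sources.

```Kusto
// Sketch: count requests by name over the last hour.
// The requests table assumes Application Insights data; other sources use different tables.
requests
| where timestamp > ago(1h)
| summarize count() by name
```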
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Capabilities that require dedicated clusters:
| Americas | Europe | Middle East | Africa | Asia Pacific |
||||||
- | Brazil South | France Central | | South Africa North | Australia East |
+ | Brazil South | France Central | UAE North | South Africa North | Australia East |
| Canada Central | Germany West Central | | | Central India |
| Central US | North Europe | | | Japan East |
| East US | Norway East | | | Korea Central |
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
The Dependency Agent collects data about processes running on the virtual machin
## Dependency Agent requirements
-- The Dependency Agent requires the Log Analytics Agent to be installed on the same machine.
+- The Dependency Agent requires the Azure Monitor Agent to be installed on the same machine.
- On both the Windows and Linux versions, the Dependency Agent collects data using a user-space service and a kernel driver.
- - Dependency Agent supports the same [Windows versions Log Analytics Agent supports](../agents/agents-overview.md#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI.
+ - Dependency Agent supports the same [Windows versions that Azure Monitor Agent supports](../agents/agents-overview.md#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI.
- For Linux, see [Dependency Agent Linux support](#dependency-agent-linux-support). ## Upgrade Dependency Agent
Since the Dependency agent works at the kernel level, support is also dependent
## Next steps
-If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
+If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na Previously updated : 10/25/2022 Last updated : 11/08/2022 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
+ >[!NOTE]
+ >By default, the `.snapshot` directory path is hidden from NFSv4.1 clients. Enabling the **Hide snapshot path** option will hide the `.snapshot` directory from NFSv3 clients; the directory will still be accessible.
+ 3. Click **Protocol**, and then complete the following actions: * Select **NFS** as the protocol type for the volume.
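If you script volume creation instead of using the portal steps above, a minimal Azure CLI sketch looks like the following. The resource group, NetApp account, capacity pool, virtual network, and delegated subnet names are placeholders for resources you've already created.

```azurecli
# Create a 4 TiB (4096 GiB) NFSv3 volume in an existing capacity pool.
az netappfiles volume create \
  --resource-group myRG \
  --account-name myanfaccount \
  --pool-name mypool \
  --name myvol1 \
  --location eastus \
  --service-level Premium \
  --usage-threshold 4096 \
  --file-path "myvol1" \
  --vnet myvnet \
  --subnet myanfsubnet \
  --protocol-types NFSv3
```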
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 11/07/2022 Last updated : 11/08/2022 # Guidelines for Azure NetApp Files network planning
Azure NetApp Files Standard network features are supported for the following reg
* East US * East US 2 * France Central
+* Germany North
* Germany West Central * Japan East * Japan West
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 11/07/2022 Last updated : 11/08/2022 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Maximum size of a single file | 16 TiB | No | | Maximum size of directory metadata in a single directory | 320 MB | No | | Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No |
-| Maximum number of files ([`maxfiles`](#maxfiles)) per volume | 100 million | Yes |
+| Maximum number of files ([`maxfiles`](#maxfiles)) per volume | 106,255,630 | Yes |
| Maximum number of export policy rules per volume | 5 | No | | Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No | | Maximum assigned throughput for a manual QoS volume | 4,500 MiB/s | No |
The service dynamically adjusts the `maxfiles` limit for a volume based on its p
>[!IMPORTANT] > If your volume has a quota of at least 4 TiB and you want to increase the quota, you must initiate [a support request](#request-limit-increase).
-For volumes with at least 4 TiB of quota, you can increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+If you have allocated at least 4 TiB of quota for a volume, you can initiate a support request to increase the `maxfiles` (inodes) limit beyond 106,255,630. For every 106,255,630 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 106,255,630 files to 212,511,260 files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
-You can increase the `maxfiles` limit to 500 million if your volume quota is at least 20 TiB.
+You can increase the `maxfiles` limit to 531,278,150 if your volume quota is at least 20 TiB.
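To illustrate the quota arithmetic above, the volume quota itself can be raised with the Azure CLI before you file the support request for the higher `maxfiles` limit; the resource names below are placeholders.

```azurecli
# Raise an existing volume's quota from 4 TiB to 8 TiB (the value is in GiB)
# so that it qualifies for a maxfiles increase beyond the default limit.
az netappfiles volume update \
  --resource-group myRG \
  --account-name myanfaccount \
  --pool-name mypool \
  --name myvol1 \
  --usage-threshold 8192
```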
## Request limit increase
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
na Previously updated : 06/06/2022 Last updated : 11/08/2022 # Convert an NFS volume between NFSv3 and NFSv4.1
Converting a volume between NFSv3 and NFSv4.1 does not require that you create a
* You cannot convert a single-protocol NFS volume to a dual-protocol volume, or the other way around. * You cannot convert a destination volume in a cross-region replication relationship. * Converting an NFSv4.1 volume to NFSv3 will cause all advanced NFSv4.1 features such as ACLs and file locking to become unavailable.
+* Converting a volume from NFSv3 to NFSv4.1 will cause the `.snapshot` directory to be hidden from NFSv4.1 clients. The directory will still be accessible.
+* Converting a volume from NFSv4.1 to NFSv3 will cause the `.snapshot` directory to be visible. You can modify the properties of the volume to [hide the snapshot path](snapshots-edit-hide-path.md).
## Register the option
azure-netapp-files Faq Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-nfs.md
Previously updated : 08/03/2022 Last updated : 11/08/2022 # NFS FAQs for Azure NetApp Files
For example, if a client mounting a volume becomes unresponsive or crashes beyon
A grace period defines a period of special processing in which clients can try to reclaim their locking state during a server recovery. The default timeout for the leases is 30 seconds with a grace period of 45 seconds. After that time, the client's lease will be released.
+## Why is the `.snapshot` directory not visible in an NFSv4.1 volume, but it is visible in an NFSv3 volume?
+
+By design, the `.snapshot` directory is never visible to NFSv4.1 clients. By default, the `.snapshot` directory will be visible to NFSv3 clients. To hide the `.snapshot` directory from NFSv3 clients, edit the properties of the volume to [hide the snapshot path](snapshots-edit-hide-path.md).
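If you prefer to script this change, recent Azure CLI versions expose a snapshot-path visibility flag on `az netappfiles volume update`. The flag name shown below (`--snapshot-dir-visible`) is an assumption; confirm it with `az netappfiles volume update --help` before relying on it.

```azurecli
# Hide the .snapshot directory from NFSv3 clients on an existing volume.
# NOTE: --snapshot-dir-visible is an assumed flag name; verify it with --help.
az netappfiles volume update \
  --resource-group myRG \
  --account-name myanfaccount \
  --pool-name mypool \
  --name myvol1 \
  --snapshot-dir-visible false
```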
+ ## Oracle dNFS ### Are there any Oracle patches required with dNFS?
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 09/06/2022 Last updated : 11/7/2022
To switch to a different directory, select the directory that you want to work i
:::image type="content" source="media/set-preferences/settings-directories-subscriptions-default-filter.png" alt-text="Screenshot showing the Directories settings pane.":::
-### Subscription filters
+## Subscription filters
You can choose the subscriptions that are filtered by default when you sign in to the Azure portal. This can be helpful if you have a primary list of subscriptions you work with but use others occasionally.
+> [!IMPORTANT]
+> After you apply a subscription filter in the Azure portal settings page, you will only see subscriptions that match the filter across all portal experiences. You won't be able to work with other subscriptions that are excluded from the selected filter. Any new subscriptions that are created after the filter was applied may not be shown if the filter criteria don't match. To see them, you must update the filter criteria to include other subscriptions in the portal, or select **Advanced filters** and use the **Default** filter to always show all subscriptions.
+>
+> Certain features, such as **Management groups** or **Security Center**, may show subscriptions that don't match your filter criteria. However, you won't be able to perform operations on those subscriptions (such as moving a subscription between management groups) unless you adjust your filters to include the subscriptions that you want to work with.
+ To use customized filters, select **Advanced filters**. You'll be prompted to confirm before continuing. :::image type="content" source="media/set-preferences/settings-advanced-filters-enable.png" alt-text="Screenshot showing the confirmation dialog box for Advanced filters.":::
This will enable the **Advanced filters** page, where you can create and manage
:::image type="content" source="media/set-preferences/settings-advanced-filters-disable.png" alt-text="Screenshot showing the confirmation dialog box for disabling Advanced filters.":::
-## Advanced filters
+### Advanced filters
After enabling the **Advanced filters** page, you can create, modify, or delete subscription filters.
azure-signalr Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md
Last updated 08/15/2022
-# Configure a custom domain for Azure SignalR Service
+# How to configure a custom domain for Azure SignalR Service
-In addition to the default domain provided Azure SignalR Service, you can also add custom domains.
+In addition to the default domain provided with Azure SignalR Service, you can also add a custom DNS domain to your service. In this article, you'll learn how to add a custom domain to your SignalR Service.
> [!NOTE] > Custom domains is a Premium tier feature. Standard tier resources can be upgraded to Premium tier without downtime.
+To configure a custom domain, you need to:
+
+1. Add a custom domain certificate.
+1. Create a DNS CNAME record.
+1. Add the custom domain.
+
+## Prerequisites
+
+- A custom domain registered through an Azure App Service or a third party registrar.
+- An Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- An Azure Resource Group.
+- An Azure SignalR Service resource.
+- An Azure Key Vault instance.
+- A custom domain SSL certificate stored in your Key Vault instance. See [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md). An example import command follows this list.
+- An Azure DNS zone. (Optional)
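As referenced in the certificate prerequisite above, one way to get an SSL certificate into Key Vault is to import an existing PFX file. This is a minimal sketch; the vault name, certificate name, file path, and password are placeholders.

```azurecli
# Import an existing PFX certificate for contoso.example.com into the key vault.
az keyvault certificate import \
  --vault-name contoso-kv \
  --name contoso-example-com \
  --file ./contoso-example-com.pfx \
  --password "<pfx-password>"
```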
+ ## Add a custom certificate
-You need to add a custom certificate before you can add a custom domain. A custom certificate is a resource of your SignalR Service. It references a certificate stored in your Azure Key Vault. For security and compliance reasons, SignalR Service doesn't permanently store your certificate. Instead it fetches the certificate from your key vault on the fly and keeps it in memory.
+Before you can add a custom domain, you need to add a custom SSL certificate. Your SignalR Service accesses the certificate stored in your key vault through a managed identity.
-There are four steps to adding a custom certificate.
+There are three steps to adding a domain certificate.
-1. Enable managed identity in SignalR Service.
-1. Give the SignalR Service managed identity permission to access the key vault.
-1. Store a certificate in your key vault.
-1. Add a custom certificate in SignalR Service.
+1. Enable managed identity in your SignalR Service.
+1. Give the managed identity access to your key vault.
+1. Add a custom certificate to your SignalR Service.
### Enable managed identity in SignalR Service
-1. In the Azure portal, go to your SignalR Service resource.
-1. In the menu pane on the left, select **Identity**.
-1. We'll use a system assigned identity to simplify this procedure, but you can configure a user assigned identity here if you want. On the **System assigned** table, set **Status** to **On**.
+You can use either a system-assigned or user-assigned managed identity. This article demonstrates using a system-assigned managed identity.
+
+1. In the Azure portal, go to your SignalR service resource.
+1. Select **Identity** from the menu on the left.
+1. On the **System assigned** table, set **Status** to **On**.
+ :::image type="content" alt-text="Screenshot of enabling managed identity." source="media/howto-custom-domain/portal-identity.png" :::
-1. Select **Save**, and then select **Yes** when prompted to enable system assigned managed identity.
-It will take a few moments for the managed identity to be created. When configuration is complete, the screen will show an **Object (principal) ID**. The object ID is the ID of the system-assigned managed identity SignalR Service will use to access the key vault. The name of the managed identity is the same as the name of the SignalR Service instance. In the next step, you'll need to search for the principal (managed identity) using the name or Object ID.
+1. Select **Save**, and then select **Yes** when prompted to enable system-assigned managed identity.
+
+Once the identity is created, the **Object (principal) ID** is displayed. SignalR Service will use the object ID of the system-assigned managed identity to access the key vault. The name of the managed identity is the same as the name of the SignalR Service instance. In the next section, you'll need to search for the principal (managed identity) using the name or Object ID.
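The same step can be done from the command line. The sketch below assumes a system-assigned identity and uses placeholder resource names.

```azurecli
# Enable the system-assigned managed identity on the SignalR Service instance.
az signalr identity assign \
  --name contoso-signalr \
  --resource-group contoso \
  --identity [system]

# Display the identity, including the object (principal) ID used in the next section.
az signalr identity show --name contoso-signalr --resource-group contoso
```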
+
-### Grant your SignalR Service resource access to Key Vault
+### Give the managed identity access to your key vault
-SignalR Service uses a [managed identity](~/articles/active-directory/managed-identities-azure-resources/overview.md) to access your key vault. The managed identity used by SignalR Service has to be given permission to access your key vault. The way you grant permission depends on your key vault permission model.
+SignalR Service uses a [managed identity](~/articles/active-directory/managed-identities-azure-resources/overview.md) to access your key vault. You must give the managed identity permission to access your key vault.
+
+The steps to grant permission depend on whether you selected *vault access policy* or *Azure role-based access control* as your key vault permission model.
#### [Vault access policy](#tab/vault-access-policy)
-If you're using **Vault access policy** as the key vault permission model, follow this procedure to add a new access policy.
+If you're using **Vault access policy** as your key vault permission model, follow this procedure to add a new access policy.
1. Go to your key vault resource.
-1. In the menu pane, under **Settings** select **Access policies**.
+1. Select **Access policies** from the menu on the left.
+1. Select **Create**.
+ :::image type="content" source="media/howto-custom-domain/portal-key-vault-access-policies.png" alt-text="Screenshot of Key Vault's access policy page.":::
+1. In the **Permissions** tab:
+ 1. Select **Get** under **Secret permissions**.
+ 1. Select **Get** under **Certificate permissions**.
+1. Select **Next** to go to the **Principal** tab.
- :::image type="content" alt-text="Screenshot of Vault access policy selected as the vault permission model." source="media/howto-custom-domain/portal-key-vault-perm-model-access-policy.png" :::
+ :::image type="content" source="media/howto-custom-domain/portal-key-vault-create-access-policy.png" alt-text="Screenshot of Permissions tab of Key Vault's Create an access policy page.":::
-1. In the **Add access policy** screen, select **Add access policy**. In the **Secret permissions** dropdown list, select **Get** permission. In the **Certificate permissions** dropdown list, select **Get** permission.
+1. Enter the Object ID of the managed identity into the search box.
+1. Select the managed identity from the search results.
+1. Select the **Review + create** tab.
- :::image type="content" alt-text="Screenshot of Add access policy dialog." source="media/howto-custom-domain/portal-key-vault-permissions.png" :::
+ :::image type="content" source="media/howto-custom-domain/portal-key-vault-create-access-policy-principal.png" alt-text="Screenshot of the Principal tab of Key Vault's Create an access policy page.":::
-1. Under **Select principal**, select **None selected**. The **Principal** pane will appear to the right. Search for the Azure SignalR Service resource name. Select **Select**.
+1. Select **Create** from the **Review + create** tab.
- :::image type="content" alt-text="Screenshot of principal selection in Key Vault." source="media/howto-custom-domain/portal-key-vault-principal.png" :::
+The managed identity for your SignalR Service instance is listed in the access policies table.
-1. Select **Add**.
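If you prefer the CLI for this step, the same access policy can be granted with a single command. This is a sketch; replace the vault name and object ID with your own values.

```azurecli
# Grant the SignalR managed identity "Get" permission on secrets and certificates.
az keyvault set-policy \
  --name contoso-kv \
  --object-id "<signalr-managed-identity-object-id>" \
  --secret-permissions get \
  --certificate-permissions get
```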
#### [Azure role-based access control](#tab/azure-rbac)
-If you're using the **Azure role-based access control** permission model, follow this procedure to assign a role to the SignalR Service managed identity. You must be a member of the [Azure built-in **Owner**](~/articles/role-based-access-control/built-in-roles.md#owner) role to complete this procedure.
+When using the **Azure role-based access control** permission model, follow this procedure to assign a role to the SignalR Service managed identity. To complete this procedure, you must be a member of the [Azure built-in **Owner**](~/articles/role-based-access-control/built-in-roles.md#owner) role.
:::image type="content" alt-text="Screenshot of Azure role-based access control selected as the vault permission model." source="media/howto-custom-domain/portal-key-vault-perm-model-rbac.png" :::
If you're using the **Azure role-based access control** permission model, follow
1. On the **Members** tab, under **Assign access to**, select **Managed identity**. 1. Select **+Select members**. The **Select members** pane will open on the right.
-1. Search for the Azure SignalR Service resource name or the user assigned identity name. Select **Select**.
+1. Search for the SignalR Service resource name or the user assigned identity name. Select **Select**.
:::image type="content" alt-text="Screenshot of members tab when adding a role assignment to Key Vault." source="media/howto-custom-domain/portal-key-vault-members.png" :::
If you're using the **Azure role-based access control** permission model, follow
--
-### Create a custom certificate
+### Add a custom certificate to your SignalR Service
-1. In the Azure portal, go to your Azure SignalR Service resource.
+Use the following steps to add the custom certificate to your SignalR Service:
+
+1. In the Azure portal, go to your SignalR Service resource.
1. In the menu pane, select **Custom domain**. 1. Under **Custom certificate**, select **Add**. :::image type="content" alt-text="Screenshot of custom certificate management." source="media/howto-custom-domain/portal-custom-certificate-management.png" :::
-1. Fill in a name for the custom certificate.
-1. Select **Select from your Key Vault** to choose a Key Vault certificate. After selection the following **Key Vault Base URI**, **Key Vault Secret Name** should be automatically filled. Alternatively you can also fill in these fields manually.
+1. Enter a name for the custom certificate.
+1. Select **Select from your Key Vault** to choose a key vault certificate. After you make a selection, the **Key Vault Base URI** and **Key Vault Secret Name** fields are filled automatically. Alternatively, you can fill in these fields manually.
1. Optionally, you can specify a **Key Vault Secret Version** if you want to pin the certificate to a specific version. 1. Select **Add**.
- :::image type="content" alt-text="Screenshot of adding a custom certificate." source="media/howto-custom-domain/portal-custom-certificate-add.png" :::
+
+The SignalR Service will fetch the certificate and validate its content. When it succeeds, the certificate's **Provisioning State** will be **Succeeded**.
-Azure SignalR Service will then fetch the certificate and validate its content. If everything is good, the **Provisioning State** will be **Succeeded**.
+ :::image type="content" alt-text="Screenshot of an added custom certificate." source="media/howto-custom-domain/portal-custom-certificate-added.png" :::
- :::image type="content" alt-text="Screenshot of an added custom certificate." source="media/howto-custom-domain/portal-custom-certificate-added.png" :::
+## Create a custom domain CNAME record
-## Create a custom domain CNAME
+You must create a CNAME record for your custom domain in an Azure DNS Zone or with your third-party registrar service. The CNAME record creates an alias from your custom domain to the default domain of SignalR Service. The SignalR Service uses the record to validate the ownership of your custom domain.
-To validate the ownership of your custom domain, you need to create a CNAME record for the custom domain and point it to the default domain of SignalR Service.
+For example, if your default domain is `contoso.service.signalr.net`, and your custom domain is `contoso.example.com`, you need to create a CNAME record on `example.com`.
-For example, if your default domain is `contoso.service.signalr.net`, and your custom domain is `contoso.example.com`, you need to create a CNAME record on `example.com`.
+Once you've created the CNAME record, you can perform a DNS lookup to see the CNAME information.
+For example, the output from the Linux `dig` (DNS lookup) command should look similar to the following:
```
-contoso.example.com. 0 IN CNAME contoso.service.signalr.net.
+ contoso.example.com. 0 IN CNAME contoso.service.signalr.net.
```
-If you're using Azure DNS Zone, see [manage DNS records](~/articles/dns/dns-operations-recordsets-portal.md) for how to add a CNAME record.
+If you're using Azure DNS Zone, see [manage DNS records](~/articles/dns/dns-operations-recordsets-portal.md) to learn how to add a CNAME record.
:::image type="content" alt-text="Screenshot of adding a CNAME record in Azure DNS Zone." source="media/howto-custom-domain/portal-dns-cname.png" :::
-If you're using other DNS providers, follow provider's guide to create a CNAME record.
+If you're using other DNS providers, follow the provider's guide to create a CNAME record.
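If your zone is hosted in Azure DNS, the CNAME record described above can also be created from the CLI. This sketch uses the example names from this article; the resource group is a placeholder.

```azurecli
# Create (or update) the CNAME record contoso.example.com -> contoso.service.signalr.net.
az network dns record-set cname set-record \
  --resource-group contoso \
  --zone-name example.com \
  --record-set-name contoso \
  --cname contoso.service.signalr.net
```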
## Add a custom domain
-A custom domain is another sub resource of your Azure SignalR Service. It contains the configuration for a custom domain.
+Now add the custom domain to your SignalR Service.
-1. In the Azure portal, go to your Azure SignalR Service resource.
+1. In the Azure portal, go to your SignalR Service resource.
1. In the menu pane, select **Custom domain**. 1. Under **Custom domain**, select **Add**. :::image type="content" alt-text="Screenshot of custom domain management." source="media/howto-custom-domain/portal-custom-domain-management.png" :::
-1. Fill in a name for the custom domain.
-1. Fill in the full domain name of your custom domain, for example, `contoso.com`.
+1. Enter a name for the custom domain.
+1. Enter the full domain name of your custom domain, for example, `contoso.com`.
1. Select a custom certificate that applies to this custom domain. 1. Select **Add**.
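Recent Azure CLI versions also ship an `az signalr custom-domain` command group. The sketch below is an assumption about its parameter names, not a confirmed reference; check `az signalr custom-domain create --help` before using it.

```azurecli
# Assumed syntax: bind the custom domain to the custom certificate added earlier.
# Verify the parameter names with: az signalr custom-domain create --help
az signalr custom-domain create \
  --resource-group contoso \
  --signalr-name contoso-signalr \
  --name contoso-domain \
  --domain-name contoso.example.com \
  --certificate-resource-id "<custom-certificate-resource-id>"
```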
A custom domain is another sub resource of your Azure SignalR Service. It contai
## Verify a custom domain
-You can now access your Azure SignalR Service endpoint via the custom domain. To verify it, you can access the health API.
+To verify the custom domain, use the health API, a public endpoint that returns the health status of your SignalR Service instance. It's available at `https://<your custom domain>/api/health`.
Here's an example using cURL:
PS C:\> curl.exe -v https://contoso.example.com/api/health
> Host: contoso.example.com < HTTP/1.1 200 OK
-...
-PS C:\>
``` #### [Bash](#tab/azure-bash)
$ curl -vvv https://contoso.example.com/api/health
> Host: contoso.example.com ... < HTTP/2 200
-...
``` -- It should return a `200` status code without any certificate error.
+## Access Key Vault in private network
+
+If you've configured a [Private Endpoint](../private-link/private-endpoint-overview.md) to your key vault, your SignalR Service won't be able to access your key vault via a public network. You can give your SignalR Service access to your key vault through a private network by creating a [Shared Private Endpoint](./howto-shared-private-endpoints-key-vault.md).
+
+After you create a Shared Private Endpoint, you can add a custom certificate as described in the [Add a custom certificate to your SignalR Service](#add-a-custom-certificate-to-your-signalr-service) section above.
+
+>[!IMPORTANT]
+>**You don't have to change the domain in your key vault URI**. For example, if your key vault base URI is `https://contoso.vault.azure.net`, you'll use this URI to configure a custom certificate.
+
+You don't have to explicitly allow SignalR Service IP addresses in key vault firewall settings. For more info, see [Key Vault private link diagnostics](../key-vault/general/private-link-diagnostics.md).
-## Key Vault in private network
+## Cleanup
-If you have configured [Private Endpoint](../private-link/private-endpoint-overview.md) to your Key Vault, Azure SignalR Service cannot access the Key Vault via public network. You need to set up a [Shared Private Endpoint](./howto-shared-private-endpoints-key-vault.md) to let Azure SignalR Service access your Key Vault via private network.
+If you don't plan to use the resources you've created in this article, you can delete the Resource Group.
-After you create a Shared Private Endpoint, you can create a custom certificate as usual. **You don't have to change the domain in Key Vault URI**. For example, if your Key Vault base URI is `https://contoso.vault.azure.net`, you still use this URI to configure custom certificate.
+>[!CAUTION]
+> Deleting the resource group deletes all resources contained within it. If resources outside the scope of this article exist in the specified resource group, they will also be deleted.
-You don't have to explicitly allow Azure SignalR Service IPs in Key Vault firewall settings. For more info, see [Key Vault private link diagnostics](../key-vault/general/private-link-diagnostics.md).
## Next steps
azure-signalr Howto Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints-key-vault.md
description: Learn how Azure SignalR Service can use shared private endpoints to
-+ Last updated 09/23/2022 # Access Key Vault in a private network through shared private endpoints
-Azure SignalR Service can access your Azure Key Vault instance in a private network through shared private endpoints. In this way, you don't have to expose your key vault on a public network.
+Azure SignalR Service can access your Key Vault in a private network through Shared Private Endpoints. This way, your Key Vault isn't exposed on a public network.
:::image type="content" alt-text="Diagram that shows the architecture of a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\shared-private-endpoint-overview.png" :::
-## Management of shared private link resources
+You can create private endpoints through Azure SignalR Service APIs for shared access to a resource integrated with [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These endpoints, called *shared private link resources*, are created inside the SignalR execution environment and aren't accessible outside this environment.
-Private endpoints of secured resources that are created through Azure SignalR Service APIs are called *shared private link resources*. This is because you're "sharing" access to a resource, such a key vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside an Azure SignalR Service execution environment and aren't directly visible to you.
+In this article, you'll learn how to create a shared private endpoint to Key Vault.
-> [!NOTE]
-> The examples in this article are based on the following assumptions:
-> * The resource ID of the Azure SignalR Service instance is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
-> * The resource ID of the key vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
+## Prerequisites
-The examples show how the *contoso-signalr* service can be configured so that its outbound calls to the key vault go through a private endpoint rather than a public network.
+You'll need the following resources to complete this article:
-## Create a shared private link resource to the key vault
+- An Azure resource group.
+- An Azure SignalR Service instance.
+- An Azure Key Vault instance.
++
+The examples in this article use the following naming convention, although you can use your own names instead.
+
+- The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
+- The resource ID of Azure Key Vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
+- The rest of the examples show how the *contoso-signalr* service can be configured so that its outbound calls to Key Vault go through a private endpoint rather than public network.
++
+## Create a shared private link resource to the Key Vault
### [Azure portal](#tab/azure-portal) 1. In the Azure portal, go to your Azure SignalR Service resource.
-1. On the menu pane, select **Networking**. Switch to the **Private access** tab.
-1. Select **Add shared private endpoint**.
+1. Select **Networking**.
+1. Select the **Private access** tab.
+1. Select **Add shared private endpoint** in the **Shared private endpoints** section.
:::image type="content" alt-text="Screenshot of the button for adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" :::
-1. Fill in a name for the shared private endpoint.
-1. Select the target linked resource either by selecting from your owned resources or by filling in a resource ID.
+ Enter the following information:
+ | Field | Description |
+ | -- | -- |
+ | **Name** | The name of the shared private endpoint. |
+ | **Type** | Select *Microsoft.KeyVault/vaults* |
+ | **Subscription** | The subscription containing your Key Vault. |
+ | **Resource** | Enter the name of your Key Vault resource. |
+ | **Request Message** | Enter "please approve" |
+ 1. Select **Add**. :::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-add.png" :::
-1. Confirm that the shared private endpoint resource is now in a **Succeeded** provisioning state. The connection state is **Pending** at the target resource side.
+When you've successfully added the private endpoint, the provisioning state will be **Succeeded**. The connection state will be **Pending** until you approve the endpoint on the Key Vault side.
:::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" ::: ### [Azure CLI](#tab/azure-cli)
-You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
+Make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
-```dotnetcli
+```azurecli
az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/kv-pe?api-version=2021-06-01-preview --body @create-pe.json ```
The contents of the *create-pe.json* file, which represent the request body to t
} ```
-The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following:
+The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following text:
```plaintext "Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview"
The process of creating an outbound private endpoint is a long-running (asynchro
You can poll this URI periodically to obtain the status of the operation.
-If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value:
+You can poll for the status by manually querying the `Azure-AsyncOperationHeader` value:
-```dotnetcli
+```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview ```
Wait until the status changes to **Succeeded** before you proceed to the next st
--
-## Approve the private endpoint connection for the key vault
+## Approve the private endpoint connection for the Key Vault
### [Azure portal](#tab/azure-portal)
-1. In the Azure portal, select the **Networking** tab for your key vault and go to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+1. Go to your Key Vault resource.
+1. Select **Networking**.
+1. Select the **Private endpoint connections** tab.
+ After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
-1. Select the private endpoint that Azure SignalR Service created. Then select **Approve**.
-
- :::image type="content" alt-text="Screenshot of the Azure portal that shows the pane for private endpoint connections." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" :::
-
-1. Make sure that the private endpoint connection appears, as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+1. Select the private endpoint that SignalR Service created, then select **Approve**.
+1. Select **Yes** to approve the connection.
:::image type="content" alt-text="Screenshot of the Azure portal that shows an Approved status on the pane for private endpoint connections." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" ::: ### [Azure CLI](#tab/azure-cli)
-1. List private endpoint connections:
+1. List private endpoint connections.
- ```dotnetcli
+ ```azurecli
az network private-endpoint-connection list -n <key-vault-resource-name> -g <key-vault-resource-group-name> --type 'Microsoft.KeyVault/vaults' ```
- There should be a pending private endpoint connection. Note down its ID.
+ There should be a pending private endpoint connection. Note its ID.
```json [
Wait until the status changes to **Succeeded** before you proceed to the next st
1. Approve the private endpoint connection:
- ```dotnetcli
+ ```azurecli
az network private-endpoint-connection approve --id <private-endpoint-connection-id> ``` --
-## Query the status of the shared private link resource
+## Verify the shared private endpoint is functional
-It takes minutes for the approval to be propagated to Azure SignalR Service. You can check the state by using either the Azure portal or the Azure CLI.
+After a few minutes, the approval propagates to the SignalR Service, and the connection state is set to *Approved*. You can check the state using either Azure portal or Azure CLI.
### [Azure portal](#tab/azure-portal)
It takes minutes for the approval to be propagated to Azure SignalR Service. You
### [Azure CLI](#tab/azure-cli)
-```dotnetcli
+```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview ```
-This command returns JSON that shows the connection state as the `status` value in the `properties` section.
+The command will return a JSON object, where the connection state is shown as "status" in the "properties" section.
+ ```json {
This command returns JSON that shows the connection state as the `status` value
```
-If the provisioning state (`properties.provisioningState`) of the resource is `Succeeded` and the connection state (`properties.status`) is `Approved`, the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
+When the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, the shared private link resource is functional, and the SignalR Service can communicate over the private endpoint.
--
-At this point, the private endpoint between Azure SignalR Service and Azure Key Vault is established.
+When the private endpoint between the SignalR Service and Azure Key Vault is functional, the value of the provisioning state is **Succeeded**, and the connection state is **Approved**.
-Now you can configure features like custom domain as usual. *You don't have to use a special domain for Key Vault*. Azure SignalR Service automatically handles DNS resolution.
+## Cleanup
-## Next steps
+If you don't plan to use the resources you've created in this article, you can delete the Resource Group.
-Learn more:
+>[!CAUTION]
+> Deleting the resource group deletes all resources contained within it. If resources outside the scope of this article exist in the specified resource group, they will also be deleted.
+
+## Next steps
+ [What are private endpoints?](../private-link/private-endpoint-overview.md) + [Configure a custom domain](howto-custom-domain.md)
azure-signalr Howto Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints.md
# Secure Azure SignalR outbound traffic through Shared Private Endpoints
-If you're using [serverless mode](concept-service-mode.md#serverless-mode) in Azure SignalR Service, you might have outbound traffic to upstream. Upstream such as
-Azure Web App and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. You can create an outbound [private endpoint connection](../private-link/private-endpoint-overview.md) to reach these endpoints.
+When you're using [serverless mode](concept-service-mode.md#serverless-mode) in Azure SignalR Service, you can create outbound [private endpoint connections](../private-link/private-endpoint-overview.md) to an upstream service.
+
+Upstream services, such as Azure Web App and Azure Functions, can be configured to accept connections from a list of virtual networks and refuse outside connections that originate from a public network. To reach these endpoints, you can create an outbound private endpoint connection.
:::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-shared-private-endpoints\shared-private-endpoint-overview.png" ::: This outbound method is subject to the following requirements:
-+ The upstream must be Azure Web App or Azure Function.
-
-+ The Azure SignalR Service service must be on the Standard tier.
-++ The upstream service must be Azure Web App or Azure Function.++ The Azure SignalR Service must not be on the Free tier. + The Azure Web App or Azure Function must be on certain SKUs. See [Use Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).
+In this article, you'll learn how to create a shared private endpoint with an outbound private endpoint connection to secure outbound traffic to an upstream Azure Function instance.
+ ## Shared Private Link Resources Management
-Private endpoints of secured resources that are created through Azure SignalR Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Function, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure SignalR Service execution environment and aren't directly visible to you.
+You create private endpoints of secured resources through the SignalR Service APIs. These endpoints, called *shared private link resources*, allow you to share access to a resource, such as an Azure Function integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside the SignalR Service execution environment and aren't accessible outside this environment.
+
+## Prerequisites
+
+You'll need the following resources to complete the steps in this article:
+
+- An Azure Resource Group
+- An Azure SignalR Service instance (must not be in free tier)
+- An Azure Function instance
-> [!NOTE]
+- > [!NOTE]
> The examples in this article are based on the following assumptions:
-> * The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
+> * The resource ID of the SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
> * The resource ID of upstream Azure Function is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.Web/sites/contoso-func_.
+> The rest of the examples show how the *contoso-signalr* service can be configured so that its upstream calls to the function go through a private endpoint rather than public network.
+> You may use your own resource IDs in the examples.
-The rest of the examples show how the *contoso-signalr* service can be configured so that its upstream calls to function go through a private endpoint rather than public network.
+## Create a shared private link resource to the function
-### Step 1: Create a shared private link resource to the function
+### [Azure portal](#tab/azure-portal)
-#### [Azure portal](#tab/azure-portal)
-
-1. In the Azure portal, go to your Azure SignalR Service resource.
-1. In the menu pane, select **Networking**. Switch to **Private access** tab.
-1. Click **Add shared private endpoint**.
+1. In the Azure portal, go to your SignalR Service resource.
+1. Select **Networking** with the left menu.
+1. Select the **Private access** tab.
+1. Select **Add shared private endpoint** in the **Shared private endpoints** section.
:::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints\portal-shared-private-endpoints-management.png" :::
-1. Fill in a name for the shared private endpoint.
-1. Select the target linked resource either by selecting from your owned resources or by filling a resource ID.
-1. Click **Add**.
+ Enter the following information:
+ | Field | Description |
+ | -- | -- |
+ | **Name** | The name of the shared private endpoint. |
+ | **Type** | Select *Microsoft.Web/sites* |
+ | **Subscription** | The subscription containing your Function app. |
+ | **Resource** | Enter the name of your Function app. |
+ | **Request Message** | Enter "please approve" |
+
+1. Select **Add**.
:::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-add.png" :::
-1. The shared private endpoint resource will be in **Succeeded** provisioning state. The connection state is **Pending** approval at target resource side.
+The shared private endpoint resource will be in **Succeeded** provisioning state. The connection state is **Pending** approval at target resource side.
:::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-added.png" lightbox="media\howto-shared-private-endpoints\portal-shared-private-endpoints-added.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
-You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
+You can make the following API call to create a shared private link resource:
-```dotnetcli
+```azurecli
az rest --method put --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview --body @create-pe.json ```
The contents of the *create-pe.json* file, which represent the request body to t
} ```
-The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following:
+The process of creating an outbound private endpoint is a long-running (asynchronous) operation. As in all asynchronous Azure operations, the `PUT` call returns an `Azure-AsyncOperation` header value that looks like the following example:
```plaintext "Azure-AsyncOperation": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview" ```
-You can poll this URI periodically to obtain the status of the operation.
+Poll this URI periodically to obtain the status of the operation by manually querying the `Azure-AsyncOperationHeader` value:
-If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
-
-```dotnetcli
+```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview ```
Wait until the status changes to "Succeeded" before proceeding to the next steps
--
-### Step 2a: Approve the private endpoint connection for the function
+## Approve the private endpoint connection for the function
> [!IMPORTANT]
-> After you approved the private endpoint connection, the Function is no longer accessible from public network. You may need to create other private endpoints in your own virtual network to access the Function endpoint.
+> After you approve the private endpoint connection, the Function is no longer accessible from a public network. You may need to create other private endpoints in your virtual network to access the Function endpoint.
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
-1. In the Azure portal, select the **Networking** tab of your Function App and navigate to **Private endpoint connections**. Click **Configure your private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+1. In the Azure portal, go to your Function app.
+1. Select **Networking** from the left side menu.
+1. Select **Private endpoint connections**.
+1. Select **Private endpoints** in **Inbound Traffic**.
+1. Select the **Connection name** of the private endpoint connection.
+1. Select **Approve**.
:::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-shared-private-endpoints\portal-function-approve-private-endpoint.png" :::
-1. Select the private endpoint that Azure SignalR Service created. In the **Private endpoint** column, identify the private endpoint connection by the name that's specified in the previous API, select **Approve**.
-
- Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+ Make sure that the private endpoint connection appears as shown in the following screenshot. It could take a few minutes for the status to be updated.
:::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-shared-private-endpoints\portal-function-approved-private-endpoint.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
1. List private endpoint connections.
- ```dotnetcli
+ ```azurecli
az network private-endpoint-connection list -n <function-resource-name> -g <function-resource-group-name> --type 'Microsoft.Web/sites' ```
Wait until the status changes to "Succeeded" before proceeding to the next steps
1. Approve the private endpoint connection.
- ```dotnetcli
+ ```azurecli
az network private-endpoint-connection approve --id <private-endpoint-connection-id> ``` --
-### Step 2b: Query the status of the shared private link resource
+## Query the status of the shared private link resource
-It takes minutes for the approval to be propagated to Azure SignalR Service. You can check the state using either Azure portal or Azure CLI.
+The approval takes a few minutes to propagate to the SignalR Service. You can check the state using either the Azure portal or Azure CLI.
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
:::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-shared-private-endpoints\portal-shared-private-endpoints-approved.png" lightbox="media\howto-shared-private-endpoints\portal-shared-private-endpoints-approved.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
-```dotnetcli
+```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview ```
-This would return a JSON, where the connection state would show up as "status" under the "properties" section.
+The command will return a JSON structure, where the connection state is shown as "status" in the "properties" section.
```json {
This would return a JSON, where the connection state would show up as "status" u
```
-If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
+When the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, the shared private link resource is functional, and the SignalR Service can communicate over the private endpoint.
--
-At this point, the private endpoint between Azure SignalR Service and Azure Function is established.
+At this point, the private endpoint between the SignalR Service and Azure Function is established.
-### Step 3: Verify upstream calls are from a private IP
+## Verify upstream calls are from a private IP
-Once the private endpoint is set up, you can verify incoming calls are from a private IP by checking the `X-Forwarded-For` header at upstream side.
+Once the private endpoint is set up, you can verify that incoming calls are from a private IP by checking the `X-Forwarded-For` header at the upstream side.
:::image type="content" alt-text="Screenshot of the Azure portal, showing incoming requests are from a private IP." source="media\howto-shared-private-endpoints\portal-function-log.png" :::
+## Cleanup
+
+If you don't plan to use the resources you've created in this article, you can delete the Resource Group.
+
+>[!CAUTION]
+> Deleting the resource group deletes all resources contained within it. If resources outside the scope of this article exist in the specified resource group, they will also be deleted.
+ ## Next steps Learn more about private endpoints:
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
The diagram shows the adjacency between private clouds and VNets in Azure, Azure
:::image type="content" source="media/adjacency-overview-drawing-final.png" alt-text="Diagram of Azure VMware Solution private cloud adjacency to Azure and on-premises." border="false":::
-## AV36P and AV52 node sizes generally available in Azure VMware Solution
+## AV36P and AV52 node sizes available in Azure VMware Solution
- The new node sizes increase memory and storage options to optimize your workloads. These gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of these new nodes allows large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
+ The new node sizes increase memory and storage options to optimize your workloads. The gains in performance enable you to do more per server, break storage bottlenecks, and lower transaction costs of latency-sensitive workloads. The availability of the new nodes allows large latency-sensitive services to be hosted efficiently on the Azure VMware Solution infrastructure.
**AV36P key highlights for Memory and Storage optimized Workloads:**
backup Multi User Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization.md
To create the Resource Guard in a tenant different from the vault tenant, follow
# [PowerShell](#tab/powershell)
-Use the following command to create a resource guard:
+To create a resource guard, run the following cmdlet:
```azurepowershell-interactive New-AzDataProtectionResourceGuard -Location ΓÇ£LocationΓÇ¥ -Name ΓÇ£ResourceGuardNameΓÇ¥ -ResourceGroupName ΓÇ£rgNameΓÇ¥ ```
+# [CLI](#tab/cli)
+
+To create a resource guard, run the following command:
+
+ ```azurecli-interactive
+ az dataprotection resource-guard create --location "Location" --tags key1="val1" --resource-group "RgName" --resource-guard-name "ResourceGuardName"
+ ```
+ ### Select operations to protect using Resource Guard
To exempt operations, follow these steps:
# [PowerShell](#tab/powershell)
-Use the following commands to update the operations. These exclude operations from protection by the resource guard.
+To update the operations. These exclude operations from protection by the resource guard, run the following cmdlets:
```azurepowershell-interactive $resourceGuard = Get-AzDataProtectionResourceGuard -SubscriptionId "xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxx" -ResourceGroupName "rgName" -Name "resGuardName"
Use the following commands to update the operations. These exclude operations fr
- The second and third commands fetch the critical operations that you want to update. - The fourth command excludes some critical operations from the resource guard.
+# [CLI](#tab/cli)
+
+To update the operations that are to be excluded from being protected by the resource guard, run the following commands:
+
+ ```azurecli-interactive
+ az dataprotection resource-guard update --name
+ --resource-group
+ [--critical-operation-exclusion-list {deleteProtection, getSecurityPIN, updatePolicy, updateProtection}]
+ [--resource-type {Microsoft.RecoveryServices/vaults}]
+ [--tags]
+ [--type]
+
+ ```
+
+**Example**:
+
+ ```azurecli
+ az dataprotection resource-guard update --resource-group "RgName" --resource-guard-name "ResourceGuardName" --resource-type "Microsoft.RecoveryServices/vaults" --critical-operation-exclusion-list deleteProtection getSecurityPIN updatePolicy
+ ```
++
To enable MUA on the vaults, follow these steps.
# [PowerShell](#tab/powershell)
-Use the following command to enable MUA on a Recovery Services vault:
+To enable MUA on a Recovery Services vault, run the following cmdlet:
```azurepowershell-interactive $token = (Get-AzAccessToken -TenantId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx").Token
Use the following command to enable MUA on a Recovery Services vault:
>[!NOTE] >The token parameter is optional and is only needed to authenticate cross tenant protected operations.
+# [CLI](#tab/cli)
+
+To enable MUA on a Recovery Services vault, run the following command:
+
+ ```azurecli-interactive
+ az backup vault resource-guard-mapping update --resource-guard-id
+ [--ids]
+ [--name]
+ [--resource-group]
+ [--tenant-id]
+
+ ```
+
+The tenant ID is required if the resource guard exists in a different tenant.
+
+**Example**:
+
+ ```azurecli
+ az backup vault resource-guard-mapping update --resource-group RgName --name VaultName --resource-guard-id ResourceGuardId
+ ```
+
To disable MUA on a vault, follow these steps:
# [PowerShell](#tab/powershell)
-Use the following command to disable MUA on a Recovery Services vault:
+To disable MUA on a Recovery Services vault, run the following cmdlet:
```azurepowershell-interactive $token = (Get-AzAccessToken -TenantId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx").Token
Use the following command to disable MUA on a Recovery Services vault:
>[!NOTE] >The token parameter is optional and is only needed to authenticate the cross tenant protected operations. -
+# [CLI](#tab/cli)
+
+To disable MUA on a Recovery Services vault, run the following command:
+
+ ```azurecli-interactive
+ az backup vault resource-guard-mapping delete [--ids]
+ [--name]
+ [--resource-group]
+ [--tenant-id]
+ [--yes]
+
+ ```
+
+
+The tenant ID is required if the resource guard exists in a different tenant.
+
+**Example**:
+
+ ```azurecli
+ az backup vault resource-guard-mapping delete --resource-group RgName --name VaultName
+ ```
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md
Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 08/11/2022 Last updated : 11/08/2022
For example, when you have a backup policy of weekly fulls, daily differentials and
#### Excluding backup file types
-The **ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For "Partial Restore as files" operation, a new JSON field ` RecoveryPointsToBeExcludedForRestoreAsFiles ` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
+The **ExtensionSettingsOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For the "Partial Restore as files" operation, a new JSON field `RecoveryPointTypesToBeExcludedForRestoreAsFiles` must be added. This field holds a string value that denotes which recovery point types should be excluded in the next restore as files operation.
1. In the target machine where files are to be downloaded, go to "C:\Program Files\Azure Workload Backup\bin" folder
-2. Create a new JSON file named "ExtensionSettingOverrides.JSON", if it doesn't already exist.
+2. Create a new JSON file named "ExtensionSettingsOverrides.JSON", if it doesn't already exist.
3. Add the following JSON key value pair ```json {
- "RecoveryPointsToBeExcludedForRestoreAsFiles": "ExcludeFull"
+ "RecoveryPointTypesToBeExcludedForRestoreAsFiles": "ExcludeFull"
} ``` 4. No restart of any service is required. The Azure Backup service will attempt to exclude backup types in the restore chain as mentioned in this file.
-The ``` RecoveryPointsToBeExcludedForRestoreAsFiles ``` only takes specific values which denote the recovery points to be excluded during restore. For SQL, these values are:
+The `RecoveryPointTypesToBeExcludedForRestoreAsFiles` only takes specific values which denote the recovery points to be excluded during restore. For SQL, these values are:
- ExcludeFull (Other backup types such as differential and logs will be downloaded, if they are present in the restore point chain) - ExcludeFullAndDifferential (Other backup types such as logs will be downloaded, if they are present in the restore point chain)
+- ExcludeFullAndIncremental (Other backup types such as logs will be downloaded, if they are present in the restore point chain)
+- ExcludeFullAndDifferentialAndIncremental (Other backup types such as logs will be downloaded, if they are present in the restore point chain)
+ ### Restore to a specific restore point
batch Batch Pools Without Public Ip Addresses Classic Retirement Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md
Title: Migrate pools without public IP addresses (classic) in Batch
-description: Learn how to opt in to migrate Azure Batch pools without public IP addresses (classic) and plan for feature end of support.
+description: Learn how to migrate Azure Batch pools without public IP addresses (classic) and plan for feature end of support.
The alternative to using a Batch pool without a public IP address (classic) requ
- Use mutable public network access for Batch accounts. - Get firewall support for Batch account public endpoints. You can configure IP address network rules to restrict public network access to your Batch account.
-## Opt in and migrate your eligible pools
+## Migrate your eligible pools
-When the Batch feature pools without public IP addresses (classic) retires on March 31, 2023, existing pools that use the feature can migrate only if the pools were created in a virtual network. To migrate your eligible pools, complete the opt-in process to use simplified compute node communication:
+When the Batch pools without public IP addresses (classic) feature retires on March 31, 2023, existing pools that use the feature can migrate only if the pools were created in a virtual network. To migrate your eligible pools, use simplified compute node communication:
-1. Opt in to [use simplified compute node communication](./simplified-compute-node-communication.md#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication).
-
- :::image type="content" source="media/certificates/opt-in.png" alt-text="Screenshot that shows creating a support request to opt in.":::
+1. Take steps to enable [simplified compute node communication](./simplified-compute-node-communication.md) on your pools.
1. Create a private endpoint for Batch node management in the virtual network.
When the Batch feature pools without public IP addresses (classic) retires on Ma
- How can I migrate my Batch pools that use the pools without public IP addresses (classic) feature to simplified compute node communication?
- If you created the pools in a virtual network, [opt in and complete the migration process](#opt-in-and-migrate-your-eligible-pools).
-
+ If you created the pools in a virtual network, [complete the migration process](#migrate-your-eligible-pools).
+ If your pools weren't created in a virtual network, create a new simplified compute node communication pool without public IP addresses. - What differences will I see in billing?
When the Batch feature pools without public IP addresses (classic) retires on Ma
- What if I don't migrate my pools to simplified compute node communication pools without public IP addresses?
- After *March 31, 2023*, we will stop supporting Batch pools without public IP addresses (classic). After that date, existing pool functionality, including scale-out operations, might break. The pool might actively be scaled down to zero at any time.
+ After *March 31, 2023*, we'll stop supporting Batch pools without public IP addresses (classic). After that date, existing pool functionality, including scale-out operations, might break. The pool might actively be scaled down to zero at any time.
## Next steps
batch Batch Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-spot-vms.md
# Use Spot VMs with Batch
-Azure Batch offers Spot virtual machines (VMs) to reduce the cost of Batch workloads. Spot VMs make new types of Batch workloads possible by enabling a large amount of compute power to be used for a very low cost.
+Azure Batch offers Spot virtual machines (VMs) to reduce the cost of Batch workloads. Spot VMs make new types of Batch workloads possible by enabling a large amount of compute power to be used for a low cost.
Spot VMs take advantage of surplus capacity in Azure. When you specify Spot VMs in your pools, Azure Batch can use this surplus, when available. The tradeoff for using Spot VMs is that those VMs may not always be available to be allocated, or may be preempted at any time, depending on available capacity. For this reason, Spot VMs are most suitable for batch and asynchronous processing workloads where the job completion time is flexible and the work is distributed across many VMs.
-Spot VMs are offered at a significantly reduced price compared with dedicated VMs. For pricing details, see [Batch Pricing](https://azure.microsoft.com/pricing/details/batch/).
+Spot VMs are offered at a reduced price compared with dedicated VMs. For pricing details, see [Batch Pricing](https://azure.microsoft.com/pricing/details/batch/).
## Differences between Spot and low-priority VMs
-Batch offers two types of low-cost preemptible VMs:
+Batch offers two types of low-cost preemptible VMs:
- [Spot VMs](../virtual-machines/spot-vms.md), a modern Azure-wide offering also available as single-instance VMs or Virtual Machine Scale Sets. - Low-priority VMs, a legacy offering only available through Azure Batch.
The type of node you get depends on your Batch account's pool allocation mode, w
Azure Spot VMs and Batch low-priority VMs are similar but have a few differences in behavior.
-| | Spot VMs | Low-priority VMs |
-| -- | -- | -- |
+| | Spot VMs | Low-priority VMs |
+|-|-|-|
| **Supported Batch accounts** | User-subscription Batch accounts | Batch-managed Batch accounts | | **Supported Batch pool configurations** | Virtual Machine Configuration | Virtual Machine Configuration and Cloud Service Configuration (deprecated) | | **Available regions** | All regions supporting [Spot VMs](../virtual-machines/spot-vms.md) | All regions except Microsoft Azure China 21Vianet |
Some examples of batch processing use cases well suited to use Spot VMs are:
- **Development and testing**: In particular, if large-scale solutions are being developed, significant savings can be realized. All types of testing can benefit, but large-scale load testing and regression testing are great uses. - **Supplementing on-demand capacity**: Spot VMs can be used to supplement regular dedicated VMs. When available, jobs can scale and therefore complete quicker for lower cost; when not available, the baseline of dedicated VMs remains available.-- **Flexible job execution time**: If there is flexibility in the time jobs have to complete, then potential drops in capacity can be tolerated; however, with the addition of Spot VMs jobs frequently run faster and for a lower cost.
+- **Flexible job execution time**: If there's flexibility in the time jobs have to complete, then potential drops in capacity can be tolerated; however, with the addition of Spot VMs jobs frequently run faster and for a lower cost.
Batch pools can be configured to use Spot VMs in a few ways: - A pool can use only Spot VMs. In this case, Batch recovers any preempted capacity when available. This configuration is the cheapest way to execute jobs.-- Spot VMs can be used in conjunction with a fixed baseline of dedicated VMs. The fixed number of dedicated VMs ensures there is always some capacity to keep a job progressing.
+- Spot VMs can be used with a fixed baseline of dedicated VMs. The fixed number of dedicated VMs ensures there's always some capacity to keep a job progressing.
- A pool can use a dynamic mix of dedicated and Spot VMs, so that the cheaper Spot VMs are solely used when available, but the full-priced dedicated VMs are scaled up when required. This configuration keeps a minimum amount of capacity available to keep the jobs progressing.
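As a rough illustration of the mixed configuration, the following Azure CLI sketch creates a pool with a small dedicated baseline and a larger Spot target. The pool ID, VM size, and image values are placeholders, and the command assumes you've already signed in to the account with `az batch account login`.

```azurecli
# Hypothetical pool with 2 dedicated VMs as a baseline and 8 Spot/low-priority VMs for burst capacity.
# Pool ID, VM size, and image values are illustrative placeholders.
az batch pool create \
    --id mixed-spot-pool \
    --vm-size Standard_D2s_v3 \
    --target-dedicated-nodes 2 \
    --target-low-priority-nodes 8 \
    --image canonical:0001-com-ubuntu-server-jammy:22_04-lts \
    --node-agent-sku-id "batch.node.ubuntu 22.04"
```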
-Keep in mind the following when planning your use of Spot VMs:
+Keep in mind the following practices when planning your use of Spot VMs:
- To maximize use of surplus capacity in Azure, suitable jobs can scale out. - Occasionally VMs may not be available or are preempted, which results in reduced capacity for jobs and may lead to task interruption and reruns.-- Tasks with shorter execution times tend to work best with Spot VMs. Jobs with longer tasks may be impacted more if interrupted. If long-running tasks implement checkpointing to save progress as they execute, this impact may be reduced. -- Long-running MPI jobs that utilize multiple VMs are not well suited to use Spot VMs, because one preempted VM can lead to the whole job having to run again.-- Spot nodes may be marked as unusable if [network security group (NSG) rules](batch-virtual-network.md#network-security-groups-specifying-subnet-level-rules) are configured incorrectly.
+- Tasks with shorter execution times tend to work best with Spot VMs. Jobs with longer tasks may be impacted more if interrupted. If long-running tasks implement checkpointing to save progress as they execute, this impact may be reduced.
+- Long-running MPI jobs that utilize multiple VMs aren't well suited to use Spot VMs, because one preempted VM can lead to the whole job having to run again.
+- Spot nodes may be marked as unusable if [network security group (NSG) rules](batch-virtual-network.md#general-virtual-network-requirements) are configured incorrectly.
## Create and manage pools with Spot VMs
Spot VM:
bool? isNodeDedicated = poolNode.IsDedicated; ```
-VMs may occasionally be preempted. When this happens, tasks that were running on the preempted node VMs are requeued and run again.
+VMs may occasionally be preempted. When preemption happens, tasks that were running on the preempted node VMs are requeued and run again.
-For Virtual Machine Configuration pools, Batch also does the following:
+For Virtual Machine Configuration pools, the following behaviors also apply:
-- The preempted VMs have their state updated to **Preempted**.
+- The preempted VMs have their state updated to **Preempted**.
- The VM is effectively deleted, leading to loss of any data stored locally on the VM. - A list nodes operation on the pool will still return the preempted nodes.-- The pool continually attempts to reach the target number of Spot nodes available. When replacement capacity is found, the nodes keep their IDs, but are reinitialized, going through **Creating** and **Starting** states before they are available for task scheduling.
+- The pool continually attempts to reach the target number of Spot nodes available. When replacement capacity is found, the nodes keep their IDs, but are reinitialized, going through **Creating** and **Starting** states before they're available for task scheduling.
- Preemption counts are available as a metric in the Azure portal. ## Scale pools containing Spot VMs
-As with pools solely consisting of dedicated VMs, it is possible to scale a pool containing Spot VMs by calling the Resize method or by using autoscale.
+As with pools solely consisting of dedicated VMs, it's possible to scale a pool containing Spot VMs by calling the Resize method or by using autoscale.
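For instance, a hedged Azure CLI sketch of a resize call; the pool ID and node counts are placeholders:

```azurecli
# Resize an existing pool: keep 2 dedicated nodes and request up to 10 Spot/low-priority nodes.
az batch pool resize \
    --pool-id mixed-spot-pool \
    --target-dedicated-nodes 2 \
    --target-low-priority-nodes 10
```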
The pool resize operation takes a second optional parameter that updates the value of **targetLowPriorityNodes**:
The pool autoscale formula supports Spot VMs as follows:
## Configure jobs and tasks
-Jobs and tasks require little additional configuration for Spot nodes. Keep in mind the following:
+Jobs and tasks may require some extra configuration for Spot nodes:
- The JobManagerTask property of a job has an **AllowLowPriorityNode** property. When this property is true, the job manager task can be scheduled on either a dedicated or Spot node. If it's false, the job manager task is scheduled to a dedicated node only.-- The AZ_BATCH_NODE_IS_DEDICATED [environment variable](batch-compute-node-environment-variables.md) is available to a task application so that it can determine whether it is running on a Spot or on a dedicated node.
+- The `AZ_BATCH_NODE_IS_DEDICATED` [environment variable](batch-compute-node-environment-variables.md) is available to a task application so that it can determine whether it's running on a Spot or on a dedicated node.
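As an illustrative sketch (the job ID, task ID, and Linux shell command line are assumptions, not part of the original article), a task can read the environment variable on a Linux node like this:

```azurecli
# Create a task whose command line reports whether the node it lands on is dedicated or Spot.
# Assumes a Linux pool and an existing job named "myjob"; requires an authenticated az batch session.
az batch task create \
    --job-id myjob \
    --task-id check-node-type \
    --command-line "/bin/bash -c 'echo node dedicated: \$AZ_BATCH_NODE_IS_DEDICATED'"
```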
## View metrics for Spot VMs
To view these metrics in the Azure portal
## Limitations -- Spot VMs in Batch do not support setting a max price and do not support price-based evictions. They can only be evicted for capacity reasons.
+- Spot VMs in Batch don't support setting a max price and don't support price-based evictions. They can only be evicted for capacity reasons.
- Spot VMs are only available for Virtual Machine Configuration pools and not for Cloud Service Configuration pools, which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).-- Spot VMs are not available for some clouds, VM sizes, and subscription offer types. See more about [Spot limitations](../virtual-machines/spot-vms.md#limitations).
+- Spot VMs aren't available for some clouds, VM sizes, and subscription offer types. See more about [Spot limitations](../virtual-machines/spot-vms.md#limitations).
## Next steps
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 12/13/2021 Last updated : 10/26/2022
When you create an Azure Batch pool, you can provision the pool in a subnet of a
## Why use a VNet?
-Compute nodes in a pool can communicate with each other, such as to run multi-instance tasks, without requiring a separate VNet. However, by default, nodes in a pool can't communicate with virtual machines that are outside of the pool, such as license servers or a file servers.
+Compute nodes in a pool can communicate with each other, such as to run multi-instance tasks, without requiring a
+separate VNet. However, by default, nodes in a pool can't communicate with virtual machines that are outside of
+the pool, such as license or file servers.
To allow compute nodes to communicate securely with other virtual machines, or with an on-premises network, you can provision the pool in a subnet of an Azure VNet.
To allow compute nodes to communicate securely with other virtual machines, or w
- To create an Azure Resource Manager-based VNet, see [Create a virtual network](../virtual-network/manage-virtual-network.md#create-a-virtual-network). A Resource Manager-based VNet is recommended for new deployments, and is supported only on pools that use Virtual Machine Configuration. - To create a classic VNet, see [Create a virtual network (classic) with multiple subnets](/previous-versions/azure/virtual-network/create-virtual-network-classic). A classic VNet is supported only on pools that use Cloud Services Configuration.
-## VNet requirements
+## General virtual network requirements
+* The VNet must be in the same subscription and region as the Batch account you use to create your pool.
+
+* The subnet specified for the pool must have enough unassigned IP addresses to accommodate the number of VMs targeted for the pool; that is, the sum of the `targetDedicatedNodes` and `targetLowPriorityNodes` properties of the pool. If the subnet doesn't have enough unassigned IP addresses, the pool partially allocates the compute nodes, and a resize error occurs.
+
+* If you aren't using [Simplified Compute Node Communication](simplified-compute-node-communication.md),
+Azure Storage endpoints need to be resolved by any custom DNS servers that serve your virtual network. Specifically,
+URLs of the form `<account>.table.core.windows.net`, `<account>.queue.core.windows.net`, and
+`<account>.blob.core.windows.net` should be resolvable.
+
+* Multiple pools can be created in the same virtual network or in the same subnet (as long as it has sufficient address space). A single pool can't exist across multiple virtual networks or subnets.
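For example, a minimal sketch that creates a VNet and a subnet sized generously for a pool's target node counts; the resource names, region, and address prefixes are placeholders, and the VNet must be in the same subscription and region as the Batch account:

```azurecli
# A /22 subnet (~1,019 usable addresses) leaves headroom for targetDedicatedNodes + targetLowPriorityNodes.
az network vnet create \
    --resource-group myResourceGroup \
    --name batch-vnet \
    --location eastus \
    --address-prefix 10.0.0.0/16 \
    --subnet-name batch-subnet \
    --subnet-prefix 10.0.0.0/22
```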
+
+Other virtual network requirements differ, depending on whether the Batch pool is in the `VirtualMachineConfiguration`
+or `CloudServiceConfiguration`. `VirtualMachineConfiguration` for Batch pools is recommended as `CloudServiceConfiguration`
+pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
+
+> [!IMPORTANT]
+> Batch pools can be configured in one of two communication modes. `Classic` communication
+> mode is where the Batch service initiates communication to the compute nodes.
+> [`Simplified` communication mode](simplified-compute-node-communication.md)
+> is where the compute nodes initiate communication to the Batch Service.
+
+## Pools in Virtual Machine Configuration
+
+Requirements:
+
+- Supported VNets: Azure Resource Manager-based (ARM) virtual networks only
+- Subnet ID: when specifying the subnet using the Batch APIs, use the *resource identifier* of the subnet. The subnet identifier is of the form:
+
+`/subscriptions/{subscription}/resourceGroups/{group}/providers/Microsoft.Network/virtualNetworks/{network}/subnets/{subnet}`
+
+- Permissions: check whether your security policies or locks on the VNet's subscription or resource group restrict a user's permissions to manage the VNet.
+- Networking resources: Batch automatically creates more networking resources in the resource group containing the VNet.
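To pass the subnet's resource identifier in the form shown above, you can look it up with a query like the following sketch (resource names are placeholders):

```azurecli
# Returns the full ARM resource ID of the subnet, suitable for the pool's network configuration.
az network vnet subnet show \
    --resource-group myResourceGroup \
    --vnet-name batch-vnet \
    --name batch-subnet \
    --query id \
    --output tsv
```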
+
+> [!IMPORTANT]
+> For each 100 dedicated or low-priority nodes, Batch creates: one network security group (NSG), one public IP address,
+> and one load balancer. These resources are limited by the subscription's
+> [resource quotas](../../articles/azure-resource-manager/management/azure-subscription-service-limits.md).
+> For large pools, you might need to request a quota increase for one or more of these resources.
+
+### Network security groups for Virtual Machine Configuration pools: Batch default
+
+Batch will create a network security group (NSG) at the network interface level of each Virtual Machine Scale
+Set deployment within a Batch pool. For pools that don't have public IP addresses under `simplified` compute
+node communication, NSGs aren't created.
+
+To provide the necessary communication between compute nodes and the Batch service, these NSGs are
+configured to allow:
+
+* Inbound TCP traffic on ports 29876 and 29877 from Batch service IP addresses that correspond to the
+`BatchNodeManagement` service tag. This rule is only created in `classic` pool communication mode.
+* Inbound TCP traffic on port 22 (Linux nodes) or port 3389 (Windows nodes) to permit remote access. For certain types of multi-instance tasks on Linux (such as MPI), you'll need to also allow SSH port 22 traffic for IPs in the subnet containing the Batch compute nodes. This traffic may be blocked per subnet-level NSG rules (see below).
+* Outbound any traffic on port 443 to Batch service IP addresses that correspond to the `BatchNodeManagement` service tag.
+* Outbound traffic on any port to the virtual network. This rule may be amended per subnet-level NSG rules (see below).
+* Outbound traffic on any port to the Internet. This rule may be amended per subnet-level NSG rules (see below).
+
+> [!IMPORTANT]
+> Use caution if you modify or add inbound or outbound rules in Batch-configured NSGs. If communication to the compute nodes in the specified subnet is denied by an NSG, the Batch service will set the state of the compute nodes to **unusable**. Additionally, no resource locks should be applied to any resource created by Batch, since this can prevent cleanup of resources as a result of user-initiated actions such as deleting a pool.
+
+### Network security groups for Virtual Machine Configuration pools: Specifying subnet-level rules
+
+If you have an NSG associated with the subnet for Batch compute nodes, you must configure this
+NSG with at least the inbound and outbound security rules that are shown in the following tables.
+
+> [!WARNING]
+> Batch service IP addresses can change over time. Therefore, we highly recommend that you use the `BatchNodeManagement` service tag (or a regional variant) for the NSG rules indicated in the following tables. Avoid populating NSG rules with specific Batch service IP addresses.
+
+#### Inbound security rules
+
+| Source Service Tag or IP Addresses | Destination Ports | Protocol | Pool Communication Mode | Required |
+|-|-|-|-|-|
+| `BatchNodeManagement.<region>` [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 29876-29877 | TCP | Classic | Yes |
+| Source IP addresses for remotely accessing compute nodes | 3389 (Windows), 22 (Linux) | TCP | Classic or Simplified | No |
+
+Configure inbound traffic on port 3389 (Windows) or 22 (Linux) only if you need to permit remote access
+to the compute nodes from outside sources. You may need to enable port 22 rules on Linux if you require
+support for multi-instance tasks with certain MPI runtimes. Allowing traffic on these ports isn't strictly
+required for the pool compute nodes to be usable. You can also disable default remote access on these ports
+by configuring [pool endpoints](pool-endpoint-configuration.md).
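A hedged sketch of the required inbound rule for `classic` mode on a subnet-level NSG; the NSG name, resource group, region in the service tag, and rule priority are placeholders:

```azurecli
# Allow the Batch service to reach compute nodes on ports 29876-29877 (classic communication mode only).
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name batch-subnet-nsg \
    --name AllowBatchNodeManagementInbound \
    --priority 150 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes BatchNodeManagement.eastus \
    --source-port-ranges '*' \
    --destination-address-prefixes '*' \
    --destination-port-ranges 29876-29877
```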
+
+#### Outbound security rules
+
+| Destination Service Tag | Destination Ports | Protocol | Pool Communication Mode | Required |
+|-|-|-|-|-|
+| `BatchNodeManagement.<region>` [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 443 | * | Simplified | Yes |
+| `Storage.<region>` [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 443 | TCP | Classic | Yes |
+
+Outbound to `BatchNodeManagement.<region>` service tag is required in `classic` pool communication mode
+if using Job Manager tasks or if your tasks must communicate back to the Batch service. For outbound to
+`BatchNodeManagement.<region>` in `simplified` pool communication mode, the Batch service currently only
+uses TCP protocol, but UDP may be required for future compatibility. For
+[pools without public IP addresses](simplified-node-communication-pool-no-public-ip.md)
+using `simplified` communication mode and with a node management private endpoint, an NSG isn't needed.
+For more information about outbound security rules for the `BatchNodeManagement` service tag, see
+[Use simplified compute node communication](simplified-compute-node-communication.md).
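Along the same lines, a sketch of the outbound rule for `simplified` mode; names, region, and priority are placeholders, and the protocol is left as any to allow for possible future UDP use:

```azurecli
# Allow compute nodes to reach the Batch service over port 443 (simplified communication mode).
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name batch-subnet-nsg \
    --name AllowBatchNodeManagementOutbound \
    --priority 200 \
    --direction Outbound \
    --access Allow \
    --protocol '*' \
    --source-address-prefixes '*' \
    --source-port-ranges '*' \
    --destination-address-prefixes BatchNodeManagement.eastus \
    --destination-port-ranges 443
```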
+
+## Pools in the Cloud Services Configuration
+
+> [!WARNING]
+> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Please use Virtual Machine Configuration pools instead.
+
+Requirements:
+
+- Supported VNets: Classic VNets only
+- Subnet ID: when specifying the subnet using the Batch APIs, use the *resource identifier* of the subnet. The subnet identifier is of the form:
+
+`/subscriptions/{subscription}/resourceGroups/{group}/providers/Microsoft.ClassicNetwork/virtualNetworks/{network}/subnets/{subnet}`
+
+- Permissions: the `Microsoft Azure Batch` service principal must have the `Classic Virtual Machine Contributor` Azure role for the specified VNet.
+
+### Network security groups for Cloud Services Configuration pools
+
+The subnet must allow inbound communication from the Batch service to be able to schedule tasks on the compute nodes, and outbound communication to communicate with Azure Storage or other resources.
+
+You don't need to specify an NSG, because Batch configures inbound communication only from Batch IP addresses to the pool nodes. However, if the specified subnet has associated NSGs and/or a firewall, configure the inbound and outbound security rules as shown in the following tables. If communication to the compute nodes in the specified subnet is denied by an NSG, the Batch service sets the state of the compute nodes to **unusable**.
+
+Configure inbound traffic on port 3389 for Windows if you need to permit RDP access to the pool nodes. This rule isn't required for the pool nodes to be usable.
+
+**Inbound security rules**
+
+| Source IP addresses | Source ports | Destination | Destination ports | Protocol | Action |
+| | | | | | |
+| Any <br /><br />Although this rule effectively requires "allow all", the Batch service applies an ACL rule at the level of each node that filters out all non-Batch service IP addresses. | * | Any | 10100, 20100, 30100 | TCP | Allow |
+| Optional, to allow RDP access to compute nodes. | * | Any | 3389 | TCP | Allow |
+
+**Outbound security rules**
+
+| Source | Source ports | Destination | Destination ports | Protocol | Action |
+| | | | | | |
+| Any | * | Any | 443 | Any | Allow |
## Create a pool with a VNet in the Azure portal
-Once you have created your VNet and assigned a subnet to it, you can create a Batch pool with that VNet. Follow these steps to create a pool from the Azure portal: 
+Once you've created your VNet and assigned a subnet to it, you can create a Batch pool with that VNet. Follow these steps to create a pool from the Azure portal:
1. Navigate to your Batch account in the Azure portal. This account must be in the same subscription and region as the resource group containing the VNet you intend to use. 1. In the **Settings** window on the left, select the **Pools** menu item. 1. In the **Pools** window, select **Add**. 1. On the **Add Pool** window, select the option you intend to use from the **Image Type** dropdown. 1. Select the correct **Publisher/Offer/Sku** for your custom image.
-1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, as well as any desired optional settings.
+1. Specify the remaining required settings, including the **Node size**, **Target dedicated nodes**, and **Target Spot/low-priority nodes**, and any desired optional settings.
1. In **Virtual Network**, select the virtual network and subnet you wish to use. 1. Select **OK** to create your pool.
Once you have created your VNet and assigned a subnet to it, you can create a Ba
You might have requirements in your organization to redirect (force) internet-bound traffic from the subnet back to your on-premises location for inspection and logging. Additionally, you may have enabled forced tunneling for the subnets in your VNet.
-To ensure that the nodes in your pool work in a VNet that has forced tunneling enabled, you must add the following [user-defined routes](../virtual-network/virtual-networks-udr-overview.md) (UDR) for that subnet:
+To ensure that the nodes in your pool work in a VNet that has forced tunneling enabled, you must add the following [user-defined routes](../virtual-network/virtual-networks-udr-overview.md) (UDR) for that subnet.
+
+For classic communication mode pools:
- The Batch service needs to communicate with nodes for scheduling tasks. To enable this communication, add a UDR corresponding to the `BatchNodeManagement.<region>` [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) in the region where your Batch account exists. Set the **Next hop type** to **Internet**. -- Ensure that outbound TCP traffic to the Azure Batch `BatchNodeManagement.<region>` service tag on destination port 443 is not blocked by your on-premises network. This is required for [Simplified Compute Node Communication](simplified-compute-node-communication.md).
+- Ensure that outbound TCP traffic to Azure Storage on destination port 443 (specifically, URLs of the form `*.table.core.windows.net`, `*.queue.core.windows.net`, and `*.blob.core.windows.net`) isn't blocked by your on-premises network.
+
+For [simplified communication mode](simplified-compute-node-communication.md) pools without using node management private endpoint:
+
+- Ensure that outbound TCP/UDP traffic to the Azure Batch `BatchNodeManagement.<region>` service tag on destination port 443 isn't blocked by your on-premises network. Currently only TCP protocol is used, but UDP may be required for future compatibility.
-- Ensure that outbound TCP traffic to Azure Storage on destination port 443 (specifically, URLs of the form `*.table.core.windows.net`, `*.queue.core.windows.net`, and `*.blob.core.windows.net`) is not blocked by your on-premises network.
+For all pools:
-- If you use virtual file mounts, review the [networking requirements](virtual-file-mount.md#networking-requirements) and ensure that no required traffic is blocked.
+- If you use virtual file mounts, review the [networking requirements](virtual-file-mount.md#networking-requirements), and ensure that no required traffic is blocked.
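As a rough sketch of adding the UDR described above for classic mode pools, assuming the route table is already associated with the pool's subnet and that your API version accepts a service tag as the address prefix; the route table name, route name, and region are placeholders:

```azurecli
# Route traffic destined for the Batch node management service tag directly to the Internet,
# bypassing the forced-tunnel default route.
az network route-table route create \
    --resource-group myResourceGroup \
    --route-table-name batch-route-table \
    --name ToBatchNodeManagement \
    --address-prefix BatchNodeManagement.eastus \
    --next-hop-type Internet
```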
> [!WARNING] > Batch service IP addresses can change over time. To prevent outages due to Batch service IP address changes, do not directly specify IP addresses. Instead use the `BatchNodeManagement.<region>` [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes).
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Number Friendly Name Serial Number HealthStatus Opera
Where disk number 2 is the uninitialized data disk attached to this compute node. These disks can then be initialized, partitioned, and formatted as required for your workflow.
-For more information about Azure data disks in Linux, see this [article](../virtual-machine-scale-sets/tutorial-use-disks-powershell.md).
+For more information about Azure data disks in Windows, see this [article](../virtual-machine-scale-sets/tutorial-use-disks-powershell.md).
### Collect Batch agent logs
batch Create Pool Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-public-ip.md
For information about creating pools without public IP addresses, read [Create a
## Prerequisites - The Batch client API must use [Azure Active Directory (AD) authentication](batch-aad-auth.md) to use a public IP address.-- An [Azure VNet](batch-virtual-network.md) from the same subscription where you're creating your pool and IP addresses. You can only use Azure Resource Manager-based VNets. Verify that the VNet meets all of the [general VNet requirements](batch-virtual-network.md#vnet-requirements).
+- An [Azure VNet](batch-virtual-network.md) from the same subscription where you're creating your pool and IP addresses. You can only use Azure Resource Manager-based VNets. Verify that the VNet meets all of the [general VNet requirements](batch-virtual-network.md#general-virtual-network-requirements).
- At least one existing Azure public IP address. Follow the [public IP address requirements](#public-ip-address-requirements) to create and configure the IP addresses. > [!NOTE]
Request body:
## Next steps - [Learn about the Batch service workflow and primary resources](batch-service-workflow-features.md).-- [Create a pool in a subnet of an Azure virtual network](batch-virtual-network.md).
+- [Create a pool in a subnet of an Azure virtual network](batch-virtual-network.md).
batch Nodes And Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/nodes-and-pools.md
Enabling internode communication also impacts the placement of the nodes within
## Start tasks
-If desired, you can add a [start task](jobs-and-tasks.md#start-task) that will executes on each node as that node joins the pool, and each time a node is restarted or reimaged. The start task is especially useful for preparing compute nodes for the execution of tasks, like installing the applications that your tasks run on the compute nodes.
+If desired, you can add a [start task](jobs-and-tasks.md#start-task) that will execute on each node as that node joins the pool, and each time a node is restarted or reimaged. The start task is especially useful for preparing compute nodes for the execution of tasks, like installing the applications that your tasks run on the compute nodes.
## Application packages
When you provision a pool of compute nodes in Batch, you can associate the pool
### VNet requirements - For more information about setting up a Batch pool in a VNet, see [Create a pool of virtual machines with your virtual network](batch-virtual-network.md). > [!TIP]
batch Simplified Compute Node Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-compute-node-communication.md
Title: Use simplified compute node communication
-description: Learn how the Azure Batch service is simplifying the way Batch pool infrastructure is managed and how to opt in or out of the feature.
+description: Learn about the simplified compute node communication mode in the Azure Batch service and how to enable it.
Previously updated : 06/02/2022 Last updated : 11/02/2022 # Use simplified compute node communication
-An Azure Batch pool contains one or more compute nodes which execute user-specified workloads in the form of Batch tasks. To enable Batch functionality and Batch pool infrastructure management, compute nodes must communicate with the Azure Batch service.
+An Azure Batch pool contains one or more compute nodes that execute user-specified workloads in the form of Batch tasks. To enable Batch functionality and Batch pool infrastructure management, compute nodes must communicate with the Azure Batch service.
-This document describes forthcoming changes with how the Azure Batch service communicates with Batch pool compute nodes, the network configuration changes which may be required, and how to opt your Batch accounts in or out of using the new simplified compute node communication feature during the public preview period.
+Batch supports two types of node communication modes:
+- `Classic` where the Batch service initiates communication to the compute nodes
+- `Simplified` where the compute nodes initiate communication to the Batch service
-> [!IMPORTANT]
-> Support for simplified compute node communication in Azure Batch is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+This document describes the simplified compute node communication mode and the associated network configuration requirements.
-Opting in isn't required at this time. However, in the future, using simplified compute node communication will be required and defaulted for all Batch accounts.
+> [!TIP]
+> Information in this document pertaining to networking resources and rules such as NSGs does not apply to
+> Batch pools with [no public IP addresses](simplified-node-communication-pool-no-public-ip.md) using the
+> node management private endpoint without Internet outbound access.
## Supported regions Simplified compute node communication in Azure Batch is currently available for the following regions: -- Public: Central US EUAP, East US 2 EUAP, West Central US, North Central US, South Central US, East US, East US 2, West US 2, West US, Central US, West US 3, East Asia, South East Asia, Australia East, Australia Southeast, Brazil Southeast, Brazil South, Canada Central, Canada East, North Europe, West Europe, Central India, South India, Japan East, Japan West, Korea Central, Korea South, Sweden Central, Sweden South, Switzerland North, Switzerland West, UK West, UK South, UAE North, France Central, Germany West Central, Norway East, South Africa North.
+- Public: all public regions where Batch is present except for West India and France South.
- Government: USGov Arizona, USGov Virginia, USGov Texas. - China: China North 3.
-## Compute node communication changes
+## Compute node communication differences between Classic and Simplified
-The Azure Batch service is simplifying the way Batch pool infrastructure is managed on behalf of users. The new communication method reduces the complexity and scope of inbound and outbound networking connections required in baseline operations.
+The simplified compute node communication mode streamlines the way Batch pool infrastructure is
+managed on behalf of users. This communication mode reduces the complexity and scope of inbound
+and outbound networking connections required in baseline operations.
-Batch pools in accounts which haven't been opted in to simplified compute node communication require the following networking rules in network security groups (NSGs), user-defined routes (UDRs), and firewalls when [creating a pool in a virtual network](batch-virtual-network.md):
+Batch pools with the `classic` communication mode require the following networking rules in network
+security groups (NSGs), user-defined routes (UDRs), and firewalls when
+[creating a pool in a virtual network](batch-virtual-network.md):
- Inbound: - Destination ports 29876, 29877 over TCP from BatchNodeManagement.*region*
Batch pools in accounts which haven't been opted in to simplified compute node c
- Destination port 443 over TCP to Storage.*region* - Destination port 443 over TCP to BatchNodeManagement.*region* for certain workloads that require communication back to the Batch Service, such as Job Manager tasks
-With the new model, Batch pools in accounts that use simplified compute node communication require the following networking rules in NSGs, UDRs, and firewalls:
+Batch pools with the `simplified` communication mode require the following networking rules in
+NSGs, UDRs, and firewalls:
- Inbound: - None - Outbound:
- - Destination port 443 over TCP to BatchNodeManagement.*region*
+ - Destination port 443 over ANY to BatchNodeManagement.*region*
-Outbound requirements for a Batch account can be discovered using the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints). This API will report the base set of dependencies, depending upon the Batch account pool communication model. User-specific workloads may need additional rules such as opening traffic to other Azure resources (such as Azure Storage for Application Packages, Azure Container Registry, etc.) or endpoints like the Microsoft package repository for virtual file system mounting functionality.
+Outbound requirements for a Batch account can be discovered using the
+[List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints).
+This API will report the base set of dependencies, depending upon the Batch account pool communication mode.
+User-specific workloads may need extra rules such as opening traffic to other Azure resources (such as Azure
+Storage for Application Packages, Azure Container Registry, etc.) or endpoints like the Microsoft package
+repository for virtual file system mounting functionality.
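For example, assuming a recent Azure CLI version that includes the `outbound-endpoints` command group, the dependency list might be retrieved like this (the account and resource group names are placeholders):

```azurecli
# List the outbound network dependencies reported for the Batch account.
az batch account outbound-endpoints list \
    --resource-group myResourceGroup \
    --name mybatchaccount \
    --output table
```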
-## Benefits of the new model
+## Benefits of the simplified communication mode
-Azure Batch users who [opt in to the new communication model](#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication) benefit from simplification of networking connections and rules.
+Azure Batch users utilizing the simplified mode benefit from simplification of networking connections and
+rules. Simplified compute node communication helps reduce security risks by removing the requirement to open
+ports for inbound communication from the internet. Only a single outbound rule to a well-known Service Tag is
+required for baseline operation.
-Simplified compute node communication helps reduce security risks by removing the requirement to open ports for inbound communication from the internet. Only a single outbound rule to a well-known Service Tag is required for baseline operation.
+The `simplified` mode also provides more fine-grained data exfiltration control than the `classic`
+communication mode, since outbound communication to Storage.*region* is no longer required. You can
+explicitly lock down outbound communication to Azure Storage if necessary for your workflow. For
+example, you can scope your outbound communication rules to Azure Storage to enable your AppPackage
+storage accounts or other storage accounts for resource files or output files.
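For example, a hedged sketch of an outbound NSG rule scoped to regional Azure Storage only; the names, priority, and region are placeholders:

```azurecli
# Permit outbound HTTPS only to Azure Storage in the pool's region; other outbound storage traffic
# can then be restricted with additional deny rules if your workflow requires it.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name batch-subnet-nsg \
    --name AllowRegionalStorageOutbound \
    --priority 210 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes '*' \
    --source-port-ranges '*' \
    --destination-address-prefixes Storage.eastus \
    --destination-port-ranges 443
```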
-The new model also provides more fine-grained data exfiltration control, since outbound communication to Storage.*region* is no longer required. You can explicitly lock down outbound communication to Azure Storage if required for your workflow (such as AppPackage storage accounts, other storage accounts for resource files or output files, or other similar scenarios).
+Even if your workloads aren't currently impacted by the changes (as described in the next section), it's
+recommended to move to the `simplified` mode. Doing so will ensure your Batch workloads are ready for any
+future improvements enabled by this mode, and for when this communication mode becomes the default.
-Even if your workloads aren't currently impacted by the changes (as described in the next section), you may still want to [opt in to use simplified compute node communication](#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication) now. This will ensure your Batch workloads are ready for any future improvements enabled by this model.
+## Potential impact between classic and simplified communication modes
-## Scope of impact
+In many cases, the `simplified` communication mode won't directly affect your Batch workloads. However,
+simplified compute node communication does have an impact in the following cases:
-In many cases, this new communication model won't directly affect your Batch workloads. However, simplified compute node communication will have an impact for the following cases:
--- Users who specify a Virtual Network as part of creating a Batch pool and do one or both of the following:
+- Users who specify a Virtual Network as part of creating a Batch pool and do one or both of the following actions:
- Explicitly disable outbound network traffic rules that are incompatible with simplified compute node communication. - Use UDRs and firewall rules that are incompatible with simplified compute node communication.-- Users who enable software firewalls on compute nodes and explicitly disable outbound traffic in software firewall rules which are incompatible with simplified compute node communication.
+- Users who enable software firewalls on compute nodes and explicitly disable outbound traffic in software firewall rules that are incompatible with simplified compute node communication.
-If either of these cases applies to you, and you would like to opt in to the preview, follow the steps outlined in the next section to ensure that your Batch workloads can still function under the new model.
+If either of these cases applies to you, then follow the steps outlined in the next section to ensure that
+your Batch workloads can still function under the `simplified` mode. We strongly recommend that you test and
+verify all of your changes in a dev and test environment first before pushing your changes into production.
-### Required network configuration changes
+### Required network configuration changes for simplified communication mode
-For impacted users, the following set of steps is required to migrate to the new communication model:
+The following set of steps is required to migrate to the new communication mode:
-1. Ensure your networking configuration as applicable to Batch pools (NSGs, UDRs, firewalls, etc.) includes a union of the models (that is, the network rules prior to simplified compute node communication and after). At a minimum, these rules would be:
+1. Ensure your networking configuration as applicable to Batch pools (NSGs, UDRs, firewalls, etc.) includes a union of the modes (that is, the combined network rules of both `classic` and `simplified` modes). At a minimum, these rules would be:
- Inbound: - Destination ports 29876, 29877 over TCP from BatchNodeManagement.*region* - Outbound: - Destination port 443 over TCP to Storage.*region*
- - Destination port 443 over TCP to BatchNodeManagement.*region*
-1. If you have any additional inbound or outbound scenarios required by your workflow, you'll need to ensure that your rules reflect these requirements.
-1. [Opt in to simplified compute node communication](#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication) as described below.
-1. Use one of the following options to update your workloads to use the new communication model. Whichever method you use, keep in mind that pools without public IP addresses are unaffected and can't currently use simplified compute node communication. Please see the [Current limitations](#current-limitations) section.
- 1. Create new pools and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools.
- 1. Resize all existing pools to zero nodes and scale back out.
-1. After confirming that all previous pools have been either deleted or scaled to zero and back out, query the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints) to confirm that no outbound rule to Azure Storage for the region exists (excluding any autostorage accounts if linked to your Batch account).
-1. Modify all applicable networking configuration to the Simplified Compute Node Communication rules, at the minimum (please note any additional rules needed as discussed above):
+ - Destination port 443 over ANY to BatchNodeManagement.*region*
+1. If you have any other inbound or outbound scenarios required by your workflow, you'll need to ensure that your rules reflect these requirements.
+1. Use one of the following options to update your workloads to use the new communication mode.
+ - Create new pools with the `targetNodeCommunicationMode` set to `simplified` and validate that the new pools are working correctly. Migrate your workload to the new pools and delete any earlier pools.
+ - Update the `targetNodeCommunicationMode` property of existing pools to `simplified`, and then resize those pools to zero nodes and scale back out.
+1. Use the [Get Pool](/rest/api/batchservice/pool/get) or [List Pool](/rest/api/batchservice/pool/list) API, or the Azure portal, to confirm that `currentNodeCommunicationMode` is set to the desired communication mode of `simplified`.
+1. Modify all applicable networking configuration to the Simplified Compute Node Communication rules, at the minimum (note any extra rules needed as discussed above):
- Inbound: - None - Outbound:
- - Destination port 443 over TCP to BatchNodeManagement.*region*
-
-If you follow these steps, but later want to stop using simplified compute node communication, you'll need to do the following:
-
-1. [Opt out of simplified compute node communication](#opt-your-batch-account-in-or-out-of-simplified-compute-node-communication) as described below.
-1. Migrate your workload to new pools, or resize existing pools and scale back out (see step 4 above).
-1. Confirm that all of your pools are no longer using simplified compute node communication by using the [List Outbound Network Dependencies Endpoints API](/rest/api/batchmanagement/batch-account/list-outbound-network-dependencies-endpoints). You should see an outbound rule to Azure Storage for the region (independent of any autostorage accounts linked to your Batch account).
-
-## Opt your Batch account in or out of simplified compute node communication
-
-To opt a Batch account in or out of simplified compute node communication, [create a new support request in the Azure portal](../azure-portal/supportability/how-to-create-azure-support-request.md).
-
-> [!IMPORTANT]
-> When you opt in (or opt out) of simplified compute node communication, the change only impacts future behavior. Any Batch pools containing non-zero compute nodes that were created before the request are unaffected, and will use whichever model was active before the request. Please see the migration steps for more information on how to migrate existing pools before either opting-in or opting-out.
-
-Use the following options when creating your request.
--
-1. For **Issue type**, select **Technical**.
-1. For **Service type**, select **Batch Service**.
-1. For **Resource**, select the Batch account for this request.
-1. For **Summary**:
- - To opt in, type "Enable simplified compute node communication".
- - To opt our, type "Disable simplified compute node communication".
-1. For **Problem type**, select **Batch Accounts**.
-1. For **Problem subtype**, select **Other issues with Batch Accounts**.
-1. Select **Next**, then select **Next** again to go to the **Additional details** page.
-1. In **Additional details**, you can optionally specify that you want to enable all of the Batch accounts in your subscription, or across multiple subscriptions. If you do so, be sure to include the subscription IDs here.
-1. Make any other required selections on the page, then select **Next**.
-1. Review your request details, then select **Create** to submit your support request.
-
-After your request has been submitted, you'll be notified once the account has been opted in (or out).
-
-## Current limitations
-
-The following are known limitations for accounts that opt in to simplified compute node communication:
+ - Destination port 443 over ANY to BatchNodeManagement.*region*
+
+If you follow these steps, but later want to switch back to `classic` compute node communication, you'll need to take the following actions:
+
+1. Create new pools with `targetNodeCommunicationMode` set to `classic`, or update the `targetNodeCommunicationMode` property of existing pools to `classic`.
+1. Migrate your workload to these pools, or resize existing pools and scale back out (see step 3 above).
+1. See step 4 above to confirm that your pools are operating in `classic` communication mode.
+1. Optionally revert your networking configuration.
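As a rough sketch of the scale-to-zero-and-back-out step referenced in the migration lists above; the pool ID and node counts are placeholders, and the `currentNodeCommunicationMode` query assumes a CLI and API version that surfaces the property:

```azurecli
# Scale the pool down to zero nodes so the new communication mode can take effect...
az batch pool resize --pool-id mypool --target-dedicated-nodes 0 --target-low-priority-nodes 0

# ...then scale back out to the desired size once the first resize completes.
az batch pool resize --pool-id mypool --target-dedicated-nodes 4 --target-low-priority-nodes 0

# Confirm which communication mode the pool is actually using.
az batch pool show --pool-id mypool --query currentNodeCommunicationMode
```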
+
+## Specifying the node communication mode on a Batch pool
+
+Below are examples of how to create a Batch pool with `simplified` compute node communication.
+
+> [!TIP]
+> Specifying the target node communication mode is a preference indication for the Batch service and not a guarantee that it
+> will be honored. Certain configurations on the pool may prevent the Batch service from honoring the specified target node
+> communication mode, such as interaction with No public IP address, virtual networks, and the pool configuration type.
+
+### Azure portal
+
+Navigate to the Pools blade of your Batch account and click the Add button. Under `OPTIONAL SETTINGS`, you can
+select `Simplified` from the `Node communication mode` pull-down, as shown below.
+
+ :::image type="content" source="media/simplified-compute-node-communication/add-pool-simplified-mode.png" alt-text="Screenshot that shows creating a pool with simplified mode.":::
+
+To update an existing pool to simplified communication mode, navigate to the Pools blade of your Batch account and
+click on the pool to update. On the left-side navigation, select `Node communication mode`. There you'll be able
+to select a new target node communication mode as shown below. After selecting the appropriate communication mode,
+click the `Save` button to update. You'll need to scale the pool down to zero nodes first, and then back out
+for the change to take effect, if conditions allow.
+
+ :::image type="content" source="media/simplified-compute-node-communication/update-pool-simplified-mode.png" alt-text="Screenshot that shows updating a pool to simplified mode.":::
+
+To display the current node communication mode for a pool, navigate to the Pools blade of your Batch account, and
+click on the pool to view. Select `Properties` on the left-side navigation and the pool node communication mode
+will be shown under the General section.
+
+ :::image type="content" source="media/simplified-compute-node-communication/get-pool-simplified-mode.png" alt-text="Screenshot that shows properties with a pool with simplified mode.":::
+
+### REST API
+
+This example shows how to use the [Batch Service REST API](/rest/api/batchservice/pool/add) to create a pool with
+`simplified` compute node communication.
+
+```http
+POST {batchURL}/pools?api-version=2022-10-01.16.0
+client-request-id: 00000000-0000-0000-0000-000000000000
+```
+
+#### Request body
+
+```json
+"pool": {
+ "id": "pool-simplified",
+ "vmSize": "standard_d2s_v3",
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": "Canonical",
+ "offer": "0001-com-ubuntu-server-jammy",
+ "sku": "22_04-lts"
+ },
+ "nodeAgentSKUId": "batch.node.ubuntu 22.04"
+ },
+ "resizeTimeout": "PT15M",
+ "targetDedicatedNodes": 2,
+ "targetLowPriorityNodes": 0,
+ "taskSlotsPerNode": 1,
+ "taskSchedulingPolicy": {
+ "nodeFillType": "spread"
+ },
+ "enableAutoScale": false,
+ "enableInterNodeCommunication": false,
+ "targetNodeCommunicationMode": "simplified"
+}
+```
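+
+To confirm the outcome, a GET on the pool returns both the requested and the effective mode. This is a sketch assuming the `targetNodeCommunicationMode` and `currentNodeCommunicationMode` properties exposed by this API version; the latter reflects the mode the pool is actually using:
+
+```http
+GET {batchURL}/pools/pool-simplified?api-version=2022-10-01.16.0
+client-request-id: 00000000-0000-0000-0000-000000000000
+```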
+
+## Limitations
+
+The following are known limitations of the `simplified` communication mode:
+
+- Limited migration support for previously created pools without public IP addresses
+([V1 preview](batch-pool-no-public-ip-address.md)). These pools can be migrated only if they were created in a
+[virtual network](batch-virtual-network.md); otherwise, they won't use simplified compute node communication, even
+if it's specified on the pool. For more information, see the
+[migration guide](batch-pools-without-public-ip-addresses-classic-retirement-migration-guide.md).
+- Cloud Service Configuration pools are currently not supported for simplified compute node communication and are
+[deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
Specifying a communication mode for these types of pools isn't honored and will always result in `classic`
+communication mode. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see
+[Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
-- Limited migration support for previously created pools without public IP addresses ([V1 preview](batch-pool-no-public-ip-address.md)). They can only be migrated if created in a [virtual network](batch-virtual-network.md), otherwise they won't use simplified compute node communication, even if the Batch account has opted in.-- Cloud Service Configuration pools are currently not supported for simplified compute node communication and are generally deprecated. We recommend using Virtual Machine Configuration for your Batch pools. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md). ## Next steps
cognitive-services Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/troubleshooting.md
This issue usually is caused by audio data. You might see this error because:
* The audio uses an unsupported codec format, which causes the audio data to be treated as silence.
+## Connection closed or timeout
+
+There is a known issue on Windows 11 that might affect some types of Secure Sockets Layer (SSL) and Transport Layer Security (TLS) connections. These connections might have handshake failures. For developers, the affected connections are likely to send multiple frames followed by a partial frame with a size of less than 5 bytes within a single input buffer. If the connection fails, your app will receive an error such as "USP error", "Connection closed", "ServiceTimeout", or "SEC_E_ILLEGAL_MESSAGE".
+
+There is an out-of-band update available for Windows 11 that fixes these issues. You can install the update manually by following the instructions here:
+- [Windows 11 21H2](https://support.microsoft.com/topic/october-17-2022-kb5020387-os-build-22000-1100-out-of-band-5e723873-2769-4e3d-8882-5cb044455a92)
+- [Windows 11 22H2](https://support.microsoft.com/topic/october-25-2022-kb5018496-os-build-22621-755-preview-64040bea-1e02-4b6d-bad1-b036200c2cb3)
+
+The issue started on October 12, 2022, and should be resolved via a Windows update in November 2022.
+ ## Next steps * [Review the release notes](releasenotes.md)
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
The Azure OpenAI service provides two methods for authentication. You can use e
The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure, with a -preview suffix for a preview service. For example:
-```
+```http
POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview ```
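+
+As an illustration of the versioning convention, a complete request to a deployed model might look like the following sketch. The resource name, deployment name, and body values are placeholders, and authentication here uses the `api-key` header:
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2022-06-01-preview
+Content-Type: application/json
+api-key: YOUR_API_KEY
+
+{
+  "prompt": "Once upon a time",
+  "max_tokens": 5
+}
+```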
With the Completions operation, the model will generate one or more predicted co
**Create a completion**
-```
+```http
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/completions?api-version={api-version} ```
Get a vector representation of a given input that can be easily consumed by mach
**Create an embedding**
-```
+```http
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/embeddings?api-version={api-version} ```
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
#### List all available models This API will return the list of all available models in your resource. This includes both 'base models' that are available by default and models you've created from fine-tuning jobs.
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/models?api-version={api-version} ```
GET https://{your-resource-name}.openai.azure.com/openai/models?api-version={api
#### Example request
-```
+```console
curl -X GET https://example_resource_name.openai.azure.com/openai/models?api-version=2022-06-01-preview \ -H "api-key: YOUR_API_KEY" ```
curl -X GET https://example_resource_name.openai.azure.com/openai/models?api-ver
This API will retrieve information on a specific model
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/models/{model_id}?api-version={api-version} ```
GET https://{your-resource-name}.openai.azure.com/openai/models/{model_id}?api-v
#### Example request
-```
+```console
curl -X GET https://example_resource_name.openai.azure.com/openai/models/ada?api-version=2022-06-01-preview \ -H "api-key: YOUR_API_KEY" ```
You can create customized versions of our models using the fine-tuning APIs. The
This API will list your resource's fine-tuning jobs
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes?api-version={api-version} ```
curl -X GET https://your_resource_name.openai.azure.com/openai/fine-tunes?api-ve
This API will create a new job to fine-tune a specified model with the specified dataset.
-```
+```http
POST https://{your-resource-name}.openai.azure.com/openai/fine-tunes?api-version={api-version} ```
curl https://your-resource-name.openai.azure.com/openai/fine-tunes?api-version=2
This API will retrieve information about a specific fine tuning job
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}?api-version={api-version} ```
curl https://example_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65
This API will delete a specific fine tuning job
-```
+```http
DELETE https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}?api-version={api-version} ```
curl https://example_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65
This API will retrieve the events associated with the specified fine tuning job. To stream events as they become available, use the query parameter "stream" and pass a true value (&stream=true)
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}/events?api-version={api-version} ```
GET https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_i
- `2022-06-01-preview` #### Example request
-```
+```console
curl -X GET https://your_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65d49d34e74a80f6328ba6d8d08/events?stream=true&api-version=2022-06-01-preview \ -H "api-key: YOUR_API_KEY" ```
curl -X GET https://your_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f
This API will cancel the specified job
-```
+```http
POST https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_id}/cancel?api-version={api-version} ```
POST https://{your-resource-name}.openai.azure.com/openai/fine-tunes/{fine_tune_
- `2022-06-01-preview` #### Example request
-```
+```console
curl -X POST https://your_resource_name.openai.azure.com/openai/fine-tunes/ft-d3f2a65d49d34e74a80f6328ba6d8d08/cancel?api-version=2022-06-01-preview \ -H "api-key: YOUR_API_KEY" ```
curl -X POST https://your_resource_name.openai.azure.com/openai/fine-tunes/ft-d3
This API will list all the Files that have been uploaded to the resource
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/files?api-version={api-version} ```
GET https://{your-resource-name}.openai.azure.com/openai/files?api-version={api-
#### Example request
-```
+```console
curl -X GET https://example_resource_name.openai.azure.com/openai/files?api-version=2022-06-01-preview \ -H "api-key: YOUR_API_KEY" ```
curl -X GET https://example_resource_name.openai.azure.com/openai/files?api-vers
This API will upload a file that contains the examples used for fine-tuning a model.
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/files?api-version={api-version} ```
curl -X POST https://example_resource_name.openai.azure.com/openai/files?api-ver
#### Example response
-```JSON
+```json
{ "bytes": 405898, "purpose": "fine-tune",
curl -X POST https://example_resource_name.openai.azure.com/openai/files?api-ver
This API will return information on the specified file
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/files/{file_id}?api-version={api-version} ```
curl -X GET https://example_resource_name.openai.azure.com/openai/files/file-6ca
This API will delete the specified file
-```
+```http
DELETE https://{your-resource-name}.openai.azure.com/openai/files/{file_id}?api-version={api-version} ```
curl -X DELETE https://example_resource_name.openai.azure.com/openai/files/file-
This API will download the specified file.
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/files/{file_id}/content?api-version={api-version} ```
GET https://{your-resource-name}.openai.azure.com/openai/files/{file_id}/content
- `2022-06-01-preview` #### Example request
-```
+```console
curl -X GET https://example_resource_name.openai.azure.com/openai/files/file-6ca9bd640c8e4eaa9ec922604226ab6c/content?api-version=2022-06-01-preview \ -H "api-key: YOUR_API_KEY" ```
curl -X GET https://example_resource_name.openai.azure.com/openai/files/file-6ca
Import files from blob storage or other web locations. We recommend you use this option for importing large files. Large files can become unstable when uploaded through multipart forms because the requests are atomic and can't be retried or resumed.
-```
+```http
POST https://{your-resource-name}.openai.azure.com/openai/files/import?api-version={api-version} ```
curl -X POST https://example_resource_name.openai.azure.com/openai/files/files/i
### Example response
-```JSON
+```json
{ "purpose": "fine-tune", "filename": "validationfiletest.jsonl",
curl -X POST https://example_resource_name.openai.azure.com/openai/files/files/i
This API will return a list of all the deployments in the resource.
-```
-POST https://{your-resource-name}.openai.azure.com/openai/deployments?api-version={api-version}
+```http
+GET https://{your-resource-name}.openai.azure.com/openai/deployments?api-version={api-version}
``` **Path parameters**
curl -X GET https://example_resource_name.openai.azure.com/openai/deployments?ap
This API will create a new deployment in the resource. This will enable you to make completions and embeddings calls with the model.
-```
+```http
POST https://{your-resource-name}.openai.azure.com/openai/deployments?api-version={api-version} ```
curl -X POST https://example_resource_name.openai.azure.com/openai/deployments?a
This API will retrieve information about the specified deployment
-```
+```http
GET https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment_id}?api-version={api-version} ```
curl -X GET https://example_resource_name.openai.azure.com/openai/deployments/{d
This API will update an existing deployment. Make sure to set the content-type to `application/merge-patch+json`
-```
+```http
PATCH https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment_id}?api-version={api-version} ```
curl -X PATCH https://example_resource_name.openai.azure.com/openai/deployments
This API will delete the specified deployment
-```
+```http
DELETE https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment_id}?api-version={api-version} ```
DELETE https://{your-resource-name}.openai.azure.com/openai/deployments/{deploym
#### Example request
-```Console
+```console
curl -X DELETE https://example_resource_name.openai.azure.com/openai/deployments/{deployment_id}?api-version=2022-06-01-preview \ -H "api-key: YOUR_API_KEY" ```
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-features.md
Title: "Features: Action and context - Personalizer"
+ Title: "Features: Action and Context - Personalizer"
description: Personalizer uses features, information about actions and context, to make better ranking suggestions. Features can be very generic, or specific to an item.
ms.
Previously updated : 10/14/2019 Last updated : 10/25/2022
-# Features are information about actions and context
+# Context and Actions
-The Personalizer service works by learning what your application should show to users in a given context.
+Personalizer works by learning what your application should show to users in a given context. The **context** and the **actions** are the two most important pieces of information that you pass into Personalizer: the **context** represents the information you have about the current user or the state of your system, and the **actions** are the options to be chosen from.
-Personalizer uses **features**, which is information about the **current context** to choose the best **action**. The features represent all information you think may help personalize to achieve higher rewards. Features can be very generic, or specific to an item.
+## Table of Contents
-For example, you may have a **feature** about:
+* [Context](#context) Information about the current user or state of the system
+* [Actions](#actions) A list of options to choose from
+* [Features](#features) Attributes describing the Context and Actions
+* [Feature Engineering](#feature-engineering) Tips for constructing impactful features
+* [Namespaces](#namespaces) Grouping Features
+* [Examples](#json-examples) Examples of Context and Action features in JSON format
-* The _user persona_ such as a `Sports_Shopper`. This should not be an individual user ID.
-* The _content_ such as if a video is a `Documentary`, a `Movie`, or a `TV Series`, or whether a retail item is available in store.
-* The _current_ period of time such as which day of the week it is.
-Personalizer does not prescribe, limit, or fix what features you can send for actions and context:
+## Context
-* You can send some features for some actions and not for others, if you don't have them. For example, TV series may have attributes movies don't have.
-* You may have some features available only some times. For example, a mobile application may provide more information than a web page.
-* Over time, you may add and remove features about context and actions. Personalizer continues to learn from available information.
-* There must be at least one feature for the context. Personalizer does not support an empty context. If you only send a fixed context every time, Personalizer will choose the action for rankings only regarding the features in the actions.
-* For categorical features, you don't need to define the possible values, and you don't need to pre-define ranges for numerical values.
+Information for the _context_ depends on each application and use case, but it typically may include information such as:
-Features are sent as part of the JSON payload in a [Rank API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank) call. Each Rank call is associated with a personalization _event_. By default, Personalizer will automatically assign an event ID and return it in the Rank response. This default behavior is recommended for most users, however, if you need to create your own unique event ID (for example, using a GUID), then you can provide it in the Rank call as an argument.
+* Demographic and profile information about your user.
+* Information extracted from HTTP headers such as user agent, or derived from HTTP information such as reverse geographic lookups based on IP addresses.
+* Information about the current time, such as day of the week, weekend or not, morning or afternoon, holiday season or not, etc.
+* Information extracted from mobile applications, such as location, movement, or battery level.
+* Historical aggregates of the behavior of users - such as what are the movie genres this user has viewed the most.
+* Information about the state of the system.
-## Supported feature types
+Your application is responsible for loading the information about the context from the relevant databases, sensors, and systems you may have. If your context information doesn't change, you can add logic in your application to cache this information, before sending it to the Rank API.
-Personalizer supports features of string, numeric, and boolean types. It is very likely that your application will mostly use string features, with a few exceptions.
-### How choice of feature type affects Machine Learning in Personalizer
+## Actions
-* **Strings**: For string types, every combination of key and value is treated as a One-Hot feature (e.g. genre:"ScienceFiction" and genre:"Documentary" would create two new input features for the machine learning model.
-* **Numeric**: You should use numerical values when the number is a magnitude that should proportionally affect the personalization result. This is very scenario dependent. In a simplified example e.g. when personalizing a retail experience, NumberOfPetsOwned could be a feature that is numeric as you may want people with 2 or 3 pets to influence the personalization result twice or thrice as much as having 1 pet. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as strings. For example DayOfMonth would be a string with "1","2"..."31". If you have many categories The feature quality can typically be improved by using ranges. For example, Age could be encoded as "Age":"0-5", "Age":"6-10", etc.
-* **Boolean** values sent with value of "false" act as if they hadn't been sent at all.
+Actions represent a list of options.
-Features that are not present should be omitted from the request. Avoid sending features with a null value, because it will be processed as existing and with a value of "null" when training the model.
+Don't send in more than 50 actions when Ranking actions. These may be the same 50 actions every time, or they may change. For example, if you have a product catalog of 10,000 items for an e-commerce application, you may use a recommendation or filtering engine to determine the top 40 a customer may like, and use Personalizer to find the one that will generate the most reward (for example, the user will add to the basket) for the current context.
-## Categorize features with namespaces
-Personalizer takes in features organized into namespaces. You determine, in your application, if namespaces are used and what they should be. Namespaces are used to group features about a similar topic, or features that come from a certain source.
+### Examples of actions
-The following are examples of feature namespaces used by applications:
+The actions you send to the Rank API will depend on what you are trying to personalize.
-* User_Profile_from_CRM
-* Time
-* Mobile_Device_Info
-* http_user_agent
-* VideoResolution
-* UserDeviceInfo
-* Weather
-* Product_Recommendation_Ratings
-* current_time
-* NewsArticle_TextAnalytics
+Here are some examples:
-You can name feature namespaces following your own conventions as long as they are valid JSON keys. Namespaces are used to organize features into distinct sets, and to disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces cannot be nested.
+|Purpose|Action|
+|--|--|
+|Personalize which article is highlighted on a news website.|Each action is a potential news article.|
+|Optimize ad placement on a website.|Each action will be a layout or rules to create a layout for the ads (for example, on the top, on the right, small images, big images).|
+|Display personalized ranking of recommended items on a shopping website.|Each action is a specific product.|
+|Suggest user interface elements such as filters to apply to a specific photo.|Each action may be a different filter.|
+|Choose a chat bot's response to clarify user intent or suggest an action.|Each action is an option of how to interpret the response.|
+|Choose what to show at the top of a list of search results|Each action is one of the top few search results.|
-In the following JSON, `user`, `environment`, `device`, and `activity` are feature namespaces.
+### Load actions from the client application
-> [!Note]
-> Currently we strongly recommend using names for feature namespaces that are UTF-8 based and start with different letters. For example, `user`, `environment`, `device`, and `activity` start with `u`, `e`, `d`, and `a`. Currently having namespaces with same first characters could result in collisions in indexes used for machine learning.
+Features from actions may typically come from content management systems, catalogs, and recommender systems. Your application is responsible for loading the information about the actions from the relevant databases and systems you have. If your actions don't change or getting them loaded every time has an unnecessary impact on performance, you can add logic in your application to cache this information.
-JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
-```JSON
-{
- "contextFeatures": [
- {
- "user": {
- "profileType":"AnonymousUser",
- "latlong": ["47.6,-122.1"]
- }
- },
- {
- "environment": {
- "dayOfMonth": "28",
- "monthOfYear": "8",
- "timeOfDay": "13:00",
- "weather": "sunny"
- }
- },
- {
- "device": {
- "mobile":true,
- "Windows":true
- }
- },
- {
- "activity" : {
- "itemsInCart": 3,
- "cartValue": 250,
- "appliedCoupon": true
- }
- }
- ]
-}
-```
+### Prevent actions from being ranked
-### Restrictions in character sets for namespaces
+In some cases, there are actions that you don't want to display to users. The best way to prevent an action from being ranked is by adding it to the [Excluded Actions](https://learn.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.personalizer.models.rankrequest.excludedactions) list, or not passing it to the Rank Request.
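+
+For example, when calling the REST Rank API directly, the exclusion can be expressed in the request body. The following is a minimal sketch: the endpoint shape, action IDs, and features are placeholders, and it assumes the `excludedActions` field of the Rank request:
+
+```http
+POST https://{your-resource-name}.cognitiveservices.azure.com/personalizer/v1.0/rank
+Ocp-Apim-Subscription-Key: YOUR_API_KEY
+Content-Type: application/json
+
+{
+  "contextFeatures": [
+    { "user": { "profileType": "AnonymousUser" } }
+  ],
+  "actions": [
+    { "id": "article-sports", "features": [ { "topic": "sports" } ] },
+    { "id": "article-politics", "features": [ { "topic": "politics" } ] }
+  ],
+  "excludedActions": [ "article-politics" ]
+}
+```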
-The string you use for naming the namespace must follow some restrictions:
-* It can't be unicode.
-* You can use some of the printable symbols with codes < 256 for the namespace names.
-* You can't use symbols with codes < 32 (not printable), 32 (space), 58 (colon), 124 (pipe), and 126ΓÇô140.
-* It should not start with an underscore "_" or the feature will be ignored.
+In some cases, you might not want events to be trained on by default, i.e., you only want to train events when a specific condition is met. For example, suppose the personalized part of your webpage is below the fold (users have to scroll before interacting with the personalized content). In this case, you will render the entire page, but you only want an event to be trained on when the user scrolls and has a chance to interact with the personalized content. For these cases, you should [Defer Event Activation](concept-active-inactive-events.md) to avoid assigning a default reward to (and training on) events that the end user didn't have a chance to interact with.
-## How to make feature sets more effective for Personalizer
-A good feature set helps Personalizer learn how to predict the action that will drive the highest reward.
+## Features
-Consider sending features to the Personalizer Rank API that follow these recommendations:
+Both the **context** and possible **actions** are described using **features**. The features represent all information you think is important for the decision making process to maximize rewards. A good starting point is to imagine you are tasked with selecting the best action at each timestamp and ask yourself: "What information do I need to make an informed decision? What information do I have available to describe the context and each possible action?". Features can be very generic, or specific to an item.
-* Use categorical and string types for features that are not a magnitude.
+Personalizer does not prescribe, limit, or fix what features you can send for actions and context:
-* There are enough features to drive personalization. The more precisely targeted the content needs to be, the more features are needed.
+* Over time, you may add and remove features about context and actions. Personalizer continues to learn from available information.
+* For categorical features, there is no need to pre-define the possible values.
+* For numeric features, there is no need to pre-define ranges.
+* Feature names starting with an underscore `_` will be ignored.
+* The list of features can be large (hundreds), but we recommend starting with a concise feature set and expanding as necessary.
+* **action** features may or may not have any correlation with **context** features.
+* Features that are not available should be omitted from the request: if the value of a specific feature isn't available for a given request, omit that feature.
+* Avoid sending features with a null value. A null value will be processed as a string with a value of "null", which is undesired.
+
+It's ok and natural for features to change over time. However, keep in mind that Personalizer's machine learning model adapts based on the features it sees. If you send a request containing all new features, Personalizer's model will not be able to leverage past events to select the best action for the current event. Having a 'stable' feature set (with recurring features) will help the performance of Personalizer's machine learning algorithms.
+
+### Context Features
+* Some context features may only be available part of the time. For example, if a user is logged into the online grocery store website, the context will contain features describing purchase history. These features will not be available for a guest user.
+* There must be at least one context feature. Personalizer does not support an empty context.
+* If the context features are identical for every request, Personalizer will choose the globally best action.
+
+### Action Features
+* Not all actions need to contain the same features. For example, in the online grocery store scenario, microwavable popcorn will have a "cooking time" feature, while a cucumber will not.
+* Features for a certain action ID may be available one day, but later on become unavailable.
-* There are enough features of diverse *densities*. A feature is *dense* if many items are grouped in a few buckets. For example, thousands of videos can be classified as "Long" (over 5 min long) and "Short" (under 5 min long). This is a *very dense* feature. On the other hand, the same thousands of items can have an attribute called "Title", which will almost never have the same value from one item to another. This is a very non-dense or *sparse* feature.
+Examples:
-Having features of high density helps the Personalizer extrapolate learning from one item to another. But if there are only a few features and they are too dense, the Personalizer will try to precisely target content with only a few buckets to choose from.
+The following are good examples for action features. These will depend a lot on each application.
-### Improve feature sets
+* Features with characteristics of the actions. For example, is it a movie or a tv series?
+* Features about how users may have interacted with this action in the past. For example, this movie is mostly seen by people in demographics A or B, it's typically played no more than one time.
+* Features about the characteristics of how the user *sees* the actions. For example, does the poster for the movie shown in the thumbnail include faces, cars, or landscapes?
-Analyze the user behavior by doing an Offline Evaluation. This allows you to look at past data to see what features are heavily contributing to positive rewards versus those that are contributing less. You can see what features are helping, and it will be up to you and your application to find better features to send to Personalizer to improve results even further.
+## Supported feature types
-These following sections are common practices for improving features sent to Personalizer.
+Personalizer supports features of string, numeric, and boolean types. It's very likely that your application will mostly use string features, with a few exceptions.
-#### Make features more dense
+### How feature types affect the Machine Learning in Personalizer
-It is possible to improve your feature sets by editing them to make them larger and more or less dense.
+* **Strings**: For string types, every key-value (feature name, feature value) combination is treated as a One-Hot feature (e.g., category:"Produce" and category:"Meat" would internally be represented as different features in the machine learning model).
+* **Numeric**: Only use numeric values when the number is a magnitude that should proportionally affect the personalization result. This is very scenario dependent. Features that are based on numeric units but where the meaning isn't linear - such as Age, Temperature, or Person Height - are best encoded as categorical strings. For example, Age could be encoded as "Age":"0-5", "Age":"6-10", etc. Height could be bucketed as "Height": "<5'0", "Height": "5'0-5'4", "Height": "5'5-5'11", "Height":"6'0-6'4", "Height":">6'4".
+* **Boolean**: Boolean values sent with the value "false" act as if they hadn't been sent at all.
+* **Arrays**: Only numeric arrays are supported.
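+
+As a sketch of how these types might appear together on a single action (the feature names below are illustrative, not prescribed by Personalizer):
+
+```json
+{
+  "id": "microwave-popcorn",
+  "features": [
+    {
+      "category": "Snacks",
+      "price": 3.99,
+      "cookingTimeBucket": "0-5 min",
+      "organic": true,
+      "flavorEmbedding": [0.12, 0.07, 0.83]
+    }
+  ]
+}
+```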
-For example, a timestamp down to the second is a very sparse feature. It could be made more dense (effective) by classifying times into "morning", "midday", "afternoon", etc.
-Location information also typically benefits from creating broader classifications. For example, a Latitude-Longitude coordinate such as Lat: 47.67402┬░ N, Long: 122.12154┬░ W is too precise, and forces the model to learn latitude and longitude as distinct dimensions. When you are trying to personalize based on location information, it helps to group location information in larger sectors. An easy way to do that is to choose an appropriate rounding precision for the Lat-Long numbers, and combine latitude and longitude into "areas" by making them into one string. For example, a good way to represent 47.67402┬░ N, Long: 122.12154┬░ W in regions approximately a few kilometers wide would be "location":"34.3 , 12.1".
+## Feature Engineering
+* Use categorical and string types for features that are not a magnitude.
+* Make sure there are enough features to drive personalization. The more precisely targeted the content needs to be, the more features are needed.
+* Include features of diverse *densities*. A feature is *dense* if many items are grouped in a few buckets. For example, thousands of videos can be classified as "Long" (over 5 min long) and "Short" (under 5 min long). This is a *very dense* feature. On the other hand, the same thousands of items can have an attribute called "Title", which will almost never have the same value from one item to another. This is a very non-dense or *sparse* feature.
-#### Expand feature sets with extrapolated information
+Having features of high density helps Personalizer extrapolate learning from one item to another. But if there are only a few features and they are too dense, Personalizer will try to precisely target content with only a few buckets to choose from.
-You can also get more features by thinking of unexplored attributes that can be derived from information you already have. For example, in a fictitious movie list personalization, is it possible that a weekend vs weekday elicits different behavior from users? Time could be expanded to have a "weekend" or "weekday" attribute. Do national cultural holidays drive attention to certain movie types? For example, a "Halloween" attribute is useful in places where it is relevant. Is it possible that rainy weather has significant impact on the choice of a movie for many people? With time and place, a weather service could provide that information and you can add it as an extra feature.
+### Common issues with feature design and formatting
-#### Expand feature sets with artificial intelligence and cognitive services
+* **Sending features with high cardinality.** Avoid features with unique values that are unlikely to repeat across many events. For example, PII specific to one individual (such as name, phone number, credit card number, IP address) shouldn't be used with Personalizer.
+* **Sending user IDs.** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if non-PII) will likely add more noise to the model and is not recommended.
+* **Sending unique values that will rarely occur more than a few times**. It's recommended to bucket your features to a higher level-of-detail. For example, having features such as `"Context.TimeStamp.Day":"Monday"` or `"Context.TimeStamp.Hour":13` can be useful as there are only 7 and 24 unique values, respectively. However, `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is very precise and has an extremely large number of unique values, which makes it very difficult for Personalizer to learn from it.
-Artificial Intelligence and ready-to-run Cognitive Services can be a very powerful addition to the Personalizer.
+### Improve feature sets
+
+Analyze the user behavior by running a [Feature Evaluation Job](how-to-feature-evaluation.md). This allows you to look at past data to see what features are heavily contributing to positive rewards versus those that are contributing less. You can see what features are helping, and it will be up to you and your application to find better features to send to Personalizer to improve results even further.
+
+### Expand feature sets with artificial intelligence and cognitive services
+
+Artificial Intelligence and ready-to-run Cognitive Services can be a very powerful addition to Personalizer.
By preprocessing your items using artificial intelligence services, you can automatically extract information that is likely to be relevant for personalization.
You can use several other [Azure Cognitive Services](https://www.microsoft.com/c
* [Emotion](../face/overview.md) * [Computer Vision](../computer-vision/overview.md)
-## Actions represent a list of options
-
-Each action:
-
-* Has a list of features.
-* The list of features can be large (hundreds) but we recommend evaluating feature effectiveness to remove features that aren't contributing to getting rewards.
-* The features in the **actions** may or may not have any correlation with features in the **context** used by Personalizer.
-* Features for actions may be present in some actions and not others.
-* Features for a certain action ID may be available one day, but later on become unavailable.
-
-Personalizer's machine learning algorithms will perform better when there are stable feature sets, but Rank calls will not fail if the feature set changes over time.
-
-Do not send in more than 50 actions when Ranking actions. These may be the same 50 actions every time, or they may change. For example, if you have a product catalog of 10,000 items for an e-commerce application, you may use a recommendation or filtering engine to determine the top 40 a customer may like, and use Personalizer to find the one that will generate the most reward (for example, the user will add to the basket) for the current context.
--
-### Examples of actions
+### Use Embeddings as Features
-The actions you send to the Rank API will depend on what you are trying to personalize.
+Embeddings from various Machine Learning models have proven to be effective features for Personalizer:
-Here are some examples:
+* Embeddings from Large Language Models
+* Embeddings from Computer Vision Models
-|Purpose|Action|
-|--|--|
-|Personalize which article is highlighted on a news website.|Each action is a potential news article.|
-|Optimize ad placement on a website.|Each action will be a layout or rules to create a layout for the ads (for example, on the top, on the right, small images, big images).|
-|Display personalized ranking of recommended items on a shopping website.|Each action is a specific product.|
-|Suggest user interface elements such as filters to apply to a specific photo.|Each action may be a different filter.|
-|Choose a chat bot's response to clarify user intent or suggest an action.|Each action is an option of how to interpret the response.|
-|Choose what to show at the top of a list of search results|Each action is one of the top few search results.|
+## Namespaces
-### Examples of features for actions
+Optionally, features can be organized using namespaces (relevant for both context and action features). Namespaces can be used to group features by topic, by source, or any other grouping that makes sense in your application. You determine if namespaces are used and what they should be. Namespaces organize features into distinct sets, and disambiguate features with similar names. You can think of namespaces as a 'prefix' that is added to feature names. Namespaces should not be nested.
-The following are good examples of features for actions. These will depend a lot on each application.
-
-* Features with characteristics of the actions. For example, is it a movie or a tv series?
-* Features about how users may have interacted with this action in the past. For example, this movie is mostly seen by people in demographics A or B, it is typically played no more than one time.
-* Features about the characteristics of how the user *sees* the actions. For example, does the poster for the movie shown in the thumbnail include faces, cars, or landscapes?
-
-### Load actions from the client application
+The following are examples of feature namespaces used by applications:
-Features from actions may typically come from content management systems, catalogs, and recommender systems. Your application is responsible for loading the information about the actions from the relevant databases and systems you have. If your actions don't change or getting them loaded every time has an unnecessary impact on performance, you can add logic in your application to cache this information.
+* User_Profile_from_CRM
+* Time
+* Mobile_Device_Info
+* http_user_agent
+* VideoResolution
+* DeviceInfo
+* Weather
+* Product_Recommendation_Ratings
+* current_time
+* NewsArticle_TextAnalytics
-### Prevent actions from being ranked
+### Namespace naming conventions and guidelines
-In some cases, there are actions that you don't want to display to users. The best way to prevent an action from being ranked as topmost is not to include it in the action list to the Rank API in the first place.
+* Namespaces should not be nested.
+* Namespaces must start with unique ASCII characters (we recommend using namespace names that are UTF-8 based). Currently, namespaces with the same first character could result in collisions, so it's strongly recommended that your namespaces start with characters that are distinct from each other.
+* Namespaces are case sensitive. For example `user` and `User` will be considered different namespaces.
+* Feature names can be repeated across namespaces, and will be treated as separate features.
+* The following characters cannot be used: codes < 32 (not printable), 32 (space), 58 (colon), 124 (pipe), and 126–140.
+* All namespaces starting with an underscore `_` will be ignored.
-In some cases, it can only be determined later in your business logic if a resulting _action_ of a Rank API call is to be shown to a user. For these cases, you should use _Inactive Events_.
-## JSON format for actions
+## JSON Examples
+### Actions
When calling Rank, you will send multiple actions to choose from: JSON objects can include nested JSON objects and simple property/values. An array can be included only if the array items are numbers.
JSON objects can include nested JSON objects and simple property/values. An arra
} ```
-## Examples of context information
-
-Information for the _context_ depends on each application and use case, but it typically may include information such as:
-
-* Demographic and profile information about your user.
-* Information extracted from HTTP headers such as user agent, or derived from HTTP information such as reverse geographic lookups based on IP addresses.
-* Information about the current time, such as day of the week, weekend or not, morning or afternoon, holiday season or not, etc.
-* Information extracted from mobile applications, such as location, movement, or battery level.
-* Historical aggregates of the behavior of users - such as what are the movie genres this user has viewed the most.
-
-Your application is responsible for loading the information about the context from the relevant databases, sensors, and systems you may have. If your context information doesn't change, you can add logic in your application to cache this information, before sending it to the Rank API.
-
-## JSON format for context
+### Context
Context is expressed as a JSON object that is sent to the Rank API:
JSON objects can include nested JSON objects and simple property/values. An arra
```JSON { "contextFeatures": [
- {
- "user": {
- "name":"Doug"
- }
- },
{ "state": { "timeOfDay": "noon",
JSON objects can include nested JSON objects and simple property/values. An arra
"mobile":true, "Windows":true, "screensize": [1680,1050]
- }
} } ] } ```
-## Inference Explainability
-Personalizer can help you to understand which features of a chosen action are the most and least influential to then model during inference. When enabled, inference explainability includes feature scores from the underlying model into the Rank API response, so your application receives this information at the time of inference.
-Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to further analyze how the data is being used by the underlying model.
+### Namespaces
-Setting the service configuration flag IsInferenceExplainabilityEnabled in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration ΓÇô Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the additional entry: `ΓÇ£IsInferenceExplainabilityEnabledΓÇ¥: true`. If you donΓÇÖt know your current service configuration, you can obtain it from the [Service Configuration ΓÇô Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP)
+In the following JSON, `user`, `environment`, `device`, and `activity` are namespaces.
-```JSON
-{
- "rewardWaitTime": "PT10M",
- "defaultReward": 0,
- "rewardAggregation": "earliest",
- "explorationPercentage": 0.2,
- "modelExportFrequency": "PT5M",
- "logMirrorEnabled": true,
- "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature",
- "logRetentionDays": 7,
- "lastConfigurationEditDate": "0001-01-01T00:00:00Z",
- "learningMode": "Online",
- "isAutoOptimizationEnabled": true,
- "autoOptimizationFrequency": "P7D",
- "autoOptimizationStartDate": "2019-01-19T00:00:00Z",
-"isInferenceExplainabilityEnabled": true
-}
-```
+> [!Note]
+> We strongly recommend using names for feature namespaces that are UTF-8 based and start with different letters. For example, `user`, `environment`, `device`, and `activity` start with `u`, `e`, `d`, and `a`. Currently having namespaces with same first characters could result in collisions.
-### How to interpret feature scores?
-Enabling inference explainability will add a collection to the JSON response from the Rank API called *inferenceExplanation*. This contains a list of feature names and values that were submitted in the Rank request, along with feature scores learned by PersonalizerΓÇÖs underlying model. The feature scores provide you with insight on how influential each feature was in the model choosing the action.
```JSON- {
- "ranking": [
- {
- "id": "EntertainmentArticle",
- "probability": 0.8
- },
- {
- "id": "SportsArticle",
- "probability": 0.10
- },
- {
- "id": "NewsArticle",
- "probability": 0.10
- }
- ],
- "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
- "rewardActionId": "EntertainmentArticle",
- "inferenceExplanation": [
- {
- "idΓÇ¥: "EntertainmentArticle",
- "features": [
- {
- "name": "user.profileType",
- "score": 3.0
- },
- {
- "name": "user.latLong",
- "score": -4.3
- },
- {
- "name": "user.profileType^user.latLong",
- "score" : 12.1
- },
- ]
- ]
+ "contextFeatures": [
+ {
+ "user": {
+ "profileType":"AnonymousUser",
+ "Location": "New York, USA"
+ }
+ },
+ {
+ "environment": {
+ "monthOfYear": "8",
+ "timeOfDay": "Afternoon",
+ "weather": "Sunny"
+ }
+ },
+ {
+ "device": {
+ "mobile":true,
+ "Windows":true
+ }
+ },
+ {
+ "activity" : {
+ "itemsInCart": "3-5",
+ "cartValue": "250-300",
+ "appliedCoupon": true
+ }
+ }
+ ]
} ```
-In the example above, three action IDs are returned in the _ranking_ collection along with their respective probabilities scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API.
-
-Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take, therefore **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](./concepts-exploration.md).
-
-For the best actions returned by Personalizer, the feature scores can provide general insight where:
-* Larger positive scores provide more support for the model choosing this action.
-* Larger negative scores provide more support for the model not choosing this action.
-* Scores close to zero have a small effect on the decision to choose this action.
-
-### Important considerations for Inference Explainability
-* **Increased latency.** Currently, enabling _Inference Explainability_ may significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your applicationΓÇÖs latency requirements.
-* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature AΓÇÖs score is a large positive value while Feature BΓÇÖs score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated
-* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm at this time.
- ## Next steps [Reinforcement learning](concepts-reinforcement-learning.md)
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/whats-new.md
Learn what's new in Azure Personalizer. These items may include release notes, v
## Release notes ### September 2022
-* Personalizer Inference Explainabiltiy is now available as a Public Preview. Enabling inference explainability returns feature scores on every Rank API call, providing insight into how influential each feature is to the actions chosen by your Personalizer model. [Learn more about Inference Explainability](concepts-features.md#inference-explainability).
+* Personalizer Inference Explainability is now available as a Public Preview. Enabling inference explainability returns feature scores on every Rank API call, providing insight into how influential each feature is to the actions chosen by your Personalizer model. [Learn more about Inference Explainability](how-to-inference-explainability.md).
* Personalizer SDK now available in [Java](https://search.maven.org/artifact/com.azure/azure-ai-personalizer/1.0.0-beta.1/jar) and [Javascript](https://www.npmjs.com/package/@azure-rest/ai-personalizer). ### April 2022
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
These actions can be performed on the calls that are answered or placed using Ca
**Terminate** ΓÇô Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action will remove all participants and end the call. This operation is triggered by setting `forEveryOne` property to true in Hang-Up call action. + ## Events The following table outlines the current events emitted by Azure Communication Services. The two tables below show events emitted by Event Grid and from the Call Automation as webhook events.
Most of the events sent by Event Grid are platform agnostic meaning they're emit
| ParticipantAdded | A participant has been added to a call | | ParticipantRemoved| A participant has been removed from a call |
+Read more about these events and their payload schema [here](../../../event-grid/communication-services-voice-video-events.md).
+ ### Call Automation webhook events The Call Automation events are sent to the web hook callback URI specified when you answer or place a new outbound call.
The Call Automation events are sent to the web hook callback URI specified when
| RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our [quickstart](../../quickstarts/voice-video-calling/Recognize-Action.md)*|
+To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation-sdk/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows.
+ ## Known Issues 1. Using the incorrect IdentifierType for endpoints for `Transfer` requests (like using CommunicationUserIdentifier to specify a phone number) returns a 500 error instead of a 400 error code. Solution: Use the correct type, CommunicationUserIdentifier for Communication Users and PhoneNumberIdentifier for phone numbers.
The Call Automation events are sent to the web hook callback URI specified when
> [!div class="nextstepaction"] > [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)+
+Here are some articles of interest to you:
+1. Understand how your resource will be [charged for various calling use cases](../pricing.md) with examples.
+2. Learn about metrics and logs available for this service.
+3. Troubleshoot common issues.
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation-sdk/actions-for-call-control.md
+
+ Title: Azure Communication Services Call Automation how-to for managing calls with Call Automation
+
+description: Provides a how-to guide on using call actions to steer and manage a call with Call Automation.
+++++ Last updated : 11/03/2022++++
+zone_pivot_groups: acs-csharp-java
++
+# How to control and steer calls with Call Automation
+> [!IMPORTANT]
+> Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+Call Automation uses a REST API interface to receive requests for actions and provide responses to notify whether the request was successfully submitted or not. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available for steering calls, like CreateCall, Transfer, Redirect, and managing participants. Actions are accompanied by sample code on how to invoke each action and by sequence diagrams describing the events expected after invoking an action. These diagrams will help you visualize how to program your service application with Call Automation.
+
+Call Automation supports various other actions to manage call media and recording that aren't included in this guide.
+
+> [!NOTE]
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making, redirecting a call to a Teams user or adding them to a call using Call Automation isn't supported.
+
+As a prerequisite, we recommend you read the following articles to make the most of this guide:
+1. Call Automation [concepts guide](../../concepts/voice-video-calling/call-automation.md#call-actions) that describes the action-event programming model and event callbacks.
+2. Learn about [user identifiers](../../concepts/identifiers.md#the-communicationidentifier-type) like CommunicationUserIdentifier and PhoneNumberIdentifier used in this guide.
+
+For all the code samples, `client` is the CallAutomationClient object that can be created as shown below, and `callConnection` is the CallConnection object obtained from the Answer or CreateCall response. You can also obtain it from callback events received by your application.
+## [csharp](#tab/csharp)
+```csharp
+var client = new CallAutomationClient("<resource_connection_string>");
+```
+## [Java](#tab/java)
+```java
+ CallAutomationClient client = new CallAutomationClientBuilder().connectionString("<resource_connection_string>").buildClient();
+```
+--
+
+## Make an outbound call
+You can place a 1:1 or group call to a communication user or phone number (public or Communication Services owned number). The sample below makes an outbound call from your service application to a phone number.
+callerIdentifier is used by Call Automation as your application's identity when making an outbound call. When calling a PSTN endpoint, you also need to provide a phone number that will be used as the source caller ID and shown in the call notification to the target PSTN endpoint.
+To place a call to a Communication Services user, you'll need to provide a CommunicationUserIdentifier object instead of PhoneNumberIdentifier.
+### [csharp](#tab/csharp)
+```csharp
+Uri callBackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+var callerIdentifier = new CommunicationUserIdentifier("<user_id>");
+CallSource callSource = new CallSource(callerIdentifier);
+callSource.CallerId = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var callThisPerson = new PhoneNumberIdentifier("+16041234567");
+var listOfPersonToBeCalled = new List<CommunicationIdentifier>();
+listOfPersonToBeCalled.Add(callThisPerson);
+var createCallOptions = new CreateCallOptions(callSource, listOfPersonToBeCalled, callBackUri);
+CreateCallResult response = await client.CreateCallAsync(createCallOptions);
+```
+### [Java](#tab/java)
+```java
+String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
+List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(new PhoneNumberIdentifier("+16471234567")));
+CommunicationUserIdentifier callerIdentifier = new CommunicationUserIdentifier("<user_id>");
+CreateCallOptions createCallOptions = new CreateCallOptions(callerIdentifier, targets, callbackUri)
+ .setSourceCallerId("+18001234567"); // This is the ACS provisioned phone number for the caller
+Response<CreateCallResult> response = client.createCallWithResponse(createCallOptions).block();
+```
+--
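+For illustration, here's a minimal sketch of the Communication Services user variation mentioned above. It reuses the `callerIdentifier` and `callBackUri` from the C# sample and follows the same CreateCallOptions pattern shown there; the `<user_id_of_callee>` placeholder is hypothetical.
+```csharp
+// Illustrative sketch: call a Communication Services user instead of a PSTN number.
+var callThisUser = new CommunicationUserIdentifier("<user_id_of_callee>");
+var createCallToUserOptions = new CreateCallOptions(
+    new CallSource(callerIdentifier),                      // no source caller ID phone number needed for a Communication Services user
+    new List<CommunicationIdentifier> { callThisUser },
+    callBackUri);
+CreateCallResult callToUserResponse = await client.CreateCallAsync(createCallToUserOptions);
+```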
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier (a sketch of such a callback endpoint follows the sequence diagram below):
+1. `CallConnected` event notifying that the call has been established with the callee.
+2. `ParticipantsUpdated` event that contains the latest list of participants in the call.
+![Sequence diagram for placing an outbound call.](media/make-call-flow.png)
++
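+The `CallConnected` and `ParticipantsUpdated` events above arrive at your callback endpoint. For illustration, here's a minimal sketch of that endpoint, assuming an ASP.NET Core minimal API, that the `/Events` route matches the callback URI used above, and that events arrive as a JSON array of CloudEvents whose `type` values follow the `Microsoft.Communication.*` naming used in this guide. Treat these names as assumptions rather than confirmed contracts.
+```csharp
+using System.Text.Json;
+using Microsoft.AspNetCore.Builder;
+using Microsoft.AspNetCore.Http;
+
+var app = WebApplication.CreateBuilder(args).Build();
+
+// Hypothetical callback route matching the callBackUri used in the samples above.
+app.MapPost("/Events", async (HttpRequest request) =>
+{
+    using var payload = await JsonDocument.ParseAsync(request.Body);
+    foreach (var cloudEvent in payload.RootElement.EnumerateArray())
+    {
+        var eventType = cloudEvent.GetProperty("type").GetString();
+        if (eventType == "Microsoft.Communication.CallConnected")
+        {
+            // The call is established; mid-call actions (transfer, add participants, ...) can start here.
+        }
+        else if (eventType == "Microsoft.Communication.ParticipantsUpdated")
+        {
+            // Inspect cloudEvent.GetProperty("data") for the latest participant list.
+        }
+    }
+    return Results.Ok();
+});
+
+app.Run();
+```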
+## Answer an incoming call
+Once you've subscribed to receive [incoming call notifications](../../concepts/voice-video-calling/incoming-call-notification.md) to your resource, the following sample shows how to answer that call. When answering a call, you must provide a callback URL. Communication Services will post all subsequent events about this call to that URL.
+### [csharp](#tab/csharp)
+
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+Uri callBackUri = new Uri("https://<myendpoint_where_I_want_to_receive_callback_events>");
+
+var answerCallOptions = new AnswerCallOptions(incomingCallContext, callBackUri);
+AnswerCallResult answerResponse = await client.AnswerCallAsync(answerCallOptions);
+CallConnection callConnection = answerResponse.CallConnection;
+```
+### [Java](#tab/java)
+
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+String callbackUri = "https://<myendpoint>/Events";
+
+AnswerCallOptions answerCallOptions = new AnswerCallOptions(incomingCallContext, callbackUri);
+Response<AnswerCallResult> response = client.answerCallWithResponse(answerCallOptions).block();
+```
+--
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier:
+1. `CallConnected` event notifying that the call has been established with the caller.
+2. `ParticipantsUpdated` event that contains the latest list of participants in the call.
+
+![Sequence diagram for answering an incoming call.](media/answer-flow.png)
+
+## Reject a call
+You can choose to reject an incoming call as shown below. You can provide a reject reason: none, busy, or forbidden. If nothing is provided, none is chosen by default.
+# [csharp](#tab/csharp)
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+var rejectOption = new RejectCallOptions(incomingCallContext);
+rejectOption.CallRejectReason = CallRejectReason.Forbidden;
+_ = await client.RejectCallAsync(rejectOption);
+```
+# [Java](#tab/java)
+
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+RejectCallOptions rejectCallOptions = new RejectCallOptions(incomingCallContext)
+ .setCallRejectReason(CallRejectReason.BUSY);
+Response<Void> response = client.rejectCallWithResponse(rejectCallOptions).block();
+```
+--
+No events are published for the reject action.
+
+## Redirect a call
+You can choose to redirect an incoming call to one or more endpoints without answering it. Redirecting a call removes your application's ability to control the call using Call Automation.
+# [csharp](#tab/csharp)
+```csharp
+string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+var target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+var redirectOption = new RedirectCallOptions(incomingCallContext, target);
+_ = await client.RedirectCallAsync(redirectOption);
+```
+# [Java](#tab/java)
+```java
+String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+CommunicationIdentifier target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+RedirectCallOptions redirectCallOptions = new RedirectCallOptions(incomingCallContext, target);
+Response<Void> response = client.redirectCallWithResponse(redirectCallOptions).block();
+```
+--
+To redirect the call to a phone number, set the target to be PhoneNumberIdentifier.
+# [csharp](#tab/csharp)
+```csharp
+var target = new PhoneNumberIdentifier("+16041234567");
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier target = new PhoneNumberIdentifier("+18001234567");
+```
+--
+No events are published for the redirect action. If the target is a Communication Services user or a phone number owned by your resource, a new IncomingCall event is generated with the 'to' field set to the target you specified.
+
+## Transfer a 1:1 call
+When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application from the call and hence remove its ability to control the call using Call Automation.
+# [csharp](#tab/csharp)
+```csharp
+var transferDestination = new CommunicationUserIdentifier("<user_id>");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
+TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination);
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+```
+--
+When transferring to a phone number, it's mandatory to provide a source caller ID. This ID serves as the identity of your application (the source) for the destination endpoint.
+# [csharp](#tab/csharp)
+```csharp
+var transferDestination = new PhoneNumberIdentifier("+16041234567");
+var transferOption = new TransferToParticipantOptions(transferDestination);
+transferOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier transferDestination = new PhoneNumberIdentifier("+16471234567");
+TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination)
+ .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+```
+--
+The below sequence diagram shows the expected flow when your application places an outbound 1:1 call and then transfers it to another endpoint.
+![Sequence diagram for placing a 1:1 call and then transferring it.](media/transfer-flow.png)
+
+## Add a participant to a call
+You can add one or more participants (Communication Services users or phone numbers) to an existing call. When adding a phone number, it's mandatory to provide a source caller ID. This caller ID will be shown on the call notification to the participant being added.
+# [csharp](#tab/csharp)
+```csharp
+var addThisPerson = new PhoneNumberIdentifier("+16041234567");
+var listOfPersonToBeAdded = new List<CommunicationIdentifier>();
+listOfPersonToBeAdded.Add(addThisPerson);
+var addParticipantsOption = new AddParticipantsOptions(listOfPersonToBeAdded);
+addParticipantsOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
+AddParticipantsResult result = await callConnection.AddParticipantsAsync(addParticipantsOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier target = new PhoneNumberIdentifier("+16041234567");
+List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(target));
+AddParticipantsOptions addParticipantsOptions = new AddParticipantsOptions(targets)
+ .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
+Response<AddParticipantsResult> addParticipantsResultResponse = callConnectionAsync.addParticipantsWithResponse(addParticipantsOptions).block();
+```
+--
+To add a Communication Services user, provide a CommunicationUserIdentifier instead of PhoneNumberIdentifier. A source caller ID isn't mandatory in this case, as sketched below.
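+For illustration, a minimal sketch that adds a Communication Services user, following the same AddParticipantsOptions pattern as the samples above; the `<user_id>` placeholder is hypothetical.
+```csharp
+var addThisUser = new CommunicationUserIdentifier("<user_id>");
+var listOfUsersToBeAdded = new List<CommunicationIdentifier> { addThisUser };
+var addUserOptions = new AddParticipantsOptions(listOfUsersToBeAdded); // no SourceCallerId needed for a Communication Services user
+AddParticipantsResult addUserResult = await callConnection.AddParticipantsAsync(addUserOptions);
+```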
+
+AddParticipant will publish an `AddParticipantSucceeded` or `AddParticipantFailed` event, along with a `ParticipantsUpdated` event providing the latest list of participants in the call.
+
+![Sequence diagram for adding a participant to the call.](media/add-participant-flow.png)
+
+## Remove a participant from a call
+# [csharp](#tab/csharp)
+```csharp
+var removeThisUser = new CommunicationUserIdentifier("<user_id>");
+var listOfParticipantsToBeRemoved = new List<CommunicationIdentifier>();
+listOfParticipantsToBeRemoved.Add(removeThisUser);
+var removeOption = new RemoveParticipantsOptions(listOfParticipantsToBeRemoved);
+RemoveParticipantsResult result = await callConnection.RemoveParticipantsAsync(removeOption);
+```
+# [Java](#tab/java)
+```java
+CommunicationIdentifier removeThisUser = new CommunicationUserIdentifier("<user_id>");
+RemoveParticipantsOptions removeParticipantsOptions = new RemoveParticipantsOptions(new ArrayList<>(Arrays.asList(removeThisUser)));
+Response<RemoveParticipantsResult> removeParticipantsResultResponse = callConnectionAsync.removeParticipantsWithResponse(removeParticipantsOptions).block();
+```
+--
+RemoveParticipant only generates a `ParticipantsUpdated` event describing the latest list of participants in the call. The removed participant is excluded if the remove operation was successful.
+![Sequence diagram for removing a participant from the call.](media/remove-participant-flow.png)
+
+## Hang up on a call
+The HangUp action can be used to remove your application from the call or to terminate a group call by setting the forEveryone parameter to true. For a 1:1 call, hang up terminates the call with the other participant by default.
+
+# [csharp](#tab/csharp)
+```csharp
+_ = await callConnection.HangUpAsync(true);
+```
+# [Java](#tab/java)
+```java
+Response<Void> response1 = callConnectionAsync.hangUpWithResponse(new HangUpOptions(true)).block();
+```
+--
+A CallDisconnected event is published once the hangUp action completes successfully.
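+If your application should only leave a group call while the call continues for the remaining participants, a minimal sketch under the same API (reusing the callConnection object above) would pass false instead:
+```csharp
+// Illustrative: the application leaves the call; the group call continues for the other participants.
+_ = await callConnection.HangUpAsync(false);
+```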
+
+## Get information about a call participant
+# [csharp](#tab/csharp)
+```csharp
+CallParticipant participantInfo = await callConnection.GetParticipantAsync("<user_id>");
+```
+# [Java](#tab/java)
+```java
+CallParticipant participantInfo = callConnection.getParticipant("<user_id>").block();
+```
+--
+
+## Get information about all call participants
+# [csharp](#tab/csharp)
+```csharp
+List<CallParticipant> participantList = (await callConnection.GetParticipantsAsync()).Value.ToList();
+```
+# [Java](#tab/java)
+```java
+List<CallParticipant> participantsInfo = Objects.requireNonNull(callConnection.listParticipants().block()).getValues();
+```
+--
+
+## Get latest info about a call
+# [csharp](#tab/csharp)
+```csharp
+CallConnectionProperties thisCallsProperties = callConnection.GetCallConnectionProperties();
+```
+# [Java](#tab/java)
+```java
+CallConnectionProperties thisCallsProperties = callConnection.getCallProperties().block();
+```
+--
communication-services Redirect Inbound Telephony Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation-sdk/redirect-inbound-telephony-calls.md
Get started with Azure Communication Services by using the Call Automation SDKs
[!INCLUDE [Redirect inbound call with Java](./includes/redirect-inbound-telephony-calls-java.md)] ::: zone-end
+## Subscribe to IncomingCall event
+
+IncomingCall is an Azure Event Grid event for notifying incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/voice-video-calling/incoming-call-notification.md).
+1. Navigate to your resource on Azure portal and select `Events` from the left side menu.
+1. Select `+ Event Subscription` to create a new subscription.
+1. Filter for Incoming Call event.
+1. Choose the endpoint type as web hook and provide the public URL generated for your application by ngrok. Make sure to provide the exact API route that you programmed to receive the event previously. In this case, it would be `<ngrok_url>/api/incomingCall` (a sketch of such an endpoint is shown below).
+1. Select Create to start the creation of the subscription and the validation of your endpoint as mentioned previously. The subscription is ready when the provisioning status is marked as succeeded.
+
+This subscription currently has no filters, so all incoming calls will be sent to your application. To filter for a specific phone number or a communication user, use the Filters tab.
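+For illustration, here's a minimal sketch of the `/api/incomingCall` endpoint referenced above, assuming an ASP.NET Core minimal API and the `Azure.Messaging.EventGrid` package, and assuming the IncomingCall event payload carries an `incomingCallContext` field as described in this article. Treat it as a sketch rather than a definitive implementation.
+```csharp
+using System.Text.Json;
+using Azure.Messaging.EventGrid;
+using Azure.Messaging.EventGrid.SystemEvents;
+using Microsoft.AspNetCore.Builder;
+using Microsoft.AspNetCore.Http;
+
+var app = WebApplication.CreateBuilder(args).Build();
+
+app.MapPost("/api/incomingCall", async (HttpRequest request) =>
+{
+    var events = EventGridEvent.ParseMany(await BinaryData.FromStreamAsync(request.Body));
+    foreach (var eventGridEvent in events)
+    {
+        // Complete the Event Grid subscription validation handshake when the subscription is created.
+        if (eventGridEvent.TryGetSystemEventData(out var systemEvent) &&
+            systemEvent is SubscriptionValidationEventData validation)
+        {
+            return Results.Ok(new { validationResponse = validation.ValidationCode });
+        }
+
+        // For IncomingCall events, read the context needed to answer or redirect the call.
+        var incomingCallContext = eventGridEvent.Data.ToObjectFromJson<JsonElement>()
+            .GetProperty("incomingCallContext").GetString();
+        // Pass incomingCallContext to AnswerCallAsync or RedirectCallAsync as shown in the Call Automation guide.
+    }
+    return Results.Ok();
+});
+
+app.Run();
+```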
+ ## Testing the application 1. Place a call to the number you acquired in the Azure portal (see prerequisites above).
-2. Your Event Grid subscription to the IncomingCall should execute and call your web server.
+2. Your Event Grid subscription to the IncomingCall should execute and call your application.
3. The call will be redirected to the endpoint(s) you specified in your application. Because this call flow redirects the call instead of answering it, pre-call web hook callbacks that notify your application when the other endpoint accepts the call aren't published.
If you want to clean up and remove a Communication Services subscription, you ca
## Next steps - Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features. -- Learn more about [Play action](../../concepts/voice-video-calling/play-Action.md).
+- Learn about [Play action](../../concepts/voice-video-calling/play-Action.md) to play audio in a call.
- Learn how to build a [call workflow](../../quickstarts/voice-video-calling/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Get Started Raw Media Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md
[!INCLUDE [Raw media with Android](./includes/raw-medi)] ::: zone-end + ::: zone pivot="platform-web" [!INCLUDE [Raw media with JavaScript](./includes/raw-medi)] ::: zone-end
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
The following example creates a resource group named `myResourceGroup` in the Ea
az group create --name myResourceGroup --location eastus ```
-Create an Azure container registry instance using the [az acr create](/cli/azure/acr#az-acr-create) command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, `myContainerRegistry007` is used. Update this to a unique value.
+Create an Azure container registry instance using the [az acr create](/cli/azure/acr#az-acr-create) command. The registry name must be unique within Azure and contain 5-50 lowercase alphanumeric characters. In the following example, `mycontainerregistry007` is used. Update this to a unique value.
```azurecli az acr create \ --resource-group myResourceGroup \
- --name myContainerRegistry007 \
+ --name mycontainerregistry007 \
--sku Basic ```
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
Delete the existing content in *application.properties* and replace with the following to configure the database for dev, test, and production modes:
+ ### [Flexible Server](#tab/flexible)
+
+ ```properties
+ quarkus.package.type=uber-jar
+
+ quarkus.hibernate-orm.database.generation=drop-and-create
+ quarkus.datasource.db-kind=postgresql
+ quarkus.datasource.jdbc.max-size=8
+ quarkus.datasource.jdbc.min-size=2
+ quarkus.hibernate-orm.log.sql=true
+ quarkus.hibernate-orm.sql-load-script=import.sql
+ quarkus.datasource.jdbc.acquisition-timeout = 10
+
+ %dev.quarkus.datasource.username=${AZURE_CLIENT_NAME}
+ %dev.quarkus.datasource.jdbc.url=jdbc:postgresql://${DBHOST}.postgres.database.azure.com:5432/${DBNAME}?\
+ authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin\
+ &sslmode=require\
+ &azure.clientId=${AZURE_CLIENT_ID}\
+ &azure.clientSecret=${AZURE_CLIENT_SECRET}\
+ &azure.tenantId=${AZURE_TENANT_ID}
+
+ %prod.quarkus.datasource.username=${AZURE_MI_NAME}
+ %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DBHOST}.postgres.database.azure.com:5432/${DBNAME}?\
+ authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin\
+ &sslmode=require
+
+ %dev.quarkus.class-loading.parent-first-artifacts=com.azure:azure-core::jar,\
+ com.azure:azure-core-http-netty::jar,\
+ io.projectreactor.netty:reactor-netty-core::jar,\
+ io.projectreactor.netty:reactor-netty-http::jar,\
+ io.netty:netty-resolver-dns::jar,\
+ io.netty:netty-codec::jar,\
+ io.netty:netty-codec-http::jar,\
+ io.netty:netty-codec-http2::jar,\
+ io.netty:netty-handler::jar,\
+ io.netty:netty-resolver::jar,\
+ io.netty:netty-common::jar,\
+ io.netty:netty-transport::jar,\
+ io.netty:netty-buffer::jar,\
+ com.azure:azure-identity::jar,\
+ com.azure:azure-identity-providers-core::jar,\
+ com.azure:azure-identity-providers-jdbc-postgresql::jar,\
+ com.fasterxml.jackson.core:jackson-core::jar,\
+ com.fasterxml.jackson.core:jackson-annotations::jar,\
+ com.fasterxml.jackson.core:jackson-databind::jar,\
+ com.fasterxml.jackson.dataformat:jackson-dataformat-xml::jar,\
+ com.fasterxml.jackson.datatype:jackson-datatype-jsr310::jar,\
+ org.reactivestreams:reactive-streams::jar,\
+ io.projectreactor:reactor-core::jar,\
+ com.microsoft.azure:msal4j::jar,\
+ com.microsoft.azure:msal4j-persistence-extension::jar,\
+ org.codehaus.woodstox:stax2-api::jar,\
+ com.fasterxml.woodstox:woodstox-core::jar,\
+ com.nimbusds:oauth2-oidc-sdk::jar,\
+ com.nimbusds:content-type::jar,\
+ com.nimbusds:nimbus-jose-jwt::jar,\
+ net.minidev:json-smart::jar,\
+ net.minidev:accessors-smart::jar,\
+ io.netty:netty-transport-native-unix-common::jar
+ ```
+
+ ### [Single Server](#tab/single)
+ ```properties quarkus.package.type=uber-jar
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
1. Build the container image.
- Run the following command to build the Quarkus app image. You must tag it with the fully qualified name of your registry login server. The login server name is in the format *\<registry-name\>.azurecr.io* (must be all lowercase), for example, *myContainerRegistry007.azurecr.io*. Replace the name with your own registry name.
+ Run the following command to build the Quarkus app image. You must tag it with the fully qualified name of your registry login server. The login server name is in the format *\<registry-name\>.azurecr.io* (must be all lowercase), for example, *mycontainerregistry007.azurecr.io*. Replace the name with your own registry name.
```bash mvnw quarkus:add-extension -Dextensions="container-image-jib"
- mvnw clean package -Pnative -Dquarkus.native.container-build=true -Dquarkus.container-image.build=true -Dquarkus.container-image.registry=myContainerRegistry007 -Dquarkus.container-image.name=quarkus-postgres-passwordless-app -Dquarkus.container-image.tag=v1
+ mvnw clean package -Pnative -Dquarkus.native.container-build=true -Dquarkus.container-image.build=true -Dquarkus.container-image.registry=mycontainerregistry007 -Dquarkus.container-image.name=quarkus-postgres-passwordless-app -Dquarkus.container-image.tag=v1
``` 1. Log in to the registry.
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
1. Push the image to the registry.
- Use [docker push][docker-push] to push the image to the registry instance. Replace `myContainerRegistry007` with the login server name of your registry instance. This example creates the `quarkus-postgres-passwordless-app` repository, containing the `quarkus-postgres-passwordless-app:v1` image.
+ Use [docker push][docker-push] to push the image to the registry instance. Replace `mycontainerregistry007` with the login server name of your registry instance. This example creates the `quarkus-postgres-passwordless-app` repository, containing the `quarkus-postgres-passwordless-app:v1` image.
```bash
- docker push myContainerRegistry007/quarkus-postgres-passwordless-app:v1
+ docker push mycontainerregistry007/quarkus-postgres-passwordless-app:v1
``` ## 4. Create a Container App on Azure
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
```azurecli CONTAINER_IMAGE_NAME=quarkus-postgres-passwordless-app:v1
- REGISTRY_SERVER=myContainerRegistry007
+ REGISTRY_SERVER=mycontainerregistry007
REGISTRY_USERNAME=<REGISTRY_USERNAME> REGISTRY_PASSWORD=<REGISTRY_PASSWORD>
cd quarkus-quickstarts/hibernate-orm-panache-quickstart
## 5. Create and connect a PostgreSQL database with identity connectivity
-Next, create a PostgreSQL Database Single Server and configure your container app to connect to a PostgreSQL Database with a system-assigned managed identity. The Quarkus app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
+Next, create a PostgreSQL database and configure your container app to connect to it with a system-assigned managed identity. The Quarkus app will connect to this database and store its data when running, persisting the application state no matter where you run the application.
1. Create the database service.
+ ### [Flexible Server](#tab/flexible)
+
+ ```azurecli
+ DB_SERVER_NAME='msdocs-quarkus-postgres-webapp-db'
+ ADMIN_USERNAME='demoadmin'
+ ADMIN_PASSWORD='<admin-password>'
+
+ az postgres flexible-server create \
+ --resource-group $RESOURCE_GROUP \
+ --name $DB_SERVER_NAME \
+ --location $LOCATION \
+    --admin-user $ADMIN_USERNAME \
+    --admin-password $ADMIN_PASSWORD \
+ --sku-name GP_Gen5_2
+ ```
+
+ ### [Single Server](#tab/single)
+ ```azurecli DB_SERVER_NAME='msdocs-quarkus-postgres-webapp-db' ADMIN_USERNAME='demoadmin'
Next, create a PostgreSQL Database Single Server and configure your container ap
--sku-name GP_Gen5_2 ```
+
+ The following parameters are used in the above Azure CLI command: * *resource-group* &rarr; Use the same resource group name in which you created the web app, for example `msdocs-quarkus-postgres-webapp-rg`.
Next, create a PostgreSQL Database Single Server and configure your container ap
1. Create a database named `fruits` within the PostgreSQL service with this command:
+ ### [Flexible Server](#tab/flexible)
+
+ ```azurecli
+ az postgres flexible-server db create \
+ --resource-group $RESOURCE_GROUP \
+ --server-name $DB_SERVER_NAME \
+ --database-name fruits
+ ```
+
+ ### [Single Server](#tab/single)
+ ```azurecli az postgres db create \ --resource-group $RESOURCE_GROUP \
Next, create a PostgreSQL Database Single Server and configure your container ap
1. Connect the database to the container app with a system-assigned managed identity, using the connection command.
+ ### [Flexible Server](#tab/flexible)
+
+ ```azurecli
+ az containerapp connection create postgres-flexible \
+ --resource-group $RESOURCE_GROUP \
+ --name my-container-app \
+ --target-resource-group $RESOURCE_GROUP \
+ --server $DB_SERVER_NAME \
+ --database fruits \
+ --managed-identity
+ ```
+
+ ### [Single Server](#tab/single)
+ ```azurecli az containerapp connection create postgres \ --resource-group $RESOURCE_GROUP \
When the new webpage shows your list of fruits, your app is connecting to the da
Learn more about running Java apps on Azure in the developer guide. > [!div class="nextstepaction"]
-> [Azure for Java Developers](/java/azure/)
+> [Azure for Java Developers](/java/azure/)
cosmos-db Advanced Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/advanced-queries.md
In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](monitor-resource-logs.md) to learn how to enable this feature.
-For [resource-specific tables](monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+For [resource-specific tables](monitor-resource-logs.md), data is written into individual tables for each category of the resource. We recommend this mode because it:
- Makes it much easier to work with the data. - Provides better discoverability of the schemas.
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/diagnostic-queries.md
In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB Cassandra API account by using diagnostics logs sent to **resource-specific** tables.
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md) to learn how to enable this feature.
-For [resource-specific tables](../monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+For [resource-specific tables](../monitor-resource-logs.md), data is written into individual tables for each category of the resource. We recommend this mode because it:
- Makes it much easier to work with the data. - Provides better discoverability of the schemas.
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
You can choose to restore any combination of provisioned throughput containers,
The following configurations aren't restored after the point-in-time recovery:
-* Firewall, VNET, private endpoint settings.
+* Firewall, VNET, data plane RBAC, or private endpoint settings.
* Consistency settings. By default, the account is restored with session consistency. * Regions. * Stored procedures, triggers, UDFs.
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/diagnostic-queries.md
In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md) to learn how to enable this feature.
-For [resource-specific tables](../monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+For [resource-specific tables](../monitor-resource-logs.md), data is written into individual tables for each category of the resource. We recommend this mode because it:
- Makes it much easier to work with the data. - Provides better discoverability of the schemas.
cosmos-db Diagnostic Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/diagnostic-queries.md
In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to **Azure Diagnostics (legacy)** and **resource-specific (preview)** tables.
-For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+For Azure Diagnostics tables, all data is written into one single table. Users specify which category they want to query. If you want to view the full-text query of your request, see [Monitor Azure Cosmos DB data by using diagnostic settings in Azure](../monitor-resource-logs.md) to learn how to enable this feature.
-For [resource-specific tables](../monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode because it:
+For [resource-specific tables](../monitor-resource-logs.md), data is written into individual tables for each category of the resource. We recommend this mode because it:
- Makes it much easier to work with the data. - Provides better discoverability of the schemas.
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
The following code prompts the user for the connection string. It's never a good
CONNECTION_STRING = getpass.getpass(prompt='Enter your primary connection string: ') # Prompts user for connection string ```
-The following code creates a client connection your API for MongoDB and tests to make sure it's valid.
+The following code creates a client connection to your API for MongoDB and tests to make sure it's valid.
```python client = pymongo.MongoClient(CONNECTION_STRING)
cosmos-db Update Backup Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/update-backup-storage-redundancy.md
Azure Policy helps you to enforce organizational standards and to assess complia
## Next steps
-* Provision an Azure Cosmos DB account with [periodic backup mode'(configure-periodic-backup-restore.md).
+* Provision an Azure Cosmos DB account with [periodic backup mode](configure-periodic-backup-restore.md).
* Provision an account with continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). * Restore continuous backup account using [Azure portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), [CLI](restore-account-continuous-backup.md#restore-account-cli), or [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template).
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 10/25/2022 Last updated : 11/08/2022
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
The article helps you understand and manage tenants associated with your Microso
A tenant is a digital representation of your organization and is primarily associated with a domain, like Microsoft.com. It's an environment managed through Azure Active Directory that enables you to assign users permissions to manage Azure resources and billing.
-Each tenant is distinct and separate from other tenants, yet you can allow guest users from other tenants to access your tenant to track your costs and manage billing.
+Each tenant is distinct and separate from other tenants. You can allow users from other tenants to access your billing account by using one of the following methods:
+- Creating guest users in your tenants and assigning the appropriate billing role.
+- Associating the other tenant to your tenant and assigning the appropriate billing role.
## What's an associated tenant? An associated tenant is a tenant that is linked to your primary billing tenant's billing account. You can move Microsoft 365 subscriptions to these tenants. You can also assign billing account roles to users in associated billing tenants. Read more about associated tenants in [Manage billing across multiple tenants using associated billing tenants](../manage/manage-billing-across-tenants.md).
databox-online Azure Stack Edge Gpu Deploy Configure Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-certificates.md
Previously updated : 05/31/2022 Last updated : 11/08/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro GPU so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Gpu Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md
Previously updated : 08/01/2022 Last updated : 11/07/2022 zone_pivot_groups: azure-stack-edge-device-deployment # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Pro R Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-activate.md
Previously updated : 10/13/2022 Last updated : 11/07/2022 # Customer intent: As an IT admin, I need to understand how to activate Azure Stack Edge Pro R device so I can use it to transfer data to Azure.
databox-online Azure Stack Edge Pro R Deploy Configure Certificates Vpn Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-configure-certificates-vpn-encryption.md
Previously updated : 10/13/2022 Last updated : 11/07/2022 # Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro R so I can use it to transfer data to Azure.
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
The asset management possibilities for this tool are substantial and continue to
|Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|Free*<br>* Some features of the inventory page, such as the [software inventory](#access-a-software-inventory) require paid solutions to be in-place|
+|Pricing:|Free<br> Some features of the inventory page, such as the [software inventory](#access-a-software-inventory), require paid solutions to be in place|
|Required roles and permissions:|All users| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Title: Reference list of attack paths
+ Title: Reference list of attack paths and cloud security graph components
description: This article lists Microsoft Defender for Cloud's list of attack paths based on resource. Previously updated : 09/21/2022 Last updated : 11/08/2022
-# Reference list of attack paths
+# Reference list of attack paths and cloud security graph components
-This article lists the attack paths, connections and insights you might see in Microsoft Defender for Cloud. What you are shown in your environment depends on the resources you're protecting and your customized configuration.
+This article lists the attack paths, connections, and insights you might see in Microsoft Defender for Cloud related to Defender Cloud Security Posture Management (CSPM). What you're shown in your environment depends on the resources you're protecting and your customized configuration. You'll need to [enable Defender CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view your attack paths. Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
To learn about how to respond to these attack paths, see [Identify and remediate attack paths](how-to-manage-attack-path.md).
To learn about how to respond to these attack paths, see [Identify and remediate
### Azure VMs
+Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentless.md).
+ | Attack Path Display Name | Attack Path Description | |--|--| | Internet exposed VM has high severity vulnerabilities | Virtual machine '\[MachineName]' is reachable from the internet and has high severity vulnerabilities \[RCE] |
To learn about how to respond to these attack paths, see [Identify and remediate
### AWS VMs
+Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentless.md).
+ | Attack Path Display Name | Attack Path Description | |--|--| | Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | AWS EC2 instance '\[EC2Name]' is reachable from the internet, has high severity vulnerabilities\[RCE] and has '\[permission]' permission to account '\[AccountName]' |
To learn about how to respond to these attack paths, see [Identify and remediate
### Azure data
+Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md).
+ | Attack Path Display Name | Attack Path Description | |--|--| | Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM '\[SqlVirtualMachineName]' is reachable from the internet, has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM |
To learn about how to respond to these attack paths, see [Identify and remediate
### AWS Data
+Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md).
+ | Attack Path Display Name | Attack Path Description | |--|--| | Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | S3 bucket '\[BucketName]' with sensitive data is reachable from the internet and allows public read access without authorization required. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). | ### Azure containers
+Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) container data plane workloads in security explorer.
+ | Attack Path Display Name | Attack Path Description | |--|--|--| | Internet exposed Kubernetes pod is running a container with RCE vulnerabilities | Internet exposed Kubernetes pod '\[pod name]' in namespace '\[namespace]' is running a container '\[container name]' using image '\[image name]' which has vulnerabilities allowing remote code execution | | Kubernetes pod running on an internet exposed node uses host network is running a container with RCE vulnerabilities | Kubernetes pod '\[pod name]' in namespace '\[namespace]' with host network access enabled is exposed to the internet via the host network. The pod is running container '\[container name]' using image '\[image name]' which has vulnerabilities allowing remote code execution |
-## Insights and connections
+## Cloud security graph components list
+
+This section lists all of the cloud security graph components (connections & insights) that can be used in queries with the [cloud security explorer](concept-attack-path.md).
### Insights
defender-for-cloud Concept Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md
description: Learn how to prioritize remediation of cloud misconfigurations and
Previously updated : 09/21/2022 Last updated : 11/08/2022 # What are the cloud security graph, attack path analysis, and the cloud security explorer?
+<iframe src="https://aka.ms/docs/player?id=36a5c440-00e6-4bd8-be1f-a27fbd007119" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+ One of the biggest challenges that security teams face today is the number of security issues they deal with on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all.
-Defender for Cloud's contextual security capabilities assists security teams to assess the risk behind each security issue, and identify the highest risk issues that need to be resolved soonest. Defender for Cloud assists security teams to reduce the risk of an impactful breach to their environment in the most effective way.
+Defender for Cloud's contextual security capabilities assist security teams in assessing the risk behind each security issue and identifying the highest risk issues that need to be resolved soonest. Defender for Cloud helps security teams reduce the risk of an impactful breach to their environment in the most effective way.
+
+All of these capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan and require the enablement of [agentless scanning for VMs](concept-agentless-data-collection.md).
## What is cloud security graph?
Using the cloud security explorer, you can proactively identify security risks i
Cloud security explorer provides you with the ability to perform proactive exploration. You can search for security risks within your organization by running graph-based path-finding queries on top of the contextual security data that is already provided by Defender for Cloud, such as cloud misconfigurations, vulnerabilities, resource context, lateral movement possibilities between resources, and more.
-Learn how to use the [cloud security explorer](how-to-manage-cloud-security-explorer.md), or check out the list of [insights and connections](attack-path-reference.md#insights-and-connections).
+Learn how to use the [cloud security explorer](how-to-manage-cloud-security-explorer.md), or check out the [cloud security graph components list](attack-path-reference.md#cloud-security-graph-components-list).
## Next steps
-[Identify and remediate attack paths](how-to-manage-attack-path.md)
+- [Enable Defender CSPM on a subscription](enable-enhanced-security.md#enable-enhanced-security-features-on-a-subscription)
+- [Identify and remediate attack paths](how-to-manage-attack-path.md)
+- [Enabling agentless scanning for machines](enable-vulnerability-assessment-agentless.md#enabling-agentless-scanning-for-machines)
+- [Build a query with the cloud security explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Defender for Cloud continually assesses your resources, subscriptions, and organ
## Defender CSPM plan options
-The Defender CSPM plan comes with two options, foundational CSPM capabilities and Defender Cloud Security Posture Management (CSPM). When you deploy Defender for Cloud to your subscription and resources, you'll automatically gain the basic coverage offered by the CSPM plan. To gain access to the other capabilities provided by Defender CSPM, you'll need to [enable the Defender Cloud Security Posture Management (CSPM) plan](enable-enhanced-security.md) on your subscription and resources.
+The Defender CSPM plan comes with two options, foundational CSPM capabilities and Defender CSPM. When you deploy Defender for Cloud to your subscription and resources, you'll automatically gain the basic coverage offered by the CSPM plan. To gain access to the other capabilities provided by Defender CSPM, you'll need to [enable the Defender Cloud Security Posture Management (CSPM) plan](enable-enhanced-security.md) on your subscription and resources.
The following table summarizes what's included in each plan and their cloud availability.
-| Feature | Foundational CSPM capabilities | Defender Cloud Security Posture Management (CSPM) | Cloud availability |
+| Feature | Foundational CSPM capabilities | Defender CSPM | Cloud availability |
|--|--|--|--| | Continuous assessment of the security configuration of your cloud resources | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Security recommendations to fix misconfigurations and weaknesses](review-security-recommendations.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png":::| Azure, AWS, GCP, on-premises |
The following table summarizes what's included in each plan and their cloud avai
> [!NOTE] > If you have enabled Defender for DevOps, you will only gain cloud security graph and attack path analysis to the artifacts that arrive through those connectors. >
-> To enable Governance for for DevOps related recommendations, the Defender Cloud Security Posture Management (CSPM) plan needs to be enabled on the Azure subscription that hosts the DevOps connector.
+> To enable Governance for DevOps related recommendations, the Defender CSPM plan needs to be enabled on the Azure subscription that hosts the DevOps connector.
## Security governance and regulatory compliance
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
The **Azure Policy add-on for Kubernetes** collects cluster and workload configu
| Pod Name | Namespace | Kind | Short Description | Capabilities | Resource limits | Egress Required | |--|--|--|--|--|--|--|
-| azuredefender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 64Mi<br> <br> cpu: 60m | No |
-| azuredefender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
-| azuredefender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
+| microsoft-defender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 64Mi<br> <br> cpu: 60m | No |
+| microsoft-defender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
+| microsoft-defender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
\* resource limits aren't configurable
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
To enable this plan:
<a name="auto-provision-mma"></a> -- **SQL Server on Azure VM** - If your SQL machine is hosted on an Azure VM, you can [customize the Log Analytics agent configuration](working-with-log-analytics-agent.md). Alternatively, you can follow the manual procedure for [Onboard your Azure Stack Hub VMs](quickstart-onboard-machines.md?pivots=azure-portal#onboard-your-azure-stack-hub-vms).
+- **SQL Server on Azure VM** - If your SQL machine is hosted on an Azure VM, you can [customize the Log Analytics agent configuration](working-with-log-analytics-agent.md).
- **SQL Server on Azure Arc-enabled servers** - If your SQL Server is managed by [Azure Arc](../azure-arc/index.yml) enabled servers, you can deploy the Log Analytics agent using the Defender for Cloud recommendation "Log Analytics agent should be installed on your Windows-based Azure Arc machines (Preview)". - **SQL Server on-premises** - If your SQL Server is hosted on an on-premises Windows machine without Azure Arc, you can connect the machine to Azure by either:
defender-for-cloud Enable Vulnerability Assessment Agentless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment-agentless.md
When you enable agentless vulnerability assessment:
## Enabling agentless scanning for machines
-When you enable Defender [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Defender for Servers P2](defender-for-servers-introduction.md), agentless scanning is enabled on by default.
+When you enable [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Defender for Servers P2](defender-for-servers-introduction.md), agentless scanning is enabled by default.
If you have Defender for Servers P2 already enabled and agentless scanning is turned off, you need to turn on agentless scanning manually.
defender-for-cloud Episode Nineteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-nineteen.md
Title: Defender for DevOps | Defender for Cloud in the Field
description: Learn about Defender for Cloud integration with Defender for DevOps. Previously updated : 11/03/2022 Last updated : 11/08/2022 # Defender for DevOps | Defender for Cloud in the Field
Last updated 11/03/2022
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Cloud security explorer and attack path analysis](episode-twenty.md)
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
+
+ Title: Cloud security explorer and attack path analysis | Defender for Cloud in the Field
+
+description: Learn about cloud security explorer and attack path analysis.
+ Last updated : 11/08/2022++
+# Cloud security explorer and attack path analysis | Defender for Cloud in the Field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Tal Rosler joins Yuri Diogenes to talk about cloud security explorer and attack path analysis, two new capabilities in Defender CSPM that were released at Ignite. The talk explains the rationale behind creating these features and how to use them to prioritize what is most important to keep your environment secure. Tal also demonstrates how to use these capabilities to quickly identify vulnerabilities and misconfigurations in cloud workloads.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=ce442350-7fab-40c0-b934-d93027b00853" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:27](/shows/mdc-in-the-field/security-explorer#time=01m27s) - The business case for cloud security graph
+
+- [03:00](/shows/mdc-in-the-field/security-explorer#time=03m00s) - What is cloud security graph
+
+- [05:06](/shows/mdc-in-the-field/security-explorer#time=05m06s) - Demonstration
+
+- [09:30](/shows/mdc-in-the-field/security-explorer#time=09m30s) - How paths are created under attack path
+
+- [12:00](/shows/mdc-in-the-field/security-explorer#time=12m00s) - Cloud security explorer demonstration
+
+- [19:25](/shows/mdc-in-the-field/security-explorer#time=19m25s) - Saving cloud security explorer queries
++
+## Recommended resources
+ - [Learn more](/defender-for-cloud/concept-attack-path.md) about attack path analysis.
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. Previously updated : 10/03/2022 Last updated : 11/08/2022 # Identify and remediate attack paths
You can check out the full list of [Attack path names and descriptions](attack-p
| Aspect | Details | |--|--| | Release state | Preview |
+| Prerequisite | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md) <br> - [Enable Defender for CSPM](enable-enhanced-security.md) <br> - [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. |
| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled | | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
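
The **Enable Defender for CSPM** prerequisite in the table above can also be set outside the portal. The following Azure CLI sketch is a non-authoritative example; it assumes the `CloudPosture` pricing name maps to the Defender CSPM plan and that your CLI version includes the `az security pricing` commands:

```bash
# Enable the Defender CSPM plan on the current subscription
# (the plan name "CloudPosture" is an assumption, not taken from this article)
az security pricing create --name CloudPosture --tier Standard

# Confirm the plan's pricing tier
az security pricing show --name CloudPosture
```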
On this page you can organize your attack paths based on name, environment, path
For each attack path, you can see all of the risk categories and any affected resources.
-The potential risk categories include Credentials exposure, Compute abuse, Data exposure, Subscription/account takeover.
+The potential risk categories include credentials exposure, compute abuse, data exposure, and subscription and account takeover.
+
+Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
## Investigate and remediate attack paths
defender-for-cloud How To Manage Aws Assessments Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-aws-assessments-standards.md
Title: Manage AWS assessments and standards
description: Learn how to create custom security assessments and standards for your AWS environment. Previously updated : 10/20/2022 Last updated : 11/08/2022 # Manage AWS assessments and standards
You can either use the built-in regulatory compliance standards or create your o
1. Select **Save**.
-## Create a new custom assessment for your AWS account
+## Create a new custom assessment for your AWS account (Preview)
**To create a new custom assessment for your AWS account**:
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Previously updated : 10/03/2022 Last updated : 11/08/2022 # Cloud security explorer
Defender for Cloud's contextual security capabilities assist security teams in
By using the cloud security explorer, you can proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
-With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, lateral movement between resources and more.
+With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, lateral movement between resources and more.
+
+Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
## Availability | Aspect | Details | |--|--| | Release state | Preview |
+| Prerequisite | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md) <br> - [Enable Defender for CSPM](enable-enhanced-security.md) <br> - [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. |
| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled | | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
You can review the [full list of recommendations, insights and connections](atta
## Next steps
-[Create custom security initiatives and policies](custom-security-policies.md)
+View the [reference list of attack paths and cloud security graph components](attack-path-reference.md)
+
+Learn about the [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options)
defender-for-cloud How To Manage Gcp Assessments Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-gcp-assessments-standards.md
Title: Manage GCP assessments and standards
description: Learn how to create custom security assessments and standards for your GCP environment. Previously updated : 10/18/2022 Last updated : 11/08/2022 # Manage GCP assessments and standards
You can either use the built-in compliance standards or create your own custom s
1. Select **Save**.
-## Create a new custom assessment for your GCP project
+## Create a new custom assessment for your GCP project (Preview)
**To create a new custom assessment to your GCP project**:
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Virtual machines are also created in a customer subscription as part of some Azu
Virtual machines that run in a cloud service are also supported. Only cloud service web and worker roles that run in production slots are monitored. To learn more about cloud services, see [Overview of Azure Cloud Services](../cloud-services/cloud-services-choose-me.md).
-Protection for VMs residing in Azure Stack Hub is also supported. For more information about Defender for Cloud's integration with Azure Stack Hub, see [Onboard your Azure Stack Hub virtual machines to Defender for Cloud](quickstart-onboard-machines.md?pivots=azure-portal#onboard-your-azure-stack-hub-vms).
- ## Next steps - Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md#log-analytics-agent).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud - Previously updated : 10/23/2022 Last updated : 11/07/2022 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an AWS account with either:
- [**Microsoft Defender for Containers**](defender-for-containers-introduction.md) brings threat detection and advanced defenses to [supported Amazon EKS clusters](supported-machines-endpoint-solutions-clouds-containers.md). - [**Microsoft Defender for SQL**](defender-for-sql-introduction.md) brings threat detection and advanced defenses to your SQL Servers running on AWS EC2, AWS RDS Custom for SQL Server. -- **Classic cloud connector** - Requires configuration in your AWS account to create a user that Defender for Cloud can use to connect to your AWS environment. If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors) and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations.
+- **Classic cloud connector** - Requires configuration in your AWS account to create a user that Defender for Cloud can use to connect to your AWS environment. If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors), and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations.
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
The native cloud connector requires:
- An active AWS account, with EC2 instances running SQL server or RDS Custom for SQL Server.
- - Azure Arc for servers installed on your EC2 instances/RDS Custom for SQL Server.
+ - Azure Arc for servers installed on your EC2 instances/RDS Custom for SQL Server.
- (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
- Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If you already have the SSM agent pre-installed, the AMI's are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you will need to install it using either of the following relevant instructions from Amazon:
+ Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled; these AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you'll need to install it using either of the following relevant instructions from Amazon:
- [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) > [!NOTE] > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
- - Additional extensions should be enabled on the Arc-connected machines.
+ - More extensions should be enabled on the Arc-connected machines.
- Log Analytics (LA) agent on Arc machines, and ensure the selected workspace has security solution installed. The LA agent is currently configured in the subscription level. All of your multicloud AWS accounts and GCP projects under the same subscription will inherit the subscription settings. Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
The native cloud connector requires:
- Azure Arc for servers installed on your EC2 instances. - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
- Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If that is the case, their AMI's are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you will need to install it using either of the following relevant instructions from Amazon:
+ Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If that is the case, their AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you'll need to install it using either of the following relevant instructions from Amazon:
- [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html) > [!NOTE]
The native cloud connector requires:
- If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
- - Additional extensions should be enabled on the Arc-connected machines:
+ - Other extensions should be enabled on the Arc-connected machines:
- Microsoft Defender for Endpoint - VA solution (TVM/Qualys) - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
-
- The LA agent is currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regards to the LA agent.
+
+ The LA agent is currently configured at the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings regarding the LA agent.
Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
The native cloud connector requires:
Using both the classic and native connectors can produce duplicate recommendations.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to **Defender for Cloud** > **Environment settings**.
The native cloud connector requires:
1. Enter the details of the AWS account, including the location where you'll store the connector resource. :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-details.png" alt-text="Step 1 of the add AWS account wizard: Enter the account details.":::
-
+ (Optional) Select **Management account** to create a connector to a management account. Connectors will be created for each member account discovered under the provided management account. Auto-provisioning will be enabled for all of the newly onboarded accounts. 1. Select **Next: Select plans**.<a name="cloudtrail-implications-note"></a>
The native cloud connector requires:
1. By default the **Servers** plan is set to **On**. This is necessary to extend Defender for server's coverage to your AWS EC2. Ensure you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud).
- - (Optional) Select **Configure**, to edit the configuration as required.
+ - (Optional) Select **Configure**, to edit the configuration as required.
1. By default the **Containers** plan is set to **On**. This is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan.
- > [!Note]
+ > [!Note]
> Azure Arc-enabled Kubernetes, the Defender Arc extension, and the Azure Policy Arc extension should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Arc, if necessary) as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks). - - (Optional) Select **Configure**, to edit the configuration as required. If you choose to disable this configuration, the `Threat detection (control plane)` feature will be disabled. Learn more about the [feature availability](supported-machines-endpoint-solutions-clouds-containers.md). 1. By default the **Databases** plan is set to **On**. This is necessary to extend Defender for SQL's coverage to your AWS EC2 and RDS Custom for SQL Server.
The native cloud connector requires:
1. Select **Next: Configure access**. 1. Download the CloudFormation template.
-
+ 1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen. If you're onboarding a management account, you'll need to run the CloudFormation template both as Stack and as StackSet. Connectors will be created for the member accounts up to 24 hours after the onboarding.
-
+ 1. Select **Next: Review and generate**.
-
+ 1. Select **Create**. Defender for Cloud will immediately start scanning your AWS resources and you'll see security recommendations within a few hours. For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
-## CloudFormation deployment source
+## AWS authentication process
+
+Federated authentication is used between Microsoft Defender for Cloud and AWS. All of the resources related to authentication are created as part of the CloudFormation template deployment, including:
+
+- An identity provider (OpenID Connect)
+- Identity and Access Management (IAM) roles with a federated principal (connected to the identity provider).
+
+The architecture of the authentication process across clouds is as follows:
++
+1. Microsoft Defender for Cloud CSPM service acquires an Azure AD token with a validity lifetime of one hour, signed by Azure AD using the RS256 algorithm.
+
+1. The Azure AD token is exchanged for AWS short-lived credentials, and Defender for Cloud's CSPM service assumes the CSPM IAM role (assumed with web identity).
+
+1. Since the principal of the role is a federated identity, as defined in a trust relationship policy (a hypothetical sketch of such a policy follows these steps), the AWS identity provider validates the Azure AD token against Azure AD through a process that includes:
-As part of connecting an AWS account to Microsoft Defender for Cloud, a CloudFormation template should be deployed to the AWS account. This CloudFormation template creates all the required resources so Microsoft Defender for Cloud can connect to the AWS account.
+ - audience validation
+ - signing of the token
+ - certificate thumbprint
-The CloudFormation template should be deployed using Stack (or StackSet if you have a management account).
+ 1. The Microsoft Defender for Cloud CSPM role is assumed only after the validation conditions defined in the trust relationship have been met. The conditions defined at the role level are used for validation within AWS and allow only the Microsoft Defender for Cloud CSPM application (the validated audience) access to the specific role (and not any other Microsoft token).
-When deploying the CloudFormation template, the Stack creation wizard offers the following options:
+1. After the Azure AD token is validated by the AWS identity provider, AWS STS exchanges the token for AWS short-lived credentials, which the CSPM service uses to scan the AWS account.
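
The following is a hypothetical sketch of what such a trust relationship policy could look like. The account ID, tenant ID, audience value, and file name are placeholders for illustration only; the actual policy is created by the CloudFormation template, not by hand:

```bash
# Hypothetical trust relationship policy for the Defender for Cloud CSPM IAM role.
# All identifiers below are placeholders, not values produced by the real template.
cat > defender-cspm-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/sts.windows.net/<TENANT_ID>/"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "sts.windows.net/<TENANT_ID>/:aud": "<EXPECTED_AUDIENCE>"
        }
      }
    }
  ]
}
EOF
```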
+## CloudFormation deployment source
+
+As part of connecting an AWS account to Microsoft Defender for Cloud, a CloudFormation template should be deployed to the AWS account. This CloudFormation template creates all of the resources required for Microsoft Defender for Cloud to connect to the AWS account.
+
+The CloudFormation template should be deployed using Stack (or StackSet if you have a management account).
+
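As a rough illustration of the Stack option, the AWS CLI sketch below deploys a template from an S3 URL. The stack name, template URL, and the `CAPABILITY_NAMED_IAM` flag are assumptions for illustration only; follow the on-screen instructions in the AWS console for the actual deployment:

```bash
# Deploy the downloaded template as a CloudFormation stack (all values are placeholders)
aws cloudformation create-stack \
  --stack-name DefenderForCloudOnboarding \
  --template-url https://<your-bucket>.s3.amazonaws.com/<template-file> \
  --capabilities CAPABILITY_NAMED_IAM

# Check the deployment status
aws cloudformation describe-stacks --stack-name DefenderForCloudOnboarding
```
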
+When deploying the CloudFormation template, the Stack creation wizard offers the following options:
+ 1. **Amazon S3 URL** – upload the downloaded CloudFormation template to your own S3 bucket with your own security configurations. Enter the URL to the S3 bucket in the AWS deployment wizard.
-1. **Upload a template file** – AWS will automatically create an S3 bucket in which the CloudFormation template will be saved. With this automation, the S3 bucket is created with a security misconfiguration that will result in the security recommendation "S3 buckets should require requests to use Secure Socket Layer". Apply the following policy to fix this recommendation:
-
-```bash
-{
- "Id": "ExamplePolicy",
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "AllowSSLRequestsOnly",
- "Action": "s3:*",
- "Effect": "Deny",
- "Resource": [
- "<S3_Bucket ARN>",
- "<S3_Bucket ARN>/*"
- ],
- "Condition": {
- "Bool": {
- "aws:SecureTransport": "false"
- }
- },
- "Principal": "*"
- }
- ]
-}
-```
+1. **Upload a template file** – AWS will automatically create an S3 bucket that the CloudFormation template will be saved to. The automation for the S3 bucket will have a security misconfiguration that will cause the `S3 buckets should require requests to use Secure Socket Layer` recommendation to appear. You can remediate this recommendation by applying the following policy:
+
+ ```bash
+ {
+   "Id": "ExamplePolicy",
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Sid": "AllowSSLRequestsOnly",
+       "Action": "s3:*",
+       "Effect": "Deny",
+       "Resource": [
+         "<S3_Bucket ARN>",
+         "<S3_Bucket ARN>/*"
+       ],
+       "Condition": {
+         "Bool": {
+           "aws:SecureTransport": "false"
+         }
+       },
+       "Principal": "*"
+     }
+   ]
+ }
+ ```
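
One way to apply the policy above, assuming it's saved locally as `policy.json` and that `<your-bucket-name>` is the bucket AWS created for the template (both are placeholders):

```bash
# Attach the SSL-only bucket policy to the bucket that holds the CloudFormation template
aws s3api put-bucket-policy --bucket <your-bucket-name> --policy file://policy.json
```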
### Remove 'classic' connectors
If you have any existing connectors created with the classic cloud connectors ex
::: zone-end - ::: zone pivot="classic-connector" ## Availability
If you have any existing connectors created with the classic cloud connectors ex
|Required roles and permissions:|**Owner** on the relevant Azure subscription<br>**Contributor** can also connect an AWS account if an owner provides the service principal details| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)| -- ## Connect your AWS account
-Follow the steps below to create your AWS cloud connector.
+Follow the steps below to create your AWS cloud connector.
### Step 1. Set up AWS Security Hub:
There are two ways to allow Defender for Cloud to authenticate to AWS:
- **AWS user for Defender for Cloud** - A less secure option if you don't have IAM enabled #### Create an IAM role for Defender for Cloud+ 1. From your Amazon Web Services console, under **Security, Identity & Compliance**, select **IAM**. :::image type="content" source="./media/quickstart-onboard-aws/aws-identity-and-compliance.png" alt-text="AWS services.":::
There are two ways to allow Defender for Cloud to authenticate to AWS:
- **Account ID** - enter the Microsoft Account ID (**158177204117**) as shown in the AWS connector page in Defender for Cloud. - **Require External ID** - should be selected
- - **External ID** - enter the subscription ID as shown in the AWS connector page in Defender for Cloud
+ - **External ID** - enter the subscription ID as shown in the AWS connector page in Defender for Cloud.
1. Select **Next**. 1. In the **Attach permission policies** section, select the following [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html):
There are two ways to allow Defender for Cloud to authenticate to AWS:
1. In the Roles list, choose the role you created.
-1. Save the Amazon Resource Name (ARN) for later.
+1. Save the Amazon Resource Name (ARN) for later.
+
+#### Create an AWS user for Defender for Cloud
-#### Create an AWS user for Defender for Cloud
1. Open the **Users** tab and select **Add user**.
-1. In the **Details** step, enter a username for Defender for Cloud and ensure that you select **Programmatic access** for the AWS Access Type.
+1. In the **Details** step, enter a username for Defender for Cloud and ensure that you select **Programmatic access** for the AWS Access Type.
1. Select **Next Permissions**. 1. Select **Attach existing policies directly** and apply the following policies: - SecurityAudit - AmazonSSMAutomationRole - AWSSecurityHubReadOnlyAccess
-
+ 1. Select **Next: Tags**. Optionally add tags. Adding Tags to the user doesn't affect the connection. 1. Select **Review**. 1. Save the automatically generated **Access key ID** and **Secret access key** CSV file for later. 1. Review the summary and select **Create user**. - ### Step 3. Configure the SSM Agent AWS Systems Manager is required for automating tasks across your AWS resources. If your EC2 instances don't have the SSM Agent, follow the relevant instructions from Amazon:
AWS Systems Manager is required for automating tasks across your AWS resources.
- [Installing and Configuring SSM Agent on Windows Instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-win.html) - [Installing and Configuring SSM Agent on Amazon EC2 Linux Instances](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-ssm-agent.html) - ### Step 4. Complete Azure Arc prerequisites+ 1. Make sure the appropriate [Azure resources providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) are registered: - Microsoft.HybridCompute - Microsoft.GuestConfiguration 1. Create a Service Principal for onboarding at scale. As an **Owner** on the subscription you want to use for the onboarding, create a service principal for Azure Arc onboarding as described in [Create a Service Principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale). - ### Step 5. Connect AWS to Defender for Cloud 1. From Defender for Cloud's menu, open **Environment settings** and select the option to switch back to the classic connectors experience.
AWS Systems Manager is required for automating tasks across your AWS resources.
1. Select **Next**. 1. Configure the options in the **Azure Arc Configuration** tab:
- Defender for Cloud discovers the EC2 instances in the connected AWS account and uses SSM to onboard them to Azure Arc.
+ Defender for Cloud discovers the EC2 instances in the connected AWS account and uses SSM to onboard them to Azure Arc.
> [!TIP] > For the list of supported operating systems, see [What operating systems for my EC2 instances are supported?](#what-operating-systems-for-my-ec2-instances-are-supported) in the FAQ. 1. Select the **Resource Group** and **Azure Region** that the discovered AWS EC2s will be onboarded to in the selected subscription. 1. Enter the **Service Principal ID** and **Service Principal Client Secret** for Azure Arc as described here [Create a Service Principal for onboarding at scale](../azure-arc/servers/onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale)
- 1. If the machine is connecting to the internet via a proxy server, specify the proxy server IP address or the name and port number that the machine uses to communicate with the proxy server. Enter the value in the format ```http://<proxyURL>:<proxyport>```
+ 1. If the machine is connecting to the internet via a proxy server, specify the proxy server IP address, or the name and port number that the machine uses to communicate with the proxy server. Enter the value in the format ```http://<proxyURL>:<proxyport>```
1. Select **Review + create**. Review the summary information
When the connector is successfully created, and AWS Security Hub has been config
- The AWS CIS standard will be shown in the Defender for Cloud's regulatory compliance dashboard. - If Security Hub policy is enabled, recommendations will appear in the Defender for Cloud portal and the regulatory compliance dashboard 5-10 minutes after onboard completes. - ::: zone-end :::image type="content" source="./media/quickstart-onboard-aws/aws-resources-in-recommendations.png" alt-text="AWS resources and recommendations in Defender for Cloud's recommendations page" lightbox="./media/quickstart-onboard-aws/aws-resources-in-recommendations.png"::: - ## Monitoring your AWS resources As you can see in the previous screenshot, Defender for Cloud's security recommendations page displays your AWS resources. You can use the environments filter to enjoy Defender for Cloud's multicloud capabilities: view the recommendations for Azure, AWS, and GCP resources together. To view all the active recommendations for your resources by resource type, use Defender for Cloud's asset inventory page and filter to the AWS resource type in which you're interested: - ## FAQ - AWS in Defender for Cloud
defender-for-cloud Quickstart Onboard Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md
Learn more about [Azure Arc-enabled servers](../azure-arc/servers/overview.md).
From here, choose the relevant procedure below depending on the type of machines you're onboarding:
- - [Onboard your Azure Stack Hub VMs](#onboard-your-azure-stack-hub-vms)
- [Onboard your Linux machines](#onboard-your-linux-machines) - [Onboard your Windows machines](#onboard-your-windows-machines)
-### Onboard your Azure Stack Hub VMs
-
-To add Azure Stack Hub VMs, you need the information on the **Agents management** page and to configure the **Azure Monitor, Update and Configuration Management** virtual machine extension on the virtual machines running on your Azure Stack Hub instance.
-
-1. From the **Agents management** page, copy the **Workspace ID** and **Primary Key** into Notepad.
-1. Log into your **Azure Stack Hub** portal and open the **Virtual machines** page.
-1. Select the virtual machine that you want to protect with Defender for Cloud.
- >[!TIP]
- > For information on how to create a virtual machine on Azure Stack Hub, see [this quickstart for Windows virtual machines](/azure-stack/user/azure-stack-quick-windows-portal) or [this quickstart for Linux virtual machines](/azure-stack/user/azure-stack-quick-linux-portal).
-1. Select **Extensions**. The list of virtual machine extensions installed on this virtual machine is shown.
-1. Select the **Add** tab. The **New Resource** menu shows the list of available virtual machine extensions.
-1. Select the **Azure Monitor, Update and Configuration Management** extension and select **Create**. The **Install extension** configuration page opens.
- >[!NOTE]
- > If you do not see the **Azure Monitor, Update and Configuration Management** extension listed in your marketplace, please reach out to your Azure Stack Hub operator to make it available.
-1. On the **Install extension** configuration page, paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied into Notepad in the previous step.
-1. When you complete the configuration, select **OK**. The extension's status will show as **Provisioning Succeeded**. It might take up to one hour for the virtual machine to appear in Defender for Cloud.
- ### Onboard your Linux machines To add Linux machines, you need the WGET command from the **Agents management** page.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Agentless vulnerability scanning is available in both Defender Cloud Security Po
Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across hybrid and multicloud environments including Azure, AWS, Google, and on-premises resources.
-Now, the new Defender for DevOps service integrates source code management systems, like GitHub and Azure DevOps, into Defender for Cloud. With this new integration we are empowering security teams to protect their resources from code to cloud.
+Now, the new Defender for DevOps plan integrates source code management systems, like GitHub and Azure DevOps, into Defender for Cloud. With this new integration we are empowering security teams to protect their resources from code to cloud.
Defender for DevOps allows you to gain visibility into and manage your connected developer environments and code resources. Currently, you can connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) systems to Defender for Cloud and onboard DevOps repositories to Inventory and the new DevOps Security page. It provides security teams with a high-level overview of the discovered security issues that exist within them in a unified DevOps Security page.
You can configure the Microsoft Security DevOps tools on Azure Pipelines and Git
| [Terrascan](https://github.com/tenable/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) | | [Trivy](https://github.com/aquasecurity/trivy) | Container images, file systems, git repositories | [Apache License 2.0](https://github.com/tenable/terrascan/blob/master/LICENSE) |
-The following new recommendations are now available for DevOps security assessments:
+The following new recommendations are now available for DevOps:
| Recommendation | Description | Severity | |--|--|--|
Learn more on how to [Improve your regulatory compliance](regulatory-compliance-
We have renamed the Auto-provisioning page to **Settings & monitoring**.
-Auto-provisioning was meant to allow at-scale enablement of pre-requisites, which are needed by Defender for Cloud's advanced features and capabilities. To better support our expanded capabilities, we are launching a new experience with the following changes:
+Auto-provisioning was meant to allow at-scale enablement of prerequisites, which are needed by Defender for Cloud's advanced features and capabilities. To better support our expanded capabilities, we are launching a new experience with the following changes:
**The Defender for Cloud's plans page now includes**: - When you enable Defender plans, a Defender plan that requires monitoring components automatically turns on the required components with default settings. These settings can be edited by the user at any time.
Learn more about [managing your monitoring settings](monitoring-components.md).
One of Microsoft Defender for Cloud's main pillars for cloud security is Cloud Security Posture Management (CSPM). CSPM provides you with hardening guidance that helps you efficiently and effectively improve your security. CSPM also gives you visibility into your current security situation.
-We are announcing the addition of the new Defender Cloud Security Posture Management (CSPM) plan for Defender for Cloud. Defender Cloud Security Posture Management (CSPM) enhances the security capabilities of Defender for Cloud and includes the following new and expanded features:
+We are announcing a new Defender plan: Defender CSPM. This plan enhances the security capabilities of Defender for Cloud and includes the following new and expanded features:
- Continuous assessment of the security configuration of your cloud resources - Security recommendations to fix misconfigurations and weaknesses
We are announcing the addition of the new Defender Cloud Security Posture Manage
- Attack path analysis - Agentless scanning for machines
-Learn more about the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md).
+Learn more about the [Defender CSPM plan](concept-cloud-security-posture-management.md).
### MITRE ATT&CK framework mapping is now available also for AWS and GCP security recommendations
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 10/23/2022 Last updated : 11/08/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Deprecation of AWS Lambda recommendation](#deprecation-of-aws-lambda-recommendation) | November 2023 |
+| [The ability to create custom assessments in AWS and GCP (Preview) is set to be deprecated](#the-ability-to-create-custom-assessments-in-aws-and-gcp-preview-is-set-to-be-deprecated) | November 2022 |
+| [Recommendation to configure dead-letter queues for Lambda functions to be deprecated](#recommendation-to-configure-dead-letter-queues-for-lambda-functions-to-be-deprecated) | November 2022 |
+| [Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated](#recommendation-to-enable-diagnostic-logs-for-virtual-machine-scale-sets-to-be-deprecated) | December 2022 |
-### Deprecation of AWS Lambda recommendation
+### The ability to create custom assessments in AWS and GCP (Preview) is set to be deprecated
-**Estimated date for change: November 2023**
+**Estimated date for change: November 21st, 2022**
-The following recommendation is set to be deprecated [`Lambda functions should have a dead-letter queue configured`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/AwsRecommendationDetailsBlade/assessmentKey/dcf10b98-798f-4734-9afd-800916bf1e65/showSecurityCenterCommandBar~/false).
+The ability to create custom assessments for [AWS accounts](how-to-manage-aws-assessments-standards.md#create-a-new-custom-assessment-for-your-aws-account-preview) and [GCP projects](how-to-manage-gcp-assessments-standards.md#create-a-new-custom-assessment-for-your-gcp-project-preview) (Preview) is set to be deprecated. This feature will be replaced with a new feature that will be a part of the [Defender CSPM](concept-cloud-security-posture-management.md) plan, which will be released in the future.
+
+### Recommendation to configure dead-letter queues for Lambda functions to be deprecated
+
+**Estimated date for change: November 2022**
+
+The recommendation [`Lambda functions should have a dead-letter queue configured`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/AwsRecommendationDetailsBlade/assessmentKey/dcf10b98-798f-4734-9afd-800916bf1e65/showSecurityCenterCommandBar~/false) is set to be deprecated.
+
+| Recommendation | Description | Severity |
+|--|--|--|
+| Lambda functions should have a dead-letter queue configured | This control checks whether a Lambda function is configured with a dead-letter queue. The control fails if the Lambda function is not configured with a dead-letter queue. As an alternative to an on-failure destination, you can configure your function with a dead-letter queue to save discarded events for further processing. A dead-letter queue acts the same as an on-failure destination. It is used when an event fails all processing attempts or expires without being processed. A dead-letter queue allows you to look back at errors or failed requests to your Lambda function to debug or identify unusual behavior. From a security perspective, it is important to understand why your function failed and to ensure that your function does not drop data or compromise data security as a result. For example, if your function cannot communicate to an underlying resource, that could be a symptom of a denial of service (DoS) attack elsewhere in the network. | Medium |
+
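If you want a dead-letter queue in place regardless of the recommendation's retirement, the following AWS CLI sketch shows one way to configure it; the function name, region, account ID, and queue name are placeholders:

```bash
# Point an existing Lambda function at an SQS queue as its dead-letter queue
aws lambda update-function-configuration \
  --function-name <your-function-name> \
  --dead-letter-config TargetArn=arn:aws:sqs:<region>:<account-id>:<queue-name>
```
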
+### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated
+
+**Estimated date for change: December 2022**
+
+The recommendation [`Diagnostic logs in Virtual Machine Scale Sets should be enabled`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/961eb649-3ea9-f8c2-6595-88e9a3aeedeb/showSecurityCenterCommandBar~/false) is set to be deprecated.
+
+The related [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) will also be deprecated from any standards displayed in the regulatory compliance dashboard.
+
+| Recommendation | Description | Severity |
+|--|--|--|
+| Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low |
## Next steps
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
Title: Deploy certificates
+ Title: Setting SSL/TLS appliance certificates
description: Learn how to set up and deploy certificates for Defender for IoT. Last updated 02/06/2022
-# About certificates
+# Certificates for appliance encryption and authentication (OT appliances)
This article provides information needed when creating and deploying certificates for Microsoft Defender for IoT. A security, PKI or other qualified certificate lead should handle certificate creation and deployment.
Validation is evaluated against:
Validation is carried out twice:
-1. When uploading the certificate to sensors and on-premises management consoles. If validation fails, the certificate cannot be uploaded.
+1. When uploading the certificate to sensors and on-premises management consoles. If validation fails, the certificate can't be uploaded.
1. When initiating encrypted communication between: - Defender for IoT system components, for example, a sensor and on-premises management console.
- - Defender for IoT and certain 3rd party servers defined in Forwarding rules. See [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information) for more information.
+ - Defender for IoT and certain third-party servers defined in Forwarding rules. For more information, see [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information).
If validation fails, communication between the relevant components is halted and a validation error is presented in the console.
Following sensor and on-premises management console installation, a local self-s
When signing into the sensor and on-premises management console for the first time, Admin users are prompted to upload an SSL/TLS certificate. Using SSL/TLS certificates is highly recommended.
-If the certificate is not created properly by the certificate lead or there are connection issues to it, the certificate cannot be uploaded and users will be forced to work with a locally signed certificate.
+If the certificate isn't created properly by the certificate lead or there are connection issues to it, the certificate can't be uploaded and users will be forced to work with a locally signed certificate.
The option to validate the uploaded certificate and third-party certificates is automatically enabled, but can be disabled. When disabled, encrypted communications between components continues, even if a certificate is invalid.
If you are working with certificate validation, verify access to port 80 is avai
Certificate validation is evaluated against a Certificate Revocation List and the certificate expiration date. This means the appliance should be able to establish a connection to the CRL server defined by the certificate. By default, the certificate will reference the CRL URL on HTTP port 80.
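
To see which CRL endpoint a certificate references, and whether that endpoint is reachable from the appliance network, you can use standard tools such as the following sketch; the CRL URL is a placeholder taken from your certificate's own output:

```bash
# Show the CRL distribution point embedded in the certificate
openssl x509 -in certificate.crt -noout -text | grep -A 2 "CRL Distribution"

# Test that the CRL URL is reachable (HTTP port 80 by default)
curl -I http://<crl-url-from-certificate>
```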
-Some organizational security policies may block access to this port. If your organization does not have access to port 80, you can:
+Some organizational security policies may block access to this port. If your organization doesn't have access to port 80, you can:
1. Define another URL and a specific port in the certificate.
Some organizational security policies may block access to this port. If your org
### File type requirements
-Defender for IoT requires that each CA-signed certificate contains a .key file and a .crt file. These files are uploaded to the sensor and On-premises management console after login. Some organizations may require .pem file. Defender for IoT does not require this file type.
+Defender for IoT requires that each CA-signed certificate contains a .key file and a .crt file. These files are uploaded to the sensor and on-premises management console after login. Some organizations may require a .pem file. Defender for IoT doesn't require this file type.
**.crt – certificate container file**
-A .pem, or .der formatted file with a different extension. The file is recognized by Windows Explorer as a certificate. The .pem file is not recognized by Windows Explorer.
+A .pem, or .der formatted file with a different extension. The file is recognized by Windows Explorer as a certificate. The .pem file isn't recognized by Windows Explorer.
**.key – Private key file**
You may need to convert existing file types to supported types. See [Convert ex
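
For example, here's a minimal OpenSSL sketch for producing the .crt and .key files that Defender for IoT expects from PEM-formatted files; the file names are placeholders, and your certificate lead may use a different conversion path:

```bash
# Convert a PEM certificate to a .crt file
openssl x509 -inform PEM -in certificate.pem -out certificate.crt

# Write an RSA private key out as a .key file
openssl rsa -in private-key.pem -out private-key.key
```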
### Certificate file parameter requirements
-Verify that you have met the following parameter requirements before creating a certificate:
+Verify that you've met the following parameter requirements before creating a certificate:
- [CRT file requirements](#crt-file-requirements) - [Key file requirements](#key-file-requirements)
You can test certificates before deploying them to your sensors and on-premises
| **Test** | **CLI command** | |--|--|
-| Check a Certificate Signing Request (CSR) | openssl req -text -noout -verify -in CSR.csr |
-| Check a private key | openssl rsa -in privateKey.key -check |
-| Check a certificate | openssl x509 -in certificate.crt -text -noout |
+| Check a Certificate Signing Request (CSR) | `openssl req -text -noout -verify -in CSR.csr` |
+| Check a private key | `openssl rsa -in privateKey.key -check` |
+| Check a certificate | `openssl x509 -in certificate.crt -text -noout` |
If these tests fail, review [Certificate file parameter requirements](#certificate-file-parameter-requirements) to verify file parameters are accurate, or consult your certificate lead.
Admin users attempting to log in to the sensor or on-premises management console
| This SSL certificate has expired and is not considered valid. | Create a new certificate with valid dates.| |This certificate has been revoked by the CRL and cannot be trusted for a secure connection | Create a new unrevoked certificate. |
-|The CRL (Certificate Revocation List) location is not reachable. Verify the URL can be accessed from this appliance | Make sure that your network configuration allows the appliance to reach the CRL Server defined in the certificate.You can use a proxy server if there are limitations in establishing a direct connection.
+|The CRL (Certificate Revocation List) location is not reachable. Verify the URL can be accessed from this appliance | Make sure that your network configuration allows the appliance to reach the CRL Server defined in the certificate. You can use a proxy server if there are limitations in establishing a direct connection.
|Certificate validation failed | This indicates a general error in the appliance. Contact [Microsoft Support](https://support.microsoft.com/supportforbusiness/productselection?sapId=82c8f35-1b8e-f274-ec11-c6efdd6dd099).| ### Troubleshoot file conversions
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Webhook extended can be used to send extra data to the endpoint. The extended fe
### Unicode support for certificate passphrases
-Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [About certificates](how-to-deploy-certificates.md#about-certificates)
+Unicode characters are now supported when working with sensor certificate passphrases. For more information, see [Certificates for appliance encryption and authentication (OT appliances)](how-to-deploy-certificates.md#certificates-for-appliance-encryption-and-authentication-ot-appliances).
## Next steps
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-graph.md
description: Learn how to manage a graph of digital twins by connecting them with relationships. Previously updated : 02/23/2022 Last updated : 11/08/2022
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
To use 3D Scenes Studio, you'll need the following resources:
* Obtain *Azure Digital Twins Data Owner* or *Azure Digital Twins Data Reader* access to the instance. For instructions, see [Set up user access permissions](how-to-set-up-instance-cli.md#set-up-user-access-permissions). * Take note of the *host name* of your instance to use later. * An Azure storage account. For instructions, see [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal).
+ * Take note of the *URL* of your storage account to use later.
* A private container in the storage account. For instructions, see [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
- * Take note of the *URL* of your storage container to use later.
+ * Take note of the *name* of your storage container to use later.
* *Storage Blob Data Owner* or *Storage Blob Data Contributor* access to your storage resources. You can grant required roles at either the storage account level or the container level. For instructions and more information about permissions to Azure storage, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role). You should also configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. You can use the following [Azure CLI](/cli/azure/what-is-azure-cli) command to set the minimum required methods, origins, and headers. The command contains one placeholder for the name of your storage account.
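
A minimal sketch of that CLI call is shown below. It assumes the 3D Scenes Studio origin is `https://explorer.digitaltwins.azure.net` and uses `<your-storage-account>` as the placeholder; check the current documentation for the exact origin and headers required:

```bash
# Allow 3D Scenes Studio to call the storage account (origin and headers are assumptions)
az storage cors add \
  --services b \
  --methods GET OPTIONS POST PUT \
  --origins https://explorer.digitaltwins.azure.net \
  --allowed-headers Authorization x-ms-version x-ms-blob-type \
  --account-name <your-storage-account>
```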
In this section, you'll set the environment in *3D Scenes Studio* and customize
1. The **Azure Digital Twins instance URL** should start with *https://*, followed by the *host name* of your instance from the [Prerequisites](#prerequisites) section.
- 1. For the **Azure storage container URL**, enter the URL of your storage container from the [Prerequisites](#prerequisites) section.
+ 1. For the **Azure Storage account URL**, enter the URL of your storage account from the [Prerequisites](#prerequisites) section. For the **Azure Storage container name**, enter the name of your storage container from the [Prerequisites](#prerequisites) section.
1. Select **Save**.
digital-twins Quickstart 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-3d-scenes-studio.md
This sample scenario represents a package distribution center that contains six
1. Use the **Generate environment** button to create a sample environment with models and twins. (If you already have models and twins in your instance, this will not delete them, it will just add more.) :::image type="content" source="media/quickstart-3d-scenes-studio/data-simulator.png" alt-text="Screenshot of the Azure Digital Twins Data simulator. The Generate environment button is highlighted." lightbox="media/quickstart-3d-scenes-studio/data-simulator.png":::
-1. Select **Start simulation** to start sending simulated data to your Azure Digital Twins instance. The simulation will only run while this window is open and the **Start simulation** option is active.
+1. Scroll down and select **Start simulation** to start sending simulated data to your Azure Digital Twins instance. The simulation will only run while this window is open and the **Start simulation** option is active.
You can view the models and graph that have been created by using the Azure Digital Twins Explorer **Graph** tool. To switch to that tool, select the **Graph** icon from the left menu.
Now that all your resources are set up, you can use them to create an environmen
1. For the **Azure Digital Twins instance URL**, fill the *host name* of your instance from the [Collect host name](#collect-host-name) step into this URL: `https://<your-instance-host-name>`.
- 1. For the **Azure Storage container URL**, fill the names of your storage account and container from the [Create storage resources](#create-storage-resources) step into this URL: `https://<your-storage-account>.blob.core.windows.net/<your-container>`.
+ 1. For the **Azure Storage account URL**, fill the name of your storage account from the [Create storage resources](#create-storage-resources) step into this URL: `https://<your-storage-account>.blob.core.windows.net`.
+
+ 1. For the **Azure Storage container name**, enter the name of your storage container from the [Create storage resources](#create-storage-resources) step.
1. Select **Save**.
event-grid Auth0 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/auth0-how-to.md
Title: How to send events from Auth0 to Azure using Azure Event Grid description: How to end events from Auth0 to Azure services with Azure Event Grid. Previously updated : 03/29/2022 Last updated : 11/07/2022 # Integrate Azure Event Grid with Auth0
This article describes how to connect your Auth0 and Azure accounts by creating
## Send events from Auth0 to Azure Event Grid To send Auth0 events to Azure:
-1. [Register the Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with your Azure subscription.
-2. [Authorize Auth0](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
+1. [Register the Event Grid resource provider](#register-the-event-grid-resource-provider) with your Azure subscription (see the CLI sketch after this list).
+1. [Authorize partner](#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
3. Request Auth0 to enable events flow to a partner topic by [setting up an Auth0 partner topic](#set-up-an-auth0-partner-topic) in the Auth0 Dashboard.
-4. [Activate partner topic](subscribe-to-partner-events.md#activate-a-partner-topic) so that your events start flowing to your partner topic.
-5. [Subscribe to events](subscribe-to-partner-events.md#subscribe-to-events).
+4. [Activate partner topic](#activate-a-partner-topic) so that your events start flowing to your partner topic.
+5. [Subscribe to events](#subscribe-to-events).
-This article provides steps for doing the task #3 from the above list. All other tasks are documented in the [Subscribe to partner events](subscribe-to-partner-events.md) article.
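
As an alternative to the portal, the resource provider registration in step 1 can be done with a short Azure CLI sketch:

```bash
# Register the Event Grid resource provider on the current subscription
az provider register --namespace Microsoft.EventGrid

# Confirm the registration state
az provider show --namespace Microsoft.EventGrid --query "registrationState"
```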
++ ## Set up an Auth0 partner topic Part of the integration process is to set up Auth0 for use as an event source by using the [Auth0 Dashboard](https://manage.auth0.com/).
Part of the integration process is to set up Auth0 for use as an event source by
1. Click **Save**. You should see the partner topic in the resource group you specified. [Activate the partner topic](subscribe-to-partner-events.md#activate-a-partner-topic) so that your events start flowing to your partner topic. Then, [subscribe to events](subscribe-to-partner-events.md#subscribe-to-events).++ ++
+Try [invoking any of the Auth0 actions that trigger an event to be published](https://auth0.com/docs/logs/references/log-event-type-codes) to see events flow.
## Verify the integration To verify that the integration is working as expected: 1. Log in to the Auth0 Dashboard.
-1. Navigate to **Logs** > **Streams**.
+1. Navigate to **Monitoring** > **Streams**.
1. Click on your **Event Grid stream**. 1. Once on the stream, click on the **Health** tab. The stream should be active and as long as you don't see any errors, the stream is working.
-Try [invoking any of the Auth0 actions that trigger an event to be published](https://auth0.com/docs/logs/references/log-event-type-codes) to see events flow.
- ## Delivery attempts and retries Auth0 events are delivered to Azure via a streaming mechanism. Each event is sent as it's triggered in Auth0. If Event Grid is unable to receive the event, Auth0 will retry up to three times to deliver the event. Otherwise, Auth0 will log the failure to deliver in its system. + ## Next steps - [Auth0 Partner Topic](auth0-overview.md)
event-grid Event Schema Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-farmbeats.md
- Title: Azure FarmBeats as Event Grid source (Preview)
-description: Describes the properties and schema provided for Azure FarmBeats events with Azure Event Grid
- Previously updated : 06/06/2021-
-# Azure FarmBeats as Event Grid source (Preview)
-This article provides the properties and schema for Azure FarmBeats events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
-
-## Available event types
-
-|Event Name | Description|
-|--|-|
-|Microsoft.AgFoodPlatform.FarmerChanged|Published when a farmer is created/updated/deleted.
-|Microsoft.AgFoodPlatform.FarmChanged|Published when a farm is created/updated/deleted.
-|Microsoft.AgFoodPlatform.BoundaryChanged|Published when a boundary is created/updated/deleted.
-|Microsoft.AgFoodPlatform.FieldChanged|Published when a field is created/updated/deleted.
-|Microsoft.AgFoodPlatform.SeasonalFieldChanged|Published when a seasonal field is created/updated/deleted.
-|Microsoft.AgFoodPlatform.SeasonChanged|Published when a season is created/updated/deleted.
-|Microsoft.AgFoodPlatform.CropChanged|Published when a crop is created/updated/deleted.
-|Microsoft.AgFoodPlatform.CropVarietyChanged|Published when a crop variety is created/updated/deleted.
-|Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChanged|Published when a satellite data ingestion job's status changes, for example, the job is created, has progressed, or has completed.
-|Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChanged|Published when a weather data ingestion job's status changes, for example, the job is created, has progressed, or has completed.
-|Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChanged|Published when a farm operations data ingestion job's status changes, for example, the job is created, has progressed, or has completed.
-|Microsoft.AgFoodPlatform.ApplicationDataChanged|Published when application data is created/updated/deleted. This event is associated with farm operations data.
-|Microsoft.AgFoodPlatform.HarvestingDataChanged|Published when harvesting data is created/updated/deleted. This event is associated with farm operations data.
-|Microsoft.AgFoodPlatform.TillageDataChanged|Published when tillage data is created/updated/deleted. This event is associated with farm operations data.
-|Microsoft.AgFoodPlatform.PlantingDataChanged|Published when planting data is created/updated/deleted. This event is associated with farm operations data.
-
-## Event Properties
-Each FarmBeats event has two parts, one that is common across events and another (a data object) which contains properties specific to each event.
-
-The part common across events is elaborated in the following schema.
-
-### Event Grid event schema
-An event has the following top-level data:
-
-| Property | Type | Description |
-| -- | - | -- |
-| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
-| `subject` | string | Publisher-defined path to the event subject. |
-| `eventType` | string | One of the registered event types for this event source. |
-| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
-| `id` | string | Unique identifier for the event. |
-| `data` | object | FarmBeats event data. |
-| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
-| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
--
-The tables below elaborate on the properties within data object for each event.
-
-*For FarmerChanged, FarmChanged, SeasonChanged, CropChanged, and CropVarietyChanged FarmBeats events, the data object contains the following properties:*
-
-|Property | Type| Description|
-|-| -| -|
-id| string| User-defined ID of the resource, such as Farm ID, Farmer ID etc.
-actionType| string| Indicates the change triggered during publishing of the event. Applicable values are Created, Updated, Deleted
-status| string| Contains the user-defined status of the resource.
-properties| object| It contains user-defined key-value pairs
-modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
-createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
-eTag| string| Implements optimistic concurrency
-description| string| Textual description of the resource
--
-*BoundaryChanged FarmBeats events have the following data object:*
-
-|Property | Type| Description|
-|-| -| -|
-id| string| User-defined ID of boundary
-actionType| string| Indicates the change that is triggered during publishing of the event. Applicable values are Created, Updated, Deleted.
-parentId| string| ID of the parent that the boundary belongs to.
-parentType| string| Type of the parent that the boundary belongs to.
-isPrimary| boolean| Indicates if the boundary is primary.
-farmerId| string| Contains the ID of the farmer associated with boundary.
-properties| object| It contains user-defined key-value pairs.
-modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
-createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
-status| string| Contains user-defined status of the resource.
-eTag| string| Implements optimistic concurrency.
-description| string| Textual description of the resource.
-
-*FieldChanged FarmBeats events have the following data object:*
-
-Property| Type| Description
-|-| -| -|
-id| string| User-defined ID of the field
-farmId| string| User-defined ID of the farm that field is associated with
-farmerId| string| User-defined ID of the farmer that field is associated with
-name| string| User-defined name of the field
-actionType| string| Indicates the change that triggered publishing of the event. Applicable values are Created, Updated, Deleted
-properties| object| It contains user-defined key-value pairs
-modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
-createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
-status| string| Contains the user-defined status of the resource.
-eTag| string| Implements optimistic concurrency
-description|string| Textual description of the resource
-
-*SeasonalFieldChanged FarmBeats events have the following data object:*
-
-Property| Type| Description
-|-| -| -|
-id| string| User-defined ID of the seasonal field
-farmId| string| User-defined ID of the farm that seasonal field is associated with
-farmerId| string| User-defined ID of the farmer that seasonal field is associated with
-seasonId| string| User-defined ID of the season that seasonal field is associated with
-fieldId| string| User-defined ID of the field that seasonal field is associated with
-name| string| User-defined name of the seasonal field
-actionType| string| Indicates the change that triggered publishing of the event. Applicable values are Created, Updated, Deleted
-properties| object| It contains user-defined key-value pairs
-modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
-createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
-status| string| Contains the user-defined status of the resource.
-eTag| string| Implements optimistic concurrency
-description| string| Textual description of the resource
-
-*SatelliteDataIngestionJobStatusChanged, WeatherDataIngestionJobStatusChanged, and FarmOperationDataIngestionJobStatusChanged FarmBeats events have the following data object:*
-
-Property| Type| Description
-|-|-|-|
-id|String| Unique ID of the job.
-name| string| User-defined name of the job.
-status|string|Various states a job can be in.
-isCancellationRequested| boolean|Flag that gets set when job cancellation is requested.
-description|string| Textual description of the job.
-farmerId|string| ID of the farmer for which job was created.
-message|string| Status message to capture more details of the job.
-lastActionDateTime|date-time|Date-time when last action was taken on the job, sample format: yyyy-MM-ddTHH:mm:ssZ.
-createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
--
-*FarmBeats farm operations data change events such as ApplicationDataChanged, HarvestingDataChanged, PlantingDataChanged, and TillageDataChanged have the following data object:*
-
-Property| Type| Description
-|-|-|-|
-id| string| User-defined ID of the resource, such as Farm ID, Farmer ID etc.
-status| string| Contains the status of the job.
-actionType|string| Indicates the change that triggered publishing of the event. Applicable values are Created, Updated, Deleted.
-source| string| Message from FarmBeats giving details about the job.
-modifiedDateTime| date-time|Date-time when resource was last modified, sample format: yyyy-MM-ddTHH:mm:ssZ.
-createdDateTime|date-time|Date-time when resource was created, sample format: yyyy-MM-ddTHH:mm:ssZ.
-eTag| string| Implements optimistic concurrency
-description|string| Textual description of the resource
--
-## Sample events
-Each of the following samples represents an event notification.
-
-**Event type: Microsoft.AgFoodPlatform.FarmerChanged**
-
-```json
-{
- "data": {
- "actionType": "Created",
- "status": "Sample status",
- "modifiedDateTime": "2021-03-05T10:53:28Z",
- "eTag": "860197cc-0000-0700-0000-60420da80000",
- "id": "UNIQUE-FARMER-ID",
- "name": "sample farmer",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T10:53:28Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "81fbe1de-4ae4-4284-964f-59da80a6bfe7",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID",
- "eventType": "Microsoft.AgFoodPlatform.FarmerChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T10:53:28.2783745Z"
- }
-```
-
-**Event type: Microsoft.AgFoodPlatform.FarmChanged**
-
-```json
- {
- "data": {
- "farmerId": "UNIQUE-FARMER-ID",
- "actionType": "Created",
- "status": "Sample status",
- "modifiedDateTime": "2021-03-05T10:55:57Z",
- "eTag": "8601e3d5-0000-0700-0000-60420e3d0000",
- "id": "UNIQUE-FARM-ID",
- "name": "Display name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T10:55:57Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "31a31be7-51fb-48f3-adfd-6fb4400be002",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/farms/UNIQUE-FARM-ID",
- "eventType": "Microsoft.AgFoodPlatform.FarmChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T10:55:57.6026173Z"
- }
-```
-**Event type: Microsoft.AgFoodPlatform.BoundaryChanged**
-
-```json
- {
- "data": {
- "farmerId": "UNIQUE-FARMER-ID",
- "parentId": "OPTIONAL-UNIQUE-FIELD-ID",
- "isPrimary": true,
- "actionType": "Created",
- "modifiedDateTime": "2021-03-05T11:15:29Z",
- "eTag": "860109f7-0000-0700-0000-604212d10000",
- "id": "UNIQUE-BOUNDARY-ID",
- "name": "Display name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:15:29Z"
- },
- "id": "3d3453b2-5a94-45a7-98eb-fc2979a00317",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/boundaries/UNIQUE-BOUNDARY-ID",
- "eventType": "Microsoft.AgFoodPlatform.BoundaryChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:15:29.4797354Z"
- }
- ```
-
-**Event type: Microsoft.AgFoodPlatform.FieldChanged**
-
-```json
- {
- "data": {
- "farmerId": "UNIQUE-FARMER-ID",
- "farmId": "UNIQUE-FARM-ID",
- "actionType": "Created",
- "status": "Sample status",
- "modifiedDateTime": "2021-03-05T10:58:43Z",
- "eTag": "860124dc-0000-0700-0000-60420ee30000",
- "id": "UNIQUE-FIELD-ID",
- "name": "Display name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T10:58:43Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "1ad04ed0-ac05-4c4e-aa3d-87facb3cc97c",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/fields/UNIQUE-FIELD-ID",
- "eventType": "Microsoft.AgFoodPlatform.FieldChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T10:58:43.3222921Z"
- }
- ```
-**Event type: Microsoft.AgFoodPlatform.SeasonalFieldChanged**
-```json
- {
- "data": {
- "farmerId": "UNIQUE-FARMER-ID",
- "seasonId": "UNIQUE-SEASON-ID",
- "fieldId": "UNIQUE-FIELD-ID",
- "farmId": "UNIQUE-FARM-ID",
- "actionType": "Created",
- "status": "Sample status",
- "modifiedDateTime": "2021-03-05T11:24:56Z",
- "eTag": "8701300b-0000-0700-0000-604215080000",
- "id": "UNIQUE-SEASONAL-FIELD-ID",
- "name": "Display name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:24:56Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "ff59a0a3-6226-42c0-9e70-01da55efa797",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/seasonalFields/UNIQUE-SEASONAL-FIELD-ID",
- "eventType": "Microsoft.AgFoodPlatform.SeasonalFieldChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:24:56.4210287Z"
- }
-```
-**Event type: Microsoft.AgFoodPlatform.SeasonChanged**
-```json
- {
- "data": {
- "actionType": "Created",
- "status": "Sample status",
- "modifiedDateTime": "2021-03-05T11:18:38Z",
- "eTag": "86019afd-0000-0700-0000-6042138e0000",
- "id": "UNIQUE-SEASON-ID",
- "name": "Display name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:18:38Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "63989475-397b-4b92-8160-8743bf8e5804",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/seasons/UNIQUE-SEASON-ID",
- "eventType": "Microsoft.AgFoodPlatform.SeasonChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:18:38.5804699Z"
- }
- ```
-
- **Event type: Microsoft.AgFoodPlatform.CropChanged**
-
-```json
- {
- "data": {
- "actionType": "Created",
- "status": "Sample status",
- "modifiedDateTime": "2021-03-05T11:03:48Z",
- "eTag": "8601c4e5-0000-0700-0000-604210150000",
- "id": "UNIQUE-CROP-ID",
- "name": "Display name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:03:48Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "4c59a797-b76d-48ec-8915-ceff58628f35",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/crops/UNIQUE-CROP-ID",
- "eventType": "Microsoft.AgFoodPlatform.CropChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:03:49.0590658Z"
- }
- ```
-
-**Event type: Microsoft.AgFoodPlatform.CropVarietyChanged**
-
-```json
- {
- "data": {
- "cropId": "UNIQUE-CROP-ID",
- "actionType": "Created",
- "status": "string",
- "modifiedDateTime": "2021-03-05T11:10:21Z",
- "eTag": "860130ef-0000-0700-0000-6042119d0000",
- "id": "UNIQUE-CROP-VARIETY-ID",
- "name": "Sample status",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:10:21Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "29aefdb9-d648-442c-81f8-694f3f47583c",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/cropVarieties/UNIQUE-CROP-VARIETY-ID",
- "eventType": "Microsoft.AgFoodPlatform.CropVarietyChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:10:21.4572495Z"
- }
-```
-**Event type: Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChanged**
-```json
-[
- {
- "data": {
- "farmerId": "UNIQUE - FARMER - ID",
- "message": "Created job 'job1' to fetch satellite data for boundary 'boundary1' from startDate '06/01/2021' to endDate '06/01/2021' (both inclusive).",
- "status": "Waiting",
- "lastActionDateTime": "2021-06-01T11:25:37.8634096Z",
- "isCancellationRequested": false,
- "id": "UNIQUE - JOB - ID",
- "name": "samplejob",
- "description": "Sample for testing events",
- "createdDateTime": "2021-06-01T11:25:32.3421173Z",
- "properties": {
- "key1": "testvalue1",
- "key2": 123.45
- }
- },
- "id": "925c6be2-6561-4572-b7dd-0f3084a54567",
- "topic": "/subscriptions/{Subscription -ID}/resourceGroups/{RESOURCE - GROUP - NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/{UNIQUE-FARMER-ID}/satelliteDataIngestionJobs/{UNIQUE-JOB-ID}",
- "eventType": "Microsoft.AgFoodPlatform.SatelliteDataIngestionJobStatusChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-06-01T11:25:37.8634764Z"
- }
-]
-```
-**Event type: Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChanged**
-```json
-[
- {
- "data": {
- "farmerId": "UNIQUE-FARMER-ID",
- "message": "Created job to fetch weather data for job name 'job2', farmer id 'farmer2' and boundary id 'boundary2'.",
- "status": "Running",
- "lastActionDateTime": "2021-06-01T11:22:27.9031003Z",
- "isCancellationRequested": false,
- "id": "UNIQUE-JOB-ID",
- "createdDateTime": "2021-06-01T07:13:54.8843617Z"
- },
- "id": "ec30313a-ff2f-4b50-882b-31188113c15b",
- "topic": "/subscriptions/{Subscription -ID}/resourceGroups/{RESOURCE - GROUP - NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/weatherDataIngestionJobs/UNIQUE-JOB-ID",
- "eventType": "Microsoft.AgFoodPlatform.WeatherDataIngestionJobStatusChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-06-01T11:22:27.9031302Z"
- }
-]
-
-```
-**Event type: Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChanged**
-```json
-[
- {
- "data": {
- "farmerId": "UNIQUE-FARMER-ID",
- "message": "Job completed successfully. Data statistics:{ Processed operations count = 6, Organizations count = 1, Processed organizations count = 1, Processed fields count = 2, Operations count = 6, ShapefileAttachmentsCount = 0, Fields count = 2 }",
- "status": "Succeeded",
- "lastActionDateTime": "2021-06-01T11:30:54.733625Z",
- "isCancellationRequested": false,
- "id": "UNIQUE-JOB-ID",
- "name": "sample-job",
- "description": "sample description",
- "createdDateTime": "2021-06-01T11:30:39.0905288Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "ebdbb7a1-ad28-4af7-b3a2-a4a3a2dd1b4f",
- "topic": "/subscriptions/{Subscription -ID}/resourceGroups/{RESOURCE - GROUP - NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/farmOperationDataIngestionJobs/UNIQUE-JOB-ID",
- "eventType": "Microsoft.AgFoodPlatform.FarmOperationDataIngestionJobStatusChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-06-01T11:30:54.733671Z"
- }
-]
-
-```
-**Event type: Microsoft.AgFoodPlatform.ApplicationDataChanged**
-
-```json
- {
- "data": {
- "actionType": "Updated",
- "farmerId": "UNIQUE-FARMER-ID",
- "source": "Sample source",
- "modifiedDateTime": "2021-03-05T11:27:24Z",
- "eTag": "87011311-0000-0700-0000-6042159c0000",
- "id": "UNIQUE-APPLICATION-DATA-ID",
- "status": "Sample status",
- "name": "sample name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:27:24Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "e499f6c4-63ba-4217-8261-0c6cb0e398d2",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/applicationData/UNIQUE-APPLICATION-DATA-ID",
- "eventType": "Microsoft.AgFoodPlatform.ApplicationDataChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:27:24.164612Z"
- }
-```
-
-**Event type: Microsoft.AgFoodPlatform.HarvestDataChanged**
-```json
- {
- "data": {
- "actionType": "Created",
- "farmerId": "UNIQUE-FARMER-ID",
- "source": "Sample source",
- "modifiedDateTime": "2021-03-05T11:33:41Z",
- "eTag": "8701141b-0000-0700-0000-604217150000",
- "id": "UNIQUE-HARVEST-DATA-ID",
- "status": "Sample status",
- "name": "sample name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:33:41Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "dc3837c0-1eed-4bfa-88b6-d018cf6af4db",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/harvestData/UNIQUE-HARVEST-DATA-ID",
- "eventType": "Microsoft.AgFoodPlatform.HarvestDataChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:33:41.3434992Z"
- }
-```
-**Event type: Microsoft.AgFoodPlatform.TillageDataChanged**
-```json
- {
- "data": {
- "actionType": "Updated",
- "farmerId": "UNIQUE-FARMER-ID",
- "source": "sample source",
- "modifiedDateTime": "2021-06-15T10:31:07Z",
- "eTag": "6405f027-0000-0100-0000-60c8816b0000",
- "id": "c9858c3f-fb94-474a-a6de-103b453df976",
- "createdDateTime": "2021-06-15T10:31:07Z",
- "name": "sample name",
- "description": "sample description",
- "properties": {
- "_orgId": "498221",
- "_fieldId": "e61b83f4-3a12-431e-8010-596f2466dc27",
- "_cropSeason": "2010"
- }
- },
- "id": "f06f6686-1fa8-41fd-be99-46f40f495cce",
- "topic": "/subscriptions/da9091ec-d18f-456c-9c21-5783ee7f4645/resourceGroups/internal-farmbeats-resources/providers/Microsoft.AgFoodPlatform/farmBeats/internal-eus",
- "subject": "/farmers/10e3d7bf-c559-48be-af31-4e00df83bfcd/tillageData/c9858c3f-fb94-474a-a6de-103b453df976",
- "eventType": "Microsoft.AgFoodPlatform.TillageDataChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-06-15T10:31:07.6778047Z"
- }
-```
-
-**Event type: Microsoft.AgFoodPlatform.PlantingDataChanged**
-```json
- {
- "data": {
- "actionType": "Created",
- "farmerId": "UNIQUE-FARMER-ID",
- "source": "Sample source",
- "modifiedDateTime": "2021-03-05T11:41:18Z",
- "eTag": "8701242a-0000-0700-0000-604218de0000",
- "id": "UNIQUE-PLANTING-DATA-ID",
- "status": "Sample status",
- "name": "sample name",
- "description": "Sample description",
- "createdDateTime": "2021-03-05T11:41:18Z",
- "properties": {
- "key1": "value1",
- "key2": 123.45
- }
- },
- "id": "42589c7f-4e16-4a4d-9314-d611c822f7ac",
- "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{FARMBEATS-RESOURCE-NAME}",
- "subject": "/farmers/UNIQUE-FARMER-ID/plantingData/UNIQUE-PLANTING-DATA-ID",
- "eventType": "Microsoft.AgFoodPlatform.PlantingDataChanged",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "eventTime": "2021-03-05T11:41:18.1744322Z"
- }
-```
---
-## Next steps
-* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md)
event-grid Manage Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/manage-event-delivery.md
Title: Dead letter and retry policies - Azure Event Grid description: Describes how to customize event delivery options for Event Grid. Set a dead-letter destination, and specify how long to retry delivery. Previously updated : 07/27/2021 Last updated : 11/07/2022 ms.devlang: azurecli
ms.devlang: azurecli
When creating an event subscription, you can customize the settings for event delivery. This article shows you how to set up a dead letter location and customize the retry settings. For information about these features, see [Event Grid message delivery and retry](delivery-and-retry.md). - > [!NOTE] > To learn about message delivery, retries, and dead-lettering, see the conceptual article: [Event Grid message delivery and retry](delivery-and-retry.md).
To set a dead letter location, you need a storage account for holding events tha
> [!NOTE] > - Create a storage account and a blob container in the storage before running commands in this article.
-> - The Event Grid service creates blobs in this container. The names of blobs will have the name of the Event Grid subscription with all the letters in upper case. For example, if the name of the subscription is My-Blob-Subscription, names of the dead letter blobs will have MY-BLOB-SUBSCRIPTION (myblobcontainer/MY-BLOB-SUBSCRIPTION/2019/8/8/5/111111111-1111-1111-1111-111111111111.json). This behavior is to protect against differences in case handling between Azure services.
-> - In the above example .../2019/8/8/5/... represents the non-zero padded date and hour (UTC): .../YYYY/MM/DD/HH/...
-> - The dead letter blobs created will contain one or more events in an array. An important behavior to consider when processing dead letters.
+> - The Event Grid service creates blobs in this container. The names of blobs will have the name of the Event Grid subscription with all the letters in upper case. For example, if the name of the subscription is `My-Blob-Subscription`, names of the dead letter blobs will have `MY-BLOB-SUBSCRIPTION` (`myblobcontainer/MY-BLOB-SUBSCRIPTION/2019/8/8/5/111111111-1111-1111-1111-111111111111.json`). This behavior is to protect against differences in case handling between Azure services.
+> - In the above example `.../2019/8/8/5/...` represents the non-zero padded date and hour (UTC): `.../YYYY/MM/DD/HH/...`.
+> - The dead letter blobs created will contain one or more events in an array, which is an important behavior to consider when processing dead letters.
+
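The dead-letter destination is always a blob container, identified by its full Azure resource ID. As a minimal sketch (every name below is a placeholder), this is how that ID can be supplied when creating an event subscription with the Azure CLI:

```azurecli
# Sketch: enable dead-lettering on a new event subscription.
# All subscription, resource group, storage, and topic names are placeholders.
containerid="/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE-ACCOUNT>/blobServices/default/containers/<CONTAINER>"

az eventgrid event-subscription create \
  --name <EVENT-SUBSCRIPTION-NAME> \
  --source-resource-id "/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.EventGrid/topics/<TOPIC-NAME>" \
  --endpoint <ENDPOINT-URL> \
  --deadletter-endpoint "$containerid"
```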
+### Azure portal
+
+While creating an event subscription, you can enable dead-lettering on the **Additional features** tab as shown in the following image. After you enable the feature, specify the blob container that will hold dead-lettered events and the Azure subscription that has the blob storage.
+
+You can optionally enable a system-assigned or user-assigned managed identity for dead-lettering. The managed identity must be a member of a [role-based access control (RBAC) role](../storage/blobs/authorize-access-azure-active-directory.md#azure-built-in-roles-for-blobs) that allows writing events to the storage.
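For example, granting a system-assigned identity write access to the dead-letter storage account could look like the following sketch. The principal ID and storage account path are placeholders, and the role shown is one of the built-in roles that allows blob writes.

```azurecli
# Sketch: allow the event subscription's managed identity to write dead-letter blobs.
# <PRINCIPAL-ID> is the identity's object ID; the storage account path is a placeholder.
az role assignment create \
  --assignee <PRINCIPAL-ID> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE-ACCOUNT>"
```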
++
+You can also enable dead-lettering and configure the settings for an existing event subscription. On the **Event Subscription** page of your event subscription, switch to the **Additional features** tab to see the dead-letter settings as shown in the following image.
+ ### Azure CLI
To turn off dead-lettering, rerun the command to create the event subscription b
## Set retry policy
-When creating an Event Grid subscription, you can set values for how long Event Grid should try to deliver the event. By default, Event Grid tries for 24 hours (1440 minutes), or 30 times. You can set either of these values for your event grid subscription. The value for event time-to-live must be an integer from 1 to 1440. The value for max retries must be an integer from 1 to 30.
+When creating an Event Grid subscription, you can set values for how long Event Grid should try to deliver the event. By default, Event Grid tries for 24 hours (1440 minutes), or 30 times. You can set either of these values for your Event Grid subscription. The value for event time-to-live must be an integer from 1 to 1440. The value for max retries must be an integer from 1 to 30.
You can't configure the [retry schedule](delivery-and-retry.md#retry-schedule).
+### Azure portal
+
+While creating an event subscription, you can configure retry policy settings on the **Additional features** tab.
++
+You can also configure retry policy settings for an existing event subscription. On the **Event Subscription** page of your event subscription, switch to the **Additional features** tab to see the retry policy settings as shown in the following image.
+++ ### Azure CLI To set the event time-to-live to a value other than 1440 minutes, use:
New-AzEventGridSubscription `
> [!NOTE] > If you set both `event-ttl` and `max-deliver-attempts`, Event Grid uses whichever limit is reached first to determine when to stop event delivery. For example, if you set a time-to-live (TTL) of 30 minutes and a maximum of 10 delivery attempts, the event is dead-lettered when it isn't delivered within 30 minutes or within 10 attempts, whichever happens first.
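The following sketch shows both settings on one subscription with the Azure CLI; all resource names are placeholders.

```azurecli
# Sketch: set a 30-minute TTL and a maximum of 10 delivery attempts.
# Whichever limit is reached first stops delivery; names below are placeholders.
az eventgrid event-subscription create \
  --name <EVENT-SUBSCRIPTION-NAME> \
  --source-resource-id "/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.EventGrid/topics/<TOPIC-NAME>" \
  --endpoint <ENDPOINT-URL> \
  --event-ttl 30 \
  --max-delivery-attempts 10
```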
-## Managed identity
-If you enable managed identity for dead-lettering, you'll need to add the managed identity to the appropriate role-based access control (RBAC) role on the Azure Storage account that will hold the dead-lettered events. For more information, see [Supported destinations and Azure roles](add-identity-roles.md#supported-destinations-and-azure-roles).
- ## Next steps * For a sample application that uses an Azure Function app to process dead letter events, see [Azure Event Grid Dead Letter Samples for .NET](https://azure.microsoft.com/resources/samples/event-grid-dotnet-handle-deadlettered-events/).
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
This article describes steps to subscribe to events published by Microsoft Graph
>If you aren't familiar with the **Partner Events** feature, see [Partner Events overview](partner-events-overview.md).
-## Why you should use Microsoft Graph API with Event Grid as a destination?
+## Why should I use Microsoft Graph API as a destination?
Besides the ability to subscribe to Microsoft Graph API events via Event Grid, you have [other options](/graph/change-notifications-delivery) through which you can receive similar notifications (not events). Consider using Microsoft Graph API to deliver events to Event Grid if you have at least one of the following requirements: - You're developing an event-driven solution that requires events from Azure Active Directory, Outlook, Teams, etc. to react to resource changes. You require the robust eventing model and publish-subscribe capabilities that Event Grid provides. For an overview of Event Grid, see [Event Grid concepts](concepts.md).
Besides the ability to subscribe to Microsoft Graph API events via Event Grid, y
## High-level steps
-The common steps to subscribe to events published by any partner, including Graph API, are described in [subscribe to partner events](subscribe-to-partner-events.md). For a quick reference, the steps described in that article are listed here. This article deals with step 3: enable events flow to a partner topic.
+1. [Register the Event Grid resource provider](#register-the-event-grid-resource-provider) with your Azure subscription.
+1. [Authorize partner](#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
+3. [Enable events to flow to a partner topic](#enable-graph-api-events-to-flow-to-your-partner-topic).
+4. [Activate partner topic](#activate-a-partner-topic) so that your events start flowing to your partner topic.
+5. [Subscribe to events](#subscribe-to-events).
-1. Register the Event Grid resource provider with your Azure subscription.
-2. Authorize partner to create a partner topic in your resource group.
-3. [Enable events to flow to a partner topic](#enable-microsoft-graph-api-events-to-flow-to-your-partner-topic)
-4. Activate partner topic so that your events start flowing to your partner topic.
-5. Subscribe to events.
-### Enable Microsoft Graph API events to flow to your partner topic
+++
+## Enable Graph API events to flow to your partner topic
> [!IMPORTANT] > Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from the [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) webhook samples to enable the flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask-graph-and-grid@microsoft.com?subject=Please allow my application ID">mailto:ask-graph-and-grid@service.microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to the allow list to use this new capability.
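Once your application is allow-listed, you create the Microsoft Graph change-notification subscription itself with a standard `POST /subscriptions` call. The following is only a sketch: the `EventGrid:` form of `notificationUrl` is an assumption based on the Graph change-notification documentation, and every ID, resource path, and value shown is a placeholder.

```azurecli
# Sketch only: create a Graph change-notification subscription that delivers to an
# Event Grid partner topic. The notificationUrl format and all values are assumptions
# or placeholders; follow the webhook samples referenced above for authoritative steps.
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/subscriptions" \
  --body '{
    "changeType": "updated,deleted",
    "notificationUrl": "EventGrid:?azuresubscriptionid=<SUBSCRIPTION-ID>&resourcegroup=<RESOURCE-GROUP>&partnertopic=<PARTNER-TOPIC-NAME>&location=<LOCATION>",
    "resource": "users",
    "expirationDateTime": "2022-12-01T00:00:00Z",
    "clientState": "<SECRET-CLIENT-STATE>"
  }'
```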
When you create a Graph API subscription with a `notificationUrl` bound to Event
#### Microsoft Graph API Explorer For quick tests and to get to know the API, you could use the [Microsoft Graph API explorer](/graph/graph-explorer/graph-explorer-features). For anything else beyond casual tests or learning, you should use the Graph SDKs as described above. ++ ## Next steps See the following articles:
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Title: Azure Event Grid - Subscribe to partner events description: This article explains how to subscribe to events from a partner using Azure Event Grid. Previously updated : 09/14/2022 Last updated : 10/31/2022 # Subscribe to events published by a partner with Azure Event Grid
Here are the steps that a subscriber needs to perform to receive events from a p
[!INCLUDE [register-event-grid-provider](includes/register-event-grid-provider.md)]
-## Authorize partner to create a partner topic
-You must grant your consent to the partner to create partner topics in a resource group that you designate. This authorization has an expiration time. It's effective for the time period you specify, between 1 and 365 days.
-
-> [!IMPORTANT]
-> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. Your partner won't be able to create resources (partner topics) in your Azure subscription after the authorization expiration time.
-
-> [!NOTE]
-> Event Grid started enforcing authorization checks to create partner topics around June 30th, 2022.
-
-The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples use a sample expiration time in UTC format.
-
-### Azure portal
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search bar at the top, enter **Partner Configurations**, and select **Event Grid Partner Configurations** under **Services** in the results.
-1. On the **Event Grid Partner Configurations** page, select **Create Event Grid partner configuration** button on the page (or) select **+ Create** on the command bar.
-
- :::image type="content" source="./media/subscribe-to-partner-events/partner-configurations.png" alt-text="Event Grid Partner Configurations page showing the list of partner configurations and the link to create a partner registration.":::
-1. On the **Create Partner Configuration** page, do the following steps:
- 1. In the **Project Details** section, select the **Azure subscription** and the **resource group** where you want to allow the partner to create a partner topic.
- 1. In the **Partner Authorizations** section, specify a default expiration time for partner authorizations defined in this configuration.
- 1. To provide your authorization for a partner to create partner topics in the specified resource group, select **+ Partner Authorization** link.
-
- :::image type="content" source="./media/subscribe-to-partner-events/partner-authorization-configuration.png" alt-text="Create Partner Configuration page with the Partner Authorization link selected.":::
-
-1. On the **Add partner authorization to create resources** page, you see a list of **verified partners**. A verified partner is a partner whose identity has been validated by Microsoft. You can select a verified partner, and select **Add** button at the bottom to give the partner the authorization to add a partner topic in your resource group. This authorization is effective up to the expiration time.
-
- You also have an option to authorize a **non-verified partner.** Unless the partner is an entity that you know well, for example, an organization within your company, it's strongly encouraged that you only work with verified partners. If the partner isn't yet verified, encourage them to get verified by asking them to contact the Event Grid team at askgrid@microsoft.com.
-
- 1. To authorize a **verified partner**:
- 1. Select the partner from the list.
- 1. Specify **authorization expiration time**.
- 1. select **Add**.
-
- :::image type="content" source="./media/subscribe-to-partner-events/add-verified-partner.png" alt-text="Screenshot for granting a verified partner the authorization to create resources in your resource group.":::
- 1. To authorize a non-verified partner, select **Authorize non-verified partner**, and follow these steps:
- 1. Enter the **partner registration ID**. You need to ask your partner for this ID.
- 1. Specify authorization expiration time.
- 1. Select **Add**.
-
- :::image type="content" source="./media/subscribe-to-partner-events/add-non-verified-partner.png" alt-text="Screenshot for granting a non-verified partner the authorization to create resources in your resource group.":::
-
- > [!IMPORTANT]
- > Your partner won't be able to create resources (partner topics) in your Azure subscription after the authorization expiration time.
-1. Back on the **Create Partner Configuration** page, verify that the partner is added to the partner authorization list at the bottom.
-1. Select **Review + create** at the bottom of the page.
-
- :::image type="content" source="./media/subscribe-to-partner-events/create-partner-registration.png" alt-text="Create Partner Configuration page showing the partner authorization you just added.":::
-1. On the **Review** page, review all settings, and then select **Create** to create the partner registration.
## Request partner to enable events flow to a partner topic
Here's the list of partners and a link to submit a request to enable events flow
- [SAP](subscribe-to-sap-events.md)
-## Activate a partner topic
-
-1. In the search bar of the Azure portal, search for and select **Event Grid Partner Topics**.
-1. On the **Event Grid Partner Topics** page, select the partner topic in the list.
-
- :::image type="content" source="./media/onboard-partner/select-partner-topic.png" lightbox="./media/onboard-partner/select-partner-topic.png" alt-text="Select a partner topic in the Event Grid Partner Topics page.":::
-1. Review the activate message, and select **Activate** on the page or on the command bar to activate the partner topic before the expiration time mentioned on the page.
-
- :::image type="content" source="./media/onboard-partner/activate-partner-topic-button.png" lightbox="./media/onboard-partner/activate-partner-topic-button.png" alt-text="Image showing the selection of the Activate button on the command bar or on the page.":::
-1. Confirm that the activation status is set to **Activated** and then create event subscriptions for the partner topic by selecting **+ Event Subscription** on the command bar.
-
- :::image type="content" source="./media/onboard-partner/partner-topic-activation-status.png" lightbox="./media/onboard-partner/partner-topic-activation-status.png" alt-text="Image showing the activation state as **Activated**.":::
-
-## Subscribe to events
-First, create an event handler that will handle events from the partner. For example, create an event hub, Service Bus queue or topic, or an Azure function.
-
-Then, create an event subscription for the partner topic using the event handler you created.
-
-#### Create an event handler
-To test your partner topic, you'll need an event handler. Go to your Azure subscription and spin up a service that's supported as an [event handler](event-handlers.md) such as an [Azure Function](custom-event-to-function.md). For an example, see [Event Grid Viewer sample](custom-event-quickstart-portal.md#create-a-message-endpoint) that you can use as an event handler via webhooks.
-
-#### Subscribe to the partner topic
-Subscribing to the partner topic tells Event Grid where you want your partner events to be delivered.
-
-1. In the Azure portal, type **Event Grid Partner Topics** in the search box, and select **Event Grid Partner Topics**.
-1. On the **Event Grid Partner Topics** page, select the partner topic in the list.
-
- :::image type="content" source="./media/subscribe-to-partner-events/select-partner-topic.png" lightbox="./media/subscribe-to-partner-events/select-partner-topic.png" alt-text="Image showing the selection of a partner topic.":::
-1. On the **Event Grid Partner Topic** page for the partner topic, select **+ Event Subscription** on the command bar.
- :::image type="content" source="./media/subscribe-to-partner-events/select-add-event-subscription.png" alt-text="Image showing the selection of Add Event Subscription button on the Event Grid Partner Topic page.":::
-1. On the **Create Event Subscription** page, do the following steps:
- 1. Enter a **name** for the event subscription.
- 1. For **Filter to Event Types**, select types of events that your subscription will receive.
- 1. For **Endpoint Type**, select an Azure service (Azure Function, Storage Queues, Event Hubs, Service Bus Queue, Service Bus Topic, Hybrid Connections. etc.), or webhook.
- 1. Click the **Select an endpoint** link. In this example, let's use Azure Event Hubs destination or endpoint.
-
- :::image type="content" source="./media/subscribe-to-partner-events/select-endpoint.png" lightbox="./media/subscribe-to-partner-events/select-endpoint.png" alt-text="Image showing the configuration of an endpoint for an event subscription.":::
- 1. On the **Select Event Hub** page, select configurations for the endpoint, and then select **Confirm Selection**.
-
- :::image type="content" source="./media/subscribe-to-partner-events/select-event-hub.png" lightbox="./media/subscribe-to-partner-events/select-event-hub.png" alt-text="Image showing the configuration of an Event Hubs endpoint.":::
- 1. Now on the **Create Event Subscription** page, select **Create**.
-
- :::image type="content" source="./media/subscribe-to-partner-events/create-event-subscription.png" alt-text="Image showing the Create Event Subscription page with example configurations.":::
-
## Next steps
event-grid Subscribe To Sap Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-sap-events.md
Last updated 10/25/2022
# Subscribe to events published by SAP
-This article describes steps to subscribe to events published by a SAP S/4HANA system.
+This article describes steps to subscribe to events published by an SAP S/4HANA system.
-## High-level steps
-
-The common steps to subscribe to events published by any partner, including SAP, are described in [subscribe to partner events](subscribe-to-partner-events.md). For your quick reference, the steps are provided again here with the addition of a step to make sure that your SAP system has the required components. This article deals with steps 1 and 3.
-
-1. [Ensure you meet all prerequisites](#prerequisites).
-1. Register the Event Grid resource provider with your Azure subscription.
-1. Authorize partner to create a partner topic in your resource group.
-1. [Enable SAP S/4HANA events to flow to a partner topic](#enable-events-to-flow-to-your-partner-topic).
-1. Activate partner topic so that your events start flowing to your partner topic.
-1. Subscribe to events.
+> [!NOTE]
+> See the [New SAP events on Azure Event Grid](https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/new-sap-events-on-azure-event-grid/ba-p/3663372) for an announcement of this feature.
## Prerequisites
Following are the prerequisites that your system needs to meet before attempting
If you have any questions, contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a> +
+## High-level steps
+
+1. [Register the Event Grid resource provider](#register-the-event-grid-resource-provider) with your Azure subscription.
+1. [Authorize partner](#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
+1. [Enable SAP S/4HANA events to flow to a partner topic](#enable-events-to-flow-to-your-partner-topic).
+4. [Activate partner topic](#activate-a-partner-topic) so that your events start flowing to your partner topic.
+5. [Subscribe to events](#subscribe-to-events).
++++ ## Enable events to flow to your partner topic
-SAP's capability to send events to Azure Event Grid is available through SAP's [beta program](https://influence.sap.com/sap/ino/#campaign/3314). Using this program, you can let SAP know about your desire to have your S4/HANA events available on Azure. You can find the SAP's announcement of this new feature [here](https://blogs.sap.com/2022/10/11/sap-event-mesh-event-bridge-to-microsoft-azure-to-go-beta/). Through SAP's Beta program, you'll be provided with the documentation on how to configure your SAP S4/HANA system to flow events to Event Grid. At at that point, you may proceed with the next step in the process described in the [High-level steps](#high-level-steps) section.
+SAP's capability to send events to Azure Event Grid is available through SAP's [beta program](https://influence.sap.com/sap/ino/#campaign/3314). Using this program, you can let SAP know that you want your S/4HANA events available on Azure. You can find SAP's announcement of this new feature [here](https://blogs.sap.com/2022/10/11/sap-event-mesh-event-bridge-to-microsoft-azure-to-go-beta/). Through SAP's beta program, you're provided with documentation on how to configure your SAP S/4HANA system to flow events to Event Grid.
SAP's beta program started in October 2022 and will last a couple of months. Thereafter, the feature will be released by SAP as a generally available (GA) capability. Event Grid's capability to receive events from a partner, like SAP, is already a GA feature. If you have any questions, you can contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a>. ++ ## Next steps See [subscribe to partner events](subscribe-to-partner-events.md).
frontdoor Tier Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/tier-comparison.md
Azure Front Door is offered in 2 different tiers, Azure Front Door Standard and
| Health probe log | Yes | Yes | No | | Custom Web Application Firewall (WAF) rules | Yes | Yes | Yes | | Microsoft managed rule set | No | Yes | Yes - Only default rule set 1.1 or below |
-| Bot protection | No | Yes | No |
+| Bot protection | No | Yes | Yes - Only bot manager rule set 1.0 |
| Private link support | No | Yes | No | | Simplified price (base + usage) | Yes | Yes | No | | Azure Policy integration | Yes | Yes | No |
hdinsight Hdinsight Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview.md
description: An introduction to HDInsight, and the Apache Hadoop and Apache Spar
Previously updated : 09/20/2022
-#Customer intent: As a data analyst, I want understand what is Hadoop and how it is offered in Azure HDInsight so that I can decide on using HDInsight instead of on premises clusters.
Last updated : 11/08/2022
+#Customer intent: As a data analyst, I want to understand what Azure HDInsight and Hadoop are and how they're offered so that I can decide on using HDInsight instead of on-premises clusters.
# What is Azure HDInsight?
Azure HDInsight is a managed, full-spectrum, open-source analytics service in th
## What is HDInsight and the Hadoop technology stack?
-Azure HDInsight is a cloud distribution of Hadoop components. Azure HDInsight makes it easy, fast, and cost-effective to process massive amounts of data in a customizable environment. You can use the most popular open-source frameworks such as Hadoop, Spark, Hive, LLAP, Kafka and more. With these frameworks, you can enable a broad range of scenarios such as extract, transform, and load (ETL), data warehousing, machine learning, and IoT.
-
-To see available Hadoop technology stack components on HDInsight, see [Components and versions available with HDInsight](./hdinsight-component-versioning.md). To read more about Hadoop in HDInsight, see the [Azure features page for HDInsight](https://azure.microsoft.com/services/hdinsight/).
+Azure HDInsight is a full-spectrum, managed cluster platform that makes it easy to run big data frameworks at high volume and velocity, using Apache Spark, Apache Hive, LLAP, Apache Kafka, Apache Hadoop, and more, in your Azure environment.
## Why should I use Azure HDInsight? |Capability |Description | |||
-|Cloud native | Azure HDInsight enables you to create optimized clusters for Hadoop, Spark, [Interactive query (LLAP)](./interactive-query/apache-interactive-query-get-started.md), Kafka, HBase on Azure. HDInsight also provides an end-to-end SLA on all your production workloads. |
+|Cloud native | Azure HDInsight enables you to create optimized clusters for Spark, [Interactive query (LLAP)](./interactive-query/apache-interactive-query-get-started.md), Kafka, HBase and Hadoop on Azure. HDInsight also provides an end-to-end SLA on all your production workloads. |
|Low-cost and scalable | HDInsight enables you to scale workloads up or down. You can reduce costs by creating clusters on demand and paying only for what you use. You can also build data pipelines to operationalize your jobs. Decoupled compute and storage provide better performance and flexibility. | |Secure and compliant | HDInsight enables you to protect your enterprise data assets with Azure Virtual Network, encryption, and integration with Azure Active Directory. HDInsight also meets the most popular industry and government compliance standards. | |Monitoring | Azure HDInsight integrates with Azure Monitor logs to provide a single interface with which you can monitor all your clusters. |
You can use HDInsight to extend your existing on-premises [big data](#what-is-bi
## Open-source components in HDInsight
-Azure HDInsight enables you to create clusters with open-source frameworks such as Hadoop, Spark, Hive, LLAP, Kafka, and HBase. These clusters, by default, come with other open-source components that are included on the cluster such as Apache Ambari, Avro, Apache Hive3, HCatalog, Apache Hadoop MapReduce, Apache Hadoop YARN, Apache Phoenix, Apache Pig, Apache Sqoop, Apache Tez, Apache Oozie, and Apache ZooKeeper.
+Azure HDInsight enables you to create clusters with open-source frameworks such as Spark, Hive, LLAP, Kafka, Hadoop and HBase. These clusters, by default, come with other open-source components that are included on the cluster such as Apache Ambari, Avro, Apache Hive3, HCatalog, Apache Hadoop MapReduce, Apache Hadoop YARN, Apache Phoenix, Apache Pig, Apache Sqoop, Apache Tez, Apache Oozie, and Apache ZooKeeper.
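For instance, provisioning a Spark cluster that ships with these components can be done from the command line. The following is a rough sketch with placeholder names, credentials, and storage values rather than a production-ready command.

```azurecli
# Sketch: create an HDInsight Spark cluster. All names, credentials, and storage values are placeholders.
az hdinsight create \
  --name <CLUSTER-NAME> \
  --resource-group <RESOURCE-GROUP> \
  --type spark \
  --http-user admin \
  --http-password "<CLUSTER-LOGIN-PASSWORD>" \
  --ssh-user sshuser \
  --ssh-password "<SSH-PASSWORD>" \
  --workernode-count 3 \
  --storage-account <STORAGE-ACCOUNT-NAME>
```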
## Programming languages in HDInsight
iot-central Concepts Device Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-authentication.md
Title: Device authentication in Azure IoT Central | Microsoft Docs
description: This article introduces key concepts relating to device authentication in Azure IoT Central Previously updated : 03/02/2022 Last updated : 10/28/2022
To automatically register devices that use SAS tokens:
1. Copy the group primary key from the **SAS-IoT-Devices** enrollment group:
- :::image type="content" source="media/concepts-device-authentication/group-primary-key.png" alt-text="Group primary key from S A S - I o T - Devices enrollment group.":::
+ :::image type="content" source="media/concepts-device-authentication/group-primary-key.png" alt-text="Screenshot that shows the group primary key from SAS IoT Devices enrollment group." lightbox="media/concepts-device-authentication/group-primary-key.png":::
1. Use the `az iot central device compute-device-key` command to generate the device SAS keys. Use the group primary key from the previous step. The device ID can contain letters, numbers, and the `-` character:
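For example, a sketch with a placeholder device ID and group key (the command is part of the Azure IoT CLI extension):

```azurecli
# Sketch: derive a device-specific SAS key from the SAS-IoT-Devices group primary key.
# The device ID and group key below are placeholders.
az iot central device compute-device-key \
  --device-id sample-device-01 \
  --pk "<GROUP-SAS-PRIMARY-KEY>"
```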
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
IoT Central can automatically assign a device to a device template when the devi
The following screenshot shows you how to view the model ID of a device template in IoT Central. In a device template, select a component, and then select **Edit identity**: You can view the [thermostat model](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-1.json) in the public model repository. The model ID definition looks like:
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
IoT Central lets you view the raw data that a device sends to an application. Th
1. Select the **Raw data** tab:
- :::image type="content" source="media/concepts-telemetry-properties-commands/raw-data.png" alt-text="Raw data view":::
+ :::image type="content" source="media/concepts-telemetry-properties-commands/raw-data.png" alt-text="Screenshot that shows the raw data view." lightbox="media/concepts-telemetry-properties-commands/raw-data.png":::
On this view, you can select the columns to display and set a time range to view. The **Unmodeled data** column shows data from the device that doesn't match any property or telemetry definitions in the device template.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Title: Connect devices with X.509 certificates in an Azure IoT Central applicati
description: How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application Previously updated : 09/13/2022 Last updated : 10/31/2022
Make a note of the location of these files. You need it later.
1. The status of the primary certificate is now **Verified**:
- ![Verified Certificate](./media/how-to-connect-devices-x509/verified.png)
+ :::image type="content" source="media/how-to-connect-devices-x509/verified.png" alt-text="Screenshot that shows a verified X509 certificate." lightbox="media/how-to-connect-devices-x509/verified.png":::
You can now connect devices that have an X.509 certificate derived from this primary root certificate.
To run the sample:
Verify that telemetry appears on the device view in your IoT Central application:
-![Screenshot that shows telemetry arriving in your IoT Central application.](./media/how-to-connect-devices-x509/telemetry.png)
## Use individual enrollment
These commands produce the following device certificates:
1. The device now has an individual enrollment with X.509 certificates.
- ![Individual enrollment certificates](./media/how-to-connect-devices-x509/individual-enrollment.png)
+ :::image type="content" source="media/how-to-connect-devices-x509/individual-enrollment.png" alt-text="Screenshot that shows how to connect a device using an X.509 individual enrollment." lightbox="media/how-to-connect-devices-x509/individual-enrollment.png":::
### Run a sample individual enrollment device
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-rigado-cascade-500.md
Previously updated : 06/15/2022 Last updated : 11/01/2022 # This article applies to solution builders.
To add a Cascade 500 device template:
1. The page gives you an option to **Create a custom template** or **Use a preconfigured device template**.
-1. Select the C500 device template from the list of preconfigured device templates.
+1. Select the Cascade-500 device template from the list of featured device templates.
-1. Select **Next: Customize** to continue to the next step.
+1. Select **Next: Review** to continue to the next step.
-1. On the next screen, select **Create** to onboard the C500 device template into your IoT Central application.
+1. On the next screen, select **Create** to onboard the Cascade-500 device template into your IoT Central application.
## Retrieve application connection details
To connect the Cascade 500 device to your IoT Central application, you need to r
1. Make a note of the **ID Scope** for your IoT Central application:
- ![App ID Scope](./media/howto-connect-rigado-cascade-500/app-scope-id.png)
+ :::image type="content" source="media/howto-connect-rigado-cascade-500/app-scope-id.png" alt-text="Screenshot that shows the ID scope for your application." lightbox="media/howto-connect-rigado-cascade-500/app-scope-id.png":::
1. Now select **SAS-IoT-Edge-Devices** and make a note of the **Primary key**:
- ![Primary Key](./media/howto-connect-rigado-cascade-500/primary-key-sas.png)
+ :::image type="content" source="media/howto-connect-rigado-cascade-500/primary-key-sas.png" alt-text="Screenshot that shows the primary SAS key for you device connection group." lightbox="media/howto-connect-rigado-cascade-500/primary-key-sas.png":::
## Contact Rigado to connect the gateway
When the device connects to the internet, Rigado can push down a configuration u
This update applies the IoT Central connection details on the Cascade 500 device and it then appears in your devices list:
-![Devices list](./media/howto-connect-rigado-cascade-500/devices-list-c500.png)
-You're now ready to use your C500 device in your IoT Central application.
+You're now ready to use your Cascade-500 device in your IoT Central application.
## Next steps
iot-central Howto Connect Ruuvi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-ruuvi.md
Previously updated : 08/20/2021 Last updated : 11/01/2022 # This article applies to solution builders.
To connect the RuuviTag with your IoT Central application, you need to set up a
1. In a few seconds, your RuuviTag appears in the list of devices within IoT Central:
- ![RuuviTag Device List](./media/howto-connect-ruuvi/ruuvi-device-list.png)
+ :::image type="content" source="media/howto-connect-ruuvi/ruuvi-device-list.png" alt-text="Screenshot that shows the device list with a RuuviTag." lightbox="media/howto-connect-ruuvi/ruuvi-device-list.png":::
You can now use this RuuviTag device within your IoT Central application.
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Title: Analyze device data in your Azure IoT Central application | Microsoft Doc
description: Analyze device data in your Azure IoT Central application. Previously updated : 06/21/2022 Last updated : 11/03/2022
The analytics user interface has three main components:
- **Chart control:** The chart control visualizes the data as a line chart. You can toggle the visibility of specific lines by interacting with the chart legend.
- :::image type="content" source="media/howto-create-analytics/analytics-ui.png" alt-text="Screenshot that shows the three areas of the data explorer UI.":::
+ :::image type="content" source="media/howto-create-analytics/analytics-ui.png" alt-text="Screenshot that shows the three areas of the data explorer UI." lightbox="media/howto-create-analytics/analytics-ui.png":::
## Query your data
Select **Save** to save an analytics query. Later, you can retrieve any queries
- **Time editor panel:** By default you see data from the last day. You can drag either end of the slider to change the time duration. You can also use the calendar control to select one of the predefined time buckets or select a custom time range. The time control also has an **Interval size** slider that controls the interval size used to aggregate the data.
- :::image type="content" source="media/howto-create-analytics/time-editor-panel.png" alt-text="Screenshot that shows the time editor panel.":::
+ :::image type="content" source="media/howto-create-analytics/time-editor-panel.png" alt-text="Screenshot that shows the time editor panel." lightbox="media/howto-create-analytics/time-editor-panel.png":::
- **Inner date range slider tool**: Use the two endpoint controls to highlight the time span you want. The inner date range is constrained by the outer date range slider control.
Select **Save** to save an analytics query. Later, you can retrieve any queries
- **Shared:** A graph for each telemetry type is plotted against the same y-axis. - **Overlap:** Use this mode to stack multiple lines on the same y-axis, with the y-axis data changing based on the selected line.
- :::image type="content" source="media/howto-create-analytics/y-axis-control.png" alt-text="A screenshot that highlights the y-axis control.":::
+ :::image type="content" source="media/howto-create-analytics/y-axis-control.png" alt-text="A screenshot that highlights the y-axis control." lightbox="media/howto-create-analytics/y-axis-control.png":::
- **Zoom control:** The zoom control lets you drill further into your data. If you find a time period you'd like to focus on within your result set, use your mouse pointer to highlight the area. Then select **Zoom in**.
- :::image type="content" source="media/howto-create-analytics/zoom.png" alt-text="A Screenshot that shows the use of the zoom control.":::
+ :::image type="content" source="media/howto-create-analytics/zoom.png" alt-text="A Screenshot that shows the use of the zoom control." lightbox="media/howto-create-analytics/zoom.png":::
Select the ellipsis for more chart controls:
- **Drop a Marker:** The **Drop Marker** control lets you anchor certain data points on the chart. It's useful when you're trying to compare data for multiple lines across different time periods.
- :::image type="content" source="media/howto-create-analytics/additional-chart-controls.png" alt-text="A Screenshot that shows how to access the additional chart controls.":::
+ :::image type="content" source="media/howto-create-analytics/additional-chart-controls.png" alt-text="A Screenshot that shows how to access the additional chart controls." lightbox="media/howto-create-analytics/additional-chart-controls.png":::
## Next steps
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
Previously updated : 06/20/2022 Last updated : 10/28/2022
The **My apps** page lists all the IoT Central applications you have access to.
You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for.
-Select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Create an application](howto-create-iot-central-application.md).
+Navigate to **Application > Management** and select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Create an application](howto-create-iot-central-application.md).
+After the application copy operation succeeds, you can navigate to the new application using the link.
-After the app copy operation succeeds, you can navigate to the new application using the link.
-
-Copying an application also copies the definition of rules and email action. Some actions, such as Flow and Logic Apps, are tied to specific rules via the Rule ID. When a rule is copied to a different application, it gets its own Rule ID. In this case, users will have to create a new action and then associate the new rule with it. In general, it's a good idea to check the rules and actions to make sure they're up-to-date in the new app.
+Copying an application also copies the definitions of rules and email actions. Some actions, such as Flow and Logic Apps, are tied to specific rules by the rule ID. When a rule is copied to a different application, it gets its own rule ID. In this case, users must create a new action and then associate the new rule with it. In general, it's a good idea to check the rules and actions to make sure they're up-to-date in the new application.
> [!WARNING] > If a dashboard includes tiles that display information about specific devices, then those tiles show **The requested resource was not found** in the new application. You must reconfigure these tiles to display information about devices in your new application. ## Create and use a custom application template
-When you create an Azure IoT Central application, you have a choice of built-in sample templates. You can also create your own application templates from existing IoT Central applications. You can then use your own application templates when you create new applications.
+When you create an Azure IoT Central application, you choose from the built-in sample templates. You can also create your own application templates from existing IoT Central applications. You can then use your own application templates when you create new applications.
When you create an application template, it includes the following items from your existing application:
To create an application template from an existing IoT Central application:
1. On the **Template Export** page, enter a name and description for your template. 1. Select the **Export** button to create the application template. You can now copy the **Shareable Link** that enables someone to create a new application from the template: - ### Use an application template
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-edit-device-template.md
Title: Edit a device template in your Azure IoT Central application | Microsoft
description: Iterate over your device templates without impacting your live connected devices Previously updated : 06/22/2022 Last updated : 10/31/2022
The following snippet shows the device model for a thermostat device. The device
To view this information in the IoT Central UI, select **View identity** in the device template editor: ## Version a device template
You can create multiple versions of the device template. Over time, you'll have
1. Select the device you need to migrate to another version. 1. Choose **Migrate**:
- :::image type="content" source="media/howto-edit-device-template/migrate-device.png" alt-text="Choose the option to start migrating a device":::
+ :::image type="content" source="media/howto-edit-device-template/migrate-device.png" alt-text="Screenshot that shows how to choose the option to start migrating a device." lightbox="media/howto-edit-device-template/migrate-device.png":::
1. Select the device template with the version you want to migrate the device to and select **Migrate**.
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards.md
Title: Create and manage Azure IoT Central dashboards | Microsoft Docs
description: Learn how to create and manage application and personal dashboards in Azure IoT Central. Previously updated : 06/20/2022 Last updated : 11/03/2022
All users can create *personal dashboards*, visible only to themselves. Users ca
## Create a dashboard
-The following screenshot shows the dashboard in an application created from the **Custom Application** template. If you're in a role with the appropriate permissions, you can customize the default dashboard. To create a new dashboard from scratch, select **+ New dashboard** in the upper-left corner of the page. To create a new dashboard by copying the current dashboard, select **Copy**:
+The following screenshot shows the dashboard in an application created from the **Custom Application** template. If you're in a role with the appropriate permissions, you can customize the default dashboard. To create a new dashboard from scratch, select **Go to dashboard catalog** and then **+ New**. To create a new dashboard by copying the current dashboard, select **Copy**:
In the **Create dashboard** or **Duplicate dashboard** panel, give your dashboard a name and select either **Organization** or **Personal** as the dashboard type. If you're creating an organization dashboard, choose the [organization](howto-create-organizations.md) the dashboard is associated with. An organization dashboard and its tiles only show the devices that are visible to the organization and any of its suborganizations. After you create the dashboard, choose items from the library to add to the dashboard. The library contains the tiles and dashboard primitives you use to customize the dashboard: If you're an administrator, you can create a personal dashboard or an organization dashboard. Users see the organization dashboards associated with the organization they're assigned to. All users can create personal dashboards, visible only to themselves.
If you're an administrator, you can create a personal dashboard or an organizati
You can have several personal dashboards and switch between them or choose from one of the organization dashboards: You can edit your personal dashboards and delete dashboards you don't need. If you have the correct [permissions](howto-manage-users-roles.md#customizing-the-app), you can edit or delete organization dashboards as well. -
-To rename a dashboard or see the organization it's assigned to, select **Dashboard settings**:
-
+You can also manage the dashboards in the catalog by selecting **Go to dashboard catalog**.
## Add tiles
-The following screenshot shows the dashboard in an application created from the **Custom application** template. To customize the current dashboard, select **Edit**:
-
+To customize the current dashboard, select **Edit**.
-After you select **Edit**, **New dashboard**, or **Copy**, the dashboard is in *edit* mode. You can use the tools in the **Edit dashboard** panel to add tiles to the dashboard. You can customize and remove tiles on the dashboard itself. For example, to add a line chart tile to track telemetry values reported by one or more devices over time:
+After you select **Edit**, the dashboard is in *edit* mode. You can use the tools in the **Add a tile** panel to add tiles to the dashboard. You can customize and remove tiles on the dashboard itself. For example, to add a line chart tile to track telemetry values reported by one or more devices over time:
1. Select **Start with a Visual**, **Line chart**, and then **Add tile**, or just drag the tile onto the canvas.
-1. To edit the tile, select its **pencil** button. Enter a **Title** and select a **Device Group**. In the **Devices** list, select the devices to show on the tile.
+1. To edit the tile, select its **pencil** icon. Enter a **Title** and select a **Device Group**. In the **Devices** list, select the devices to show on the tile.
1. After you select all the devices to show on the tile, select **Update**.
This table describes the types of tiles you can add to a dashboard:
| Property | Display the current values for properties and cloud properties for one or more devices. For example, you can use this tile to display device properties like the manufacturer or firmware version. | | Map (property) | Display the location of one or more devices on a map.| | Map (telemetry) | Display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display a sampled route of where a device has been in the past week.|
-| Image | Display a custom image and can be clickable. The URL can be a relative link to another page in the application or an absolute link to an external site.|
+| Image (static) | Display a custom image that can be clickable. The URL can be a relative link to another page in the application or an absolute link to an external site.|
| Label | Display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard, like descriptions, contact details, or Help.| | Markdown | Clickable tiles that display a heading and description text formatted in Markdown. The URL can be a relative link to another page in the application or an absolute link to an external site.| | External content | Let you load content from an external source. | | Number of devices | Display the number of devices in a device group.|
+| Data explorer query | Display a saved data explorer query. |
Currently, you can add up to 10 devices to tiles that support multiple devices.
By default, line charts show data over a range of time. The selected time range
For tiles that display aggregate values, select the **gear** button next to the telemetry type in the **Configure chart** panel to choose the aggregation. You can choose average, sum, maximum, minimum, or count: For line charts, bar charts, and pie charts, you can customize the colors of the various telemetry values. Select the **palette** button next to the telemetry you want to customize: For tiles that show string properties or telemetry values, you can choose how to display the text. For example, if the device stores a URL in a string property, you can display it as a clickable link. If the URL references an image, you can render the image in a last known value or property tile. To change how a string displays, select the **gear** button next to the telemetry type or property in the tile configuration.
For numeric KPI, LKV, and property tiles, you can use conditional formatting to
Next, add your conditional formatting rules: -
-The following screenshot shows the effect of those conditional formatting rules:
- ### Tile formatting This feature is available on the KPI, LKV, and property tiles. It lets you adjust font size, choose decimal precision, abbreviate numeric values (for example, format 1,700 as 1.7 K), or wrap string values on their tiles. -
-## Pin analytics to dashboard
+## Pin data explorer query to dashboard
-To continuously monitor the analytics queries, you can pin the query to dashboard. To pin a query to the dashboard:
+To continuously monitor the data explorer queries, you can pin a query to a dashboard. To pin a query to the dashboard:
-1. Navigate to **Data explorer** in the left pane and select the query you created.
+1. Navigate to **Data explorer** in the left pane and select a query.
1. Select a dashboard from the dropdown menu and select **Pin to dashboard**. ## Next steps
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
Title: Define a new IoT device type in Azure IoT Central | Microsoft Docs
description: This article shows you how to create a new Azure IoT device template in your Azure IoT Central application. You define the telemetry, state, properties and commands for your type. Previously updated : 06/22/2022 Last updated : 10/31/2022
This article describes how to create a device template in IoT Central. For examp
The following screenshot shows an example of a device template: The device template has the following sections: - Model - Use the model to define how your device interacts with your IoT Central application. Each model has a unique model ID and defines the capabilities of the device. Capabilities are grouped into interfaces. Interfaces let you reuse components across models or use inheritance to extend the set of capabilities.-- Cloud properties - Use cloud properties to define information that your IoT Central application stores about your devices. For example, a cloud property might record the date a device was last serviced.
+- Raw data - View the raw data sent by your designated preview device. This view is useful when you're debugging or troubleshooting a device template.
- Views - Use views to visualize the data from the device and forms to manage and control a device. To learn more, see [What are device templates?](concepts-device-templates.md).
This section shows you how to import a device template from the catalog and how
1. On the **Review** page, select **Create**. The name of the template you created is **Sensor Controller**. The model includes components such as **Sensor Controller**, **SensorTemp**, and **Device Information interface**. Components define the capabilities of an ESP32 device. Capabilities include the telemetry, properties and commands. ## Autogenerate a device template
The following steps show how to use this feature:
1. Connect your device to IoT Central, and start sending the data. When you see the data in the **Raw data** view, select **Auto-create template** in the **Manage template** drop-down:
- :::image type="content" source="media/howto-set-up-template/infer-model-1.png" alt-text="Screenshot that shows raw data from unassigned device.":::
+ :::image type="content" source="media/howto-set-up-template/infer-model-1.png" alt-text="Screenshot that shows raw data from unassigned device." lightbox="media/howto-set-up-template/infer-model-1.png":::
1. On the **Data preview** page, make any required changes to the raw data, and select **Create template**:
- :::image type="content" source="media/howto-set-up-template/infer-model-2.png" alt-text="Screenshot that shows data preview change that lets you edit data that IoT Central uses to generate the device template.":::
+ :::image type="content" source="media/howto-set-up-template/infer-model-2.png" alt-text="Screenshot that shows data preview change that lets you edit data that IoT Central uses to generate the device template." lightbox="media/howto-set-up-template/infer-model-2.png":::
1. IoT Central generates a template based on the data format shown on the **Data preview** page and assigns the device to it. You can make further changes to the device template, such as renaming it or adding capabilities, on the **Device templates** page:
- :::image type="content" source="media/howto-set-up-template/infer-model-3.png" alt-text="Screenshot that shows how to rename the autogenerated device template.":::
+ :::image type="content" source="media/howto-set-up-template/infer-model-3.png" alt-text="Screenshot that shows how to rename the autogenerated device template." lightbox="media/howto-set-up-template/infer-model-3.png":::
## Manage a device template
To create a device model, you can:
1. To view the model ID, select the root interface in the model and select **Edit identity**:
- :::image type="content" source="media/howto-set-up-template/view-id.png" alt-text="Screenshot that shows model id for device template root interface.":::
+ :::image type="content" source="media/howto-set-up-template/view-id.png" alt-text="Screenshot that shows model ID for device template root interface.":::
1. To view the component ID, select **Edit Identity** on any of the component interfaces in the model.
To view and manage the interfaces in your device model:
1. Go to **Device Templates** page and select the device template you created. The interfaces are listed in the **Models** section of the device template. The following screenshot shows an example of the **Sensor Controller** root interface in a device template:
- :::image type="content" source="media/howto-set-up-template/device-template.png" alt-text="Screenshot that shows root interface for a model":::
+ :::image type="content" source="media/howto-set-up-template/device-template.png" alt-text="Screenshot that shows root interface for a model":::
1. Select the ellipsis to add an inherited interface or component to the root interface. To learn more about interfaces and component see [multiple components](../../iot-pnp/concepts-modeling-guide.md#multiple-components) in the modeling guide.
- :::image type="content" source="media/howto-set-up-template/add-interface.png" alt-text="How to add interface or component ":::
+ :::image type="content" source="media/howto-set-up-template/add-interface.png" alt-text="Screenshot that shows how to add interface or component." lightbox="media/howto-set-up-template/add-interface.png":::
1. To export a model or interface select **Export**.
To view and manage the interfaces in your device model:
Select **+ Add capability** to add capability to an interface or component. For example, you can add **Target Temperature** capability to a **SensorTemp** component. #### Telemetry Telemetry is a stream of values sent from the device, typically from a sensor. For example, a sensor might report the ambient temperature as shown in the following screenshot: The following table shows the configuration settings for a telemetry capability:
The following table shows the configuration settings for a telemetry capability:
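
The configuration table isn't reproduced here, but as a rough sketch of how such a capability appears in the underlying model (an assumption based on DTDL v2 and a temperature sensor, not text taken from this article), a telemetry definition might look like this:

``` json
{
  "@type": [ "Telemetry", "Temperature" ],
  "name": "temperature",
  "displayName": { "en": "Temperature" },
  "schema": "double",
  "unit": "degreeCelsius"
}
```
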
#### Properties
-Properties represent point-in-time values. You can set writable properties from IoT Central.
-For example, a device can use a writable property to let an operator set the target temperature as shown in the following screenshot:
+Properties represent point-in-time values. You can set writable properties from IoT Central. For example, a device can use a writable property to let an operator set the target temperature as shown in the following screenshot:
The following table shows the configuration settings for a property capability:
The following table shows the configuration settings for a property capability:
You can call device commands from IoT Central. Commands optionally pass parameters to the device and receive a response from the device. For example, you can call a command to reboot a device in 10 seconds as shown in the following screenshot: The following table shows the configuration settings for a command capability:
To learn more about how devices implement commands, see [Telemetry, property, an
You can choose queue commands if a device is currently offline by enabling the **Queue if offline** option for a command in the device template. - This option uses IoT Hub cloud-to-device messages to send notifications to devices. To learn more, see the IoT Hub article [Send cloud-to-device messages](../../iot-hub/iot-hub-devguide-messages-c2d.md). Cloud-to-device messages:
Cloud-to-device messages:
Use cloud properties to store information about devices in IoT Central. Cloud properties are never sent to a device. For example, you can use cloud properties to store the name of the customer who has installed the device, or the device's last service date. +
+> [!TIP]
+> You can only add cloud properties to the **Root** component in the model.
The following table shows the configuration settings for a cloud property:
To add a view to a device template:
1. Enter a name for your view in **View name**. 1. Select **Start with a visual** under add tiles and choose the type of visual for your tile. Then either select **Add tile** or drag and drop the visual onto the canvas. To configure the tile, select the gear icon. To test your view, select **Configure preview device**. This feature lets you see the view as an operator sees it after it's published. Use this feature to validate that your views show the correct data. Choose from the following options:
Add forms to a device template to enable operators to manage a device by viewing
1. Change the form name to **Manage device**.
-1. Select the **Customer Name** and **Last Service Date** cloud properties, and the **Target Temperature** property. Then select **Add section**.
+1. Select the properties and cloud properties to add to the form. Then select **Add section**.
1. Select **Save** to save your new form. ## Publish a device template
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
Title: How to use device commands in an Azure IoT Central solution
description: How to use device commands in Azure IoT Central solution. This tutorial shows you how to use device commands in client app to your Azure IoT Central application. Previously updated : 06/22/2022 Last updated : 10/31/2022
Standard commands are sent to a device to instruct the device to do something. A
Commands are defined as part of a device template. The following screenshot shows the **Get Max-Min report** command definition in the **Thermostat** device template. This command has both request and response parameters: The following table shows the configuration settings for a command capability:
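
For orientation, the underlying DTDL definition of a command with request and response parameters has roughly the following shape. This sketch approximates the **Get Max-Min report** command from the public thermostat model; the response fields are abbreviated and the exact names may differ:

``` json
{
  "@type": "Command",
  "name": "getMaxMinReport",
  "displayName": { "en": "Get Max-Min report" },
  "request": {
    "name": "since",
    "schema": "dateTime"
  },
  "response": {
    "name": "tempReport",
    "schema": {
      "@type": "Object",
      "fields": [
        { "name": "maxTemp", "schema": "double" },
        { "name": "minTemp", "schema": "double" },
        { "name": "avgTemp", "schema": "double" }
      ]
    }
  }
}
```
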
The call to `onDeviceMethod` sets up the `commandHandler` method. This command h
The following screenshot shows how the successful command response displays in the IoT Central UI: ## Long-running commands
The call to `onDeviceMethod` sets up the `commandHandler` method. This command h
The following screenshot shows the IoT Central UI when it receives the property update that indicates the command is complete: ## Offline commands
This section shows you how a device handles an offline command. If a device is o
The following screenshot shows an offline command called **GenerateDiagnostics**. The request parameter is an object with datetime property called **StartTime** and an integer enumeration property called **Bank**: The following code snippet shows how a client can listen for offline commands and display the message contents:
Properties: {"propertyList":[{"key":"iothub-ack","value":"none"},{"key":"method-
You can call commands on a device that isn't assigned to a device template. To call a command on an unassigned device navigate to the device in the **Devices** section, select **Manage device** and then **Command**. Enter the method name, payload, and any other required values. The following screenshot shows the UI you use to call a command: ## Next steps
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
Title: Use location data in an Azure IoT Central solution
description: Learn how to use location data sent from a device connected to your IoT Central application. Plot location data on a map or create geofencing rules. Previously updated : 06/22/2022 Last updated : 11/03/2022
When you create a view for a device, you can choose to plot the location on a ma
:::image type="content" source="media/howto-use-location-data/location-views.png" alt-text="Screenshot showing example view with location data" lightbox="media/howto-use-location-data/location-views.png":::
-You can add map tiles to a dashboard to plot the location of one or more devices. When you add a map tile to show location telemetry, you can plot the location over a time period. The following screenshot shows the location reported by a simulated device over the last 30 minutes:
-
+You can add map tiles to a dashboard to plot the location of one or more devices. When you add a map tile to show location telemetry, you can plot the location over a time period, as shown in the previous screenshot.
## Create a geofencing rule
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
Title: Use properties in an Azure IoT Central solution
description: Learn how to use read-only and writable properties in an Azure IoT Central solution. Previously updated : 06/21/2022 Last updated : 10/31/2022
Properties are data fields that represent the state of your device. Use properti
The following screenshot shows a property definition in an Azure IoT Central application. The following table shows the configuration settings for a property capability.
-| Field | Description |
-|--||
-| Display name | The display name for the property value used on dashboard tiles and device forms. |
-| Name | The name of the property. Azure IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field must be alphanumeric. The device code uses this **Name** value. |
-| Capability type | Property. |
-| Semantic type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. |
-| Schema | The property data type, such as double, string, or vector. The available choices are determined by the semantic type. Schema isn't available for the event and state semantic types. |
-| Writable | If the property isn't writable, the device can report property values to Azure IoT Central. If the property is writable, the device can report property values to Azure IoT Central. Then Azure IoT Central can send property updates to the device. |
-| Severity | Only available for the event semantic type. The severities are **Error**, **Information**, or **Warning**. |
-| State values | Only available for the state semantic type. Define the possible state values, each of which has display name, name, enumeration type, and value. |
-| Unit | A unit for the property value, such as **mph**, **%**, or **&deg;C**. |
-| Display unit | A display unit for use on dashboards tiles and device forms. |
-| Comment | Any comments about the property capability. |
-| Description | A description of the property capability. |
+| Field | Description |
+|||
+| Display name | The display name for the property value used on dashboard tiles and device forms. |
+| Name | The name of the property. Azure IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field must be alphanumeric. The device code uses this **Name** value. |
+| Capability type | Property. |
+| Semantic type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. |
+| Schema | The property data type, such as double, string, or vector. The available choices are determined by the semantic type. Schema isn't available for the event and state semantic types. |
+| Writable | If the property isn't writable, the device can report property values to Azure IoT Central. If the property is writable, the device can report property values to Azure IoT Central. Then Azure IoT Central can send property updates to the device. |
+| Severity | Only available for the event semantic type. The severities are **Error**, **Information**, or **Warning**. |
+| State values | Only available for the state semantic type. Define the possible state values, each of which has display name, name, enumeration type, and value. |
+| Unit | A unit for the property value, such as **mph**, **%**, or **&deg;C**. |
+| Display unit | A display unit for use on dashboards tiles and device forms. |
+| Comment | Any comments about the property capability. |
+| Description | A description of the property capability. |
The properties can also be defined in an interface in a device template as shown here:
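
The original snippet isn't included here, but a writable property defined in an interface typically has the following shape. This is a sketch that assumes a target temperature property modeled in DTDL v2; names and unit are illustrative:

``` json
{
  "@type": [ "Property", "Temperature" ],
  "name": "targetTemperature",
  "displayName": { "en": "Target Temperature" },
  "schema": "double",
  "unit": "degreeCelsius",
  "writable": true
}
```
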
Optional fields, such as display name and description, let you add more details
When you create a property, you can specify complex schema types such as **Object** and **Enum**.
-When you select the complex **Schema**, such as **Object**, you need to define the object, too.
+When you select the complex **Schema**, such as **Object**, you need to define the object schema.
The following code shows the definition of an Object property type. This object has two fields with types string and integer.

``` json
{
- "@type": "Property",
- "displayName": {
- "en": "ObjectProperty"
- },
- "name": "ObjectProperty",
- "schema": {
- "@type": "Object",
+ "@type": "Property",
+ "description": {
+ "en": "Device model name."
+ },
"displayName": {
- "en": "Object"
+ "en": "Device model"
},
- "fields": [
- {
+ "name": "model",
+ "writable": false,
+ "schema": {
+ "@type": "Object",
"displayName": {
- "en": "Field1"
+ "en": "Object"
},
- "name": "Field1",
- "schema": "integer"
- },
- {
- "displayName": {
- "en": "Field2"
- },
- "name": "Field2",
- "schema": "string"
- }
- ]
- }
+ "fields": [
+ {
+ "displayName": {
+ "en": "Model Name"
+ },
+ "name": "ModelName",
+ "schema": "string"
+ },
+ {
+ "displayName": {
+ "en": "Model ID"
+ },
+ "name": "ModelID",
+ "schema": "integer"
+ }
+ ]
+ }
}
```
This article uses Node.js for simplicity. For other language examples, see the [
The following view in Azure IoT Central application shows the properties you can see. The view automatically makes the **Device model** property a _read-only device property_. ## Implement writable properties
You can view and update writable properties on a device that isn't assigned to a
To view existing properties on an unassigned device, navigate to the device in the **Devices** section, select **Manage device**, and then **Device Properties**: You can update the writable properties in this view: ## Next steps
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md
You can further customize the device management and monitoring experience using
- Create more views to display on the **Devices** page for individual devices by adding view definitions to your [device templates](concepts-device-templates.md). - Customize the text that describes your devices in the application. To learn more, see [Change application text](howto-customize-ui.md#change-application-text).-- Create [custom device management dashboards](howto-manage-dashboards.md). A dashboard can include a [pinned query](howto-manage-dashboards.md#pin-analytics-to-dashboard) from the **Data explorer**.
+- Create [custom device management dashboards](howto-manage-dashboards.md). A dashboard can include a [pinned query](howto-manage-dashboards.md#pin-data-explorer-query-to-dashboard) from the **Data explorer**.
## Automate
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
The validation commands also report an error if the same telemetry name is defin
If you prefer to use a GUI, use the IoT Central **Raw data** view to see if something isn't being modeled. When you've detected the issue, you may need to update device firmware, or create a new device template that models previously unmodeled data.
iot-hub-device-update Import Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-concepts.md
The *Files* part contains the metadata of update payload files like their names,
## Create an import manifest
-You may use any text editor to create import manifest JSON file. There are also sample scripts for creating import manifest programmatically in [Azure/iot-hub-device-update](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets) on GitHub.
+While it's possible to author an import manifest JSON manually using a text editor, the Azure Command Line Interface (CLI) simplifies the process greatly. When you're ready to try out the creation of an import manifest, you can use the [How-to guide](create-update.md#create-a-basic-device-update-import-manifest).
> [!IMPORTANT] > An import manifest JSON filename must end with `.importmanifest.json` when imported through Azure portal.
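
For orientation only, a trimmed import manifest is a JSON document along the following lines. The field names and values here are illustrative assumptions rather than a complete or authoritative schema; use the CLI or the how-to guide linked above to generate a valid manifest:

``` json
{
  "updateId": { "provider": "Contoso", "name": "Toaster", "version": "1.0" },
  "compatibility": [ { "manufacturer": "Contoso", "model": "Toaster" } ],
  "instructions": {
    "steps": [ { "handler": "microsoft/swupdate:1", "files": [ "firmware.swu" ] } ]
  },
  "files": [
    { "filename": "firmware.swu", "sizeInBytes": 7558, "hashes": { "sha256": "<base64-hash-of-file>" } }
  ],
  "createdDateTime": "2022-11-08T00:00:00Z",
  "manifestVersion": "5.0"
}
```
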
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-appservice-insights.md
Azure Load Testing Preview collects detailed resource metrics across your Azure
Azure Load Testing lets you monitor server-side metrics for your Azure app components for a load test. You can then visualize and analyze these metrics in the Azure Load Testing dashboard.
-When the application you're load testing is hosted on Azure App Service, you can get extra insights by using [App Service diagnostics](/azure/app-service/overview-diagnostics.md).
+When the application you're load testing is hosted on Azure App Service, you can get extra insights by using [App Service diagnostics](/azure/app-service/overview-diagnostics).
To view the App Service diagnostics information for your application under load test:
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 10/15/2022 Last updated : 11/03/2022
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-In *single-tenant* Azure Logic Apps, the *app settings* for a logic app specify the global configuration options that affect *all the workflows* in that logic app. However, these settings apply *only* when these workflows run in your *local development environment*. Locally running workflows can access these app settings as *local environment variables*, which are used by local development tools for values that can often change between environments. For example, these values can contain connection strings. When you deploy to Azure, app settings are ignored and aren't included with your deployment.
+In *single-tenant* Azure Logic Apps, the *app settings* for a Standard logic app specify the global configuration options that affect *all the workflows* in that logic app. However, these settings apply *only* when these workflows run in your *local development environment*. Locally running workflows can access these app settings as *local environment variables*, which are used by local development tools for values that can often change between environments. For example, these values can contain connection strings. When you deploy to Azure, app settings are ignored and aren't included with your deployment.
Your logic app also has *host settings*, which specify the runtime configuration settings and values that apply to *all the workflows* in that logic app, for example, default values for throughput, capacity, data size, and so on, *whether they run locally or in Azure*.
App settings in Azure Logic Apps work similarly to app settings in Azure Functio
| Setting | Default value | Description | |||-| | `AzureWebJobsStorage` | None | Sets the connection string for an Azure storage account. |
+| `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
+| `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. |
+| `Workflows.Connection.AuthenticationAudience` | None | Sets the audience for authenticating a managed (Azure-hosted) connection. |
+| `Workflows.CustomHostName` | None | Sets the host name to use for workflow and input-output URLs, for example, "logic.contoso.com". For information to configure a custom DNS name, see [Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md) and [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). |
| `Workflows.<workflowName>.FlowState` | None | Sets the state for <*workflowName*>. |
-| `Workflows.<workflowName>.RuntimeConfiguration.RetentionInDays` | None | Sets the operation options for <*workflowName*>. |
-| `Workflows.Connection.AuthenticationAudience` | None | Sets the audience for authenticating an Azure-hosted connection. |
+| `Workflows.<workflowName>.RuntimeConfiguration.RetentionInDays` | None | Sets the amount of time in days to keep the run history for <*workflowName*>. |
+| `Workflows.RuntimeConfiguration.RetentionInDays` | `90.00:00:00` <br>(90 days) | Sets the amount of time in days to keep workflow run history after a run starts. |
| `Workflows.WebhookRedirectHostUri` | None | Sets the host name to use for webhook callback URLs. |
-| `Workflows.CustomHostName` | None | Sets the host name to use for workflow and input-output URLs, for example, "logic.contoso.com". For information to configure a custom DNS name, see [Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md) and [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). |
-| `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. |
-| `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
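
During local development, a Standard logic app project typically keeps these app settings in its **local.settings.json** file under the `Values` object. A minimal sketch with illustrative values:

``` json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "ServiceProviders.Sql.QueryTimeout": "00:05:00",
    "Workflows.CustomHostName": "logic.contoso.com"
  }
}
```
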
<a name="manage-app-settings"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
<a name="run-duration-history"></a>
-### Run duration and history
+### Run duration and history retention
| Setting | Default value | Description | |||-|
-| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. <br><br>**Important**: Make sure this value is less than or equal to the `Runtime.FlowRetentionThreshold` value. Otherwise, run histories can get deleted before the associated jobs are complete. |
-| `Runtime.FlowRetentionThreshold` | `90.00:00:00` <br>(90 days) | Sets the amount of time to keep workflow run history after a run starts. |
+| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. <br><br>**Important**: Make sure this value is less than or equal to the value for the app setting named `Workflows.RuntimeConfiguration.RetentionInDays`. Otherwise, run histories can get deleted before the associated jobs are complete. |
+| `Runtime.FlowMaintenanceJob.RetentionCooldownInterval` | `7.00:00:00` <br>(7 days) | Sets the interval, in days, at which to check for and delete run history that you no longer want to keep. |
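
As a sketch, in the single-tenant model you set these host settings in the **host.json** file, typically under the `extensions.workflow.settings` object. The values below are illustrative overrides, not the defaults:

``` json
{
  "version": "2.0",
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.Backend.FlowRunTimeout": "30.00:00:00",
        "Runtime.FlowMaintenanceJob.RetentionCooldownInterval": "3.00:00:00"
      }
    }
  }
}
```
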
<a name="run-actions"></a>
These settings affect the throughput and capacity for single-tenant Azure Logic
You can add, update, or delete host settings, which specify the runtime configuration settings and values that apply to *all the workflows* in that logic app, such as default values for throughput, capacity, data size, and so on, *whether they run locally or in Azure*. For host settings specific to logic apps, review the [reference guide for available runtime and deployment settings - host.json](#reference-host-json).
+<a name="manage-host-settings-portal"></a>
+ ### Azure portal - host.json To review the host settings for your single-tenant based logic app in the Azure portal, follow these steps:
To add a setting, follow these steps:
1. Now, restart your logic app. Return to your logic app's **Overview** page, and select **Restart**.
+<a name="manage-host-settings-visual-studio-code"></a>
+ ### Visual Studio Code - host.json To review the host settings for your logic app in Visual Studio Code, follow these steps:
logic-apps Logic Apps Enterprise Integration Rosettanet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-rosettanet.md
Title: RosettaNet messages for B2B integration
-description: Exchange RosettaNet messages in Azure Logic Apps with Enterprise Integration Pack.
+ Title: Exchange RosettaNet messages
+description: Exchange RosettaNet messages for B2B enterprise integration using Azure Logic Apps. Add a PIP process configuration and an agreement to an integration account.
ms.suite: integration Previously updated : 08/25/2022 Last updated : 11/07/2022
+#Customer intent: As a logic apps developer, I want to send and receive RosettaNet messages using workflows in Azure Logic Apps so that I can use a standardized process to share business information with partners.
-# Exchange RosettaNet messages for B2B enterprise integration in Azure Logic Apps
+# Exchange RosettaNet messages for B2B integration using workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-[RosettaNet](https://resources.gs1us.org) is a non-profit consortium that has established standard processes for sharing business information. These standards are commonly used for supply chain processes and are widespread in the semiconductor, electronics, and logistics industries. The RosettaNet consortium creates and maintains Partner Interface Processes (PIPs), which provide common business process definitions for all RosettaNet message exchanges. RosettaNet is based on XML and defines message guidelines, interfaces for business processes, and implementation frameworks for communication between companies.
+To send and receive RosettaNet messages in workflows that you create using Azure Logic Apps, you can use the RosettaNet connector, which provides actions that manage and support communication that follows RosettaNet standards. RosettaNet is a non-profit consortium that has established standard processes for sharing business information. These standards are commonly used for supply chain processes and are widespread in the semiconductor, electronics, and logistics industries. The RosettaNet consortium creates and maintains Partner Interface Processes (PIPs), which provide common business process definitions for all RosettaNet message exchanges. RosettaNet is based on XML and defines message guidelines, interfaces for business processes, and implementation frameworks for communication between companies. For more information, visit the [RosettaNet site](https://resources.gs1us.org).
-In [Azure Logic Apps](../logic-apps/logic-apps-overview.md), the RosettaNet connector helps you create integration solutions that support RosettaNet standards. The connector is based on RosettaNet Implementation Framework (RNIF) version 2.0.01. RNIF is an open network application framework that enables business partners to collaboratively run RosettaNet PIPs. This framework defines the message structure, the need for acknowledgments, Multipurpose Internet Mail Extensions (MIME) encoding, and the digital signature.
+The connector is based on the RosettaNet Implementation Framework (RNIF) version 2.0.01 and supports all PIPs defined by this version. RNIF is an open network application framework that enables business partners to collaboratively run RosettaNet PIPs. This framework defines the message structure, the need for acknowledgments, Multipurpose Internet Mail Extensions (MIME) encoding, and the digital signature. Communication with the partner can be synchronous or asynchronous. The connector provides the following capabilities:
-Specifically, the connector provides these capabilities:
-
-* Encode or receive RosettaNet messages.
-* Decode or send RosettaNet messages.
+* Receive or decode RosettaNet messages.
+* Send or encode RosettaNet messages.
* Wait for the response and generation of Notification of Failure.
-For these capabilities, the connector supports all PIPs that are defined by RNIF 2.0.01. Communication with the partner can be synchronous or asynchronous.
+This how-to guide shows how to send and receive RosettaNet messages in workflows using Azure Logic Apps and the RosettaNet connector by completing the following tasks:
+
+* Add a PIP process configuration, if you don't have one already.
+* Create a RosettaNet agreement.
+* Add an action that receives or decodes RosettaNet messages.
+* Add an action that sends or encodes RosettaNet messages.
## RosettaNet concepts
-Here are some concepts and terms that are unique to the RosettaNet specification and are important when building RosettaNet-based integrations:
+The following concepts and terms are unique to the RosettaNet specification and are important to know when you build RosettaNet-based integration workflows:
* **PIP**
- The RosettaNet organization creates and maintains Partner Interface Processes (PIPs), which provide common business process definitions for all RosettaNet message exchanges. Each PIP specification provides a document type definition (DTD) file and a message guideline document. The DTD file defines the service-content message structure. The message-guideline document, which is a human-readable HTML file, specifies element-level constraints. Together, these files provide a complete definition of the business process.
+ The RosettaNet organization creates and maintains PIPs, which provide common business process definitions for all RosettaNet message exchanges. Each PIP specification provides a document type definition (DTD) file and a message guideline document. The DTD file defines the service-content message structure. The message guideline document, which is a human-readable HTML file, specifies element-level constraints. Together, these files provide a complete definition of the business process.
- PIPs are categorized by a high-level business function, or cluster, and a subfunction, or segment. For example, "3A4" is the PIP for Purchase Order, while "3" is the Order Management function, and "3A" is the Quote & Order Entry subfunction. For more information, see the [RosettaNet site](https://resources.gs1us.org).
+ PIPs are categorized by a high-level business function, or cluster, and a subfunction, or segment. For example, "3A4" is the PIP for Purchase Order, while "3" is the Order Management function, and "3A" is the Quote & Order Entry subfunction. For more information, visit the [RosettaNet site](https://resources.gs1us.org).
* **Action**
Here are some concepts and terms that are unique to the RosettaNet specification
For a single-action PIP, the only response is an acknowledgment signal message. For a double-action PIP, the initiator receives a response message and replies with an acknowledgment in addition to the single-action message flow.
+## Connector technical reference
+
+The RosettaNet connector is available only for Consumption logic app workflows.
+
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. The **RosettaNet** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [RosettaNet connector operations](#rosettanet-operations) <br>- [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Consumption** | Integration service environment (ISE) | Built-in connector, which appears in the designer with the **CORE** label. The **RosettaNet** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [RosettaNet connector operations](#rosettanet-operations) <br>- [ISE message limits](logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+
+<a name="rosettanet-operations"></a>
+
+### RosettaNet operations
+
+The **RosettaNet** connector has no triggers. The following table describes the actions that the **RosettaNet** connector provides for establishing security and reliability when transmitting messages:
+
+| Action | Description |
+|--|-|
+| [**RosettaNet Encode** action](#send-encode-rosettanet) | Send RosettaNet messages using encoding that follows RosettaNet standards. |
+| [**RosettaNet Decode** action](#receive-decode-rosettanet) | Receive RosettaNet messages using decoding that follows RosettaNet standards. |
+| [**RosettaNet wait for response** action](#send-encode-rosettanet) | Have the host wait for a RosettaNet response or signal message from the receiver. |
+ ## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* The Consumption logic app resource and workflow where you want to use the RosettaNet operations.
-* An [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) for storing your agreement and other B2B artifacts. This integration account must be associated with your Azure subscription.
+* An [integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) for storing your agreement and other business-to-business (B2B) artifacts.
-* At least two [partners](../logic-apps/logic-apps-enterprise-integration-partners.md) that are defined in your integration account and configured with the "DUNS" qualifier under **Business Identities**
+ > [!IMPORTANT]
+ >
+ > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
+ > To use integration account artifacts in your workflow, make sure to [link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
-* A PIP process configuration, which is required to send or receive RosettaNet messages, in your integration account. The process configuration stores all the PIP configuration characteristics. You can then reference this configuration when you create an agreement with the partner. To create a PIP process configuration in your integration account, see [Add PIP process configuration](#add-pip).
+* At least two [partners](../logic-apps/logic-apps-enterprise-integration-partners.md) that are defined in your integration account and configured with the **DUNS** qualifier under **Business Identities** in the Azure portal.
-* Optional [certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md) for encrypting, decrypting, or signing the messages that you upload to the integration account. Certificates are required only if you are use signing or encryption.
+* Optional [certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md) for encrypting, decrypting, or signing the messages that you upload to the integration account. Certificates are required only if you use signing or encryption.
<a name="add-pip"></a> ## Add PIP process configuration
-To add a PIP process configuration to your integration account, follow these steps:
+To send or receive RosettaNet messages, your integration account requires a PIP process configuration. If you don't already have one, create this configuration by following these steps. The process configuration stores all the PIP configuration characteristics. You can then reference this configuration when you create an agreement with a partner.
-1. In the [Azure portal](https://portal.azure.com), find and open your integration account.
+1. In the [Azure portal](https://portal.azure.com), go to your integration account.
-1. On the **Overview** pane, select the **RosettaNet PIP** tile.
+1. On the integration account navigation menu, under **Settings**, select **RosettaNet PIP**.
- ![Choose RosettaNet tile](media/logic-apps-enterprise-integration-rosettanet/select-rosettanet-tile.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/select-rosettanetpip.png" alt-text="Screenshot of the Azure portal and the integration account page. On the navigation menu, RosettaNet PIP is selected.":::
-1. Under **RosettaNet PIP**, choose **Add**. Provide your PIP details.
+1. On the **RosettaNet PIP** page, select **Add**. On the **Add Partner Interface Process** pane, enter your PIP details.
- ![Add RosettaNet PIP details](media/logic-apps-enterprise-integration-rosettanet/add-rosettanet-pip.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-rosettanet-pip.png" alt-text="Screenshot of the RosettaNet PIP page, with Add selected. The Add Partner Interface Process pane contains boxes for the name, code, and version.":::
| Property | Required | Description | |-|-|-|
- | **Name** | Yes | Your PIP name |
- | **PIP Code** | Yes | The PIP three-digit code. For more information, see [RosettaNet PIPs](/biztalk/adapters-and-accelerators/accelerator-rosettanet/rosettanet-pips). |
- | **PIP Version** | Yes | The PIP version number, which is available based on your selected PIP code |
- ||||
+ | **Name** | Yes | Your PIP name. |
+ | **PIP Code** | Yes | The three-digit PIP code. For more information, see [RosettaNet PIPs](/biztalk/adapters-and-accelerators/accelerator-rosettanet/rosettanet-pips). |
+ | **PIP Version** | Yes | The PIP version number, which depends on your selected PIP code. |
For more information about these PIP properties, visit the [RosettaNet website](https://resources.gs1us.org/RosettaNet-Standards/Standards-Library/PIP-Directory#1043208-pipsreg).
-1. When you're done, choose **OK**, which creates the PIP configuration.
+1. When you're done, select **OK** to create the PIP configuration.
-1. To view or edit the process configuration, select the PIP, and choose **Edit as JSON**.
+1. To view or edit the process configuration, select the PIP, and select **Edit as JSON**.
- All process configuration settings come from the PIP's specifications. Logic Apps populates most of the settings with the default values that are the most typically used values for these properties.
+ All process configuration settings come from the PIP's specifications. Azure Logic Apps populates most of the settings with the default values that are the most typically used values for these properties.
- ![Edit RosettaNet PIP configuration](media/logic-apps-enterprise-integration-rosettanet/edit-rosettanet-pip.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/edit-rosettanet-pip.png" alt-text="Screenshot of the RosettaNet PIP page, with Edit as JSON and a PIP selected. Under Edit as JSON, encoded PIP properties are visible.":::
1. Confirm that the settings correspond to the values in the appropriate PIP specification and meet your business needs. If necessary, update the values in JSON and save those changes.
+<a name="create-rosettanet-agreement"></a>
+ ## Create RosettaNet agreement
-1. In the [Azure portal](https://portal.azure.com), find and open your integration account, if not already open.
+1. In the [Azure portal](https://portal.azure.com), go to your integration account.
-1. On the **Overview** pane, select the **Agreements** tile.
+1. On the integration account navigation menu, under **Settings**, select **Agreements**.
- ![Choose Agreements tile](media/logic-apps-enterprise-integration-rosettanet/select-agreement-tile.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/select-agreements.png" alt-text="Screenshot of the Azure portal with the integration account page open. On the navigation menu, Agreements is selected.":::
-1. Under **Agreements**, choose **Add**. Provide your agreement details.
+1. On the **Agreements** page, select **Add**. Under **Add**, enter your agreement details.
- ![Add agreement details](media/logic-apps-enterprise-integration-rosettanet/add-agreement-details.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-details.png" alt-text="Screenshot of the Agreements page, with Add selected. On the Add pane, boxes appear for the agreement name and type and for partner information.":::
| Property | Required | Description | |-|-|-|
- | **Name** | Yes | The name of the agreement |
- | **Agreement type** | Yes | Select **RosettaNet**. |
- | **Host Partner** | Yes | An agreement requires both a host and guest partner. The host partner represents the organization that configures the agreement. |
- | **Host Identity** | Yes | An identifier for the host partner |
- | **Guest Partner** | Yes | An agreement requires both a host and guest partner. The guest partner represents the organization that's doing business with the host partner. |
- | **Guest Identity** | Yes | An identifier for the guest partner |
- | **Receive Settings** | Varies | These properties apply to all messages received by the host partner |
- | **Send Settings** | Varies | These properties apply to all messages sent by the host partner |
+ | **Name** | Yes | The name of the agreement. |
+ | **Agreement type** | Yes | The type of the agreement. Select **RosettaNet**. |
+ | **Host Partner** | Yes | The organization that configures the agreement. An agreement requires both a host and guest partner. |
+ | **Host Identity** | Yes | An identifier for the host partner. |
+ | **Guest Partner** | Yes | The organization that's doing business with the host partner. An agreement requires both a host and guest partner. |
+ | **Guest Identity** | Yes | An identifier for the guest partner. |
+ | **Receive Settings** | Varies | Properties that apply to all messages received by the host partner. |
+ | **Send Settings** | Varies | Properties that apply to all messages sent by the host partner. |
| **RosettaNet PIP references** | Yes | The PIP references for the agreement. All RosettaNet messages require PIP configurations. |
- ||||
1. To set up your agreement for receiving incoming messages from the guest partner, select **Receive Settings**.
- ![Receive settings](media/logic-apps-enterprise-integration-rosettanet/add-agreement-receive-details.png)
-
- 1. To enable signing or encryption for incoming messages, under **Messages**, select **Message should be signed** or **Message should be encrypted** respectively.
+ 1. To enable signing or encryption for incoming messages, under **Message**, select **Message should be signed** or **Message should be encrypted**, respectively.
| Property | Required | Description | |-|-|-|
- | **Message should be signed** | No | Sign incoming messages with the selected certificate. |
+ | **Message should be signed** | No | The option to sign incoming messages with the selected certificate |
| **Certificate** | Yes, if signing is enabled | The certificate to use for signing |
- | **Enable message encryption** | No | Encrypt incoming messages with the selected certificate. |
+ | **Enable message encryption** | No | The option to encrypt incoming messages with the selected certificate |
| **Certificate** | Yes, if encryption is enabled | The certificate to use for encryption |
- ||||
- 1. Under each selection, select the respective [certificate](./logic-apps-enterprise-integration-certificates.md), which you previously added to your integration account, to use for signing or encryption.
+ 1. Under each selection, select the [certificate](./logic-apps-enterprise-integration-certificates.md) in your integration account that you want to use for signing or encryption.
-1. To set up your agreement for sending messages to the guest partner, select **Send Settings**.
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-receive-details.png" alt-text="Screenshot of the Receive Settings page, with options for signing and encrypting messages and entering certificates.":::
- ![Send settings](media/logic-apps-enterprise-integration-rosettanet/add-agreement-send-details.png)
+1. To set up your agreement for sending messages to the guest partner, select **Send Settings**.
- 1. To enable signing or encryption for outgoing messages, under **Messages**, select **Enable message signing** or **Enable message encryption** respectively. Under each selection, select the respective algorithm and [certificate](./logic-apps-enterprise-integration-certificates.md), which you previously added to your integration account, to use for signing or encryption.
+ 1. To enable signing or encryption for outgoing messages, under **Messages**, select **Enable message signing** or **Enable message encryption**, respectively. Under each selection, select the algorithm and [certificate](./logic-apps-enterprise-integration-certificates.md) in your integration account that you want to use for signing or encryption.
| Property | Required | Description | |-|-|-|
- | **Enable message signing** | No | Sign outgoing messages with the selected signing algorithm and certificate. |
+ | **Enable message signing** | No | The option to sign outgoing messages with the selected signing algorithm and certificate |
| **Signing Algorithm** | Yes, if signing is enabled | The signing algorithm to use, based on the selected certificate | | **Certificate** | Yes, if signing is enabled | The certificate to use for signing |
- | **Enable message encryption** | No | Encrypt outgoing with the selected encryption algorithm and certificate. |
+ | **Enable message encryption** | No | The option to encrypt outgoing messages with the selected encryption algorithm and certificate |
| **Encryption Algorithm** | Yes, if encryption is enabled | The encryption algorithm to use, based on the selected certificate | | **Certificate** | Yes, if encryption is enabled | The certificate to use for encryption |
- ||||
1. Under **Endpoints**, specify the required URLs to use for sending action messages and acknowledgments.
To add a PIP process configuration to your integration account, follow these ste
|-|-|-| | **Action URL** | Yes | The URL to use for sending action messages. The URL is a required field for both synchronous and asynchronous messages. | | **Acknowledgment URL** | Yes | The URL to use for sending acknowledgment messages. The URL is a required field for asynchronous messages. |
- ||||
-1. To set up your agreement with the RosettaNet PIP references for partners, select **RosettaNet PIP references**. Under **PIP Name**, select the name for your previously created PIP.
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-send-details.png" alt-text="Screenshot of the Send Settings page, with options for signing and encrypting messages and for entering algorithms, certificates, and endpoints.":::
+
+1. To set up your agreement with the RosettaNet PIP references for partners, select **RosettaNet PIP references**. Under **PIP Name**, select the name of the PIP that you created earlier.
- ![PIP references](media/logic-apps-enterprise-integration-rosettanet/add-agreement-pip-details.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-pip-details.png" alt-text="Screenshot that shows a table of PIP information that has one row. That row contains default values except the name, MyPIPConfig, which is selected.":::
Your selection populates the remaining properties, which are based on the PIP that you set up in your integration account. If necessary, you can change the **PIP Role**.
- ![Selected PIP](media/logic-apps-enterprise-integration-rosettanet/add-agreement-selected-pip.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-selected-pip.png" alt-text="Screenshot that shows a table of PIP information. A row for the PIP called MyPIPConfig contains accurate information.":::
After you complete these steps, you're ready to send or receive RosettaNet messages.
-## RosettaNet templates
-
-To accelerate development and recommend integration patterns, you can use logic app templates for decoding and encoding RosettaNet messages. When you create a logic app, you can select from the template gallery in Logic App Designer. You can also find these templates in the [GitHub repository for Azure Logic Apps](https://github.com/Azure/logicapps).
-
-![RosettaNet templates](media/logic-apps-enterprise-integration-rosettanet/decode-encode-rosettanet-templates.png)
+<a name="receive-decode-rosettanet"></a>
## Receive or decode RosettaNet messages
-1. [Create a blank logic app](quickstart-create-first-logic-app-workflow.md).
-
-1. [Link your integration account](logic-apps-enterprise-integration-create-integration-account.md#link-account) to your logic app.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
-1. Before you can add an action to decode the RosettaNet message, you must add a trigger for starting your logic app, such as a Request trigger.
+ Your workflow should already have a trigger and any other actions that you want to run before you add the RosettaNet action. This example continues with the Request trigger.
-1. After adding the trigger, choose **New step**.
+1. Under the trigger or action, select **New step**.
- ![Add Request trigger](media/logic-apps-enterprise-integration-rosettanet/request-trigger.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/request-trigger.png" alt-text="Screenshot of the designer. Under the Request trigger, New step is selected.":::
-1. In the search box, enter "rosettanet", and select this action: **RosettaNet Decode**
+1. Under the **Choose an operation** search box, select **All**. In the search box, enter **rosettanet**. From the actions list, select the action named **RosettaNet Decode**.
- ![Find and select "RosettaNet Decode" action](media/logic-apps-enterprise-integration-rosettanet/select-decode-rosettanet-action.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/select-decode-rosettanet-action.png" alt-text="Screenshot of the designer. The Choose an operation search box contains rosettanet, and the RosettaNet Decode action is selected.":::
-1. Provide the information for the action's properties:
+1. Enter the information for the action's properties:
- ![Screenshot that shows where you provide the information for the action's properties.](media/logic-apps-enterprise-integration-rosettanet/decode-action-details.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/decode-action-details.png" alt-text="Screenshot of the RosettaNet Decode action where boxes are available for the message, the headers, and the role.":::
| Property | Required | Description | |-|-|-| | **Message** | Yes | The RosettaNet message to decode | | **Headers** | Yes | The HTTP headers that provide the values for the version, which is the RNIF version, and the response type, which indicates the communication type between the partners and can be synchronous or asynchronous | | **Role** | Yes | The role of the host partner in the PIP |
- ||||
- From the RosettaNet Decode action, the output, along with other properties, includes **Outbound signal**, which you can choose to encode and return back to the partner, or take any other action on that output.
+ The output of the RosettaNet Decode action includes **Outbound signal**. You can encode this output and return it to the partner, or you can take any other action on this output.
-## Send or encode RosettaNet messages
+<a name="send-encode-rosettanet"></a>
-1. [Create a blank logic app](quickstart-create-first-logic-app-workflow.md).
+## Send or encode RosettaNet messages
-1. [Link your integration account](logic-apps-enterprise-integration-create-integration-account.md#link-account) to your logic app.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer.
-1. Before you can add an action to encode the RosettaNet message, you must add a trigger for starting your logic app, such as a Request trigger.
+ Your workflow should already have a trigger and any other actions that you want to run before you add the RosettaNet action. This example continues with the Request trigger.
-1. After adding the trigger, choose **New step**.
+1. Under the trigger or action, select **New step**.
- ![Add Request trigger](media/logic-apps-enterprise-integration-rosettanet/request-trigger.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/request-trigger.png" alt-text="Screenshot of the designer. Under the Request trigger, New step is selected.":::
-1. In the search box, enter "rosettanet", and select this action: **RosettaNet Encode**
+1. Under the **Choose an operation** search box, select **All**. In the search box, enter **rosettanet**. From the actions list, select the action named **RosettaNet Encode**.
- ![Find and select "RosettaNet Encode" action](media/logic-apps-enterprise-integration-rosettanet/select-encode-rosettanet-action.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/select-encode-rosettanet-action.png" alt-text="Screenshot of the designer. The Choose an operation search box contains rosettanet, and the RosettaNet Encode action is selected.":::
-1. Provide the information for the action's properties:
+1. Enter the information for the action's properties:
- ![Provide action details](media/logic-apps-enterprise-integration-rosettanet/encode-action-details.png)
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/encode-action-details.png" alt-text="Screenshot of the RosettaNet Encode action where boxes appear for the message, the partners, PIP information, the message type, and the role.":::
| Property | Required | Description | |-|-|-|
To accelerate development and recommend integration patterns, you can use logic
| **PIP instance identity** | Yes | The unique identifier for this PIP message | | **Message type** | Yes | The type of the message to encode | | **Role** | Yes | The role of the host partner |
- ||||
The encoded message is now ready to send to the partner.
-1. To send the encoded message, this example uses the **HTTP** action, which is renamed "HTTP - Send encoded message to partner".
+1. To send the encoded message, the following example uses the **HTTP** action, which is renamed **HTTP - Send encoded message to partner**.
+
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/send-rosettanet-message-to-partner.png" alt-text="Screenshot of the designer with an HTTP action renamed as HTTP - Send encoded message to partner, and the URI, header, and body values are entered.":::
- ![HTTP action for sending RosettaNet message](media/logic-apps-enterprise-integration-rosettanet/send-rosettanet-message-to-partner.png)
+ According to RosettaNet standards, business transactions are considered complete only when all the steps defined by the PIP are complete.
- Per RosettaNet standards, business transactions are considered complete only when all the steps defined by the PIP are complete.
+1. After the host sends the encoded message to a partner, the host waits for the signal and acknowledgment. To accomplish this task, add the action named **RosettaNet wait for response**.
-1. After the host sends the encoded message to partner, the host waits for the signal and acknowledgment. To accomplish this task, add the **RosettaNet wait for response** action.
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/rosettanet-wait-for-response-action.png" alt-text="Screenshot of a RosettaNet wait for response action where boxes are available for the body, PIP instance identity, retry count, and role.":::
- ![Add "RosettaNet wait for response" action](media/logic-apps-enterprise-integration-rosettanet/rosettanet-wait-for-response-action.png)
+ The duration to use for waiting and the number of retries are based on the PIP configuration in your integration account. If the response isn't received, a Notification of Failure is generated. To handle retries, always put the **Encode** and **Wait for response** actions in an **Until** loop.
+
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/rosettanet-loop.png" alt-text="Screenshot of the designer. An Until loop contains actions for encoding and sending messages and for waiting for responses.":::
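   If you work in code view, this pattern appears as an **Until** loop that wraps the encode, send, and wait steps. The following fragment is only a minimal sketch of the loop's shape: the inner action definitions are collapsed, and the exit expression, the hypothetical `acknowledged` variable, and the limit values are illustrative placeholders that you'd base on your own PIP configuration, not values defined by the RosettaNet connector.

   ```json
   "Until_partner_acknowledges": {
     "type": "Until",
     "expression": "@equals(variables('acknowledged'), true)",
     "limit": {
       "count": 3,
       "timeout": "PT1H"
     },
     "actions": {
       "RosettaNet_Encode": {},
       "HTTP_-_Send_encoded_message_to_partner": {},
       "RosettaNet_wait_for_response": {}
     }
   }
   ```

   In this sketch, a step inside the loop would set the `acknowledged` variable based on the output from the **RosettaNet wait for response** action, so the loop exits after a successful acknowledgment or stops when the retry limit is reached.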
+
+## RosettaNet templates
- The duration to use for waiting and the number of retries are based on the PIP configuration in your integration account. If the response is not received, this action generates a Notification of Failure. To handle retries, always put the **Encode** and **Wait for response** actions in an **Until** loop.
+To accelerate development and recommend integration patterns, you can use Consumption logic app templates for decoding and encoding RosettaNet messages. When you create a Consumption logic app workflow, you can select from the template gallery in the designer. You can also find these templates in the [GitHub repository for Azure Logic Apps](https://github.com/Azure/logicapps).
- ![Until loop with RosettaNet actions](media/logic-apps-enterprise-integration-rosettanet/rosettanet-loop.png)
## Next steps
-* Learn how to validate, transform, and other message operations with the [Enterprise Integration Pack](../logic-apps/logic-apps-enterprise-integration-overview.md)
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+* [Managed connector reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [About managed connectors in Azure Logic Apps](../connectors/managed.md)
+* [About built-in connectors for Azure Logic Apps](../connectors/built-in.md)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 10/21/2022 Last updated : 11/03/2022 # Limits and configuration reference for Azure Logic Apps > For Power Automate, review [Limits and configuration in Power Automate](/power-automate/limits-and-config).
-This article describes the limits and configuration information for Azure Logic Apps and related resources. To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
+This reference guide describes the limits and configuration information for Azure Logic Apps and related resources. Based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows, you choose whether to create a Consumption logic app workflow that runs in *multi-tenant* Azure Logic Apps or an integration service environment (ISE). Or, create a Standard logic app workflow that runs in *single-tenant* Azure Logic Apps or an App Service Environment (v3 - Windows plans only).
> [!NOTE] > Many limits are the same across the available environments where Azure Logic Apps runs, but differences are noted where they exist.
-The following table briefly summarizes differences between the original **Logic App (Consumption)** resource type and the **Logic App (Standard)** resource type. You'll also learn how the *single-tenant* environment compares to the *multi-tenant* and *integration service environment (ISE)* for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between a Consumption logic app and a Standard logic app. You'll also learn how single-tenant Azure Logic Apps compares to multi-tenant Azure Logic Apps and an ISE for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
The following tables list the values for a single workflow definition:
<a name="run-duration-retention-limits"></a>
-## Run duration and retention history limits
+## Run duration and history retention limits
The following table lists the values for a single workflow run:
The following table lists the values for a single workflow run:
<a name="change-duration"></a> <a name="change-retention"></a>
-### Change run duration and history retention in storage
+## Change run duration and history retention in storage
-In the designer, the same setting controls the maximum number of days that a workflow can run and for keeping run history in storage.
+If a run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage. To avoid losing run history, make sure that the retention limit is *always* more than the run's longest possible duration.
-* For the multi-tenant service, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
+### [Consumption](#tab/consumption)
-* For the single-tenant service, you can decrease or increase the 90-day default limit. For more information, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md).
+For Consumption logic app workflows, the same setting controls the maximum number of days that a workflow can run and for keeping run history in storage.
-* For an integration service environment, you can decrease or increase the 90-day default limit.
+* In multi-tenant Azure Logic Apps, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
-For example, suppose that you reduce the retention limit from 90 days to 30 days. A 60-day-old run is removed from the runs history. If you increase the retention period from 30 days to 60 days, a 20-day-old run stays in the runs history for another 40 days.
-
-> [!IMPORTANT]
-> If the run's duration exceeds the current run history retention limit, the run is removed from the runs history in storage.
-> To avoid losing run history, make sure that the retention limit is *always* more than the run's longest possible duration.
+* In an ISE, you can decrease or increase the 90-day default limit.
-To change the default value or current limit for these properties, follow these steps:
-
-#### [Portal (multi-tenant service)](#tab/azure-portal)
+For example, suppose that you reduce the retention limit from 90 days to 30 days. A 60-day-old run is removed from the runs history. If you increase the retention period from 30 days to 60 days, a 20-day-old run stays in the runs history for another 40 days.
-1. In the [Azure portal](https://portal.azure.com) search box, find and select **Logic apps**.
+#### Portal
-1. Find and open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com) search box, find and open your logic app workflow in the designer.
-1. On the logic app's menu, select **Workflow settings**.
+1. On the logic app menu, select **Workflow settings**.
1. Under **Runtime options**, from the **Run history retention in days** list, select **Custom**.
To change the default value or current limit for these properties, follow these
1. When you're done, on the **Workflow settings** toolbar, select **Save**.
-#### [Resource Manager template](#tab/azure-resource-manager)
+#### ARM template
If you use an Azure Resource Manager template, this setting appears as a property in your workflow's resource definition, which is described in the [Microsoft.Logic workflows template reference](/azure/templates/microsoft.logic/workflows):
If you use an Azure Resource Manager template, this setting appears as a propert
} } ```+
+#### [Standard](#tab/standard)
+
+For Standard logic app workflows, you can decrease or increase the 90-day default limit, but you need to add the following settings and their values to your logic app resource or project:
+
+* An app setting named [**Workflows.RuntimeConfiguration.RetentionInDays**](edit-app-settings-host-settings.md#reference-local-settings-json)
+
+* A host setting named [**Runtime.FlowMaintenanceJob.RetentionCooldownInterval**](edit-app-settings-host-settings.md#run-duration-history)
+
+By default, the app setting named **Workflows.RuntimeConfiguration.RetentionInDays** is set to keep 90 days of data. The host setting named **Runtime.FlowMaintenanceJob.RetentionCooldownInterval** is set to check every 7 days for old data to delete. If you leave these default values, run history data might be *up to* 97 days old before it's deleted. For example, suppose Azure Logic Apps checks on Day X, deletes data older than Day X - 90 days, and then waits 7 days before checking again. The oldest remaining data can reach 97 days of age before the next check runs. However, if you set the interval to 1 day and leave the retention days at the default value, data is deleted at most 91 days (90 + 1) after it's created.
+
+#### Portal
+
+1. [Follow these steps to add the app setting named **Workflows.RuntimeConfiguration.RetentionInDays**](edit-app-settings-host-settings.md?tabs=azure-portal#manage-app-settings), and set the value to the number of days that you want to keep your workflow run history.
+
+1. [Follow these steps to add the host setting named **Runtime.FlowMaintenanceJob.RetentionCooldownInterval**](edit-app-settings-host-settings.md#manage-host-settings-portal), and set the value to the number of days to wait between checks that delete run history older than the retention period.
+
+#### Visual Studio Code
+
+1. [Follow these steps to add the app setting named **Workflows.RuntimeConfiguration.RetentionInDays**](edit-app-settings-host-settings.md?tabs=visual-studio-code#manage-app-settings), and set the value to the number of days that you want to keep your workflow run history.
+
+1. [Follow these steps to add the host setting named **Runtime.FlowMaintenanceJob.RetentionCooldownInterval**](edit-app-settings-host-settings.md#manage-host-settings-visual-studio-code), and set the value to the number of days to wait between checks that delete run history older than the retention period.
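The following snippets sketch where these settings might live in a local Standard logic app project. They're illustrative only: the storage connection string is a local development placeholder, the 30-day retention and 1-day cooldown values are examples, and you should confirm the expected value formats in the linked app settings and host settings references.

In **local.settings.json**, the app setting goes under the **Values** object:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "Workflows.RuntimeConfiguration.RetentionInDays": "30"
  }
}
```

In **host.json**, the host setting goes under the workflow extension settings:

```json
{
  "version": "2.0",
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.FlowMaintenanceJob.RetentionCooldownInterval": "1.00:00:00"
      }
    }
  }
}
```

The cooldown value here uses a days-based timespan format (**days.hours:minutes:seconds**) as an assumption. In the Azure portal, the app setting appears as a name-value pair on your logic app's configuration page rather than in local.settings.json.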
+ <a name="concurrency-looping-and-debatching-limits"></a>
logic-apps Logic Apps Using File Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-file-connector.md
Title: Connect to file systems on premises
-description: Connect to on-premises file systems from Azure Logic Apps with the File System connector.
+ Title: Connect to on-premises file systems
+description: Connect to file systems on premises from workflows in Azure Logic Apps using the File System connector.
ms.suite: integration-- Previously updated : 08/01/2022 Last updated : 11/08/2022
-# Connect to on-premises file systems from Azure Logic Apps
+# Connect to on-premises file systems from workflows in Azure Logic Apps
-With the File System connector, you can create automated integration workflows in Azure Logic Apps that manage files on an on-premises file share, for example:
+This how-to guide shows how to access an on-premises file share from a workflow in Azure Logic Apps by using the File System connector. You can then create automated workflows that run when triggered by events in your file share or in other systems and run actions to manage your files. The connector provides the following capabilities:
- Create, get, append, update, and delete files. - List files in folders or root folders. - Get file content and metadata.
-This article shows how to connect to an on-premises file system through an example scenario where you copy a file from a Dropbox account to a file share, and then send an email. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md).
+In this article, the example scenarios demonstrate the following tasks:
-## Limitations
+- Trigger a workflow when a file is created or added to a file share, and then send an email.
+- Trigger a workflow when copying a file from a Dropbox account to a file share, and then send an email.
-- The File System connector currently supports only Windows file systems on Windows operating systems.-- Mapped network drives aren't supported.-- If you have to use the on-premises data gateway, your gateway installation and file system server must exist in the same Windows domain. For more information, review [Install on-premises data gateway for Azure Logic Apps](logic-apps-gateway-install.md) and [Connect to on-premises data sources from Azure Logic Apps](logic-apps-gateway-connection.md).
+## Connector technical reference
+
+The File System connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-## Connector reference
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector, which appears in the designer under the **Built-in** label. The built-in version differs in the following ways: <br><br>- The built-in version supports only Standard logic apps that run in an App Service Environment v3 with Windows plans only. <br><br>- The built-in version can connect directly to a file share and access Azure virtual networks. You don't need an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [File System built-in connector reference](/azure/logic-apps/connectors/built-in/reference/filesystem/) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
-For connector-specific technical information, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/filesystem/).
+## General limitations
-> [!NOTE]
->
-> If your logic app runs in an integration service environment (ISE), and you use this connector's ISE version,
-> review [ISE message limits](logic-apps-limits-and-config.md#message-size-limits) and
-> [Access to Azure virtual networks with an integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md).
+- The File System connector currently supports only Windows file systems on Windows operating systems.
+- Mapped network drives aren't supported.
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* To create the connection to your file system, different requirements apply based on your logic app and the hosting environment:
+* To connect to your file share, different requirements apply, based on your logic app and the hosting environment:
+
+ - Consumption logic app workflows
+
+ - In multi-tenant Azure Logic Apps, you need to meet the following requirements, if you haven't already:
+
+ 1. [Install the on-premises data gateway on a local computer](logic-apps-gateway-install.md).
+
   The File System managed connector requires that your gateway installation and file system server exist in the same Windows domain.
+
+ 1. [Create an on-premises data gateway resource in Azure](logic-apps-gateway-connection.md).
+
+ 1. After you add a File System managed connector trigger or action to your workflow, select the data gateway resource that you previously created so you can connect to your file system.
+
+ - In an ISE, you don't need the on-premises data gateway. Instead, you can use the ISE-versioned File System connector.
- - For Consumption logic app workflows in multi-tenant Azure Logic Apps, the *managed* File System connector requires that you use the on-premises data gateway resource in Azure to securely connect and access on-premises systems. After you install the on-premises data gateway and create the data gateway resource in Azure, you can select the data gateway resource when you create the connection to your file system from your workflow. For more information, review the following documentation:
+ - Standard logic app workflows
- - [Managed connectors in Azure Logic Apps](../connectors/managed.md)
- - [Install on-premises data gateway for Azure Logic Apps](logic-apps-gateway-install.md)
- - [Connect to on-premises data sources from Azure Logic Apps](logic-apps-gateway-connection.md)
+ You can use the File System built-in connector or managed connector.
- - For logic app workflows in an integration service environment (ISE), you can use the connector's ISE version, which doesn't require the data gateway resource.
+ * To use the File System managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps.
+
+ * To use the File System built-in connector, your Standard logic app workflow must run in App Service Environment v3, but doesn't require the data gateway resource.
* Access to the computer that has the file system you want to use. For example, if you install the data gateway on the same computer as your file system, you need the account credentials for that computer.
-* For the example scenarios in this article, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This logic app workflow uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ.
+* To follow the example scenario in this how-to guide, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This example uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ.
> [!IMPORTANT] > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps.
For connector-specific technical information, such as triggers, actions, and lim
* For the example File System action scenario, you need a [Dropbox account](https://www.dropbox.com/), which you can sign up for free.
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md). To add any trigger, you have to start with a blank workflow.
+* The logic app workflow where you want to access your file share. To start your workflow with a File System trigger, you have to start with a blank workflow. To add a File System action, start your workflow with any trigger.
<a name="add-file-system-trigger"></a> ## Add a File System trigger
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer.
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. On the designer, under the search box, select **Standard**. In the search box, enter **file system**.
-1. On the designer, under the search box, select **All**. In the search box, enter **file system**. From the triggers list, select the File System trigger that you want. This example continues with the trigger named **When a file is created**.
+1. From the triggers list, select the [File System trigger](/connectors/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is created**.
- ![Screenshot showing Azure portal, designer for Consumption logic app, search box with "file system", and File System trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-consumption.png)
+ ![Screenshot showing Azure portal, designer for Consumption logic app workflow, search box with "file system", and File System trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-consumption.png)
-1. If you're prompted to create your file system server connection, provide the following information as required:
+1. In the connection information box, provide the following information as required:
| Property | Required | Value | Description | |-|-|-|-|
For connector-specific technical information, such as triggers, actions, and lim
| **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | | **Password** | Yes | <*password*> | The password for the computer where you have your file system | | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
- |||||
- The following example shows the connection information for the managed File System trigger:
+ The following example shows the connection information for the File System managed connector trigger:
+
+ ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
+
+ The following example shows the connection information for the File System ISE-based trigger:
+
+ ![Screenshot showing Consumption workflow designer and connection information for File System ISE-based connector trigger.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
+
+ ![Screenshot showing Consumption workflow designer and the "When a file is created" trigger.](media/logic-apps-using-file-connector/trigger-file-system-when-file-created-consumption.png)
+
   1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Consumption workflow designer, managed connector "When a file is created" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-file-system-send-email-consumption.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
+
+1. Save your logic app. Test your workflow by uploading a file and triggering the workflow.
+
+ If successful, your workflow sends an email about the new file.
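If you prefer to review the result in code view, a File System managed connector trigger follows the standard API connection pattern in the underlying workflow definition. The following fragment is only a rough sketch: the `path` and query values are generic placeholders rather than the connector's documented operation names, the recurrence values are examples, and the connection name `filesystem` depends on the name that you gave your connection.

```json
"triggers": {
  "When_a_file_is_created": {
    "type": "ApiConnection",
    "recurrence": {
      "frequency": "Minute",
      "interval": 3
    },
    "inputs": {
      "host": {
        "connection": {
          "name": "@parameters('$connections')['filesystem']['connectionId']"
        }
      },
      "method": "get",
      "path": "<file-system-trigger-operation-path>",
      "queries": {
        "folderId": "<folder-to-monitor>"
      }
    }
  }
}
```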
+
+### [Standard](#tab/standard)
+
+#### Built-in connector trigger
+
+These steps apply only to Standard logic apps that run in an App Service Environment v3 with Windows plans.
- ![Screenshot showing connection information for managed File System trigger.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. On the designer, under the search box, select **Built-in**. In the search box, enter **file system**.
+
+1. From the triggers list, select the [File System trigger](/azure/logic-apps/connectors/built-in/reference/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is added**.
+
+ ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and "When a file is added" selected.](media/logic-apps-using-file-connector/select-file-system-trigger-built-in-standard.png)
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+
+ The following example shows the connection information for the File System built-in connector trigger:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/logic-apps-using-file-connector/trigger-file-system-connection-built-in-standard.png)
+
+1. When you're done, select **Create**.
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your trigger.
+
+ For this example, select the folder path on your file system server to check for a newly added file. Specify how often you want to check.
+
+ ![Screenshot showing Standard workflow designer and "When a file is added" trigger information.](media/logic-apps-using-file-connector/trigger-when-file-added-built-in-standard.png)
+
   1. To test your workflow, add an Outlook action that sends you an email when a file is added to the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector "When a file is added" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-send-email-built-in-standard.png)
+
+ > [!TIP]
+ >
+ > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
+ > When the dynamic content list appears, select from the available outputs.
+
+1. Save your logic app. Test your workflow by uploading a file and triggering the workflow.
+
+ If successful, your workflow sends an email about the new file.
+
+#### Managed connector trigger
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. On the designer, under the search box, select **Azure**. In the search box, enter **file system**.
+
+1. From the triggers list, select the [File System trigger](/connectors/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is created**.
+
+ ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and the "When a file is created" trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-managed-standard.png)
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
- The following example shows the connection information for the ISE-based File System trigger:
+ The following example shows the connection information for the File System managed connector trigger:
- ![Screenshot showing connection information for ISE-based File System trigger.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
+ ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/logic-apps-using-file-connector/trigger-file-system-connection-managed-standard.png)
-1. After you provide the required information for your connection, select **Create**.
+1. When you're done, select **Create**.
Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger.
For connector-specific technical information, such as triggers, actions, and lim
For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check.
- ![Screenshot showing the "When a file is created" trigger, which checks for a newly created file on the file system server.](media/logic-apps-using-file-connector/file-system-trigger-when-file-created.png)
+ ![Screenshot showing Standard workflow designer and "When a file is created" trigger information.](media/logic-apps-using-file-connector/trigger-when-file-created-managed-standard.png)
   1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing an action that sends email when a new file is created on the file system server.](media/logic-apps-using-file-connector/file-system-trigger-send-email.png)
+ ![Screenshot showing Standard workflow designer, managed connector "When a file is created" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-send-email-managed-standard.png)
> [!TIP] >
For connector-specific technical information, such as triggers, actions, and lim
If successful, your workflow sends an email about the new file. ++ <a name="add-file-system-action"></a> ## Add a File System action
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the workflow designer, if not already open.
+The example logic app workflow starts with the [Dropbox trigger](/connectors/dropbox/#triggers), but you can use any trigger that you want.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. After the last step or between steps in your workflow, add a new step or action.
+1. Find and select the [File System action](/connectors/filesystem/#actions) that you want to use. This example continues with the action named **Create file**.
- This example uses a Dropbox trigger and follows that step with a File System action.
+ 1. Under the trigger or action where you want to add the File System action, select **New step**.
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter **file system**.
+ Or, to add an action between existing steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-1. From the actions list, select the File System action that you want. This example continues with the action named **Create file**.
+1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **file system**.
- ![Screenshot showing Azure portal, designer for Consumption logic app, search box with "file system", and File System action selected.](media/logic-apps-using-file-connector/select-file-system-action-consumption.png)
+1. From the actions list, select the File System action named **Create file**.
-1. If you're prompted to create your file system server connection, provide the following information as required:
+ ![Screenshot showing Azure portal, designer for Consumption logic app workflow, search box with "file system", and "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-consumption.png)
+
+1. In the connection information box, provide the following information as required:
| Property | Required | Value | Description | |-|-|-|-|
For connector-specific technical information, such as triggers, actions, and lim
| **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | | **Password** | Yes | <*password*> | The password for the computer where you have your file system | | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
- |||||
- The following example shows the connection information for the managed File System action:
+ The following example shows the connection information for the File System managed connector action:
- ![Screenshot showing connection information for managed File System action.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
+ ![Screenshot showing connection information for File System managed connector action.](media/logic-apps-using-file-connector/file-system-connection-consumption.png)
- The following example shows the connection information for the ISE-based File System action:
+ The following example shows the connection information for the File System ISE-based connector action:
- ![Screenshot showing connection information for ISE-based File System action.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
+ ![Screenshot showing connection information for File System ISE-based connector action.](media/logic-apps-using-file-connector/file-system-connection-ise.png)
-1. After you provide the required information for your connection, select **Create**.
+1. When you're done, select **Create**.
Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
For connector-specific technical information, such as triggers, actions, and lim
For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox.
- ![Screenshot showing the "Create file" action, which creates a file on the file system server, based on a file uploaded to Dropbox.](media/logic-apps-using-file-connector/file-system-action-create-file.png)
+ ![Screenshot showing Consumption workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-consumption.png)
> [!TIP] >
- > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes.
+ > To add outputs from previous steps in the workflow, click inside the action's edit boxes.
> When the dynamic content list appears, select from the available outputs. 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
- ![Screenshot showing an action that sends email after a new file is created on the file system server.](media/logic-apps-using-file-connector/file-system-action-send-email.png)
+ ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-consumption.png)
+
+1. Save your logic app. Test your workflow by uploading a file to Dropbox.
+
+    If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+
+### [Standard](#tab/standard)
+
+#### Built-in connector action
+
+These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans.
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. Find and select the [File System action](/azure/logic-apps/connectors/built-in/reference/filesystem/#actions) that you want to use. This example continues with the action named **Create file**.
+
+ 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**.
+
+ Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
+
+ 1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **file system**.
+
+ 1. From the actions list, select the File System action named **Create file**.
+
+ ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and built-in connector "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-built-in-standard.png)
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+
+ The following example shows the connection information for the File System built-in connector action:
+
+ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/logic-apps-using-file-connector/action-file-system-connection-built-in-standard.png)
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action. For this example, follow these steps:
+
+   1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
+
+ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
+
+   1. Click inside the **File Content** parameter box. From the dynamic content list that appears, in the **When a file is created** section, select **File Content**.
+
+ ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-built-in-standard.png)
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-built-in-standard.png)
+
+1. Save your logic app. Test your workflow by uploading a file to Dropbox.
+
+    If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file.
+
+#### Managed connector action
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
+
+1. Find and select the [File System action](/connectors/filesystem/#actions) that you want to use. This example continues with the action named **Create file**.
+
+ 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**.
+
+ Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**.
+
+ 1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **file system**.
+
+ 1. From the actions list, select the File System action named **Create file**.
+
+ ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and managed connector "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-managed-standard.png)
+
+1. In the connection information box, provide the following information as required:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection name** | Yes | <*connection-name*> | The name to use for your connection |
+ | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. |
+ | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** |
+ | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** |
+ | **Password** | Yes | <*password*> | The password for the computer where you have your file system |
+ | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource |
+
+ The following example shows the connection information for the File System managed connector action:
+
+ ![Screenshot showing connection information for File System managed connector action.](media/logic-apps-using-file-connector/action-file-system-connection-managed-standard.png)
+
+ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action.
+
+1. Continue building your workflow.
+
+ 1. Provide the required information for your action. For this example, follow these steps:
+
+   1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder.
+
+ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**.
+
+   1. Click inside the **File Content** parameter box. From the dynamic content list that appears, in the **When a file is created** section, select **File Content**.
+
+ ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-built-in-standard.png)
+
+ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address.
+
+ ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-managed-standard.png)
1. Save your logic app. Test your workflow by uploading a file to Dropbox. If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file. ++ ## Next steps
-* Learn how to [connect to on-premises data](../logic-apps/logic-apps-gateway-connection.md)
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-image-processing-batch.md
Batch Endpoint can only deploy registered models so we need to register it. You
```python import os
- import requests
+ import urllib.request
from zipfile import ZipFile
- requests.get('https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip', allow_redirects=True)
+ response = urllib.request.urlretrieve('https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip', 'model.zip')
    os.makedirs("imagenet-classifier", exist_ok=True)
- with ZipFile(file, 'r') as zip:
+ with ZipFile(response[0], 'r') as zip:
model_path = zip.extractall(path="imagenet-classifier") ```
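The snippet above only downloads and extracts the model files. As a minimal sketch of the registration step that the surrounding text calls for (assuming an already authenticated `MLClient` named `ml_client`; the asset name and description are illustrative):

```python
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Register the extracted folder as a custom model asset (name is a placeholder)
model = ml_client.models.create_or_update(
    Model(
        name="imagenet-classifier",
        path="imagenet-classifier",
        type=AssetTypes.CUSTOM_MODEL,
        description="ImageNet classifier extracted in the step above",
    )
)
```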
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
In this article, you learn how to create a data asset in Azure Machine Learning.
The benefits of creating data assets are:
-* You can **share and reuse data** with other members of the team such that they do not need to remember file locations.
+* You can **share and reuse data** with other members of the team such that they don't need to remember file locations.
* You can **seamlessly access data** during model training (on any supported compute type) without worrying about connection strings or data paths.
When you create a data asset in Azure Machine Learning, you'll need to specify a
## Data asset types
+ - [**URIs**](#create-a-uri_folder-data-asset) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs: `uri_file` and `uri_folder`.
+ - [**MLTable**](#create-a-mltable-data-asset) - `MLTable` helps you abstract the schema definition for tabular data, so it's more suitable for complex or changing schemas or for use with AutoML. If you just want to create a data asset for a job, or you want to write your own parsing logic in Python, you can use `uri_file` or `uri_folder` (see the SDK sketch after this list).
The ideal scenarios to use `mltable` are: - The schema of your data is complex and/or changes frequently.
+ - You only need a subset of data (for example: a sample of rows or files, specific columns, etc.)
- AutoML jobs requiring tabular data. If your scenario doesn't fit the above, URIs are likely a more suitable type.
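As a minimal sketch (assuming the Azure ML Python SDK v2 and an authenticated `MLClient` named `ml_client`; the asset name, version, and path are placeholders), creating a `uri_folder` data asset might look like this:

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Register a local or cloud folder as a uri_folder data asset
my_data = Data(
    name="my-folder-data",            # placeholder asset name
    version="1",
    type=AssetTypes.URI_FOLDER,
    path="./sample-data",             # local folder, or an azureml://, abfss://, https:// path
    description="Example uri_folder data asset",
)
ml_client.data.create_or_update(my_data)
```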
To create a File data asset in the Azure Machine Learning studio, use the follow
- JSON Lines - Delta Lake
-Please find more details about what are the abilities we provide via `mltable` in [reference-yaml-mltable](reference-yaml-mltable.md).
+Find more details about the capabilities that `mltable` provides in [reference-yaml-mltable](reference-yaml-mltable.md).
-In this section, we show you how to create a data asset when the type is an `mltable`.
+In this section, we show you how to create a data asset when the type is a `mltable`.
### The MLTable file
The MLTable file is a file that provides the specification of the data's schema
> [!NOTE] > This file needs to be named exactly as `MLTable`.
-An *example* MLTable file is provided below:
+An *example* MLTable file for delimited files is provided below:
```yml type: mltable
transformations:
encoding: ascii header: all_files_same_headers ```+
+An *example* MLTable file for Delta Lake is provided below:
+```yml
+type: mltable
+
+paths:
+ - abfss://my_delta_files
+
+transformations:
+ - read_delta_lake:
+ timestamp_as_of: '2022-08-26T00:00:00Z'
+#timestamp_as_of: Timestamp to be specified for time-travel on the specific Delta Lake data.
+#version_as_of: Version to be specified for time-travel on the specific Delta Lake data.
+```
+
+For more transformations available in `mltable`, see [reference-yaml-mltable](reference-yaml-mltable.md).
++ > [!IMPORTANT] > We recommend co-locating the MLTable file with the underlying data in storage. For example: >
transformations:
> ``` > Co-locating the MLTable with the data ensures a **self-contained *artifact*** where all that is needed is stored in that one folder (`my_data`); regardless of whether that folder is stored on your local drive or in your cloud store or on a public http server. You should **not** specify *absolute paths* in the MLTable file. +
+### Create an MLTable artifact via Python SDK: from_*
+If you would like to create an MLTable object in memory via the Python SDK, you can use the from_* methods.
+The from_* methods don't materialize the data, but rather store it as a transformation in the MLTable definition.
+
+For example, you can use from_delta_lake() to create an in-memory MLTable artifact that reads Delta Lake data from the path `delta_table_path`.
+```python
+import mltable as mlt
+mltable = mlt.from_delta_lake(delta_table_path, timestamp_as_of="2021-01-01T00:00:00Z")
+df = mltable.to_pandas_dataframe()
+print(df.to_string())
+```
+Find more details about [MLTable Python functions here](/python/api/mltable/mltable).
++ In your Python code, you materialize the MLTable artifact into a Pandas dataframe using: ```python
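import mltable

# a minimal sketch; "./my_data" is a placeholder folder that contains the MLTable file
tbl = mltable.load(uri="./my_data")
df = tbl.to_pandas_dataframe()
```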
The `uri` parameter in `mltable.load()` should be a valid path to a local or clo
> [!NOTE] > You will need the `mltable` library installed in your Environment (`pip install mltable`).
-Below shows you how to create an `mltable` data asset. The `path` can be any of the supported path formats outlined above.
+The following shows you how to create a `mltable` data asset. The `path` can be any of the supported path formats outlined above.
# [Azure CLI](#tab/cli)
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
To find the internal IP addresses for the FQDNs in the VNet, use one of the foll
1. To get the ID of the private endpoint network interface, use the following command: ```azurecli
- az network private-endpoint show --endpoint-name <endpoint> --resource-group <resource-group> --query 'networkInterfaces[*].id' --output table
+ az network private-endpoint show --name <endpoint> --resource-group <resource-group> --query 'networkInterfaces[*].id' --output table
``` 1. To get the IP address and FQDN information, use the following command. Replace `<resource-id>` with the ID from the previous step:
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
Not all the vulnerabilities are exploitable, so you need to use your judgment wh
> [!IMPORTANT] > There's no guarantee that the same set of python dependencies will be materialized with an image rebuild or for a new environment with the same set of Python dependencies.
-## *Environment definition problems*
+## **Environment definition problems**
-### Environment name issues
-#### **"Curated prefix not allowed"**
+### *Environment name issues*
+### Curated prefix not allowed
Terminology: "Curated": environments Microsoft creates and maintains.
Terminology:
- To customize a curated environment, you must clone and rename the environment - For more information about proper curated environment usage, see [create and manage reusable environments](https://aka.ms/azureml/environment/create-and-manage-reusable-environments)
-#### **"Environment name is too long"**
+### Environment name is too long
- Environment names can be up to 255 characters in length - Consider renaming and shortening your environment name
-### Docker issues
+### *Docker issues*
To create a new environment, you must use one of the following approaches: 1. Base image - Provide base image name, repository from which to pull it, credentials if needed
To create a new environment, you must use one of the following approaches:
- The build context must contain at least a Dockerfile, but may contain other files as well
-#### **"Missing Docker definition"**
+### Missing Docker definition
- An environment has a `DockerSection` that must be populated with either a base image, base Dockerfile, or build context - This section configures settings related to the final Docker image built to the specifications of the environment and whether to use Docker containers to build the environment - See [DockerSection](https://aka.ms/azureml/environment/environment-docker-section)
-#### **"Missing Docker build context location"**
+### Missing Docker build context location
- If you're specifying a Docker build context as part of your environment build, you must provide the path of the build context directory - See [BuildContext](https://aka.ms/azureml/environment/build-context-class)
-#### **"Too many Docker options"**
+### Too many Docker options
Only one of the following options can be specified: *V1*
Only one of the following options can be specified:
- `build` - See [azure.ai.ml.entities.Environment](https://aka.ms/azureml/environment/environment-class-v2)
-#### **"Missing Docker option"**
+### Missing Docker option
*V1* - You must specify one of: base image, base Dockerfile, or build context *V2:* - You must specify one of: image or build context
-#### **"Container registry credentials missing either username or password"**
+### Container registry credentials missing either username or password
- To access the base image in the container registry specified, you must provide both a username and password. One is missing. - Note that providing credentials in this way is deprecated. For the current method of providing credentials, see the *secrets in base image registry* section.
-#### **"Multiple credentials for base image registry"**
+### Multiple credentials for base image registry
- When specifying credentials for a base image registry, you must specify only one set of credentials. - The following authentication types are currently supported: - Basic (username/password)
to use, and set the other credentials you won't use to `null`
- Specifying credentials in this way is deprecated. It's recommended that you use workspace connections. See *secrets in base image registry* below
-#### **"Secrets in base image registry"**
+### Secrets in base image registry
- If you specify a base image in your `DockerSection`, you must specify the registry address from which the image will be pulled, and credentials to authenticate to the registry, if needed. - Historically, credentials have been specified in the environment definition. However, this isn't secure and should be
avoided.
- Users should set credentials using workspace connections. For instructions on how to do this, see [set_connection](https://aka.ms/azureml/environment/set-connection-v1)
-#### **"Deprecated Docker attribute"**
+### Deprecated Docker attribute
- The following `DockerSection` attributes are deprecated: - `enabled` - `arguments`
do this, see [set_connection](https://aka.ms/azureml/environment/set-connection-
- Use [DockerConfiguration](https://aka.ms/azureml/environment/docker-configuration-class) instead - See [DockerSection deprecated variables](https://aka.ms/azureml/environment/docker-section-class)
-#### **"Dockerfile length over limit"**
+### Dockerfile length over limit
- The specified Dockerfile can't exceed the maximum Dockerfile size of 100KB - Consider shortening your Dockerfile to get it under this limit
-### Docker build context issues
-#### **"Missing Dockerfile path"**
+### *Docker build context issues*
+### Missing Dockerfile path
- In the Docker build context, a Dockerfile path must be specified - This is the path to the Dockerfile relative to the root of Docker build context directory - See [Build Context class](https://aka.ms/azureml/environment/build-context-class)
-#### **"Not allowed to specify attribute with Docker build context"**
+### Not allowed to specify attribute with Docker build context
- If a Docker build context is specified, then the following items can't also be specified in the environment definition: - Environment variables
environment definition:
- R - Spark
-#### **"Location type not supported/Unknown location type"**
+### Location type not supported/Unknown location type
- The following are accepted location types: - Git - Git URLs can be provided to AzureML, but images can't yet be built using them. Use a storage
environment definition:
- [How to use git repository as build context](https://aka.ms/azureml/environment/git-repo-as-build-context) - Storage account
-#### **"Invalid location"**
+### Invalid location
- The specified location of the Docker build context is invalid - If the build context is stored in a git repository, the path of the build context must be specified as a git URL - If the build context is stored in a storage account, the path of the build context must be specified as - `https://storage-account.blob.core.windows.net/container/path/`
-### Base image issues
-#### **"Base image is deprecated"**
+### *Base image issues*
+### Base image is deprecated
- The following base images are deprecated: - `azureml/base` - `azureml/base-gpu`
environment definition:
- Deprecated images are also at risk for vulnerabilities since they're no longer updated or maintained. It's best to use newer, non-deprecated versions.
-#### **"No tag or digest"**
+### No tag or digest
- For the environment to be reproducible, one of the following must be included on a provided base image: - Version tag - Digest - See [image with immutable identifier](https://aka.ms/azureml/environment/pull-image-by-digest)
-### Environment variable issues
-#### **"Misplaced runtime variables"**
+### *Environment variable issues*
+### Misplaced runtime variables
- An environment definition shouldn't contain runtime variables - Use the `environment_variables` attribute on the [RunConfiguration object](https://aka.ms/azureml/environment/environment-variables-on-run-config) instead
-### Python issues
-#### **"Python section missing"**
+### *Python issues*
+### Python section missing
*V1* - An environment definition must have a Python section
It's best to use newer, non-deprecated versions.
``` - See [PythonSection class](https://aka.ms/azureml/environment/environment-python-section)
-#### **"Python version missing"**
+### Python version missing
*V1* - A Python version must be specified in the environment definition
conda_dep.add_conda_package("python==3.8")
``` - See [Add conda package](https://aka.ms/azureml/environment/add-conda-package-v1)
-#### **"Multiple Python versions"**
+### Multiple Python versions
- Only one Python version can be specified in the environment definition
-#### **"Python version not supported"**
+### Python version not supported
- The Python version provided in the environment definition isn't supported - Consider using a newer version of Python - See [Python versions](https://aka.ms/azureml/environment/python-versions) and [Python end-of-life dates](https://aka.ms/azureml/environment/python-end-of-life)
-#### **"Python version not recommended"**
+### Python version not recommended
- The Python version used in the environment definition is deprecated, and its use should be avoided - Consider using a newer version of Python as the specified version will eventually unsupported - See [Python versions](https://aka.ms/azureml/environment/python-versions) and [Python end-of-life dates](https://aka.ms/azureml/environment/python-end-of-life)
-#### **"Failed to validate Python version"**
+### Failed to validate Python version
- The provided Python version may have been formatted improperly or specified with incorrect syntax - See [conda package pinning](https://aka.ms/azureml/environment/how-to-pin-conda-packages)
-### Conda issues
-#### **"Missing conda dependencies"**
+### *Conda issues*
+### Missing conda dependencies
- The [environment definition](https://aka.ms/azureml/environment/environment-class-v1) has a [PythonSection](https://aka.ms/azureml/environment/environment-python-section) that contains a `user_managed_dependencies` bool and a `conda_dependencies` object
The environment is built once and is reused as long as the conda dependencies re
- See [CondaDependencies class](https://aka.ms/azureml/environment/conda-dependencies-class) - See [how to set a conda specification on the environment definition](https://aka.ms/azureml/environment/set-conda-spec-on-environment-definition)
-#### **"Invalid conda dependencies"**
+### Invalid conda dependencies
- Make sure the conda dependencies specified in your conda specification are formatted correctly - See [how to create a conda file manually](https://aka.ms/azureml/environment/how-to-create-conda-file)
-#### **"Missing conda channels"**
+### Missing conda channels
- If no conda channels are specified, conda will use defaults that might change - For reproducibility of your environment, specify channels from which to pull dependencies - See [how to manage conda channels](https://aka.ms/azureml/environment/managing-conda-channels) for more information
-#### **"Base conda environment not recommended"**
+### Base conda environment not recommended
- Partial environment updates can lead to dependency conflicts and/or unexpected runtime errors, so the use of base conda environments isn't recommended - Instead, specify all packages needed for your environment in the `conda_dependencies` section of your
environment definition
- See [CondaDependencies class](https://aka.ms/azureml/environment/conda-dependencies-class) - If you're using V2, add a conda specification to your [build context](https://aka.ms/azureml/environment/environment-build-context)
-#### **"Unpinned dependencies"**
+### Unpinned dependencies
- For reproducibility, specify dependency versions for the packages in your conda specification - If versions aren't specified, there's a chance that the conda or pip package resolver will choose a different version of a package on subsequent builds of an environment. This can lead to unexpected errors and incorrect behavior - See [conda package pinning](https://aka.ms/azureml/environment/how-to-pin-conda-packages)
-### Pip issues
-#### **"Pip not specified"**
+### *Pip issues*
+### Pip not specified
- For reproducibility, pip should be specified as a dependency in your conda specification, and it should be pinned - See [how to set a conda dependency](https://aka.ms/azureml/environment/add-conda-package-v1)
-#### **"Pip not pinned"**
+### Pip not pinned
- For reproducibility, specify the pip resolver version in your conda dependencies - If the pip version isn't specified, there's a chance different versions of pip will be used on subsequent image builds on the environment
image builds on the environment
- See [conda package pinning](https://aka.ms/azureml/environment/how-to-pin-conda-packages) - See [how to set pip as a dependency](https://aka.ms/azureml/environment/add-conda-package-v1)
-### Deprecated environment property issues
-#### **"R section is deprecated"**
+### *Deprecated environment property issues*
+### R section is deprecated
- The Azure Machine Learning SDK for R will be deprecated by the end of 2021 to make way for an improved R training and deployment experience using Azure Machine Learning CLI 2.0 - See the [samples repository](https://aka.ms/azureml/environment/train-r-models-cli-v2) to get started with the edition CLI 2.0.
-## *Image build problems*
+## **Image build problems**
-### Miscellaneous issues
-#### **"Build log unavailable"**
+### *Miscellaneous issues*
+### Build log unavailable
- Build logs are optional and not available for all environments since the image might already exist
-#### **"ACR unreachable"**
-- There was a failure communicating with the workspace's container registry-- If your scenario involves a VNet, you may need to build images using a compute cluster-- See [secure a workspace using virtual networks](https://aka.ms/azureml/environment/acr-private-endpoint)
+### ACR unreachable
+<!--issueDescription-->
+This can happen when the workspace's associated Azure Container Registry (ACR) resource can't be accessed.
-### Docker pull issues
-#### **"Failed to pull Docker image"**
+**Potential causes:**
+* Workspace's ACR is behind a virtual network (VNet) (private endpoint or service endpoint), and no compute cluster is used to build images.
+* Workspace's ACR is behind a virtual network (private endpoint or service endpoint), and the compute cluster used for building images has no access to the workspace's ACR.
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs, because the job implicitly builds the environment in the first step.
+* Pipeline job failures
+* Model deployment failures
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK azureml V1*
+
+Update the workspace image build compute property using SDK:
+
+```python
+from azureml.core import Workspace
+ws = Workspace.from_config()
+ws.update(image_build_compute = 'mycomputecluster')
+```
+
+*Applies to: Azure CLI extensions V1 & V2*
+
+Update the workspace image build compute property using Azure CLI:
+
+```azurecli
+az ml workspace update --name myworkspace --resource-group myresourcegroup --image-build-compute mycomputecluster
+```
+
+> [!NOTE]
+> * Only Azure Machine Learning compute clusters are supported. Compute instances, Azure Kubernetes Service (AKS), or other compute types are not supported for image build compute.
+> * Make sure the compute cluster's VNet that's used for the image build compute has access to the workspace's ACR.
+> * Make sure the compute cluster is CPU based.
+
+**Resources**
+* [Enable Azure Container Registry (ACR)](https://aka.ms/azureml/environment/acr-private-endpoint)
+* [How To Use Environments](https://aka.ms/azureml/environment/how-to-use-environments)
+
+### *Docker pull issues*
+### Failed to pull Docker image
- Possible issues: - The path name to the container registry might not be resolving correctly - For a registry `my-registry.io` and image `test/image` with tag `3.2`, a valid image path would be `my-registry.io/test/image:3.2`
experience using Azure Machine Learning CLI 2.0
- You haven't provided credentials for a private registry you're trying to pull the image from, or the provided credentials are incorrect - Set [workspace connections](https://aka.ms/azureml/environment/set-connection-v1) for the container registry if needed
-### Conda issues during build
-#### **"Bad spec"**
+### *Conda issues during build*
+### Bad spec
- Failed to create or update the conda environment due to an invalid package specification - See [package match specifications](https://aka.ms/azureml/environment/conda-package-match-specifications) - See [how to create a conda file manually](https://aka.ms/azureml/environment/how-to-create-conda-file)
-#### **"Communications error"**
+### Communications error
- Failed to communicate with a conda channel or package repository - Retrying the image build may work if the issue is transient
-#### **"Compile error"**
+### Compile error
- Failed to build a package required for the conda environment - Another version of the failing package may work. If it doesn't, review the image build log, hunt for a solution, and update the environment definition.
-#### **"Missing command"**
+### Missing command
- Failed to build a package required for the conda environment due to a missing command - Identify the missing command from the image build log, determine how to add it to your image, and then update the environment definition.
-#### **"Conda timeout"**
+### Conda timeout
- Failed to create or update the conda environment because it took too long - Consider removing unnecessary packages and pinning specific versions - See [understanding and improving conda's performance](https://aka.ms/azureml/environment/improve-conda-performance)
-#### **"Out of memory"**
+### Out of memory
- Failed to create or update the conda environment due to insufficient memory - Consider removing unnecessary packages and pinning specific versions - See [understanding and improving conda's performance](https://aka.ms/azureml/environment/improve-conda-performance)
-#### **"Package not found"**
+### Package not found
- One or more packages specified in your conda specification couldn't be found - Ensure that all packages you've specified exist, and can be found using the channels you've specified in your conda specification - If you don't specify conda channels, conda will use defaults that are subject to change - For reproducibility, specify channels from which to pull dependencies - See [managing channels](https://aka.ms/azureml/environment/managing-conda-channels)
-#### **"Missing Python module"**
+### Missing Python module
- Check the Python modules specified in your environment definition and correct any misspellings or incorrect pinned versions.
-#### **"No matching distribution"**
+### No matching distribution
- Failed to find Python package matching a specified distribution - Search for the distribution you're looking for and ensure it exists: [pypi](https://aka.ms/azureml/environment/pypi)
-#### **"Cannot build mpi4py"**
+### Cannot build mpi4py
- Failed to build wheel for mpi4py - Review and update your build environment or use a different installation method - See [mpi4py installation](https://aka.ms/azureml/environment/install-mpi4py)
-#### **"Interactive auth was attempted"**
+### Interactive auth was attempted
- Failed to create or update the conda environment because pip attempted interactive authentication - Instead, provide authentication via [workspace connection](https://aka.ms/azureml/environment/set-connection-v1)
-#### **"Forbidden blob"**
+### Forbidden blob
- Failed to create or update the conda environment because a blob contained in the associated storage account was inaccessible - Either open up permissions on the blob or add/replace the SAS token in the URL
-#### **"Horovod build"**
+### Horovod build
- Failed to create or update the conda environment because horovod failed to build - See [horovod installation](https://aka.ms/azureml/environment/install-horovod)
-#### **"Conda command not found"**
+### Conda command not found
- Failed to create or update the conda environment because the conda command is missing - For system-managed environments, conda should be in the path in order to create the user's environment from the provided conda specification
-#### **"Incompatible Python version"**
+### Incompatible Python version
- Failed to create or update the conda environment because a package specified in the conda environment isn't compatible with the specified python version - Update the Python version or use a different version of the package
-#### **"Conda bare redirection"**
+### Conda bare redirection
- Failed to create or update the conda environment because a package was specified on the command line using ">" or "<" without using quotes. Consider adding quotes around the package specification
-### Pip issues during build
-#### **"Failed to install packages"**
+### *Pip issues during build*
+### Failed to install packages
- Failed to install Python packages - Review the image build log for more information on this error
-#### **"Cannot uninstall package"**
+### Cannot uninstall package
- Pip failed to uninstall a Python package that was installed via the OS's package manager - Consider creating a separate environment using conda instead
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
When you access online endpoints with REST requests, the returned status codes a
| 404 | Not found | The endpoint doesn't have any valid deployment with positive weight. | | 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config.| | 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
-| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow maximum 2 * `max_concurrent_requests_per_instance` * `instance_count` / `request_process_time (in seconds)` requests per second. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`, respectively. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. Apart from enable auto-scaling, you could also increase the number of instances by using the below [code](#how-to-calculate-instance-count). |
+| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow a maximum of 2 * `max_concurrent_requests_per_instance` * `instance_count` requests in parallel at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`, respectively. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. Apart from enabling auto-scaling, you could also increase the number of instances by using the [code below](#how-to-calculate-instance-count). |
| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints.| | 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
To increase the number of instances, you could calculate the required replicas f
from math import ceil # target requests per second target_rps = 20
-# time to process the request (in seconds)
+# time to process the request (in seconds, choose appropriate percentile)
request_process_time = 10 # Maximum concurrent requests per instance max_concurrent_requests_per_instance = 1
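# a minimal sketch of the remaining calculation (the values above are illustrative)
concurrent_requests = target_rps * request_process_time  # requests in flight at steady state
# round up to get the number of instances needed to serve that load
instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance)
print(f"Required instance count: {instance_count}")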
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-ad-authentication.md
description: Learn about the concepts of Azure Active Directory for authenticati
Previously updated : 10/12/2022 Last updated : 11/03/2022
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview. Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Azure AD.
Azure Active Directory Authentication for Flexible Server is built using our exp
The following table provides a list of high-level Azure AD features and capabilities comparisons between Single Server and Flexible Server | **Feature / Capability** | **Single Server** | **Flexible Server** |
-| - | - | - |
-| Multiple Azure AD Admins | No | Yes|
-| Managed Identities (System & User assigned) | Partial | Full|
+| --- | --- | --- |
+| Multiple Azure AD Admins | No | Yes |
+| Managed Identities (System & User assigned) | Partial | Full |
| Invited User Support | No | Yes |
-| Disable Password Authentication | Not Available | Available|
-| Service Principal can act as group member| No | Yes |
-| Audit Azure AD Logins | No | Yes |
+| Disable Password Authentication | Not Available | Available |
+| Service Principal can act as group member | No | Yes |
+| Audit Azure AD Logins | No | Yes |
| PG bouncer support | No | Planned for GA | ## How Azure AD Works In Flexible Server
The following high-level diagram summarizes how authentication works using Azure
## Manage PostgreSQL Access For AD Principals
-When Azure AD authentication is enabled and Azure AD principal is added as an Azure AD administrator the account gets the same privileges as the original PostgreSQL administrator. Only Azure AD administrator can manage other Azure AD enabled roles on the server using Azure portal or Database API. The Azure AD administrator log in can be an Azure AD user, Azure AD group, Service Principal or Managed Identity. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the PostgreSQL server. Multiple Azure AD administrators can be configured at any time and you can optionally disable password authentication to an Azure Database for PostgreSQL Flexible Server for better auditing and compliance needs.
+When Azure AD authentication is enabled and an Azure AD principal is added as an Azure AD administrator, the account gets the same privileges as the original PostgreSQL administrator. Only the Azure AD administrator can manage other Azure AD enabled roles on the server using the Azure portal or the Database API. The Azure AD administrator sign-in can be an Azure AD user, Azure AD group, Service Principal or Managed Identity. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the PostgreSQL server. Multiple Azure AD administrators can be configured at any time, and you can optionally disable password authentication to an Azure Database for PostgreSQL Flexible Server for better auditing and compliance needs.
![admin structure][2]
Once you've authenticated against the Active Directory, you then retrieve a toke
> [!NOTE] > Use these steps to configure Azure AD with Azure Database for PostgreSQL Flexible Server [Configure and sign in with Azure AD for Azure Database for PostgreSQL Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
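As a rough illustration of the token flow described above (a minimal sketch, not the full procedure from the linked how-to; the server name and user are placeholders), you can acquire a token with the `azure-identity` library and pass it as the password when connecting:

```python
from azure.identity import DefaultAzureCredential
import psycopg2

credential = DefaultAzureCredential()
# Scope used for Azure Database for PostgreSQL Azure AD access tokens
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default")

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",   # placeholder server name
    dbname="postgres",
    user="<azure-ad-user@contoso.com>",                  # the Azure AD principal name
    password=token.token,                                # access token used as the password
    sslmode="require",
)
```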
-## Additional considerations
+## Other considerations
- Multiple Azure AD principals (a user, group, service principal or managed identity) can be configured as Azure AD Administrator for an Azure Database for PostgreSQL server at any time. - Azure AD groups must be a mail enabled security group for authentication to work.-- In preview , `Azure Active Directory Authentication only` is supported post server creation, this option is currently disabled during server creation experience
+- In preview, `Azure Active Directory Authentication only` is supported only after server creation; this option is currently disabled during the server creation experience.
- Only an Azure AD administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users. - If an Azure AD principal is deleted from Azure AD, it still remains as PostgreSQL role, but it will no longer be able to acquire new access token. In this case, although the matching role still exists in the database it won't be able to authenticate to the server. Database administrators need to transfer ownership and drop roles manually.
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
Title: Data encryption with customer-managed key - Azure Database for PostgreSQL - Flexible server description: Azure Database for PostgreSQL Flexible server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.+++ Last updated : 11/03/2022 --- Previously updated : 10/12/2022 # Azure Database for PostgreSQL - Flexible Server Data Encryption with a Customer-managed Key Preview
Data encryption with customer-managed keys for Azure Database for PostgreSQL - F
**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deleting the KEK.
-The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md).
+The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption at rest](../../security/fundamentals/encryption-atrest.md).
## How data encryption with a customer-managed key work
The DEKs, encrypted with the KEKs, are stored separately. Only an entity with ac
For a PostgreSQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server: -- **get**: For retrieving the public part and properties of the key in the key Vault.
+- **get**: For retrieving the public part and properties of the key in the Key Vault.
-- **list**: For listing\iterating through keys in the key Vault.
+- **list**: For listing\iterating through keys in the Key Vault.
- **wrapKey**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for PostgreSQL.
Here are recommendations for configuring a customer-managed key:
- Keep a copy of the customer-managed key in a secure place, or escrow it to the escrow service. -- If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault.
+- If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault.
### Accidental key access revocation from Key Vault
To monitor the database state, and to enable alerting for the loss of transparen
## Restore and replicate with a customer's managed key in Key Vault
-After Azure Database for PostgreSQL - Flexible Server is encrypted with a customer's managed key stored in Key Vault, any newly created server copy is also encrypted. You can make this new copy through a [PITR restore](concepts-backup-restore.md) operation or read replicas.
+After Azure Database for PostgreSQL - Flexible Server is encrypted with a customer's managed key stored in Key Vault, any newly created server copy is also encrypted. You can make this new copy through a [PITR restore](concepts-backup-restore.md) operation or read replicas.
-> [!NOTE]
+> [!NOTE]
> At this time we don't support revoking the original encryption key after restoring CMK enabled server to another server Avoid issues while setting up customer-managed data encryption during restore or read replica creation by following these steps on the primary and restored/replica servers:
Prerequisites:
Follow the steps below to enable CMK while creating Postgres Flexible Server.
-1. Navigate to Azure Database for PostgreSQL - Flexible Server create blade via Azure portal
+1. Navigate to Azure Database for PostgreSQL - Flexible Server create pane via Azure portal
-2. Provide required information on Basics and Networking tabs
+1. Provide required information on Basics and Networking tabs
-3. Navigate to Security(preview) tab. On the screen, provide Azure Active Directory (Azure AD) identity that has access to the Key Vault and Key in Key Vault in the same region where you're creating this server
+1. Navigate to Security(preview) tab. On the screen, provide Azure Active Directory (Azure AD) identity that has access to the Key Vault and Key in Key Vault in the same region where you're creating this server
-4. On Review Summary tab, make sure that you provided correct information in Security section and press Create button
+1. On Review Summary tab, make sure that you provided correct information in Security section and press Create button
-5. Once it's finished, you should be able to navigate to Data Encryption (preview) screen for the server and update identity or key if necessary
+1. Once it's finished, you should be able to navigate to Data Encryption (preview) screen for the server and update identity or key if necessary
## Update Customer Managed Key on the CMK enabled Flexible Server
Prerequisites:
- Azure Active Directory (Azure AD) user-managed identity in region where Postgres Flex Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity. -- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
+- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
Follow the steps below to update the CMK on a CMK enabled Flexible Server: 1. Navigate to your Azure Database for PostgreSQL - Flexible Server instance via the Azure portal.
-2. Navigate to Data Encryption (preview) screen under Security tab
+1. Navigate to Data Encryption (preview) screen under Security tab
-3. Select different identity to connect to Azure Key Vault, remembering that this identity needs to have proper access rights to the Key Vault
+1. Select a different identity to connect to Azure Key Vault. Remember that this identity needs proper access rights to the Key Vault (see the example after these steps).
-4. Select different key by choosing subscription, Key Vault and key from dropdowns provided.
+1. Select different key by choosing subscription, Key Vault and key from dropdowns provided.
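The identity selected above needs permission to use the key. As a sketch of granting those rights on the Key Vault side with an access policy, the following Azure CLI command is one option; the vault name and the identity's object ID are hypothetical placeholders, and get/list/wrapKey/unwrapKey are the key permissions commonly needed for data-encryption scenarios.

```bash
# Allow the user-assigned identity to read the key and wrap/unwrap data encryption keys
az keyvault set-policy \
  --name contoso-kv \
  --object-id <identity-principal-object-id> \
  --key-permissions get list wrapKey unwrapKey
```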
## Limitations
The following are limitations for configuring the customer-managed key in Flexib
The following are other limitations for the public preview of configuring the customer-managed key that we expect to remove at the General Availability of this feature: -- No support for Geo backup enabled servers
+- No support for Geo backup enabled servers
- **No support for revoking key after restoring CMK enabled server to another server** -- No support for Azure HSM Key Vault
+- No support for Azure HSM Key Vault
- No CLI or PowerShell support ## Next steps -- [Azure Active Directory](../../active-directory-domain-services/overview.md)
+- [Azure Active Directory](../../active-directory-domain-services/overview.md)
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
description: Learn about how to set up Azure Active Directory (Azure AD) for aut
Previously updated : 10/12/2022 Last updated : 11/04/2022
-# Use Azure Active Directory for authentication with PostgreSQL Flexible Server Preview
+# Azure Active Directory for authentication with PostgreSQL Flexible Server Preview
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
-This article walks you through the steps how to configure Azure Active Directory (Azure AD) access with Azure Database for PostgreSQL Flexible Server, and how to connect using an Azure AD token.
-
-## Enable Azure AD Authentication
+In this article, you'll configure Azure Active Directory (Azure AD) access with Azure Database for PostgreSQL Flexible Server and learn how to connect using an Azure AD token.
Azure Active Directory Authentication for Azure Database for PostgreSQL Flexible Server can be configured either during server provisioning or later.
-Only Azure AD administrator users can create/enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations, as it has elevated user permissions (for example, CREATEDB). You can now have multiple Azure AD admin users with flexible server and Azure AD admin user can be either user, group or a service principal.
+Only Azure AD administrator users can create/enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations, as it has elevated user permissions (for example, CREATEDB). You can now have multiple Azure AD admin users with flexible server, and Azure AD admin users can be either a user, a group, or a service principal.
+
+## Install AzureAD PowerShell: AzureAD Module
-## Prerequisites
+### Prerequisites
-The below three steps are mandatory to use Azure Active Directory Authentication with Azure Database for PostgreSQL Flexible Server and must be run by `tenant administrator`or a user with tenant admin rights and this is one time activity per tenant.
+- An Azure account with an active subscription. If you don't already have one, [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+ - One of the following roles: Global Administrator, Privileged Role Administrator, Tenant Administrator.
-Install AzureAD PowerShell: AzureAD Module
+The following steps are mandatory to use Azure Active Directory authentication with Azure Database for PostgreSQL Flexible Server.
-### Step 1: Connect to user tenant.
+### 1 - Connect to the user tenant
```powershell Connect-AzureAD -TenantId <customer tenant id> ```
-### Step 2: Grant Flexible Server Service Principal read access to customer tenant
+
+### 2 - Grant Flexible Server Service Principal read access to customer tenant
```powershell New-AzureADServicePrincipal -AppId 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9 ```
-This command will grant Azure Database for PostgreSQL Flexible Server Service Principal read access to customer tenant to request Graph API tokens for Azure AD validation tasks. AppID (5657e26c-cc92-45d9-bc47-9da6cfdb4ed9) in the above command is the AppID for Azure Database for PostgreSQL Flexible Server Service.
-### Step 3: Networking Requirements
+This command grants the Azure Database for PostgreSQL Flexible Server service principal read access to the customer tenant so it can request Graph API tokens for Azure AD validation tasks. The AppID (5657e26c-cc92-45d9-bc47-9da6cfdb4ed9) in the above command is the AppID for the Azure Database for PostgreSQL Flexible Server service.
+
+### 3 - Networking Requirements
-Azure Active Directory is a multi-tenant application and requires outbound connectivity to perform certain operations like adding Azure AD admin groups and additional networking rules are required for Azure AD connectivity to work depending upon your network topology.
+Azure Active Directory is a multi-tenant application and requires outbound connectivity to perform certain operations like adding Azure AD admin groups. Additionally, networking rules are required for Azure AD connectivity to work depending upon your network topology.
`Public access (allowed IP addresses)`
-No additional networking rules are required.
+No extra networking rules are required.
`Private access (VNet Integration)`
-* An outbound NSG rule to allow virtual network traffic to reach AzureActiveDirectory service tag only.
+- An outbound NSG rule to allow virtual network traffic to reach AzureActiveDirectory service tag only.
-* Optionally, if you're using a proxy then add a new firewall rule to allow http/s traffic to reach AzureActiveDirectory service tag only.
+- Optionally, if you're using a proxy, then add a new firewall rule to allow http/s traffic to reach the AzureActiveDirectory service tag only (see the example rule after this list).
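As a rough sketch of the outbound NSG rule described above, the following Azure CLI command targets the AzureActiveDirectory service tag; the resource group, NSG name, rule name, and priority are hypothetical placeholders.

```bash
# Outbound rule that lets subnet traffic reach Azure AD endpoints only
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myPostgresNsg \
  --name AllowAzureADOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureActiveDirectory \
  --destination-port-ranges 443
```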
-Complete the above prerequisites steps before adding Azure AD administrator to your server. To set the Azure AD admin during server provisioning, follow the below steps.
+Complete the above prerequisite steps before adding an Azure AD administrator to your server. To set the Azure AD admin during server provisioning, follow the below steps.
-1. In the Azure portal, during server provisioning select either `PostgreSQL and Azure Active Directory authentication` or `Azure Active Directory authentication only` as authentication method.
+1. In the Azure portal, during server provisioning, select either `PostgreSQL and Azure Active Directory authentication` or `Azure Active Directory authentication only` as the authentication method.
1. Set Azure AD Admin using `set admin` tab and select a valid Azure AD user/ group /service principal/Managed Identity in the customer tenant to be Azure AD administrator
-1. You can also optionally add local postgreSQL admin account if you prefer `PostgreSQL and Azure Active Directory authentication` method.
+1. You can also optionally add a local PostgreSQL admin account if you prefer `PostgreSQL and Azure Active Directory authentication` method.
-Note only one Azure admin user can be added during server provisioning and you can add multiple Azure AD admin users after server is created.
+Note that only one Azure AD admin user can be added during server provisioning; you can add multiple Azure AD admin users after the server is created.
-![set-azure-ad-administrator][3]
+![set-Azure-ad-administrator][3]
To set the Azure AD administrator after server creation, follow the below steps 1. In the Azure portal, select the instance of Azure Database for PostgreSQL Flexible Server that you want to enable for Azure AD.
-1. Under Security, select Authentication and choose either`PostgreSQL and Azure Active Directory authentication` or `Azure Active Directory authentication only` as authentication method based upon your requirements.
+1. Under Security, select Authentication and choose either `PostgreSQL and Azure Active Directory authentication` or `Azure Active Directory authentication only` as the authentication method based on your requirements.
-![set azure ad administrator][2]
+![set Azure ad administrator][2]
+
+1. Select `Add Azure AD Admins` and select a valid Azure AD user/group/service principal/Managed Identity in the customer tenant to be an Azure AD administrator.
-1. Select `Add Azure AD Admins` and select a valid Azure AD user / group /service principal/Managed Identity in the customer tenant to be Azure AD administrator.
1. Select Save, > [!IMPORTANT]
The following high-level diagram summarizes the workflow of using Azure AD authe
![authentication flow][1]
-We've designed the Azure AD integration to work with common PostgreSQL tools like psql, which aren't Azure AD aware and only support specifying username and password when connecting to PostgreSQL. We pass the Azure AD token as the password as shown in the picture above.
+We've designed the Azure AD integration to work with standard PostgreSQL tools like psql, which aren't Azure AD aware and only support specifying username and password when connecting to PostgreSQL. We pass the Azure AD token as the password, as shown in the picture above.
We currently have tested the following clients: - psql commandline (utilize the PGPASSWORD variable to pass the token, see step 3 for more information) - Azure Data Studio (using the PostgreSQL extension) - Other libpq based clients (for example, common application frameworks and ORMs)-- PgAdmin (uncheck connect now at server creation. See step 4 for more information)
+- PgAdmin (uncheck the **connect now** option at server creation).
+  - For more information, see step 4.
+
+The following steps describe how a user or application authenticates with Azure AD:
+
+## Azure CLI
-These are the steps that a user/application will need to do authenticate with Azure AD described below:
+You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine.
-### CLI Prerequisites
+### Prerequisites
-You can follow along in Azure Cloud Shell, an Azure VM, or on your local machine. Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
+Make sure you have the [Azure CLI installed](/cli/azure/install-azure-cli).
## Authenticate with Azure AD as a Flexible Server user
-### Step 1: Log in to the user's Azure subscription
+### 1 - Sign in to the user's Azure subscription
Start by authenticating with Azure AD using the Azure CLI tool. This step isn't required in Azure Cloud Shell.
Start by authenticating with Azure AD using the Azure CLI tool. This step isn't
az login ```
-The command will launch a browser window to the Azure AD authentication page. It requires you to give your Azure AD user ID and the password.
+The command launches a browser window to the Azure AD authentication page. It requires you to give your Azure AD user ID and password.
-### Step 2: Retrieve Azure AD access token
+### 2 - Retrieve Azure AD access token
Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 1 to access Azure Database for PostgreSQL.
Example (for Public Cloud):
az account get-access-token --resource https://ossrdbms-aad.database.windows.net ```
-The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
+The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using:
```azurecli-interactive az cloud show
For Azure CLI version 2.0.71 and later, the command can be specified in the foll
az account get-access-token --resource-type oss-rdbms ```
-After authentication is successful, Azure AD will return an access token:
+After authentication is successful, Azure AD returns an access token:
```json {
After authentication is successful, Azure AD will return an access token:
} ```
-The token is a Base 64 string that encodes all the information about the authenticated user, and which is targeted to the Azure Database for PostgreSQL service.
+The token is a Base 64 string that encodes all the information about the authenticated user and is targeted to the Azure Database for PostgreSQL service.
-### Step 3: Use token as password for logging in with client psql
+### 3 - Use token as password for logging in with client psql
-When connecting you need to use the access token as the PostgreSQL user password.
+When connecting, use the access token as the PostgreSQL user password.
-While using the `psql` command line client, the access token needs to be passed through the `PGPASSWORD` environment variable, since the access token exceeds the password length that `psql` can accept directly:
+While using the `psql` command line client, the access token needs to be passed through the `PGPASSWORD` environment variable since the access token exceeds the password length that `psql` can accept directly:
Windows Example:
Windows Example:
set PGPASSWORD=<copy/pasted TOKEN value from step 2> ```
-```PowerShell
+```powershell
$env:PGPASSWORD='<copy/pasted TOKEN value from step 2>' ```
Now you can initiate a connection with Azure Database for PostgreSQL like you no
```shell psql "host=mydb.postgres... user=user@tenant.onmicrosoft.com dbname=postgres sslmode=require" ```
-### Step 4: Use token as a password for logging in with PgAdmin
+
+### 4 - Use token as a password for logging in with PgAdmin
To connect using an Azure AD token with pgAdmin, you need to follow the next steps:
-1. Uncheck the connect now option at server creation.
+1. Uncheck the **connect now** option at server creation.
1. Enter your server details in the connection tab and save. 1. From the browser menu, select connect to the Azure Database for PostgreSQL Flexible server 1. Enter the AD token password when prompted.
-Important considerations when connecting:
+Essential considerations when connecting:
-* `user@tenant.onmicrosoft.com` is the name of the Azure AD user
-* Make sure to use the exact way the Azure user is spelled - as the Azure AD user and group names are case sensitive.
+* `user@tenant.onmicrosoft.com` is the name of the Azure AD user
+* Make sure to use the exact way the Azure user is spelled - as the Azure AD user and group names are case-sensitive.
* If the name contains spaces, use `\` before each space to escape it.
-* The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token just before initiating the login to Azure Database for PostgreSQL.
+* The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
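As a minimal sketch of that flow, the following commands fetch a fresh token and pass it straight to `psql`; the server name and user are hypothetical placeholders.

```bash
# Get a fresh Azure AD token (valid 5-60 minutes) right before connecting
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)

psql "host=mydemoserver.postgres.database.azure.com user=user@tenant.onmicrosoft.com dbname=postgres sslmode=require"
```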
You're now authenticated to your Azure Database for PostgreSQL server using Azure AD authentication. ## Authenticate with Azure AD as a group member
-### Step 1: Create Azure AD groups in Azure Database for PostgreSQL Flexible Server
+### 1 - Create Azure AD groups in Azure Database for PostgreSQL Flexible Server
-To enable an Azure AD group for access to your database, use the same mechanism as for users, but instead specify the group name:
+To enable an Azure AD group for access to your database, use the same mechanism as for users, but specify the group name instead:
Example:
Example:
select * from pgaadauth_create_principal('Prod DB Readonly', false, false); ```
-When logging in, members of the group will use their personal access tokens, but sign with the group name specified as the username.
+When logging in, group members use their personal access tokens but sign in with the group name specified as the username.
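For example, assuming the group created above ('Prod DB Readonly') and a hypothetical server name, a member would connect like this, using their own token as the password:

```bash
# The member's personal token goes in PGPASSWORD; the group name is used as the role
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)

psql "host=mydemoserver.postgres.database.azure.com user='Prod DB Readonly' dbname=postgres sslmode=require"
```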
> [!NOTE] > PostgreSQL Flexible Servers supports Managed Identities as group members.
-### Step 2: Log in to the user's Azure Subscription
+### 2 - Sign in to the user's Azure Subscription
-Authenticate with Azure AD using the Azure CLI tool. This step isn't required in Azure Cloud Shell. The user needs to be member of the Azure AD group.
+Authenticate with Azure AD using the Azure CLI tool. This step isn't required in Azure Cloud Shell. The user needs to be a member of the Azure AD group.
``` az login ```
-### Step 3: Retrieve Azure AD access token
+### 3 - Retrieve Azure AD access token
Invoke the Azure CLI tool to acquire an access token for the Azure AD authenticated user from step 2 to access Azure Database for PostgreSQL.
For Azure CLI version 2.0.71 and later, the command can be specified in the foll
az account get-access-token --resource-type oss-rdbms ```
-After authentication is successful, Azure AD will return an access token:
+After authentication is successful, Azure AD returns an access token:
```json {
After authentication is successful, Azure AD will return an access token:
} ```
-### Step 4: Use token as password for logging in with psql or PgAdmin (see above steps for user connection)
+### 4 - Use token as password for logging in with psql or PgAdmin (see above steps for a user connection)
Important considerations when connecting as a group member:
-* groupname is the name of the Azure AD group you're trying to connect as
-* Make sure to use the exact way the Azure AD group name is spelled.
-* Azure AD user and group names are case sensitive
-* When connecting as a group, use only the group name and not the alias of a group member.
-* If the name contains spaces, use \ before each space to escape it.
-* The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token just before initiating the login to Azure Database for PostgreSQL.
+- groupname is the name of the Azure AD group you're trying to connect as
+- Make sure to use the exact way the Azure AD group name is spelled.
+- Azure AD user and group names are case-sensitive
+- When connecting as a group, use only the group name and not the alias of a group member.
+- If the name contains spaces, use \ before each space to escape it.
+- The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token just before initiating the sign-in to Azure Database for PostgreSQL.
You're now authenticated to your PostgreSQL server using Azure AD authentication. ## Next steps
-* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL - Flexible Server](concepts-azure-ad-authentication.md)
-* Learn how to create and manage Azure AD enabled PostgreSQL roles [Manage Azure AD roles in Azure Database for PostgreSQL - Flexible Server](Manage Azure AD roles in Azure Database for PostgreSQL - Flexible Server.md)
+- Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL - Flexible Server](concepts-azure-ad-authentication.md)
+- Learn how to create and manage Azure AD enabled PostgreSQL roles [Manage Azure AD roles in Azure Database for PostgreSQL - Flexible Server](Manage Azure AD roles in Azure Database for PostgreSQL - Flexible Server.md)
<!--Image references-->
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-with-managed-identity.md
Title: Connect with Managed Identity - Azure Database for PostgreSQL - Flexible Server description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for PostgreSQL Flexible Server+++ Last updated : 11/04/2022 --- Previously updated : 09/26/2022+
+ - devx-track-csharp
+ - devx-track-azurecli
# Connect with Managed Identity to Azure Database for PostgreSQL Flexible Server Preview [!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
-You can use both system-assigned and user-assigned managed identities to authenticate to Azure Database for PostgreSQL. This article shows you how to use a system-assigned managed identity for an Azure Virtual Machine (VM) to access an Azure Database for PostgreSQL server. Managed Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication, without needing to insert credentials into your code.
+You can use both system-assigned and user-assigned managed identities to authenticate to Azure Database for PostgreSQL. This article shows you how to use a system-assigned managed identity for an Azure Virtual Machine (VM) to access an Azure Database for PostgreSQL server. Managed Identities are automatically managed by Azure and enable you to authenticate to services that support Azure AD authentication without needing to insert credentials into your code.
You learn how to: - Grant your VM access to an Azure Database for PostgreSQL Flexible server
You learn how to:
## Prerequisites - If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.-- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../../articles/role-based-access-control/role-assignments-portal.md).-- You need an Azure VM (for example running Ubuntu Linux) that you'd like to use for access your database using Managed Identity
+- To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with a role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../../articles/role-based-access-control/role-assignments-portal.md).
+- You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database using Managed Identity
- You need an Azure Database for PostgreSQL database server that has [Azure AD authentication](how-to-configure-sign-in-azure-ad-authentication.md) configured-- To follow the C# example, first complete the guide how to [Connect with C#](connect-csharp.md)
+- To follow the C# example, first, complete the guide on how to [Connect with C#](connect-csharp.md)
-## Creating a system-assigned managed identity for your VM
+## Create a system-assigned managed identity for your VM
-Use [az vm identity assign](/cli/azure/vm/identity/) with the `identity assign` command enable the system-assigned identity to an existing VM:
+Use [az vm identity assign](/cli/azure/vm/identity/) with the `identity assign` command to enable the system-assigned identity on an existing VM:
```azurecli-interactive az vm identity assign -g myResourceGroup -n myVm
Retrieve the application ID for the system-assigned managed identity, which you'
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)] - az ad sp list --display-name vm-name --query [*].appId --out tsv ```
-## Creating a PostgreSQL user for your Managed Identity
+## Create a PostgreSQL user for your Managed Identity
Now, connect as the Azure AD administrator user to your PostgreSQL database, and run the following SQL statements, replacing `CLIENT_ID` with the client ID you retrieved for your system-assigned managed identity:
Now, connect as the Azure AD administrator user to your PostgreSQL database, and
select * from pgaadauth_create_principal('<identity_name>', false, false); ```
-For more details on managing Azure AD enabled database roles see [how to manage Azure AD enabled PostgreSQL roles](./how-to-manage-azure-ad-users.md)
+For more information on managing Azure AD enabled database roles, see [how to manage Azure AD enabled PostgreSQL roles](./how-to-manage-azure-ad-users.md)
-The managed identity now has access when authenticating with the identity name as a role name and Azure AD token as a password.
+The managed identity now has access when authenticating with the identity name as a role name and the Azure AD token as a password.
-## Retrieving the access token from Azure Instance Metadata service
+## Retrieve the access token from the Azure Instance Metadata service
Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database.
This token retrieval is done by making an HTTP request to `http://169.254.169.25
* `resource` = `https://ossrdbms-aad.database.windows.net` * `client_id` = `CLIENT_ID` (that you retrieved earlier)
-You'll get back a JSON result that contains an `access_token` field - this long text value is the Managed Identity access token, that you should use as the password when connecting to the database.
+You get back a JSON result containing an `access_token` field - this long text value is the Managed Identity access token you should use as the password when connecting to the database.
+
+For testing purposes, you can run the following commands in your shell.
-For testing purposes, you can run the following commands in your shell. Note you need `curl`, `jq`, and the `psql` client installed.
+> [!NOTE]
+> You need `curl`, `jq`, and the `psql` client installed.
```bash # Retrieve the access token
export PGPASSWORD=`curl -s 'http://169.254.169.254/metadata/identity/oauth2/toke
psql -h SERVER --user USER DBNAME ```
-You are now connected to the database you've configured earlier.
+You're now connected to the database you configured earlier.
-## Connecting using Managed Identity in C#
+## Connect using Managed Identity in C#
This section shows how to get an access token using the VM's user-assigned managed identity and use it to call Azure Database for PostgreSQL. Azure Database for PostgreSQL natively supports Azure AD authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to PostgreSQL, you pass the access token in the password field.
namespace Driver
} ```
-When run, this command will give an output like this:
+When run, this command gives an output like this:
-```
+```output
Getting access token from Azure AD... Opening connection using access token...
Postgres version: PostgreSQL 11.11, compiled by Visual C++ build 1800, 64-bit
## Next steps
-* Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL](concepts-azure-ad-authentication.md)
+- Review the overall concepts for [Azure Active Directory authentication with Azure Database for PostgreSQL](concepts-azure-ad-authentication.md)
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-users.md
Title: Create users - Azure Database for PostgreSQL - Flexible Server description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Flexible Server.+++ Last updated : 11/04/2022 -- Previously updated : 09/26/2022 # Create users in Azure Database for PostgreSQL - Flexible Server Preview
Last updated 09/26/2022
This article describes how you can create users within an Azure Database for PostgreSQL server.
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview.
-If you would like to learn about how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
+If you want to learn how to create and manage Azure subscription users and their privileges, see the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
## The server admin account
-When you first created your Azure Database for PostgreSQL, you provided a server admin user name and password. For more information, you can follow the [Quickstart](quickstart-create-server-portal.md) to see the step-by-step approach. Since the server admin user name is a custom name, you can locate the chosen server admin user name from the Azure portal.
+When you first created your Azure Database for PostgreSQL, you provided a server admin username and password. For more information, you can follow the [Quickstart](quickstart-create-server-portal.md) to see the step-by-step approach. Since the server admin user name is a custom name, you can locate the chosen server admin user name from the Azure portal.
-The Azure Database for PostgreSQL server is created with the 3 default roles defined. You can see these roles by running the command: `SELECT rolname FROM pg_roles;`
+The Azure Database for PostgreSQL server is created with the three default roles defined. You can see these roles by running the command: `SELECT rolname FROM pg_roles;`
- azure_pg_admin - azure_superuser - your server admin user
-Your server admin user is a member of the azure_pg_admin role. However, the server admin account is not part of the azure_superuser role. Since this service is a managed PaaS service, only Microsoft is part of the super user role.
+Your server admin user is a member of the azure_pg_admin role. However, the server admin account isn't part of the azure_superuser role. Since this service is a managed PaaS service, only Microsoft is part of the super user role.
The PostgreSQL engine uses privileges to control access to database objects, as discussed in the [PostgreSQL product documentation](https://www.postgresql.org/docs/current/static/sql-createrole.html). In Azure Database for PostgreSQL, the server admin user is granted these privileges:
- LOGIN, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE, REPLICATION
-The server admin user account can be used to create additional users and grant those users into the azure_pg_admin role. Also, the server admin account can be used to create less privileged users and roles that have access to individual databases and schemas.
+- LOGIN, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE, REPLICATION
+
+The server admin user account can be used to create more users and grant those users into the azure_pg_admin role. Also, the server admin account can be used to create less privileged users and roles that have access to individual databases and schemas.
-## How to create additional admin users in Azure Database for PostgreSQL
+## How to create more admin users in Azure Database for PostgreSQL
1. Get the connection information and admin user name.
- To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+ You need the full server name and admin sign-in credentials to connect to your database server. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
-2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
- If you are unsure of how to connect, see [the quickstart](./quickstart-create-server-portal.md)
+1. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+ If you're unsure of how to connect, see [the quickstart](./quickstart-create-server-portal.md)
-3. Edit and run the following SQL code. Replace your new user name for the placeholder value <new_user>, and replace the placeholder password with your own strong password.
+1. Edit and run the following SQL code. Replace the placeholder value <new_user> with your new user name, and replace the placeholder password with your own strong password.
```sql CREATE ROLE <new_user> WITH LOGIN NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION PASSWORD '<StrongPassword!>';
The server admin user account can be used to create additional users and grant t
## How to create database users in Azure Database for PostgreSQL 1. Get the connection information and admin user name.
- To connect to your database server, you need the full server name and admin sign-in credentials. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+ You need the full server name and admin sign-in credentials to connect to your database server. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
-2. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+1. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
-3. Edit and run the following SQL code. Replace the placeholder value `<db_user>` with your intended new user name, and placeholder value `<newdb>` with your own database name. Replace the placeholder password with your own strong password.
+1. Edit and run the following SQL code. Replace the placeholder value `<db_user>` with your intended new user name and placeholder value `<newdb>` with your own database name. Replace the placeholder password with your own strong password.
- This sql code syntax creates a new database named testdb, for example purposes. Then it creates a new user in the PostgreSQL service, and grants connect privileges to the new database for that user.
+ This SQL code creates a new database named testdb for example purposes. It then creates a new user in the PostgreSQL service and grants connect privileges to the new database for that user.
```sql CREATE DATABASE <newdb>;
The server admin user account can be used to create additional users and grant t
GRANT CONNECT ON DATABASE <newdb> TO <db_user>; ```
-4. Using an admin account, you may need to grant additional privileges to secure the objects in the database. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/ddl-priv.html) for further details on database roles and privileges. For example:
+1. Using an admin account, you may need to grant other privileges to secure the objects in the database. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/ddl-priv.html) for further details on database roles and privileges. For example:
```sql GRANT ALL PRIVILEGES ON DATABASE <newdb> TO <db_user>;
The server admin user account can be used to create additional users and grant t
GRANT SELECT ON ALL TABLES IN SCHEMA <schema_name> TO <db_user>; ```
-5. Log in to your server, specifying the designated database, using the new user name and password. This example shows the psql command line. With this command, you are prompted for the password for the user name. Replace your own server name, database name, and user name.
+1. Sign in to your server, specifying the designated database, using the new username and password. This example shows the psql command line. With this command, you're prompted for the password for the user name. Replace your own server name, database name, and user name.
```shell psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=db_user@mydemoserver --dbname=newdb
The server admin user account can be used to create additional users and grant t
## Next steps Open the firewall for the IP addresses of the new users' machines to enable them to connect:
-[Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
-For more information regarding user account management, see PostgreSQL product documentation for [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html), [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html), and [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html).
+- [Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
+
+- For more information regarding user account management, see PostgreSQL product documentation for [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html), [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html), and [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html).
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
description: This article describes how you can manage Azure AD enabled roles to
Previously updated : 10/12/2022 Last updated : 11/04/2022
This article describes how you can create an Azure Active Directory (Azure AD) e
> This guide assumes you already enabled Azure Active Directory authentication on your PostgreSQL Flexible server. > See [How to Configure Azure AD Authentication](./how-to-configure-sign-in-azure-ad-authentication.md)
-> [!NOTE]
+> [!NOTE]
> Azure Active Directory Authentication for PostgreSQL Flexible Server is currently in preview. If you'd like to learn how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
If you like to learn about how to create and manage Azure subscription users and
## Create or Delete Azure AD administrators using Azure portal or Azure Resource Manager (ARM) API 1. Open **Authentication** page for your Azure Database for PostgreSQL Flexible Server in Azure portal
-2. To add an administrator - select **Add Azure AD Admin** and select a user, group, application or a managed identity from the current Azure AD tenant.
-3. To remove an administrator - select **Delete** icon for the one to remove.
-4. Select **Save** and wait for provisioning operation to completed.
+1. To add an administrator - select **Add Azure AD Admin** and select a user, group, application, or managed identity from the current Azure AD tenant.
+1. To remove an administrator - select the **Delete** icon for the one to remove.
+1. Select **Save** and wait for the provisioning operation to complete.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/how-to-manage-azure-ad-users/add-aad-principal-via-portal.png" alt-text="Screenshot of managing Azure AD administrators via portal.":::
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
+
+ Title: Modify a packet core instance
+
+description: In this how-to guide, you'll learn how to modify a packet core instance using the Azure portal.
++++ Last updated : 09/29/2022+++
+# Modify the packet core instance in a site
+
+Each Azure Private 5G Core Preview site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). In this how-to guide, you'll learn how to modify a packet core instance using the Azure portal; this includes modifying the packet core's custom location, connected Azure Stack Edge device, and access network configuration. You'll also learn how to modify the data network attached to the packet core instance.
+
+## Prerequisites
+
+- If you want to make changes to the packet core configuration or access network, refer to [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) and [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to collect the new values and make sure they're in the correct format.
+
+ > [!NOTE]
+ > You can't update a packet core instance's **Technology type** or **Version** field.
+ >
+ > - To change the technology type, you'll need to delete the site and [recreate it](create-a-site.md). <!-- link to new site deletion section -->
+ > - To change the version, [upgrade the packet core instance](upgrade-packet-core-azure-portal.md).
+
+- If you want to make changes to the attached data network, refer to [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to collect the new values and make sure they're in the correct format.
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
+
+## Select the packet core instance to modify
+
+In this step, you'll navigate to the **Packet Core Control Plane** resource representing your packet core instance.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Search for and select the **Mobile Network** resource representing the private mobile network.
+
+ :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
+
+3. In the **Resource** menu, select **Sites**.
+4. Select the site containing the packet core instance you want to modify.
+5. Under the **Network function** heading, select the name of the **Packet Core Control Plane** resource shown next to **Packet Core**.
+
+ :::image type="content" source="media/packet-core-field.png" alt-text="Screenshot of the Azure portal showing the Packet Core field.":::
+
+6. Select **Modify packet core**.
+
+ :::image type="content" source="media/modify-packet-core/modify-packet-core-configuration.png" alt-text="Screenshot of the Azure portal showing the Modify packet core option.":::
+
+7. If you want to make changes to the packet core configuration or access network values, go to [Modify the packet core configuration](#modify-the-packet-core-configuration). If you only want to make changes to the attached data network, go to [Modify the attached data network configuration](#modify-the-attached-data-network-configuration).
+
+### Modify the packet core configuration
+
+To modify the packet core and/or access network configuration:
+
+1. In the **Configuration** tab, fill out the fields with any new values.
+
+ - Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) for the top-level configuration values.
+ - Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the configuration values under **Access network**.
+
+ :::image type="content" source="media/modify-packet-core/modify-packet-core-configuration-tab.png" alt-text="Screenshot of the Azure portal showing the Modify packet core Configuration tab.":::
+
+2. If you also want to make changes to the attached data network, select the **Data network** tab and go to [Modify the attached data network configuration](#modify-the-attached-data-network-configuration). Otherwise, go to [Submit and verify changes](#submit-and-verify-changes).
+
+### Modify the attached data network configuration
+
+To make changes to the data network attached to your packet core instance:
+
+1. In the **Data network** tab, select the data network.
+
+ :::image type="content" source="media/modify-packet-core/modify-packet-core-data-network-tab.png" alt-text="Screenshot of the Azure portal showing the Modify packet core Data network tab.":::
+
+2. Use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields in the **Modify a data network** window.
+
+ :::image type="content" source="media/modify-packet-core/modify-packet-core-modify-data-network.png" alt-text="Screenshot of the Azure portal showing the Attach data network screen.":::
+
+3. Select **Modify**. You should see your changes under the **Data network** tab.
+
+### Submit and verify changes
+
+1. Select **Modify**.
+2. Azure will now redeploy the packet core instance with the new configuration. The Azure portal will display the following confirmation screen when this deployment is complete.
+
+ :::image type="content" source="media/site-deployment-complete.png" alt-text="Screenshot of the Azure portal showing the confirmation of a successful deployment of a packet core instance.":::
+
+3. Navigate to the **Packet Core Control Plane** resource as described in [Select the packet core instance to modify](#select-the-packet-core-instance-to-modify).
+
+ - If you made changes to the packet core configuration, check that the fields under **Connected ASE device**, **Custom ARC location** and **Access network** contain the updated information.
+ - If you made changes to the attached data network, check that the fields under **Data network** contain the updated information.
+
+## Next steps
+
+Use Log Analytics or the packet core dashboards to confirm your packet core instance is operating normally after you modify it.
+
+- [Monitor Azure Private 5G Core with Log Analytics](monitor-private-5g-core-with-log-analytics.md)
+- [Packet core dashboards](packet-core-dashboards.md)
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
You should ask your RAN partner for the countries and frequency bands for which
Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries, spectrum must be obtained from the national regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you may need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
+#### Maximum Transmission Units (MTUs)
+
+The Maximum Transmission Unit (MTU) is a property of an IP link, and it is configured on the interfaces at each end of the link. Packets that exceed an interface's configured MTU are split into smaller packets via IPv4 fragmentation prior to sending and are then reassembled at their destination. However, if an interface's configured MTU is higher than the link's supported MTU, the packet will fail to be transmitted correctly.
+
+To avoid transmission issues caused by IPv4 fragmentation, a 4G or 5G packet core instructs UEs what MTU they should use. However, UEs do not always respect the MTU signalled by the packet core.
+
+IP packets from UEs are tunnelled through from the RAN, which adds overhead from encapsulation. Due to this, the MTU value for the UE should be smaller than the MTU value used between the RAN and the Packet Core to avoid transmission issues.
+
+RANs typically come pre-configured with an MTU of 1500. The Packet Core's default UE MTU is 1300 bytes to allow for encapsulation overhead. These values maximize RAN interoperability, but risk that certain UEs will not observe the default MTU and will generate larger packets that require IPv4 fragmentation and may be dropped by the network.
+
+If you are affected by this issue, we strongly recommend configuring the RAN to use an MTU of 1560 or higher, which allows sufficient overhead for the encapsulation and avoids fragmentation for a UE using a standard MTU of 1500.
+ ### Signal coverage The UEs must be able to communicate with the RAN from any location at the site. This means that the signals must propagate effectively in the environment, including accounting for obstructions and equipment, to support UEs moving around the site (for example, between indoor and outdoor areas).
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
To check which version your packet core instance is currently running, and wheth
1. Select the site containing the packet core instance you're interested in. 1. Under the **Network function** heading, select the name of the **Packet Core Control Plane** resource shown next to **Packet Core**.
- :::image type="content" source="media/upgrade-packet-core-azure-portal/packet-core-field.png" alt-text="Screenshot of the Azure portal showing the Packet Core field.":::
+ :::image type="content" source="media/packet-core-field.png" alt-text="Screenshot of the Azure portal showing the Packet Core field.":::
-1. Check the **Version** field under the **Configuration** heading to view the current software version. If there's an attention icon next to this field, a new packet core version is available. If there's a warning that you're running an unsupported version, we advise that you upgrade your packet core instance to a version that Microsoft currently supports.
+2. Check the **Version** field under the **Configuration** heading to view the current software version. If there's an attention icon next to this field, a new packet core version is available. If there's a warning that you're running an unsupported version, we advise that you upgrade your packet core instance to a version that Microsoft currently supports.
:::image type="content" source="media/upgrade-packet-core-azure-portal/packet-core-control-plane-overview.png" alt-text="Screenshot of the Azure portal showing the Packet Core Control Plane resource overview." lightbox="media/upgrade-packet-core-azure-portal/packet-core-control-plane-overview.png":::
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
The table below shows the supported capabilities for each data source. Select th
|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | No | || [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | No | [Yes](register-scan-snowflake.md#lineage) | No | No | || [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No| No |
-|| [SQL Server on Azure-Arc](register-scan-azure-arc-enabled-sql-server.md)| [Yes](register-scan-azure-arc-enabled-sql-server.md#register) | [Yes](register-scan-azure-arc-enabled-sql-server.md#scan) | No* |Preview: [1.DevOps policies](how-to-policies-devops-arc-sql-server.md) [2.Data Owner](how-to-policies-data-owner-arc-sql-server.md) | No |
+|| [SQL Server on Azure-Arc](register-scan-azure-arc-enabled-sql-server.md)| [Yes](register-scan-azure-arc-enabled-sql-server.md#register) | [Yes](register-scan-azure-arc-enabled-sql-server.md#scan) | No* |[Yes](register-scan-azure-arc-enabled-sql-server.md#access-policy) | No |
|| [Teradata](register-scan-teradata-source.md)| [Yes](register-scan-teradata-source.md#register)| [Yes](register-scan-teradata-source.md#scan)| [Yes*](register-scan-teradata-source.md#lineage) | No| No | |File|[Amazon S3](register-scan-amazon-s3.md)|[Yes](register-scan-amazon-s3.md)| [Yes](register-scan-amazon-s3.md)| Limited* | No| No | ||[HDFS](register-scan-hdfs.md)|[Yes](register-scan-hdfs.md)| [Yes](register-scan-hdfs.md)| No | No| No |
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
This article outlines how to register Azure Arc-enabled SQL Server instances, an
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**| |||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [1.DevOps policies](how-to-policies-devops-arc-sql-server.md) [2.Data Owner](how-to-policies-data-owner-arc-sql-server.md) | Limited** | No |
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#access-policy) | Limited** | No |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
To create and run a new scan, do the following:
[!INCLUDE [view and manage scans](includes/view-and-manage-scans.md)]
+## Access policy
+
+### Supported policies
+The following types of policies are supported on this data resource from Microsoft Purview:
+* [DevOps policies](how-to-policies-devops-arc-sql-server.md)
+* [Data Owner](how-to-policies-data-owner-arc-sql-server.md)
++ ## Next steps Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
security Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/key-management.md
Azure offers several options for storing and managing your keys in the cloud, in
**Azure Dedicated HSM**: A FIPS 140-2 Level 3 validated bare metal HSM offering, that lets customers lease a general-purpose HSM appliance that resides in Microsoft datacenters. The customer has complete and total ownership over the HSM device and is responsible for patching and updating the firmware when required. Microsoft has no permissions on the device or access to the key material, and Dedicated HSM is not integrated with any Azure PaaS offerings. Customers can interact with the HSM using the PKCS#11, JCE/JCA, and KSP/CNG APIs. This offering is most useful for legacy lift-and-shift workloads, PKI, SSL Offloading and Keyless TLS (supported integrations include F5, Nginx, Apache, Palo Alto, IBM GW and more), OpenSSL applications, Oracle TDE, and Azure SQL TDE IaaS. For more information, see [What is Azure Key Vault Managed HSM?](../../dedicated-hsm/overview.md)
-**Azure Payments HSM** (in public preview): A FIPS 140-2 Level 3, PCI HSM v3, validated bare metal offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is PCI DSS and PCI 3DS compliant. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. This offering is currently in public preview. For more information, see [About Azure Payment HSM](../../payment-hsm/overview.md).
+**Azure Payments HSM**: A FIPS 140-2 Level 3, PCI HSM v3, validated bare metal offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is PCI DSS and PCI 3DS compliant. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. For more information, see [About Azure Payment HSM](../../payment-hsm/overview.md).
### Pricing
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
editor: '' --+ na
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
If you choose to retrieve additional information with the [NPLK900202 optional C
| SAP BASIS versions | Notes |
| | |
-| - 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | [2641084 - Standardized read access to data of Security Audit Log](https://launchpad.support.sap.com/#/notes/2641084)* |
+| - 750 SP04 to SP12<br>- 751 SP00 to SP06<br>- 752 SP00 to SP02 | [2641084 - Standardized read access to data of Security Audit Log](https://launchpad.support.sap.com/#/notes/2641084)* |
## Next steps
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
The change takes effect two minutes after you save the file. You don't need to r
## View all container execution logs
-Connector execution logs for your Microsoft Sentinel Solution for SAP data connector deployment are stored in **/opt/sapcon/[SID]/log**. Log filename is **OmniLog.log**. A history of logfiles is kept, suffixed with *.<number>* such as **OmniLog.log.1**, **OmniLog.log.2** etc
+Connector execution logs for your Microsoft Sentinel Solution for SAP data connector deployment are stored on your VM in **/opt/sapcon/[SID]/log/**. The log filename is **OmniLog.log**. A history of log files is kept, suffixed with *.[number]*, such as **OmniLog.log.1**, **OmniLog.log.2**, and so on.
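To inspect these logs quickly, here's a minimal sketch (the path and filename follow the convention above; replace `<SID>` with your own system ID):

```bash
# Follow the most recent connector log entries as they are written
tail -f /opt/sapcon/<SID>/log/OmniLog.log

# List any rotated history files
ls -l /opt/sapcon/<SID>/log/OmniLog.log.*
```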
## Review and update the Microsoft Sentinel for SAP data connector configuration
service-bus-messaging Service Bus Dead Letter Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dead-letter-queues.md
There are several activities in Service Bus that cause messages to get pushed to
|TTLExpiredException |The message expired and was dead lettered. See the [Time to live](#time-to-live) section for details. |
|Session ID is null. |Session enabled entity doesn't allow a message whose session identifier is null. |
|MaxTransferHopCountExceeded | The maximum number of allowed hops when forwarding between queues has been exceeded. This value is set to 4. |
-| MaxDeliveryCountExceededExceptionMessage | Message couldn't be consumed after maximum delivery attempts. See the [Maximum delivery count](#maximum-delivery-count) section for details. |
+| MaxDeliveryCountExceeded | Message couldn't be consumed after maximum delivery attempts. See the [Maximum delivery count](#maximum-delivery-count) section for details. |
## Maximum delivery count

There is a limit on the number of attempts to deliver messages for Service Bus queues and subscriptions. The default value is 10. Whenever a message has been delivered under a peek-lock, but has been either explicitly abandoned or the lock has expired, the delivery count on the message is incremented. When the delivery count exceeds the limit, the message is moved to the DLQ. The dead-letter reason for the message in DLQ is set to: MaxDeliveryCountExceeded. This behavior can't be disabled, but you can set the max delivery count to a large number.
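As an illustration, here's a hedged Azure CLI sketch of raising that limit when creating a queue (the resource names are placeholders, and the `--max-delivery-count` parameter is assumed to be available in current CLI releases):

```azurecli
az servicebus queue create \
    --resource-group <resource-group-name> \
    --namespace-name <namespace-name> \
    --name <queue-name> \
    --max-delivery-count 20
```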
service-connector Tutorial Java Spring Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-mysql.md
Title: 'Tutorial: Deploy a Spring Cloud Application Connected to Azure Database for MySQL with Service Connector'
-description: Create a Spring Boot application connected to Azure Database for MySQL with Service Connector.
+ Title: 'Tutorial: Deploy an application to Azure Spring Apps and connect it to Azure Database for MySQL Flexible Server using Service Connector'
+description: Create a Spring Boot application connected to Azure Database for MySQL Flexible Server with Service Connector.
Previously updated : 05/03/2022 Last updated : 11/02/2022 ms.devlang: azurecli
-# Tutorial: Deploy Spring Cloud Application Connected to Azure Database for MySQL with Service Connector
+# Tutorial: Deploy an application to Azure Spring Apps and connect it to Azure Database for MySQL Flexible Server using Service Connector
-In this tutorial, you will complete the following tasks using the Azure portal or the Azure CLI. Both methods are explained in the following procedures.
+In this tutorial, you'll complete the following tasks using the Azure portal or the Azure CLI. Both methods are explained in the following procedures.
> [!div class="checklist"]
-> * Provision an instance of Azure Spring Cloud
-> * Build and deploy apps to Azure Spring Cloud
-> * Integrate Azure Spring Cloud with Azure Database for MySQL with Service Connector
+> * Provision an instance of Azure Spring Apps
+> * Build and deploy apps to Azure Spring Apps
+> * Integrate Azure Spring Apps with Azure Database for MySQL with Service Connector
## Prerequisites

* [Install JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install)
* [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Cloud extension with the command: `az extension add --name spring-cloud`
+* [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) and install the Azure Spring Apps extension with the command: `az extension add --name spring`
-## Provision an instance of Azure Spring Cloud
+## Provision an instance of Azure Spring Apps
-The following procedure uses the Azure CLI extension to provision an instance of Azure Spring Cloud.
+The following procedure uses the Azure CLI extension to provision an instance of Azure Spring Apps.
-1. Update Azure CLI with the Azure Spring Cloud extension.
+1. Update Azure CLI with the Azure Spring Apps extension.
```azurecli
- az extension update --name spring-cloud
+ az extension update --name spring
``` 1. Sign in to the Azure CLI and choose your active subscription.
The following procedure uses the Azure CLI extension to provision an instance of
az account set --subscription <Name or ID of subscription, skip if you only have 1 subscription> ```
-1. Prepare a name for your Azure Spring Cloud service. The name must be between 4 and 32 characters long and can contain only lowercase letters, numbers, and hyphens. The first character of the service name must be a letter and the last character must be either a letter or a number.
-
-1. Create a resource group to contain your Azure Spring Cloud service and an instance of the Azure Spring Cloud service.
+1. Create a resource group to contain your app and an instance of the Azure Spring Apps service.
```azurecli
- az group create --name ServiceConnector-tutorial-rg
- az spring-cloud create -n <service instance name> -g ServiceConnector-tutorial-rg
+ az group create --name ServiceConnector-tutorial-mysqlf --location eastus
```
-## Create an Azure Database for MySQL
-
-The following procedure uses the Azure CLI extension to provision an instance of Azure Database for MySQL.
-
-1. Install the [db-up](/cli/azure/mysql) extension.
+1. Create an instance of Azure Spring Apps. Its name must be between 4 and 32 characters long and can only contain lowercase letters, numbers, and hyphens. The first character of the Azure Spring Apps instance name must be a letter and the last character must be either a letter or a number.
```azurecli
- az extension add --name db-up
+ az spring create -n my-azure-spring -g ServiceConnector-tutorial-mysqlf
```
-1. Create an Azure Database for MySQL server using the following command:
+## Create an Azure Database for MySQL Flexible Server
- ```azurecli
- az mysql up --resource-group ServiceConnector-tutorial-rg --admin-user <admin-username> --admin-password <admin-password>
- ```
-
- For *`<admin-username>`* and *`<admin-password>`*, specify credentials to create an administrator user for this MySQL server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, #, %). The password cannot contain username.
+Create a MySQL Flexible Server instance. In the command below, replace `<admin-username>` and `<admin-password>` with credentials of your choice to create an administrator user for the MySQL flexible server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers, and non-alphanumeric characters (for example, `!`, `#`, `%`). The password can't contain the admin username.
- The server is created with the following default values (unless you manually override them):
+```azurecli-interactive
+az mysql flexible-server create \
+ --resource-group ServiceConnector-tutorial-mysqlf \
+ --name mysqlf-server \
+ --database-name mysqlf-db \
+ --admin-user <admin-username> \
+ --admin-password <admin-password>
+```
- **Setting** | **Default value** | **Description**
- ||
- server-name | System generated | A unique name that identifies your Azure Database for MySQL server.
- sku-name | GP_Gen5_2 | The name of the sku. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. See our [pricing page](https://azure.microsoft.com/pricing/details/mysql/) for more information about the tiers.
- backup-retention | 7 | How long a backup should be retained. Unit is days.
- geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not.
- location | westus2 | The Azure location for the server.
- ssl-enforcement | Enabled | Whether SSL should be enabled or not for this server.
- storage-size | 5120 | The storage capacity of the server (unit is megabytes).
- version | 5.7 | The MySQL major version.
+ The server is created with the following default values unless you manually override them:
+| **Setting** | **Default value** | **Description** |
+|-|-||
+| server-name | System generated | A unique name that identifies your Azure Database for MySQL server. |
+| sku-name | GP_Gen5_2 | The name of the sku. Follows the convention {pricing tier}\_{compute generation}\_{vCores} in shorthand. The default is a General Purpose Gen5 server with 2 vCores. For more information about the pricing, go to our [pricing page](https://azure.microsoft.com/pricing/details/mysql/). |
+| backup-retention | 7 | How long a backup should be retained. Unit is days. |
+| geo-redundant-backup | Disabled | Whether geo-redundant backups should be enabled for this server or not. |
+| location | westus2 | The Azure location for the server. |
+| ssl-enforcement | Enabled | Whether SSL should be enabled or not for this server. |
+| storage-size | 5120 | The storage capacity of the server (unit is megabytes). |
+| version | 5.7 | The MySQL major version. |
> [!NOTE]
-> For more information about the `az mysql up` command and its additional parameters, see the [Azure CLI documentation](/cli/azure/mysql#az-mysql-up).
-
-Once your server is created, it comes with the following settings:
+> Standard_B1ms SKU is used by default. Refer to [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/) for pricing details.
-- A firewall rule called "devbox" is created. The Azure CLI attempts to detect the IP address of the machine the `az mysql up` command is run from and allows that IP address.
-- "Allow access to Azure services" is set to ON. This setting configures the server's firewall to accept connections from all Azure resources, including resources not in your subscription.
-- The `wait_timeout` parameter is set to 8 hours
-- An empty database named `sampledb` is created
-- A new user named "root" with privileges to `sampledb` is created
+> [!NOTE]
+> For more information about the `az mysql flexible-server create` command and its additional parameters, see the [Azure CLI documentation](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-create).
## Build and deploy the app
-1. Create the app with public endpoint assigned. If you selected Java version 11 when generating the Spring Cloud project, include the `--runtime-version=Java_11` switch.
+1. Create the app with a public endpoint assigned. If you selected Java version 11 when generating the Azure Spring Apps project, include the `--runtime-version=Java_11` switch.
- ```azurecli
- az spring-cloud app create -n hellospring -s <service instance name> -g ServiceConnector-tutorial-rg --assign-endpoint true
+ ```azurecli-interactive
+ az spring app create -n hellospring -s my-azure-spring -g ServiceConnector-tutorial-mysqlf --assign-endpoint true
```
-1. Create service connections between Spring Cloud to MySQL database.
-
- ```azurecli
- az spring-cloud connection create mysql
+1. Run the `az spring connection create` command to connect the application deployed to Azure Spring Apps to the MySQL Flexible Server database. Replace the placeholders below with your own information.
+
+ ```azurecli-interactive
+ az spring connection create mysql-flexible \
+ --resource-group ServiceConnector-tutorial-mysqlf \
+ --service my-azure-spring \
+ --app hellospring \
+ --target-resource-group ServiceConnector-tutorial-mysqlf \
+ --server mysqlf-server \
+ --database mysqlf-db \
+    --secret name=<admin-username> secret=<admin-password>
```
+ | Setting | Description |
+ ||-|
+ | `--resource-group` | The name of the resource group that contains the app hosted by Azure Spring Apps. |
+ | `--service` | The name of the Azure Spring Apps resource. |
+ | `--app` | The name of the application hosted by Azure Spring Apps that connects to the target service. |
+    | `--target-resource-group` | The name of the resource group that contains the MySQL Flexible Server. |
+    | `--server` | The name of the MySQL Flexible Server you want to connect to. |
+ | `--database` | The name of the database you created earlier. |
+ | `--secret name` | The MySQL Flexible Server username. |
+ | `--secret` | The MySQL Flexible Server password. |
+ > [!NOTE]
- > If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", please run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider and run the connection command again.
+ > If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", please run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider and run the connection command again.
1. Clone sample code
Once your server is created, it comes with the following settings:
mvn clean package -DskipTests ```
-1. Deploy the JAR file for the app (`target/demo-0.0.1-SNAPSHOT.jar`).
+1. Deploy the JAR file for the app `target/demo-0.0.1-SNAPSHOT.jar`.
```azurecli
- az spring-cloud app deploy -n hellospring -s <service instance name> -g ServiceConnector-tutorial-rg --artifact-path target/demo-0.0.1-SNAPSHOT.jar
+ az spring app deploy \
+ --name hellospring \
+ --service my-azure-spring \
+ --resource-group ServiceConnector-tutorial-mysqlf \
+ --artifact-path target/demo-0.0.1-SNAPSHOT.jar
``` 1. Query app status after deployment with the following command. ```azurecli
- az spring-cloud app list -o table
+ az spring app list --resource-group ServiceConnector-tutorial-mysqlf --service my-azure-spring --output table
```
- You should see output like the following.
+ You should see the following output:
+
+ ```output
+ Name Location ResourceGroup Public Url Production Deployment Provisioning State CPU Memory Running Instance Registered Instance Persistent Storage Bind Service Registry Bind Application Configuration Service
+ -- - -- -- -- -- -- -- -- -
+ hellospring eastus ServiceConnector-tutorial-mysqlf https://my-azure-spring-hellospring.azuremicroservices.io default Succeeded 1 1Gi 1/1 0/1 - -
- ```
- Name Location ResourceGroup Production Deployment Public Url Provisioning Status CPU Memory Running Instance Registered Instance Persistent Storage
- -- - -- -- -- --
- hellospring eastus <resource group> default Succeeded 1 2 1/1 0/1 -
``` ## Next steps
spring-apps How To Bind Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-postgres.md
Use the following steps to bind your app.
--secret name=$USERNAME secret=$PASSWORD ```
-### [Using a passwordless connection with a managed identity](#tab/Passwordless)
+### [Using a passwordless connection with a managed identity for flexible server](#tab/Passwordlessflex)
-Configure Azure Spring Apps to connect to the PostgreSQL Database Single Server with a system-assigned managed identity using the `az spring connection create` command.
+Configure Azure Spring Apps to connect to the PostgreSQL Database with a system-assigned managed identity using the `az spring connection create` command.
+
+```azurecli
+az spring connection create postgres-flexible \
+ --resource-group $SPRING_APP_RESOURCE_GROUP \
+ --service $Spring_APP_SERVICE_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
+ --target-resource-group $POSTGRES_RESOURCE_GROUP \
+ --server $POSTGRES_SERVER_NAME \
+ --database $DATABASE_NAME \
+ --system-identity
+```
+
+### [Using a passwordless connection with a managed identity for single server](#tab/Passwordlesssingle)
+
+Configure Azure Spring Apps to connect to the PostgreSQL Database with a system-assigned managed identity using the `az spring connection create` command.
```azurecli
az spring connection create postgres \
    --resource-group $SPRING_APP_RESOURCE_GROUP \
    --service $Spring_APP_SERVICE_NAME \
- --app $APP_NAME --deployment $DEPLOYMENT_NAME \
+ --app $APP_NAME \
+ --deployment $DEPLOYMENT_NAME \
    --target-resource-group $POSTGRES_RESOURCE_GROUP \
    --server $POSTGRES_SERVER_NAME \
    --database $DATABASE_NAME \
- --system-assigned-identity
+ --system-identity
```
spring-apps How To Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-cicd.md
steps:
Action: 'Deploy' AzureSpringCloud: $(serviceName) AppName: 'testapp'
+ DeploymentType: 'Artifacts'
UseStagingDeployment: false DeploymentName: 'default' Package: $(workingDirectory)/src/$(planetAppName)/publish-deploy-planet.zip
steps:
Action: 'Deploy' AzureSpringCloud: $(serviceName) AppName: 'testapp'
+ DeploymentType: 'Artifacts'
UseStagingDeployment: false DeploymentName: 'default' Package: $(workingDirectory)/src/$(solarAppName)/publish-deploy-solar.zip
To deploy using a pipeline, follow these steps:
Action: 'Deploy' AzureSpringCloud: <your Azure Spring Apps service> AppName: <app-name>
+ DeploymentType: 'Artifacts'
UseStagingDeployment: false DeploymentName: 'default' Package: ./target/your-result-jar.jar
steps:
Action: 'Deploy' AzureSpringCloud: <your Azure Spring Apps service> AppName: <app-name>
+ DeploymentType: 'Artifacts'
UseStagingDeployment: true Package: ./target/your-result-jar.jar - task: AzureSpringCloud@0
To deploy directly to Azure without a separate build step, use the following pip
Action: 'Deploy' AzureSpringCloud: <your Azure Spring Apps service> AppName: <app-name>
+ DeploymentType: 'Artifacts'
UseStagingDeployment: false DeploymentName: 'default' Package: $(Build.SourcesDirectory) ```
+### Deploy from custom image
+
+To deploy directly from an existing container image, use the following pipeline template.
+
+```yaml
+- task: AzureSpringCloud@0
+ inputs:
+ azureSubscription: '<your service connection name>'
+ Action: 'Deploy'
+ AzureSpringCloud: '<your Azure Spring Apps service>'
+ AppName: '<app-name>'
+ DeploymentType: 'CustomContainer'
+ UseStagingDeployment: false
+ DeploymentName: 'default'
+      ContainerRegistry: 'docker.io' # or your Azure Container Registry, for example: 'contoso.azurecr.io'
+ RegistryUsername: '$(username)'
+ RegistryPassword: '$(password)'
+ ContainerImage: '<your image tag>'
+```
+ ::: zone-end ## Next steps
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-with-custom-container-image.md
Title: How to deploy applications in Azure Spring Apps with a custom container image (Preview)
+ Title: How to deploy applications in Azure Spring Apps with a custom container image
description: How to deploy applications in Azure Spring Apps with a custom container image
Last updated 4/28/2022
-# Deploy an application with a custom container image (Preview)
+# Deploy an application with a custom container image
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
This article explains how to deploy Spring Boot applications in Azure Spring App
* The image is pushed to an image registry. For more information, see [Azure Container Registry](../container-instances/container-instances-tutorial-prepare-acr.md). > [!NOTE]
-> The web application must listen on port `1025` for Standard tier and on port `8080` for Enterprise tier. The way to change the port depends on the framework of the application. For example, specify `SERVER_PORT=1025` for Spring Boot applications or `ASPNETCORE_URLS=http://+:1025/` for ASP.Net Core applications. The probe can be disabled for applications that do not listen on any port.
+> The web application must listen on port `1025` for Standard tier and on port `8080` for Enterprise tier. The way to change the port depends on the framework of the application. For example, specify `SERVER_PORT=1025` for Spring Boot applications or `ASPNETCORE_URLS=http://+:1025/` for ASP.Net Core applications. You can disable the probe for applications that don't listen on any port. For more information, see [How to configure health probes and graceful termination periods for apps hosted in Azure Spring Apps](how-to-configure-health-probes-graceful-termination.md).
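For example, here's a hedged sketch of setting the port environment variable when deploying a custom image to a Standard tier instance (the resource names are placeholders, and the `--container-image` and `--env` parameters are assumed from the current `az spring` extension):

```azurecli
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <service-instance-name> \
    --name <app-name> \
    --container-registry <registry-server> \
    --container-image <image-name>:<tag> \
    --env SERVER_PORT=1025
```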
## Deploy your application
To disable listening on a port for images that aren't web applications, add the
The following matrix shows what features are supported in each application type.
-| Feature | Spring Boot Apps - container deployment | Polyglot Apps - container deployment | Notes |
-|||||
-| App lifecycle management | ✔️ | ✔️ | |
-| Support for container registries | ✔️ | ✔️ | |
-| Assign endpoint | ✔️ | ✔️ | |
-| Azure Monitor | ✔️ | ✔️ | |
-| APM integration | ✔️ | ✔️ | Supported by [manual installation](#install-an-apm-into-the-image-manually) |
-| Blue/green deployment | ✔️ | ✔️ | |
-| Custom domain | ✔️ | ✔️ | |
-| Scaling - auto scaling | ✔️ | ✔️ | |
-| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | |
-| Managed Identity | ✔️ | ✔️ | |
-| Spring Cloud Eureka & Config Server | ✔️ | ❌ | |
-| API portal for VMware Tanzu® | ✔️ | ✔️ | Enterprise tier only |
-| Spring Cloud Gateway for VMware Tanzu® | ✔️ | ✔️ | Enterprise tier only |
-| Application Configuration Service for VMware Tanzu® | ✔️ | ❌ | Enterprise tier only |
-| VMware Tanzu® Service Registry | ✔️ | ❌ | Enterprise tier only |
-| VNET | ✔️ | ✔️ | Add registry to [allowlist in NSG or Azure Firewall](#avoid-not-being-able-to-connect-to-the-container-registry-in-a-vnet) |
-| Outgoing IP Address | ✔️ | ✔️ | |
-| E2E TLS | ✔️ | ✔️ | Trust a self-signed CA is supported by [manual installation](#trust-a-certificate-authority-in-the-image) |
-| Liveness and readiness settings | ✔️ | ✔️ | |
-| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | ❌ | The image must include `bash` and JDK with `PATH` specified. |
-| Bring your own storage | ✔️ | ✔️ | |
-| Integrate service binding with Resource Connector | ✔️ | ❌ | |
-| Availability Zone | ✔️ | ✔️ | |
-| App Lifecycle events | ✔️ | ✔️ | |
-| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | |
-| Automate app deployments with Terraform | ✔️ | ✔️ | |
-| Soft Deletion | ✔️ | ✔️ | |
-| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | |
-| SLA | ✔️ | ✔️ | |
+| Feature | Spring Boot Apps - container deployment | Polyglot Apps - container deployment | Notes |
+|--|--|--|--|
+| App lifecycle management | ✔️ | ✔️ | |
+| Support for container registries | ✔️ | ✔️ | |
+| Assign endpoint | ✔️ | ✔️ | |
+| Azure Monitor | ✔️ | ✔️ | |
+| APM integration | ✔️ | ✔️ | Supported by [manual installation](#install-an-apm-into-the-image-manually). |
+| Blue/green deployment | ✔️ | ✔️ | |
+| Custom domain | ✔️ | ✔️ | |
+| Scaling - auto scaling | ✔️ | ✔️ | |
+| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | |
+| Managed Identity | ✔️ | ✔️ | |
+| Spring Cloud Eureka & Config Server | ✔️ | ❌ | |
+| API portal for VMware Tanzu® | ✔️ | ✔️ | Enterprise tier only. |
+| Spring Cloud Gateway for VMware Tanzu® | ✔️ | ✔️ | Enterprise tier only. |
+| Application Configuration Service for VMware Tanzu® | ✔️ | ❌ | Enterprise tier only. |
+| VMware Tanzu® Service Registry | ✔️ | ❌ | Enterprise tier only. |
+| VNET | ✔️ | ✔️ | Add registry to [allowlist in NSG or Azure Firewall](#avoid-not-being-able-to-connect-to-the-container-registry-in-a-vnet). |
+| Outgoing IP Address | ✔️ | ✔️ | |
+| E2E TLS | ✔️ | ✔️ | [Trust a self-signed CA](#trust-a-certificate-authority). |
+| Liveness and readiness settings | ✔️ | ✔️ | |
+| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | ❌ | The image must include Bash and the JDK with `PATH` specified. |
+| Bring your own storage | ✔️ | ✔️ | |
+| Integrate service binding with Resource Connector | ✔️ | ❌ | |
+| Availability Zone | ✔️ | ✔️ | |
+| App Lifecycle events | ✔️ | ✔️ | |
+| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | |
+| Automate app deployments with Terraform | ✔️ | ✔️ | |
+| Soft Deletion | ✔️ | ✔️ | |
+| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | |
+| SLA | ✔️ | ✔️ | |
> [!NOTE] > Polyglot apps include non-Spring Boot Java, NodeJS, AngularJS, Python, and .NET apps.
The following matrix shows what features are supported in each application type.
The following points will help you address common situations when deploying with a custom image.
-### Trust a Certificate Authority in the image
+### Trust a Certificate Authority
+
+There are two options to trust a Certificate Authority:
+
+**Option 1: Upload via Azure Spring Apps**
+
+To load the CA certs into your apps, see [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md). The certs will then be mounted at */etc/azure-spring-cloud/certs/public/*.
+
+**Option 2: Manual installation in the image**
To trust a CA in the image, set the following variables depending on your environment:
To trust a CA in the image, set the following variables depending on your enviro
### Avoid unexpected behavior when images change
-When your application is restarted or scaled out, the latest image will always be pulled. If the image has been changed, the newly started application instances will use the new image while the old instances will continue to use the old image. Avoid using the `latest` tag or overwrite the image without a tag change to avoid unexpected application behavior.
+When your application is restarted or scaled out, the latest image will always be pulled. If the image has been changed, the newly started application instances will use the new image while the old instances will continue to use the old image.
+
+> [!NOTE]
+> Avoid using the `latest` tag or overwrite the image without a tag change to avoid unexpected application behavior.
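One way to follow this guidance, sketched here with assumed image and registry names, is to give every build a unique, build-specific tag before pushing and deploying it:

```bash
# Tag the image with an immutable, build-specific tag instead of reusing 'latest'
docker tag myapp:latest contoso.azurecr.io/myapp:1.0.42
docker push contoso.azurecr.io/myapp:1.0.42
```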
### Avoid not being able to connect to the container registry in a VNet
AppPlatformContainerEventLogs
### Scan your image for vulnerabilities
-We recommend that you use Microsoft Defender for Cloud with ACR to prevent your images from being vulnerable. For more information, see [Microsoft Defender for Cloud] (/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks#scanning-images-in-acr-registries)
+We recommend that you use Microsoft Defender for Cloud with ACR to prevent your images from being vulnerable. For more information, see [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks#scanning-images-in-acr-registries).
### Switch between JAR deployment and container deployment
-You can switch the deployment type directly by redeploying using the following command:
+You can switch the deployment type from JAR deployment to container deployment directly by redeploying using the following command:
```azurecli az spring app deploy \
az spring app deploy \
--service <your-service-name> ```
+Or, to switch back from container deployment to JAR deployment:
+
+```azurecli
+az spring app deploy \
+ --resource-group <your-resource-group> \
+ --name <your-app-name> \
+ --artifact-path <your-jar-file> \
+ --service <your-service-name>
+```
### Create another deployment with an existing JAR deployment

You can create another deployment from an existing JAR deployment by using the following command:
az spring app deployment create \
--service <your-service-name> ```
-> [!NOTE]
-> Automating deployments using Azure Pipelines Tasks or GitHub Actions are not currently supported.
+### CI/CD
+
+Automating deployments using Azure Pipelines tasks or GitHub Actions is now supported. For more information, see [Automate application deployments to Azure Spring Apps](how-to-cicd.md) and [Use Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md).
## Next steps
spring-apps How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-github-actions.md
jobs:
package: ${{ env.ASC_PACKAGE_PATH }} ```
+The following example deploys to the default production deployment in Azure Spring Apps with an existing container image.
+
+```yml
+name: AzureSpringCloud
+on: push
+env:
+ ASC_PACKAGE_PATH: ${{ github.workspace }}
+ AZURE_SUBSCRIPTION: <azure subscription name>
+
+jobs:
+ deploy_to_production:
+ runs-on: ubuntu-latest
+    name: deploy to production with custom container image
+ steps:
+ - name: Checkout GitHub Action
+ uses: actions/checkout@v2
+
+ - name: Login via Azure CLI
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Deploy Custom Image
+ uses: Azure/spring-apps-deploy@v1
+ with:
+ azure-subscription: ${{ env.AZURE_SUBSCRIPTION }}
+ action: deploy
+ service-name: <service instance name>
+ app-name: <app name>
+ deployment-name: <deployment name>
+ container-registry: <your container image registry>
+ registry-username: ${{ env.REGISTRY_USERNAME }}
+ registry-password: ${{ secrets.REGISTRY_PASSWORD }}
+      container-image: <your image tag>
+```

#### Blue-green

The following examples deploy to an existing staging deployment. This deployment won't receive production traffic until it is set as a production deployment. You can set `use-staging-deployment` to true to find the staging deployment automatically, or just allocate a specific `deployment-name`. We will only focus on the spring-cloud-deploy action and leave out the preparatory jobs in the rest of the article.
spring-apps Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-introduction.md
zone_pivot_groups: programming-languages-spring-apps
# Introduction to the sample app
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+ > [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
The instructions in the following quickstarts refer to the source code as needed
::: zone pivot="programming-language-java"
-In this quickstart, we use the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices) that will show you how to deploy apps to the Azure Spring Apps service. The **Pet Clinic** sample demonstrates the microservice architecture pattern and highlights the services breakdown. You will see how services are deployed to Azure with Azure Spring Apps capabilities, including service discovery, config server, logs, metrics, distributed tracing, and developer-friendly tooling support.
+In this quickstart, we use the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices) that will show you how to deploy apps to the Azure Spring Apps service. The **Pet Clinic** sample demonstrates the microservice architecture pattern and highlights the services breakdown. You'll see how to deploy services to Azure with Azure Spring Apps capabilities such as service discovery, config server, logs, metrics, distributed tracing, and developer-friendly tooling support.
To follow the Azure Spring Apps deployment examples, you only need the location of the source code, which is provided as needed.
PetClinic is decomposed into 4 core Spring apps. All of them are independently d
* **Customers service**: Contains general user input logic and validation, including pets and owners information (Name, Address, City, Telephone).
* **Visits service**: Stores and shows visit information and comments for each pet.
* **Vets service**: Stores and shows Veterinarians' information, including names and specialties.
-* **API Gateway**: The API Gateway is a single entry point into the system, used to handle requests and route them to an appropriate service or to invoke multiple services, and aggregate the results. The three core services expose an external API to client. In real-world systems, the number of functions can grow very quickly with system complexity. Hundreds of services might be involved in rendering one complex webpage.
+* **API Gateway**: The API Gateway is a single entry point into the system, used to handle requests and route them to an appropriate service or to invoke multiple services, and aggregate the results. The three core services expose an external API to clients. In real-world systems, the number of functions can grow quickly with system complexity. Hundreds of services might be involved in rendering one complex webpage.
## Infrastructure services hosted by Azure Spring Apps
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md
# Quickstart: Deploy your first application to Azure Spring Apps
+> [!NOTE]
+> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+ > [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
The application code used in this tutorial is a simple app. When you've complete
This quickstart explains how to:

> [!div class="checklist"]
> - Generate a basic Spring project.
> - Provision a service instance.
> - Build and deploy an app with a public endpoint.
At the end of this quickstart, you'll have a working spring app running on Azure
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
- [Azure CLI](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`
+- If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, see the [Prerequisites](./how-to-enterprise-marketplace-offer.md#prerequisites) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
## Provision an instance of Azure Spring Apps
Deploying the application can take a few minutes.
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
- [IntelliJ IDEA](https://www.jetbrains.com/idea/).
- [Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/install-toolkit).
+- If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, see the [Prerequisites](./how-to-enterprise-marketplace-offer.md#prerequisites) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
## Generate a Spring project

Use the following steps to create the project:
-1. Use [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. The following URL provides default settings for you.
+1. Use [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client) to generate a sample project with recommended dependencies for Azure Spring Apps. The following URL provides default settings for you.
    ```url
    https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.10&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client
    ```
-The following image shows the recommended Initializr settings for the *hellospring* sample project.
+The following image shows the recommended Initializr settings for the *hellospring* sample project.
-This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
+This example uses Java version 11. To use a different Java version, change the Java version setting under **Project Metadata**.
:::image type="content" source="media/quickstart/initializr-page.png" alt-text="Screenshot of Spring Initializr page." lightbox="media/quickstart/initializr-page.png":::
Use the following steps to build and deploy your app.
## [Visual Studio Code](#tab/VS-Code)
+## Prerequisites
+
+- If you're deploying Azure Spring Apps Enterprise tier for the first time in the target subscription, see the [Prerequisites](./how-to-enterprise-marketplace-offer.md#prerequisites) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+ ## Deploy a Spring Boot web app to Azure Spring Apps with Visual Studio Code To deploy a Spring Boot web app to Azure Spring Apps, follow the steps in [Java on Azure Spring Apps](https://code.visualstudio.com/docs/java/java-spring-apps).
To learn how to use more Azure Spring capabilities, advance to the quickstart se
> [!div class="nextstepaction"] > [Introduction to the sample app](./quickstart-sample-app-introduction.md)
-More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
+More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples).
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
New-AzStorageAccount -ResourceGroupName $rgName `
    -Name $accountName `
    -Location $location `
    -SkuName Standard_GRS `
- -AllowBlobPublicAccess $false
+ -AllowBlobPublicAccess $true
# Read the AllowBlobPublicAccess property for the newly created storage account.
(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowBlobPublicAccess
To allow or disallow public access for a storage account with a template, create
> > After you update the public access setting for the storage account, it may take up to 30 seconds before the change is fully propagated.
-When a container is configured for anonymous public access, requests to read blobs in that container do not need to be authorized. However, any firewall rules that are configured for the storage account remain in effect and will block anonymous traffic.
+When a container is configured for anonymous public access, requests to read blobs in that container do not need to be authorized. However, any firewall rules that are configured for the storage account remain in effect and will block traffic in line with the configured ACLs.
Allowing or disallowing blob public access requires version 2019-04-01 or later of the Azure Storage resource provider. For more information, see [Azure Storage Resource Provider REST API](/rest/api/storagerp/).
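As a complementary sketch to the PowerShell example above, the same account-level setting can be toggled with the Azure CLI (the account and resource group names are placeholders, and the `--allow-blob-public-access` parameter is assumed from current CLI releases):

```azurecli
az storage account update \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --allow-blob-public-access false
```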
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
A number of solutions exist for migrating existing data to Blob Storage:
- **Azure Data Factory** supports copying data to and from Blob Storage by using the account key, a shared access signature, a service principal, or managed identities for Azure resources. For more information, see [Copy data to or from Azure Blob Storage by using Azure Data Factory](../../data-factory/connector-azure-blob-storage.md?toc=/azure/storage/blobs/toc.json).
- **Blobfuse** is a virtual file system driver for Azure Blob Storage. You can use BlobFuse to access your existing block blob data in your Storage account through the Linux file system. For more information, see [What is BlobFuse? - BlobFuse2 (preview)](blobfuse2-what-is.md).
- **Azure Data Box** service is available to transfer on-premises data to Blob Storage when large datasets or network constraints make uploading data over the wire unrealistic. Depending on your data size, you can request [Azure Data Box Disk](../../databox/data-box-disk-overview.md), [Azure Data Box](../../databox/data-box-overview.md), or [Azure Data Box Heavy](../../databox/data-box-heavy-overview.md) devices from Microsoft. You can then copy your data to those devices and ship them back to Microsoft to be uploaded into Blob Storage.
-- The **Azure Import/Export service** provides a way to import or export large amounts of data to and from your storage account using hard drives that you provide. For more information, see [Use the Microsoft Azure Import/Export service to transfer data to Blob Storage](../../import-export/storage-import-export-service.md).
+- The **Azure Import/Export service** provides a way to import or export large amounts of data to and from your storage account using hard drives that you provide. For more information, see [What is Azure Import/Export service?](../../import-export/storage-import-export-service.md).
## Next steps
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
You can access resources in a storage account by any language that can make HTTP
- [Storage Service Management REST API (Classic)](/previous-versions/azure/reference/ee460790(v=azure.100)) - [Azure NetApp Files REST API](../../azure-netapp-files/azure-netapp-files-develop-with-rest-api.md)
-### Azure Storage data movement API and library references
+### Azure Storage data movement API
-- [Storage Import/Export Service REST API](/rest/api/storageimportexport/)
- [Storage Data Movement Client Library for .NET](/dotnet/api/microsoft.azure.storage.datamovement)

### Tools and utilities
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
Access restriction to the public endpoint is done using the storage account fire
- [Create one or more private endpoints for the storage account](#create-the-storage-account-private-endpoint) and disable access to the public endpoint. This ensures that only traffic originating from within the desired virtual networks can access the Azure file shares within the storage account.
- Restrict the public endpoint to one or more virtual networks. This works by using a capability of the virtual network called *service endpoints*. When you restrict the traffic to a storage account via a service endpoint, you are still accessing the storage account via the public IP address (see the sketch after this list).
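Here's a hedged Azure CLI sketch of the service-endpoint approach (all names are placeholders; the commands and parameters are assumed from current CLI releases):

```azurecli
# Enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update \
    --resource-group <resource-group-name> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --service-endpoints Microsoft.Storage

# Allow traffic from that subnet through the storage account firewall
az storage account network-rule add \
    --resource-group <resource-group-name> \
    --account-name <storage-account-name> \
    --vnet-name <vnet-name> \
    --subnet <subnet-name>
```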
-#### Disable access to the storage account public endpoint
+> [!Note]
+> The **Allow Azure services on the trusted services list to access this storage account** exception must be selected on your storage account to allow trusted first party Microsoft services such as Azure File Sync to access the storage account. To learn more, see [Grant access to trusted Azure services](../common/storage-network-security.md#grant-access-to-trusted-azure-services).
+
+#### Grant access to trusted Azure services and disable access to the storage account public endpoint
When access to the public endpoint is disabled, the storage account can still be accessed through its private endpoints. Otherwise valid requests to the storage account's public endpoint will be rejected. # [Portal](#tab/azure-portal)
When access to the public endpoint is disabled, the storage account can still be
-#### Restrict access to the storage account public endpoint to specific virtual networks
+#### Grant access to trusted Azure services and restrict access to the storage account public endpoint to specific virtual networks
When you restrict the storage account to specific virtual networks, you are allowing requests to the public endpoint from within the specified virtual networks. This works by using a capability of the virtual network called *service endpoints*. This can be used with or without private endpoints. # [Portal](#tab/azure-portal)
The following pre-defined policies are available for Azure Files and Azure File
| Action | Service | Condition | Policy name |
|-|-|-|-|
-| Audit | Azure Files | The storage account's public endpoint is enabled. See [Disable access to the storage account public endpoint](#disable-access-to-the-storage-account-public-endpoint) for more information. | Storage accounts should restrict network access |
+| Audit | Azure Files | The storage account's public endpoint is enabled. See [Grant access to trusted Azure services and disable access to the storage account public endpoint](#grant-access-to-trusted-azure-services-and-disable-access-to-the-storage-account-public-endpoint) for more information. | Storage accounts should restrict network access |
| Audit | Azure File Sync | The Storage Sync Service's public endpoint is enabled. See [Disable access to the Storage Sync Service public endpoint](#disable-access-to-the-storage-sync-service-public-endpoint) for more information. | Public network access should be disabled for Azure File Sync |
| Audit | Azure Files | The storage account needs at least one private endpoint. See [Create the storage account private endpoint](#create-the-storage-account-private-endpoint) for more information. | Storage account should use a private link connection |
| Audit | Azure File Sync | The Storage Sync Service needs at least one private endpoint. See [Create the Storage Sync Service private endpoint](#create-the-storage-sync-service-private-endpoint) for more information. | Azure File Sync should use private link |
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 11/07/2022 Last updated : 11/08/2022
Azure Files is updated regularly to offer new features and enhancements. This ar
### 2022 quarter 4 (October, November, December) #### Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities on Azure Files is generally available
-This [feature](storage-files-identity-auth-azure-active-directory-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2022 and expands it to support more use cases (SMB only). Hybrid identities, which are user identities created in Active Directory Domain Services (AD DS) and synced to Azure AD, can mount and access Azure file shares without the need for line-of-sight to an Active Directory domain controller. While the initial support is limited to hybrid identities, it's a significant milestone as we simplify identity-based authentication for Azure Files customers.
+This [feature](storage-files-identity-auth-azure-active-directory-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2022 and expands it to support more use cases (SMB only). Hybrid identities, which are user identities created in Active Directory Domain Services (AD DS) and synced to Azure AD, can mount and access Azure file shares without the need for line-of-sight to an Active Directory domain controller. While the initial support is limited to hybrid identities, it's a significant milestone as we simplify identity-based authentication for Azure Files customers. [Read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/general-availability-azure-active-directory-kerberos-with-azure/ba-p/3612111).
### 2022 quarter 2 (April, May, June) #### SUSE Linux support for SAP HANA System Replication (HSR) and Pacemaker
storage Storage Files Configure P2s Vpn Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md
Title: Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files | Microsoft Docs
+ Title: Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files
description: How to configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files Previously updated : 05/27/2022 Last updated : 11/08/2022
The article details the steps to configure a Point-to-Site VPN on Windows (Windo
- A virtual network with a private endpoint for the storage account containing the Azure file share you want to mount on-premises. To learn more about how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell).

-- A [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) must be created on the virtual network.
+- A [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) must be created on the virtual network, and you'll need to know the name of the gateway subnet.
## Collect environment information

In order to set up the point-to-site VPN, we first need to collect some information about your environment for use throughout the guide. See the [prerequisites](#prerequisites) section if you have not already created a storage account, virtual network, gateway subnet, and/or private endpoints.
Deploying this service requires two basic components:
1. A public IP address that will identify the gateway to your clients wherever they are in the world
2. The root certificate you created earlier, which will be used to authenticate your clients
-Remember to replace `<desired-vpn-name-here>` and `<desired-region-here>` in the below script with the proper values for these variables.
+Remember to replace `<desired-vpn-name-here>`, `<desired-region-here>`, and `<gateway-subnet-name-here>` in the below script with the proper values for these variables.
> [!Note] > Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this PowerShell script will block for the deployment to be completed. This is expected.
Remember to replace `<desired-vpn-name-here>` and `<desired-region-here>` in the
$vpnName = "<desired-vpn-name-here>"
$publicIpAddressName = "$vpnName-PublicIP"
$region = "<desired-region-here>"
+$gatewaySubnet = "<gateway-subnet-name-here>"
$publicIPAddress = New-AzPublicIpAddress ` -ResourceGroupName $resourceGroupName `
Export-PfxCertificate `
```

## Configure the VPN client
-The Azure virtual network gateway will create a downloadable package with configuration files required to initialize the VPN connection on your on-premises Windows machine. We will configure the VPN connection using the [Always On VPN](/windows-server/remote/remote-access/vpn/always-on-vpn/) feature of Windows 10/Windows Server 2016+. This package also contains executable packages which will configure the legacy Windows VPN client, if so desired. This guide uses Always On VPN rather than the legacy Windows VPN client as the Always On VPN client allows end-users to connect/disconnect from the Azure VPN without having administrator permissions to their machine.
+The Azure virtual network gateway will create a downloadable package with configuration files required to initialize the VPN connection on your on-premises Windows machine. We will configure the VPN connection using the [Always On VPN](/windows-server/remote/remote-access/vpn/always-on-vpn/) feature introduced in Windows 10/Windows Server 2016. This package also contains executable packages which will configure the legacy Windows VPN client, if so desired. This guide uses Always On VPN rather than the legacy Windows VPN client as the Always On VPN client allows end-users to connect/disconnect from the Azure VPN without having administrator permissions to their machine.
The following script will install the client certificate required for authentication against the virtual network gateway, and then download and install the VPN package. Remember to replace `<computer1>` and `<computer2>` with the desired computers. You can run this script on as many machines as you desire by adding more PowerShell sessions to the `$sessions` array. Your user account must be an administrator on each of these machines. If one of these machines is the local machine you are running the script from, you must run the script from an elevated PowerShell session.
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
Previously updated : 11/01/2022 Last updated : 11/08/2022
Both share-level and file/directory level permissions are enforced when a user a
The following table contains the Azure RBAC permissions related to this configuration. If you're using Azure Storage Explorer, you'll also need the [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access) role in order to read/access the file share.
-| Built-in role | NTFS permission | Resulting access |
+| Share-level permission (built-in role) | NTFS permission | Resulting access |
||||
|Storage File Data SMB Share Reader | Full control, Modify, Read, Write, Execute | Read & execute |
| | Read | Read |
The following permissions are included on the root directory of a file share:
- `NT AUTHORITY\SYSTEM:(F)`
- `CREATOR OWNER:(OI)(CI)(IO)(F)`
+For more information on these advanced permissions, see [the command-line reference for icacls](/windows-server/administration/windows-commands/icacls).
+ ## Mount the file share using your storage account key Before you configure Windows ACLs, you must first mount the file share by using your storage account key. To do this, log into a domain-joined device, open a Windows command prompt, and run the following command. Remember to replace `<YourStorageAccountName>`, `<FileShareName>`, and `<YourStorageAccountKey>` with your own values. If Z: is already in use, replace it with an available drive letter. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` PowerShell cmdlet.
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-performance.md
To confirm whether your share is being throttled, you can access and use Azure m
- ClientShareIngressThrottlingError - ClientShareIopsThrottlingError
+ If a throttled request was authenticated with Kerberos, you might see a prefix indicating the authentication protocol, such as:
+
+ - KerberosSuccessWithShareEgressThrottling
+ - KerberosSuccessWithShareIngressThrottling
+ To learn more about each response type, see [Metric dimensions](./storage-files-monitoring-reference.md#metrics-dimensions). ![Screenshot of the metrics options for premium file shares, showing a "Response type" property filter.](media/storage-troubleshooting-premium-fileshares/metrics.png)
stream-analytics Stream Analytics Job Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-reliability.md
Title: Avoid service interruptions in Azure Stream Analytics jobs
description: This article describes guidance on making your Stream Analytics jobs upgrade resilient. - Previously updated : 06/21/2019 Last updated : 11/07/2022
Part of being a fully managed service is the capability to introduce new service
## How do Azure paired regions address this concern?
-Stream Analytics guarantees jobs in paired regions are updated in separate batches. As a result there is a sufficient time gap between the updates to identify potential issues and remediate them.
-
-_With the exception of Central India_ (whose paired region, South India, does not have Stream Analytics presence), the deployment of an update to Stream Analytics would not occur at the same time in a set of paired regions. Deployments in multiple regions **in the same group** may occur **at the same time**.
+Stream Analytics guarantees that jobs in paired regions are updated in separate batches. The deployment of an update to Stream Analytics would not occur at the same time in a set of paired regions. As a result, there is a sufficient time gap between the updates to identify potential issues and remediate them.
The article on **[availability and paired regions](../availability-zones/cross-region-replication-azure.md)** has the most up-to-date information on which regions are paired.
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
This article helps you to understand the functions of Azure Synapse Link for Azu
A link connection identifies a mapping relationship between an Azure SQL database and an Azure Synapse Analytics dedicated SQL pool. You can create, manage, monitor and delete link connections in your Synapse workspace. When creating a link connection, you can select both source database and a destination Synapse dedicated SQL pool so that the operational data from your source database will be automatically replicated to the specified destination Synapse dedicated SQL pool. You can also add or remove one or more tables from your source database to be replicated.
-You can start or stop a link connection. When started, a link connection will start from a full initial load from your source database followed by incremental change feeds via the change feed feature in Azure SQL database. When you stop a link connection, the updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool. For more information, see [Azure Synapse Link change feed for SQL Server 2022 and Azure SQL Database](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+You can start, stop, pause, or resume a link connection. When started, a link connection begins with a full initial load from your source database, followed by incremental change feeds via the change feed feature in Azure SQL Database. When you stop a link connection, updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool; if you start the link connection again, it performs a full initial load from your source database. When you pause a link connection, updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool. When you resume the link connection, it continues synchronizing updates to your Synapse dedicated SQL pool from the point where it was paused. For more information, see [Azure Synapse Link change feed for SQL Server 2022 and Azure SQL Database](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
You need to select compute core counts for each link connection to replicate your data. The core counts represent the compute power, and they affect your data replication latency and cost.
You can monitor Azure Synapse Link for SQL at the link and table levels. For eac
* **Initial:** a link connection is created but not started. You will not be charged in initial state.
* **Starting:** a link connection is setting up compute engines to replicate data.
* **Running:** a link connection is replicating data.
-* **Stopping:** a link connection is shutting down the compute engines.
+* **Stopping:** a link connection is going to be stopped. The compute engine is being shut down.
* **Stopped:** a link connection is stopped. You will not be charged in stopped state.
+* **Pausing:** a link connection is going to be paused. The compute engine is being shut down.
+* **Paused:** a link connection is paused. You will not be charged in paused state.
+* **Resuming:** a link connection is going to be resumed by setting up compute engines to continue to replicate the changes.
For each table, you'll see the following status:
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
This article helps you to understand the functions of Azure Synapse Link for SQL
A link connection identifies a mapping relationship between an SQL Server 2022 and an Azure Synapse Analytics dedicated SQL pool. You can create, manage, monitor and delete link connections in your Synapse workspace. When creating a link connection, you can select both source database and destination Synapse dedicated SQL pool so that the operational data from your source database will be automatically replicated to the specified destination Synapse dedicated SQL pool. You can also add or remove one or more tables from your source database to be replicated.
-You can start or stop a link connection. When started, a link connection will start from a full initial load from your source database followed by incremental change feeds via change feed feature in SQL Server 2022. When you stop a link connection, the updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool. For more information, see [Azure Synapse Link change feed for SQL Server 2022 and Azure SQL Database](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+You can start, stop, pause, or resume a link connection. When started, a link connection begins with a full initial load from your source database, followed by incremental change feeds via the change feed feature in SQL Server 2022. When you stop a link connection, updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool; if you start the link connection again, it performs a full initial load from your source database. When you pause a link connection, updates made to the operational data won't be synchronized to your Synapse dedicated SQL pool. When you resume the link connection, it continues synchronizing updates to your Synapse dedicated SQL pool from the point where it was paused. For more information, see [Azure Synapse Link change feed for SQL Server 2022 and Azure SQL Database](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
You need to select compute core counts for each link connection to replicate your data. The core counts represent the compute power, and they affect your data replication latency and cost.
You can monitor Azure Synapse Link for SQL at the link and table levels. For eac
* **Initial:** a link connection is created but not started. You will not be charged in initial state.
* **Starting:** a link connection is setting up compute engines to replicate data.
* **Running:** a link connection is replicating data.
-* **Stopping:** a link connection is shutting down the compute engines.
+* **Stopping:** a link connection is going to be stopped. The compute engine is being shut down.
* **Stopped:** a link connection is stopped. You will not be charged in stopped state.
+* **Pausing:** a link connection is going to be paused. The compute engine is being shut down.
+* **Paused:** a link connection is paused. You will not be charged in paused state.
+* **Resuming:** a link connection is going to be resumed by setting up compute engines to continue to replicate the changes.
For each table, you'll see the following status:
virtual-machines Boot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-diagnostics.md
An alternative boot diagnostic experience is to use a custom storage account. A
- Make sure that access through the storage firewall is allowed for the Azure platform to publish the screenshot and serial log. To do this, go to the custom boot diagnostics storage account in the Azure portal and then select **Networking** from the **Security + networking** section. Check if the **Allow Azure services on the trusted services list to access this storage account** checkbox is selected. -- Allow storage firewall for users to view the boot screenshots or serial logs. To do this, add your network or the client/browser's Internet IPs as firewall exclusions. For more information, see [Configure Azure Storage firewalls and virtual networks](https://github.com/genlin/azure-docs-pr/blob/patch-5/articles/storage/common/storage-network-security.md).
+- Allow storage firewall for users to view the boot screenshots or serial logs. To do this, add your network or the client/browser's Internet IPs as firewall exclusions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
To configure the storage firewall for Azure Serial Console, see [Use Serial Console with custom boot diagnostics storage account firewall enabled](/troubleshoot/azure/virtual-machines/serial-console-windows#use-serial-console-with-custom-boot-diagnostics-storage-account-firewall-enabled).
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 managed disk
description: Learn how to deploy a Premium SSD v2. Previously updated : 10/12/2022 Last updated : 11/08/2022
Update-AzVM -VM $vm -ResourceGroupName $resourceGroupName
:::image type="content" source="media/disks-deploy-premium-v2/premv2-create-data-disk.png" alt-text="Screenshot highlighting create and attach a new disk on the disk page." lightbox="media/disks-deploy-premium-v2/premv2-create-data-disk.png":::
-1. Select the **Disk SKU** and select **Premium SSD v2 (Preview)**.
+1. Select the **Disk SKU** and select **Premium SSD v2**.
:::image type="content" source="media/disks-deploy-premium-v2/premv2-select.png" alt-text="Screenshot selecting Premium SSD v2 SKU." lightbox="media/disks-deploy-premium-v2/premv2-select.png":::
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 10/14/2022 Last updated : 11/07/2022
Ultra disks have the unique capability of allowing you to set your performance b
The following formulas explain how the performance attributes can be set, since they're user modifiable: -- DiskIOPSReadWrite/DiskIOPSReadOnly:
- - IOPS limits of 300 IOPS/GiB, up to a maximum of 160 K IOPS per disk
- - Minimum of 100 IOPS
- - DiskIOPSReadWrite + DiskIOPSReadOnly is at least 2 IOPS/GiB
-- DiskMBpsRead Write/DiskMBpsReadOnly:
- - The throughput limit of a single disk is 256 KiB/s for each provisioned IOPS, up to a maximum of 2000 MBps per disk
- - The minimum guaranteed throughput per disk is 4KiB/s for each provisioned IOPS, with an overall baseline minimum of 1 MBps
+- DiskIOPSReadWrite:
+ - Has a baseline minimum of 100 IOPS for disks 100 GiB and smaller.
+ - For disks larger than 100 GiB, the baseline minimum IOPS you can set increases by 1 per GiB. So the lowest you can set DiskIOPSReadWrite for a 101 GiB disk is 101 IOPS.
+ - The maximum you can set this attribute to is determined by the size of your disk; the formula is 300 * GiB, up to a maximum of 160,000.
+- DiskMBpsReadWrite:
+ - The minimum throughput (MB/s) of this attribute is determined by your IOPS; the formula is 4 KiB per second per IOPS. So if you had 101 IOPS, the minimum MB/s you can set is 1.
+ - The maximum you can set this attribute to is determined by the number of IOPS you set; the formula is 256 KiB per second per IOPS, up to a maximum of 4,000 MB/s.
+- DiskIOPSReadOnly:
+ - The minimum baseline IOPS for this attribute is 100. For DiskIOPSReadOnly, the baseline doesn't increase with disk size.
+ - The maximum you can set this attribute to is determined by the size of your disk; the formula is 300 * GiB, up to a maximum of 160,000.
+- DiskMBpsReadOnly:
+ - The minimum throughput (MB/s) for this attribute is 1. For DiskMBpsReadOnly, the baseline doesn't increase with IOPS.
+ - The maximum you can set this attribute to is determined by the number of IOPS you set; the formula is 256 KiB per second per IOPS, up to a maximum of 4,000 MB/s.
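As a quick, hypothetical illustration of how these formulas combine (the 512 GiB size and 80,000 provisioned IOPS below are arbitrary example values, not taken from the article):

```powershell
# Rough back-of-the-envelope calculation of the limits described above.
# Values are arbitrary examples; the KiB-to-MB conversion is approximate.
$diskSizeGiB = 512

# DiskIOPSReadWrite: baseline is 100 IOPS (or 1 IOPS per GiB above 100 GiB); ceiling is 300 * GiB, capped at 160,000.
$minIops = [Math]::Max(100, $diskSizeGiB)
$maxIops = [Math]::Min(300 * $diskSizeGiB, 160000)

# DiskMBpsReadWrite: floor is 4 KiB/s per provisioned IOPS (at least 1 MB/s); ceiling is 256 KiB/s per IOPS, capped at 4,000 MB/s.
$provisionedIops = 80000
$minMBps = [Math]::Max(1, [Math]::Floor($provisionedIops * 4 / 1024))
$maxMBps = [Math]::Min([Math]::Floor($provisionedIops * 256 / 1024), 4000)

"IOPS range: $minIops-$maxIops; MB/s range at $provisionedIops IOPS: $minMBps-$maxMBps"
```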
#### Examples
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
An [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Images can be created from a VM, VHD, snapshot, managed image, or another image version.
-The Azure Compute Gallery lets you share your custom VM images with others in your organization, within or across regions, within an Azure AD tenant, or publicly using a [community gallery (preview)](azure-compute-gallery.md#community). Choose which images you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group images.
+The Azure Compute Gallery lets you share your custom VM images with others in your organization, within or across regions, within an Azure AD tenant, or publicly using a [community gallery (preview)](azure-compute-gallery.md#community). Choose which images you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group images. Many new features like ARM64, Accelerated Networking and TrustedVM are only supported through Azure Compute Gallery and not available for managed images.
The Azure Compute Gallery feature has multiple resource types:
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
-## New Features
-Many new features like ARM64, Accelerated Networking, TrustedVM etc. are only supported through Azure Compute Gallery and not available for 'Managed images'. For a complete list of new features available through Azure Compute Gallery, please refer
-https://learn.microsoft.com/cli/azure/sig/image-definition?view=azure-cli-latest#az-sig-image-definition-create
## Next steps
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
This customizer is supported by Windows directories and Linux paths, but there a
If there's an error trying to download the file, or put it in a specified directory, the customize step will fail, and the error will be in the customization.log. > [!NOTE]
-> The file customizer is only suitable for small file downloads, < 20MB. For larger file downloads, use a script or inline command, then use code to download files, such as, Linux `wget` or `curl`, Windows, `Invoke-WebRequest`. For files that are in Azure storage, ensure that you assign an identity with permissions to view that file to the build VM by following the documentation here: [User Assigned Identity for the Image Builder Build VM](https://learn.microsoft.com/azure/virtual-machines/linux/image-builder-json?tabs=json%2Cazure-powershell#user-assigned-identity-for-the-image-builder-build-vm). Any file that is not stored in Azure must be publicly accessible for Azure Image Builder to be able to download it.
+> The file customizer is only suitable for small file downloads, < 20MB. For larger file downloads, use a script or inline command, then use code to download files, such as, Linux `wget` or `curl`, Windows, `Invoke-WebRequest`. For files that are in Azure storage, ensure that you assign an identity with permissions to view that file to the build VM by following the documentation here: [User Assigned Identity for the Image Builder Build VM](#user-assigned-identity-for-the-image-builder-build-vm). Any file that is not stored in Azure must be publicly accessible for Azure Image Builder to be able to download it.
- **sha256Checksum** - generate the SHA256 checksum of the file locally, update the checksum value to lowercase, and Image Builder will validate the checksum during the deployment of the image template.
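For example, one way (among several) to produce the lowercase SHA256 value this property expects is sketched below; the file name is a placeholder:

```powershell
# Compute the SHA256 checksum of a local file and lowercase it for the
# sha256Checksum property. "installer.exe" is a placeholder file name.
$checksum = (Get-FileHash -Path ".\installer.exe" -Algorithm SHA256).Hash.ToLower()
Write-Output $checksum
```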
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
Then, to implement this solution using CLI, use the following command:
```azurecli
az role assignment create -g {ResourceGroupName} --assignee {AibrpSpOid} --role Contributor
```
-To implement this solution in portal, follow the instructions in this documentation: [Assign Azure roles using the Azure portal - Azure RBAC](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current).
+To implement this solution in portal, follow the instructions in this documentation: [Assign Azure roles using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-portal.md).
-For [Step 1: Identify the needed scope](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-1-identify-the-needed-scope): The needed scope is your resource group.
+For [Step 1: Identify the needed scope](../../role-based-access-control/role-assignments-portal.md#step-1-identify-the-needed-scope): The needed scope is your resource group.
-For [Step 3: Select the appropriate role](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-3-select-the-appropriate-role): The role is Contributor.
+For [Step 3: Select the appropriate role](../../role-based-access-control/role-assignments-portal.md#step-3-select-the-appropriate-role): The role is Contributor.
-For [Step 4: Select who needs access](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-4-select-who-needs-access): Select member “Azure Virtual Machine Image Builder”
+For [Step 4: Select who needs access](../../role-based-access-control/role-assignments-portal.md#step-4-select-who-needs-access): Select member “Azure Virtual Machine Image Builder”
-Then proceed to [Step 6: Assign role](https://learn.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#step-6-assign-role) to assign the role.
+Then proceed to [Step 6: Assign role](../../role-based-access-control/role-assignments-portal.md#step-6-assign-role) to assign the role.
## Troubleshoot build failures
virtual-machines Nda100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nda100-v4-series.md
Last updated 05/26/2021
# ND A100 v4-series
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
The ND A100 v4 series virtual machine is a new flagship addition to the Azure GPU family, designed for high-end Deep Learning training and tightly-coupled scale-up and scale-out HPC workloads.
virtual-machines Unmanaged Disks Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/unmanaged-disks-deprecation.md
Previously updated : 10/03/2022 Last updated : 11/08/2022
With managed disks, you don't have to worry about managing storage accounts for
## How does this affect me? -- As of November 18, 2022, new customer subscriptions won't be eligible to create unmanaged disks.
+- As of June 30, 2023, new subscriptions won't be eligible to create unmanaged disks.
- As of September 30, 2023, existing customers won't be able to create new unmanaged disks.
- On September 30, 2025, customers will no longer be able to start IaaS VMs by using unmanaged disks. Any VMs that are still running or allocated will be stopped and deallocated.
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 10/31/2022 Last updated : 11/08/2022
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- November 07, 2022: Added HANA hook susChkSrv for scale-up pacemaker cluster in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md).
+- November 07, 2022: Added monitor operation for azure-lb resource in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [SAP HANA scale-out with HSR and Pacemaker on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [Set up IBM Db2 HADR on Azure virtual machines (VMs)](dbms-guide-ha-ibm.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md), [Azure VMs high availability for SAP NetWeaver on SLES](high-availability-guide-suse.md), [High availability for NFS on Azure VMs on SLES](high-availability-guide-suse-nfs.md), [Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](high-availability-guide-suse-multi-sid.md)
- October 31, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) to fix script location for DRBD 9.0
- October 31, 2022: Change in [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) to update the guideline for sizing `/hana/shared`
- October 27, 2022: Adding Ev4 and Ev5 VM families and updated OS releases to table in [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_sapase.md)
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
A network security group contains zero, or as many rules as desired, within Azur
Security rules are evaluated and applied based on the five-tuple (source, source port, destination, destination port, and protocol) information. You can't create two security rules with the same priority and direction. A flow record is created for existing connections. Communication is allowed or denied based on the connection state of the flow record. The flow record allows a network security group to be stateful. If you specify an outbound security rule to any address over port 80, for example, it's not necessary to specify an inbound security rule for the response to the outbound traffic. You only need to specify an inbound security rule if communication is initiated externally. The opposite is also true. If inbound traffic is allowed over a port, it's not necessary to specify an outbound security rule to respond to traffic over the port.
-Existing connections may not be interrupted when you remove a security rule that enabled the flow. Traffic flows are interrupted when connections are stopped and no traffic is flowing in either direction, for at least a few minutes.
+Existing connections may not be interrupted when you remove a security rule that enabled the flow. Traffic flows are interrupted when connections are stopped and no traffic is flowing in either direction, for at least a few minutes.
+
+Modifying NSG rules only affects connections that are established afterwards. When a new rule is created or an existing rule is updated in a network security group, it applies only to new flows and new connections; existing connections are not updated with the new rules.
There are limits to the number of security rules you can create in a network security group. For details, see [Azure limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
virtual-wan Routing Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/routing-deep-dive.md
# Virtual WAN routing deep dive
-[Azure Virtual WAN][virtual-wan-overview] is a networking solution that allows creating sophisticated networking topologies very easily: it encompasses routing across Azure regions between Azure VNets and on-premises locations via Point-to-Site VPN, Site-to-Site VPN, [ExpressRoute][er] and [integrated SDWAN appliances][virtual-wan-nva], including the option to [secure the traffic][virtual-wan-secured-hub]. In most scenarios it is not required any deep knowledge of how Virtual WAN internal routing works, but in certain situations it can be useful to understand Virtual WAN routing concepts.
+[Azure Virtual WAN][virtual-wan-overview] is a networking solution that allows creating sophisticated networking topologies easily: it encompasses routing across Azure regions between Azure VNets and on-premises locations via Point-to-Site VPN, Site-to-Site VPN, [ExpressRoute][er] and [integrated SDWAN appliances][virtual-wan-nva], including the option to [secure the traffic][virtual-wan-secured-hub]. In most scenarios no deep knowledge of how Virtual WAN internal routing works is required, but in certain situations it can be useful to understand Virtual WAN routing concepts.
-This document will explore sample Virtual WAN scenarios that will explain some of the behaviors that organizations might encounter when interconnecting their VNets and branches in complex networks. The scenarios shown in this article are by no means design recommendations, they are just sample topologies specifically designed to demonstrate certain Virtual WAN functionalities.
+This document will explore sample Virtual WAN scenarios that will explain some of the behaviors that organizations might encounter when interconnecting their VNets and branches in complex networks. The scenarios shown in this article are by no means design recommendations, they are just sample topologies designed to demonstrate certain Virtual WAN functionalities.
## Scenario 1: topology with default routing preference
-The first scenario in this article will analyze a topology with two Virtual WAN hubs, one ExpressRoute circuit connected to each hub, one branch connected over VPN to hub 1, and a second branch connected via SDWAN to an NVA deployed inside of hub 2. In each hub there are VNets connected directly (VNets 11 and 21) and indirectly through an NVA (VNets 121, 122, 221 and 222). VNet 12 exchanges routing information with hub 1 via BGP (see [BGP peering with a virtual hub][virtual-wan-bgp]), and VNet 22 is configured with static routes, so that differences between both options can be shown.
+The first scenario in this article will analyze a topology with two Virtual WAN hubs, one ExpressRoute circuit connected to each hub, one branch connected over VPN to hub 1, and a second branch connected via SDWAN to an NVA deployed inside of hub 2. In each hub, there are VNets connected directly (VNets 11 and 21) and indirectly through an NVA (VNets 121, 122, 221 and 222). VNet 12 exchanges routing information with hub 1 via BGP (see [BGP peering with a virtual hub][virtual-wan-bgp]), and VNet 22 has static routes configured, so that differences between both options can be shown.
-In each hub the VPN and SDWAN appliances server to a dual purpose: on one side they advertise their own individual prefixes (`10.4.1.0/24` over VPN in hub 1 and `10.5.3.0/24` over SDWAN in hub 2), and on the other they advertise the same prefixes as the ExpressRoute circuits in the same region (`10.4.2.0/24` in hub 1 and `10.5.2.0/24` in hub 2). This will be used to demonstrate how the [Virtual WAN hub routing preference][virtual-wan-hrp] works.
+In each hub, the VPN and SDWAN appliances serve a dual purpose: on one side they advertise their own individual prefixes (`10.4.1.0/24` over VPN in hub 1 and `10.5.3.0/24` over SDWAN in hub 2), and on the other they advertise the same prefixes as the ExpressRoute circuits in the same region (`10.4.2.0/24` in hub 1 and `10.5.2.0/24` in hub 2). This difference will be used to demonstrate how the [Virtual WAN hub routing preference][virtual-wan-hrp] works.
All VNet and branch connections are associated and propagating to the default route table. Although the hubs are secured (there is an Azure Firewall deployed in every hub), they are not configured to secure private or Internet traffic. Doing so would result in all connections propagating to the `None` route table, which would remove all non-static routes from the `Default` route table and defeat the purpose of this article since the effective route blade in the portal would be almost empty (with the exception of the static routes to send traffic to the Azure Firewall). :::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1.png" alt-text="Diagram that shows a Virtual WAN design with two ExpressRoute circuits and two V P N branches." :::
+> [!IMPORTANT]
+> The previous diagram shows two secured virtual hubs, but this topology is not supported yet. For more information, see [How to configure Virtual WAN Hub routing intent and routing policies][virtual-wan-intent].
+ Out of the box the Virtual WAN hubs will exchange information between each other so that communication across regions is enabled. You can inspect the effective routes in Virtual WAN route tables: for example, the following picture shows the effective routes in hub 1: :::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-hub-1-no-route.png" alt-text="Screenshot of effective routes in Virtual WAN hub 1." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-hub-1-no-route-expanded.png":::
In hub 2 the route for `10.2.20.0/22` to the indirect spokes VNet 221 (10.2.21.0
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-add-route.png" alt-text="Screenshot that shows how to add a static route to a Virtual WAN hub." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-add-route-expanded.png":::
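For readers who prefer scripting over the portal, adding a comparable static route with Azure PowerShell might look roughly like the sketch below; the resource group, hub, and route names and the connection resource ID are placeholders, and the exact parameters may vary by Az.Network version:

```powershell
# Hypothetical sketch: add a static route for 10.2.20.0/22 to the hub's default
# route table, pointing at the VNet connection in front of the NVA.
# All names and the connection resource ID are placeholders.
$rgName  = "<resource-group>"
$hubName = "<virtual-hub-2>"

$route = New-AzVHubRoute -Name "to-indirect-spokes" `
    -Destination @("10.2.20.0/22") -DestinationType "CIDR" `
    -NextHop "<resource ID of the VNet connection toward the NVA>" -NextHopType "ResourceId"

# Append the new route to the existing routes in the default route table
$defaultRt = Get-AzVHubRouteTable -ResourceGroupName $rgName -VirtualHubName $hubName -Name "defaultRouteTable"
Update-AzVHubRouteTable -ResourceGroupName $rgName -VirtualHubName $hubName -Name "defaultRouteTable" `
    -Route ($defaultRt.Route + $route)
```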
-After adding the static route hub 1 will contain the `10.2.20.0/22` route as well:
+After adding the static route, hub 1 will contain the `10.2.20.0/22` route as well:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-hub-1-with-route.png" alt-text="Screenshot of effective routes in Virtual hub 1." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-hub-1-with-route-expanded.png":::
Even if hub 1 knows the ExpressRoute prefix from circuit 2 (`10.5.2.0/24`) and h
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2.png" alt-text="Diagram showing a Virtual WAN design with two ExpressRoute circuits with Global Reach and two V P N branches.":::
+> [!IMPORTANT]
+> The previous diagram shows two secured virtual hubs, but this topology is not supported yet. For more information, see [How to configure Virtual WAN Hub routing intent and routing policies][virtual-wan-intent].
+ As explained in [Virtual hub routing preference (Preview)][virtual-wan-hrp], by default Virtual WAN will favor routes coming from ExpressRoute. Since routes are advertised from hub 1 to ExpressRoute circuit 1, from circuit 1 to circuit 2, and from circuit 2 to hub 2 (and vice versa), the virtual hubs will now prefer this path over the more direct inter-hub link, as the effective routes in hub 1 show:
Now the routes for remote spokes and branches in hub 1 will have a next hop of `
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-aspath-hub-1.png" alt-text="Screenshot of effective routes in Virtual hub 1 with Global Reach and routing preference A S Path." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2-aspath-hub-1-expanded.png":::
-You can see that the IP prefix for hub 2 (`192.168.2.0/23`) still appears reachable over the Global Reach link, but this shouldn't impact traffic as there shouldn't be any traffic specifically addressed to devices in hub 2. This might be an issue though if there were NVAs in both hubs establishing SDWAN tunnels between each other.
+You can see that the IP prefix for hub 2 (`192.168.2.0/23`) still appears reachable over the Global Reach link, but this shouldn't impact traffic as there shouldn't be any traffic addressed to devices in hub 2. This might be an issue though if there were NVAs in both hubs establishing SDWAN tunnels between each other.
However, note that `10.4.2.0/24` is now preferred over the VPN Gateway. This can happen if the routes advertised via VPN have a shorter AS path than the routes advertised over ExpressRoute. After configuring the on-premises VPN device to prepend its Autonomous System Number (`65501`) to the VPN routes to make them less preferable, hub 1 now selects ExpressRoute as the next hop for `10.4.2.0/24`:
Hub 2 will show a similar table for the effective routes, where the VNets and br
## Scenario 3: Cross-connecting the ExpressRoute circuits to both hubs
-In order to add direct links between the Azure regions and the on-premises locations connected via ExpressRoute, it is often desirable connecting an single ExpressRoute circuit to multiple Virtual WAN hubs in a topology some times described as "bow tie", as the following topology shows:
+In order to add direct links between the Azure regions and the on-premises locations connected via ExpressRoute, it is often desirable to connect a single ExpressRoute circuit to multiple Virtual WAN hubs in a topology sometimes described as "bow tie", as the following topology shows:
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3.png" alt-text="Diagram that shows a Virtual WAN design with two ExpressRoute circuits in bow tie with Global Reach and two V P N branches." :::
+> [!IMPORTANT]
+> The previous diagram shows two secured virtual hubs, but this topology is not supported yet. For more information, see [How to configure Virtual WAN Hub routing intent and routing policies][virtual-wan-intent].
+ Virtual WAN will display that both circuits are connected to both hubs: :::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-circuits.png" alt-text="Screenshot of Virtual WAN showing both ExpressRoute circuits connected to both virtual hubs." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-circuits-expanded.png":::
For more information about Virtual WAN see:
[virtual-wan-hrp]: ./about-virtual-hub-routing-preference.md [virtual-wan-nva]: ./about-nva-hub.md [virtual-wan-bgp]: ./scenario-bgp-peering-hub.md
+[virtual-wan-intent]: ./how-to-routing-policies.md
[er]: ../expressroute/expressroute-introduction.md
-[er-gr]: ../expressroute/expressroute-global-reach.md
+[er-gr]: ../expressroute/expressroute-global-reach.md
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
description: Learn how to connect Windows, macOS, and Linux clients securely to
Previously updated : 05/26/2022 Last updated : 11/07/2022