Updates from: 07/15/2022 01:17:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Authentication Methods Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-methods-activity.md
The registration details report shows the following information for each user:
## Limitations

- The data in the report is not updated in real time and may reflect a latency of up to a few hours.
- Temporary Access Pass registrations are not reflected in the registration tab of the report because they are valid for only a short period of time.
- The **PhoneAppNotification** or **PhoneAppOTP** methods that a user might have configured are not displayed in the dashboard.

## Next steps
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Title: Passwordless sign-in with Microsoft Authenticator - Azure Active Directory description: Enable passwordless sign-in to Azure AD using Microsoft Authenticator + Previously updated : 06/23/2022 Last updated : 07/14/2022
Microsoft Authenticator can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) uses a similar technology. + This authentication technology can be used on any device platform, including mobile. This technology can also be used with any app or website that integrates with Microsoft Authentication Libraries. People who enabled phone sign-in from Microsoft Authenticator see a message that asks them to tap a number in their app. No username or password is asked for. To complete the sign-in process in the app, a user must next take the following actions:
People who enabled phone sign-in from Microsoft Authenticator see a message that
1. Choose **Approve**. 1. Provide their PIN or biometric.
-## Prerequisites
+## Multiple accounts on iOS (preview)
-To use passwordless phone sign in with Microsoft Authenticator, the following prerequisites must be met:
+You can enable passwordless phone sign-in for multiple accounts in Microsoft Authenticator on any supported iOS device. Consultants, students, and others with multiple accounts in Azure AD can add each account to Microsoft Authenticator and use passwordless phone sign-in for all of them from the same iOS device.
-- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help Microsoft Authenticator to prevent unauthorized access to accounts and stop fraudulent transactions. Microsoft Authenticator can either perform traditional MFA push notifications to a device that a user must approve or deny, or it can perform passwordless authentication that requires a user to type a matching number. Microsoft Authenticator automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity. -- Latest version of Authenticator installed on devices running iOS 8.0 or greater, or Android 6.0 or greater.-- The device on which Microsoft Authenticator is installed must be registered within the Azure AD tenant to an individual user.
+Previously, admins might not require passwordless sign-in for users with multiple accounts because it requires them to carry more devices for sign-in. By removing the limitation of one user sign-in from a device, admins can more confidently encourage users to register passwordless phone sign-in and use it as their default sign-in method.
-> [!NOTE]
-> If you enabled Microsoft Authenticator for passwordless sign-in using Azure AD PowerShell, it was enabled for your entire directory. If you enable using this new method, it supercedes the PowerShell policy. We recommend you enable for all users in your tenant via the new *Authentication Methods* menu, otherwise users not in the new policy are no longer be able to sign in without a password.
+The Azure AD accounts can be in the same tenant or different tenants. Guest accounts aren't supported for multiple account sign-in from one device.
+
+>[!NOTE]
+>Multiple accounts on iOS is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+To use passwordless phone sign-in with Microsoft Authenticator, the following prerequisites must be met:
-## Enable passwordless authentication methods
+- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity.
+- Latest version of Microsoft Authenticator installed on devices running iOS 12.0 or greater, or Android 6.0 or greater.
+- For Android, the device that runs Microsoft Authenticator must be registered to an individual user. We're actively working to enable multiple accounts on Android.
+- For iOS, the device must be registered with each tenant where it's used to sign in. For example, a device used with the following accounts must be registered with Contoso and Wingtiptoys to allow all of the accounts to sign in:
+ - balas@contoso.com
+ - balas@wingtiptoys.com and bsandhu@wingtiptoys.com
+- For iOS, the option in Microsoft Authenticator to allow Microsoft to gather usage data must be enabled. It's not enabled by default. To enable it in Microsoft Authenticator, go to **Settings** > **Usage Data**.
+
+ :::image type="content" border="true" source="./media/howto-authentication-passwordless-phone/telemetry.png" alt-text="Screenshot of Usage Data in Microsoft Authenticator.":::
To use passwordless authentication in Azure AD, first enable the combined registration experience, then enable users for the passwordless method.
-### Enable passwordless phone sign-in authentication methods
+## Enable passwordless phone sign-in authentication methods
Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Authenticator** authentication method policy manages both the traditional push MFA method, as well as the passwordless authentication method.
+> [!NOTE]
+> If you enabled Microsoft Authenticator passwordless sign-in using Azure AD PowerShell, it was enabled for your entire directory. If you enable using this new method, it supersedes the PowerShell policy. We recommend you enable it for all users in your tenant via the new **Authentication Methods** menu; otherwise, users who aren't in the new policy can't sign in without a password.
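For reference, the directory-wide PowerShell enablement that this note mentions looked roughly like the following. This is a sketch using the Azure AD PowerShell policy cmdlets, not current guidance:

```powershell
# Legacy approach: enables passwordless phone sign-in for the entire directory.
Connect-AzureAD
New-AzureADPolicy -Type AuthenticatorAppSignInPolicy `
    -Definition '{"AuthenticatorAppSignInPolicy":{"Enabled":true}}' `
    -IsOrganizationDefault $true `
    -DisplayName "AuthenticatorAppSignIn"
```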
+ To enable the authentication method for passwordless phone sign-in, complete the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an *authentication policy administrator* account.
+1. Sign in to the [Azure portal](https://portal.azure.com) with an *Authentication Policy Administrator* account.
1. Search for and select *Azure Active Directory*, then browse to **Security** > **Authentication methods** > **Policies**.
1. Under **Microsoft Authenticator**, choose the following options:
   1. **Enable** - Yes or No
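If you prefer to script this change instead of using the portal, a minimal sketch with Microsoft Graph PowerShell follows; the request body shape is an assumption based on the Graph authentication methods policy schema:

```powershell
# Enable the Microsoft Authenticator method in the Authentication Methods policy.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/microsoftAuthenticator" `
    -Body @{
        "@odata.type" = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
        state         = "enabled"
    }
```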
A user can start to utilize passwordless sign-in after all the following actions
- An admin has enabled the user's tenant. - The user has added Microsoft Authenticator as a sign-in method. + The first time a user starts the phone sign-in process, the user performs the following steps: 1. Enters their name at the sign-in page.
The user is then presented with a number. The app prompts the user to authentica
After the user has utilized passwordless phone sign-in, the app continues to guide the user through this method. However, the user will see the option to choose another method. ## Known Issues
An end user can be enabled for multifactor authentication (MFA) through an on-pr
If the user attempts to upgrade multiple installations (5+) of Microsoft Authenticator with the passwordless phone sign-in credential, this change might result in an error.
-### Device registration
-
-Before you can create this new strong credential, there are prerequisites. One prerequisite is that the device on which Microsoft Authenticator is installed must be registered within the Azure AD tenant to an individual user.
-
-Currently, a device can only be enabled for passwordless sign-in in a single tenant. This limit means that only one work or school account in Microsoft Authenticator can be enabled for phone sign-in.
-
-> [!NOTE]
-> Device registration is not the same as device management or mobile device management (MDM). Device registration only associates a device ID and a user ID together, in the Azure AD directory.
## Next steps
active-directory Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-users.md
Filters can be applied in one, two, or all three categories depending on the typ
1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**. 1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Identity Subtype**, select the type of user: **All**, **ED**, **Local**, or **Cross Account**.
+1. From the **Identity Subtype**, select the type of user: **All**, **ED** (Enterprise Directory), **Local**, or **Cross Account**.
1. Select **Apply** to run your query and display the information you selected. Select **Reset filter** to discard your changes.
You can filter user details by type of user, user role, app, or service used, or
- To view assigned permissions and usage of the group and the group members, see [View analytic information about groups](usage-analytics-groups.md). - To view active resources, see [View analytic information about active resources](usage-analytics-active-resources.md). - To view the permission usage of access keys for a given user, see [View analytic information about access keys](usage-analytics-access-keys.md).-- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](usage-analytics-serverless-functions.md).
+- To view assigned permissions and usage of the serverless functions, see [View analytic information about serverless functions](usage-analytics-serverless-functions.md).
active-directory Reference Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-expressions.md
Requires a minimum of two arguments, which are unique value generation rules def
> - This is a top-level function, it cannot be nested. > - This function cannot be applied to attributes that have a matching precedence. > - This function is only meant to be used for entry creations. When using it with an attribute, set the **Apply Mapping** property to **Only during object creation**.
-> - This function is currently only supported for "Workday to Active Directory User Provisioning". It cannot be used with other provisioning applications.
+> - This function is currently only supported for "Workday and SuccessFactors to Active Directory User Provisioning". It cannot be used with other provisioning applications.
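As a hedged illustration of how the uniqueness rules are tried in order, the sketch below generates a UPN from a first and last name, falling back to a first-initial pattern; the attribute names and domain are hypothetical:

```
SelectUniqueValue(
    Join("@", Join(".", [firstName], [lastName]), "contoso.com"),
    Join("@", Join(".", Mid([firstName], 1, 1), [lastName]), "contoso.com")
)
```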
**Parameters:**<br>
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-powershell.md
Here are some details about what you need:
```
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
```
+- The AADCloudSyncTools module might not work correctly if the Azure AD Connect cloud provisioning agent is not running or the configuration wizard has not finished successfully.
## Install the AADCloudSyncTools PowerShell module
Here are some details about what you need:
Import-module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Utility\AADCloudSyncTools" ``` - ## AADCloudSyncTools cmdlets
+> [!NOTE]
+> Before using the AADCloudSyncTools module, make sure the Azure AD Connect cloud provisioning agent is running and the configuration wizard has finished successfully. To troubleshoot wizard issues, check the trace logs in the folder *C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace*. For more information, see [Cloud sync troubleshooting](how-to-troubleshoot.md).
+ ### Connect-AADCloudSyncTools This cmdlet uses the MSAL.PS module to request a token for the Azure AD administrator to access Microsoft Graph.
This cmdlet uses the MSAL.PS module to request a token for the Azure AD administ
This cmdlet exports and packages all the troubleshooting data in a compressed file, as follows:
-1. Sets verbose tracing and starts collecting data from the provisioning agent (same as `Start-AADCloudSyncToolsVerboseLogs`). You can find these trace logs in the folder *C:\ProgramData\Microsoft\Azure AD Connect Provisioning Agent\Trace*.
-2. Stops data collection after three minutes and disables verbose tracing (same as `Stop-AADCloudSyncToolsVerboseLogs`). You can specify a different duration by using `-TracingDurationMins` or completely skip verbose tracing by using `-SkipVerboseTrace`.
+1. Sets verbose tracing and starts collecting data from the provisioning agent (same as `Start-AADCloudSyncToolsVerboseLogs`).
+2. Stops data collection after three minutes and disables verbose tracing (same as `Stop-AADCloudSyncToolsVerboseLogs`).
3. Collects Event Viewer logs for the last 24 hours.
-4. Compresses all the agent logs, verbose logs, and Event Viewer logs into a .zip file in the user's *Documents* folder. You can specify a different output folder by using `-OutputPath <folder path>`.
+4. Compresses all the agent logs, verbose logs, and Event Viewer logs into a .zip file in the user's *Documents* folder.
+
+You can use the following options to fine-tune your data collection:
+
+- `SkipVerboseTrace` to only export current logs without capturing verbose logs (default = false).
+- `TracingDurationMins` to specify a different capture duration (default = 3 minutes).
+- `OutputPath` to specify a different output path (default = user's Documents folder).
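For example, a capture that skips verbose tracing and writes the archive to a custom folder might look like this; a usage sketch based on the options above, with an illustrative path:

```powershell
Connect-AADCloudSyncTools
Export-AADCloudSyncToolsLogs -SkipVerboseTrace -OutputPath "C:\Temp\AADCloudSyncLogs"
```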
### Get-AADCloudSyncToolsInfo
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
# Configure an application's publisher domain
-An application's publisher domain informs the users where their information is being sent and acts as an input/prerequisite for [publisher verification](publisher-verification-overview.md). Depending on when the app was registered and it's verified publisher status, publisher domain may be displayed directly to the user on the [application's consent prompt](application-consent-experience.md). [Multi-tenant applications](/azure/architecture/guide/multitenant/overview) that are registered after May 21, 2019, that don't have a publisher domain show up as **unverified**. Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, support all Azure AD accounts, or support all Azure AD accounts and personal Microsoft accounts.
+An application's publisher domain informs users where their information is being sent and acts as an input/prerequisite for [publisher verification](publisher-verification-overview.md). Depending on whether an app is a [multi-tenant app](/azure/architecture/guide/multitenant/overview), when it was registered, and its verified publisher status, either the publisher domain or the verified publisher status will be displayed to the user on the [application's consent prompt](application-consent-experience.md). Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, they support all Azure AD accounts, or all Azure AD accounts and personal Microsoft accounts.
## New applications
The following table summarizes the default behavior of the publisher domain valu
| - *.onmicrosoft.com<br/>- domain1.com<br/>- domain2.com (primary) | domain2.com | 1. If your multi-tenant app was registered between **May 21, 2019 and November 30, 2020**:
+ - If the application's publisher domain isn't set, or if it's set to a domain that ends in .onmicrosoft.com, the app's consent prompt will show **unverified** in place of the publisher domain.
+ - If the application has a verified app domain, the consent prompt will show the verified domain.
+ - If the application is publisher verified, the consent prompt shows a [blue "verified" badge](publisher-verification-overview.md).
2. If your multi-tenant app was registered after **November 30, 2020**:
+ - If the application is not publisher verified, the app will show as **unverified** in the consent prompt (that is, no publisher domain-related info is shown).
+ - If the application is publisher verified, the consent prompt shows a [blue "verified" badge](publisher-verification-overview.md).
## Grandfathered applications
-If your app was registered before May 21, 2019, your application's consent prompt will not show **unverified** even if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
+If your app was registered **before May 21, 2019**, your application's consent prompt will not show **unverified** even if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
## Configure publisher domain using the Azure portal
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
App-only permissions always require a tenant administrator's consent. If your
Certain delegated permissions also require a tenant administrator's consent. For example, the ability to write back to Azure AD as the signed in user requires a tenant administrator's consent. Like app-only permissions, if an ordinary user tries to sign in to an application that requests a delegated permission that requires administrator consent, your application receives an error. Whether a permission requires admin consent is determined by the developer that published the resource, and can be found in the documentation for the resource. The permissions documentation for the [Microsoft Graph API][MSFT-Graph-permission-scopes] indicates which permissions require admin consent.
-If your application uses permissions that require admin consent, have a gesture such as a button or link where the admin can initiate the action. The request your application sends for this action is the usual OAuth2/OpenID Connect authorization request that also includes the `prompt=admin_consent` query string parameter. Once the admin has consented and the service principal is created in the customer's tenant, subsequent sign-in requests do not need the `prompt=admin_consent` parameter. Since the administrator has decided the requested permissions are acceptable, no other users in the tenant are prompted for consent from that point forward.
+If your application uses permissions that require admin consent, have a gesture such as a button or link where the admin can initiate the action. The request your application sends for this action is the usual OAuth2/OpenID Connect authorization request that also includes the `prompt=consent` query string parameter. Once the admin has consented and the service principal is created in the customer's tenant, subsequent sign-in requests do not need the `prompt=consent` parameter. Since the administrator has decided the requested permissions are acceptable, no other users in the tenant are prompted for consent from that point forward.
A tenant administrator can disable the ability for regular users to consent to applications. If this capability is disabled, admin consent is always required for the application to be used in the tenant. If you want to test your application with end-user consent disabled, you can find the configuration switch in the [Azure portal][AZURE-portal] in the **[User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/UserSettings/menuId/)** section under **Enterprise applications**.
-The `prompt=admin_consent` parameter can also be used by applications that request permissions that do not require admin consent. An example of when this would be used is if the application requires an experience where the tenant admin "signs up" one time, and no other users are prompted for consent from that point on.
+The `prompt=consent` parameter can also be used by applications that request permissions that do not require admin consent. An example of when this would be used is if the application requires an experience where the tenant admin "signs up" one time, and no other users are prompted for consent from that point on.
-If an application requires admin consent and an admin signs in without the `prompt=admin_consent` parameter being sent, when the admin successfully consents to the application it will apply **only for their user account**. Regular users will still not be able to sign in or consent to the application. This feature is useful if you want to give the tenant administrator the ability to explore your application before allowing other users access.
+If an application requires admin consent and an admin signs in without the `prompt=consent` parameter being sent, when the admin successfully consents to the application it will apply **only for their user account**. Regular users will still not be able to sign in or consent to the application. This feature is useful if you want to give the tenant administrator the ability to explore your application before allowing other users access.
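For illustration, an authorization request carrying this parameter looks roughly like the following; the client ID, redirect URI, and scopes are placeholders:

```
https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?
    client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Fcontoso.com%2Fredirect
    &scope=openid%20profile%20User.Read
    &prompt=consent
```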
### Consent and multi-tier applications
To learn more about making API calls to Azure AD and Microsoft 365 services like
[OAuth2-Client-Types]: https://tools.ietf.org/html/rfc6749#section-2.1 [OAuth2-Role-Def]: https://tools.ietf.org/html/rfc6749#page-6 [OpenIDConnect]: https://openid.net/specs/openid-connect-core-1_0.html
-[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
+[OpenIDConnect-ID-Token]: https://openid.net/specs/openid-connect-core-1_0.html#IDToken
active-directory Msal Node Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-node-migration.md
## Prerequisites -- Node version 10, 12 or 14. See the [note on version support](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node#node-version-support)
+- Node version 10, 12, 14, 16 or 18. See the [note on version support](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node#node-version-support)
## Update app registration settings
const cca = new msal.ConfidentialClientApplication(config);
const refreshTokenRequest = { refreshToken: "", // your previous refresh token here
- scopes: ["user.read"],
+ scopes: ["https://graph.microsoft.com/.default"],
+ forceCache: true,
}; cca.acquireTokenByRefreshToken(refreshTokenRequest).then((response) => {
- console.log(JSON.stringify(response));
+ console.log(response);
}).catch((error) => {
- console.log(JSON.stringify(error));
+ console.log(error);
}); ```
+For more information, please refer to the [ADAL Node to MSAL Node migration sample](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/refresh-token).
> [!NOTE] > We recommend that you destroy the older ADAL Node token cache once you've used the still-valid refresh tokens to get a new set of tokens using MSAL Node's `acquireTokenByRefreshToken` method, as shown above.
var adal = require('adal-node');
// Authentication parameters var clientId = 'Enter_the_Application_Id_Here'; var clientSecret = 'Enter_the_Client_Secret_Here';
-var tenant = 'common';
+var tenant = 'Enter_the_Tenant_Info_Here';
var authorityUrl = 'https://login.microsoftonline.com/' + tenant; var redirectUri = 'http://localhost:3000/redirect'; var resource = 'https://graph.microsoft.com';
const msal = require('@azure/msal-node');
const config = { auth: { clientId: "Enter_the_Application_Id_Here",
- authority: "https://login.microsoftonline.com/common",
+ authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here",
clientSecret: "Enter_the_Client_Secret_Here" }, system: {
active-directory Workload Identity Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation.md
Previously updated : 01/10/2022 Last updated : 07/13/2022
You use workload identity federation to configure an Azure AD app registration t
## Supported scenarios > [!NOTE]
-> Azure AD-issued tokens might not be used for federated identity flows.
+> Azure AD-issued tokens may not be used for federated identity flows. The federated identity credentials flow does not support tokens issued by Azure AD.
The following scenarios are supported for accessing Azure AD protected resources using workload identity federation:
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
To apply published labels to groups, you must first enable the feature. These st
```powershell
Install-Module AzureADPreview
Import-Module AzureADPreview
- Connect-AzureAD
+ AzureADPreview\Connect-AzureAD
```

In the **Sign in to your account** page, enter your admin account and password to connect you to your service, and select **Sign in**.
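Once connected, enabling the feature comes down to flipping the `EnableMIPLabels` value on the `Group.Unified` directory setting. A minimal sketch with the AzureADPreview cmdlets, assuming the setting object already exists in the tenant:

```powershell
# Fetch the tenant-wide Group.Unified setting, turn on sensitivity labels, and save it.
$grpUnifiedSetting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }
$grpUnifiedSetting["EnableMIPLabels"] = "True"
Set-AzureADDirectorySetting -Id $grpUnifiedSetting.Id -DirectorySetting $grpUnifiedSetting
```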
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
Title: Quickstart - Access & create new tenant - Azure AD description: Instructions about how to find Azure Active Directory and how to create a new tenant for your organization. --++ Last updated 12/22/2021-+
active-directory Active Directory Accessmanagement Managing Group Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners.md
Title: Add or remove group owners - Azure Active Directory | Microsoft Docs description: Instructions about how to add or remove group owners using Azure Active Directory. --++ Last updated 09/11/2018-+
active-directory Active Directory Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-architecture.md
Title: Architecture overview - Azure Active Directory | Microsoft Docs description: Learn what an Azure Active Directory tenant is and how to manage Azure using Azure Active Directory. --++ Last updated 07/08/2022-+
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Title: Compare Active Directory to Azure Active Directory
description: This document compares Active Directory Domain Services (ADDS) to Azure Active Directory (AD). It outlines key concepts in both identity solutions and explains how it's different or similar. -+ tags: azuread
active-directory Active Directory Data Storage Australia Newzealand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-australia-newzealand.md
Title: Customer data storage for Australian and New Zealand customers - Azure AD description: Learn about where Azure Active Directory stores customer-related data for its Australian and New Zealand customers. ---+++
active-directory Active Directory Data Storage Australia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-australia.md
Title: Identity data storage for Australian and New Zealand customers - Azure AD description: Learn about where Azure Active Directory stores identity-related data for its Australian and New Zealand customers. ---+++
active-directory Active Directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-eu.md
Title: Identity data storage for European customers - Azure AD description: Learn about where Azure Active Directory stores identity-related data for its European customers. ---+++
active-directory Active Directory Data Storage Japan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-japan.md
Title: Customer data storage for Japan customers - Azure AD
description: Learn about where Azure Active Directory stores customer-related data for its Japan customers. -+
active-directory Active Directory Deployment Checklist P2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-checklist-p2.md
Last updated 12/07/2021
-+
active-directory Active Directory Get Started Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-get-started-premium.md
Title: Sign up for premium editions - Azure Active Directory| Microsoft Docs description: Instructions about how to sign up for Azure Active Directory Premium editions. --++ Last updated 09/07/2017-+
active-directory Active Directory Groups Create Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-create-azure-portal.md
Title: Create a basic group and add members - Azure Active Directory | Microsoft Docs description: Instructions about how to create a basic group using Azure Active Directory. --++ Last updated 06/05/2020-+
active-directory Active Directory Groups Delete Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-delete-group.md
Title: Delete a group - Azure Active Directory | Microsoft Docs description: Instructions about how to delete a group using Azure Active Directory. --++ Last updated 08/29/2018-+
active-directory Active Directory Groups Members Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-members-azure-portal.md
Title: Add or remove group members - Azure Active Directory | Microsoft Docs description: Instructions about how to add or remove members from a group using Azure Active Directory. --++ Last updated 08/23/2018-+
active-directory Active Directory Groups Membership Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-membership-azure-portal.md
Title: Add or remove a group from another group - Azure AD description: Instructions about how to add or remove a group from another group using Azure Active Directory. --++ Last updated 10/19/2018-+
active-directory Active Directory Groups Settings Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-settings-azure-portal.md
Title: Edit your group information - Azure Active Directory | Microsoft Docs description: Instructions about how to edit your group's information using Azure Active Directory. --++ Last updated 08/27/2018-+
active-directory Active Directory Groups View Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-view-azure-portal.md
Title: Quickstart - View groups & members - Azure AD description: Instructions about how to search for and view your organization's groups and their assigned members. --++ Last updated 09/24/2018-+
active-directory Active Directory How Subscriptions Associated Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
Title: Add an existing Azure subscription to your tenant - Azure AD description: Instructions about how to add an existing Azure subscription to your Azure Active Directory (Azure AD) tenant. --++ Last updated 03/05/2021-+
active-directory Active Directory How To Find Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-how-to-find-tenant.md
Title: How to find your tenant ID - Azure Active Directory description: Instructions about how to find and Azure Active Directory tenant ID to an existing Azure subscription. --++ Last updated 10/30/2020-+
active-directory Active Directory Licensing Whatis Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-licensing-whatis-azure-portal.md
Title: What is group-based licensing - Azure Active Directory | Microsoft Docs
description: Learn about Azure Active Directory group-based licensing, including how it works and best practices. keywords: Azure AD licensing--++ Last updated 10/29/2018-+
active-directory Active Directory Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-manage-groups.md
Title: Manage app & resource access using groups - Azure AD description: Learn about how to manage access to your organization's cloud-based apps, on-premises apps, and resources using Azure Active Directory groups. --++ Last updated 01/08/2020-+
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Title: Azure Active Directory Authentication management operations reference gui
description: This operations reference guide describes the checks and actions you should take to secure authentication management -+ tags: azuread
active-directory Active Directory Ops Guide Govern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-govern.md
Title: Azure Active Directory governance operations reference guide
description: This operations reference guide describes the checks and actions you should take to secure governance management -+ tags: azuread
active-directory Active Directory Ops Guide Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-iam.md
Title: Azure Active Directory Identity and access management operations referenc
description: This operations reference guide describes the checks and actions you should take to secure identity and access management operations -+ tags: azuread
active-directory Active Directory Ops Guide Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-intro.md
Title: Azure Active Directory operations reference guide
description: This operations reference guide describes the checks and actions you should take to secure and maintain identity and access management, authentication, governance, and operations -+ tags: azuread
active-directory Active Directory Ops Guide Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-ops.md
Title: Azure Active Directory general operations guide reference
description: This operations reference guide describes the checks and actions you should take to secure general operations -+ tags: azuread
active-directory Active Directory Properties Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-properties-area.md
Title: Add your organization's privacy info - Azure Active Directory | Microsoft Docs description: Instructions about how to add your organization's privacy info to the Azure Active Directory Properties area. --++ Last updated 04/17/2018-+
active-directory Active Directory Troubleshooting Support Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
Title: Find help and open a support ticket - Azure Active Directory | Microsoft Docs description: Instructions about how to get help and open a support ticket for Azure Active Directory. --++ Last updated 08/28/2017-+
active-directory Active Directory Users Assign Role Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md
Title: Assign Azure AD roles to users - Azure Active Directory | Microsoft Docs description: Instructions about how to assign administrator and non-administrator roles to users with Azure Active Directory. --++ Last updated 08/31/2020-+
active-directory Active Directory Users Profile Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-profile-azure-portal.md
Title: Add or update user profile information - Azure AD description: Instructions about how to add information to a user's profile in Azure Active Directory, including a picture and job details. --++ Last updated 06/10/2021-+
active-directory Active Directory Users Reset Password Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-reset-password-azure-portal.md
Title: Reset a user's password - Azure Active Directory | Microsoft Docs description: Instructions about how to reset a user's password using Azure Active Directory. --++ ms.assetid: fad5624b-2f13-4abc-b3d4-b347903a8f16
Last updated 06/07/2022-+
active-directory Active Directory Users Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-restore.md
Title: Restore or permanently remove recently deleted user - Azure AD description: How to view restorable users, restore a deleted user, or permanently delete a user with Azure Active Directory. --++ Last updated 10/23/2020-+
active-directory Active Directory Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-whatis.md
Title: What is Azure Active Directory? description: Learn about Azure Active Directory, including terminology, available licenses, and a list of associated features. --++ Last updated 01/27/2022-+ # Customer intent: As a new administrator, I want to understand what Azure Active Directory is, which license is right for me, and what features are available.
active-directory Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-custom-domain.md
Title: Add your custom domain - Azure Active Directory | Microsoft Docs description: Instructions about how to add a custom domain using Azure Active Directory. --++ Last updated 10/25/2019-+
active-directory Add Users Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-users-azure-active-directory.md
Title: Add or delete users - Azure Active Directory | Microsoft Docs description: Instructions about how to add new users or delete existing users using Azure Active Directory. --++ Last updated 02/16/2022-+
active-directory Concept Fundamentals Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-block-legacy-authentication.md
Last updated 01/26/2021
-+
active-directory Concept Fundamentals Mfa Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-mfa-get-started.md
Last updated 03/18/2020 ---+++
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Last updated 04/07/2022
-+
active-directory Concept Secure Remote Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-secure-remote-workers.md
Last updated 04/27/2020
-+
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/customize-branding.md
Title: Add branding to your organization's sign-in page - Azure AD description: Instructions about how to add your organization's branding to the Azure Active Directory sign-in page. --++ Last updated 07/03/2021-+
active-directory Identity Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/identity-secure-score.md
Last updated 06/09/2022
-+ #Customer intent: As an IT admin, I want understand the identity secure score, so that I can maximize the security posture of my tenant.
active-directory Keep Me Signed In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/keep-me-signed-in.md
Title: Configure the 'Stay signed in?' prompt for Azure Active Directory account
description: Learn about keep me signed in (KMSI), which displays the Stay signed in? prompt, how to configure it in the Azure Active Directory portal, and how to troubleshoot sign-in issues. -+
active-directory License Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/license-users-groups.md
Title: Assign or remove licenses - Azure Active Directory | Microsoft Docs description: Instructions about how to assign or remove Azure Active Directory licenses from your users or groups. --++ ms.assetid: f8b932bc-8b4f-42b5-a2d3-f2c076234a78
Last updated 12/14/2020-+
active-directory Resilience B2b Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-b2b-authentication.md
Title: Build resilience in external user authentication with Azure Active Direct
description: A guide for IT admins and architects to building resilient authentication for external users -+
active-directory Sign Up Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sign-up-organization.md
Title: Sign up your organization - Azure Active Directory | Microsoft Docs description: Instructions about how to sign up your organization to use Azure and Azure Active Directory. --++ Last updated 09/14/2018-+
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
Title: Default user permissions - Azure Active Directory | Microsoft Docs description: Learn about the user permissions available in Azure Active Directory. --++ Last updated 08/04/2021-+
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Title: Archive for What's new in Azure Active Directory? | Microsoft Docs description: The What's new release notes in the Overview section of this content set contains 6 months of activity. After 6 months, the items are removed from the main article and put into this archive article. --++ Last updated 1/31/2022-+
active-directory Whats New Microsoft 365 Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-microsoft-365-government.md
Title: What's new for Azure AD in Microsoft 365 Government? | Microsoft Docs
description: Learn about some changes to Azure Active Directory (Azure AD) in the Microsoft 365 Government cloud instance, which might impact you. -+
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Title: What's new? Release notes - Azure Active Directory | Microsoft Docs description: Learn what is new with Azure Active Directory; such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes.--++ featureFlags: - clicktale ms.assetid: 06a149f7-4aa1-4fb9-a8ec-ac2633b031fb
Last updated 1/31/2022-+
active-directory How To Connect Fix Default Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fix-default-rules.md
Keep **Scoping filter** and **Join rules** empty. Fill in the transformation as
You now know how to make a new attribute for a user object flow from Active Directory to Azure Active Directory. You can use these steps to map any attribute from any object to source and target. For more information, see [Creating custom sync rules](how-to-connect-create-custom-sync-rule.md) and [Prepare to provision users](/office365/enterprise/prepare-for-directory-synchronization). ### Override the value of an existing attribute
-You might want to override the value of an attribute that has already been mapped. For example, if you always want to set a null value to an attribute in Azure AD, simply create an inbound rule only. Make the constant value, `AuthoritativeNull`, flow to the target attribute.
+You might want to override the value of an attribute that has already been mapped. For example, if you always want to set a null value to an attribute in Azure AD, simply create an inbound rule only. Make the expression value, `AuthoritativeNull`, flow to the target attribute.
>[!NOTE] > Use `AuthoritativeNull` instead of `Null` in this case. This is because the non-null value replaces the null value, even if it has lower precedence (a higher number value in the rule). `AuthoritativeNull`, on the other hand, isn't replaced with a non-null value by other rules.
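As a concrete sketch, an inbound rule whose transformation is an **Expression** flow with the body below blanks the target attribute, and because the value is authoritative, lower-precedence rules can't repopulate it:

```
AuthoritativeNull
```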
active-directory How To Connect Import Export Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-import-export-config.md
Each time the configuration is changed from the Azure AD Connect wizard, a new t
> [!IMPORTANT] > Only changes made by Azure AD Connect are automatically exported. Any changes made by using PowerShell, the Synchronization Service Manager, or the Synchronization Rules Editor must be exported on demand as needed to maintain an up-to-date copy. Export on demand can also be used to place a copy of the settings in a secure location for disaster recovery purposes.
+>[!NOTE]
+> This feature cannot be combined with using an existing ADSync database. The use of import/export configuration and the use of an existing database are mutually exclusive.
+ ## Export Azure AD Connect settings To view a summary of your configuration settings, open the Azure AD Connect tool, and select the additional task named **View or Export Current Configuration**. A quick summary of your settings is shown along with the ability to export the full configuration of your server.
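The on-demand export can also be scripted. A sketch using the ADSync module that ships with Azure AD Connect; the output folder is illustrative:

```powershell
# Writes the current server configuration (connectors, rules, global settings) to disk.
Import-Module ADSync
Get-ADSyncServerConfiguration -Path "C:\Temp\AADConnectConfig"
```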
To import previously exported settings:
> [!NOTE] > Override settings on this page like the use of SQL Server instead of LocalDB or the use of an existing service account instead of a default VSA. These settings aren't imported from the configuration settings file. They are there for information and comparison purposes.
->[!NOTE]
->It is not supported to modify the exported JSON file to change the configuration
+> [!NOTE]
+> It is not supported to modify the exported JSON file to change the configuration.
### Import installation experience
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
On the **Express Settings** page, select **Customize** to start a customized-set
- [Sync](#sync-pages) ### Install required components
-When you install the synchronization services, you can leave the optional configuration section unselected. Azure AD Connect sets up everything automatically. It sets up a SQL Server 2019 Express LocalDB instance, creates the appropriate groups, and assign permissions. If you want to change the defaults, clear the appropriate boxes. The following table summarizes these options and provides links to additional information.
+When you install the synchronization services, you can leave the optional configuration section unselected. Azure AD Connect sets up everything automatically. It sets up a SQL Server 2019 Express LocalDB instance, creates the appropriate groups, and assigns permissions. If you want to change the defaults, select the appropriate boxes. The following table summarizes these options and provides links to additional information.
![Screenshot showing optional selections for the required installation components in Azure AD Connect.](./media/how-to-connect-install-custom/requiredcomponents2.png)
Now that you have installed Azure AD Connect, you can [verify the installation a
For more information about the features that you enabled during the installation, see [Prevent accidental deletes](how-to-connect-sync-feature-prevent-accidental-deletes.md) and [Azure AD Connect Health](how-to-connect-health-sync.md).
-For more information about other common topics, see [Azure AD Connect sync: Scheduler](how-to-connect-sync-feature-scheduler.md) and [Integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+For more information about other common topics, see [Azure AD Connect sync: Scheduler](how-to-connect-sync-feature-scheduler.md) and [Integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
### Connectivity * The Azure AD Connect server needs DNS resolution for both intranet and internet. The DNS server must be able to resolve names both to your on-premises Active Directory and the Azure AD endpoints. * Azure AD Connect requires network connectivity to all configured domains
+* Azure AD Connect requires network connectivity to the root domain of all configured forests
* If you have firewalls on your intranet and you need to open ports between the Azure AD Connect servers and your domain controllers, see [Azure AD Connect ports](reference-connect-ports.md) for more information. * If your proxy or firewall limit which URLs can be accessed, the URLs documented in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) must be opened. Also see [Safelist the Azure portal URLs on your firewall or proxy server](../../azure-portal/azure-portal-safelist-urls.md?tabs=public-cloud). * If you're using the Microsoft cloud in Germany or the Microsoft Azure Government cloud, see [Azure AD Connect sync service instances considerations](reference-connect-instances.md) for URLs.
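A quick way to sanity-check DNS resolution and port reachability from the Azure AD Connect server; the on-premises host names below are illustrative:

```powershell
# On-premises forest root and a domain controller (substitute your own names).
Resolve-DnsName contoso.com
Test-NetConnection -ComputerName dc01.contoso.com -Port 389

# Azure AD endpoint over HTTPS.
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
```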
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
the attribute **adminDescription** populated in Active Directory with the value
Once you've completed the steps to configure the necessary synchronization rules, re-enable the synchronization scheduler with the following steps: 1. In Windows PowerShell run:
- `set-adsyncscheduler-synccycleenabled$true`
+ `set-adsyncscheduler -synccycleenabled:$true`
2. Then confirm it has been successfully enabled by running:
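The confirmation command is elided above; it is presumably `Get-ADSyncScheduler`, which reports the scheduler state:

```powershell
# SyncCycleEnabled should now read True.
Get-ADSyncScheduler
```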
active-directory How To Connect Sync Service Manager Ui Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sync-service-manager-ui-connectors.md
The delete action is used for two different things.
The option **Delete connector space only** removes all data, but keeps the configuration.
-The option **Delete Connector and connector space** removes the data and the configuration. This option is used when you do not want to connect to a forest anymore.
+The option **Delete Connector and connector space** removes the data, the configuration, and all the sync rules associated with the connector. This option is used when you no longer want to connect to a forest.
Both options sync all objects and update the metaverse objects. This action is a long running operation.
active-directory Tshoot Connect Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-pass-through-authentication.md
If the user is unable to sign into using Pass-through Authentication, they may s
|Error|Description|Resolution|
| --- | --- | --- |
|AADSTS80001|Unable to connect to Active Directory|Ensure that agent servers are members of the same AD forest as the users whose passwords need to be validated and they are able to connect to Active Directory.|
-|AADSTS8002|A timeout occurred connecting to Active Directory|Check to ensure that Active Directory is available and is responding to requests from the agents.
+|AADSTS80002|A timeout occurred connecting to Active Directory|Check to ensure that Active Directory is available and is responding to requests from the agents.
|AADSTS80004|The username passed to the agent was not valid|Ensure the user is attempting to sign in with the right username.|
|AADSTS80005|Validation encountered unpredictable WebException|A transient error. Retry the request. If it continues to fail, contact Microsoft support.|
|AADSTS80007|An error occurred communicating with Active Directory|Check the agent logs for more information and verify that Active Directory is operating as expected.|
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
# Review permissions granted to applications
-In this article you'll learn how to review permissions granted to applications in your Azure Active Directory (Azure AD) tenant. You may need to review permissions when you have detected a malicious application or the application has been granted more permissions than is necessary.
+In this article, you'll learn how to review permissions granted to applications in your Azure Active Directory (Azure AD) tenant. You may need to review permissions when you've detected a malicious application or the application has been granted more permissions than is necessary.
The steps in this article apply to all applications that were added to your Azure Active Directory (Azure AD) tenant via user or admin consent. For more information on consenting to applications, see [Azure Active Directory consent framework](../develop/consent-framework.md).
The steps in this article apply to all applications that were added to your Azur
To review permissions granted to applications, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator.
+- A service principal owner who isn't an administrator can invalidate refresh tokens.
+ You can access the Azure AD portal to get contextual PowerShell scripts to perform the actions.
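The generated scripts typically rely on the AzureAD module. A hedged sketch of listing the delegated grants for one service principal; the object ID is a placeholder:

```powershell
Connect-AzureAD
$sp = Get-AzureADServicePrincipal -ObjectId "00001111-aaaa-2222-bbbb-3333cccc4444"

# List the delegated (OAuth2) permission grants held by this service principal.
Get-AzureADServicePrincipalOAuth2PermissionGrant -ObjectId $sp.ObjectId |
    Format-Table ConsentType, PrincipalId, Scope

# A grant can then be revoked by its object ID (destructive; shown commented out).
# Remove-AzureADOAuth2PermissionGrant -ObjectId <grantObjectId>
```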
To review application permissions:
1. Select the application that you want to restrict access to. 1. Select **Permissions**. In the command bar, select **Review permissions**. ![Screenshot of the review permissions window.](./media/manage-application-permissions/review-permissions.png)
-1. Give a reason for why you want to review permissions for the application by selecting any of the options listed after the question , **Why do you want to review permissions for this application?**
+1. Give a reason for why you want to review permissions for the application by selecting any of the options listed after the question, **Why do you want to review permissions for this application?**
Each option generates PowerShell scripts that enable you to control user access to the application and to review permissions granted to the application. For information about how to control user access to an application, see [How to remove a user's access to an application](methods-for-removing-user-access.md).
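For instance, a review along these lines can be scripted with Microsoft Graph PowerShell. A minimal sketch, assuming the Microsoft.Graph module is installed and with 'Contoso App' as a placeholder display name:

```powershell
Connect-MgGraph -Scopes "Application.Read.All"

# Find the service principal for the application under review.
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso App'"

# Delegated (OAuth2) permission grants made to the application.
Get-MgServicePrincipalOauth2PermissionGrant -ServicePrincipalId $sp.Id

# Application permissions (app role assignments) granted to the application.
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $sp.Id
```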
active-directory Manage Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-consent-requests.md
Previously updated : 11/25/2021 Last updated : 07/14/2022
Microsoft recommends that you [restrict user consent](../../active-directory/manage-apps/configure-user-consent.md) to allow users to consent only for apps from verified publishers, and only for permissions that you select. For apps that don't meet these criteria, the decision-making process will be centralized with your organization's security and identity administrator team.
-After you've disabled or restricted user consent, you have several important steps to take to help keep your organization secure as you continue to allow business-critical applications to be used. These steps are crucial to minimize impact on your organization's support team and IT administrators, and to help prevent the use of unmanaged accounts in third-party applications.
+After you've disabled or restricted user consent, you have several important steps to take to help keep your organization secure as you continue to allow business-critical applications to be used. These steps are crucial to minimize impact on your organization's support team and IT administrators, and to help prevent the use of un-managed accounts in third-party applications.
## Process changes and education
To minimize impact on trusted, business-critical applications that are already i
Granting tenant-wide admin consent is a sensitive operation. Permissions will be granted on behalf of the entire organization, and they can include permissions to attempt highly privileged operations. Examples of such operations are role management, full access to all mailboxes or all sites, and full user impersonation.
-Before you grant tenant-wide admin consent, it's important to ensure that you trust the application and the application publisher for the level of access you're granting. If you aren't confident that you understand who controls the application and why the application is requesting the permissions, do *not* grant consent.
+Before you grant tenant-wide admin consent, it's important to ensure that you trust the application, and the application publisher for the level of access you're granting. If you aren't confident that you understand who controls the application and why the application is requesting the permissions, do *not* grant consent.
When you're evaluating a request to grant admin consent, here are some recommendations to consider:
* Understand the permissions that are being requested.
- The permissions requested by the application are listed in the [consent prompt](../develop/application-consent-experience.md). Expanding the permission title displays the permission's description. The description for application permissions generally end in "without a signed-in user." The description for delegated permissions generally end with "on behalf of the signed-in user." Permissions for the Microsoft Graph API are described in [Microsoft Graph Permissions Reference](/graph/permissions-reference). Refer to the documentation for other APIs to understand the permissions they expose.
+ The permissions requested by the application are listed in the [consent prompt](../develop/application-consent-experience.md). Expanding the permission title displays the permission's description. The description for application permissions generally ends in "without a signed-in user." The description for delegated permissions generally ends with "on behalf of the signed-in user." Permissions for the Microsoft Graph API are described in [Microsoft Graph Permissions Reference](/graph/permissions-reference). Refer to the documentation for other APIs to understand the permissions they expose.
If you don't understand a permission that's being requested, do *not* grant consent.
When you're evaluating a request to grant admin consent, here are some recommend
For step-by-step instructions for granting tenant-wide admin consent from the Azure portal, see [Grant tenant-wide admin consent to an application](grant-admin-consent.md).
+## Revoke tenant-wide admin consent
+
+To revoke tenant-wide admin consent, you can review and revoke the permissions previously granted to the application. For more information, see [Review permissions granted to applications](manage-application-permissions.md). You can also remove users' access to the application by [disabling user sign-in to the application](disable-user-sign-in-portal.md) or by [hiding the application](hide-application-from-user-portal.md) so that it doesn't appear in the My Apps portal.
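A sketch of what the revoke step can look like in Microsoft Graph PowerShell, assuming the Microsoft.Graph module is installed ('Contoso App' is a placeholder display name):

```powershell
Connect-MgGraph -Scopes "DelegatedPermissionGrant.ReadWrite.All", "Application.ReadWrite.All"

# Find the application's service principal (placeholder display name).
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Contoso App'"

# Remove each delegated permission grant to revoke the consent.
Get-MgServicePrincipalOauth2PermissionGrant -ServicePrincipalId $sp.Id |
    ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }

# Optionally block sign-in to the application as well.
Update-MgServicePrincipal -ServicePrincipalId $sp.Id -AccountEnabled:$false
```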
+ ### Grant consent on behalf of a specific user

Instead of granting consent for the entire organization, an administrator can also use the [Microsoft Graph API](/graph/use-the-api) to grant consent to delegated permissions on behalf of a single user. For a detailed example that uses Microsoft Graph PowerShell, see [Grant consent on behalf of a single user by using PowerShell](grant-consent-single-user.md).

## Limit user access to applications
-User access to applications can still be limited even when tenant-wide admin consent has been granted. For more information about how to require user assignment to an application, see [Methods for assigning users and groups](./assign-user-or-group-access-portal.md). Administrators can also limit user access to applications by disabling all future user consent operations to any application.
+User access to applications can still be limited even when tenant-wide admin consent has been granted. To limit user access, require user assignment to an application. For more information, see [Methods for assigning users and groups](./assign-user-or-group-access-portal.md). Administrators can also limit user access to applications by disabling all future user consent operations to any application.
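For example, all future user consent operations can be disabled with Microsoft Graph PowerShell. A minimal sketch, assuming the Microsoft.Graph module; assigning an empty set of permission grant policies turns user consent off:

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

# An empty list of permission grant policies disables user consent
# for all future consent operations in the tenant.
Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{
    "PermissionGrantPoliciesAssigned" = @()
}
```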
For a broader overview, including how to handle more complex scenarios, see [Use Azure Active Directory (Azure AD) for application access management](what-is-access-management.md).
active-directory Birst Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/birst-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Birst Agile Business Analytics | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Birst Agile Business Analytics'
description: Learn how to configure single sign-on between Azure Active Directory and Birst Agile Business Analytics.
Previously updated : 02/07/2019 Last updated : 07/08/2022
-# Tutorial: Azure Active Directory integration with Birst Agile Business Analytics
+# Tutorial: Azure AD SSO integration with Birst Agile Business Analytics
-In this tutorial, you learn how to integrate Birst Agile Business Analytics with Azure Active Directory (Azure AD).
-Integrating Birst Agile Business Analytics with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Birst Agile Business Analytics with Azure Active Directory (Azure AD). When you integrate Birst Agile Business Analytics with Azure AD, you can:
-* You can control in Azure AD who has access to Birst Agile Business Analytics.
-* You can enable your users to be automatically signed-in to Birst Agile Business Analytics (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Birst Agile Business Analytics.
+* Enable your users to be automatically signed-in to Birst Agile Business Analytics with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Birst Agile Business Analytics, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Birst Agile Business Analytics single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Birst Agile Business Analytics single sign-on (SSO) enabled subscription.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Birst Agile Business Analytics supports **SP** initiated SSO
-
-## Adding Birst Agile Business Analytics from the gallery
-
-To configure the integration of Birst Agile Business Analytics into Azure AD, you need to add Birst Agile Business Analytics from the gallery to your list of managed SaaS apps.
-
-**To add Birst Agile Business Analytics from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Birst Agile Business Analytics**, select **Birst Agile Business Analytics** from result panel then click **Add** button to add the application.
+* Birst Agile Business Analytics supports **SP** initiated SSO.
- ![Birst Agile Business Analytics in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Birst Agile Business Analytics based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Birst Agile Business Analytics needs to be established.
-
-To configure and test Azure AD single sign-on with Birst Agile Business Analytics, you need to complete the following building blocks:
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Birst Agile Business Analytics Single Sign-On](#configure-birst-agile-business-analytics-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Birst Agile Business Analytics test user](#create-birst-agile-business-analytics-test-user)** - to have a counterpart of Britta Simon in Birst Agile Business Analytics that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Add Birst Agile Business Analytics from the gallery
-### Configure Azure AD single sign-on
+To configure the integration of Birst Agile Business Analytics into Azure AD, you need to add Birst Agile Business Analytics from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Birst Agile Business Analytics** in the search box.
+1. Select **Birst Agile Business Analytics** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with Birst Agile Business Analytics, perform the following steps:
+## Configure and test Azure AD SSO for Birst Agile Business Analytics
-1. In the [Azure portal](https://portal.azure.com/), on the **Birst Agile Business Analytics** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Birst Agile Business Analytics using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Birst Agile Business Analytics.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Birst Agile Business Analytics, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Birst Agile Business Analytics SSO](#configure-birst-agile-business-analytics-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Birst Agile Business Analytics test user](#create-birst-agile-business-analytics-test-user)** - to have a counterpart of B.Simon in Birst Agile Business Analytics that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Birst Agile Business Analytics** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Birst Agile Business Analytics Domain and URLs single sign-on information](common/sp-intiated.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
- In the **Sign-on URL** textbox, type a URL using the following pattern: `https://login.bws.birst.com/SAMLSSO/Services.aspx?birst.idpid=TENANTIDPID`
+ In the **Sign-on URL** textbox, type a URL using the following pattern: `https://login.bws.birst.com/SAMLSSO/Services.aspx?birst.idpid=<TENANTIDPID>`
The URL depends on the datacenter where your Birst account is located:
- * For US datacenter use following the pattern: `https://login.bws.birst.com/SAMLSSO/Services.aspx?birst.idpid=TENANTIDPID`
+ * For the US datacenter, use the following pattern: `https://login.bws.birst.com/SAMLSSO/Services.aspx?birst.idpid=<TENANTIDPID>`
- * For Europe datacenter use the following pattern: `https://login.eu1.birst.com/SAMLSSO/Services.aspx?birst.idpid=TENANTIDPID`
+ * For the Europe datacenter, use the following pattern: `https://login.eu1.birst.com/SAMLSSO/Services.aspx?birst.idpid=<TENANTIDPID>`
> [!NOTE]
> This value is not real. Update the value with the actual Sign-On URL. Contact [Birst Agile Business Analytics Client support team](mailto:info@birst.com) to get the value.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/certificatebase64.png)
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
-6. On the **Set up Birst Agile Business Analytics** section, copy the appropriate URL(s) as per your requirement.
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. On the **Set up Birst Agile Business Analytics** section, copy the appropriate URL(s) as per your requirement.
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Birst Agile Business Analytics Single Sign-On
-
-To configure single sign-on on **Birst Agile Business Analytics** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Birst Agile Business Analytics support team](mailto:info@birst.com). They set this setting to have the SAML SSO connection set properly on both sides.
-
-> [!NOTE]
-> Mention to Birst team that this integration needs SHA256 Algorithm (SHA1 will not be supported) so that they can set the SSO on the appropriate server like **app2101** etc.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
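If you prefer to script this step, the same test user can be created with Microsoft Graph PowerShell. A minimal sketch, assuming the Microsoft.Graph module is installed; `contoso.com` and the password are placeholders:

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Password profile for the new test user; replace the placeholder.
$passwordProfile = @{ Password = "<strong-password>" }

# Create the B.Simon test user.
New-MgUser -DisplayName "B.Simon" `
           -UserPrincipalName "B.Simon@contoso.com" `
           -MailNickname "B.Simon" `
           -AccountEnabled `
           -PasswordProfile $passwordProfile
```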
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Birst Agile Business Analytics.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Birst Agile Business Analytics**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Birst Agile Business Analytics**.
-
- ![The Birst Agile Business Analytics link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Birst Agile Business Analytics.
- ![The "Users and groups" link](common/users-groups-blade.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Birst Agile Business Analytics**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+## Configure Birst Agile Business Analytics SSO
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+To configure single sign-on on **Birst Agile Business Analytics** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Birst Agile Business Analytics support team](mailto:info@birst.com). They set this setting to have the SAML SSO connection set properly on both sides.
-7. In the **Add Assignment** dialog click the **Assign** button.
+> [!NOTE]
+> Mention to Birst team that this integration needs SHA256 Algorithm (SHA1 will not be supported) so that they can set the SSO on the appropriate server like **app2101** etc.
### Create Birst Agile Business Analytics test user

In this section, you create a user called Britta Simon in Birst Agile Business Analytics. Work with [Birst Agile Business Analytics support team](mailto:info@birst.com) to add the users in the Birst Agile Business Analytics platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Birst Agile Business Analytics tile in the Access Panel, you should be automatically signed in to the Birst Agile Business Analytics for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This redirects to the Birst Agile Business Analytics Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Birst Agile Business Analytics Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Birst Agile Business Analytics tile in My Apps, you're redirected to the Birst Agile Business Analytics Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Birst Agile Business Analytics, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Blue Access For Members Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blue-access-for-members-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Blue Access for Members (BAM) | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Blue Access for Members (BAM)'
description: Learn how to configure single sign-on between Azure Active Directory and Blue Access for Members (BAM).
Previously updated : 11/06/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Blue Access for Members (BAM)
+# Tutorial: Azure AD SSO integration with Blue Access for Members (BAM)
In this tutorial, you'll learn how to integrate Blue Access for Members (BAM) with Azure Active Directory (Azure AD). When you integrate Blue Access for Members (BAM) with Azure AD, you can:
* Enable your users to be automatically signed-in to Blue Access for Members (BAM) with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Blue Access for Members (BAM) single sign-on (SSO) enabled subscription.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
+* Blue Access for Members (BAM) supports **IDP** initiated SSO.
-* Blue Access for Members (BAM) supports **IDP** initiated SSO
----
-## Adding Blue Access for Members (BAM) from the gallery
+## Add Blue Access for Members (BAM) from the gallery
To configure the integration of Blue Access for Members (BAM) into Azure AD, you need to add Blue Access for Members (BAM) from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Blue Access for Members (BAM)** in the search box.
1. Select **Blue Access for Members (BAM)** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for Blue Access for Members (BAM)
+## Configure and test Azure AD SSO for Blue Access for Members (BAM)
Configure and test Azure AD SSO with Blue Access for Members (BAM) using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Blue Access for Members (BAM).
-To configure and test Azure AD SSO with Blue Access for Members (BAM), complete the following building blocks:
+To configure and test Azure AD SSO with Blue Access for Members (BAM), perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Blue Access for Members (BAM) SSO](#configure-blue-access-for-members-bam-sso)** - to configure the single sign-on settings on application side.
- * **[Create Blue Access for Members (BAM) test user](#create-blue-access-for-members-bam-test-user)** - to have a counterpart of B.Simon in Blue Access for Members (BAM) that is linked to the Azure AD representation of user.
+ 1. **[Create Blue Access for Members (BAM) test user](#create-blue-access-for-members-bam-test-user)** - to have a counterpart of B.Simon in Blue Access for Members (BAM) that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Blue Access for Members (BAM)** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Blue Access for Members (BAM)** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** text box, type a URL using the following pattern:
+ a. In the **Identifier** text box, type a value using the following pattern:
   `<Custom Domain Value>`

   b. In the **Reply URL** text box, type a URL using the following pattern:
Follow these steps to enable Azure AD SSO in the Azure portal.
1. The Blue Access for Members (BAM) application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of attribute mappings.](common/default-attributes.png "Attributes")
1. In addition to the above, the Blue Access for Members (BAM) application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
1. On the **Set up Blue Access for Members (BAM)** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Blue Access for Members (BAM)**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
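The assignment can also be scripted with Microsoft Graph PowerShell. A sketch under the assumption that the Microsoft.Graph module is installed; the display name and UPN are placeholders, and the empty GUID stands for the Default Access app role:

```powershell
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All", "User.Read.All"

# Look up the enterprise application (service principal) and the test user.
$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Blue Access for Members (BAM)'"
$user = Get-MgUser -Filter "userPrincipalName eq 'B.Simon@contoso.com'"

# Assign the user to the application with the Default Access role.
New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
    -PrincipalId $user.Id -ResourceId $sp.Id -AppRoleId ([Guid]::Empty)
```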
In this section, you create a user called B.Simon in Blue Access for Members (BA
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Blue Access for Members (BAM) tile in the Access Panel, you should be automatically signed in to the Blue Access for Members (BAM) for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Blue Access for Members (BAM) for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Blue Access for Members (BAM) tile in My Apps, you should be automatically signed in to the Blue Access for Members (BAM) for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Blue Access for Members (BAM) with Azure AD](https://aad.portal.azure.com/)
+Once you configure Blue Access for Members (BAM), you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Folloze Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/folloze-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Folloze | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Folloze'
description: Learn how to configure single sign-on between Azure Active Directory and Folloze.
Previously updated : 10/23/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Folloze
+# Tutorial: Azure AD SSO integration with Folloze
In this tutorial, you'll learn how to integrate Folloze with Azure Active Directory (Azure AD). When you integrate Folloze with Azure AD, you can:
* Enable your users to be automatically signed-in to Folloze with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Folloze single sign-on (SSO) enabled subscription.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
+* Folloze supports **IDP** initiated SSO.
-* Folloze supports **IDP** initiated SSO
-
-* Folloze supports **Just In Time** user provisioning
+* Folloze supports **Just In Time** user provisioning.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Folloze from the gallery
+## Add Folloze from the gallery
To configure the integration of Folloze into Azure AD, you need to add Folloze from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Folloze** in the search box.
1. Select **Folloze** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD single sign-on for Folloze
+## Configure and test Azure AD SSO for Folloze
Configure and test Azure AD SSO with Folloze using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Folloze.
-To configure and test Azure AD SSO with Folloze, complete the following building blocks:
+To configure and test Azure AD SSO with Folloze, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Folloze SSO](#configure-folloze-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Folloze test user](#create-folloze-test-user)** - to have a counterpart of B.Simon in Folloze that is linked to the Azure AD representation of user.
+ 1. **[Create Folloze test user](#create-folloze-test-user)** - to have a counterpart of B.Simon in Folloze that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Folloze** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Folloze** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
+1. On the **Basic SAML Configuration** section, the application is pre-configured in **IDP** initiated mode and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
1. The Folloze application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/edit-attribute.png)
+ ![Screenshot shows the image of attributes configuration.](common/edit-attribute.png "Attributes")
1. In addition to the above, the Folloze application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirement.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
1. On the **Set up Folloze** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Folloze**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called Britta Simon is created in Folloze. Folloze suppo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Folloze tile in the Access Panel, you should be automatically signed in to the Folloze for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Folloze for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Folloze tile in My Apps, you should be automatically signed in to the Folloze for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Folloze with Azure AD](https://aad.portal.azure.com/)
+Once you configure Folloze, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Freshgrade Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freshgrade-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with FreshGrade | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with FreshGrade'
description: Learn how to configure single sign-on between Azure Active Directory and FreshGrade.
Previously updated : 02/15/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with FreshGrade
+# Tutorial: Azure AD SSO integration with FreshGrade
-In this tutorial, you learn how to integrate FreshGrade with Azure Active Directory (Azure AD).
-Integrating FreshGrade with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate FreshGrade with Azure Active Directory (Azure AD). When you integrate FreshGrade with Azure AD, you can:
-* You can control in Azure AD who has access to FreshGrade.
-* You can enable your users to be automatically signed-in to FreshGrade (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to FreshGrade.
+* Enable your users to be automatically signed-in to FreshGrade with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with FreshGrade, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* FreshGrade single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* FreshGrade single sign-on (SSO) enabled subscription.
+* Along with the Cloud Application Administrator role, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* FreshGrade supports **SP** initiated SSO
+* FreshGrade supports **SP** initiated SSO.
-## Adding FreshGrade from the gallery
+## Add FreshGrade from the gallery
To configure the integration of FreshGrade into Azure AD, you need to add FreshGrade from the gallery to your list of managed SaaS apps.
-**To add FreshGrade from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **FreshGrade**, select **FreshGrade** from result panel then click **Add** button to add the application.
-
- ![FreshGrade in the results list](common/search-new-app.png)
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **FreshGrade** in the search box.
+1. Select **FreshGrade** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for FreshGrade
-In this section, you configure and test Azure AD single sign-on with FreshGrade based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in FreshGrade needs to be established.
+Configure and test Azure AD SSO with FreshGrade using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FreshGrade.
-To configure and test Azure AD single sign-on with FreshGrade, you need to complete the following building blocks:
+To configure and test Azure AD SSO with FreshGrade, perform the following steps:
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure FreshGrade Single Sign-On](#configure-freshgrade-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create FreshGrade test user](#create-freshgrade-test-user)** - to have a counterpart of Britta Simon in FreshGrade that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure FreshGrade SSO](#configure-freshgrade-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create FreshGrade test user](#create-freshgrade-test-user)** - to have a counterpart of B.Simon in FreshGrade that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD single sign-on
+## Configure Azure AD SSO
-In this section, you enable Azure AD single sign-on in the Azure portal.
+Follow these steps to enable Azure AD SSO in the Azure portal.
-To configure Azure AD single sign-on with FreshGrade, perform the following steps:
+1. In the Azure portal, on the **FreshGrade** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-1. In the [Azure portal](https://portal.azure.com/), on the **FreshGrade** application integration page, select **Single sign-on**.
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Configure single sign-on link](common/select-sso.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ a. In the **Identifier (Entity ID)** textbox, type a URL using one of the following patterns:
- ![Single sign-on select mode](common/select-saml-option.png)
+ | **Identifier** |
+ |-|
+ |`https://login.onboarding.freshgrade.com:443/saml/metadata/alias/<instancename>`|
+ |`https://login.freshgrade.com:443/saml/metadata/alias/<instancename>`|
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Set up Single Sign-On with SAML** page, perform the following steps:
-
- ![FreshGrade Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign-on URL** textbox, type a URL using the following patterns:
-
- ```http
- https://<subdomain>.freshgrade.com/login
- https://<subdomain>.onboarding.freshgrade.com/login
- ```
-
- b. In the **Identifier (Entity ID)** textbox, type a URL using the following patterns:
-
- ```http
- https://login.onboarding.freshgrade.com:443/saml/metadata/alias/<instancename>
- https://login.freshgrade.com:443/saml/metadata/alias/<instancename>
- ```
+ b. In the **Sign-on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign-on URL** |
+ ||
+ |`https://<subdomain>.freshgrade.com/login`|
+ |`https://<subdomain>.onboarding.freshgrade.com/login`|
> [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL and Identifier. Contact [FreshGrade Client support team](mailto:support@freshgrade.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-5. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [FreshGrade Client support team](mailto:support@freshgrade.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- ![The Certificate download link](common/copy-metadataurl.png)
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
-### Configure FreshGrade Single Sign-On
-
-To configure single sign-on on **FreshGrade** side, you need to send the **App Federation Metadata Url** to [FreshGrade support team](mailto:support@freshgrade.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
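As an optional sanity check, you can fetch and parse the metadata from that URL. A sketch; the URL below shows the general shape of an App Federation Metadata Url, with `<tenant-id>` and `<app-id>` as placeholders for the values from your portal:

```powershell
# Placeholders: substitute your tenant ID and the application ID.
$metadataUrl = "https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"

# Download and parse the federation metadata document.
[xml]$metadata = (Invoke-WebRequest -Uri $metadataUrl -UseBasicParsing).Content

# The entity ID should match the Azure AD Identifier for your tenant.
$metadata.EntityDescriptor.entityID
```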
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
+In this section, you'll create a test user in the Azure portal called B.Simon.
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to FreshGrade.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **FreshGrade**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FreshGrade.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **FreshGrade**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **FreshGrade**.
+## Configure FreshGrade SSO
- ![The FreshGrade link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **FreshGrade** side, you need to send the **App Federation Metadata Url** to the [FreshGrade support team](mailto:support@freshgrade.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create FreshGrade test user In this section, you create a user called Britta Simon in FreshGrade. Work with [FreshGrade support team](mailto:support@freshgrade.com) to add the users in the FreshGrade platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the FreshGrade tile in the Access Panel, you should be automatically signed in to the FreshGrade for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click **Test this application** in the Azure portal. This redirects you to the FreshGrade Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the FreshGrade Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the FreshGrade tile in My Apps, you're redirected to the FreshGrade Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure FreshGrade, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Headspace Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/headspace-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Headspace'
+description: Learn how to configure single sign-on between Azure Active Directory and Headspace.
+ Last updated : 07/14/2022
+# Tutorial: Azure AD SSO integration with Headspace
+
+In this tutorial, you'll learn how to integrate Headspace with Azure Active Directory (Azure AD). When you integrate Headspace with Azure AD, you can:
+
+* Control in Azure AD who has access to Headspace.
+* Enable your users to be automatically signed-in to Headspace with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Headspace single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Headspace supports **SP** initiated SSO.
+* Headspace supports **Just In Time** user provisioning.
+
+## Add Headspace from the gallery
+
+To configure the integration of Headspace into Azure AD, you need to add Headspace from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Headspace** in the search box.
+1. Select **Headspace** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Headspace
+
+Configure and test Azure AD SSO with Headspace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Headspace.
+
+To configure and test Azure AD SSO with Headspace, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Headspace SSO](#configure-headspace-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Headspace test user](#create-headspace-test-user)** - to have a counterpart of B.Simon in Headspace that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Headspace** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `urn:auth0:<Auth0TenantName>:<CustomerConnectionName>`
+
+ b. In the **Reply URL** textbox, type a value using the following pattern:
+ `https://auth.<Environment>.headspace.com/login/callback?connection=<CustomerConnectionName>`
+
+ c. In the **Sign on URL** textbox, type a value using the following pattern:
+ `https://<Environment>.headspace.com/sso-login`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Headspace Client support team](mailto:ecosystem-integration-squad@headspace.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
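Purely as an illustration, assuming a hypothetical Auth0 tenant name `headspace-prod`, connection name `contoso`, and environment `my` (none of these are real values; request yours from the Headspace support team), the entries would take this shape:

```http
Identifier:  urn:auth0:headspace-prod:contoso
Reply URL:   https://auth.my.headspace.com/login/callback?connection=contoso
Sign on URL: https://my.headspace.com/sso-login
```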
+1. The Headspace application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of Headspace application.](common/default-attributes.png "Attributes")
+
+1. In addition to the attributes above, the Headspace application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review and adjust them to your requirements. A sketch of the resulting assertion follows the table.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | family_name | user.surname |
+ | userName | user.userprincipalname |
+
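For orientation only, here is a simplified sketch of the attribute statement these mappings produce in the SAML response. It is illustrative, not authoritative: namespaces are omitted and all values are hypothetical.

```xml
<!-- Simplified sketch; real assertions carry full saml: namespaces. -->
<AttributeStatement>
  <!-- email: mapped from user.mail -->
  <Attribute Name="email">
    <AttributeValue>B.Simon@contoso.com</AttributeValue>
  </Attribute>
  <!-- family_name: mapped from user.surname -->
  <Attribute Name="family_name">
    <AttributeValue>Simon</AttributeValue>
  </Attribute>
  <!-- userName: mapped from user.userprincipalname -->
  <Attribute Name="userName">
    <AttributeValue>B.Simon@contoso.com</AttributeValue>
  </Attribute>
</AttributeStatement>
```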
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+1. On the **Set up Headspace** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
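For orientation, the values in the **Set up Headspace** section generally take the following shapes, where `<TenantID>` is your directory (tenant) ID. Copy the exact values from the portal rather than constructing them by hand:

```http
Login URL:           https://login.microsoftonline.com/<TenantID>/saml2
Azure AD Identifier: https://sts.windows.net/<TenantID>/
Logout URL:          https://login.microsoftonline.com/<TenantID>/saml2
```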
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Headspace.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Headspace**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Headspace SSO
+
+To configure single sign-on on the **Headspace** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Headspace support team](mailto:ecosystem-integration-squad@headspace.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Headspace test user
+
+In this section, a user called B.Simon is created in Headspace. Headspace supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Headspace, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects you to the Headspace Sign-on URL, where you can initiate the login flow.
+
+* Go to the Headspace Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Headspace tile in My Apps, you're redirected to the Headspace Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Headspace, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Infogix Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infogix-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Infogix Data3Sixty Govern | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Infogix Data3Sixty Govern'
description: Learn how to configure single sign-on between Azure Active Directory and Infogix Data3Sixty Govern.
Previously updated : 03/14/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with Infogix Data3Sixty Govern
+# Tutorial: Azure AD SSO integration with Infogix Data3Sixty Govern
-In this tutorial, you learn how to integrate Infogix Data3Sixty Govern with Azure Active Directory (Azure AD).
-Integrating Infogix Data3Sixty Govern with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Infogix Data3Sixty Govern with Azure Active Directory (Azure AD). When you integrate Infogix Data3Sixty Govern with Azure AD, you can:
-* You can control in Azure AD who has access to Infogix Data3Sixty Govern.
-* You can enable your users to be automatically signed-in to Infogix Data3Sixty Govern (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Infogix Data3Sixty Govern.
+* Enable your users to be automatically signed-in to Infogix Data3Sixty Govern with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Infogix Data3Sixty Govern, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Infogix Data3Sixty Govern single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Infogix Data3Sixty Govern single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Infogix Data3Sixty Govern supports **SP and IDP** initiated SSO
-* Infogix Data3Sixty Govern supports **Just In Time** user provisioning
-
-## Adding Infogix Data3Sixty Govern from the gallery
-
-To configure the integration of Infogix Data3Sixty Govern into Azure AD, you need to add Infogix Data3Sixty Govern from the gallery to your list of managed SaaS apps.
-
-**To add Infogix Data3Sixty Govern from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Infogix Data3Sixty Govern supports **SP and IDP** initiated SSO.
+* Infogix Data3Sixty Govern supports **Just In Time** user provisioning.
-4. In the search box, type **Infogix Data3Sixty Govern**, select **Infogix Data3Sixty Govern** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Infogix Data3Sixty Govern in the results list](common/search-new-app.png)
+## Add Infogix Data3Sixty Govern from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Infogix Data3Sixty Govern based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Infogix Data3Sixty Govern needs to be established.
-
-To configure and test Azure AD single sign-on with Infogix Data3Sixty Govern, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Infogix Data3Sixty Govern Single Sign-On](#configure-infogix-data3sixty-govern-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Infogix Data3Sixty Govern test user](#create-infogix-data3sixty-govern-test-user)** - to have a counterpart of Britta Simon in Infogix Data3Sixty Govern that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of Infogix Data3Sixty Govern into Azure AD, you need to add Infogix Data3Sixty Govern from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Infogix Data3Sixty Govern** in the search box.
+1. Select **Infogix Data3Sixty Govern** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with Infogix Data3Sixty Govern, perform the following steps:
+## Configure and test Azure AD SSO for Infogix Data3Sixty Govern
-1. In the [Azure portal](https://portal.azure.com/), on the **Infogix Data3Sixty Govern** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Infogix Data3Sixty Govern using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Infogix Data3Sixty Govern.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Infogix Data3Sixty Govern, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Infogix Data3Sixty Govern SSO](#configure-infogix-data3sixty-govern-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create Infogix Data3Sixty Govern test user](#create-infogix-data3sixty-govern-test-user)** - to have a counterpart of B.Simon in Infogix Data3Sixty Govern that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Infogix Data3Sixty Govern** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, If you wish to configure the application in **IDP** initiated mode, perform the following steps:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the URL:
`https://data3sixty.com/ui` b. In the **Reply URL** text box, type a URL using the following pattern: `https://<subdomain>.data3sixty.com/sso/acs`
-5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<subdomain>.data3sixty.com`
To configure Azure AD single sign-on with Infogix Data3Sixty Govern, perform the
> [!NOTE] > These values are not real. Update these values with the actual Reply URL and Sign-On URL. Contact [Infogix Data3Sixty Govern Client support team](mailto:data3sixtysupport@infogix.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-6. Infogix Data3Sixty Govern application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on application integration page. On the **Set up Single Sign-On with SAML** page, click **Edit** button to open **User Attributes** dialog.
+1. The Infogix Data3Sixty Govern application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on the application integration page. On the **Set up Single Sign-On with SAML** page, click the **Edit** button to open the **User Attributes** dialog.
- ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png)
+ ![Screenshot shows User Attributes with the Edit icon selected.](common/edit-attribute.png "Attributes")
-7. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps:
+1. In the **User Claims** section of the **User Attributes** dialog, edit the claims by using the **Edit** icon, or add claims by using **Add new claim**, to configure the SAML token attributes as shown in the image above. Then perform the following steps:
| Name | Source Attribute|
| --| -- |
To configure Azure AD single sign-on with Infogix Data3Sixty Govern, perform the
a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png)
+ ![Screenshot shows User claims with the option to Add new claim.](common/new-save-attribute.png "Claims")
- ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png)
+ ![Screenshot shows the Manage user claims dialog box where you can enter the values described.](common/new-attribute-details.png "Values")
b. In the **Name** textbox, type the attribute name shown for that row.
To configure Azure AD single sign-on with Infogix Data3Sixty Govern, perform the
g. Click **Save**.
-8. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/certificateraw.png)
-
-9. On the **Set up Infogix Data3Sixty Govern** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure AD Identifier
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Raw)** from the given options as per your requirement and save it on your computer.
- c. Logout URL
+ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
-### Configure Infogix Data3Sixty Govern Single Sign-On
+1. On the **Set up Infogix Data3Sixty Govern** section, copy the appropriate URL(s) as per your requirement.
-To configure single sign-on on **Infogix Data3Sixty Govern** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Infogix Data3Sixty Govern support team](mailto:data3sixtysupport@infogix.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Infogix Data3Sixty Govern.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Infogix Data3Sixty Govern**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Infogix Data3Sixty Govern.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Infogix Data3Sixty Govern**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-2. In the applications list, select **Infogix Data3Sixty Govern**.
+## Configure Infogix Data3Sixty Govern SSO
- ![The Infogix Data3Sixty Govern link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Infogix Data3Sixty Govern** side, you need to send the downloaded **Certificate (Raw)** and the appropriate copied URLs from the Azure portal to the [Infogix Data3Sixty Govern support team](mailto:data3sixtysupport@infogix.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Infogix Data3Sixty Govern test user
In this section, a user called Britta Simon is created in Infogix Data3Sixty Gov
> [!Note] > If you need to create a user manually, contact [Infogix Data3Sixty Govern support team](mailto:data3sixtysupport@infogix.com).
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects you to the Infogix Data3Sixty Govern Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Infogix Data3Sixty Govern Sign-on URL directly and initiate the login flow from there.
-When you click the Infogix Data3Sixty Govern tile in the Access Panel, you should be automatically signed in to the Infogix Data3Sixty Govern for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Infogix Data3Sixty Govern instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Infogix Data3Sixty Govern tile in My Apps: if configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Infogix Data3Sixty Govern instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Infogix Data3Sixty Govern, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Intacct Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intacct-tutorial.md
Previously updated : 03/16/2022 Last updated : 07/14/2022
To configure and test Azure AD SSO with Sage Intacct, perform the following step
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
-2. **[Configure Sage Intacct SSO](#configure-sage-intacct-sso)** - to configure the Single Sign-On settings on application side.
+2. **[Configure Sage Intacct SSO](#configure-sage-intacct-sso)** - to configure the single sign-on settings on the application side.
1. **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)** - to have a counterpart of B.Simon in Sage Intacct that is linked to the Azure AD representation of user. 6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, perform the following step:
+1. On the **Basic SAML Configuration** section, perform the following steps:
- In the **Reply URL** text box, type one of the following URLs:
+ a. In the **Identifier (Entity ID)** text box, type a unique identifier for your Sage Intacct company, such as `https://saml.intacct.com`.
- | Reply URL |
- | - |
- | `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.) |
- | `https://www-p02.intacct.com/ia/acct/sso_response.phtml` |
- | `https://www-p03.intacct.com/ia/acct/sso_response.phtml` |
- | `https://www-p04.intacct.com/ia/acct/sso_response.phtml` |
- | `https://www-p05.intacct.com/ia/acct/sso_response.phtml` |
- |
+ b. In the **Reply URL** text box, add the following URLs:
-1. The Sage Intacct application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open User Attributes dialog..
+ | Reply URL |
+ | - |
+ | `https://www.intacct.com/ia/acct/sso_response.phtml` (Select as the default.) |
+ | `https://www-p02.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www-p03.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www-p04.intacct.com/ia/acct/sso_response.phtml` |
+ | `https://www-p05.intacct.com/ia/acct/sso_response.phtml` |
+
+
+1. The Sage Intacct application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click the **Edit** icon to open the **User Attributes** dialog.
![image](common/edit-attribute.png)
-1. In addition to above, Sage Intacct application expects few more attributes to be passed back in SAML response. In the **User Attributes & Claims** dialog, perform the following steps to add SAML token attribute as shown in the below table:
+1. In the **Attributes & Claims** dialog, perform the following steps:
+ a. Edit **Unique User Identifier (Name ID)**, set the source attribute to **user.mail**, verify that the name identifier format is set to **Email address**, and then click **Save**.
+
+ b. Remove all of the default additional claims attributes by clicking **...** and then **Delete**.
+
| Attribute Name | Source Attribute|
| | |
| Company Name | **Sage Intacct Company ID** |
| name | `<User ID>`|

> [!NOTE]
- > Enter the `<User ID>` value should be same as the Sage Intacct **User ID**, which you enter in the **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)**, which is explained later in the tutorial
+ > The `<User ID>` value should be the same as the Sage Intacct **User ID** that you enter in **[Set up individual users in Intacct](#set-up-individual-users-in-intacct)**, explained later in this tutorial. Usually, this is the prefix of the email address; in that case, you can set the source as a transformation and use ExtractMailPrefix() on the user.mail parameter. For example (hypothetical value), ExtractMailPrefix() applied to `B.Simon@contoso.com` yields `B.Simon`. A sketch of the resulting assertion follows the claim steps below.
- a. Click **Add new claim** to open the **Manage user claims** dialog.
+ c. Click **Add new claim** to open the **Manage user claims** dialog.
- b. In the **Name** textbox, type the attribute name shown for that row.
+ d. In the **Name** textbox, type the attribute name shown for that row.
- c. Leave the **Namespace** blank.
+ e. Leave the **Namespace** blank.
- d. Select Source as **Attribute**.
+ f. Select Source as **Attribute**.
- e. From the **Source attribute** list, type or select the attribute value shown for that row.
+ g. From the **Source attribute** list, type or select the attribute value shown for that row.
- f. Click **Ok**
+ h. Click **Ok**.
- g. Click **Save**.
+ i. Click **Save**.
+
+ > Repeat steps c-i to add both custom attributes.
+
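To visualize the end state these claim settings aim for, here is a simplified, namespace-free sketch of the relevant parts of the resulting assertion. All values are hypothetical; `CONTOSO-INC` stands in for your Sage Intacct company ID.

```xml
<!-- Simplified sketch; real assertions carry full saml: namespaces. -->
<Subject>
  <!-- Name ID: the user's email address (source attribute user.mail) -->
  <NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">B.Simon@contoso.com</NameID>
</Subject>
<AttributeStatement>
  <!-- Company Name: your Sage Intacct company ID (hypothetical value) -->
  <Attribute Name="Company Name">
    <AttributeValue>CONTOSO-INC</AttributeValue>
  </Attribute>
  <!-- name: the Intacct user ID, here ExtractMailPrefix() of user.mail -->
  <Attribute Name="name">
    <AttributeValue>B.Simon</AttributeValue>
  </Attribute>
</AttributeStatement>
```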
-1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Edit** to open the dialog. Click **...** next to the Active certificate and select **PEM certificate download** to download the certificate and save it to your local drive.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/certificate-base64-download.png)
-1. On the **Set up Sage Intacct** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up Sage Intacct** section, copy the **Login URL**; you'll use it later in the Sage Intacct configuration.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
b. As **Identity provider type**, select **SAML 2.0**.
- c. In **Issuer URL** textbox, paste the value of **Azure AD Identifier**, which you have copied from Azure portal.
+ c. In the **Issuer URL** textbox, paste the value of **Identifier (Entity ID)** that you created in the **Basic SAML Configuration** dialog.
d. In **Login URL** textbox, paste the value of **Login URL**, which you have copied from Azure portal.
- e. Open your **base-64** encoded certificate in notepad, copy the content of it into your clipboard, and then paste it to the **Certificate** box.
+ e. Open your **PEM**-encoded certificate in Notepad, copy its content to your clipboard, and then paste it into the **Certificate** box (see the format sketch after these steps).
f. Set **Requested authentication content type** to **Exact**.
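A quick sanity check for step e: a PEM certificate is a plain-text file framed by BEGIN/END markers, roughly like the following (payload elided):

```
-----BEGIN CERTIFICATE-----
MIIC8DCCAdigAwIBAgIQ... (Base64-encoded certificate body)
-----END CERTIFICATE-----
```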
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Set up individual users in Intacct
-When SSO is enabled for your company, you can individually require users to use SSO when logging in to your company. After you set up a user for SSO, the user will no longer be able to use a password to log in to your company directly. Instead, that user will need to use single sign-on and will be authenticated by your SSO identity provider as being an authorized user. Any users who aren't set up for SSO can continue to log in to your company using the basic signin page.
+When SSO is enabled for your company, you can individually require users to use SSO when logging in to your company. After you set up a user for SSO, the user will no longer be able to use a password to log in to your company directly. Instead, that user will need to use single sign-on and be authenticated by your SSO identity provider as an authorized user. Any users who are not set up for SSO can continue to log in to your company using the basic sign-in page.
**To enable SSO for a user, perform the following steps:**
-1. Sign in to your **Sage Intacct** tenant.
+1. Sign in to your **Sage Intacct** company.
1. Go to **Company**, click the **Admin** tab, then click **Users**.
When SSO is enabled for your company, you can individually require users to use
1. Locate the desired user and click **Edit** next to it.
- ![Edit the user](./media/intacct-tutorial/user-edit.png "edit")
+ ![Screenshot to Edit the user](./media/intacct-tutorial/user-edit.png "edit")
+
+1. Click the **Single sign-on** tab and type the **Federated SSO user ID**.
-1. Click **Single sign-on** tab and make sure that the **Federated SSO user ID** in below screenshot and the **Source Attribute** value which is mapped with the `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` in the **User Attributes** section in the Azure portal should be same.
+> [!NOTE]
+> This value maps to the **Unique User Identifier (Name ID)** configured in the **Attributes & Claims** dialog in the Azure portal; in this configuration, that is the user's email address.
- ![Screenshot shows the User Information section where you can enter the Federated S S O user i d.](./media/intacct-tutorial/user-information.png "User Information")
+![Screenshot shows the User Information section where you can enter the Federated S S O user i d.](./media/intacct-tutorial/user-information.png "User Information")
> [!NOTE] > To provision Azure AD user accounts, you can use other Sage Intacct user account creation tools or APIs that are provided by Sage Intacct.
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure Sage Intacct you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Sage Intacct, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Lift Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lift-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with LIFT | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with LIFT'
description: Learn how to configure single sign-on between Azure Active Directory and LIFT.
Previously updated : 03/11/2020 Last updated : 07/08/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with LIFT
+# Tutorial: Azure AD SSO integration with LIFT
In this tutorial, you'll learn how to integrate LIFT with Azure Active Directory (Azure AD). When you integrate LIFT with Azure AD, you can:
In this tutorial, you'll learn how to integrate LIFT with Azure Active Directory
* Enable your users to be automatically signed-in to LIFT with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * LIFT single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* LIFT supports **SP** initiated SSO
-* Once you configure LIFT you can enforce Session Control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session Control extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+* LIFT supports **SP** initiated SSO.
-## Adding LIFT from the gallery
+## Add LIFT from the gallery
To configure the integration of LIFT into Azure AD, you need to add LIFT from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **LIFT** in the search box. 1. Select **LIFT** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. -
-## Configure and test Azure AD single sign-on for LIFT
+## Configure and test Azure AD SSO for LIFT
Configure and test Azure AD SSO with LIFT using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LIFT.
-To configure and test Azure AD SSO with LIFT, complete the following building blocks:
+To configure and test Azure AD SSO with LIFT, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with LIFT, complete the following building bl
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **LIFT** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **LIFT** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<companyname>.portal.liftsoftware.nl/lift/secure`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<companyname>.portal.liftsoftware.nl/saml-metadata/<identifier>`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<companyname>.portal.liftsoftware.nl/lift/secure`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [LIFT Client support team](mailto:support@liftsoftware.nl) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [LIFT Client support team](mailto:support@liftsoftware.nl) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
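Purely as an illustration, assuming a hypothetical company name `contoso` (the `<identifier>` segment is issued by LIFT support and is left as a placeholder here), the entries would look like this:

```http
Identifier:  https://contoso.portal.liftsoftware.nl/saml-metadata/<identifier>
Sign on URL: https://contoso.portal.liftsoftware.nl/lift/secure
```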
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **LIFT**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen. 1. In the **Add Assignment** dialog, click the **Assign** button. - ## Configure LIFT SSO To configure single sign-on on **LIFT** side, you need to send the **App Federation Metadata Url** to [LIFT support team](mailto:support@liftsoftware.nl). They set this setting to have the SAML SSO connection set properly on both sides.
In this section, you create a user called B.Simon in LIFT. Work with [LIFT suppo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the LIFT tile in the Access Panel, you should be automatically signed in to the LIFT for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click **Test this application** in the Azure portal. This redirects you to the LIFT Sign-on URL, where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to the LIFT Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the LIFT tile in My Apps, you're redirected to the LIFT Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [Try LIFT with Azure AD](https://aad.portal.azure.com/)
+## Next steps
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+Once you configure LIFT, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Maxxpoint Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maxxpoint-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with MaxxPoint | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with MaxxPoint'
description: Learn how to configure single sign-on between Azure Active Directory and MaxxPoint.
Previously updated : 02/21/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with MaxxPoint
+# Tutorial: Azure AD SSO integration with MaxxPoint
-In this tutorial, you learn how to integrate MaxxPoint with Azure Active Directory (Azure AD).
-Integrating MaxxPoint with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate MaxxPoint with Azure Active Directory (Azure AD). When you integrate MaxxPoint with Azure AD, you can:
-* You can control in Azure AD who has access to MaxxPoint.
-* You can enable your users to be automatically signed-in to MaxxPoint (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to MaxxPoint.
+* Enable your users to be automatically signed-in to MaxxPoint with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with MaxxPoint, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* MaxxPoint single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* MaxxPoint single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* MaxxPoint supports **SP** and **IDP** initiated SSO
+* MaxxPoint supports **SP** and **IDP** initiated SSO.
-## Adding MaxxPoint from the gallery
+## Add MaxxPoint from the gallery
To configure the integration of MaxxPoint into Azure AD, you need to add MaxxPoint from the gallery to your list of managed SaaS apps.
-**To add MaxxPoint from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **MaxxPoint**, select **MaxxPoint** from result panel then click **Add** button to add the application.
-
- ![MaxxPoint in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **MaxxPoint** in the search box.
+1. Select **MaxxPoint** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
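If you prefer to script this step, the gallery is also exposed through the Microsoft Graph `applicationTemplates` API. The following is a minimal sketch, not part of the tutorial itself; the bearer token is a placeholder and assumes a principal with permission to create applications (for example, `Application.ReadWrite.All`).

```python
# Hedged sketch: add a gallery app by script instead of the portal, using
# Microsoft Graph's applicationTemplates API. TOKEN is a placeholder for a
# valid access token; this is illustrative, not the tutorial's required path.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer TOKEN"}  # placeholder token

# Find the MaxxPoint gallery template by display name.
resp = requests.get(
    f"{GRAPH}/applicationTemplates",
    params={"$filter": "displayName eq 'MaxxPoint'"},
    headers=headers,
)
template = resp.json()["value"][0]

# Instantiating the template creates the application and its service principal.
requests.post(
    f"{GRAPH}/applicationTemplates/{template['id']}/instantiate",
    json={"displayName": "MaxxPoint"},
    headers=headers,
)
```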
-In this section, you configure and test Azure AD single sign-on with MaxxPoint based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in MaxxPoint needs to be established.
+## Configure and test Azure AD SSO for MaxxPoint
-To configure and test Azure AD single sign-on with MaxxPoint, you need to complete the following building blocks:
+Configure and test Azure AD SSO with MaxxPoint using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in MaxxPoint.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure MaxxPoint Single Sign-On](#configure-maxxpoint-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create MaxxPoint test user](#create-maxxpoint-test-user)** - to have a counterpart of Britta Simon in MaxxPoint that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with MaxxPoint, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure MaxxPoint SSO](#configure-maxxpoint-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create MaxxPoint test user](#create-maxxpoint-test-user)** - to have a counterpart of B.Simon in MaxxPoint that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with MaxxPoint, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **MaxxPoint** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **MaxxPoint** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, the user does not have to perform any step as the app is already pre-integrated with Azure.
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, the user does not have to perform any step as the app is already pre-integrated with Azure.
-
- ![Screenshot shows Basic SAML Configuration.](common/preintegrated.png)
-
-5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
In the **Sign on URL** text box, type a URL using the following pattern:

   `https://maxxpoint.westipc.com/default/sso/login/entity/<customer-id>-azure`
To configure Azure AD single sign-on with MaxxPoint, perform the following steps
> [!NOTE]
> This is not the real value. Update the value with the actual Sign on URL. Call the MaxxPoint team on 888-728-0950 to get this value.
-6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-7. On the **Set up MaxxPoint** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- b. Azure AD Identifier
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
- c. Logout URL
+1. On the **Set up MaxxPoint** section, copy the appropriate URL(s) as per your requirement.
-### Configure MaxxPoint Single Sign-On
-
-To get SSO configured for your application, call MaxxPoint support team on **888-728-0950** and they'll assist you further on how to provide them the downloaded **Federation Metadata XML** file.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
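Before handing the downloaded **Federation Metadata XML** to the application side, it can help to confirm what it contains. A short sketch, assuming the download was saved as `federation-metadata.xml` (a hypothetical filename), using only the Python standard library:

```python
# Hedged sketch: inspect the Federation Metadata XML downloaded from the
# Azure portal. The filename below is whatever you saved the download as.
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.parse("federation-metadata.xml").getroot()

# The entityID attribute is the Azure AD Identifier shown in the portal.
print("Azure AD Identifier:", root.get("entityID"))

# Each SingleSignOnService entry carries the Login URL for one SAML binding.
for sso in root.iterfind(".//md:IDPSSODescriptor/md:SingleSignOnService", NS):
    print(sso.get("Binding"), "->", sso.get("Location"))

# The base64-encoded signing certificate the application must trust.
cert = root.find(".//ds:X509Certificate", NS)
print("Signing certificate starts with:", cert.text.strip()[:32], "...")
```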
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to MaxxPoint.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to MaxxPoint.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **MaxxPoint**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **MaxxPoint**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
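The same assignment can also be made programmatically through the Microsoft Graph `appRoleAssignments` resource. A hedged sketch: `TOKEN`, `USER_ID`, and `SP_ID` are placeholders for a valid access token, B.Simon's user object ID, and the MaxxPoint service principal's object ID; the all-zero `appRoleId` corresponds to the "Default Access" role mentioned above.

```python
# Hedged sketch: assign a user to the app via Microsoft Graph instead of the
# portal. All IDs below are placeholders, not real tenant values.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer TOKEN"}  # placeholder token

assignment = {
    "principalId": "USER_ID",  # B.Simon's user object ID (placeholder)
    "resourceId": "SP_ID",     # MaxxPoint service principal object ID (placeholder)
    "appRoleId": "00000000-0000-0000-0000-000000000000",  # "Default Access" role
}
requests.post(
    f"{GRAPH}/users/USER_ID/appRoleAssignments",
    json=assignment,
    headers=headers,
)
```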
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure MaxxPoint SSO
-2. In the applications list, select **MaxxPoint**.
-
- ![The MaxxPoint link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To get SSO configured for your application, call MaxxPoint support team on **888-728-0950** and they'll assist you further on how to provide them the downloaded **Federation Metadata XML** file.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create MaxxPoint test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in MaxxPoint. Please call MaxxPoint support team on **888-728-0950** to add the users in the MaxxPoint application.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create MaxxPoint test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in MaxxPoint. Please call MaxxPoint support team on **888-728-0950** to add the users in the MaxxPoint application.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in Azure portal. This will redirect to MaxxPoint Sign-on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to MaxxPoint Sign-on URL directly and initiate the login flow from there.
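Behind the SP-initiated options above, the browser is redirected to the IdP's Login URL carrying a deflated, base64-encoded SAML `AuthnRequest` as the `SAMLRequest` query parameter (the HTTP-Redirect binding). A purely illustrative sketch; every URL and issuer value below is a placeholder, not a real MaxxPoint or tenant value:

```python
# Illustrative sketch of the SAML HTTP-Redirect binding used in SP-initiated
# sign-on. All URLs and the issuer are placeholders.
import base64, datetime, uuid, zlib
from urllib.parse import urlencode

authn_request = f"""<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_{uuid.uuid4()}" Version="2.0"
    IssueInstant="{datetime.datetime.utcnow().replace(microsecond=0).isoformat()}Z">
  <saml:Issuer>https://example.invalid/sp</saml:Issuer>
</samlp:AuthnRequest>"""

# The Redirect binding requires raw DEFLATE (no zlib header or checksum).
deflater = zlib.compressobj(9, zlib.DEFLATED, -15)
payload = deflater.compress(authn_request.encode()) + deflater.flush()

login_url = "https://login.microsoftonline.com/<tenant-id>/saml2"  # placeholder
print(login_url + "?" + urlencode({"SAMLRequest": base64.b64encode(payload).decode()}))
```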
-When you click the MaxxPoint tile in the Access Panel, you should be automatically signed in to the MaxxPoint for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the MaxxPoint for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the MaxxPoint tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the MaxxPoint instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure MaxxPoint you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Mindflash Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mindflash-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Mindflash | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Mindflash'
description: Learn how to configure single sign-on between Azure Active Directory and Mindflash.
Previously updated : 02/25/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with Mindflash
+# Tutorial: Azure AD SSO integration with Mindflash
-In this tutorial, you learn how to integrate Mindflash with Azure Active Directory (Azure AD).
-Integrating Mindflash with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Mindflash with Azure Active Directory (Azure AD). When you integrate Mindflash with Azure AD, you can:
-* You can control in Azure AD who has access to Mindflash.
-* You can enable your users to be automatically signed-in to Mindflash (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Mindflash.
+* Enable your users to be automatically signed-in to Mindflash with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Mindflash, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Mindflash single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Mindflash single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Mindflash supports **SP** initiated SSO
+* Mindflash supports **SP** initiated SSO.
-## Adding Mindflash from the gallery
+## Add Mindflash from the gallery
To configure the integration of Mindflash into Azure AD, you need to add Mindflash from the gallery to your list of managed SaaS apps.
-**To add Mindflash from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Mindflash**, select **Mindflash** from result panel then click **Add** button to add the application.
-
- ![Mindflash in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Mindflash** in the search box.
+1. Select **Mindflash** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Mindflash based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Mindflash needs to be established.
+## Configure and test Azure AD SSO for Mindflash
-To configure and test Azure AD single sign-on with Mindflash, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Mindflash using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Mindflash.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Mindflash Single Sign-On](#configure-mindflash-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Mindflash test user](#create-mindflash-test-user)** - to have a counterpart of Britta Simon in Mindflash that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Mindflash, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Mindflash SSO](#configure-mindflash-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Mindflash test user](#create-mindflash-test-user)** - to have a counterpart of B.Simon in Mindflash that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Mindflash, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Mindflash** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Mindflash** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. On the **Basic SAML Configuration** section, perform the following steps:
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![Mindflash Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<companyname>.mindflash.com`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
   `https://<companyname>.mindflash.com`

   > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Mindflash Client support team](https://www.mindflash.com/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up Mindflash** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Mindflash Client support team](https://www.mindflash.com/contact/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- a. Login URL
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- b. Azure Ad Identifier
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
- c. Logout URL
+1. On the **Set up Mindflash** section, copy the appropriate URL(s) as per your requirement.
-### Configure Mindflash Single Sign-On
-
-To configure single sign-on on **Mindflash** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Mindflash support team](https://www.mindflash.com/contact/). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
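If you create test users often, the same steps can be scripted against the Microsoft Graph `/users` endpoint. A minimal sketch; the token, domain, and password below are placeholders, not values from this tutorial:

```python
# Hedged sketch: create the B.Simon test user with Microsoft Graph instead of
# the portal. Replace the placeholder domain and password with real values.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer TOKEN"}  # placeholder token

user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "B.Simon",
    "userPrincipalName": "B.Simon@contoso.com",  # placeholder tenant domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "xWwvJ]6NMw+bWH-d",  # placeholder; note the real value down
    },
}
requests.post(f"{GRAPH}/users", json=user, headers=headers)
```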
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Mindflash.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Mindflash.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Mindflash**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Mindflash**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Mindflash SSO
-2. In the applications list, select **Mindflash**.
-
- ![The Mindflash link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **Mindflash** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Mindflash support team](https://www.mindflash.com/contact/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Mindflash test user
In order to enable Azure AD users to log into Mindflash, they must be provisione
1. Go to **Manage Users**.
- ![Manage Users](./media/mindflash-tutorial/ic787140.png "Manage Users")
+ ![Screenshot shows the Manage Users of account.](./media/mindflash-tutorial/account.png "Manage Users")
1. Click **Add Users**, and then click **New**.

1. In the **Add New Users** section, perform the following steps for a valid Azure AD account you want to provision:
- ![Add New Users](./media/mindflash-tutorial/ic787141.png "Add New Users")
+ ![Screenshot shows to Add New Users of the account.](./media/mindflash-tutorial/user.png "Add New Users")
a. In the **First name** textbox, type **First name** of the user as **Britta**.
In order to enable Azure AD users to log into Mindflash, they must be provisione
>You can use any other Mindflash user account creation tools or APIs provided by Mindflash to provision Azure AD user accounts.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Mindflash tile in the Access Panel, you should be automatically signed in to the Mindflash for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to Mindflash Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Mindflash Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Mindflash tile in the My Apps, this will redirect to Mindflash Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Mindflash you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Moxiengage Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/moxiengage-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Moxi Engage | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Moxi Engage'
description: Learn how to configure single sign-on between Azure Active Directory and Moxi Engage.
Previously updated : 02/25/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with Moxi Engage
+# Tutorial: Azure AD SSO integration with Moxi Engage
-In this tutorial, you learn how to integrate Moxi Engage with Azure Active Directory (Azure AD).
-Integrating Moxi Engage with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Moxi Engage with Azure Active Directory (Azure AD). When you integrate Moxi Engage with Azure AD, you can:
-* You can control in Azure AD who has access to Moxi Engage.
-* You can enable your users to be automatically signed-in to Moxi Engage (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Moxi Engage.
+* Enable your users to be automatically signed-in to Moxi Engage with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Moxi Engage, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Moxi Engage single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Moxi Engage single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Moxi Engage supports **SP** initiated SSO
-
-## Adding Moxi Engage from the gallery
-
-To configure the integration of Moxi Engage into Azure AD, you need to add Moxi Engage from the gallery to your list of managed SaaS apps.
-
-**To add Moxi Engage from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* Moxi Engage supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-4. In the search box, type **Moxi Engage**, select **Moxi Engage** from result panel then click **Add** button to add the application.
+## Add Moxi Engage from the gallery
- ![Moxi Engage in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Moxi Engage based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Moxi Engage needs to be established.
-
-To configure and test Azure AD single sign-on with Moxi Engage, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Moxi Engage Single Sign-On](#configure-moxi-engage-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Moxi Engage test user](#create-moxi-engage-test-user)** - to have a counterpart of Britta Simon in Moxi Engage that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of Moxi Engage into Azure AD, you need to add Moxi Engage from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Moxi Engage** in the search box.
+1. Select **Moxi Engage** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with Moxi Engage, perform the following steps:
+## Configure and test Azure AD SSO for Moxi Engage
-1. In the [Azure portal](https://portal.azure.com/), on the **Moxi Engage** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Moxi Engage using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Moxi Engage.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Moxi Engage, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Moxi Engage SSO](#configure-moxi-engage-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Moxi Engage test user](#create-moxi-engage-test-user)** - to have a counterpart of B.Simon in Moxi Engage that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Moxi Engage** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Moxi Engage Domain and URLs single sign-on information](common/sp-signonurl.png)
+1. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern:

   `https://svc.<moxiworks-integration-domain>/service/v1/auth/inbound/saml/aad`
To configure Azure AD single sign-on with Moxi Engage, perform the following ste
> [!NOTE]
> The value is not real. Update the value with the actual Sign-On URL. Contact [Moxi Engage Client support team](mailto:support@moxiworks.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up Moxi Engage** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- b. Azure AD Identifier
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
- c. Logout URL
+1. On the **Set up Moxi Engage** section, copy the appropriate URL(s) as per your requirement.
-### Configure Moxi Engage Single Sign-On
-
-To configure single sign-on on **Moxi Engage** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Moxi Engage support team](mailto:support@moxiworks.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field, enter **BrittaSimon**.
-
- b. In the **User name** field, type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Moxi Engage.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Moxi Engage.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Moxi Engage**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Moxi Engage**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Moxi Engage SSO
-2. In the applications list, select **Moxi Engage**.
-
- ![The Moxi Engage link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog, select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog, click the **Assign** button.
+To configure single sign-on on **Moxi Engage** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Moxi Engage support team](mailto:support@moxiworks.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Moxi Engage test user

In this section, you create a user called Britta Simon in Moxi Engage. Work with [Moxi Engage support team](mailto:support@moxiworks.com) to add the users in the Moxi Engage platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Moxi Engage tile in the Access Panel, you should be automatically signed in to the Moxi Engage for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to Moxi Engage Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to Moxi Engage Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Moxi Engage tile in the My Apps, this will redirect to Moxi Engage Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Moxi Engage you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Oracle Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-tutorial.md
Previously updated : 01/28/2022 Last updated : 07/14/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
> If the **Identifier** and **Reply URL** values do not get auto populated, then fill in the values manually according to your requirement.

In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://console.<REGIONNAME>.oraclecloud.com/`
+ `https://cloud.oracle.com/?region=<REGIONNAME>`
> [!NOTE]
> The value is not real. Update the value with the actual Sign-On URL. Contact [Oracle Cloud Infrastructure Console Client support team](https://www.oracle.com/support/advanced-customer-services/cloud/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
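As a quick illustration of the pattern above, the sign-on URL is just the Console base URL with the region passed as a query parameter; `us-ashburn-1` below is only an example region name, not a value from this tutorial:

```python
# Hedged sketch: compose the Console sign-on URL from a region name,
# following the pattern shown above. The region is a placeholder example.
from urllib.parse import urlencode

def console_sign_on_url(region: str) -> str:
    return "https://cloud.oracle.com/?" + urlencode({"region": region})

print(console_sign_on_url("us-ashburn-1"))
# -> https://cloud.oracle.com/?region=us-ashburn-1
```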
active-directory Patentsquare Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/patentsquare-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with PatentSQUARE | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with PatentSQUARE'
description: Learn how to configure single sign-on between Azure Active Directory and PatentSQUARE.
Previously updated : 03/14/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with PatentSQUARE
+# Tutorial: Azure AD SSO integration with PatentSQUARE
-In this tutorial, you learn how to integrate PatentSQUARE with Azure Active Directory (Azure AD).
-Integrating PatentSQUARE with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate PatentSQUARE with Azure Active Directory (Azure AD). When you integrate PatentSQUARE with Azure AD, you can:
-* You can control in Azure AD who has access to PatentSQUARE.
-* You can enable your users to be automatically signed-in to PatentSQUARE (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to PatentSQUARE.
+* Enable your users to be automatically signed-in to PatentSQUARE with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with PatentSQUARE, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* PatentSQUARE single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* PatentSQUARE single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* PatentSQUARE supports **SP** initiated SSO
+* PatentSQUARE supports **SP** initiated SSO.
-## Adding PatentSQUARE from the gallery
+## Add PatentSQUARE from the gallery
To configure the integration of PatentSQUARE into Azure AD, you need to add PatentSQUARE from the gallery to your list of managed SaaS apps.
-**To add PatentSQUARE from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **PatentSQUARE**, select **PatentSQUARE** from result panel then click **Add** button to add the application.
-
- ![PatentSQUARE in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with PatentSQUARE based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in PatentSQUARE needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **PatentSQUARE** in the search box.
+1. Select **PatentSQUARE** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with PatentSQUARE, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for PatentSQUARE
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure PatentSQUARE Single Sign-On](#configure-patentsquare-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create PatentSQUARE test user](#create-patentsquare-test-user)** - to have a counterpart of Britta Simon in PatentSQUARE that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with PatentSQUARE using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in PatentSQUARE.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with PatentSQUARE, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure PatentSQUARE SSO](#configure-patentsquare-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create PatentSQUARE test user](#create-patentsquare-test-user)** - to have a counterpart of B.Simon in PatentSQUARE that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with PatentSQUARE, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **PatentSQUARE** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **PatentSQUARE** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Single sign-on select mode](common/select-saml-option.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![PatentSQUARE Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<companysubdomain>.pat-dss.com:443/patlics/secure/aad`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<companysubdomain>.pat-dss.com:443/patlics`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<companysubdomain>.pat-dss.com:443/patlics/secure/aad`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [PatentSQUARE Client support team](https://www.panasonic.com/jp/business/its/patentsquare.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-5. On the **Set up PatentSQUARE** section, copy the appropriate URL(s) as per your requirement.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [PatentSQUARE Client support team](https://www.panasonic.com/jp/business/its/patentsquare.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
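
   As a quick illustration of how the placeholder patterns above expand, the snippet below substitutes a hypothetical subdomain; `contoso` is an assumption, not a real value.

   ```python
   # Minimal sketch: expand the placeholder patterns into concrete values.
   # "contoso" is a hypothetical subdomain; substitute your own.
   subdomain = "contoso"

   identifier = f"https://{subdomain}.pat-dss.com:443/patlics"
   sign_on_url = f"https://{subdomain}.pat-dss.com:443/patlics/secure/aad"

   print("Identifier (Entity ID):", identifier)
   print("Sign on URL:", sign_on_url)
   ```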
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- a. Login URL
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
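
   Before handing the file to the application side, you can sanity-check that it parses and carries a signing certificate. This is an optional sketch, assuming the file was saved as `federationmetadata.xml`; the namespaces are the standard SAML 2.0 metadata and XML-DSig ones.

   ```python
   # Sketch: sanity-check the downloaded Federation Metadata XML.
   # "federationmetadata.xml" is an assumed local file name.
   import xml.etree.ElementTree as ET

   DS_NS = "{http://www.w3.org/2000/09/xmldsig#}"

   root = ET.parse("federationmetadata.xml").getroot()

   # The entityID attribute should match the Azure AD Identifier for your tenant.
   print("entityID:", root.get("entityID"))

   # List the base64-encoded signing certificate(s) embedded in the metadata.
   for cert in root.iter(f"{DS_NS}X509Certificate"):
       print("X509Certificate:", cert.text.strip()[:60] + "...")
   ```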
- b. Azure AD Identifier
+1. On the **Set up PatentSQUARE** section, copy the appropriate URL(s) as per your requirement.
- c. Logout URL
-
-### Configure PatentSQUARE Single Sign-On
-
-To configure single sign-on on **PatentSQUARE** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [PatentSQUARE support team](https://www.panasonic.com/jp/business/its/patentsquare.html). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
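
   For reference, the copied values can be collected in one place for the hand-off described under **Configure PatentSQUARE SSO** below. The URL shapes shown are typical for Azure AD, but treat them as placeholders and use the exact strings from your portal.

   ```python
   # Sketch: the three values this step copies, gathered for the hand-off.
   # All values are placeholders; paste the exact strings from your portal.
   azure_ad_values = {
       "Login URL": "https://login.microsoftonline.com/<tenant-id>/saml2",
       "Azure AD Identifier": "https://sts.windows.net/<tenant-id>/",
       "Logout URL": "https://login.microsoftonline.com/<tenant-id>/saml2",
   }

   for name, value in azure_ad_values.items():
       print(f"{name}: {value}")
   ```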
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
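
If you prefer to script this step, the sketch below creates the same test user through the Microsoft Graph `POST /users` endpoint. It is a minimal sketch, assuming you already acquired an access token with `User.ReadWrite.All`; the token, password, and `contoso.com` domain are placeholders.

```python
# Sketch: create the B.Simon test user via Microsoft Graph instead of the portal.
# The token, initial password, and domain below are placeholders.
import requests

token = "<access-token>"  # assumption: obtained with User.ReadWrite.All scope
new_user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "BSimon",
    "userPrincipalName": "B.Simon@contoso.com",  # replace with your domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token}"},
    json=new_user,
)
resp.raise_for_status()
print("Created user with id:", resp.json()["id"])
```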
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to PatentSQUARE.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **PatentSQUARE**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **PatentSQUARE**.
-
- ![The PatentSQUARE link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to PatentSQUARE.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **PatentSQUARE**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
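
The assignment can also be done programmatically. This is a minimal sketch using the Microsoft Graph `appRoleAssignments` endpoint; the token and both object IDs are placeholders you would look up first, and the all-zero GUID stands for the Default Access app role.

```python
# Sketch: assign the test user to the app's service principal via Microsoft Graph.
# All IDs are placeholders; the zero GUID denotes the "Default Access" app role.
import requests

token = "<access-token>"
user_id = "<B.Simon-object-id>"
service_principal_id = "<PatentSQUARE-service-principal-object-id>"

assignment = {
    "principalId": user_id,
    "resourceId": service_principal_id,
    "appRoleId": "00000000-0000-0000-0000-000000000000",
}

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{user_id}/appRoleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json=assignment,
)
resp.raise_for_status()
print("Assignment id:", resp.json()["id"])
```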
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure PatentSQUARE SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on **PatentSQUARE** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [PatentSQUARE support team](https://www.panasonic.com/jp/business/its/patentsquare.html). They set this setting to have the SAML SSO connection set properly on both sides.
### Create PatentSQUARE test user

In this section, you create a user called Britta Simon in PatentSQUARE. Work with [PatentSQUARE support team](https://www.panasonic.com/jp/business/its/patentsquare.html) to add the users in the PatentSQUARE platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the PatentSQUARE tile in the Access Panel, you should be automatically signed in to the PatentSQUARE for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in Azure portal. This will redirect to PatentSQUARE Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to PatentSQUARE Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the PatentSQUARE tile in the My Apps, this will redirect to PatentSQUARE Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure PatentSQUARE, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Planview Id Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/planview-id-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Planview ID'
+description: Learn how to configure single sign-on between Azure Active Directory and Planview ID.
+ Last updated : 07/11/2022
+# Tutorial: Azure AD SSO integration with Planview ID
+
+In this tutorial, you'll learn how to integrate Planview ID with Azure Active Directory (Azure AD). When you integrate Planview ID with Azure AD, you can:
+
+* Control in Azure AD who has access to Planview ID.
+* Enable your users to be automatically signed-in to Planview ID with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Planview ID single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Planview ID supports **SP** and **IDP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Planview ID from the gallery
+
+To configure the integration of Planview ID into Azure AD, you need to add Planview ID from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Planview ID** in the search box.
+1. Select **Planview ID** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Planview ID
+
+Configure and test Azure AD SSO with Planview ID using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Planview ID.
+
+To configure and test Azure AD SSO with Planview ID, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Planview ID SSO](#configure-planview-id-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Planview ID test user](#create-planview-id-test-user)** - to have a counterpart of B.Simon in Planview ID that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Planview ID** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following step:
+
+ In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<Region>.id.planview.com/api/loginsso/callback`
+
+1. Click **Set additional URLs** and perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<Region>.id.planview.com`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [Planview ID support team](mailto:jordan.nguyen@planview.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
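
   Optionally, you can confirm the copied URL resolves and returns metadata before sending it on. This sketch assumes the usual shape of an App Federation Metadata Url; paste the exact value copied from the portal.

   ```python
   # Sketch: fetch the copied App Federation Metadata Url and confirm it parses.
   # The URL below is only a placeholder shape; use the exact value from the portal.
   import requests
   import xml.etree.ElementTree as ET

   metadata_url = (
       "https://login.microsoftonline.com/<tenant-id>"
       "/federationmetadata/2007-06/federationmetadata.xml?appid=<app-id>"
   )

   resp = requests.get(metadata_url, timeout=30)
   resp.raise_for_status()

   root = ET.fromstring(resp.content)
   print("entityID:", root.get("entityID"))
   ```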
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Planview ID.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Planview ID**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Planview ID SSO
+
+To configure single sign-on on **Planview ID** side, you need to send the **App Federation Metadata Url** to [Planview ID support team](mailto:jordan.nguyen@planview.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Planview ID test user
+
+In this section, you create a user called Britta Simon in Planview ID. Work with [Planview ID support team](mailto:jordan.nguyen@planview.com) to add the users in the Planview ID platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Planview ID Sign-on URL where you can initiate the login flow.
+
+* Go to Planview ID Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Planview ID for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Planview ID tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Planview ID for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Planview ID, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Risecom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/risecom-tutorial.md
Previously updated : 06/24/2022 Last updated : 07/14/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
![Screenshot shows the image of attributes.](common/default-attributes.png "Attributes")
-1. In addition to above, Rise.com application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. The Rise.com application expects the default attributes to be replaced with the specific attributes as shown below. These attributes are also pre populated but you can review them as per your requirements.
| Name | Source Attribute |
| ---- | ---------------- |
active-directory Sap Hana Cloud Platform Identity Authentication Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with SAP Cloud Platform Identity Authentication | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and SAP Cloud Platform Identity Authentication.
+ Title: 'Tutorial: Azure Active Directory integration with SAP Cloud Identity Services | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and SAP Cloud Identity Services.
Previously updated : 09/01/2021 Last updated : 07/14/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with SAP Cloud Platform Identity Authentication
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with SAP Cloud Identity Services
-In this tutorial, you'll learn how to integrate SAP Cloud Platform Identity Authentication with Azure Active Directory (Azure AD). When you integrate SAP Cloud Platform Identity Authentication with Azure AD, you can:
+In this tutorial, you'll learn how to integrate SAP Cloud Identity Services with Azure Active Directory (Azure AD). When you integrate SAP Cloud Identity Services with Azure AD, you can:
-* Control in Azure AD who has access to SAP Cloud Platform Identity Authentication.
-* Enable your users to be automatically signed-in to SAP Cloud Platform Identity Authentication with their Azure AD accounts.
+* Control in Azure AD who has access to SAP Cloud Identity Services.
+* Enable your users to be automatically signed-in to SAP Cloud Identity Services with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.

## Prerequisites
In this tutorial, you'll learn how to integrate SAP Cloud Platform Identity Auth
To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* SAP Cloud Platform Identity Authentication single sign-on (SSO) enabled subscription.
+* SAP Cloud Identity Services single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* SAP Cloud Platform Identity Authentication supports **SP** and **IDP** initiated SSO.
-* SAP Cloud Platform Identity Authentication supports [Automated user provisioning](sap-cloud-platform-identity-authentication-provisioning-tutorial.md).
+* SAP Cloud Identity Services supports **SP** and **IDP** initiated SSO.
+* SAP Cloud Identity Services supports [Automated user provisioning](sap-cloud-platform-identity-authentication-provisioning-tutorial.md).
-Before you dive into the technical details, it's vital to understand the concepts you're going to look at. The SAP Cloud Platform Identity Authentication and Active Directory Federation Services enable you to implement SSO across applications or services that are protected by Azure AD (as an IdP) with SAP applications and services that are protected by SAP Cloud Platform Identity Authentication.
+Before you dive into the technical details, it's vital to understand the concepts you're going to look at. The SAP Cloud Identity Services and Active Directory Federation Services enable you to implement SSO across applications or services that are protected by Azure AD (as an IdP) with SAP applications and services that are protected by SAP Cloud Identity Services.
-Currently, SAP Cloud Platform Identity Authentication acts as a Proxy Identity Provider to SAP applications. Azure Active Directory in turn acts as the leading Identity Provider in this setup.
+Currently, SAP Cloud Identity Services acts as a Proxy Identity Provider to SAP applications. Azure Active Directory in turn acts as the leading Identity Provider in this setup.
The following diagram illustrates this relationship:

![Creating an Azure AD test user](./media/sap-hana-cloud-platform-identity-authentication-tutorial/architecture-01.png)
-With this setup, your SAP Cloud Platform Identity Authentication tenant is configured as a trusted application in Azure Active Directory.
+With this setup, your SAP Cloud Identity Services tenant is configured as a trusted application in Azure Active Directory.
-All SAP applications and services that you want to protect this way are subsequently configured in the SAP Cloud Platform Identity Authentication management console.
+All SAP applications and services that you want to protect this way are subsequently configured in the SAP Cloud Identity Services management console.
-Therefore, the authorization for granting access to SAP applications and services needs to take place in SAP Cloud Platform Identity Authentication (as opposed to Azure Active Directory).
+Therefore, the authorization for granting access to SAP applications and services needs to take place in SAP Cloud Identity Services (as opposed to Azure Active Directory).
-By configuring SAP Cloud Platform Identity Authentication as an application through the Azure Active Directory Marketplace, you don't need to configure individual claims or SAML assertions.
+By configuring SAP Cloud Identity Services as an application through the Azure Active Directory Marketplace, you don't need to configure individual claims or SAML assertions.
> [!NOTE]
> Currently only Web SSO has been tested by both parties. The flows that are necessary for App-to-API or API-to-API communication should work but have not been tested yet. They will be tested during subsequent activities.
-## Adding SAP Cloud Platform Identity Authentication from the gallery
+## Adding SAP Cloud Identity Services from the gallery
-To configure the integration of SAP Cloud Platform Identity Authentication into Azure AD, you need to add SAP Cloud Platform Identity Authentication from the gallery to your list of managed SaaS apps.
+To configure the integration of SAP Cloud Identity Services into Azure AD, you need to add SAP Cloud Identity Services from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **SAP Cloud Platform Identity Authentication** in the search box.
-1. Select **SAP Cloud Platform Identity Authentication** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **SAP Cloud Identity Services** in the search box.
+1. Select **SAP Cloud Identity Services** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for SAP Cloud Platform Identity Authentication
+## Configure and test Azure AD SSO for SAP Cloud Identity Services
-Configure and test Azure AD SSO with SAP Cloud Platform Identity Authentication using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAP Cloud Platform Identity Authentication.
+Configure and test Azure AD SSO with SAP Cloud Identity Services using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SAP Cloud Identity Services.
-To configure and test Azure AD SSO with SAP Cloud Platform Identity Authentication, perform the following steps:
+To configure and test Azure AD SSO with SAP Cloud Identity Services, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure SAP Cloud Platform Identity Authentication SSO](#configure-sap-cloud-platform-identity-authentication-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create SAP Cloud Platform Identity Authentication test user](#create-sap-cloud-platform-identity-authentication-test-user)** - to have a counterpart of B.Simon in SAP Cloud Platform Identity Authentication that is linked to the Azure AD representation of user.
+1. **[Configure SAP Cloud Identity Services SSO](#configure-sap-cloud-identity-services-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SAP Cloud Identity Services test user](#create-sap-cloud-identity-services-test-user)** - to have a counterpart of B.Simon in SAP Cloud Identity Services that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **SAP Cloud Platform Identity Authentication** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **SAP Cloud Identity Services** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<IAS-tenant-id>.accounts.ondemand.com/saml2/idp/acs/<IAS-tenant-id>.accounts.ondemand.com`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact the [SAP Cloud Platform Identity Authentication Client support team](https://cloudplatform.sap.com/capabilities/security/trustcenter.html) to get these values. If you don't understand Identifier value, read the SAP Cloud Platform Identity Authentication documentation about [Tenant SAML 2.0 configuration](https://help.hana.ondemand.com/cloud_identity/frameset.htm?e81a19b0067f4646982d7200a8dab3ca.html).
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact the [SAP Cloud Identity Services Client support team](https://cloudplatform.sap.com/capabilities/security/trustcenter.html) to get these values. If you don't understand Identifier value, read the SAP Cloud Identity Services documentation about [Tenant SAML 2.0 configuration](https://help.hana.ondemand.com/cloud_identity/frameset.htm?e81a19b0067f4646982d7200a8dab3ca.html).
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP**-initiated mode:
- ![SAP Cloud Platform Identity Authentication Domain and URLs single sign-on information](common/metadata-upload-additional-signon.png)
+ ![SAP Cloud Identity Services Domain and URLs single sign-on information](common/metadata-upload-additional-signon.png)
In the **Sign-on URL** text box, type a value using the following pattern:
`{YOUR BUSINESS APPLICATION URL}`

> [!NOTE]
- > This value is not real. Update this value with the actual sign-on URL. Please use your specific business application Sign-on URL. Contact the [SAP Cloud Platform Identity Authentication Client support team](https://cloudplatform.sap.com/capabilities/security/trustcenter.html) if you have any doubt.
+ > This value is not real. Update this value with the actual sign-on URL. Please use your specific business application Sign-on URL. Contact the [SAP Cloud Identity Services Client support team](https://cloudplatform.sap.com/capabilities/security/trustcenter.html) if you have any doubt.
-1. SAP Cloud Platform Identity Authentication application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. SAP Cloud Identity Services application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
-1. In addition to above, SAP Cloud Platform Identity Authentication application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. In addition to above, SAP Cloud Identity Services application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements.
| Name | Source Attribute |
| ---- | ---------------- |
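
To double-check which attributes actually arrive in the SAML response at runtime, you can decode a captured `SAMLResponse` form field (for example, from a browser developer-tools trace). This is a debugging sketch, not a tutorial step; the base64 string is a placeholder.

```python
# Sketch: list the attributes carried by a captured SAML response (POST binding).
# "<base64-SAMLResponse>" is a placeholder for the form field from a browser trace.
import base64
import xml.etree.ElementTree as ET

ASSERTION_NS = "{urn:oasis:names:tc:SAML:2.0:assertion}"

saml_response_b64 = "<base64-SAMLResponse>"
root = ET.fromstring(base64.b64decode(saml_response_b64))

for attr in root.iter(f"{ASSERTION_NS}Attribute"):
    values = [v.text for v in attr.iter(f"{ASSERTION_NS}AttributeValue")]
    print(attr.get("Name"), "->", values)
```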
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/metadataxml.png)
-9. On the **Set up SAP Cloud Platform Identity Authentication** section, copy the appropriate URL(s) as per your requirement.
+9. On the **Set up SAP Cloud Identity Services** section, copy the appropriate URL(s) as per your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Cloud Platform Identity Authentication.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SAP Cloud Identity Services.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **SAP Cloud Platform Identity Authentication**.
+1. In the applications list, select **SAP Cloud Identity Services**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure SAP Cloud Platform Identity Authentication SSO
+## Configure SAP Cloud Identity Services SSO
-1. To automate the configuration within SAP Cloud Platform Identity Authentication, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+1. To automate the configuration within SAP Cloud Identity Services, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
![My apps extension](common/install-myappssecure-extension.png)
-2. After adding extension to the browser, click on **Set up SAP Cloud Platform Identity Authentication** will direct you to the SAP Cloud Platform Identity Authentication application. From there, provide the admin credentials to sign into SAP Cloud Platform Identity Authentication. The browser extension will automatically configure the application for you and automate steps 3-7.
+2. After adding the extension to the browser, clicking on **Set up SAP Cloud Identity Services** will direct you to the SAP Cloud Identity Services application. From there, provide the admin credentials to sign into SAP Cloud Identity Services. The browser extension will automatically configure the application for you and automate steps 3-7.
![Setup configuration](common/setup-sso.png)
-3. If you want to setup SAP Cloud Platform Identity Authentication manually, in a different web browser window, go to the SAP Cloud Platform Identity Authentication administration console. The URL has the following pattern: `https://<tenant-id>.accounts.ondemand.com/admin`. Then read the documentation about SAP Cloud Platform Identity Authentication at [Integration with Microsoft Azure AD](https://developers.sap.com/tutorials/cp-ias-azure-ad.html).
+3. If you want to set up SAP Cloud Identity Services manually, in a different web browser window, go to the SAP Cloud Identity Services administration console. The URL has the following pattern: `https://<tenant-id>.accounts.ondemand.com/admin`. Then read the documentation about SAP Cloud Identity Services at [Integration with Microsoft Azure AD](https://developers.sap.com/tutorials/cp-ias-azure-ad.html).
2. In the Azure portal, select the **Save** button.
-3. Continue with the following only if you want to add and enable SSO for another SAP application. Repeat the steps under the section **Adding SAP Cloud Platform Identity Authentication from the gallery**.
+3. Continue with the following only if you want to add and enable SSO for another SAP application. Repeat the steps under the section **Adding SAP Cloud Identity Services from the gallery**.
-4. In the Azure portal, on the **SAP Cloud Platform Identity Authentication** application integration page, select **Linked Sign-on**.
+4. In the Azure portal, on the **SAP Cloud Identity Services** application integration page, select **Linked Sign-on**.
![Configure Linked Sign-On](./media/sap-hana-cloud-platform-identity-authentication-tutorial/linked-sign-on.png)

5. Save the configuration.

> [!NOTE]
-> The new application leverages the single sign-on configuration of the previous SAP application. Make sure you use the same Corporate Identity Providers in the SAP Cloud Platform Identity Authentication administration console.
+> The new application leverages the single sign-on configuration of the previous SAP application. Make sure you use the same Corporate Identity Providers in the SAP Cloud Identity Services administration console.
-### Create SAP Cloud Platform Identity Authentication test user
+### Create SAP Cloud Identity Services test user
-You don't need to create a user in SAP Cloud Platform Identity Authentication. Users who are in the Azure AD user store can use the SSO functionality.
+You don't need to create a user in SAP Cloud Identity Services. Users who are in the Azure AD user store can use the SSO functionality.
-SAP Cloud Platform Identity Authentication supports the Identity Federation option. This option allows the application to check whether users who are authenticated by the corporate identity provider exist in the user store of SAP Cloud Platform Identity Authentication.
+SAP Cloud Identity Services supports the Identity Federation option. This option allows the application to check whether users who are authenticated by the corporate identity provider exist in the user store of SAP Cloud Identity Services.
-The Identity Federation option is disabled by default. If Identity Federation is enabled, only the users that are imported in SAP Cloud Platform Identity Authentication can access the application.
+The Identity Federation option is disabled by default. If Identity Federation is enabled, only the users that are imported in SAP Cloud Identity Services can access the application.
-For more information about how to enable or disable Identity Federation with SAP Cloud Platform Identity Authentication, see "Enable Identity Federation with SAP Cloud Platform Identity Authentication" in [Configure Identity Federation with the User Store of SAP Cloud Platform Identity Authentication](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/c029bbbaefbf4350af15115396ba14e2.html).
+For more information about how to enable or disable Identity Federation with SAP Cloud Identity Services, see "Enable Identity Federation with SAP Cloud Identity Services" in [Configure Identity Federation with the User Store of SAP Cloud Identity Services](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/c029bbbaefbf4350af15115396ba14e2.html).
> [!NOTE]
-> SAP Cloud Platform Identity Authentication also supports automatic user provisioning, you can find more details [here](./sap-cloud-platform-identity-authentication-provisioning-tutorial.md) on how to configure automatic user provisioning.
+> SAP Cloud Identity Services also supports automatic user provisioning. You can find more details [here](./sap-cloud-platform-identity-authentication-provisioning-tutorial.md) on how to configure automatic user provisioning.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to SAP Cloud Platform Identity Authentication Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to SAP Cloud Identity Services Sign on URL where you can initiate the login flow.
-* Go to SAP Cloud Platform Identity Authentication Sign-on URL directly and initiate the login flow from there.
+* Go to SAP Cloud Identity Services Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the SAP Cloud Platform Identity Authentication for which you set up the SSO
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the SAP Cloud Identity Services for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the SAP Cloud Platform Identity Authentication tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SAP Cloud Platform Identity Authentication for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the SAP Cloud Identity Services tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SAP Cloud Identity Services for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure the SAP Cloud Platform Identity Authentication you can enforce session controls, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session controls extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure SAP Cloud Identity Services, you can enforce session controls, which protect against exfiltration and infiltration of your organization's sensitive data in real time. Session controls extend from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Seekout Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/seekout-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with SeekOut'
+description: Learn how to configure single sign-on between Azure Active Directory and SeekOut.
+ Last updated : 07/11/2022
+# Tutorial: Azure AD SSO integration with SeekOut
+
+In this tutorial, you'll learn how to integrate SeekOut with Azure Active Directory (Azure AD). When you integrate SeekOut with Azure AD, you can:
+
+* Control in Azure AD who has access to SeekOut.
+* Enable your users to be automatically signed-in to SeekOut with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SeekOut single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* SeekOut supports **SP** and **IDP** initiated SSO.
+* SeekOut supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add SeekOut from the gallery
+
+To configure the integration of SeekOut into Azure AD, you need to add SeekOut from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **SeekOut** in the search box.
+1. Select **SeekOut** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for SeekOut
+
+Configure and test Azure AD SSO with SeekOut using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SeekOut.
+
+To configure and test Azure AD SSO with SeekOut, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SeekOut SSO](#configure-seekout-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SeekOut test user](#create-seekout-test-user)** - to have a counterpart of B.Simon in SeekOut that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **SeekOut** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following step:
+
+ In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://app.seekout.io/api/auth/sso/<ID>`
+
+1. Click **Set additional URLs** and perform the following step, if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.seekout.io`
+
+ > [!Note]
+ > This value is not real. Update this value with the actual Reply URL. Contact [SeekOut support team](mailto:support@seekout.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up SeekOut** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SeekOut.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SeekOut**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure SeekOut SSO
+
+To configure single sign-on on **SeekOut** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [SeekOut support team](mailto:support@seekout.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create SeekOut test user
+
+In this section, a user called B.Simon is created in SeekOut. SeekOut supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in SeekOut, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to SeekOut Sign-on URL where you can initiate the login flow.
+
+* Go to SeekOut Sign-on URL directly and initiate the login flow from there.
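
For context on what initiating the login flow sends over the wire: with the SAML HTTP-Redirect binding, the application deflates, base64-encodes, and URL-encodes an AuthnRequest into the `SAMLRequest` query parameter of the Azure AD login URL. The sketch below only illustrates that encoding; the tenant ID and request fields are placeholders, and real requests are produced by the application, not by hand.

```python
# Sketch: how an SP-initiated flow encodes its AuthnRequest (HTTP-Redirect binding).
# Tenant ID and request fields are placeholders; apps generate this automatically.
import base64
import zlib
from urllib.parse import quote

authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_example" Version="2.0" IssueInstant="2022-07-14T00:00:00Z"/>'
)

# Raw DEFLATE (no zlib header), then base64, then URL-encode.
deflater = zlib.compressobj(9, zlib.DEFLATED, -15)
deflated = deflater.compress(authn_request.encode()) + deflater.flush()
saml_request = quote(base64.b64encode(deflated).decode(), safe="")

print(f"https://login.microsoftonline.com/<tenant-id>/saml2?SAMLRequest={saml_request}")
```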
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the SeekOut for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the SeekOut tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SeekOut for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure SeekOut, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Sharevault Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharevault-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with ShareVault | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with ShareVault'
description: Learn how to configure single sign-on between Azure Active Directory and ShareVault.
Previously updated : 08/20/2020 Last updated : 07/08/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with ShareVault
+# Tutorial: Azure AD SSO integration with ShareVault
In this tutorial, you'll learn how to integrate ShareVault with Azure Active Directory (Azure AD). When you integrate ShareVault with Azure AD, you can:
In this tutorial, you'll learn how to integrate ShareVault with Azure Active Dir
* Enable your users to be automatically signed-in to ShareVault with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* ShareVault single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* ShareVault supports **SP and IDP** initiated SSO
-* ShareVault supports **Just In Time** user provisioning
-* Once you configure ShareVault you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+* ShareVault supports **SP and IDP** initiated SSO.
+* ShareVault supports **Just In Time** user provisioning.
> [!NOTE]
> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding ShareVault from the gallery
+## Add ShareVault from the gallery
To configure the integration of ShareVault into Azure AD, you need to add ShareVault from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
To configure the integration of ShareVault into Azure AD, you need to add ShareV
Configure and test Azure AD SSO with ShareVault using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ShareVault.
-To configure and test Azure AD SSO with ShareVault, complete the following building blocks:
+To configure and test Azure AD SSO with ShareVault, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with ShareVault, complete the following build
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **ShareVault** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ShareVault** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. ShareVault application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of ShareVault application.](common/default-attributes.png "Attributes")
1. In addition to above, ShareVault application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
- ![The Certificate download link](common/copy-metadataurl.png)
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **ShareVault**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called Britta Simon is created in ShareVault. ShareVault
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the ShareVault tile in the Access Panel, you should be automatically signed in to the ShareVault for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### SP initiated:
-## Additional resources
+* Click on **Test this application** in Azure portal. This will redirect to ShareVault Sign-on URL where you can initiate the login flow.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Go to ShareVault Sign-on URL directly and initiate the login flow from there.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+#### IDP initiated:
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the ShareVault for which you set up the SSO.
-- [Try ShareVault with Azure AD](https://aad.portal.azure.com/)
+You can also use Microsoft My Apps to test the application in any mode. When you click the ShareVault tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you should be automatically signed in to the ShareVault instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is session control in Microsoft Defender for Cloud Apps?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect ShareVault with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+Once you configure ShareVault, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Smartlpa Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartlpa-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with SmartLPA | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with SmartLPA'
description: Learn how to configure single sign-on between Azure Active Directory and SmartLPA.
Previously updated : 03/07/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with SmartLPA
+# Tutorial: Azure AD SSO integration with SmartLPA
-In this tutorial, you learn how to integrate SmartLPA with Azure Active Directory (Azure AD).
-Integrating SmartLPA with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate SmartLPA with Azure Active Directory (Azure AD). When you integrate SmartLPA with Azure AD, you can:
-* You can control in Azure AD who has access to SmartLPA.
-* You can enable your users to be automatically signed-in to SmartLPA (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to SmartLPA.
+* Enable your users to be automatically signed-in to SmartLPA with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with SmartLPA, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* SmartLPA single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SmartLPA single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* SmartLPA supports **SP** initiated SSO
+* SmartLPA supports **SP** initiated SSO.
-## Adding SmartLPA from the gallery
+## Add SmartLPA from the gallery
To configure the integration of SmartLPA into Azure AD, you need to add SmartLPA from the gallery to your list of managed SaaS apps.
-**To add SmartLPA from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **SmartLPA**, select **SmartLPA** from result panel then click **Add** button to add the application.
-
- ![SmartLPA in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with SmartLPA based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in SmartLPA needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **SmartLPA** in the search box.
+1. Select **SmartLPA** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with SmartLPA, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for SmartLPA
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure SmartLPA Single Sign-On](#configure-smartlpa-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create SmartLPA test user](#create-smartlpa-test-user)** - to have a counterpart of Britta Simon in SmartLPA that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with SmartLPA using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in SmartLPA.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with SmartLPA, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure SmartLPA SSO](#configure-smartlpa-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create SmartLPA test user](#create-smartlpa-test-user)** - to have a counterpart of B.Simon in SmartLPA that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with SmartLPA, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **SmartLPA** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **SmartLPA** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+ ![Screenshot shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
- ![Single sign-on select mode](common/select-saml-option.png)
+1. On the **Basic SAML Configuration** section, perform the following steps:
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![SmartLPA Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<TENANTNAME>.smartlpa.com/`
-
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<TENANTNAME>.smartlpa.com/<UNIQUE ID>`
+
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<TENANTNAME>.smartlpa.com/`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [SmartLPA Client support team](mailto:support@smartlpa.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/certificatebase64.png)
-
-6. On the **Set up SmartLPA** section, copy the appropriate URL(s) as per your requirement.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [SmartLPA Client support team](mailto:support@smartlpa.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- a. Login URL
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
- b. Azure AD Identifier
+1. On the **Set up SmartLPA** section, copy the appropriate URL(s) as per your requirement.
- c. Logout URL
-
-### Configure SmartLPA Single Sign-On
-
-To configure single sign-on on **SmartLPA** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [SmartLPA support team](mailto:support@smartlpa.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows how to copy the appropriate configuration URL.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
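If you create test users often, the same result can be achieved with Microsoft Graph instead of the portal. A minimal sketch, assuming you've already acquired an access token with the `User.ReadWrite.All` permission; the token, domain, and password values are illustrative placeholders:

```python
# Minimal sketch: create the B.Simon test user via Microsoft Graph.
# All angle-bracket values below are illustrative placeholders.
import requests

TOKEN = "<access token with User.ReadWrite.All>"

user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "B.Simon",
    "userPrincipalName": "B.Simon@contoso.com",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial password>",
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=user,
)
resp.raise_for_status()
print("Created user with object id:", resp.json()["id"])
```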
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to SmartLPA.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **SmartLPA**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **SmartLPA**.
-
- ![The SmartLPA link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to SmartLPA.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **SmartLPA**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure SmartLPA SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **SmartLPA** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [SmartLPA support team](mailto:support@smartlpa.com). They configure this setting so the SAML SSO connection is set properly on both sides.
### Create SmartLPA test user

In this section, you create a user called Britta Simon in SmartLPA. Work with [SmartLPA support team](mailto:support@smartlpa.com) to add the users in the SmartLPA platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the SmartLPA tile in the Access Panel, you should be automatically signed in to the SmartLPA for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click **Test this application** in the Azure portal. This will redirect to the SmartLPA Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the SmartLPA Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the SmartLPA tile in My Apps, you're redirected to the SmartLPA Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure SmartLPA, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Snowflake Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-tutorial.md
Previously updated : 06/03/2022 Last updated : 07/14/2022

# Tutorial: Azure AD SSO integration with Snowflake
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Open the **downloaded Base 64 certificate** in Notepad. Copy the value between "--BEGIN CERTIFICATE--" and "--END CERTIFICATE--" and paste this content into the **SAML2_X509_CERT**.
-1. In the **SAML2_ISSUER**, paste **Identifier** value which you have copied from the Azure portal.
+1. In the **SAML2_ISSUER**, paste the **Identifier** value, which you have copied from the Azure portal.
-1. In the **SAML2_SSO_URL**, paste **Login URL** value which you have copied from the Azure portal.
+1. In the **SAML2_SSO_URL**, paste the **Login URL** value, which you have copied from the Azure portal.
-1. In the **SAML2_PROVIDER**, give the value like `AzureAD`.
+1. In the **SAML2_PROVIDER**, enter a value such as `CUSTOM`.
1. Select **All Queries** and click **Run**.
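The **SAML2_*** parameters above correspond to Snowflake's SAML2 security integration. As a rough sketch of how this worksheet step could be scripted end to end, assuming the `snowflake-connector-python` package, the ACCOUNTADMIN role, and illustrative names for the integration, certificate file, and connection values:

```python
# Minimal sketch: build and run the SAML2 security integration DDL from
# this step. All angle-bracket values and the integration name are
# illustrative placeholders, not values from the tutorial.
import snowflake.connector

# Strip the BEGIN/END lines from the downloaded Base 64 certificate,
# keeping only the base64 body, as the step above instructs.
with open("certificatebase64.cer") as f:
    cert_body = "".join(line.strip() for line in f if "CERTIFICATE" not in line)

ddl = f"""
CREATE SECURITY INTEGRATION azure_ad_sso
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = '<Identifier copied from the Azure portal>'
  SAML2_SSO_URL = '<Login URL copied from the Azure portal>'
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = '{cert_body}'
"""

conn = snowflake.connector.connect(
    account="<account locator>",
    user="<admin user>",
    password="<password>",
    role="ACCOUNTADMIN",  # creating a security integration requires this role
)
try:
    conn.cursor().execute(ddl)
finally:
    conn.close()
```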
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Snowflake Sign-on URL where you can initiate the login flow.
+* Click **Test this application** in the Azure portal. This will redirect to the Snowflake Sign on URL, where you can initiate the login flow.
* Go to the Snowflake Sign on URL directly and initiate the login flow from there.
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Snowflake you can enforce Session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure Snowflake you can enforce Session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Stackby Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/stackby-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Stackby'
+description: Learn how to configure single sign-on between Azure Active Directory and Stackby.
+ Last updated : 07/11/2022
+# Tutorial: Azure AD SSO integration with Stackby
+
+In this tutorial, you'll learn how to integrate Stackby with Azure Active Directory (Azure AD). When you integrate Stackby with Azure AD, you can:
+
+* Control in Azure AD who has access to Stackby.
+* Enable your users to be automatically signed-in to Stackby with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Stackby single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Stackby supports **IDP** initiated SSO.
+* Stackby supports **Just In Time** user provisioning.
+
+## Add Stackby from the gallery
+
+To configure the integration of Stackby into Azure AD, you need to add Stackby from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Stackby** in the search box.
+1. Select **Stackby** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Stackby
+
+Configure and test Azure AD SSO with Stackby using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user at Stackby.
+
+To configure and test Azure AD SSO with Stackby, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Stackby SSO](#configure-stackby-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Stackby test user](#create-stackby-test-user)** - to have a counterpart of B.Simon in Stackby that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Stackby** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. Save the configuration by clicking the **Save** button.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Stackby** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy the appropriate configuration URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Stackby.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Stackby**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
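The assignment above can also be scripted with Microsoft Graph by creating an app role assignment on the user. A minimal sketch, assuming a token with the `AppRoleAssignment.ReadWrite.All` permission; the object IDs are illustrative placeholders, and the all-zero `appRoleId` denotes the built-in Default Access role:

```python
# Minimal sketch: assign the test user to the app via Microsoft Graph.
# principalId is the user's object id, resourceId is the object id of the
# app's service principal; both values here are placeholders.
import requests

TOKEN = "<access token with AppRoleAssignment.ReadWrite.All>"
USER_ID = "<user object id>"

assignment = {
    "principalId": USER_ID,
    "resourceId": "<service principal object id>",
    # The all-zero GUID is the built-in "Default Access" app role.
    "appRoleId": "00000000-0000-0000-0000-000000000000",
}

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}/appRoleAssignments",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=assignment,
)
resp.raise_for_status()
```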
+
+## Configure Stackby SSO
+
+To configure single sign-on on the **Stackby** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Stackby support team](mailto:support@stackby.com). They configure this setting so the SAML SSO connection is set properly on both sides.
+
+### Create Stackby test user
+
+In this section, a user called B.Simon is created in Stackby. Stackby supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Stackby, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Stackby instance for which you set up SSO.
+
+* You can use Microsoft My Apps. When you click the Stackby tile in My Apps, you should be automatically signed in to the Stackby instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Stackby, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tivitz Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tivitz-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with TiViTz | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with TiViTz'
description: Learn how to configure single sign-on between Azure Active Directory and TiViTz.
Previously updated : 03/27/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with TiViTz
+# Tutorial: Azure AD SSO integration with TiViTz
-In this tutorial, you learn how to integrate TiViTz with Azure Active Directory (Azure AD).
-Integrating TiViTz with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate TiViTz with Azure Active Directory (Azure AD). When you integrate TiViTz with Azure AD, you can:
-* You can control in Azure AD who has access to TiViTz.
-* You can enable your users to be automatically signed-in to TiViTz (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to TiViTz.
+* Enable your users to be automatically signed-in to TiViTz with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with TiViTz, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* TiViTz single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* TiViTz single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* TiViTz supports **SP** initiated SSO
+* TiViTz supports **SP** initiated SSO.
-* TiViTz supports **Just In Time** user provisioning
+* TiViTz supports **Just In Time** user provisioning.
-## Adding TiViTz from the gallery
+## Add TiViTz from the gallery
To configure the integration of TiViTz into Azure AD, you need to add TiViTz from the gallery to your list of managed SaaS apps.
-**To add TiViTz from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **TiViTz**, select **TiViTz** from result panel then click **Add** button to add the application.
-
- ![TiViTz in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **TiViTz** in the search box.
+1. Select **TiViTz** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with TiViTz based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in TiViTz needs to be established.
+## Configure and test Azure AD SSO for TiViTz
-To configure and test Azure AD single sign-on with TiViTz, you need to complete the following building blocks:
+Configure and test Azure AD SSO with TiViTz using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TiViTz.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure TiViTz Single Sign-On](#configure-tivitz-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create TiViTz test user](#create-tivitz-test-user)** - to have a counterpart of Britta Simon in TiViTz that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with TiViTz, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TiViTz SSO](#configure-tivitz-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create TiViTz test user](#create-tivitz-test-user)** - to have a counterpart of B.Simon in TiViTz that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with TiViTz, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **TiViTz** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **TiViTz** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Screenshot shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. On the **Basic SAML Configuration** section, perform the following steps:
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![TiViTz Domain and URLs single sign-on information](common/sp-identifier.png)
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<companyname>.o365.tivitz.com/`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
 `https://<companyname>.o365.tivitz.com/`

> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [TiViTz Client support team](mailto:info@tivitz.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-6. On the **Set up TiViTz** section, copy the appropriate URL(s) as per your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [TiViTz Client support team](mailto:info@tivitz.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- a. Login URL
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- b. Azure AD Identifier
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
- c. Logout URL
+1. On the **Set up TiViTz** section, copy the appropriate URL(s) as per your requirement.
-### Configure TiViTz Single Sign-On
-
-To configure single sign-on on **TiViTz** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [TiViTz support team](mailto:info@tivitz.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows how to copy the appropriate configuration URL.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to TiViTz.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TiViTz.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **TiViTz**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TiViTz**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure TiViTz SSO
-2. In the applications list, select **TiViTz**.
-
- ![The TiViTz link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **TiViTz** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [TiViTz support team](mailto:info@tivitz.com). They configure this setting so the SAML SSO connection is set properly on both sides.
### Create TiViTz test user
In this section, a user called Britta Simon is created in TiViTz. TiViTz support
>[!NOTE]
>If you need to create a user manually, you need to contact [TiViTz support team](mailto:info@tivitz.com).
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the TiViTz tile in the Access Panel, you should be automatically signed in to the TiViTz for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click **Test this application** in the Azure portal. This will redirect to the TiViTz Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the TiViTz Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the TiViTz tile in My Apps, you're redirected to the TiViTz Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure TiViTz, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Userecho Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/userecho-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with UserEcho | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with UserEcho'
description: Learn how to configure single sign-on between Azure Active Directory and UserEcho.
Previously updated : 03/29/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory integration with UserEcho
+# Tutorial: Azure AD SSO integration with UserEcho
-In this tutorial, you learn how to integrate UserEcho with Azure Active Directory (Azure AD).
-Integrating UserEcho with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate UserEcho with Azure Active Directory (Azure AD). When you integrate UserEcho with Azure AD, you can:
-* You can control in Azure AD who has access to UserEcho.
-* You can enable your users to be automatically signed-in to UserEcho (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to UserEcho.
+* Enable your users to be automatically signed-in to UserEcho with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with UserEcho, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* UserEcho single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* UserEcho single sign-on enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* UserEcho supports **SP** initiated SSO
+* UserEcho supports **SP** initiated SSO.
-## Adding UserEcho from the gallery
+## Add UserEcho from the gallery
To configure the integration of UserEcho into Azure AD, you need to add UserEcho from the gallery to your list of managed SaaS apps.
-**To add UserEcho from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **UserEcho**, select **UserEcho** from result panel then click **Add** button to add the application.
-
- ![UserEcho in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with UserEcho based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in UserEcho needs to be established.
-
-To configure and test Azure AD single sign-on with UserEcho, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **UserEcho** in the search box.
+1. Select **UserEcho** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure UserEcho Single Sign-On](#configure-userecho-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create UserEcho test user](#create-userecho-test-user)** - to have a counterpart of Britta Simon in UserEcho that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure and test Azure AD SSO for UserEcho
-### Configure Azure AD single sign-on
+Configure and test Azure AD SSO with UserEcho using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in UserEcho.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+To configure and test Azure AD SSO with UserEcho, perform the following steps:
-To configure Azure AD single sign-on with UserEcho, perform the following steps:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure UserEcho SSO](#configure-userecho-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create UserEcho test user](#create-userecho-test-user)** - to have a counterpart of B.Simon in UserEcho that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. In the [Azure portal](https://portal.azure.com/), on the **UserEcho** application integration page, select **Single sign-on**.
+## Configure Azure AD SSO
- ![Configure single sign-on link](common/select-sso.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. In the Azure portal, on the **UserEcho** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Single sign-on select mode](common/select-saml-option.png)
+ ![Screenshot shows how to edit the Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. On the **Basic SAML Configuration** section, perform the following steps:
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<companyname>.userecho.com/saml/metadata/`
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<companyname>.userecho.com/`
- ![UserEcho Domain and URLs single sign-on information](common/sp-identifier.png)
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [UserEcho Client support team](https://feedback.userecho.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<companyname>.userecho.com/`
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<companyname>.userecho.com/saml/metadata/`
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [UserEcho Client support team](https://feedback.userecho.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+1. On the **Set up UserEcho** section, copy the appropriate URL(s) as per your requirement.
-4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
+ ![Screenshot shows how to copy the appropriate configuration URL.](common/copy-configuration-urls.png "Metadata")
- ![The Certificate download link](common/certificatebase64.png)
+### Create an Azure AD test user
-6. On the **Set up UserEcho** section, copy the appropriate URL(s) as per your requirement.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
- a. Login URL
+### Assign the Azure AD test user
- b. Azure AD Identifier
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to UserEcho.
- c. Logout URL
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **UserEcho**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Configure UserEcho Single Sign-On
+## Configure UserEcho SSO
1. In another browser window, sign on to your UserEcho company site as an administrator.

2. In the toolbar on the top, click your user name to expand the menu, and then click **Setup**.
- ![Screenshot shows Setup selected from the UserEcho site.](./media/userecho-tutorial/tutorial_userecho_06.png)
+ ![Screenshot shows Setup selected from the UserEcho site.](./media/userecho-tutorial/profile.png "Site")
3. Click **Integrations**.
- ![Screenshot shows Integrations selected from the Settings menu.](./media/userecho-tutorial/tutorial_userecho_07.png)
+ ![Screenshot shows Integrations selected from the Settings menu.](./media/userecho-tutorial/menu.png "Integrations")
4. Click **Website**, and then click **Single sign-on (SAML2)**.
- ![Screenshot shows Single sign-on SAML2 selected from the Integrations menu.](./media/userecho-tutorial/tutorial_userecho_08.png)
+ ![Screenshot shows Single sign-on SAML2 selected from the Integrations menu.](./media/userecho-tutorial/website.png "Folder")
5. On the **Single sign-on (SAML)** page, perform the following steps:
- ![Screenshot shows the Single Sign-on SAML page where you can enter the values described.](./media/userecho-tutorial/tutorial_userecho_09.png)
+ ![Screenshot shows the Single Sign-on SAML page where you can enter the values described.](./media/userecho-tutorial/values.png "Details")
a. As **SAML-enabled**, select **Yes**.
To configure Azure AD single sign-on with UserEcho, perform the following steps:
e. Click **Save**.
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to UserEcho.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **UserEcho**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **UserEcho**.
-
- ![The UserEcho link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
-
### Create UserEcho test user

The objective of this section is to create a user called Britta Simon in UserEcho.
The objective of this section is to create a user called Britta Simon in UserEch
2. In the toolbar on the top, click your user name to expand the menu, and then click **Setup**.
- ![Screenshot shows Setup selected from the UserEcho site.](./media/userecho-tutorial/tutorial_userecho_06.png)
+ ![Screenshot shows Setup selected from the UserEcho site.](./media/userecho-tutorial/profile.png "Site")
3. Click **Users**, to expand the **Users** section.
- ![Screenshot shows Users selected from the Settings menu.](./media/userecho-tutorial/tutorial_userecho_10.png)
+ ![Screenshot shows Users selected from the Settings menu.](./media/userecho-tutorial/user.png "Settings")
4. Click **Users**.
- ![Screenshot shows Users selected.](./media/userecho-tutorial/tutorial_userecho_11.png)
+ ![Screenshot shows Users selected button.](./media/userecho-tutorial/new-user.png "Users")
5. Click **Invite a new user**.
- ![Screenshot shows the Invite a new user control.](./media/userecho-tutorial/tutorial_userecho_12.png)
+ ![Screenshot shows the Invite a new user control.](./media/userecho-tutorial/control.png "Information")
6. On the **Invite a new user** dialog, perform the following steps:
- ![Screenshot shows the Invite a new user dialog box where you can enter user information.](./media/userecho-tutorial/tutorial_userecho_13.png)
+ ![Screenshot shows the Invite a new user dialog box where you can enter user information.](./media/userecho-tutorial/name.png "Steps")
    a. In the **Name** textbox, type the name of the user, such as **Britta Simon**.
The objective of this section is to create a user called Britta Simon in UserEch
c. Click **Invite**.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the UserEcho tile in the Access Panel, you should be automatically signed in to the UserEcho for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the UserEcho Sign-on URL where you can initiate the login flow.
-## Additional Resources
+* Go to the UserEcho Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the UserEcho tile in My Apps, you're redirected to the UserEcho Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure UserEcho, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Visitorg Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visitorg-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Visit.org | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Visit.org'
description: Learn how to configure single sign-on between Azure Active Directory and Visit.org.
Previously updated : 10/16/2019 Last updated : 07/09/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Visit.org
+# Tutorial: Azure AD SSO integration with Visit.org
In this tutorial, you'll learn how to integrate Visit.org with Azure Active Directory (Azure AD). When you integrate Visit.org with Azure AD, you can:
In this tutorial, you'll learn how to integrate Visit.org with Azure Active Dire
* Enable your users to be automatically signed-in to Visit.org with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:

* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
* Visit.org single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Visit.org supports **IDP** initiated SSO
+* Visit.org supports **IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Visit.org from the gallery
+## Add Visit.org from the gallery
To configure the integration of Visit.org into Azure AD, you need to add Visit.org from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Visit.org** in the search box.
1. Select **Visit.org** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Visit.org
+## Configure and test Azure AD SSO for Visit.org
Configure and test Azure AD SSO with Visit.org using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Visit.org.
-To configure and test Azure AD SSO with Visit.org, complete the following building blocks:
+To configure and test Azure AD SSO with Visit.org, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Visit.org SSO](#configure-visitorg-sso)** - to configure the single sign-on settings on application side.
- * **[Create Visit.org test user](#create-visitorg-test-user)** - to have a counterpart of B.Simon in Visit.org that is linked to the Azure AD representation of user.
+ 1. **[Create Visit.org test user](#create-visitorg-test-user)** - to have a counterpart of B.Simon in Visit.org that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Visit.org** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Visit.org** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.

1. Visit.org application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
1. In addition to the above, the Visit.org application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificateraw.png)
+ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
1. On the **Set up Visit.org** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
### Create an Azure AD test user
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Visit.org**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, you create a user called B.Simon in Visit.org. Work with [Visi
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Visit.org tile in the Access Panel, you should be automatically signed in to the Visit.org for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Visit.org for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Visit.org tile in My Apps, you should be automatically signed in to the Visit.org for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Visit.org with Azure AD](https://aad.portal.azure.com/)
+Once you configure Visit.org, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Workable Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workable-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Workable | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Workable'
description: Learn how to configure single sign-on between Azure Active Directory and Workable.
Previously updated : 08/09/2021 Last updated : 07/14/2022
-# Tutorial: Azure Active Directory integration with Workable
+# Tutorial: Azure AD SSO integration with Workable
In this tutorial, you'll learn how to integrate Workable with Azure Active Directory (Azure AD). When you integrate Workable with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:

    In the **Reply URL** text box, type a URL using the following pattern:
- `https://www.workable.com/auth/saml/<SUBDOMAIN>/callback`
+ `https://id.workable.com/auth/saml/ats_server/<SUBDOMAIN>/callback`
5. Click **set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type the URL:
- `https://www.workable.com/sso/signin`
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.workable.com/signin`
> [!NOTE]
- > The Reply URL value is not real. Update Reply URL value with the actual Reply URL. Contact [Workable Client support team](mailto:support@workable.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [Workable Client support team](mailto:support@workable.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
In this step, you create the verified credential expert card by using Azure AD V
"required": false } ]
+ },
+ "validityInterval": 2592000,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
  }
}
```
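For context, `validityInterval` in this rules definition is expressed in seconds, so the value 2592000 corresponds to a 30-day validity period for issued credentials (30 × 24 × 60 × 60 = 2,592,000).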
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Azure Kubernetes Service (AKS) provides additional, supported functionality for
## Add-ons
-Add-ons provide extra capabilities for your AKS cluster and their installation and configuration is managed by Azure. Use `az aks addon` to manage all add-ons for your cluster.
+Add-ons are a fully supported way to provide extra capabilities for your AKS cluster. Their installation, configuration, and lifecycle are managed by AKS. Use `az aks addon` to install an add-on or manage the add-ons for your cluster (see the sketch after the following list).
+
+The following rules are used by AKS for applying updates to installed add-ons:
+
+- Only an add-on's patch version can be upgraded within a Kubernetes minor version. The add-on's major/minor version will not be upgraded within the same Kubernetes minor version.
+- The major/minor version of the add-on will only be upgraded when moving to a later Kubernetes minor version.
+- Any breaking changes or behavior changes to an add-on will be announced well in advance, usually 60 days, before a later minor version of Kubernetes is released on AKS.
+- Add-ons can be patched weekly with every new release of AKS, which is announced in the release notes. AKS releases can be controlled using [maintenance windows][maintenance-windows] and followed using [release tracker][release-tracker].
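As a quick illustration of the `az aks addon` workflow mentioned above, here's a minimal sketch; `myAKSCluster` and `myResourceGroup` are placeholder names.

```azurecli
# List the add-ons AKS can manage
az aks addon list-available

# Show which add-ons are currently enabled on a cluster (placeholder names)
az aks addon list --name myAKSCluster --resource-group myResourceGroup

# Enable the monitoring add-on; AKS then manages its configuration and upgrades
az aks addon enable --name myAKSCluster --resource-group myResourceGroup --addon monitoring
```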
The below table shows the available add-ons.
The below table shows a few examples of open-source and third-party integrations

[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
[managed-grafana]: ../managed-grafan
[keda]: keda-about.md
[gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md [managed-grafana]: ../managed-grafan [keda]: keda-about.md
-[web-app-routing]: web-app-routing.md
+[web-app-routing]: web-app-routing.md
+[maintenance-windows]: planned-maintenance.md
+[release-tracker]: release-tracker.md
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Microsoft manages and monitors the following components through the control pane
* Etcd or a compatible key-value store, providing Quality of Service (QoS), scalability, and runtime
* DNS services (for example, kube-dns or CoreDNS)
* Kubernetes proxy or networking (except when [BYOCNI](use-byo-cni.md) is used)
-* Any additional addon or system component running in the kube-system namespace
+* Any additional [add-ons][add-ons] or system components running in the kube-system namespace
AKS isn't a Platform-as-a-Service (PaaS) solution. Some components, such as agent nodes, have *shared responsibility*, where users must help maintain the AKS cluster. User input is required, for example, to apply an agent node operating system (OS) security patch.
When a technical support issue is root-caused by one or more upstream bugs, AKS
* The issue, including links to upstream bugs.
* The workaround and details about an upgrade or another persistence of the solution.
* Rough timelines for the issue's inclusion, based on the upstream release cadence.
++
+[add-ons]: integrations.md#add-ons
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md
The application gateway Standard_v2 SKU supports static VIP type exclusively. Th
## Web Application Firewall
-Web Application Firewall (WAF) is a service that provides centralized protection of your web applications from common exploits and vulnerabilities. WAF is based on rules from the [OWASP (Open Web Application Security Project) core rule sets](https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project) 3.1 (WAF_v2 only), 3.0, and 2.2.9.
+Web Application Firewall (WAF) is a service that provides centralized protection of your web applications from common exploits and vulnerabilities. WAF is based on rules from the [OWASP (Open Web Application Security Project) core rule sets](https://owasp.org/www-project-modsecurity-core-rule-set/) 3.1 (WAF_v2 only), 3.0, and 2.2.9.
Web applications are increasingly targets of malicious attacks that exploit common known vulnerabilities. Common among these exploits are SQL injection attacks and cross-site scripting attacks, to name a few. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching and monitoring at many layers of the application topology. A centralized web application firewall helps make security management much simpler and gives better assurance to application administrators against threats or intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location versus securing each individual web application. Existing application gateways can be converted to a Web Application Firewall enabled application gateway easily.
For an Application Gateway v1-v2 feature comparison, see [What is Azure Applicat
## Next steps

-- Learn how Application Gateway works - [How an application gateway works](how-application-gateway-works.md)
+- Learn [how an application gateway works](how-application-gateway-works.md)
application-gateway Redirect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-overview.md
A common redirection scenario for many web applications is to support automatic
A redirect type sets the response status code for the clients to understand the purpose of the redirect. The following types of redirection are supported: - 301 (Moved permanently): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource will use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.-- 303 (Permanent redirect): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource should use one of the enclosed URIs. - 302 (Found): Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests.
+- 303 (See Other): Indicates that the target resource is redirecting the user agent to a different resource, as indicated by a URI in the Location header field.
- 307 (Temporary redirect): Indicates that the target resource is temporarily under a different URI. The user agent MUST NOT change the request method if it does an automatic redirection to that URI. Since the redirection can change over time, the client ought to continue using the original effective request URI for future requests.

## Redirection capabilities
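As a hedged sketch of how those types map to a configuration, the Azure CLI's redirect configuration accepts `Permanent` (301), `Found` (302), `SeeOther` (303), and `Temporary` (307) as type values; the gateway, resource group, and listener names below are placeholders.

```azurecli
# Redirect an HTTP listener to an HTTPS listener with a 301 (Permanent) redirect
az network application-gateway redirect-config create \
  --gateway-name myAppGateway \
  --resource-group myResourceGroup \
  --name httpToHttps \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true
```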
applied-ai-services Compose Custom Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models.md
Use the programming language code of your choice to create a composed model that
* [**C#/.NET**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/samples/Sample_ModelCompose.md).
-* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/CreateComposedModel.java).
+* [**Java**](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/src/samples/java/com/azure/ai/formrecognizer/administration/ComposeModel.java).
* [**JavaScript**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/formrecognizer/ai-form-recognizer/samples/v3/javascript/createComposedModel.js).
-* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_create_composed_model.py)
+* [**Python**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/formrecognizer/azure-ai-formrecognizer/samples/v3.2-beta/sample_compose_model.py)
Learn more about the Form Recognizer client library by exploring our API referen
> [!div class="nextstepaction"]
> [Form Recognizer API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-06-30-preview/operations/AnalyzeDocument)
->
+>
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Tabular fields are also useful when extracting repeating information within a do
## Supported regions
-Starting on August 1st 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
+Starting August 01, 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
* Brazil South
* Canada Central
applied-ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/service-limits.md
Last updated 06/06/2022
-# Form Recognizer service Quotas and Limits
+# Form Recognizer service quotas and limits
This article contains a quick reference and the **detailed description** of Azure Form Recognizer service Quotas and Limits for all [pricing tiers](https://azure.microsoft.com/pricing/details/form-recognizer/). It also contains some best practices to avoid request throttling.
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the user-assigned managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant.

-- To add the user assigned managed identity you must have the ```Microsoft.ManagedIdentity/userAssignedIdentities/*/read``` and ```Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action``` permissions over the user assigned managed identity, which are granted to [Managed Identity Operator](/azure/role-based-access-control/built-in-roles.md#managed-identity-operator) and [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles.md#managed-identity-contributor)
-
-- To assign an Azure role to the managed identity, you must have ```Microsoft.Authorization/roleAssignments/write``` permission, which is granted either to [User Access Administrator](/azure/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles.md#owner)
+- To assign an Azure role, you must have ```Microsoft.Authorization/roleAssignments/write``` permissions, such as [User Access Administrator](/azure/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles.md#owner). A minimal sketch of this step is shown below.
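The following is a minimal Azure CLI sketch of that role assignment; the identity, resource group, role, and scope are all placeholders to adjust for your resource.

```azurecli
# Look up the principal ID of the user-assigned managed identity (placeholder names)
principalId=$(az identity show --name myUserIdentity --resource-group myResourceGroup \
  --query principalId --output tsv)

# Grant the identity a role on the target resource so runbooks can authenticate to it
az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```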
## Add user-assigned managed identity for Azure Automation account
automation Manage Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-run-as-account.md
+
+ Title: Manage an Azure Automation Run As account
+description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal.
++ Last updated : 08/02/2021++++
+# Manage an Azure Automation Run As account
+
+Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features.
+
+In this article, we cover how to manage a Run As or Classic Run As account, including:
+
+ * How to renew a self-signed certificate
+ * How to renew a certificate from an enterprise or third-party certificate authority (CA)
+ * How to manage permissions for the Run As account
+
+To learn more about Azure Automation account authentication, permissions required to manage the Run As account, and guidance related to process automation scenarios, see [Automation Account authentication overview](automation-security-overview.md).
+
+## <a name="cert-renewal"></a>Renew a self-signed certificate
+
+The self-signed certificate that you have created for the Run As account expires one year from the date of creation. At some point before the certificate expires, you must renew it. You can renew it any time before it expires.
+
+When you renew the self-signed certificate, the current valid certificate is retained to ensure that any runbooks that are queued up or actively running, and that authenticate with the Run As account, aren't negatively affected. The certificate remains valid until its expiration date.
+
+>[!NOTE]
+>If you think that the Run As account has been compromised, you can delete and re-create the self-signed certificate.
+
+>[!NOTE]
+>If you have configured your Run As account to use a certificate issued by your enterprise or third-party CA and you use the option to renew a self-signed certificate option, the enterprise certificate is replaced by a self-signed certificate. To renew your certificate in this case, see [Renew an enterprise or third-party certificate](#renew-an-enterprise-or-third-party-certificate).
+
+Use the following steps to renew the self-signed certificate.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Go to your Automation account and select **Run As Accounts** in the account settings section.
+
+ :::image type="content" source="media/manage-run-as-account/automation-account-properties-pane.png" alt-text="Automation account properties pane.":::
+
+1. On the **Run As Accounts** properties page, select either **Run As Account** or **Classic Run As Account** depending on which account you need to renew the certificate for.
+
+1. On the **Properties** page for the selected account, select **Renew certificate**.
+
+ :::image type="content" source="media/manage-run-as-account/automation-account-renew-run-as-certificate.png" alt-text="Renew certificate for Run As account.":::
+
+1. While the certificate is being renewed, you can track the progress under **Notifications** from the menu.
+
+## Renew an enterprise or third-party certificate
+
+Every certificate has a built-in expiration date. If the certificate you assigned to the Run As account was issued by a certification authority (CA), you need to perform a different set of steps to configure the Run As account with the new certificate before it expires. You can renew it any time before it expires.
+
+1. Import the renewed certificate following the steps for [Create a new certificate](./shared-resources/certificates.md#create-a-new-certificate). Automation requires the certificate to have the following configuration:
+
+ * Specify the provider **Microsoft Enhanced RSA and AES Cryptographic Provider**
+ * Marked as exportable
+ * Configured to use the SHA256 algorithm
+ * Saved in the `*.pfx` or `*.cer` format.
+
+ After you import the certificate, note or copy the certificate **Thumbprint** value. This value is used to update the Run As connection properties with the new certificate.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Automation Accounts**.
+
+1. On the Automation Accounts page, select your Automation account from the list.
+
+1. In the left pane, select **Connections**.
+
+1. On the **Connections** page, select **AzureRunAsConnection** and update the **Certificate Thumbprint** with the new certificate thumbprint.
+
+1. Select **Save** to commit your changes.
+
+## Grant Run As account permissions in other subscriptions
+
+Azure Automation supports using a single Automation account from one subscription, and executing runbooks against Azure Resource Manager resources across multiple subscriptions. This configuration does not support the Azure Classic deployment model.
+
+You assign the Run As account service principal the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role in the other subscription, or more restrictive permissions. For more information, see [Role-based access control](automation-role-based-access-control.md) in Azure Automation. To assign the Run As account to the role in the other subscription, the user account performing this task needs to be a member of the **Owner** role in that subscription.
+
+> [!NOTE]
+> This configuration only supports multiple subscriptions of an organization using a common Azure AD tenant.
+
+Before granting the Run As account permissions, you need to first note the display name of the service principal to assign.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. From your Automation account, select **Run As Accounts** under **Account Settings**.
+1. Select **Azure Run As Account**.
+1. Copy or note the value for **Display Name** on the **Azure Run As Account** page.
+
+For detailed steps for how to add role assignments, check out the following articles depending on the method you want to use.
+
+* [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
+* [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
+* [Assign Azure roles using the Azure CLI](../role-based-access-control/role-assignments-cli.md)
+* [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md)
+
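For example, a minimal Azure CLI sketch of that role assignment; the application ID and subscription ID below are placeholders.

```azurecli
# Grant the Run As account service principal the Contributor role
# in the other subscription (placeholder values)
az role assignment create \
  --assignee "<Run As account application ID>" \
  --role "Contributor" \
  --scope "/subscriptions/<other-subscription-id>"
```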
+After assigning the Run As account to the role, in your runbook specify `Set-AzContext -SubscriptionId "xxxx-xxxx-xxxx-xxxx"` to set the subscription context to use. For more information, see [Set-AzContext](/powershell/module/az.accounts/set-azcontext).
+
+## Check role assignment for Azure Automation Run As account
+
+To check the role assigned to the Azure AD app for the Automation Run As account, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Go to your Automation account and in **Account Settings**, select **Run as accounts**.
+1. Select **Azure Run as Account** to view the **Application ID**.
+
+ :::image type="content" source="media/manage-run-as-account/automation-run-as-app-id.png" alt-text="Screenshot that describes on how to copy application ID.":::
+
+1. Go to Azure portal and search for **Azure Active Directory**.
+1. On the **Active Directory Overview** page, **Overview** tab, in the search box, enter the Application ID.
+
+ :::image type="content" source="media/manage-run-as-account/active-directory-app-id-inline.png" alt-text="Screenshot that describes application ID copied in the Overview tab." lightbox="media/manage-run-as-account/active-directory-app-id-expanded.png":::
+
+ In the **Enterprise applications** section, you will see the display name of your Run As Account.
+
+1. Select the application ID and in the properties page of that ID, go to **Overview** blade, **Properties**, and copy the name of the Enterprise application.
+1. Go to Azure portal and search for your **Subscription** and select your subscription.
+1. Go to **Access Control (IAM)**, **Role Assignment** and paste the name of the Enterprise application in the search box to view the App along with the role and scope assigned to it.
+For example, in the screenshot below, the Run As account Azure AD app has Contributor access at the subscription level.
+
+ :::image type="content" source="media/manage-run-as-account/check-role-assignments-inline.png" alt-text="Screenshot that describes how to view the role and scope assigned to the enterprise application." lightbox="media/manage-run-as-account/check-role-assignments-expanded.png":::
++
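If you prefer the command line, a rough equivalent of this check with the Azure CLI, using the Application ID noted earlier (the value below is a placeholder):

```azurecli
# List every role assignment for the Run As account's service principal
az role assignment list \
  --assignee "<application-id>" \
  --all \
  --output table
```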
+## Limit Run As account permissions
+
+To control the targeting of Automation against resources in Azure, you can run the [Update-AutomationRunAsAccountRoleAssignments.ps1](https://aka.ms/AA5hug8) script. This script changes your existing Run As account service principal to create and use a custom role definition. The role has permissions for all resources except [Key Vault](../key-vault/index.yml).
+
+>[!IMPORTANT]
+>After you run the **Update-AutomationRunAsAccountRoleAssignments.ps1** script, runbooks that access Key Vault through the use of Run As accounts no longer work. Before running the script, you should review runbooks in your account for calls to Azure Key Vault. To enable access to Key Vault from Azure Automation runbooks, you must [add the Run As account to Key Vault's permissions](#add-permissions-to-key-vault).
+
+If you need to further restrict what the Run As service principal can do, you can add other resource types to the `NotActions` element of the custom role definition. The following example restricts access to `Microsoft.Compute/*`. If you add this resource type to `NotActions` for the role definition, the role will not be able to access any Compute resource. To learn more about role definitions, see [Understand role definitions for Azure resources](../role-based-access-control/role-definitions.md).
+
+```powershell
+# Get the custom role definition that the Run As account uses
+$roleDefinition = Get-AzRoleDefinition -Name 'Automation RunAs Contributor'
+# Add Compute to the NotActions list so the role can't touch Compute resources
+$roleDefinition.NotActions.Add("Microsoft.Compute/*")
+# Persist the updated role definition
+$roleDefinition | Set-AzRoleDefinition
+```
+
+You can determine if the service principal used by your Run As account is assigned the **Contributor** role or a custom one.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your Automation account and select **Run As Accounts** in the account settings section.
+1. Select **Azure Run As Account**.
+1. Select **Role** to locate the role definition that is being used.
++
+You can also determine the role definition used by the Run As accounts for multiple subscriptions or Automation accounts. Do this by using the [Check-AutomationRunAsAccountRoleAssignments.ps1](https://aka.ms/AA5hug5) script in the PowerShell Gallery.
+
+### Add permissions to Key Vault
+
+You can allow Azure Automation to verify if Key Vault and your Run As account service principal are using a custom role definition. You must:
+
+* Grant permissions to Key Vault.
+* Set the access policy.
+
+You can use the [Extend-AutomationRunAsAccountRoleAssignmentToKeyVault.ps1](https://aka.ms/AA5hugb) script in the PowerShell Gallery to grant your Run As account permissions to Key Vault. See [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-powershell.md) for more details on setting permissions on Key Vault.
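As an alternative sketch using the Azure CLI instead of the script (the vault name and application ID are placeholders; adjust the permission list to what your runbooks actually need):

```azurecli
# Allow the Run As account's service principal to read secrets in the vault
az keyvault set-policy \
  --name myKeyVault \
  --spn "<Run As account application ID>" \
  --secret-permissions get list
```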
+
+## Resolve misconfiguration issues for Run As accounts
+
+Some configuration items necessary for a Run As or Classic Run As account might have been deleted or created improperly during initial setup. Possible instances of misconfiguration include:
+
+* Certificate asset
+* Connection asset
+* Run As account removed from the Contributor role
+* Service principal or application in Azure AD
+
+For such misconfiguration instances, the Automation account detects the changes and displays a status of *Incomplete* on the Run As Accounts properties pane for the account.
++
+When you select the Run As account, the account properties pane displays the following error message:
+
+```text
+The Run As account is incomplete. Either one of these was deleted or not created - Azure Active Directory Application, Service Principal, Role, Automation Certificate asset, Automation Connect asset - or the Thumbprint is not identical between Certificate and Connection. Please delete and then re-create the Run As Account.
+```
+
+You can quickly resolve these Run As account issues by [deleting](delete-run-as-account.md) and [re-creating](create-run-as-account.md) the Run As account.
+
+## Next steps
+
+* [Application Objects and Service Principal Objects](../active-directory/develop/app-objects-and-service-principals.md).
+* [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md).
+* To create or re-create a Run As account, see [Create a Run As account](create-run-as-account.md).
+* If you no longer need to use a Run As account, see [Delete a Run As account](delete-run-as-account.md).
azure-app-configuration Concept Config File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-config-file.md
az appconfig kv import --profile appconfig/kvset --name <your store name> --sour
```

> [!NOTE]
-> The KVSet file content profile is currently supported in Azure CLI only and requires CLI version 2.30.0 or later.
+> The KVSet file content profile is currently supported in
+> - Azure CLI version 2.30.0 or later
+> - [Azure App Configuration Push Task](./push-kv-devops-pipeline.md) version 3.3.0 or later
The following table shows all the imported data in your App Configuration store.
azure-arc Concepts Distributed Postgres Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/concepts-distributed-postgres-hyperscale.md
The key concepts around Azure Arc-enabled PostgreSQL Hyperscale are summarized b
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]

## Nodes and tables
-It is important to know about the following concepts to benefit the most from Azure Arc-enabled Postgres Hyperscale:
+It is important to know about the following concepts to benefit the most from Azure Arc-enabled PostgreSQL Hyperscale:
- Specialized Postgres nodes in Azure Arc-enabled PostgreSQL Hyperscale: coordinator and workers
- Types of tables: distributed tables, reference tables and local tables
- Shards
See more information at [Nodes and tables in Azure Database for PostgreSQL – H
## Determine the application type

Clearly identifying the type of application you are building is important. Why? Because running efficient queries on an Azure Arc-enabled PostgreSQL Hyperscale server group requires that tables be properly distributed across servers.
-The recommended distribution varies by the type of application and its query patterns. There are broadly two kinds of applications that work well on Azure Arc-enabled Postgres Hyperscale:
+The recommended distribution varies by the type of application and its query patterns. There are broadly two kinds of applications that work well on Azure Arc-enabled PostgreSQL Hyperscale:
- Multi-Tenant Applications
- Real-Time Applications
azure-arc Create Postgresql Hyperscale Server Group Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-data-studio.md
In a few minutes, your creation should successfully complete.
### Important parameters you should consider:

-- **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate 2. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
+- **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md). The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with 2 worker nodes, indicate 2. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
|You need |Shape of the server group you will deploy |Number of worker nodes to indicate |Note |
|---|---|---|---|
|A scaled out form of Postgres to satisfy the scalability needs of your applications. |3 or more Postgres instances, 1 is coordinator, n are workers with n >=2. |n, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
- |A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A basic form of PostgreSQL Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |1 Postgres instance that is both coordinator and worker. |0 and add Citus to the list of extensions to load. |The Citus extension that provides the Hyperscale capability is loaded. |
|A simple instance of Postgres that is ready to scale out when you need it. |1 Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |0 |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
| | | | |

This table is demonstrated in the following figure:
- :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+ :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts PostgreSQL Hyperscale worker node parameters and associated architecture." border="false":::
While indicating 1 worker works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get 2 instances of Postgres: 1 coordinator and 1 worker. With this setup you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
In a few minutes, your creation should successfully complete.
## Next steps

- [Manage your server group using Azure Data Studio](manage-postgresql-hyperscale-server-group-with-azure-data-studio.md)
- [Monitor your server group](monitor-grafana-kibana.md)
-
+- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale:
  * [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md)
  * [Determine application type](../../postgresql/hyperscale/howto-app-type.md)
  * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
azure-arc Create Postgresql Hyperscale Server Group Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group-azure-portal.md
Be aware of the following considerations when you're deploying:
This table is demonstrated in the following figure:
- :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+ :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts PostgreSQL Hyperscale worker node parameters and associated architecture." border="false":::
Although you can indicate *1* worker, it's not a good idea to do so. This deployment doesn't provide you with much value. With it, you get two instances of Azure Arc-enabled PostgreSQL Hyperscale: one coordinator and one worker. You don't scale out the data because you deploy a single worker. As such, you don't see an increased level of performance and scalability.
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
The main parameters should consider are:
- **the version of the PostgreSQL engine** you want to deploy: by default it is version 12. To deploy version 12, you can either omit this parameter or pass one of the following parameters: `--engine-version 12` or `-ev 12`. To deploy version 11, indicate `--engine-version 11` or `-ev 11`. -- **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about Postgres Hyperscale](concepts-distributed-postgres-hyperscale.md). To indicate the number of worker nodes to deploy, use the parameter `--workers` or `-w` followed by an integer. The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with two worker nodes, indicate `--workers 2` or `-w 2`. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
+- **the number of worker nodes** you want to deploy to scale out and potentially reach better performances. Before proceeding here, read the [concepts about PostgreSQL Hyperscale](concepts-distributed-postgres-hyperscale.md). To indicate the number of worker nodes to deploy, use the parameter `--workers` or `-w` followed by an integer. The table below indicates the range of supported values and what form of Postgres deployment you get with them. For example, if you want to deploy a server group with two worker nodes, indicate `--workers 2` or `-w 2`. This will create three pods, one for the coordinator node/instance and two for the worker nodes/instances (one for each of the workers).
|You need |Shape of the server group you will deploy |`-w` parameter to use |Note |
|---|---|---|---|
|A scaled out form of Postgres to satisfy the scalability needs of your applications. |Three or more Postgres instances, one is coordinator, n are workers with n >=2. |Use `-w n`, with n>=2. |The Citus extension that provides the Hyperscale capability is loaded. |
- |A basic form of Postgres Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |One Postgres instance that is both coordinator and worker. |Use `-w 0` and load the Citus extension. Use the following parameters if deploying from command line: `-w 0` --extensions Citus. |The Citus extension that provides the Hyperscale capability is loaded. |
+ |A basic form of PostgreSQL Hyperscale for you to do functional validation of your application at minimum cost. Not valid for performance and scalability validation. For that you need to use the type of deployments described above. |One Postgres instance that is both coordinator and worker. |Use `-w 0` and load the Citus extension. Use the following parameters if deploying from command line: `-w 0` --extensions Citus. |The Citus extension that provides the Hyperscale capability is loaded. |
|A simple instance of Postgres that is ready to scale out when you need it. |One Postgres instance. It is not yet aware of the semantic for coordinator and worker. To scale it out after deployment, edit the configuration, increase the number of worker nodes and distribute the data. |Use `-w 0` or do not specify `-w`. |The Citus extension that provides the Hyperscale capability is present on your deployment but is not yet loaded. |
| | | | |

This table is demonstrated in the following figure:
- :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts Postgres Hyperscale worker node parameters and associated architecture." border="false":::
+ :::image type="content" source="media/postgres-hyperscale/deployment-parameters.png" alt-text="Diagram that depicts PostgreSQL Hyperscale worker node parameters and associated architecture." border="false":::
While using `-w 1` works, we do not recommend you use it. This deployment will not provide you much value. With it, you will get two instances of Postgres: One coordinator and one worker. With this setup, you actually do not scale out the data since you deploy a single worker. As such you will not see an increased level of performance and scalability. We will remove the support of this deployment in a future release.
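As a hedged example of the scaled-out form described in the table above (the server group name and namespace are placeholders, and this assumes the `arcdata` CLI extension is installed):

```azurecli
# Deploy a PostgreSQL Hyperscale server group with one coordinator and
# two workers, for three pods in total (placeholder name and namespace)
az postgres arc-server create --name postgres01 --workers 2 \
  --k8s-namespace arc --use-k8s
```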
azure-arc Migrate Postgresql Data Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-postgresql-data-into-postgresql-hyperscale-server-group.md
Within your Arc setup you can use `psql` to connect to your Postgres instance, s
* [Design a multi-tenant database](../../postgresql/hyperscale/tutorial-design-database-multi-tenant.md)*
* [Design a real-time analytics dashboard](../../postgresql/hyperscale/tutorial-design-database-realtime.md)*
-> *In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for Postgres - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
+> *In these documents, skip the sections **Sign in to the Azure portal**, and **Create an Azure Database for PostgreSQL - Hyperscale (Citus)**. Implement the remaining steps in your Azure Arc deployment. Those sections are specific to the Azure Database for PostgreSQL Hyperscale (Citus) offered as a PaaS service in the Azure cloud but the other parts of the documents are directly applicable to your Azure Arc-enabled PostgreSQL Hyperscale.
- [Scale out your Azure Database for PostgreSQL Hyperscale server group](scale-out-in-postgresql-hyperscale-server-group.md)
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
If you use the Azure CLI extension:
If you use the Azure Data Studio extension to install:

- Uninstall the Azure Data Studio extension. Select the Extensions panel, select the **Azure Arc** extension, and then select **Uninstall**.
-- Download the latest pre-release Azure Data Studio extension .vsix file from [https://aka.ms/ads-arcdata-ext](https://aka.ms/ads-arcdata-ext).
-- Install the extension by choosing File -> Install Extension from VSIX package and then browsing to the download location of the .vsix file.
+- Download the latest pre-release Azure Data Studio extension .vsix files from [https://aka.ms/ads-arcdata-ext](https://aka.ms/ads-arcdata-ext) and [https://aka.ms/ads-azcli-ext](https://aka.ms/ads-azcli-ext).
+- Install the extensions by choosing File -> Install Extension from VSIX package and then browsing to the download location of the .vsix files. Install the `azcli` extension first and then `arc`.
### Install using Azure CLI
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
For complete release version information, see [Version log](version-log.md#july-
- Permissions required to deploy the Arc data controller have been reduced to a least-privilege level.
- When deployed via the Azure CLI, the Arc data controller is now installed via a K8s job that uses a helm chart to do the installation. There's no change to the user experience.
+- A resource sync rule is created automatically when the data controller is deployed in direct connected mode. This enables customers to deploy an Azure Arc-enabled SQL Managed Instance by directly talking to the Kubernetes APIs.
## June 14, 2022
For instructions see [What are Azure Arc-enabled data services?](overview.md)
- [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) (requires installing the client tools first) - [Create an Azure SQL Managed Instance on Azure Arc](create-sql-managed-instance.md) (requires creation of an Azure Arc data controller first) - [Create an Azure Database for PostgreSQL Hyperscale server group on Azure Arc](create-postgresql-hyperscale-server-group.md) (requires creation of an Azure Arc data controller first)-- [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md)
+- [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md)
azure-arc Resource Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resource-sync.md
+
+ Title: Resource sync
+description: Synchronize resources for Azure Arc-enabled data services in directly connected mode
+Last updated : 07/14/2022
+# Resource sync
+
+Resource sync lets you create, update, or delete resources directly on the Kubernetes cluster using the Kubernetes APIs in direct connected mode, and automatically synchronizes those changes to Azure. This article explains resource sync.
++
+When you deploy Azure Arc-enabled data services in direct connected mode, the deployment creates a *resource sync* rule. This rule ensures that Arc resources, such as a SQL managed instance, that are created or updated by calling the Kubernetes APIs directly are reflected appropriately in the mapped resources in Azure, and that the resource metadata is continually synced back to Azure. The rule is created within the same resource group as the data controller.
+
+ > [!NOTE]
+ > The resource sync rule is created by default during the Azure Arc data controller deployment, and applies only in direct connected mode.
+
+Without the resource sync rule, the SQL managed instance is created using the following command:
+
+```azurecli
+az sql mi-arc create --name <name> --resource-group <group> --location <Azure location> --subscription <subscription> --custom-location <custom-location> --storage-class-backups <RWX capable storageclass>
+```
+
+In this scenario, the Azure ARM APIs are called first and the mapped Azure resource is created. Once this mapped resource is created successfully, the Kubernetes API is called to create the SQL managed instance on the Kubernetes cluster.
++
+With the resource sync rule, you can use the Kubernetes API to create the Arc-enabled SQL managed instance, as follows:
+
+```azurecli
+az sql mi-arc create --name <name> -k <namespace> --use-k8s --storage-class-backups <RWX capable storageclass>
+```
+
+In this scenario, the SQL managed instance is directly created in the Kubernetes cluster. The resource sync rule ensures that the equivalent resource in Azure is created as well.
+
+If the resource sync rule is deleted accidentally, you can add it back to restore the sync functionality by using the REST API shown below. Refer to the Azure REST API reference for guidance on executing REST APIs, and make sure to use the subscription and resource group of the data controller's Azure resource.
++
+```rest
+https://management.azure.com/subscriptions/{{subscription}}/resourcegroups/{{resource_group}}/providers/microsoft.extendedlocation/customlocations/{{custom_location_name}}/resourcesyncrules/defaultresourcesyncrule?api-version=2021-08-31-preview
+```
+```json
+{
+    "location": "{{Azure region}}",
+    "properties": {
+        "targetResourceGroup": "/subscriptions/{{subscription}}/resourcegroups/{{resource_group_of_data_controller}}",
+        "priority": 100,
+        "selector": {
+            "matchLabels": {
+                "management.azure.com/resourceProvider": "Microsoft.AzureArcData" //Mandatory
+            }
+        }
+    }
+}
+```
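+
+A minimal sketch for invoking that endpoint, assuming the rule is created with a PUT and that the JSON body above is saved to a local file (the file name is a placeholder):
+
+```azurecli
+# Recreate the default resource sync rule with az rest; replace the
+# {{placeholders}} with your subscription, resource group, and custom location.
+az rest --method put \
+  --url "https://management.azure.com/subscriptions/{{subscription}}/resourcegroups/{{resource_group}}/providers/microsoft.extendedlocation/customlocations/{{custom_location_name}}/resourcesyncrules/defaultresourcesyncrule?api-version=2021-08-31-preview" \
+  --body @resourcesyncrule.json
+```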
+
+### Limitations
+
+- The resource sync rule does not hydrate the Azure Arc data controller. The data controller must be deployed via the ARM API.
+- Resource sync applies only to the data services, such as Arc-enabled SQL Managed Instance, after the data controller is deployed.
+- The resource sync rule does not hydrate Azure Arc-enabled PostgreSQL.
+- The resource sync rule does not hydrate the Azure Arc Active Directory connector.
+- The resource sync rule does not hydrate Azure Arc instance failover groups.
+
+## Next steps
+
+[Create Azure Arc-enabled data controller using Kubernetes tools](create-data-controller-using-kubernetes-native-tools.md)
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
Run a command like this to download the files replace the value of the pod name
> Your container will need to have Internet connectivity over 443 to download the file from GitHub. > [!NOTE]
-> Use the pod name of the Coordinator node of the Postgres Hyperscale server group. Its name is \<server group name\>c-0 (for example postgres01c-0, where c stands for Coordinator node). If you are not sure of the pod name run the command `kubectl get pod`
+> Use the pod name of the Coordinator node of the PostgreSQL Hyperscale server group. Its name is \<server group name\>c-0 (for example postgres01c-0, where c stands for Coordinator node). If you are not sure of the pod name, run the command `kubectl get pod`.
```console kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/cluster_api/capi_azure/arm_template/artifacts/AdventureWorks2019.sql"
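Once the file is downloaded, a hypothetical next step is to load it with `psql` from the same pod (assuming the default `postgres` user; adjust names to your deployment):

```console
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username=postgres -f /tmp/AdventureWorks2019.sql
```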
azure-arc Scale Out In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/scale-out-in-postgresql-hyperscale-server-group.md
# Scale out and in your Azure Arc-enabled PostgreSQL Hyperscale server group by adding more worker nodes This document explains how to scale out and scale in an Azure Arc-enabled PostgreSQL Hyperscale server group. It does so by taking you through a scenario. **If you do not want to run through the scenario and want to just read about how to scale out, jump to the paragraph [Scale out](#scale-out)** or [Scale in](#scale-in).
-You scale out when you add Postgres instances (Postgres Hyperscale worker nodes) to your Azure Arc-enabled PosrgreSQL Hyperscale server group.
+You scale out when you add Postgres instances (PostgreSQL Hyperscale worker nodes) to your Azure Arc-enabled PostgreSQL Hyperscale server group.
-You scale in when you remove Postgres instances (Postgres Hyperscale worker nodes) from your Azure Arc-enabled PosrgreSQL Hyperscale server group.
+You scale in when you remove Postgres instances (PostgreSQL Hyperscale worker nodes) from your Azure Arc-enabled PostgreSQL Hyperscale server group.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
The scale-in operation is an online operation. Your applications continue to acc
- Read about how to [scale up and down (memory, vCores) your Azure Arc-enabled PostgreSQL Hyperscale server group](scale-up-down-postgresql-hyperscale-server-group-using-cli.md) - Read about how to set server parameters in your Azure Arc-enabled PostgreSQL Hyperscale server group-- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for Postgres Hyperscale. :
+- Read the concepts and How-to guides of Azure Database for PostgreSQL Hyperscale to distribute your data across multiple PostgreSQL Hyperscale nodes and to benefit from all the power of Azure Database for PostgreSQL Hyperscale. :
* [Nodes and tables](../../postgresql/hyperscale/concepts-nodes.md) * [Determine application type](../../postgresql/hyperscale/howto-app-type.md) * [Choose a distribution column](../../postgresql/hyperscale/howto-choose-distribution-column.md)
azure-arc Conceptual Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-extensions.md
Title: "Cluster extensions - Azure Arc-enabled Kubernetes" Previously updated : 11/24/2021 Last updated : 07/12/2022 description: "This article provides a conceptual overview of cluster extensions capability of Azure Arc-enabled Kubernetes" # Cluster extensions
-[Helm charts](https://helm.sh/) help you manage Kubernetes applications by providing the building blocks needed to define, install, and upgrade even the most complex Kubernetes applications. Cluster extension feature builds on top of the packaging components of Helm by providing an Azure Resource Manager driven experience for installation and lifecycle management of different Azure capabilities on top of your Kubernetes cluster. A cluster operator or admin can use the cluster extensions feature to:
+[Helm charts](https://helm.sh/) help you manage Kubernetes applications by providing the building blocks needed to define, install, and upgrade even the most complex Kubernetes applications. The cluster extension feature builds on top of the packaging components of Helm by providing an Azure Resource Manager-driven experience for installation and lifecycle management of different Azure capabilities on top of your Kubernetes cluster.
+
+A cluster operator or admin can use the cluster extensions feature to:
- Install and manage key management, data, and application offerings on your Kubernetes cluster. For a list of available extensions, see [Currently available extensions](extensions.md#currently-available-extensions). - Use Azure Policy to automate at-scale deployment of cluster extensions across all clusters in your environment.
description: "This article provides a conceptual overview of cluster extensions
- Set up auto-upgrade for extensions or pin to a specific version and manually upgrade versions. - Update extension properties or delete extension instances.
-An extension can be cluster-scoped or scoped to a namespace. Each extension type (like Azure Monitor for containers, Microsoft Defender for Cloud, Azure App services) defines the scope at which they operate on the cluster.
+An extension can be [cluster-scoped or scoped to a namespace](extensions.md#extension-scope). Each extension type (such as Azure Monitor for containers, Microsoft Defender for Cloud, Azure App Service) defines the scope at which it operates on the cluster.
## Architecture
The `config-agent` running in your cluster tracks new and updated extension reso
Both the `config-agent` and `extensions-manager` components running in the cluster handle extension instance updates, version updates and extension instance deletion. These agents use the system-assigned managed identity of the cluster to securely communicate with Azure services. > [!NOTE]
-> * `config-agent` checks for new or updated extension instances on top of Azure Arc-enabled Kubernetes cluster. The agents require connectivity for the desired state of the extension to be pulled down to the cluster. If agents are unable to connect to Azure, propagation of the desired state to the cluster is delayed.
-> * Protected configuration settings for an extension instance are stored for up to 48 hours in the Azure Arc-enabled Kubernetes services. As a result, if the cluster remains disconnected during the 48 hours after the extension resource was created on Azure, the extension transitions from a `Pending` state to `Failed` state. As a result, we advise bringing the clusters online regularly.
+> `config-agent` checks for new or updated extension instances on top of Azure Arc-enabled Kubernetes cluster. The agents require connectivity for the desired state of the extension to be pulled down to the cluster. If agents are unable to connect to Azure, propagation of the desired state to the cluster is delayed.
+>
+> Protected configuration settings for an extension instance are stored for up to 48 hours in the Azure Arc-enabled Kubernetes services. As a result, if the cluster remains disconnected during the 48 hours after the extension resource was created on Azure, the extension changes from a `Pending` state to a `Failed` state. To prevent this, we recommend bringing clusters online regularly.
## Next steps
-* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
-* [Deploy cluster extensions](./extensions.md) on your Azure Arc-enabled Kubernetes cluster.
+- Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+- [Deploy cluster extensions](./extensions.md) on your Azure Arc-enabled Kubernetes cluster.
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Title: "Azure Arc-enabled Kubernetes cluster extensions"
Previously updated : 05/24/2022 Last updated : 07/12/2022 description: "Deploy and manage lifecycle of extensions on Azure Arc-enabled Kubernetes"
In this article, you learn:
A conceptual overview of this feature is available in [Cluster extensions - Azure Arc-enabled Kubernetes](conceptual-extensions.md). - ## Prerequisites * [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
A conceptual overview of this feature is available in [Cluster extensions - Azur
## Currently available extensions
+The following extensions are currently available.
+ | Extension | Description | | | -- | | [Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Provides visibility into the performance of workloads deployed on the Kubernetes cluster. Collects memory and CPU utilization metrics from controllers, nodes, and containers. |
A conceptual overview of this feature is available in [Cluster extensions - Azur
| [Azure Key Vault Secrets Provider](tutorial-akv-secrets-provider.md) | The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. | | [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-kubernetes-azure-arc.md?toc=/azure/azure-arc/kubernetes/toc.json&bc=/azure/azure-arc/kubernetes/breadcrumb/toc.json) | Gathers information related to security like audit log data from the Kubernetes cluster. Provides recommendations and threat alerts based on gathered data. | | [Azure Arc-enabled Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) | Deploys Open Service Mesh on the cluster and enables capabilities like mTLS security, fine grained access control, traffic shifting, monitoring with Azure Monitor or with open source add-ons of Prometheus and Grafana, tracing with Jaeger, integration with external certification management solution. |
-| [Azure Arc-enabled Data Services](../../azure-arc/kubernetes/custom-locations.md#create-custom-location) | Makes it possible for you to run Azure data services on-prem, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. |
+| [Azure Arc-enabled Data Services](../../azure-arc/kubernetes/custom-locations.md#create-custom-location) | Makes it possible for you to run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. |
| [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) | Allows you to provision an App Service Kubernetes environment on top of Azure Arc-enabled Kubernetes clusters. |
-| [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) | Create and manage event grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters. |
+| [Azure Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md) | Create and manage event grid resources such as topics and event subscriptions on top of Azure Arc-enabled Kubernetes clusters. |
| [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md) | Deploy and manage API Management gateway on Azure Arc-enabled Kubernetes clusters. | | [Azure Arc-enabled Machine Learning](../../machine-learning/how-to-attach-kubernetes-anywhere.md) | Deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters. | | [Flux (GitOps)](./conceptual-gitops-flux2.md) | Use GitOps with Flux to manage cluster configuration and application deployment. | | [Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes](../../aks/dapr.md)| Eliminates the overhead of downloading Dapr tooling and manually installing and managing the runtime on your clusters. |
+### Extension scope
+
+Extension installations on the Arc-enabled Kubernetes cluster are either *cluster-scoped* or *namespace-scoped*.
+
+A cluster-scoped extension will be installed in the `release-namespace` specified during extension creation. Typically, only one instance of the cluster-scoped extension and its components, such as pods, operators, and Custom Resource Definitions (CRDs), is installed in the release namespace on the cluster.
+
+A namespace-scoped extension can be installed in a given namespace provided using the `--namespace` property. Because the extension can be deployed at a namespace scope, multiple instances of the namespace-scoped extension and its components can run on the cluster. Each extension instance has permissions on the namespace where it's deployed.
+
+All of the extensions listed above are cluster-scoped, except for [Azure API Management on Azure Arc](../../api-management/how-to-deploy-self-hosted-gateway-azure-arc.md).
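+
+As a hedged illustration of the two scopes with the Azure CLI (the extension types shown and the exact flag spellings are assumptions; verify against your `k8s-extension` CLI version):
+
+```azurecli
+# Cluster-scoped install: one instance per cluster, with components placed
+# in the release namespace.
+az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers \
+  --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> \
+  --scope cluster --release-namespace azuremonitor
+
+# Namespace-scoped install: multiple instances can coexist, each confined
+# to its target namespace.
+az k8s-extension create --name apimgmt-gateway --extension-type Microsoft.ApiManagement.Gateway \
+  --cluster-type connectedClusters --cluster-name <clusterName> --resource-group <resourceGroupName> \
+  --scope namespace --target-namespace myapp
+```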
+ ## Usage of cluster extensions ### Create extensions instance
az k8s-extension create --name azuremonitor-containers --extension-type Microso
``` > [!NOTE]
-> * The service is unable to retain sensitive information for more than 48 hours. If Azure Arc-enabled Kubernetes agents don't have network connectivity for more than 48 hours and cannot determine whether to create an extension on the cluster, then the extension transitions to `Failed` state. Once in `Failed` state, you will need to run `k8s-extension create` again to create a fresh extension Azure resource.
-> * Azure Monitor for containers is a singleton extension (only one required per cluster). You'll need to clean up any previous Helm chart installations of Azure Monitor for containers (without extensions) before installing the same via extensions. Follow the instructions for [deleting the Helm chart before running `az k8s-extension create`](../../azure-monitor/containers/container-insights-optout-hybrid.md).
+> The service is unable to retain sensitive information for more than 48 hours. If Azure Arc-enabled Kubernetes agents don't have network connectivity for more than 48 hours and cannot determine whether to create an extension on the cluster, then the extension transitions to `Failed` state. Once in `Failed` state, you will need to run `k8s-extension create` again to create a fresh extension Azure resource.
+>
+> Azure Monitor for containers is a singleton extension (only one required per cluster). You'll need to clean up any previous Helm chart installations of Azure Monitor for containers (without extensions) before installing the same via extensions. Follow the instructions for [deleting the Helm chart before running `az k8s-extension create`](../../azure-monitor/containers/container-insights-optout-hybrid.md).
**Required parameters**
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 11/08/2021 Last updated : 07/14/2022 # What is Azure Arc resource bridge (preview)?
-Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/overview) and VMware. The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster that requires no user management. This virtual appliance delivers the following benefits:
+Azure Arc resource bridge (preview) is part of the core Azure Arc platform, and is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/overview) and VMware.
-* Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster
-* It is fully supported by Microsoft, including update of core components.
+The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster that requires no user management. This virtual appliance delivers the following benefits:
+
+* Enables VM self-servicing from Azure without having to create and manage a Kubernetes cluster.
+* Fully supported by Microsoft, including updates to core components.
* Designed to recover from software failures. * Supports deployment to any private cloud hosted on Hyper-V or VMware from the Azure portal or using the Azure Command-Line Interface (CLI).
-All management operations are performed from Azure, no local configuration is required on the appliance.
+All management operations are performed from Azure, so no local configuration is required on the appliance.
## Overview
-Azure resource bridge (preview) hosts other components such as Custom Locations, cluster extensions, and other Azure Arc agents in order to deliver the level of functionality with the private cloud infrastructures it supports. This complex system is composed of three layers:
+Azure Arc resource bridge (preview) hosts other components such as [custom locations](../platform/conceptual-custom-locations.md), cluster extensions, and other Azure Arc agents in order to deliver functionality for the private cloud infrastructures it supports. This complex system is composed of three layers:
-* The base layer that represents the resource bridge and the Arc agents
-* The platform layer that includes the Custom Location and Cluster extension
+* The base layer that represents the resource bridge and the Arc agents.
+* The platform layer that includes the custom location and cluster extension.
* The solution layer for each service supported by Arc resource bridge (that is, the different type of VMs). :::image type="content" source="media/overview/architecture-overview.png" alt-text="Azure Arc resource bridge architecture diagram." border="false"::: Azure Arc resource bridge (preview) can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge (preview):
-* Cluster extension: Is the Azure service deployed to run on-premises. For the preview release, it supports two
+* Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports two cluster extensions:
- - Azure Arc-enabled VMware
+ * Azure Arc-enabled VMware
- - Azure Arc-enabled Azure Stack HCI
+ * Azure Arc-enabled Azure Stack HCI
-* Custom Locations: Is a deployment target, where you can create Azure resources. It maps to different resource for different Azure services. For example, for Arc-enabled VMware, the Custom Locations resource maps to an instance of vCenter, and for Arc-enabled Azure Stack HCI, it maps to an HCI cluster instance.
+* Custom locations: A deployment target where you can create Azure resources. It maps to a different resource for each Azure service. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Arc-enabled Azure Stack HCI, it maps to an HCI cluster instance.
-Custom Locations and cluster extension are both Azure resources, they are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes that *create action* to the mapped vCenter or Azure Stack HCI cluster.
+Custom locations and cluster extensions are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, which routes the *create action* to the mapped vCenter or Azure Stack HCI cluster.
-There is a set of resources unique to the infrastructure. For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network and template to create a VM.
+Some resources are unique to the infrastructure. For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network and template to create a VM.
-To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource is not healthy, it can impact the health of the related resources. For example, if the Arc resource bridge (preview) has been deleted by accident, all the resources hosted in the Arc resource bridge (preview) are impacted. That is, the Custom Locations and cluster extensions are deleted as a result. The actual VMs are not impacted, as they are running on vCenter, but the management path to those VMs is interrupted. You won't be able to start/stop the VM from Azure. It is not recommended to manage or modify the Arc resource bridge (preview) using any on-premises applications directly.
+To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource is not healthy, it can impact the health of the related resources. For example, if the Arc resource bridge (preview) has been deleted by accident, all the resources hosted in the Arc resource bridge (preview) are impacted. That is, the custom locations and cluster extensions are deleted as a result. The actual VMs are not impacted, as they are running on vCenter, but the management path to those VMs is interrupted, and you won't be able to start or stop the VM from Azure. It is not recommended to manage or modify the Arc resource bridge (preview) using any on-premises applications directly.
## Benefits of Azure Arc resource bridge (preview)
-Through the Azure Arc resource bridge (preview), you can accomplish the following for each private cloud infrastructure from Azure:
+Through Azure Arc resource bridge (preview), you can accomplish the following for each private cloud infrastructure from Azure:
+
+### VMware vSphere
-* VMware vSphere - By registering resource pools, networks, and VM templates in Azure you can represent a subset of your vCenter resources in Azure to enable self-service. Integration with Azure allows you to not only manage access to your vCenter resources in Azure to maintain a secure environment, but also to perform various operations on the VMware virtual machines that are enabled by Arc-enabled VMware vSphere:
+By registering resource pools, networks, and VM templates, you can represent a subset of your vCenter resources in Azure to enable self-service. Integration with Azure allows you to manage access to your vCenter resources in Azure to maintain a secure environment. You can also perform various operations on the VMware virtual machines that are enabled by Arc-enabled VMware vSphere, as sketched in the example after this list:
-- Start, stop, and restart a virtual machine-- Control access and add Azure tags-- Add, remove, and update network interfaces-- Add, remove, and update disks and update VM size (CPU cores and memory)-- Enable guest management-- Install extensions
+* Start, stop, and restart a virtual machine
+* Control access and add Azure tags
+* Add, remove, and update network interfaces
+* Add, remove, and update disks and update VM size (CPU cores and memory)
+* Enable guest management
+* Install extensions
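+
+A hedged sketch of two of these operations from the Azure CLI, assuming the `connectedvmware` CLI extension (the command names are assumptions; VM and group names are placeholders):
+
+```azurecli
+# Stop, then start, an Arc-enabled VMware VM from Azure.
+az connectedvmware vm stop --resource-group contoso-rg --name contoso-vm
+az connectedvmware vm start --resource-group contoso-rg --name contoso-vm
+```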
-* Azure Stack HCI - You can provision and manage on-premises Windows and Linux virtual machines (VMs) running on Azure Stack HCI clusters.
+### Azure Stack HCI
+
+You can provision and manage on-premises Windows and Linux virtual machines (VMs) running on Azure Stack HCI clusters.
## Prerequisites
If you are deploying on Azure Stack HCI, the x32 Azure CLI installer can be used
Azure Arc resource bridge currently supports the following Azure regions: -- East US--- West Europe
+* East US
+* West Europe
### Regional resiliency
The following private cloud environments and their versions are officially suppo
### Required Azure permissions
-* To onboard the Arc resource bridge, you are a member of the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
-
-* To read, modify, and delete the resource bridge, you are a member of the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role in the resource group.
+* To onboard the Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group.
+* To read, modify, and delete the Arc resource bridge, you must have the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group.
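+
+For example, a minimal sketch of granting that role at the resource group scope (the user and group names are placeholders):
+
+```azurecli
+# Assign the Contributor role on the resource group that will hold the
+# Arc resource bridge resource.
+az role assignment create --assignee user@contoso.com --role "Contributor" --resource-group contoso-rb-rg
+```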
### Networking The Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol.
-If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs listed below are not blocked.
-
-URLS:
-
-| Agent resource | Description |
-|||
-|`https://mcr.microsoft.com`|Microsoft container registry|
-|`https://*.his.arc.azure.com`|Azure Arc Identity service|
-|`https://*.dp.kubernetesconfiguration.azure.com`|Azure Arc configuration service|
-|`https://*.servicebus.windows.net`|Cluster connect|
-|`https://guestnotificationservice.azure.com` |Guest notification service|
-|`https://*.dp.prod.appliances.azure.com`|Resource bridge data plane service|
-|`https://ecpacr.azurecr.io` |Resource bridge container image download |
-|`.blob.core.windows.net`<br> `*.dl.delivery.mp.microsoft.com`<br> `*.do.dsp.mp.microsoft.com` |Resource bridge image download |
+You may need to allow specific URLs to [ensure outbound connectivity is not blocked](troubleshoot-resource-bridge.md#restricted-outbound-connectivity) by your firewall or proxy server.
## Next steps
-To learn more about how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure, see the following [Overview](../vmware-vsphere/overview.md) article.
+* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
+* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md
Title: Azure Arc resource bridge (preview) security overview description: Security information about Azure resource bridge (preview). Previously updated : 11/08/2021 Last updated : 07/14/2022 # Azure Arc resource bridge (preview) security overview
This article describes the security configuration and considerations you should
## Using a managed identity
-By default, an Azure Active Directory system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) is created and assigned to the Azure Arc resource bridge (preview). Azure Arc resource bridge (preview) currently supports only a system-assigned identity. The `clusteridentityoperator` identity initiates the first outbound communication and fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure.
+By default, an Azure Active Directory system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) is created and assigned to the Azure Arc resource bridge (preview). Azure Arc resource bridge currently supports only a system-assigned identity. The `clusteridentityoperator` identity initiates the first outbound communication and fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure.
## Identity and access control Azure Arc resource bridge (preview) is represented as a resource in a resource group inside an Azure subscription. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc resource bridge (preview).
-Users and applications granted [contributor](../../role-based-access-control/built-in-roles.md#contributor) or administrator role access to the resource can make changes to the resource, including deploying or deleting cluster extensions.
+Users and applications that are granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or Administrator role on the resource group can make changes to the resource bridge, including deploying or deleting cluster extensions.
## Data encryption at rest
-The Azure Arc resource bridge stores the resource information in the Cosmos DB, and as described in the [Encryption at rest in Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md) article, all the data is encrypted at rest.
+The Azure Arc resource bridge stores resource information in Azure Cosmos DB. As described in [Encryption at rest in Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md), all the data is encrypted at rest.
## Security audit logs
-The Activity log is a platform log in Azure that provides insight into subscription-level events. This includes such information as when the Azure Arc resource bridge is modified, deleted, or added. You can view the Activity log in the Azure portal or retrieve entries with PowerShell and CLI. See [View the Activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) for details. See [retention of the Activity log](../../azure-monitor/essentials/activity-log.md#retention-period) for details.
+The [activity log](../../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. This includes tracking when the Azure Arc resource bridge is modified, deleted, or added. You can [view the activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) in the Azure portal or retrieve entries with PowerShell and Azure CLI. By default, activity log events are [retained for 90 days](../../azure-monitor/essentials/activity-log.md#retention-period) and then deleted.
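+
+A minimal sketch of pulling recent entries with the Azure CLI (the resource group name is a placeholder):
+
+```azurecli
+# List activity log events for the resource bridge's resource group over
+# the last seven days, showing time, operation, and status.
+az monitor activity-log list --resource-group contoso-rb-rg --offset 7d \
+  --query "[].{time:eventTimestamp, operation:operationName.localizedValue, status:status.value}" --output table
+```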
## Next steps
-Before evaluating or enabling Azure Arc-enabled vSphere or Azure Stack HCI, review the Azure Arc resource bridge (preview) [overview](overview.md) to understand requirements and technical details.
+- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details.
+- Learn more about [Azure Arc](../overview.md).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service. Previously updated : 06/27/2022 Last updated : 07/14/2022
When the appliance is deployed to a host resource pool, there is no high availab
## Networking issues
+### Restricted outbound connectivity
+
+If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs listed below are not blocked.
+
+URLs:
+
+| Agent resource | Description |
+|||
+|`https://mcr.microsoft.com`|Microsoft container registry|
+|`https://*.his.arc.azure.com`|Azure Arc Identity service|
+|`https://*.dp.kubernetesconfiguration.azure.com`|Azure Arc configuration service|
+|`https://*.servicebus.windows.net`|Cluster connect|
+|`https://guestnotificationservice.azure.com` |Guest notification service|
+|`https://*.dp.prod.appliances.azure.com`|Resource bridge data plane service|
+|`https://ecpacr.azurecr.io` |Resource bridge container image download |
+|`.blob.core.windows.net`<br> `*.dl.delivery.mp.microsoft.com`<br> `*.do.dsp.mp.microsoft.com` |Resource bridge image download |
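+
+As a quick hedged check from the deployment machine, you can probe a few of the non-wildcard endpoints over TCP 443 (wildcard entries can't be tested directly; any HTTP status code means the URL is reachable, while a timeout suggests firewall or proxy filtering):
+
+```console
+for url in https://mcr.microsoft.com https://guestnotificationservice.azure.com https://ecpacr.azurecr.io; do
+  curl --silent --output /dev/null --max-time 10 --write-out "%{http_code} $url\n" "$url" || echo "blocked? $url"
+done
+```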
+ ### Azure Arc resource bridge is unreachable Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services.
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Some features aren't supported with geo-replication:
- Clustering is supported if both caches have clustering enabled and have the same number of shards. - Caches in the same Virtual Network (VNet) are supported. - Caches in different VNets are supported with caveats. See [Can I use geo-replication with my caches in a VNet?](#can-i-use-geo-replication-with-my-caches-in-a-vnet) for more information.-- Caches with more than one replica cannot be geo-replicated.
+- Caches with more than one replica can't be geo-replicated.
After geo-replication is configured, the following restrictions apply to your linked cache pair:
After geo-replication is configured, the following restrictions apply to your li
- You can't delete either linked cache, or the resource group that contains them, until you unlink the caches. For more information, see [Why did the operation fail when I tried to delete my linked cache?](#why-did-the-operation-fail-when-i-tried-to-delete-my-linked-cache) - If the caches are in different regions, network egress costs apply to the data moved across regions. For more information, see [How much does it cost to replicate my data across Azure regions?](#how-much-does-it-cost-to-replicate-my-data-across-azure-regions) - Automatic failover doesn't occur between the primary and secondary linked cache. For more information and information on how to failover a client application, see [How does failing over to the secondary linked cache work?](#how-does-failing-over-to-the-secondary-linked-cache-work)
+- Private links can't be added to caches that are already geo-replicated. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a private link. 3. Relink the geo-replication. (A CLI sketch of this sequence follows.)
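+
+A hedged CLI sketch of that sequence (cache, endpoint, and network names are placeholders; assuming the `az redis server-link` and `az network private-endpoint` commands):
+
+```azurecli
+# 1. Unlink the geo-replication (run against the primary cache).
+az redis server-link delete --name contosoCachePrimary --resource-group contoso-rg \
+  --linked-server-name contosoCacheSecondary
+
+# 2. Add a private endpoint to the cache ("redisCache" is the private-link group ID).
+az network private-endpoint create --name contosoCacheEndpoint --resource-group contoso-rg \
+  --vnet-name contoso-vnet --subnet default --group-id redisCache \
+  --private-connection-resource-id $(az redis show --name contosoCachePrimary --resource-group contoso-rg --query id --output tsv) \
+  --connection-name contosoCacheConnection
+
+# 3. Relink the geo-replication.
+az redis server-link create --name contosoCachePrimary --resource-group contoso-rg \
+  --server-to-link contosoCacheSecondary --replication-role Secondary
+```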
## Add a geo-replication link
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
The other options under **Monitoring**, provide other ways to view and use the m
## View metrics charts with Azure Monitor for Azure Cache for Redis
-Use [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources. View metrics in a customizable, unified, and interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md) article.
+Use [Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources. View metrics in a customizable, unified, and interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md) article.
While you can access Azure Monitor features from the Monitor menu in the Azure portal, Azure Monitor features can be accessed directly from the Resource menu for an Azure Cache for Redis resource. For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
For information on creating a metric, see [Create your own metrics](#create-your
## Next steps -- [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-cache-insights-overview.md)
+- [Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md)
- [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md) - [`INFO`](https://redis.io/commands/info)
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
Title: Azure Cache for Redis with Azure Private Link
-description: Azure Private Endpoint is a network interface that connects you privately and securely to Azure Cache for Redis powered by Azure Private Link. In this article, you'll learn how to create an Azure Cache, an Azure Virtual Network, and a Private Endpoint using the Azure portal.
+description: Learn how to create an Azure Cache, an Azure Virtual Network, and a Private Endpoint using the Azure portal.
+
You can restrict public access to the private endpoint of your cache by disablin
> [!IMPORTANT] > Currently, portal console support, and persistence to firewall storage accounts are not supported. >
->
## Create a private endpoint with a new Azure Cache for Redis instance
For more information, see [Azure services DNS zone configuration](../private-lin
### What features aren't supported with private endpoints?
-Trying to connect from the Azure portal console is an unsupported scenario where you'll see a connection failure.
+- Trying to connect from the Azure portal console is an unsupported scenario where you'll see a connection failure.
+- Private links can't be added to caches that are already geo-replicated. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a private link. 3. Relink the geo-replication.
### How do I verify if my private endpoint is configured correctly?
azure-cache-for-redis Cache Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-reserved-pricing.md
The size of reservation should be based on the total amount of memory size that
For example, let's suppose that you're running two caches - one at 13 GB and the other at 26 GB. You'll need both for at least one year. Further, let's suppose that you plan to scale the existing 13-GB caches to 26 GB for a month to meet your seasonal demand, and then scale back. In this case, you can purchase either one P2-cache and one P3-cache or three P2-caches on a one-year reservation to maximize savings. You'll receive discount on the total amount of cache memory you reserve, independent of how that amount is allocated across your caches.
+Reserved capacity is sold in increments of nodes. Each shard contains two nodes by default, so to buy reserved capacity for one shard, you buy reserved capacity for two nodes. For example, a clustered cache with three shards uses six nodes, so you would purchase reserved capacity for six nodes. For the node count calculation, see "View Cost Calculation" on the [Pricing calculator](https://azure.microsoft.com/pricing/calculator/). For an explanation of the architecture of a cache, see [A quick summary of cache architecture](cache-failover.md#a-quick-summary-of-cache-architecture).
+ ## Buy Azure Cache for Redis reserved capacity You can buy a reserved VM instance in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/). Pay for the reservation [up front or with monthly payments](../cost-management-billing/reservations/prepare-buy-reservation.md).
azure-cache-for-redis Redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/redis-cache-insights-overview.md
+
+ Title: Azure Monitor for Azure Cache for Redis | Microsoft Docs
+description: This article describes the Azure Monitor for Azure Redis Cache feature, which provides cache owners with a quick understanding of performance and utilization problems.
+Last updated : 09/10/2020
+# Explore Azure Monitor for Azure Cache for Redis
+
+For all of your Azure Cache for Redis resources, Azure Monitor for Azure Cache for Redis provides a unified, interactive view of:
+
+- Overall performance
+- Failures
+- Capacity
+- Operational health
+
+This article helps you understand the benefits of this new monitoring experience. It also shows how to modify and adapt the experience to fit the unique needs of your organization.
+
+## Introduction
+
+Before starting the experience, you should understand how Azure Monitor for Azure Cache for Redis visually presents information.
+
+It delivers:
+
+- **At scale perspective** of your Azure Cache for Redis resources in a single location across all of your subscriptions. You can selectively scope to only the subscriptions and resources you want to evaluate.
+
+- **Drill-down analysis** of a particular Azure Cache for Redis resource. You can diagnose problems and see detailed analysis of utilization, failures, capacity, and operations. Select any of these categories to see an in-depth view of relevant information.
+
+- **Customization** of this experience, which is built atop Azure Monitor workbook templates. The experience lets you change what metrics are displayed and modify or set thresholds that align with your limits. You can save the changes in a custom workbook and then pin workbook charts to Azure dashboards.
+
+This feature doesn't require you to enable or configure anything. Azure Cache for Redis information is collected by default.
+
+>[!NOTE]
+>There is no charge to access this feature. You're charged only for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
+
+## View utilization and performance metrics for Azure Cache for Redis
+
+To view the utilization and performance of your Azure Cache for Redis resources across all of your subscriptions, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for **Monitor**, and select **Monitor**.
+
+ ![Search box with the word "Monitor" and the Services search result that shows "Monitor" with a speedometer symbol](../azure-monitor/insights/media/cosmosdb-insights-overview/search-monitor.png)
+
+1. Select **Azure Cache for Redis**. If this option isn't present, select **More** > **Azure Cache for Redis**.
+
+### Overview
+
+On **Overview**, the table displays interactive Azure Cache for Redis metrics. You can filter the results based on the options you select from the following drop-down lists:
+
+- **Subscriptions**: Only subscriptions that have an Azure Cache for Redis resource are listed.
+
+- **Azure Cache for Redis**: You can select all, a subset, or a single Azure Cache for Redis resource.
+
+- **Time Range**: By default, the table displays the last four hours of information based on the corresponding selections.
+
+There's a counter tile under the drop-down lists. The tile shows the total number of Azure Cache for Redis resources in the selected subscriptions. Conditional color codes or heat maps for workbook columns report transaction metrics. The deepest color represents the highest value. Lighter colors represent lower values.
+
+Selecting a drop-down list arrow next to one of the Azure Cache for Redis resources reveals a breakdown of the performance metrics at the individual resource level.
+
+![Screenshot of the overview experience](./media/redis-cache-insights-overview/overview.png)
+
+When you select the Azure Cache for Redis resource name highlighted in blue, you see the default **Overview** table for the associated account. It shows these columns:
+
+- **Used Memory**
+- **Used Memory Percentage**
+- **Server Load**
+- **Server Load Timeline**
+- **CPU**
+- **Connected Clients**
+- **Cache Misses**
+- **Errors (Max)**
+
+### Operations
+
+When you select **Operations** at the top of the page, the **Operations** table of the workbook template opens. It shows these columns:
+
+- **Total Operations**
+- **Total Operations Timeline**
+- **Operations Per Second**
+- **Gets**
+- **Sets**
+
+![Screenshot of the operations experience](./media/redis-cache-insights-overview/operations.png)
+
+### Usage
+
+When you select **Usage** at the top of the page, the **Usage** table of the workbook template opens. It shows these columns:
+
+- **Cache Read**
+- **Cache Read Timeline**
+- **Cache Write**
+- **Cache Hits**
+- **Cache Misses**
+
+![Screenshot of the usage experience](./media/redis-cache-insights-overview/usage.png)
+
+### Failures
+
+When you select **Failures** at the top of the page, the **Failures** table of the workbook template opens. It shows these columns:
+
+- **Total Errors**
+- **Failover/Errors**
+- **UnresponsiveClient/Errors**
+- **RDB/Errors**
+- **AOF/Errors**
+- **Export/Errors**
+- **Dataloss/Errors**
+- **Import/Errors**
+
+![Screenshot of failures with a breakdown by HTTP request type](./media/redis-cache-insights-overview/failures.png)
+
+### Metric definitions
+
+For a full list of the metric definitions that form these workbooks, check out the [article on available metrics and reporting intervals](./cache-how-to-monitor.md#create-your-own-metrics).
+
+## View from an Azure Cache for Redis resource
+
+To access Azure Monitor for Azure Cache for Redis directly from an individual resource:
+
+1. In the Azure portal, select Azure Cache for Redis.
+
+2. From the list, choose an individual Azure Cache for Redis resource. In the Monitoring section, choose **Insights**.
+
+ ![Screenshot of Menu options with the words "Insights" highlighted in a red box](./media/redis-cache-insights-overview/insights.png)
+
+These views are also accessible by selecting the resource name of an Azure Cache for Redis resource from the Azure Monitor level workbook.
+
+### Resource-level overview
+
+The **Overview** workbook for an Azure Cache for Redis resource shows several performance metrics that give you access to:
+
+- Interactive performance charts showing the most essential details related to Azure Cache for Redis performance.
+
+- Metrics and status tiles highlighting shard performance, total number of connected clients, and overall latency.
+
+![Screenshot of overview dashboard displaying information on CPU performance, used memory, connected clients, errors, expired keys, and evicted keys](./media/redis-cache-insights-overview/resource-overview.png)
+
+Selecting any of the other tabs for **Performance** or **Operations** opens the respective workbooks.
+
+### Resource-level performance
+
+![Screenshot of resource performance graphs](./media/redis-cache-insights-overview/resource-performance.png)
+
+### Resource-level operations
+
+![Screenshot of resource operations graphs](./media/redis-cache-insights-overview/resource-operations.png)
+
+## Pin, export, and expand
+
+To pin any metric section to an [Azure dashboard](../azure-portal/azure-portal-dashboards.md), select the pushpin symbol in the section's upper right.
+
+![A metric section with the pushpin symbol highlighted](../azure-monitor/insights/media/cosmosdb-insights-overview/pin.png)
+
+To export your data into an Excel format, select the down arrow symbol to the left of the pushpin symbol.
+
+![A highlighted export-workbook symbol](../azure-monitor/insights/media/cosmosdb-insights-overview/export.png)
+
+To expand or collapse all views in a workbook, select the expand symbol to the left of the export symbol.
+
+![A highlighted expand-workbook symbol](../azure-monitor/insights/media/cosmosdb-insights-overview/expand.png)
+
+## Customize Azure Monitor for Azure Cache for Redis
+
+Because this experience is built atop Azure Monitor workbook templates, you can select **Customize** > **Edit** > **Save** to save a copy of your modified version into a custom workbook.
+
+![A command bar with Customize highlighted](../azure-monitor/insights/media/cosmosdb-insights-overview/customize.png)
+
+Workbooks are saved within a resource group in either the **My Reports** section or the **Shared Reports** section. **My Reports** is available only to you. **Shared Reports** is available to everyone with access to the resource group.
+
+After you save a custom workbook, go to the workbook gallery to open it.
+
+![A command bar with Gallery highlighted](../azure-monitor/insights/media/cosmosdb-insights-overview/gallery.png)
+
+## Troubleshooting
+
+For troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+
+## Next steps
+
+* Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerts that aid in detecting problems.
+
+* Learn the scenarios that workbooks support, how to author or customize reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
The following application settings can be included in the **`Values`** array whe
|--|--|--| |**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azurite Emulator](../storage/common/storage-use-azurite.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.| |**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson). |
-|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.|
+|**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`dotnet-isolated`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.|
| **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` |Indicates to use PowerShell 7 when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. When running in Azure, the PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). | ## Next steps
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
recommendations: false Previously updated : 06/02/2022 Last updated : 07/14/2022 # Azure guidance for secure isolation
The Azure Management Console and Management Plane follow strict security archite
- **Management Console (MC)** – The MC in Azure Cloud is composed of the Azure portal GUI and the Azure Resource Manager API layers. They both use user credentials to authenticate and authorize all operations. - **Management Plane (MP)** – This layer performs the actual management actions and is composed of the Compute Resource Provider (CRP), Fabric Controller (FC), Fabric Agent (FA), and the underlying Hypervisor, which has its own Hypervisor Agent to service communication. These layers all use system contexts that are granted the least permissions needed to perform their operations.
-The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs constitute a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes – separate FCs exist to manage compute and storage clusters. If you update your application's configuration file while running in the MC, the MC communicates through CRP with the FC and the FC communicates with the FA.
+The Azure FC allocates infrastructure resources to tenants and manages unidirectional communications from the Host OS to Guest VMs. The VM placement algorithm of the Azure FC is highly sophisticated and nearly impossible to predict. The FA resides in the Host OS and it manages tenant VMs. The collection of the Azure Hypervisor, Host OS and FA, and customer VMs constitute a compute node, as shown in Figure 4. FCs manage FAs although FCs exist outside of compute nodes – separate FCs exist to manage compute and storage clusters. If you update your application's configuration file while running in the MC, the MC communicates through CRP with the FC, and the FC communicates with the FA.
CRP is the front-end service for Azure Compute, exposing consistent compute APIs through Azure Resource Manager, thereby enabling you to create and manage virtual machine resources and extensions via simple templates.
The Target of Evaluation (TOE) was composed of Microsoft Windows Server, Microso
- **Security Management** – Windows includes several functions to manage security policies. Access to administrative functions is enforced through administrative roles. Windows also has the ability to support the separation of management and operational networks and to prohibit data sharing between Guest VMs. - **Protection of the TOE Security Functions (TSF)** – Windows implements various self-protection mechanisms to ensure that it can't be used as a platform to gain unauthorized access to data stored on a Guest VM, that the integrity of both the TSF and its Guest VMs is maintained, and that Guest VMs are accessed solely through well-documented interfaces. - **TOE Access** – In the context of this evaluation, Windows allows an authorized administrator to configure the system to display a logon banner before the logon dialog.-- **Trusted Path/Channels** – Windows implements IPsec, TLS, and HTTPS trusted channels and paths for the purpose of remote administration, transfer of audit data to the operational environment, and separation of management and operational networks.
+- **Trusted Path/Channels** – Windows implements IPsec, TLS, and HTTPS trusted channels and paths for remote administration, transfer of audit data to the operational environment, and separation of management and operational networks.
More information is available from the [third-party certification report](https://www.niap-ccevs.org/MMO/Product/st_vid11087-vr.pdf).
For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Di
Customer-managed keys (CMK) enable you to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over your encryption keys. You can grant access to managed disks in your Azure Key Vault so that your keys can be used for encrypting and decrypting the DEK. You can also disable your keys or revoke access to managed disks at any time. Finally, you have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing your encryption keys. ##### *Encryption at host*
-Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). As mentioned previously, [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) for VM and VMSS isn't supported by Managed HSM. However, encryption at host with CMK is supported by Managed HSM.
+Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). As mentioned previously, [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) for virtual machines and virtual machine scale sets isn't supported by Managed HSM. However, encryption at host with CMK is supported by Managed HSM.
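As an illustrative sketch only (not part of the source article), enabling encryption at host when creating a VM with the Azure CLI might look like the following. The resource names are hypothetical, and the `EncryptionAtHost` feature must already be registered on the subscription with a VM size that supports it.

```azurecli
# Hypothetical names; requires the EncryptionAtHost feature and a supported VM size.
az vm create \
  --resource-group MyResourceGroup \
  --name MyEncryptedVm \
  --image UbuntuLTS \
  --encryption-at-host true
```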
You're [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common concern upon data deletion or subscription termination is whether another customer or Azure administrator can access your deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 06/02/2022 Last updated : 07/14/2022 # Compare Azure Government and global Azure
Table below lists API endpoints in Azure vs. Azure Government for accessing and
|||docs.loganalytics.io|docs.loganalytics.us|| |||adx.monitor.azure.com|adx.monitor.azure.us|[Data Explorer queries](/azure/data-explorer/query-monitor-data)| ||Azure Resource Manager|management.azure.com|management.usgovcloudapi.net||
+||Cost Management|consumption.azure.com|consumption.azure.us||
||Gallery URL|gallery.azure.com|gallery.azure.us|| ||Microsoft Azure portal|portal.azure.com|portal.azure.us|| ||Microsoft Intune|enterpriseregistration.windows.net|enterpriseregistration.microsoftonline.us|Enterprise registration|
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
recommendations: false Previously updated : 03/07/2022 Last updated : 07/14/2022 # Isolation guidelines for Impact Level 5 workloads
You need to address two key areas for Azure services in IL5 scope: compute isola
### Compute isolation
-IL5 separation requirements are stated in Section 5.2.2.3 (Page 51) of the [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/). The SRG focuses on compute separation during "processing" of IL5 data. This separation ensures that a virtual machine that could potentially compromise the physical host can't affect a DoD workload. To remove the risk of runtime attacks and ensure long running workloads aren't compromised from other workloads on the same host, **all IL5 virtual machines and virtual machine scale sets** should be isolated via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) or [isolated virtual machines](../virtual-machines/isolation.md). Doing so provides a dedicated physical server to host your Azure Virtual Machines (VMs) for Windows and Linux.
+IL5 separation requirements are stated in Section 5.2.2.3 (Page 51) of the [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/). The SRG focuses on compute separation during "processing" of IL5 data. This separation ensures that a virtual machine that could potentially compromise the physical host can't affect a DoD workload. To remove the risk of runtime attacks and ensure long running workloads aren't compromised from other workloads on the same host, **all IL5 virtual machines and virtual machine scale sets** should be isolated by DoD mission owners via [Azure Dedicated Host](https://azure.microsoft.com/services/virtual-machines/dedicated-host/) or [isolated virtual machines](../virtual-machines/isolation.md). Doing so provides a dedicated physical server to host your Azure Virtual Machines (VMs) for Windows and Linux.
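For illustration, provisioning a dedicated host with the Azure CLI could look like the following sketch; the group, host, and SKU names are hypothetical and should be chosen for your workload:

```azurecli
# Hypothetical names and SKU; a host group must exist before hosts are added to it.
az vm host group create \
  --resource-group MyResourceGroup \
  --name MyHostGroup \
  --platform-fault-domain-count 1

az vm host create \
  --resource-group MyResourceGroup \
  --host-group MyHostGroup \
  --name MyDedicatedHost \
  --sku DSv3-Type1
```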
For services where the compute processes are obfuscated from access by the owner and stateless in their processing of data, you should accomplish isolation by focusing on the data being processed and how it's stored and retained. This approach ensures the data is stored in protected mediums. It also ensures the data isn't present on these services for extended periods unless it's encrypted as needed. ### Storage isolation
-The DoD requirements for encrypting data at rest are provided in Section 5.11 (Page 122) of the [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/). DoD emphasizes encrypting all data at rest stored in virtual machine virtual hard drives, mass storage facilities at the block or file level, and database records where the mission owner does not have sole control over the database service. For cloud applications where encrypting data at rest with DoD key control is not possible, mission owners must perform a risk analysis with relevant data owners before transmitting data into a cloud service offering.
+The DoD requirements for encrypting data at rest are provided in Section 5.11 (Page 122) of the [Cloud Computing SRG](https://public.cyber.mil/dccs/dccs-documents/). DoD emphasizes encrypting all data at rest stored in virtual machine virtual hard drives, mass storage facilities at the block or file level, and database records where the mission owner doesn't have sole control over the database service. For cloud applications where encrypting data at rest with DoD key control isn't possible, mission owners must perform a risk analysis with relevant data owners before transmitting data into a cloud service offering.
In a recent PA for Azure Government, DISA approved logical separation of IL5 from other data via cryptographic means. In Azure, this approach involves data encryption via keys that are maintained in Azure Key Vault and stored in [FIPS 140 validated](/azure/compliance/offerings/offering-fips-140-2) Hardware Security Modules (HSMs). The keys are owned and managed by the IL5 system owner (also known as customer-managed keys).
For Containers services availability in Azure Government, see [Products availabl
### [Container Registry](../container-registry/index.yml) -- When you store images and other artifacts in a Container Registry, Azure automatically encrypts the registry content at rest by using service-managed keys. You can supplement the default encryption with an additional encryption layer by [using a key that you create and manage in Azure Key Vault](../container-registry/container-registry-customer-managed-keys.md).
+- When you store images and other artifacts in a Container Registry, Azure automatically encrypts the registry content at rest by using service-managed keys. You can supplement the default encryption with an extra encryption layer by [using a key that you create and manage in Azure Key Vault](../container-registry/container-registry-customer-managed-keys.md).
## Databases
For Management and governance services availability in Azure Government, see [Pr
Log Analytics, which is a feature of Azure Monitor, is intended to be used for monitoring the health and status of services and infrastructure. The monitoring data and logs primarily store [logs and metrics](../azure-monitor/logs/data-security.md#data-retention) that are service generated. When used in this primary capacity, Log Analytics supports Impact Level 5 workloads in Azure Government with no extra configuration required.
-Log Analytics may also be used to ingest additional customer-provided logs. These logs may include data ingested as part of operating Microsoft Defender for Cloud or Microsoft Sentinel. If the ingested logs or the queries written against these logs are categorized as IL5 data, then you should configure customer-managed keys (CMK) for your Log Analytics workspaces and Application Insights components. Once configured, any data sent to your workspaces or components is encrypted with your Azure Key Vault key. For more information, see [Azure Monitor customer-managed keys](../azure-monitor/logs/customer-managed-keys.md).
+Log Analytics may also be used to ingest extra customer-provided logs. These logs may include data ingested as part of operating Microsoft Defender for Cloud or Microsoft Sentinel. If the ingested logs or the queries written against these logs are categorized as IL5 data, then you should configure customer-managed keys (CMK) for your Log Analytics workspaces and Application Insights components. Once configured, any data sent to your workspaces or components is encrypted with your Azure Key Vault key. For more information, see [Azure Monitor customer-managed keys](../azure-monitor/logs/customer-managed-keys.md).
### [Azure Site Recovery](../site-recovery/index.yml)
Log Analytics may also be used to ingest additional customer-provided logs. Thes
### [Microsoft Intune](/mem/intune/fundamentals/) -- Intune supports Impact Level 5 workloads in Azure Government with no extra configuration required. Line-of-business apps should be evaluated for IL5 restrictions prior to [uploading to Intune storage](/mem/intune/apps/apps-add). While Intune does encrypt applications that are uploaded to the service for distribution, it does not support customer-managed keys.
+- Intune supports Impact Level 5 workloads in Azure Government with no extra configuration required. Line-of-business apps should be evaluated for IL5 restrictions prior to [uploading to Intune storage](/mem/intune/apps/apps-add). While Intune does encrypt applications that are uploaded to the service for distribution, it doesn't support customer-managed keys.
## Migration
To implement Impact Level 5 compliant controls on an Azure Storage account that
For more information about how to enable this Azure Storage encryption feature, see [Configure encryption with customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md). > [!NOTE]
-> When you use this encryption method, you need to enable it before you add content to the storage account. Any content that's added earlier won't be encrypted with the selected key. It will be encrypted only via the standard encryption at rest provided by Azure Storage that uses Microsoft-managed keys.
+> When you use this encryption method, you need to enable it before you add content to the storage account. Any content that's added before the customer-managed key is configured will be protected with Microsoft-managed keys.
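As a non-authoritative sketch, pointing a storage account at a customer-managed key in Azure Key Vault with the Azure CLI could look like the following; the account, vault, and key names are hypothetical, and the vault and key are assumed to already exist with the account granted access:

```azurecli
# Hypothetical names; assumes the vault and key exist and access is already granted.
az storage account update \
  --resource-group MyResourceGroup \
  --name mystorageaccount \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://mykeyvault.vault.azure.net \
  --encryption-key-name mykey
```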
### [StorSimple](../storsimple/index.yml)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The following tables list the operating systems that are supported by the Azure
| Oracle Linux 7 | X | X | | X | | Oracle Linux 6 | | X | | | | Oracle Linux 6.4+ | | X | | X |
-| Red Hat Enterprise Linux Server 8.5, 8.6 | X | | | |
+| Red Hat Enterprise Linux Server 8.5, 8.6 | X | X | | |
| Red Hat Enterprise Linux Server 8, 8.1, 8.2, 8.3, 8.4 | X <sup>3</sup> | X | X | | | Red Hat Enterprise Linux Server 7 | X | X | X | X | | Red Hat Enterprise Linux Server 6 | | X | X | |
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier
#### Snippet configuration options
-All configuration options have been move towards the end of the script. This placement avoids accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but also it would disable the reporting of the failure.
+All configuration options have been moved towards the end of the script. This placement avoids accidentally introducing JavaScript errors that would not only cause the SDK to fail to load but would also disable reporting of the failure.
Each configuration option is shown above on a new line. If you don't wish to override the default value of an item listed as [optional], you can remove that line to minimize the size of your returned page.
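The shape of the idea is sketched below; this is a trimmed illustration rather than the full snippet, with the loader logic elided and a placeholder connection string:

```html
<script type="text/javascript">
!function(T,l,y){/* ...SDK loader logic elided... */}(window,document,{
  src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js",
  // [optional] configuration lines sit at the end of the script, after the loader
  cfg: {
    connectionString: "InstrumentationKey=00000000-0000-0000-0000-000000000000"
  }
});
</script>
```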
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
# Application Insights SDK support guidance
-Microsoft announces feature deprecations or breaking changes at least three years in advance and strives to provide a seamless process for migration to the replacement experience.
+Microsoft announces feature deprecations or breaking changes at least one year in advance and strives to provide a seamless process for migration to the replacement experience.
-The [Microsoft Azure SDK lifecycle policy](/lifecycle/faq/azure) is followed when features are enhanced in a new SDK or before an SDK is designated as legacy. Microsoft strives to retain legacy SDK functionality, but newer features may not be available with older versions.
+For more information, review the [Azure SDK Lifecycle and Support Policy](https://azure.github.io/azure-sdk/policies_support.html).
> [!NOTE] > Diagnostic tools often provide better insight into the root cause of a problem when the latest stable SDK version is used.
Support engineers are expected to provide SDK update guidance according to the f
|Current SDK version in use |Alternative version available |Update policy for support | ||||
-|Stable and less than one year old | Newer supported stable version | **UPDATE RECOMMENDED** |
-|Stable and more than one year old | Newer supported stable version | **UPDATE REQUIRED** |
-|Unsupported ([support policy](/lifecycle/faq/azure)) | Any supported version | **UPDATE REQUIRED** |
+|Latest stable minor version of a GA SDK | Newer supported stable version | **UPDATE REQUIRED** |
+|Unsupported ([support policy](/lifecycle/faq/azure)) | Any supported version | **UPDATE REQUIRED** |
|Preview | Stable version | **UPDATE REQUIRED** | |Preview | Older stable version | **UPDATE RECOMMENDED** | |Preview | Newer preview version, no older stable version | **UPDATE RECOMMENDED** |
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
description: You can use the Azure Monitor HTTP Data Collector API to add POST J
Previously updated : 10/20/2021 Last updated : 07/14/2022
The data posted to the Azure Monitor Data collection API is subject to certain c
* Maximum of 32 KB for field values. If the field value is greater than 32 KB, the data will be truncated. * Recommended maximum of 50 fields for a given type. This is a practical limit from a usability and search experience perspective. * Tables in Log Analytics workspaces support only up to 500 columns (referred to as fields in this article).
-* Maximum of 50 characters for column names.
+* Maximum of 45 characters for column names.
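For example, a single posted record that stays well within these limits might look like the following; the field names are illustrative only:

```json
[
  {
    "Computer": "web-01",
    "ResponseTimeMs": 245,
    "StatusText": "OK"
  }
]
```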
## Return codes The HTTP status code 200 means that the request has been received for processing. This indicates that the operation finished successfully.
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
The table below lists the available curated visualizations and more detailed inf
|:--|:--|:--|:--| | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | GA (General availability) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, to troubleshoot sign-in failures, and to identify legacy authentications. | | [Azure Backup](../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
-| [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health |
+| [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health |
| [Azure Cosmos DB Insights](./insights/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. | | [Azure Data Explorer insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/blockchainMembers | [**Yes**](./essentials/metrics-supported.md#microsoftblockchainblockchainmembers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftblockchainblockchainmembers) | | | | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/cordaMembers | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftblockchaincordamembers) | | | | [Azure Bot Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbotservicebotservices) | | |
- | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredis) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcacheredis) | [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | |
- | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](./insights/redis-cache-insights-overview.md) | |
+ | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredis) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcacheredis) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
+ | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
| [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | | | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | | | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles/endpoints | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofilesendpoints) | | |
azure-monitor Workbook Templates Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbook-templates-move-region.md
Title: Azure Monitor Workbook Templates - Move Regions
+ Title: Move an Azure Workbook template to another region
description: How to move a workbook template to a different region
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
Last updated 05/30/2022
-# Creating an Azure Workbook
+# Create an Azure Workbook
This article describes how to create a new workbook and how to add elements to your Azure Workbook. This video walks you through creating workbooks.
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
Last updated 05/30/2022
-# Getting started with Azure Workbooks
+# Get started with Azure Workbooks
This article describes how to access Azure Workbooks and the common tasks used to work with Workbooks.
azure-monitor Workbooks Jsonpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-jsonpath.md
Last updated 07/05/2022
-# How to use JSONPath to transform JSON data in workbooks
+# Use JSONPath to transform JSON data in workbooks
Workbooks can query data from many sources. Some endpoints, such as [Azure Resource Manager](../../azure-resource-manager/management/overview.md) or a custom endpoint, can return results in JSON. If the JSON returned by the queried endpoint isn't in the format you want, you can use JSONPath to transform the results.
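As a small illustration with made-up data, suppose a custom endpoint returns the following JSON. A JSONPath table path of `$.values[*]` would surface each array entry as a row, with `name` and `count` as columns:

```json
{
  "id": "sample-response",
  "values": [
    { "name": "first", "count": 10 },
    { "name": "second", "count": 25 }
  ]
}
```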
azure-monitor Workbooks Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-text.md
Textbox parameters provide a simple way to collect text input from workbook user
A common use of textboxes is as internal variables used by other workbook controls. You do this by using a query for default values and making the input control invisible in read mode. For example, a user may want a threshold to come from a formula (not a user) and then use the threshold in subsequent queries.
-## Creating a text parameter
+## Create a text parameter
1. Start with an empty workbook in edit mode. 2. Choose _Add parameters_ from the links within the workbook. 3. Select the blue _Add Parameter_ button.
Text parameter supports the following field style:
:::image type="content" source="./media/workbooks-text/kql-text.png" alt-text="Screenshot showing multiline text field.":::
-## Referencing a text parameter
+## Reference a text parameter
1. Add a query control to the workbook by selecting the blue `Add query` link and select an Application Insights resource. 2. In the KQL box, add this snippet: ```kusto
Text parameter supports the following field style:
> [!NOTE] > In the example above, `{SlowRequestThreshold}` represents an integer value. If you were querying for a string like `{ComputerName}`, you would need to modify your Kusto query to add quotes, `"{ComputerName}"`, in order for the parameter field to accept input without quotes.
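A short sketch of both cases, assuming the Application Insights `requests` table and hypothetical parameter names:

```kusto
requests
| where duration > {SlowRequestThreshold}      // numeric parameter: no quotes needed
| where cloud_RoleInstance == "{ComputerName}" // string parameter: wrap in quotes
```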
-## Setting default values using queries
+## Set the default values using queries
1. Start with an empty workbook in edit mode. 2. Choose _Add parameters_ from the links within the workbook. 3. Select the blue _Add Parameter_ button.
Text parameter supports the following field style:
> [!NOTE] > While this example queries Application Insights data, the approach can be used for any log-based data source - Log Analytics, Azure Resource Graph, etc.
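For instance, a minimal default-value query (a sketch assuming Application Insights data) could derive the threshold from the data itself rather than from user input:

```kusto
// Use the 95th-percentile request duration as the parameter's default value.
requests
| summarize SlowRequestThreshold = toint(percentile(duration, 95))
```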
-## Adding validations
+## Add validations
For standard and password text parameters, users can add validation rules that are applied to the text field. Add a valid regex with an error message. If the message is set, it's shown as an error when the field is invalid. For example, the regex `^[0-9]+$` with the message "Enter a whole number." rejects any non-numeric input.
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
+
+ Title: Use MSBuild to convert Bicep to JSON
+description: Use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON.
Last updated : 07/14/2022++++
+# Customer intent: As a developer I want to convert Bicep files to Azure Resource Manager template (ARM template) JSON in an MSBuild pipeline.
++
+# Quickstart: Use MSBuild to convert Bicep to JSON
+
+This article describes how to use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON. The examples use MSBuild from the command line with C# project files that convert Bicep to JSON. The project files are examples that can be used in an MSBuild continuous integration (CI) pipeline.
+
+## Prerequisites
+
+You'll need the latest versions of the following software:
+
+- [Visual Studio](/visualstudio/install/install-visual-studio). The free community version will install .NET 6.0, .NET Core 3.1, .NET SDK, MSBuild, .NET Framework 4.8, NuGet package manager, and C# compiler. From the installer, select **Workloads** > **.NET desktop development**.
+- [Visual Studio Code](https://code.visualstudio.com/) with the extensions for [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
+- [PowerShell](/powershell/scripting/install/installing-powershell) or a command-line shell for your operating system.
+
+## MSBuild tasks and CLI packages
+
+If your existing continuous integration (CI) pipeline relies on [MSBuild](/visualstudio/msbuild/msbuild), you can use MSBuild tasks and CLI packages to convert Bicep files into ARM template JSON.
+
+The functionality relies on the following NuGet packages. The latest NuGet package versions match the latest Bicep version.
+
+| Package Name | Description |
+| - |- |
+| [Azure.Bicep.MSBuild](https://www.nuget.org/packages/Azure.Bicep.MSBuild) | Cross-platform MSBuild task that invokes the Bicep CLI and compiles Bicep files into ARM template JSON. |
+| [Azure.Bicep.CommandLine.win-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.win-x64) | Bicep CLI for Windows. |
+| [Azure.Bicep.CommandLine.linux-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.linux-x64) | Bicep CLI for Linux. |
+| [Azure.Bicep.CommandLine.osx-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.osx-x64) | Bicep CLI for macOS. |
+
+### Azure.Bicep.MSBuild package
+
+When referenced in a project file's `PackageReference`, the `Azure.Bicep.MSBuild` package imports the `Bicep` task, which invokes the Bicep CLI and converts its output into MSBuild errors, and the `BicepCompile` target, which simplifies the `Bicep` task's usage. By default, the `BicepCompile` target runs after the `Build` target, compiles all `@(Bicep)` items, and places the output in `$(OutputPath)` with the same file name and the _.json_ extension.
+
+The following example compiles _one.bicep_ and _two.bicep_ files in the same directory as the project file and places the compiled _one.json_ and _two.json_ in the `$(OutputPath)` directory.
+
+```xml
+<ItemGroup>
+ <Bicep Include="one.bicep" />
+ <Bicep Include="two.bicep" />
+</ItemGroup>
+```
+
+You can override the output path per file by using the `OutputFile` metadata on `Bicep` items. The following example recursively finds all _main.bicep_ files and places the compiled _.json_ files under a subdirectory with the same name in `$(OutputPath)`:
+
+```xml
+<ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+</ItemGroup>
+```
+
+More customizations can be performed by setting one of the following properties in your project:
+
+| Property Name | Default Value | Description |
+| - |- | - |
+| `BicepCompileAfterTargets` | `Build` | Used as `AfterTargets` value for the `BicepCompile` target. Change the value to override the scheduling of the `BicepCompile` target in your project. |
+| `BicepCompileDependsOn` | None | Used as `DependsOnTargets` value for the `BicepCompile` target. This property can be set to targets that you want `BicepCompile` target to depend on. |
+| `BicepCompileBeforeTargets` | None | Used as `BeforeTargets` value for the `BicepCompile` target. |
+| `BicepOutputPath` | `$(OutputPath)` | Set this property to override the default output path for the compiled ARM template. `OutputFile` metadata on `Bicep` items takes precedence over this value. |
+
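For example, a minimal sketch that overrides the default output path (the `arm` subfolder name is arbitrary):

```xml
<PropertyGroup>
  <!-- Hypothetical override: emit compiled ARM templates under an 'arm' subfolder. -->
  <BicepOutputPath>$(OutputPath)arm\</BicepOutputPath>
</PropertyGroup>
```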
+The `Azure.Bicep.MSBuild` package requires the `BicepPath` property to be set in order to function. You can set it by referencing the appropriate `Azure.Bicep.CommandLine.*` package for your operating system, or by installing the Bicep CLI manually and setting the `BicepPath` environment variable or MSBuild property.
+
+### Azure.Bicep.CommandLine packages
+
+The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. When referenced in a project file via a `PackageReference`, the `Azure.Bicep.CommandLine.*` packages set the `BicepPath` property to the full path of the Bicep executable for the platform. The reference to this package may be omitted if the Bicep CLI is installed through other means and the `BicepPath` environment variable or MSBuild property is set accordingly.
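If you install the Bicep CLI yourself, a property like the following sketch can point MSBuild at the executable; the path shown is hypothetical:

```xml
<PropertyGroup>
  <!-- Hypothetical install location; adjust to where bicep.exe actually lives. -->
  <BicepPath>C:\tools\bicep\bicep.exe</BicepPath>
</PropertyGroup>
```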
+
+### SDK-based examples
+
+The following examples contain a default Console App SDK-based C# project file that was modified to convert Bicep files into ARM templates. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+
+The .NET Core 3.1 and .NET 6 examples are similar. But .NET 6 uses a different format for the _Program.cs_ file. For more information, see [.NET 6 C# console app template generates top-level statements](/dotnet/core/tutorials/top-level-templates).
+
+### .NET 6
+
+In this example, the `RootNamespace` property contains a placeholder value. When you create a project file, the value matches your project's name.
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <OutputType>Exe</OutputType>
+ <TargetFramework>net6.0</TargetFramework>
+ <RootNamespace>net6-sdk-project-name</RootNamespace>
+ <ImplicitUsings>enable</ImplicitUsings>
+ <Nullable>enable</Nullable>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ </ItemGroup>
+</Project>
+```
+
+### .NET Core 3.1
+
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <OutputType>Exe</OutputType>
+ <TargetFramework>netcoreapp3.1</TargetFramework>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ </ItemGroup>
+</Project>
+```
+
+### NoTargets SDK
+
+The following example contains a project that converts Bicep files into ARM templates using [Microsoft.Build.NoTargets](https://www.nuget.org/packages/Microsoft.Build.NoTargets). This SDK allows creation of standalone projects that compile only Bicep files. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+
+For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files), specify a version like `Microsoft.Build.NoTargets/3.5.6`.
+
+```xml
+<Project Sdk="Microsoft.Build.NoTargets/__LATEST_VERSION__">
+ <PropertyGroup>
+ <TargetFramework>net48</TargetFramework>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <Bicep Include="main.bicep"/>
+ </ItemGroup>
+</Project>
+```
+
+### Classic framework
+
+The following example converts Bicep to JSON inside a classic project file that's not SDK-based. Only use the classic example if the previous examples don't work for you. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+
+In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` properties contain placeholder values. When you create a project file, a unique GUID is created and the name values match your project's name.
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
+ <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" />
+ <PropertyGroup>
+ <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
+ <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
+ <ProjectGuid>{11111111-1111-1111-1111-111111111111}</ProjectGuid>
+ <OutputType>Exe</OutputType>
+ <RootNamespace>ClassicFramework</RootNamespace>
+ <AssemblyName>ClassicFramework</AssemblyName>
+ <TargetFrameworkVersion>v4.8</TargetFrameworkVersion>
+ <FileAlignment>512</FileAlignment>
+ <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
+ <Deterministic>true</Deterministic>
+ </PropertyGroup>
+ <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
+ <PlatformTarget>AnyCPU</PlatformTarget>
+ <DebugSymbols>true</DebugSymbols>
+ <DebugType>full</DebugType>
+ <Optimize>false</Optimize>
+ <OutputPath>bin\Debug\</OutputPath>
+ <DefineConstants>DEBUG;TRACE</DefineConstants>
+ <ErrorReport>prompt</ErrorReport>
+ <WarningLevel>4</WarningLevel>
+ </PropertyGroup>
+ <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
+ <PlatformTarget>AnyCPU</PlatformTarget>
+ <DebugType>pdbonly</DebugType>
+ <Optimize>true</Optimize>
+ <OutputPath>bin\Release\</OutputPath>
+ <DefineConstants>TRACE</DefineConstants>
+ <ErrorReport>prompt</ErrorReport>
+ <WarningLevel>4</WarningLevel>
+ </PropertyGroup>
+ <ItemGroup>
+ <Reference Include="System" />
+ <Reference Include="System.Core" />
+ <Reference Include="System.Xml.Linq" />
+ <Reference Include="System.Data.DataSetExtensions" />
+ <Reference Include="Microsoft.CSharp" />
+ <Reference Include="System.Data" />
+ <Reference Include="System.Net.Http" />
+ <Reference Include="System.Xml" />
+ </ItemGroup>
+ <ItemGroup>
+ <Compile Include="Program.cs" />
+ <Compile Include="Properties\AssemblyInfo.cs" />
+ </ItemGroup>
+ <ItemGroup>
+ <None Include="App.config" />
+ <Bicep Include="main.bicep" />
+ </ItemGroup>
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64">
+ <Version>__LATEST_VERSION__</Version>
+ </PackageReference>
+ <PackageReference Include="Azure.Bicep.MSBuild">
+ <Version>__LATEST_VERSION__</Version>
+ </PackageReference>
+ </ItemGroup>
+ <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
+</Project>
+```
+
+## Convert Bicep to JSON
+
+The following examples show how MSBuild converts a Bicep file to JSON. Follow the instructions to create one of the project files for .NET, .NET Core 3.1, or Classic framework. Then continue to create the Bicep file and run MSBuild.
+
+# [.NET](#tab/dotnet)
+
+Build a project in .NET with the dotnet CLI.
+
+1. Open Visual Studio Code and select **Terminal** > **New Terminal** to start a PowerShell session.
+1. Create a directory named _bicep-msbuild-demo_ and go to the directory. This example uses _C:\bicep-msbuild-demo_.
+
+ ```powershell
+ New-Item -Name .\bicep-msbuild-demo -ItemType Directory
+ Set-Location -Path .\bicep-msbuild-demo
+ ```
+1. Run the `dotnet` command to create a new console app that targets .NET 6.
+
+ ```powershell
+ dotnet new console --framework net6.0
+ ```
+
+ The project file uses the same name as your directory, _bicep-msbuild-demo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
+
+1. Replace the contents of _bicep-msbuild-demo.csproj_ with the [.NET 6](#net-6) or [NoTargets SDK](#notargets-sdk) examples.
+1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+1. Save the file.
+
+# [.NET Core 3.1](#tab/netcore31)
+
+Build a project in .NET Core 3.1 using the dotnet CLI.
+
+1. Open Visual Studio Code and select **Terminal** > **New Terminal** to start a PowerShell session.
+1. Create a directory named _bicep-msbuild-demo_ and go to the directory. This example uses _C:\bicep-msbuild-demo_.
+
+ ```powershell
+ New-Item -Name .\bicep-msbuild-demo -ItemType Directory
+ Set-Location -Path .\bicep-msbuild-demo
+ ```
+1. Run the `dotnet` command to create a new console app that targets .NET Core 3.1.
+
+ ```powershell
+ dotnet new console --framework netcoreapp3.1
+ ```
+
+ The project file is named the same as your directory, _bicep-msbuild-demo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
+
+1. Replace the contents of _bicep-msbuild-demo.csproj_ with the [.NET Core 3.1](#net-core-31) or [NoTargets SDK](#notargets-sdk) examples.
+1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+1. Save the file.
+
+# [Classic framework](#tab/classicframework)
+
+Build a project using the classic framework.
+
+To create the project file and dependencies, use Visual Studio.
+
+1. Open Visual Studio.
+1. Select **Create a new project**.
+1. For the C# language, select **Console App (.NET Framework)** and select **Next**.
+1. Enter a project name. For this example, use _bicep-msbuild-demo_ for the project.
+1. Select **Place solution and project in same directory**.
+1. Select **.NET Framework 4.8**.
+1. Select **Create**.
+
+If you know how to unload a project and reload a project, you can edit _bicep-msbuild-demo.csproj_ in Visual Studio.
+
+Otherwise, edit the project file in Visual Studio Code.
+
+1. Open Visual Studio Code and go to the _bicep-msbuild-demo_ directory.
+1. Replace _bicep-msbuild-demo.csproj_ with the [Classic framework](#classic-framework) code sample.
+1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+1. Save the file.
+++
+### Create Bicep file
+
+You'll need a Bicep file that will be converted to JSON.
+
+1. Use Visual Studio Code and create a new file.
+1. Copy the following sample and save it as _main.bicep_ in the _C:\bicep-msbuild-demo_ directory.
+
+```bicep
+@allowed([
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GRS'
+ 'Standard_GZRS'
+ 'Standard_LRS'
+ 'Standard_RAGRS'
+ 'Standard_RAGZRS'
+ 'Standard_ZRS'
+])
+@description('Storage account type.')
+param storageAccountType string = 'Standard_LRS'
+
+@description('Location for all resources.')
+param location string = resourceGroup().location
+
+var storageAccountName = 'storage${uniqueString(resourceGroup().id)}'
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: storageAccountType
+ }
+ kind: 'StorageV2'
+}
+
+output storageAccountNameOutput string = storageAccount.name
+```
+
+### Run MSBuild
+
+Run MSBuild to convert the Bicep file to JSON.
+
+1. Open a Visual Studio Code terminal session.
+1. In the PowerShell session, go to the _C:\bicep-msbuild-demo_ directory.
+1. Run MSBuild.
+
+ ```powershell
+ MSBuild.exe -restore .\bicep-msbuild-demo.csproj
+ ```
+
+   The `restore` parameter restores the NuGet package dependencies needed to compile the Bicep file during the initial build. The parameter is optional after the initial build.
+
+1. Go to the output directory and open the _main.json_ file, which should look like the following sample.
+
+ MSBuild creates an output directory based on the SDK or framework version:
+
+ - .NET 6: _\bin\Debug\net6.0_
+ - .NET Core 3.1: _\bin\Debug\netcoreapp3.1_
+ - NoTargets SDK: _\bin\Debug\net48_
+ - Classic framework: _\bin\Debug_
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "metadata": {
+ "_generator": {
+ "name": "bicep",
+ "version": "0.8.9.13224",
+ "templateHash": "12345678901234567890"
+ }
+ },
+ "parameters": {
+ "storageAccountType": {
+ "type": "string",
+ "defaultValue": "Standard_LRS",
+ "metadata": {
+ "description": "Storage account type."
+ },
+ "allowedValues": [
+ "Premium_LRS",
+ "Premium_ZRS",
+ "Standard_GRS",
+ "Standard_GZRS",
+ "Standard_LRS",
+ "Standard_RAGRS",
+ "Standard_RAGZRS",
+ "Standard_ZRS"
+ ]
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Location for all resources."
+ }
+ }
+ },
+ "variables": {
+ "storageAccountName": "[format('storage{0}', uniqueString(resourceGroup().id))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-09-01",
+ "name": "[variables('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "[parameters('storageAccountType')]"
+ },
+ "kind": "StorageV2"
+ }
+ ],
+ "outputs": {
+ "storageAccountNameOutput": {
+ "type": "string",
+ "value": "[variables('storageAccountName')]"
+ }
+ }
+ }
+ ```
+
+If you make changes or want to rerun the build, delete the output directory so new files can be created.
+
+## Clean up resources
+
+When you're finished with the files, delete the directory. For this example, delete _C:\bicep-msbuild-demo_.
+
+```powershell
+Remove-Item -Path "C:\bicep-msbuild-demo" -Recurse
+```
+
+## Next steps
+
+- For more information about MSBuild, see [MSBuild reference](/visualstudio/msbuild/msbuild-reference) and [.NET project files](/dotnet/core/project-sdk/overview#project-files).
+- To learn more about MSBuild properties, items, targets, and tasks, see [MSBuild concepts](/visualstudio/msbuild/msbuild-concepts).
+- For more information about the .NET CLI, see [.NET CLI overview](/dotnet/core/tools/).
azure-resource-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/cli-samples.md
Title: Azure CLI samples description: Provides Azure CLI sample scripts to use when working with Azure Managed Applications.-+ Last updated 10/25/2017-+ # Azure CLI Samples for Azure Managed Applications
azure-resource-manager Create Ui Definition Collection Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-ui-definition-collection-functions.md
Title: Create UI definition collection functions description: Describes the functions to use when working with collections, like arrays and objects.-+ Last updated 07/13/2020-+ # CreateUiDefinition collection functions
azure-resource-manager Create Ui Definition Conversion Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-ui-definition-conversion-functions.md
Title: Create UI definition conversion functions description: Describes the functions to use when converting values between data types and encodings.-+ Last updated 07/13/2020-+ # CreateUiDefinition conversion functions
azure-resource-manager Create Ui Definition Date Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-ui-definition-date-functions.md
Title: Create UI definition date functions description: Describes the functions to use when working with date values.-+ Last updated 07/13/2020-+ # CreateUiDefinition date functions
azure-resource-manager Create Ui Definition Logical Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-ui-definition-logical-functions.md
Title: Create UI definition logical functions description: Describes the functions to perform logical operations.-+ Last updated 07/13/2020-+ # CreateUiDefinition logical functions
azure-resource-manager Create Ui Definition Math Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-ui-definition-math-functions.md
Title: Create UI definition math functions description: Describes the functions to use when performing math operations.-+ Last updated 07/13/2020-+ # CreateUiDefinition math functions
azure-resource-manager Create Ui Definition Referencing Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-ui-definition-referencing-functions.md
Title: Create UI definition referencing functions description: Describes the functions to use when constructing UI definitions for Azure portal that reference other objects.-+ Last updated 07/13/2020-+ # CreateUiDefinition referencing functions
azure-resource-manager Create Ui Definition String Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-ui-definition-string-functions.md
Title: Create UI definition string functions description: Describes the string functions to use when constructing UI definitions for Azure Managed Applications-+ Last updated 07/13/2020-+ # CreateUiDefinition string functions
azure-resource-manager Create Uidefinition Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-uidefinition-elements.md
Title: Create UI definition elements description: Describes the elements to use when constructing UI definitions for Azure portal.-+ Last updated 10/27/2020-+ # CreateUiDefinition elements
azure-resource-manager Create Uidefinition Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-uidefinition-functions.md
Title: Create UI definition functions description: Describes the functions to use when constructing UI definitions for Azure Managed Applications-+ Last updated 07/13/2020-+ # CreateUiDefinition functions
azure-resource-manager Create Uidefinition Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/create-uidefinition-overview.md
Title: CreateUiDefinition.json file for portal pane description: Describes how to create user interface definitions for the Azure portal. Used when defining Azure Managed Applications.-+ Last updated 03/26/2021-+ # CreateUiDefinition.json for Azure managed application's create experience
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
Title: Use Azure portal to deploy service catalog app description: Shows consumers of Managed Applications how to deploy a service catalog app through the Azure portal. -+ Last updated 10/04/2018-+ # Quickstart: Deploy service catalog app through Azure portal
azure-resource-manager Existing Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/existing-vnet-integration.md
Title: Deploy to existing virtual network description: Describes how to enable users of your managed application to select an existing virtual network. The virtual network can be outside of the managed application.-+ Last updated 05/11/2020-+
azure-resource-manager Microsoft Common Checkbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-checkbox.md
Title: CheckBox UI element description: Describes the Microsoft.Common.CheckBox UI element for Azure portal. Enables users to check or uncheck an option.-+ Last updated 07/09/2020-+ # Microsoft.Common.CheckBox UI element
azure-resource-manager Microsoft Common Dropdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-dropdown.md
Title: DropDown UI element description: Describes the Microsoft.Common.DropDown UI element for Azure portal. Use to select from available options when deploying a managed application.-+ Last updated 07/14/2020-+
azure-resource-manager Microsoft Common Editablegrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-editablegrid.md
Title: EditableGrid UI element description: Describes the Microsoft.Common.EditableGrid UI element for Azure portal. Enables users to gather tabular input.-+ Last updated 08/24/2020-+ # Microsoft.Common.EditableGrid UI element
azure-resource-manager Microsoft Common Fileupload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-fileupload.md
Title: FileUpload UI element description: Describes the Microsoft.Common.FileUpload UI element for Azure portal. Enables users to upload files when deploying a managed application.-+ Last updated 09/05/2018-+ # Microsoft.Common.FileUpload UI element
azure-resource-manager Microsoft Common Infobox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-infobox.md
Title: InfoBox UI element description: Describes the Microsoft.Common.InfoBox UI element for Azure portal. Use to add text or warnings when deploying managed application.-+ Last updated 06/15/2018-+
azure-resource-manager Microsoft Common Optionsgroup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-optionsgroup.md
Title: OptionsGroup UI element description: Describes the Microsoft.Common.OptionsGroup UI element for Azure portal. Enables users to select from available options when deploying a managed application.-+ Last updated 07/09/2020-+ # Microsoft.Common.OptionsGroup UI element
azure-resource-manager Microsoft Common Passwordbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-passwordbox.md
Title: PasswordBox UI element description: Describes the Microsoft.Common.PasswordBox UI element for Azure portal. Enables users to provide a secret value when deploying managed applications.-+ Last updated 06/27/2018-+ # Microsoft.Common.PasswordBox UI element
azure-resource-manager Microsoft Common Section https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-section.md
Title: Section UI element description: Describes the Microsoft.Common.Section UI element for Azure portal. Use to group elements in the portal for deploying managed applications.-+ Last updated 06/27/2018-+ # Microsoft.Common.Section UI element
azure-resource-manager Microsoft Common Serviceprincipalselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-serviceprincipalselector.md
Title: ServicePrincipalSelector UI element description: Describes the Microsoft.Common.ServicePrincipalSelector UI element for Azure portal. Provides a control to choose an application and a textbox to input a password or certificate thumbprint.-+ Last updated 11/17/2020-+ # Microsoft.Common.ServicePrincipalSelector UI element
azure-resource-manager Microsoft Common Slider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-slider.md
Title: Slider UI element description: Describes the Microsoft.Common.Slider UI element for Azure portal. Enables users to set a value from a range of options.-+ Last updated 07/10/2020-+ # Microsoft.Common.Slider UI element
azure-resource-manager Microsoft Common Tagsbyresource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-tagsbyresource.md
Title: TagsByResource UI element description: Describes the Microsoft.Common.TagsByResource UI element for Azure portal. Use to apply tags to a resource during deployment.-+ Last updated 11/11/2019-+
azure-resource-manager Microsoft Common Textblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-textblock.md
Title: TextBlock UI element description: Describes the Microsoft.Common.TextBlock UI element for Azure portal. Use to add text to the interface.-+ Last updated 06/27/2018-+
azure-resource-manager Microsoft Common Textbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-textbox.md
Title: TextBox UI element description: Describes the Microsoft.Common.TextBox UI element for Azure portal. Use for adding unformatted text.-+ Last updated 03/03/2021-+
azure-resource-manager Microsoft Compute Credentialscombo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-compute-credentialscombo.md
Title: CredentialsCombo UI element description: Describes the Microsoft.Compute.CredentialsCombo UI element for Azure portal.-+ Last updated 09/29/2018-+ # Microsoft.Compute.CredentialsCombo UI element
azure-resource-manager Microsoft Compute Sizeselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-compute-sizeselector.md
Title: SizeSelector UI element description: Describes the Microsoft.Compute.SizeSelector UI element for Azure portal. Use for selecting the size of a virtual machine.-+ Last updated 06/27/2018-+ # Microsoft.Compute.SizeSelector UI element
azure-resource-manager Microsoft Compute Usernametextbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-compute-usernametextbox.md
Title: UserNameTextBox UI element description: Describes the Microsoft.Compute.UserNameTextBox UI element for Azure portal. Enables users to provide Windows or Linux user names.-+ Last updated 06/27/2018-+ # Microsoft.Compute.UserNameTextBox UI element
azure-resource-manager Microsoft Keyvault Keyvaultcertificateselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-keyvault-keyvaultcertificateselector.md
Title: KeyVaultCertificateSelector UI element description: Describes the Microsoft.KeyVault.KeyVaultCertificateSelector UI element for Azure portal.-+ Last updated 10/27/2020-+ # Microsoft.KeyVault.KeyVaultCertificateSelector UI element
azure-resource-manager Microsoft Managedidentity Identityselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-managedidentity-identityselector.md
Title: IdentitySelector UI element description: Describes the Microsoft.ManagedIdentity.IdentitySelector UI element for Azure portal. Use to assign managed identities to a resource.-+ Last updated 02/06/2020-+
azure-resource-manager Microsoft Network Publicipaddresscombo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-network-publicipaddresscombo.md
Title: PublicIpAddressCombo UI element description: Describes the Microsoft.Network.PublicIpAddressCombo UI element for Azure portal.-+ Last updated 06/28/2018-+ # Microsoft.Network.PublicIpAddressCombo UI element
azure-resource-manager Microsoft Network Virtualnetworkcombo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-network-virtualnetworkcombo.md
Title: VirtualNetworkCombo UI element description: Describes the Microsoft.Network.VirtualNetworkCombo UI element for Azure portal.-+ Last updated 06/28/2018-+ # Microsoft.Network.VirtualNetworkCombo UI element
azure-resource-manager Microsoft Solutions Armapicontrol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-solutions-armapicontrol.md
Title: ArmApiControl UI element description: Describes the Microsoft.Solutions.ArmApiControl UI element for Azure portal. Used for calling API operations.-+ Last updated 07/14/2020-+
azure-resource-manager Microsoft Solutions Resourceselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-solutions-resourceselector.md
Title: ResourceSelector UI element description: Describes the Microsoft.Solutions.ResourceSelector UI element for Azure portal. Used for getting a list of existing resources.-+ Last updated 07/13/2020-+
azure-resource-manager Microsoft Storage Multistorageaccountcombo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-storage-multistorageaccountcombo.md
Title: MultiStorageAccountCombo UI element description: Describes the Microsoft.Storage.MultiStorageAccountCombo UI element for Azure portal.-+ Last updated 06/28/2018-+ # Microsoft.Storage.MultiStorageAccountCombo UI element
azure-resource-manager Microsoft Storage Storageaccountselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-storage-storageaccountselector.md
Title: StorageAccountSelector UI element description: Describes the Microsoft.Storage.StorageAccountSelector UI element for Azure portal.-+ Last updated 06/28/2018-+ # Microsoft.Storage.StorageAccountSelector UI element
azure-resource-manager Microsoft Storage Storageblobselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-storage-storageblobselector.md
Title: StorageBlobSelector UI element description: Describes the Microsoft.Storage.StorageBlobSelector UI element for Azure portal.-+ Last updated 10/27/2020-+ # Microsoft.Storage.StorageBlobSelector UI element
azure-resource-manager Monitor Managed Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/monitor-managed-application-portal.md
Title: Use Azure portal to monitor a managed app description: Shows how to use the Azure portal to monitor availability and alerts for a managed application.-+ Last updated 10/04/2018-+ # Monitor a deployed instance of a managed application
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/overview.md
Title: Overview of managed applications description: Describes the concepts for Azure Managed Applications, which provides cloud solutions that are easy for consumers to deploy and operate.-+ Last updated 07/12/2019-+ # Azure managed applications overview
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications
description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 07/06/2022 --++ # Azure Policy built-in definitions for Azure Managed Applications
azure-resource-manager Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/powershell-samples.md
Title: Azure PowerShell samples description: Provides Azure PowerShell sample scripts to use when working with Azure Managed Applications.-+ Last updated 10/27/2017-+ # Azure PowerShell samples
azure-resource-manager Publish Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-portal.md
Title: Publish managed apps through portal description: Shows how to use the Azure portal to create an Azure managed application that is intended for members of your organization.-+ Last updated 11/02/2017-+ # Publish a service catalog application through Azure portal
azure-resource-manager Sample Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/sample-projects.md
Title: Sample projects description: Provides a summary of sample projects that are available for Azure Managed Applications.-+ Last updated 09/04/2019-+ # Sample projects for Azure managed applications
azure-resource-manager Managed Application Define Create Cli Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-define-create-cli-sample.md
Title: Create managed application definition - Azure CLI description: Provides an Azure CLI script sample that publishes a managed application definition to a service catalog and then deploys a managed application definition from the service catalog.-+ ms.devlang: azurecli Last updated 03/07/2022-+
azure-resource-manager Managed Application Powershell Sample Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-powershell-sample-create-definition.md
Title: Create managed application definition - Azure PowerShell description: Provides an Azure PowerShell script sample that creates a managed application definition in the Azure subscription.-+ ms.devlang: powershell Last updated 10/27/2017-+ # Create a managed application definition with PowerShell
azure-resource-manager Managed Application Powershell Sample Get Managed Group Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-powershell-sample-get-managed-group-resize-vm.md
Title: Get managed resource group & resize VMs - Azure PowerShell description: Provides Azure PowerShell sample script that gets a managed resource group for an Azure Managed Application. The script resizes VMs.-+ ms.devlang: powershell Last updated 10/27/2017-+ # Get resources in a managed resource group and resize VMs with PowerShell
azure-resource-manager Managed Application Poweshell Sample Create Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-poweshell-sample-create-application.md
Title: Azure PowerShell script sample - Deploy a managed application description: Provides an Azure PowerShell script sample that deploys a managed application definition to the subscription.-+ ms.devlang: powershell Last updated 10/27/2017-+ # Deploy a managed application for a service catalog with PowerShell
azure-resource-manager Test Createuidefinition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/test-createuidefinition.md
Title: Test the UI definition file description: Describes how to test the user experience for creating your Azure Managed Application through the portal.-+ Last updated 06/04/2021-+ # Test your portal interface for Azure Managed Applications
azure-resource-manager Update Managed Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/update-managed-resources.md
Title: Update managed resources description: Describes how to work on resources in the managed resource group for an Azure managed application.-+ Last updated 10/26/2017-+ # Work with resources in the managed resource group for Azure managed application
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
The Distributed Firewall could be used to filter traffic to VMs. This feature is
[Enable Managed SNAT for Azure VMware Solution Workloads (Preview)](enable-managed-snat-for-workloads.md) [Disable Internet access or enable a default route](disable-internet-access.md)+
+[Enable HCX access over the internet](enable-hcx-access-over-internet.md)
azure-vmware Enable Sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-sql-azure-hybrid-benefit.md
+
+ Title: Enable SQL Azure hybrid benefit for Azure VMware Solution (Preview)
+description: This article shows you how to apply SQL Azure hybrid benefits to your Azure VMware Solution private cloud by configuring a placement policy.
++ Last updated : 06/14/2022++
+# Enable SQL Azure hybrid benefit for Azure VMware Solution (Preview)
+
+In this article, you'll learn how to apply SQL Azure hybrid benefits to an Azure VMware Solution private cloud by configuring a placement policy. The placement policy defines the number of hosts that are running SQL.
+>[!IMPORTANT]
+> SQL Azure hybrid benefits are applied at the host level.
+
+For example, if each host in Azure VMware Solution has 36 cores and you indicate that two hosts run SQL, then the SQL Azure hybrid benefit will apply to 72 cores.
+
+## Configure host-VM placement policy
+1. From your Azure VMware Solution private cloud, select **Azure hybrid benefit**, then **Create host-VM placement policy**.
+ :::image type="content" source="media/sql-azure-hybrid-benefit/azure-hybrid-benefit.png" alt-text="Diagram that shows how to create a new host-VM placement policy.":::
+
+1. Fill in the required fields for creating the placement policy.
+ 1. **Name** – Enter a name that identifies this policy.
+ 2. **Type** – Select the type of policy. This type must be VM-Host affinity only.
+ 3. **Azure hybrid benefit** – Select the checkbox to apply the SQL Azure hybrid benefit.
+ 4. **Cluster** – Select the necessary cluster. The policy applies per cluster only.
+ 5. **Enabled** – Select **Enabled** to apply the policy immediately once it's created.
+
+ :::image type="content" source="media/sql-azure-hybrid-benefit/create-placement-policy.png" alt-text="Diagram that shows how to create a host virtual machine placement policy using the host VM affinity.":::
+3. Select the hosts and VMs that the VM-Host affinity policy will apply to.
+ 1. **Add Hosts** – Select the hosts that will be running SQL.
+ 2. **Add VMs** – Select the VMs that should run on the selected hosts.
+ 3. **Review and Create** the policy. A scripted alternative is sketched after these steps.
+ :::image type="content" source="media/sql-azure-hybrid-benefit/select-policy-host.png" alt-text="Diagram that shows how to create a host virtual machine affinity.":::
+
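If you prefer scripting to the portal flow above, the following is a minimal sketch using the Azure CLI. It assumes the `vmware` CLI extension and its `placement-policy vm-host create` command; the resource names and IDs are placeholders, and there may be an additional flag for the Azure hybrid benefit that isn't shown here, so verify the available parameters with `az vmware placement-policy vm-host create --help`.

```azurecli
# Minimal sketch (the portal is the documented path): create a VM-Host affinity
# policy for the hosts that run SQL. All names and IDs below are placeholders.
az extension add --name vmware

az vmware placement-policy vm-host create \
  --resource-group "MyPrivateCloudRG" \
  --private-cloud "MyPrivateCloud" \
  --cluster-name "Cluster-1" \
  --placement-policy-name "sql-host-affinity" \
  --state Enabled \
  --affinity-type Affinity \
  --vm-members "<vm-resource-id-1>" \
  --host-members "<host-id-1>"
```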
+## Manage placement policies
+
+After creating the placement policy, you can review, manage, or edit the policy through the **Placement policies** menu in the Azure VMware Solution private cloud.
+
+To enable the SQL Azure hybrid benefit on an existing host-VM affinity policy, select the **Azure hybrid benefit** checkbox in the policy's configuration settings.
++
+## Next steps
+[Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/)
+
+[Attach Azure NetApp Files datastores to Azure VMware Solution hosts (Preview)](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md)
+
bastion Bastion Connect Vm Ssh Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md
# Create an SSH connection to a Windows VM using Azure Bastion
-This article shows you how to securely and seamlessly create an RDP connection to your Windows VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also connect to a Windows VM using RDP. For information, see [Create an RDP connection to a Windows VM](bastion-connect-vm-rdp-windows.md).
+This article shows you how to securely and seamlessly create an SSH connection to your Windows VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also connect to a Windows VM using RDP. For information, see [Create an RDP connection to a Windows VM](bastion-connect-vm-rdp-windows.md).
Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see [What is Azure Bastion?](bastion-overview.md).
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
Azure Bastion needs to be able to communicate with certain internal endpoints to
* vault.azure.com * azure.com
-You may use a private DNS zone ending with one of the names listed above (ex: dummy.blob.core.windows.net).
+You may use a private DNS zone ending with one of the names listed above (ex: privatelink.blob.core.windows.net).
Azure Bastion isn't supported with Azure Private DNS Zones in national clouds.
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
The steps in this section apply when connecting to a target VM from a Windows lo
1. Sign in to your target VM via RDP using the following command. You can use either a local username and password, or your Azure AD credentials. To learn more about how to use Azure AD to sign in to your Azure Windows VMs, see [Azure Windows VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md). ```azurecli
- az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-resource-id "<VMResourceId>"
+ az network bastion rdp --name "<BastionName>" --resource-group "<BastionResourceGroupName>" --target-resource-id "<VMResourceId>"
``` 1. Once you sign in to your target VM, the native client on your computer will open up with your VM session. You can now transfer files between your VM and local machine using right-click, then **Copy** and **Paste**.
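For example, a hypothetical invocation with placeholder values (the resource names and subscription ID below are made up for illustration) might look like this:

```azurecli
# Placeholder values; the Bastion host and the target VM can live in different resource groups.
az network bastion rdp \
  --name "MyBastionHost" \
  --resource-group "MyBastionRG" \
  --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/MyVmRG/providers/Microsoft.Compute/virtualMachines/MyVM"
```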
cognitive-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
In this tutorial, you'll learn how to:
## Prerequisites * An [Azure subscription](https://azure.microsoft.com/free/cognitive-services) * [Microsoft Power BI Desktop](https://powerbi.microsoft.com/get-started/), available for free.
-* An excel file (.xlsx) containing time series data points. The example data for this quickstart can be found on [GitHub](https://github.com/Azure-Samples/AnomalyDetector/blob/master/sampledata/example-data.xlsx)
+* An Excel file (.xlsx) containing time series data points.
* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector" title="Create an Anomaly Detector resource" target="_blank">create an Anomaly Detector resource </a> in the Azure portal to get your key and endpoint. * You will need the key and endpoint from the resource you create to connect your application to the Anomaly Detector API. You'll do this later in the quickstart.
cognitive-services Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/concepts/utterances.md
Each intent needs to have example utterances - at least 15. If you have an inten
## Add small groups of utterances
-Each time you [iterate on your model](https://microsoft-my.sharepoint.com/personal/v-babdullah_microsoft_com/Documents/Documents/work/LUIS%20Documentation/Application%20Design%20concepts.docx) to improve it, don't add large quantities of utterances. Consider adding utterances in quantities of 15. Then [Train](/azure/cognitive-services/luis/luis-how-to-train), [publish](/azure/cognitive-services/luis/luis-how-to-publish-app), and [test](/azure/cognitive-services/luis/luis-interactive-test) again.
+Each time you iterate on your model to improve it, don't add large quantities of utterances. Consider adding utterances in quantities of 15. Then [Train](/azure/cognitive-services/luis/luis-how-to-train), [publish](/azure/cognitive-services/luis/luis-how-to-publish-app), and [test](/azure/cognitive-services/luis/luis-interactive-test) again.
LUIS builds effective models with utterances that are carefully selected by the LUIS model author. Adding too many utterances isn't valuable because it introduces confusion.
After the app is published, only add utterances from active learning in the deve
## Next steps * [Intents](intents.md)
-* [Patterns and features concepts](patterns-features.md)
+* [Patterns and features concepts](patterns-features.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
To improve accuracy, customization is available for some languages and base mode
| Sinhala (Sri Lanka) | `si-LK` | | Slovak (Slovakia) | `sk-SK` | | Slovenian (Slovenia) | `sl-SI` |
+| Somali (Somalia) | `so-SO` |
| Spanish (Argentina) | `es-AR` | | Spanish (Bolivia) | `es-BO` | | Spanish (Chile) | `es-CL` |
To improve accuracy, customization is available for some languages and base mode
| Ukrainian (Ukraine) | `uk-UA` | | Uzbek (Uzbekistan) | `uz-UZ` | | Vietnamese (Vietnam) | `vi-VN` |
+| Welsh (United Kingdom) | `cy-GB` |
| Zulu (South Africa) | `zu-ZA` | ### [Plain text](#tab/plaintext)
The following neural voices are in public preview.
| Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunfengNeural` <sup>New</sup> | General, multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunhaoNeural` <sup>New</sup> | Optimized for promoting a product or service, 1 new multiple style available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | Chinese (Mandarin, Simplified) | `zh-CN` | Male | `zh-CN-YunjianNeural` <sup>New</sup> | Optimized for broadcasting sports event, 2 new multiple styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
-| Chinese (Mandarin, Simplified) | `zh-CN-LN` | Female | `zh-CN-LN-XiaobeiNeural` <sup>New</sup> | General, Liaoning accent |
-| Chinese (Mandarin, Simplified) | `zh-CN-SC` | Male | `zh-CN-SC-YunxiSichuanNeural` <sup>New</sup> | General, Sichuan accent |
+| Chinese (Mandarin, Simplified) | `zh-CN-liaoning` | Female | `zh-CN-liaoning-XiaobeiNeural` <sup>New</sup> | General, Liaoning accent |
+| Chinese (Mandarin, Simplified) | `zh-CN-sichuan` | Male | `zh-CN-sichuan-YunxiSichuanNeural` <sup>New</sup> | General, Sichuan accent |
| English (United States) | `en-US` | Female | `en-US-JaneNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Female | `en-US-NancyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
cognitive-services Rest Speech To Text V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-v3-1.md
Use the REST API v3.1 to:
```json "features": { …
- "supportsAdaptationsWith": [
- "Acoustic",
- "Language",
- "LanguageMarkdown",
+ "supportsAdaptationsWith": [
+ "Acoustic",
+ "Language",
+ "LanguageMarkdown",
"Pronunciation" ] }
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The `say-as` element is optional. It indicates the content type, such as number
| `format` | Provides additional information about the precise formatting of the element's text for content types that might have ambiguous formats. SSML defines formats for content types that use them. See the following table. | Optional | | `detail` | Indicates the level of detail to be spoken. For example, this attribute might request that the speech synthesis engine pronounce punctuation marks. There are no standard values defined for `detail`. | Optional |
-The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if `interpret-as` is set to date and time.
+The following content types are supported for the `interpret-as` and `format` attributes. Include the `format` attribute only if the `format` column is not empty in the table below.
| interpret-as | format | Interpretation | |--|--|-|
The following content types are supported for the `interpret-as` and `format` at
| `ordinal` | | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option." | | `telephone` | | The text is spoken as a telephone number. The `format` attribute can contain digits that represent a country code. Examples are "1" for the United States or "39" for Italy. The speech synthesis engine can use this information to guide its pronunciation of a phone number. The phone number might also include the country code, and if so, takes precedence over the country code in the `format` attribute. The speech synthesis engine pronounces:<br /><br />`The number is <say-as interpret-as="telephone" format="1">(888) 555-1212</say-as>`<br /><br />As "My number is area code eight eight eight five five five one two one two." | | `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
+| `duration` | hms, hm, ms | The text is spoken as a duration. The `format` attribute specifies the duration's format (*h=hour, m=minute, and s=second*). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="duration">01:18:30</say-as>`<br /><br /> As "one hour eighteen minutes and thirty seconds".<br />Pronounces:<br /><br />`<say-as interpret-as="duration" format="ms">01:18</say-as>`<br /><br /> As "one minute and eighteen seconds".<br />This tag is only supported in English and Spanish.|
| `name` | | The text is spoken as a person's name. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="name">ED</say-as>`<br /><br />As [æd]. <br />In Chinese names, some characters are pronounced differently when they appear in a family name. For example, the speech synthesis engine says 仇 in <br /><br />`<say-as interpret-as="name">仇先生</say-as>`<br /><br /> As [qiú] instead of [chóu]. |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
Previously updated : 05/24/2022 Last updated : 07/13/2022 recommendations: false
The following document file types are supported by Document Translation:
|Adobe PDF|pdf|Portable document file format.| |Comma-Separated Values |csv| A comma-delimited raw-data file used by spreadsheet programs.| |HTML|html, htm|Hyper Text Markup Language.|
-|Localization Interchange File Format|xlf. , xliff| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
+|Localization Interchange File Format|xlf| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
|Markdown| markdown, mdown, mkdn, md, mkd, mdwn, mdtxt, mdtext, rmd| A lightweight markup language for creating formatted text.| |MHTML|mthml, mht| A web page archive format used to combine HTML code and its companion resources.| |Microsoft Excel|xls, xlsx|A spreadsheet file for data analysis and documentation.|
cognitive-services Migrate Language Service Latest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/migrate-language-service-latest.md
Previously updated : 01/10/2022 Last updated : 07/13/2022
# Migrate to the latest version of Azure Cognitive Service for Language > [!TIP]
-> Just getting started with Azure Cognitive Service for Language? See the [overview article](../overview.md) for details on the service, available features, and links to quickstarts for sending your first API requests.
+> Just getting started with Azure Cognitive Service for Language? See the [overview article](../overview.md) for details on the service, available features, and links to quickstarts for information on the current version of the API.
-If your applications are using an older version of the Text Analytics API (before v3.1), or client library (before stable v5.1.0), this article will help you upgrade your applications to use the latest version of the [Azure Cognitive Service for language](../overview.md) features.
+If your applications are still using the Text Analytics API, or client library (before stable v5.1.0), this article will help you upgrade your applications to use the latest version of the [Azure Cognitive Service for language](../overview.md) features.
-## Features
+## Unified Language endpoint (REST API)
-Select one of the features below to see information you can use to update your application.
+This section applies to applications that use the older `/text/analytics/...` endpoint format for REST API calls. For example:
-## [Sentiment analysis](#tab/sentiment-analysis)
+```http
+https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/<version>/<feature>
+```
-> [!NOTE]
-> * Want to use the latest version of the API in your application? See the [sentiment analysis](../sentiment-opinion-mining/how-to/call-api.md) how-to article and [quickstart](../sentiment-opinion-mining/quickstart.md) for information on the current version of the API.
-> * The version `3.1-preview.x` REST API endpoints and `5.1.0-beta.x` client library has been deprecated.
+If your application uses the above endpoint format, the REST API endpoint for the following Language service features has changed:
-## Feature changes from version 2.1
+* [Entity linking](../entity-linking/quickstart.md?pivots=rest-api)
+* [Key phrase extraction](../key-phrase-extraction/quickstart.md?pivots=rest-api)
+* [Language detection](../language-detection/quickstart.md?pivots=rest-api)
+* [Named entity recognition (NER)](../named-entity-recognition/quickstart.md?pivots=rest-api)
+* [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md?pivots=rest-api)
+* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md?pivots=rest-api)
+* [Text analytics for health](../text-analytics-for-health/quickstart.md?pivots=rest-api)
-Sentiment Analysis in version 2.1 returns sentiment scores between 0 and 1 for each document sent to the API, with scores closer to 1 indicating more positive sentiment. The current version of this feature returns sentiment labels (such as "positive" or "negative") for both the sentences and the document as a whole, and their associated confidence scores.
+The Language service now provides a unified endpoint for sending REST API requests to these features. If your application uses the REST API, update its request endpoint to use the current endpoint:
-## Migrate to the current version
+```http
+https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01
+```
-### REST API
+Additionally, the format of the JSON request body has changed. You'll need to update the request structure that your application sends to the API; for example, the following entity recognition JSON body:
-If your application uses the REST API, update its request endpoint to use the [current endpoint](../sentiment-opinion-mining/quickstart.md?pivots=rest-api) for sentiment analysis. For example:`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/sentiment`. You will also need to update the application to use the sentiment labels returned in the [API's response](../sentiment-opinion-mining/how-to/call-api.md).
+```json
+{
+ "kind": "EntityRecognition",
+ "parameters": {
+ "modelVersion": "latest"
+ },
+ "analysisInput":{
+ "documents":[
+ {
+ "id":"1",
+ "language": "en",
+ "text": "I had a wonderful trip to Seattle last week."
+ }
+ ]
+ }
+}
+```
-See the reference documentation for examples of the JSON response.
-* [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c9)
-* [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Sentiment)
-* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
+Use the quickstarts linked above to see current example REST API calls for the feature(s) you're using, and the associated API output.
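As an illustration, a body like the one above can be sent to the unified endpoint with any plain HTTP client. This sketch assumes the body is saved as `request.json`, and uses placeholder endpoint and key values with the standard Cognitive Services key header:

```bash
# Hypothetical call; replace the endpoint host and key with your resource's values.
curl -X POST "https://<your-language-resource-endpoint>/language/:analyze-text?api-version=2022-05-01" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -d @request.json
```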
-### Client libraries
+## Client libraries
-To use the latest version of the sentiment analysis client library, you will need to download the latest software package in the `Azure.AI.TextAnalytics` namespace. The [quickstart article](../sentiment-opinion-mining/quickstart.md) lists the commands you can use for your preferred language, with example code.
+To use the latest version of the client library, you will need to download the latest software package in the `Azure.AI.TextAnalytics` namespace. See the quickstart articles linked above for example code and instructions for using the client library in your preferred language.
+<!--[!INCLUDE [SDK target versions](../includes/sdk-target-versions.md)]-->
-## [NER, PII, and entity linking](#tab/named-entity-recognition)
-> [!NOTE]
-> Want to use the latest version of the API in your application? See the following articles for information on the current version of the APIs:
->
-> * [NER quickstart](../named-entity-recognition/quickstart.md)
-> * [Entity linking quickstart](../entity-linking/quickstart.md)
-> * [Personally Identifying Information (PII) detection quickstart](../personally-identifiable-information/quickstart.md)
->
-> The version `3.1-preview.x` REST API endpoints and `5.1.0-beta.x` client libraries has been deprecated.
+## Version 2.1 functionality changes
-## Feature changes from version 2.1
+If you're migrating an application from v2.1 of the API, there are several changes to feature functionality you should be aware of.
-In version 2.1, the Text Analytics API uses one endpoint for Named Entity Recognition (NER) and entity linking. The current version of this feature provides expanded named entity detection, and uses separate endpoints for NER and entity linking requests. Additionally, you can use another feature offered in the Language service that lets you detect [detect personal (pii) and health (phi) information](../personally-identifiable-information/overview.md).
+### Sentiment analysis v2.1
-## Migrate to the current version
+[Sentiment Analysis](../sentiment-opinion-mining/quickstart.md) in version 2.1 returns sentiment scores between 0 and 1 for each document sent to the API, with scores closer to 1 indicating more positive sentiment. The current version of this feature returns sentiment labels (such as "positive" or "negative") for both the sentences and the document as a whole, and their associated confidence scores.
-### REST API
+### NER, PII, and entity linking v2.1
-If your application uses the REST API, update its request endpoint to the [current endpoints](../named-entity-recognition/quickstart.md?pivots=rest-api) for NER and/or entity linking. For example:
-
-Entity Linking
-* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/linking`
-
-NER
-* `https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/general`
+In version 2.1, the Text Analytics API used one endpoint for Named Entity Recognition (NER) and entity linking. The current version of this feature provides expanded named entity detection, and has separate endpoints for [NER](../named-entity-recognition/quickstart.md?pivots=rest-api) and [entity linking](../entity-linking/quickstart.md?pivots=rest-api) requests. Additionally, you can use another feature offered in the Language service that lets you [detect personal (PII) and health (PHI) information](../personally-identifiable-information/overview.md).
You will also need to update your application to use the [entity categories](../named-entity-recognition/concepts/named-entity-categories.md) returned in the [API's response](../named-entity-recognition/how-to-call.md).
-See the reference documentation for examples of the JSON response.
-* [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/5ac4251d5b4ccd1554da7634)
-* [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/EntitiesRecognitionGeneral)
-* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/EntitiesRecognitionGeneral)
-
-### Client libraries
-
-To use the latest version of the NER and entity linking client libraries, you will need to download the latest software package in the `Azure.AI.TextAnalytics` namespace. The quickstart article for [Named Entity Recognition](../named-entity-recognition/quickstart.md) and [entity linking](../entity-linking/quickstart.md) lists the commands you can use for your preferred language, with example code.
--
-#### Version 2.1 entity categories
+### Version 2.1 entity categories
The following table lists the entity categories returned for NER v2.1.
The following table lists the entity categories returned for NER v2.1.
| Dimension | Dimensions and measurements. | | Temperature | Temperatures. |
-## [Language detection](#tab/language-detection)
-
-> [!NOTE]
-> * Want to use the latest version of the API in your application? See the [language detection](../language-detection/how-to/call-api.md) how-to article and [quickstart](../language-detection/quickstart.md) for information on the current version of the API.
-> * The version `3.1-preview.x` REST API endpoints and `5.1.0-beta.x` client libraries has been deprecated.
-
-## Feature changes from version 2.1
-
-The language detection feature output has changed in the current version. The JSON response will contain `ConfidenceScore` instead of `score`. The current version also only returns one language in a `detectedLanguage` attribute for each document.
-
-## Migrate to the current version
-
-### REST API
-
-If your application uses the REST API, update its request endpoint to the [current endpoint](../language-detection/quickstart.md?pivots=rest-api) for language detection. For example:`https://<your-custom-subdomain>.cognitiveservices.azure.com/text/analytics/v3.1/languages`. You will also need to update the application to use `ConfidenceScore` instead of `score` in the [API's response](../language-detection/how-to/call-api.md).
-
-See the reference documentation for examples of the JSON response.
-* [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c7)
-* [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Languages)
-* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Languages)
+### Language detection v2.1
-#### Client libraries
+The [language detection](../language-detection/quickstart.md) feature output has changed in the current version. The JSON response will contain `ConfidenceScore` instead of `score`. The current version also only returns one language for each document.
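For reference, the following is a trimmed, illustrative sketch of the current response shape; the field casing follows the REST JSON, and the values are made up:

```json
{
  "documents": [
    {
      "id": "1",
      "detectedLanguage": {
        "name": "English",
        "iso6391Name": "en",
        "confidenceScore": 0.99
      },
      "warnings": []
    }
  ]
}
```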
-To use the latest version of the sentiment analysis client library, you will need to download the latest software package in the `Azure.AI.TextAnalytics` namespace. The [quickstart article](../language-detection/quickstart.md) lists the commands you can use for your preferred language, with example code.
+### Key phrase extraction v2.1
-
-## [Key phrase extraction](#tab/key-phrase-extraction)
-
-> [!NOTE]
-> * Want to use the latest version of the API in your application? See the [key phrase extraction](../key-phrase-extraction/how-to/call-api.md) how-to article and [quickstart](../key-phrase-extraction/quickstart.md) for information on the current version of the API.
-> * The version `3.1-preview.x` REST API endpoints and `5.1.0-beta.x` client library has been deprecated.
-
-## Feature changes from version 2.1
-
-The key phrase extraction feature currently has not changed outside of the endpoint version.
-
-## Migrate to the current version
-
-### REST API
-
-If your application uses the REST API, update its request endpoint to the [current endpoint](../key-phrase-extraction/quickstart.md?pivots=rest-api) for key phrase extraction. For example: `https://<your-custom-subdomain>.api.cognitiveservices.azure.com/text/analytics/v3.1/keyPhrases`
-
-See the reference documentation for examples of the JSON response.
-* [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c6)
-* [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/KeyPhrases)
-* [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/KeyPhrases)
-
-### Client libraries
-
-To use the latest version of the sentiment analysis client library, you will need to download the latest software package in the `Azure.AI.TextAnalytics` namespace. The [quickstart article](../key-phrase-extraction/quickstart.md) lists the commands you can use for your preferred language, with example code.
---
+Key phrase extraction functionality has not changed, outside of the endpoint and request format.
## See also
-* [What is Azure Cognitive Service for language?](../overview.md)
+* [What is Azure Cognitive Service for Language?](../overview.md)
+* [Language service developer guide](developer-guide.md)
+* See the following reference documentation for information on previous API versions.
+ * [Version 2.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v2-1/operations/56f30ceeeda5650db055a3c9)
+ * [Version 3.0](https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0/operations/Sentiment)
+ * [Version 3.1](https://westcentralus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-1/operations/Sentiment)
+* Use the following quickstart guides to see examples for the current version of these features.
+ * [Entity linking](../entity-linking/quickstart.md)
+ * [Key phrase extraction](../key-phrase-extraction/quickstart.md)
+ * [Named entity recognition (NER)](../named-entity-recognition/quickstart.md)
+ * [Language detection](../language-detection/quickstart.md)
+ * [Personally Identifying Information (PII) detection](../personally-identifiable-information/quickstart.md)
+ * [Sentiment analysis and opinion mining](../sentiment-opinion-mining/quickstart.md)
+ * [Text analytics for health](../text-analytics-for-health/quickstart.md)
+
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Alice makes an outbound call from an Azure Communication Services app to a telep
**Total cost for the call**: $0.04 + $0.04 = $0.08
+### Pricing example: Outbound Call from a Dynamics 365 Omnichannel (D365 OC) agent application via Azure Communication Services direct routing
+
+Alice is a D365 contact center agent who makes an outbound call from D365 OC to a telephone number (Bob) via Azure Communication Services direct routing.
+- Alice uses the D365 OC client application
+- The D365 OC bot starts a new outgoing call via direct routing
+- The call goes to a Session Border Controller (SBC) connected via Communication Services direct routing
+- The D365 OC bot adds Alice to the call by escalating the direct routing call to a group call
+- The call lasts a total of 10 minutes.
+
+**Cost calculations**
+
+- One participant on the VoIP leg (Alice) from D365 OC client application x 10 minutes x $0.004 per participant leg per minute = $0.04
+- One participant on the Communication Services direct routing outbound leg (Bob) from Communication Services servers to an SBC x 10 minutes x $0.004 per participant leg per minute = $0.04.
+- The D365 OC bot does not introduce additional ACS charges.
+
+**Total cost for the call**: $0.04 + $0.04 = $0.08
+ ### Pricing example: Group audio call using JS SDK and one PSTN leg Alice and Bob are on a VOIP Call. Bob escalated the call to Charlie on Charlie's PSTN number, a US phone number beginning with `+1-425`.
communication-services Pstn Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md
All prices shown below are in USD.
Note: Pricing for all countries is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees. ***
+## Direct routing pricing
+For Azure Communication Services direct routing, there is a flat rate regardless of geography:
+
+|Number type |To make calls |To receive calls|
+|--|--|--|
+|Direct routing|USD 0.0040/min|USD 0.0040/min |
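+For example, at these rates a 10-minute call placed or received through direct routing costs 10 × $0.0040 = $0.04 per participant leg, which matches the per-leg charges shown in the pricing examples.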
+ ## Next steps In this quickstart, you learned how Telephony (PSTN) Offers are priced for Azure Communication Services.
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To verify your Teams License eligibility via Teams web client, follow the steps
1. If the authentication is successful and you remain in the https://teams.microsoft.com/ domain, then your Teams License is eligible. If authentication fails or you're redirected to the https://www.teams.live.com domain, then your Teams License isn't eligible to use Azure Communication Services support for Teams users. #### Checking your current Teams license via Microsoft Graph API
-You can find your current Teams license using [licenseDetails](https://docs.microsoft.com/graph/api/resources/licensedetails) Microsoft Graph API that returns licenses assigned to a user. Follow the steps below to use the Graph Explorer tool to view licenses assigned to a user:
+You can find your current Teams license using the [licenseDetails](/graph/api/resources/licensedetails) Microsoft Graph API, which returns licenses assigned to a user. Follow the steps below to use the Graph Explorer tool to view licenses assigned to a user:
1. Open your browser and navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) 1. Sign in to Graph Explorer using the credentials.
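If you'd rather call Microsoft Graph directly instead of using Graph Explorer, the same information is available from the `licenseDetails` endpoint. This sketch assumes you already have a delegated access token for the signed-in user:

```bash
# Returns the licenses assigned to the signed-in user; <access-token> is a placeholder.
curl -H "Authorization: Bearer <access-token>" \
  "https://graph.microsoft.com/v1.0/me/licenseDetails"
```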
The Azure Communication Services SMS SDK uses the following error codes to help
## Related information - [Logs and diagnostics](logging-and-diagnostics.md) - [Metrics](metrics.md)-- [Service limits](service-limits.md)
+- [Service limits](service-limits.md)
communication-services Eligible Teams Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/eligible-teams-licenses.md
For more information, see [Azure AD Product names and service plan identifiers](
### How to find current Teams license
-You can find your current Teams license using [licenseDetails](https://docs.microsoft.com/graph/api/resources/licensedetails) Microsoft Graph API that returns licenses assigned to a user.
+You can find your current Teams license using the [licenseDetails](/graph/api/resources/licensedetails) Microsoft Graph API, which returns licenses assigned to a user.
For more information on verification for eligibility, see [Verification of Teams license eligibility](../concepts/troubleshooting-info.md#verification-of-teams-license-eligibility-to-use-azure-communication-services-support-for-teams-users).
communication-services Pstn Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/pstn-call.md
Title: Quickstart - Call To Phone
+ Title: Quickstart - Call to a telephone number
description: In this quickstart, you'll learn how to add PSTN calling capabilities to your app using Azure Communication Services.
zone_pivot_groups: acs-plat-web-ios-android
-# Quickstart: Call To Phone
+# Quickstart: Outbound call to a telephone number
Get started with Azure Communication Services by using the Communication Services Calling SDK to add PSTN calling to your app.
communication-services File Sharing Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial.md
You can follow the tutorial [Upload file to Azure Blob Storage with an Azure Fun
Once implemented, you can call this Azure Function inside the `uploadHandler` function to upload files to Azure Blob Storage. For the remainder of the tutorial, we will assume you have generated the function using the tutorial for Azure Blob Storage linked above.
+### Securing your Azure Blob Storage container
+
+The tutorial above assumes that your Azure Blob Storage container allows public access to the files you upload. Making your Azure storage containers public isn't recommended for real-world production applications.
+
+To download the files you upload to Azure Blob Storage, you can use shared access signatures (SAS). A SAS provides secure delegated access to resources in your storage account. With a SAS, you have granular control over how a client can access your data.
+
+The downloadable [GitHub sample](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/ui-library-filesharing-chat-composite) showcases the use of SAS for creating SAS URLs to Azure Storage contents. Additionally, you can [read more about SAS](/azure/storage/common/storage-sas-overview).
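As a minimal sketch of the SAS approach, the Azure CLI can mint a time-limited, read-only SAS URL for an uploaded blob; the account, key, container, blob name, and expiry below are all placeholders:

```azurecli
# Generate a read-only SAS URL for a single uploaded file; all values are placeholders.
az storage blob generate-sas \
  --account-name "<storage-account>" \
  --account-key "<account-key>" \
  --container-name "<container>" \
  --name "<uploaded-file-name>" \
  --permissions r \
  --expiry "2022-12-31T00:00:00Z" \
  --https-only \
  --full-uri
```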
+ UI Library requires a React environment to be set up. Next we will do that. If you already have a React App, you can skip this section. ### Set Up React App
You may also want to:
- [Add chat to your app](../quickstarts/chat/get-started.md) - [Creating user access tokens](../quickstarts/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md)-- [Learn about authentication](../concepts/authentication.md)
+- [Learn about authentication](../concepts/authentication.md)
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
Azure Container Apps doesn't provide direct access to the underlying Kubernetes
You can get started building your first container app [using the quickstarts](get-started.md). ### Azure App Service
-[Azure App Service](/azure/app-service) provides fully managed hosting for web applications including websites and web APIs. These web applications may be deployed using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. When building web apps, Azure App Service is an ideal option.
+[Azure App Service](../app-service/index.yml) provides fully managed hosting for web applications including websites and web APIs. These web applications may be deployed using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. When building web apps, Azure App Service is an ideal option.
### Azure Container Instances
-[Azure Container Instances (ACI)](/azure/container-instances) provides a single pod of Hyper-V isolated containers on demand. It can be thought of as a lower-level "building block" option compared to Container Apps. Concepts like scale, load balancing, and certificates are not provided with ACI containers. For example, to scale to five container instances, you create five distinct container instances. Azure Container Apps provide many application-specific concepts on top of containers, including certificates, revisions, scale, and environments. Users often interact with Azure Container Instances through other services. For example, Azure Kubernetes Service can layer orchestration and scale on top of ACI through [virtual nodes](../aks/virtual-nodes.md). If you need a less "opinionated" building block that doesn't align with the scenarios Azure Container Apps is optimizing for, Azure Container Instances is an ideal option.
+[Azure Container Instances (ACI)](../container-instances/index.yml) provides a single pod of Hyper-V isolated containers on demand. It can be thought of as a lower-level "building block" option compared to Container Apps. Concepts like scale, load balancing, and certificates are not provided with ACI containers. For example, to scale to five container instances, you create five distinct container instances. Azure Container Apps provide many application-specific concepts on top of containers, including certificates, revisions, scale, and environments. Users often interact with Azure Container Instances through other services. For example, Azure Kubernetes Service can layer orchestration and scale on top of ACI through [virtual nodes](../aks/virtual-nodes.md). If you need a less "opinionated" building block that doesn't align with the scenarios Azure Container Apps is optimizing for, Azure Container Instances is an ideal option.
### Azure Kubernetes Service [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) provides a fully managed Kubernetes option in Azure. It supports direct access to the Kubernetes API and runs any Kubernetes workload. The full cluster resides in your subscription, with the cluster configurations and operations within your control and responsibility. For teams looking for a fully managed version of Kubernetes in Azure, Azure Kubernetes Service is an ideal option.
You can get started building your first container app [using the quickstarts](ge
## Next steps > [!div class="nextstepaction"]
-> [Deploy your first container app](get-started.md)
+> [Deploy your first container app](get-started.md)
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
There's no forced tunneling in Container Apps routes.
## Managed resources
-When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](https://docs.microsoft.com/azure/container-apps/billing), you will be billed for the following:
+When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you will be billed for the following:
- Three standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/) if using an internal environment, or four standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/) if using an external environment. - Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has less than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
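For example, you can inspect (but shouldn't modify) the contents of the managed resource group with the Azure CLI; a sketch, where the `MC_` group name is illustrative:

```azurecli
# List the managed infrastructure resources (public IPs, load balancers)
az resource list --resource-group MC_capp-environment_myRegion --output table
```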
When you deploy an internal or an external environment into your own network, a
## Next steps - [Deploy with an external environment](vnet-custom.md)-- [Deploy with an internal environment](vnet-custom-internal.md)
+- [Deploy with an internal environment](vnet-custom-internal.md)
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
zone_pivot_groups: azure-cli-or-portal
The following example shows you how to create a Container Apps environment in an existing virtual network.
-> [!IMPORTANT]
-> Container Apps environments are deployed on a virtual network. This network can be managed or custom (pre-configured by the user beforehand). In either case, the environment has dependencies on services outside of that virtual network. For a list of these dependencies see [Outbound FQDN dependencies](firewall-integration.md#outbound-fqdn-dependencies).
- ::: zone pivot="azure-portal" <!-- Create -->
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
zone_pivot_groups: azure-cli-or-portal
The following example shows you how to create a Container Apps environment in an existing virtual network.
-> [!IMPORTANT]
-> Container Apps environments are deployed on a virtual network. This network can be managed or custom (pre-configured by the user beforehand). In either case, the environment has dependencies on services outside of that virtual network. For a list of these dependencies see [Outbound FQDN dependencies](firewall-integration.md#outbound-fqdn-dependencies).
- ::: zone pivot="azure-portal" <!-- Create -->
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
See the Azure quickstart template [Create an Azure container group with VNet](ht
[az-container-delete]: /cli/azure/container#az-container-delete [az-network-vnet-delete]: /cli/azure/network/vnet#az-network-vnet-delete [az-group-delete]: /cli/azure/group#az-group-create
-[cloud-shell-bash]: /azure/cloud-shell/overview
+[cloud-shell-bash]: ../cloud-shell/overview.md
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
This article shows how to add GPU resources when you deploy a container group by
> [!IMPORTANT] > This feature is currently in preview, and some [limitations apply](#preview-limitations). Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
+## Prerequisites
+> [!NOTE]
+> Due to some current limitations, not all limit increase requests are guaranteed to be approved.
+
+* If you would like to use this SKU for your production container deployments, create an [Azure Support request](https://azure.microsoft.com/support) to increase the limit.
+ ## Preview limitations In preview, the following limitations apply when using GPU resources in container groups.
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* Outbound connection to port 25 is not supported at this time. * If you are connecting your container group to an Azure Storage Account, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to that resource. * [IPv6 addresses](../virtual-network/ip-services/ipv6-overview.md) are not supported at this time.
-* Depending on your subscription type, [certain ports may be blocked](/azure/virtual-network/network-security-groups-overview#azure-platform-considerations).
+* Depending on your subscription type, [certain ports may be blocked](../virtual-network/network-security-groups-overview.md#azure-platform-considerations).
## Required network resources
In the following diagram, several container groups have been deployed to a subne
<!-- LINKS - Internal --> [az-container-create]: /cli/azure/container#az_container_create
-[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
+[az-network-profile-list]: /cli/azure/network/profile#az_network_profile_list
container-instances Monitor Azure Container Instances Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/monitor-azure-container-instances-reference.md
Azure Container Instances has the following dimension associated with its metric
## Activity log
-The following table lists the operations that Azure Container Instances may record in the Activity log. This is a subset of the possible entries you might find in the activity log. You can also find this information in the [Azure role-based access control (RBAC) Resource provider operations documentation](/azure/role-based-access-control/resource-provider-operations#microsoftcontainerinstance).
+The following table lists the operations that Azure Container Instances may record in the Activity log. This is a subset of the possible entries you might find in the activity log. You can also find this information in the [Azure role-based access control (RBAC) Resource provider operations documentation](../role-based-access-control/resource-provider-operations.md#microsoftcontainerinstance).
| Operation | Description | |:|:|
The following table lists the operations that Azure Container Instances may reco
| Microsoft.ContainerInstance/operations/read | List the operations for Azure Container Instance service. | | Microsoft.ContainerInstance/serviceassociationlinks/delete | Delete the service association link created by Azure Container Instance resource provider on a subnet. |
-See [all the possible resource provider operations in the activity log](/azure/role-based-access-control/resource-provider-operations).
+See [all the possible resource provider operations in the activity log](../role-based-access-control/resource-provider-operations.md).
-For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
## Schemas
The following schemas are in use by Azure Container Instances.
## See also - See [Monitoring Azure Container Instances](monitor-azure-container-instances.md) for a description of monitoring Azure Container Instances.-- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
container-instances Monitor Azure Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/monitor-azure-container-instances.md
Last updated 06/06/2022
When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
-This article describes the monitoring data generated by Azure Container Instances. Azure Container Instances includes built-in support for [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+This article describes the monitoring data generated by Azure Container Instances. Azure Container Instances includes built-in support for [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
## Monitoring overview page in Azure portal
The **Overview** page in the Azure portal for each container instance includes a
## Monitoring data
-Azure Container Instances collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources).
+Azure Container Instances collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
See [Monitoring *Azure Container Instances* data reference](monitor-azure-container-instances-reference.md) for detailed information on the metrics and logs metrics created by Azure Container Instances.
The metrics and logs you can collect are discussed in the following sections.
## Analyzing metrics
-You can analyze metrics for *Azure Container Instances* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool.
+You can analyze metrics for *Azure Container Instances* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
For a list of the platform metrics collected for Azure Container Instances, see [Monitoring Azure Container Instances data reference metrics](monitor-azure-container-instances-reference.md#metrics). All metrics for Azure Container Instances are in the namespace **Container group standard metrics**. In a container group with multiple containers, you can additionally filter on the [dimension](monitor-azure-container-instances-reference.md#metric-dimensions) **containerName** to acquire metrics from a specific container within the group.
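As an illustration, the following Azure CLI sketch filters the `CpuUsage` metric on the **containerName** dimension; the resource names are illustrative:

```azurecli
# Get the container group's resource ID, then list CPU usage per container
aciId=$(az container show --resource-group myResourceGroup --name myContainerGroup --query id --output tsv)
az monitor metrics list --resource $aciId --metric CpuUsage --dimension containerName
```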
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
### View operation level metrics for Azure Container Instances
In a scenario where you have a container group with multiple containers, you may
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema) The schema for Azure Container Instances resource logs is found in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#schemas).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Container Instances resource logs is found in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#schemas).
-The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of Azure platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. You can see a list of the kinds of operations that will be logged in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#activity-log)
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of Azure platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. You can see a list of the kinds of operations that will be logged in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#activity-log).
### Sample Kusto queries
ContainerInstanceLog_CL
``` > [!IMPORTANT]
-> When you select **Logs** from the Azure Container Instances menu, Log Analytics is opened with the query scope set to the current Azure Container Instances. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details.
+> When you select **Logs** from the Azure Container Instances menu, Log Analytics is opened with the query scope set to the current Azure Container Instances. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
-For a list of common queries for Azure Container Instances, see the [Log Analytics queries interface](/azure/azure-monitor/logs/queries).
+For a list of common queries for Azure Container Instances, see the [Log Analytics queries interface](../azure-monitor/logs/queries.md).
## Alerts
For Azure Container Instances, there are three categories for alerting:
## Next steps * See the [Monitoring Azure Container Instances data reference](monitor-azure-container-instances-reference.md) for a reference of the metrics, logs, and other important values created by Azure Container Instances.
-* See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
For this article, you learn more about managed identities and how to:
> * Grant the identity access to an Azure container registry > * Use the managed identity to access the registry and pull a container image
-To create the Azure resources, this article requires that you run the Azure CLI version 2.0.55 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli].
+### [Azure CLI](#tab/azure-cli)
+
+To create the Azure resources, this article requires that you run the Azure CLI version 2.0.55 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+To create the Azure resources, this article requires that you run the Azure PowerShell module version 7.5.0 or later. Run `Get-Module Az -ListAvailable` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module][azure-powershell-install].
++ To set up a container registry and push a container image to it, you must also have Docker installed locally. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system.
Then, use the identity to authenticate to any [service that supports Azure AD au
## Create a container registry
+### [Azure CLI](#tab/azure-cli)
+ If you don't already have an Azure container registry, create a registry and push a sample container image to it. For steps, see [Quickstart: Create a private container registry using the Azure CLI](container-registry-get-started-azure-cli.md). This article assumes you have the `aci-helloworld:v1` container image stored in your registry. The examples use a registry name of *myContainerRegistry*. Replace with your own registry and image names in later steps.
+### [Azure PowerShell](#tab/azure-powershell)
+
+If you don't already have an Azure container registry, create a registry and push a sample container image to it. For steps, see [Quickstart: Create a private container registry using Azure PowerShell](container-registry-get-started-powershell.md).
+
+This article assumes you have the `aci-helloworld:v1` container image stored in your registry. The examples use a registry name of *myContainerRegistry*. Replace with your own registry and image names in later steps.
+++ ## Create a Docker-enabled VM
-Create a Docker-enabled Ubuntu virtual machine. You also need to install the [Azure CLI](/cli/azure/install-azure-cli) on the virtual machine. If you already have an Azure virtual machine, skip this step to create the virtual machine.
+### [Azure CLI](#tab/azure-cli)
+
+Create a Docker-enabled Ubuntu virtual machine. You also need to install the [Azure CLI][azure-cli-install] on the virtual machine. If you already have an Azure virtual machine, skip this step.
Deploy a default Ubuntu Azure virtual machine with [az vm create][az-vm-create]. The following example creates a VM named *myDockerVM* in an existing resource group named *myResourceGroup*:
-```azurecli
+```azurecli-interactive
az vm create \ --resource-group myResourceGroup \ --name myDockerVM \
az vm create \
It takes a few minutes for the VM to be created. When the command completes, take note of the `publicIpAddress` displayed by the Azure CLI. Use this address to make SSH connections to the VM.
+### [Azure PowerShell](#tab/azure-powershell)
+
+Create a Docker-enabled Ubuntu virtual machine. You also need to install [Azure PowerShell][azure-powershell-install] on the virtual machine. If you already have an Azure virtual machine, skip this step.
+
+Deploy a default Ubuntu Azure virtual machine with [New-AzVM][new-azvm]. The following example creates a VM named *myDockerVM* in an existing resource group named *myResourceGroup*. You're prompted for a user name to use when you connect to the VM; specify *azureuser*. You're also asked for a password, which you can leave blank, because password login for the VM is disabled when using an SSH key.
+
+```azurepowershell-interactive
+$vmParams = @{
+ ResourceGroupName = 'MyResourceGroup'
+ Name = 'myDockerVM'
+ Image = 'UbuntuLTS'
+ PublicIpAddressName = 'myPublicIP'
+ GenerateSshKey = $true
+ SshKeyName = 'mySSHKey'
+}
+New-AzVM @vmParams
+```
+
+It takes a few minutes for the VM to be created. When the command completes, run the following command to get the public IP address. Use this address to make SSH connections to the VM.
+
+```azurepowershell-interactive
+Get-AzPublicIpAddress -Name myPublicIP -ResourceGroupName myResourceGroup | Select-Object -ExpandProperty IpAddress
+```
+++ ### Install Docker on the VM After the VM is running, make an SSH connection to the VM. Replace *publicIpAddress* with the public IP address of your VM.
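A sketch of the connection and one common Ubuntu install path (the article's own install steps may differ):

```bash
# Connect to the VM, then install Docker from the Ubuntu repositories
ssh azureuser@<publicIpAddress>
sudo apt-get update
sudo apt-get install -y docker.io
```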
After installation, run the following command to verify that Docker is running p
sudo docker run -it mcr.microsoft.com/hello-world ```
-Output:
-
-```
+```output
Hello from Docker! This message shows that your installation appears to be working correctly. [...] ```
+### [Azure CLI](#tab/azure-cli)
### Install the Azure CLI Follow the steps in [Install Azure CLI with apt](/cli/azure/install-azure-cli-apt) to install the Azure CLI on your Ubuntu virtual machine. For this article, ensure that you install version 2.0.55 or later.
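For reference, the documented one-line installer for Debian/Ubuntu looks like this:

```bash
# Install the Azure CLI via the Microsoft install script, then verify
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az version
```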
+### [Azure PowerShell](#tab/azure-powershell)
+
+### Install Azure PowerShell
+
+Follow the steps in [Installing PowerShell on Ubuntu][powershell-install] and [Install the Azure Az PowerShell module][azure-powershell-install] to install PowerShell and Azure PowerShell on your Ubuntu virtual machine. For this article, ensure that you install Azure PowerShell version 7.5.0 or later.
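One possible path on Ubuntu, sketched under the assumption that snap is available (the linked articles cover the supported install methods):

```bash
# Install PowerShell, then install the Az module inside it
sudo snap install powershell --classic
pwsh -Command "Install-Module -Name Az -Repository PSGallery -Force"
```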
+++ Exit the SSH session. ## Example 1: Access with a user-assigned identity ### Create an identity
-Create an identity in your subscription using the [az identity create](/cli/azure/identity#az-identity-create) command. You can use the same resource group you used previously to create the container registry or virtual machine, or a different one.
+### [Azure CLI](#tab/azure-cli)
+
+Create an identity in your subscription using the [az identity create][az-identity-create] command. You can use the same resource group you used previously to create the container registry or virtual machine, or a different one.
```azurecli-interactive az identity create --resource-group myResourceGroup --name myACRId ```
-To configure the identity in the following steps, use the [az identity show][az_identity_show] command to store the identity's resource ID and service principal ID in variables.
+To configure the identity in the following steps, use the [az identity show][az-identity-show] command to store the identity's resource ID and service principal ID in variables.
-```azurecli
+```azurecli-interactive
# Get resource ID of the user-assigned identity userID=$(az identity show --resource-group myResourceGroup --name myACRId --query id --output tsv)
echo $userID
The ID is of the form:
+```output
+/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Create an identity in your subscription using the [New-AzUserAssignedIdentity][new-azuserassignedidentity] cmdlet. You can use the same resource group you used previously to create the container registry or virtual machine, or a different one.
+
+```azurepowershell-interactive
+New-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Location eastus -Name myACRId
+```
+
+To configure the identity in the following steps, use the [Get-AzUserAssignedIdentity][get-azuserassignedidentity] cmdlet to store the identity's resource ID and service principal ID in variables.
+
+```azurepowershell-interactive
+# Get resource ID of the user-assigned identity
+$userID = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).Id
+
+# Get service principal ID of the user-assigned identity
+$spID = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).PrincipalId
+```
+
+Because you need the identity's ID in a later step when you sign in to Azure PowerShell from your virtual machine, show the value:
+
+```azurepowershell-interactive
+$userID
+```
+
+The ID is of the form:
+
+```output
/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myACRId ``` ++ ### Configure the VM with the identity
+### [Azure CLI](#tab/azure-cli)
+ The following [az vm identity assign][az-vm-identity-assign] command configures your Docker VM with the user-assigned identity:
-```azurecli
+```azurecli-interactive
az vm identity assign --resource-group myResourceGroup --name myDockerVM --identities $userID ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+The following [Update-AzVM][update-azvm] command configures your Docker VM with the user-assigned identity:
+
+```azurepowershell-interactive
+$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM
+Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType UserAssigned -IdentityID $userID
+```
+++ ### Grant identity access to the container registry
+### [Azure CLI](#tab/azure-cli)
+ Now configure the identity to access your container registry. First use the [az acr show][az-acr-show] command to get the resource ID of the registry:
-```azurecli
+```azurecli-interactive
resourceID=$(az acr show --resource-group myResourceGroup --name myContainerRegistry --query id --output tsv) ```
-Use the [az role assignment create][az-role-assignment-create] command to assign the AcrPull role to the registry. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the ACRPush role.
+Use the [az role assignment create][az-role-assignment-create] command to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role.
-```azurecli
+```azurecli-interactive
az role assignment create --assignee $spID --scope $resourceID --role acrpull ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Now configure the identity to access your container registry. First use the [Get-AzContainerRegistry][get-azcontainerregistry] command to get the resource ID of the registry:
+
+```azurepowershell-interactive
+$resourceID = (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myContainerRegistry).Id
+```
+
+Use the [New-AzRoleAssignment][new-azroleassignment] cmdlet to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role.
+
+```azurepowershell-interactive
+New-AzRoleAssignment -ObjectId $spID -Scope $resourceID -RoleDefinitionName AcrPull
+```
+++ ### Use the identity to access the registry
+### [Azure CLI](#tab/azure-cli)
+ SSH into the Docker virtual machine that's configured with the identity. Run the following Azure CLI commands, using the Azure CLI installed on the VM. First, authenticate to the Azure CLI with [az login][az-login], using the identity you configured on the VM. For `<userID>`, substitute the ID of the identity you retrieved in a previous step.
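For reference, the sign-in sequence looks like the following sketch, assuming the `$userID` variable stored earlier and an illustrative registry name:

```azurecli
# Sign in with the user-assigned identity, then authenticate to the registry
az login --identity --username $userID
az acr login --name myContainerRegistry
```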
You should see a `Login succeeded` message. You can then run `docker` commands w
docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1 ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+SSH into the Docker virtual machine that's configured with the identity. Run the following commands using Azure PowerShell installed on the VM.
+
+First, authenticate to Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the identity you configured on the VM. For `-AccountId`, specify the client ID of the identity.
+
+```azurepowershell
+$clientId = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).ClientId
+Connect-AzAccount -Identity -AccountId $clientId
+```
+
+Then, authenticate to the registry with [Connect-AzContainerRegistry][connect-azcontainerregistry]. When you use this command, Azure PowerShell uses the Azure Active Directory token created when you ran `Connect-AzAccount` to seamlessly authenticate your session with the container registry. (Depending on your VM's setup, you might need to run this command and docker commands with `sudo`.)
+
+```azurepowershell
+sudo pwsh -command Connect-AzContainerRegistry -Name myContainerRegistry
+```
+
+You should see a `Login succeeded` message. You can then run `docker` commands without providing credentials. For example, run [docker pull][docker-pull] to pull the `aci-helloworld:v1` image, specifying the login server name of your registry. The login server name consists of your container registry name (all lowercase) followed by `.azurecr.io` - for example, `mycontainerregistry.azurecr.io`.
+
+```
+docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1
+```
+++ ## Example 2: Access with a system-assigned identity ### Configure the VM with a system-managed identity
+### [Azure CLI](#tab/azure-cli)
+ The following [az vm identity assign][az-vm-identity-assign] command configures your Docker VM with a system-assigned identity:
-```azurecli
+```azurecli-interactive
az vm identity assign --resource-group myResourceGroup --name myDockerVM ```
Use the [az vm show][az-vm-show] command to set a variable to the value of `prin
spID=$(az vm show --resource-group myResourceGroup --name myDockerVM --query identity.principalId --out tsv) ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+The following [Update-AzVM][update-azvm] command configures your Docker VM with a system-assigned identity:
+
+```azurepowershell-interactive
+$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM
+Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType SystemAssigned
+```
+
+Use the [Get-AzVM][get-azvm] command to set a variable to the value of `principalId` (the service principal ID) of the VM's identity, to use in later steps.
+
+```azurepowershell-interactive
+$spID = (Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM).Identity.PrincipalId
+```
+++ ### Grant identity access to the container registry
+### [Azure CLI](#tab/azure-cli)
+ Now configure the identity to access your container registry. First use the [az acr show][az-acr-show] command to get the resource ID of the registry:
-```azurecli
+```azurecli-interactive
resourceID=$(az acr show --resource-group myResourceGroup --name myContainerRegistry --query id --output tsv) ```
-Use the [az role assignment create][az-role-assignment-create] command to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the ACRPush role.
+Use the [az role assignment create][az-role-assignment-create] command to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role.
-```azurecli
+```azurecli-interactive
az role assignment create --assignee $spID --scope $resourceID --role acrpull ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Now configure the identity to access your container registry. First use the [Get-AzContainerRegistry][get-azcontainerregistry] command to get the resource ID of the registry:
+
+```azurepowershell-interactive
+$resourceID = (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myContainerRegistry).Id
+```
+
+Use the [New-AzRoleAssignment][new-azroleassignment] cmdlet to assign the AcrPull role to the identity. This role provides [pull permissions](container-registry-roles.md) to the registry. To provide both pull and push permissions, assign the AcrPush role.
+
+```azurepowershell-interactive
+New-AzRoleAssignment -ObjectId $spID -Scope $resourceID -RoleDefinitionName AcrPull
+```
+++ ### Use the identity to access the registry
+### [Azure CLI](#tab/azure-cli)
+ SSH into the Docker virtual machine that's configured with the identity. Run the following Azure CLI commands, using the Azure CLI installed on the VM. First, authenticate the Azure CLI with [az login][az-login], using the system-assigned identity on the VM.
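A sketch of the sequence (the registry name is illustrative):

```azurecli
# Sign in with the VM's system-assigned identity, then authenticate to the registry
az login --identity
az acr login --name myContainerRegistry
```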
You should see a `Login succeeded` message. You can then run `docker` commands w
``` docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1 ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+SSH into the Docker virtual machine that's configured with the identity. Run the following commands using Azure PowerShell installed on the VM.
+
+First, authenticate Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the system-assigned identity on the VM.
+
+```azurepowershell
+Connect-AzAccount -Identity
+```
+
+Then, authenticate to the registry with [Connect-AzContainerRegistry][connect-azcontainerregistry]. When you use this command, Azure PowerShell uses the Azure Active Directory token created when you ran `Connect-AzAccount` to seamlessly authenticate your session with the container registry. (Depending on your VM's setup, you might need to run this command and docker commands with `sudo`.)
+
+```azurepowershell
+sudo pwsh -command Connect-AzContainerRegistry -Name myContainerRegistry
+```
+
+You should see a `Login succeeded` message. You can then run `docker` commands without providing credentials. For example, run [docker pull][docker-pull] to pull the `aci-helloworld:v1` image, specifying the login server name of your registry. The login server name consists of your container registry name (all lowercase) followed by `.azurecr.io` - for example, `mycontainerregistry.azurecr.io`.
+
+```
+docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1
+```
++ ## Next steps
In this article, you learned about using managed identities with Azure Container
<!-- LINKS - Internal --> [az-login]: /cli/azure/reference-index#az_login
+[connect-azaccount]: /powershell/module/az.accounts/connect-azaccount
[az-acr-login]: /cli/azure/acr#az_acr_login
+[connect-azcontainerregistry]: /powershell/module/az.containerregistry/connect-azcontainerregistry
[az-acr-show]: /cli/azure/acr#az_acr_show
+[get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry
[az-vm-create]: /cli/azure/vm#az_vm_create
+[new-azvm]: /powershell/module/az.compute/new-azvm
[az-vm-show]: /cli/azure/vm#az_vm_show
+[get-azvm]: /powershell/module/az.compute/get-azvm
+[az-identity-create]: /cli/azure/identity#az_identity_create
+[new-azuserassignedidentity]: /powershell/module/az.managedserviceidentity/new-azuserassignedidentity
[az-vm-identity-assign]: /cli/azure/vm/identity#az_vm_identity_assign
+[update-azvm]: /powershell/module/az.compute/update-azvm
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
-[az-acr-login]: /cli/azure/acr#az_acr_login
+[new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment
[az-identity-show]: /cli/azure/identity#az_identity_show
-[azure-cli]: /cli/azure/install-azure-cli
+[get-azuserassignedidentity]: /powershell/module/az.managedserviceidentity/get-azuserassignedidentity
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
+[powershell-install]: /powershell/scripting/install/install-ubuntu
container-registry Container Registry Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-skus.md
Throttling could occur temporarily when you generate a burst of image pull or pu
## Show registry usage
-Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command, or the [List Usages](/rest/api/containerregistry/registries/list-usages) REST API, to get a snapshot of your registry's current consumption of storage and other resources, compared with the limits for that registry's service tier. Storage usage also appears on the registry's **Overview** page in the portal.
+Use the [az acr show-usage](/cli/azure/acr#az-acr-show-usage) command in the Azure CLI, [Get-AzContainerRegistryUsage](/powershell/module/az.containerregistry/get-azcontainerregistryusage) in Azure PowerShell, or the [List Usages](/rest/api/containerregistry/registries/list-usages) REST API, to get a snapshot of your registry's current consumption of storage and other resources, compared with the limits for that registry's service tier. Storage usage also appears on the registry's **Overview** page in the portal.
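For example, a quick snapshot in table form (the registry name is illustrative):

```azurecli
az acr show-usage --resource-group myResourceGroup --name myContainerRegistry --output table
```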
Usage information helps you make decisions about [changing the service tier](#changing-tiers) when your registry nears a limit. This information also helps you [manage consumption](container-registry-best-practices.md#manage-registry-size).
There is no registry downtime or impact on registry operations when you move bet
To move between service tiers in the Azure CLI, use the [az acr update][az-acr-update] command. For example, to switch to Premium: ```azurecli
-az acr update --name myregistry --sku Premium
+az acr update --name myContainerRegistry --sku Premium
+```
+
+### Azure PowerShell
+
+To move between service tiers in Azure PowerShell, use the [Update-AzContainerRegistry][update-azcontainerregistry] cmdlet. For example, to switch to Premium:
+
+```azurepowershell
+Update-AzContainerRegistry -ResourceGroupName myResourceGroup -Name myContainerRegistry -Sku Premium
``` ### Azure portal
Submit and vote on new feature suggestions in [ACR UserVoice][container-registry
<!-- LINKS - Internal --> [az-acr-update]: /cli/azure/acr#az_acr_update
+[update-azcontainerregistry]: /powershell/module/az.containerregistry/update-azcontainerregistry
[container-registry-geo-replication]: container-registry-geo-replication.md [container-registry-storage]: container-registry-storage.md [container-registry-delete]: container-registry-delete.md
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-login.md
Run the [az acr check-health](/cli/azure/acr#az-acr-check-health) command to get
See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions.
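A sketch of a typical invocation (the registry name is illustrative):

```azurecli
# Report environment and registry health, continuing past individual errors
az acr check-health --name myContainerRegistry --ignore-errors --yes
```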
-Follow the instructions from the [AKS support doc](https://docs.microsoft.com/troubleshoot/azure/azure-kubernetes/cannot-pull-image-from-acr-to-aks-cluster) if you fail to pull images from ACR to the AKS cluster.
+Follow the instructions from the [AKS support doc](/troubleshoot/azure/azure-kubernetes/cannot-pull-image-from-acr-to-aks-cluster) if you fail to pull images from ACR to the AKS cluster.
> [!NOTE] > Some authentication or authorization errors can also occur if there are firewall or network configurations that prevent registry access. See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md).
If you don't resolve your problem here, see the following options.
* [Troubleshoot registry performance](container-registry-troubleshoot-performance.md) * [Community support](https://azure.microsoft.com/support/community/) options * [Microsoft Q&A](/answers/products/)
-* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) - based on information you provide, a quick diagnostic might be run for authentication failures in your registry
+* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) - based on information you provide, a quick diagnostic might be run for authentication failures in your registry
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
## Prerequisites
-> * Install, create and sign in to [ORAS artifact enabled registry](/azure/container-registry/container-registry-oras-artifacts#create-oras-artifact-enabled-registry)
-> * Create or use an [Azure Key Vault](/azure/key-vault/general/quick-create-cli)
+> * Install, create and sign in to [ORAS artifact enabled registry](./container-registry-oras-artifacts.md#create-oras-artifact-enabled-registry)
+> * Create or use an [Azure Key Vault](../key-vault/general/quick-create-cli.md)
>* This tutorial can be run in the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) ## Install the notation CLI and AKV plugin
cosmos-db Connect Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/connect-spark-configuration.md
This article is one among a series of articles on Azure Cosmos DB Cassandra API
## Dependencies for connectivity * **Spark connector for Cassandra:**
- Spark connector is used to connect to Azure Cosmos DB Cassandra API. Identify and use the version of the connector located in [Maven central]( https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector) that is compatible with the Spark and Scala versions of your Spark environment. We recommend an environment which supports Spark 3.0 or higher, and the spark connector available at maven coordinates `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0`. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
+ Spark connector is used to connect to Azure Cosmos DB Cassandra API. Identify and use the version of the connector located in [Maven central](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that is compatible with the Spark and Scala versions of your Spark environment. We recommend an environment that supports Spark 3.2.1 or higher, and the Spark connector available at Maven coordinates `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0`. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using the Spark connector at Maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
* **Azure Cosmos DB helper library for Cassandra API:**
- If you are using a version Spark 2.x then in addition to the Spark connector, you need another library called [azure-cosmos-cassandra-spark-helper]( https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) with maven coordinates `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0` from Azure Cosmos DB in order to handle [rate limiting](./scale-account-throughput.md#handling-rate-limiting-429-errors). This library contains custom connection factory and retry policy classes.
+ If you're using Spark version 2.x, then in addition to the Spark connector, you need another library called [azure-cosmos-cassandra-spark-helper](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) with Maven coordinates `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0` from Azure Cosmos DB in order to handle [rate limiting](./scale-account-throughput.md#handling-rate-limiting-429-errors). This library contains custom connection factory and retry policy classes.
- The retry policy in Azure Cosmos DB is configured to handle HTTP status code 429("Request Rate Large") exceptions. The Azure Cosmos DB Cassandra API translates these exceptions into overloaded errors on the Cassandra native protocol, and you can retry with back-offs. Because Azure Cosmos DB uses provisioned throughput model, request rate limiting exceptions occur when the ingress/egress rates increase. The retry policy protects your spark jobs against data spikes that momentarily exceed the throughput allocated for your container. If using the Spark 3.x connector, implementing this library is not required.
+ The retry policy in Azure Cosmos DB is configured to handle HTTP status code 429("Request Rate Large") exceptions. The Azure Cosmos DB Cassandra API translates these exceptions into overloaded errors on the Cassandra native protocol, and you can retry with back-offs. Because Azure Cosmos DB uses provisioned throughput model, request rate limiting exceptions occur when the ingress/egress rates increase. The retry policy protects your spark jobs against data spikes that momentarily exceed the throughput allocated for your container. If using the Spark 3.x connector, implementing this library isn't required.
> [!NOTE] > The retry policy can protect your spark jobs against momentary spikes only. If you have not configured enough RUs required to run your workload, then the retry policy is not applicable and the retry policy class rethrows the exception.
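For instance, a Spark 3.x shell can pull the recommended connector at launch; a sketch using the Maven coordinates above (cluster-specific options omitted):

```bash
spark-shell --packages com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0
```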
This article is one among a series of articles on Azure Cosmos DB Cassandra API
Listed in the next section are all the relevant parameters for controlling throughput using the Spark Connector for Cassandra. To optimize parameters and maximize throughput for Spark jobs, the `spark.cassandra.output.concurrent.writes`, `spark.cassandra.concurrent.reads`, and `spark.cassandra.input.reads_per_sec` configs need to be configured correctly to avoid excessive throttling and back-off (which in turn can lead to lower throughput).
-The optimal value of these configurations depends on 4 factors:
+The optimal value of these configurations depends on four factors:
- The amount of throughput (Request Units) configured for the table that data is being ingested into. - The number of workers in your Spark cluster. - The number of executors configured for your spark job (which can be controlled using `spark.cassandra.connection.connections_per_executor_max` or `spark.cassandra.connection.remoteConnectionsPerExecutor` depending on Spark version)-- The average latency of each request to cosmos DB, if you are collocated in the same Data Center. Assume this value to be 10 ms for writes and 3 ms for reads.
+- The average latency of each request to Cosmos DB, if you're collocated in the same Data Center. Assume this value to be 10 ms for writes and 3 ms for reads.
-As an example, if we have 5 workers and a value of `spark.cassandra.output.concurrent.writes`= 1, and a value of `spark.cassandra.connection.remoteConnectionsPerExecutor` = 1, then we have 5 workers that are concurrently writing into the table, each with 1 thread. If it takes 10 ms to perform a single write, then we can send 100 requests (1000 milliseconds divided by 10) per second, per thread. With 5 workers, this would be 500 writes per second. At an average cost of 5 request units (RUs) per write, the target table would need a minimum 2500 request units provisioned (5 RUs x 500 writes per second).
+As an example, if we have five workers and a value of `spark.cassandra.output.concurrent.writes`= 1, and a value of `spark.cassandra.connection.remoteConnectionsPerExecutor` = 1, then we have five workers that are concurrently writing into the table, each with one thread. If it takes 10 ms to perform a single write, then we can send 100 requests (1000 milliseconds divided by 10) per second, per thread. With five workers, this would be 500 writes per second. At an average cost of five request units (RUs) per write, the target table would need a minimum 2500 request units provisioned (5 RUs x 500 writes per second).
Increasing the number of executors can increase the number of threads in a given job, which can in turn increase throughput. However, the exact impact varies by job, while controlling throughput with the number of workers is more deterministic. You can also determine the exact cost of a given request by profiling it to get the Request Unit (RU) charge. This helps you provision throughput for your table or keyspace more accurately. Have a look at our article [here](./find-request-unit-charge-cassandra.md) to understand how to get request unit charges at a per-request level. ### Scaling throughput in the database
-The Cassandra Spark connector will saturate throughput in Azure Cosmos DB very efficiently. As a result, even with effective retries, you will need to ensure you have sufficient throughput (RUs) provisioned at the table or keyspace level to prevent rate limiting related errors. The minimum setting of 400 RUs in a given table or keyspace will not be sufficient. Even at minimum throughput configuration settings, the Spark connector can write at a rate corresponding to around **6000 request units** or more.
+The Cassandra Spark connector will saturate throughput in Azure Cosmos DB efficiently. As a result, even with effective retries, you'll need to ensure you have sufficient throughput (RUs) provisioned at the table or keyspace level to prevent rate limiting related errors. The minimum setting of 400 RUs in a given table or keyspace won't be sufficient. Even at minimum throughput configuration settings, the Spark connector can write at a rate corresponding to around **6000 request units** or more.
If the RU setting required for data movement using Spark is higher than what is required for your steady state workload, you can easily scale throughput up and down systematically in Azure Cosmos DB to meet the needs of your workload for a given time period. Read our article on [elastic scale in Cassandra API](scale-account-throughput.md) to understand the different options for scaling programmatically and dynamically.
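As an illustration, provisioned throughput for a Cassandra API table can be changed from the Azure CLI; a sketch with illustrative names:

```azurecli
# Scale the table to 6000 RU/s before a bulk Spark load, then scale back down later
az cosmosdb cassandra table throughput update \
    --account-name myCosmosAccount \
    --resource-group myResourceGroup \
    --keyspace-name mykeyspace \
    --name mytable \
    --throughput 6000
```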
The following table lists Azure Cosmos DB Cassandra API-specific throughput conf
| **Property Name** | **Default value** | **Description** | |||| | spark.cassandra.output.batch.size.rows | 1 |Number of rows per single batch. Set this parameter to 1. This parameter is used to achieve higher throughput for heavy workloads. |
-| spark.cassandra.connection.connections_per_executor_max (Spark 2.x) spark.cassandra.connection.remoteConnectionsPerExecutor (Spark 3.x) | None | Maximum number of connections per node per executor. 10*n is equivalent to 10 connections per node in an n-node Cassandra cluster. So, if you require 5 connections per node per executor for a 5 node Cassandra cluster, then you should set this configuration to 25. Modify this value based on the degree of parallelism or the number of executors that your spark jobs are configured for. |
+| spark.cassandra.connection.connections_per_executor_max (Spark 2.x) spark.cassandra.connection.remoteConnectionsPerExecutor (Spark 3.x) | None | Maximum number of connections per node per executor. 10*n is equivalent to 10 connections per node in an n-node Cassandra cluster. So, if you require five connections per node per executor for a five node Cassandra cluster, then you should set this configuration to 25. Modify this value based on the degree of parallelism or the number of executors that your spark jobs are configured for. |
| spark.cassandra.output.concurrent.writes | 100 | Defines the number of parallel writes that can occur per executor. Because you set "batch.size.rows" to 1, make sure to scale up this value accordingly. Modify this value based on the degree of parallelism or the throughput that you want to achieve for your workload. | | spark.cassandra.concurrent.reads | 512 | Defines the number of parallel reads that can occur per executor. Modify this value based on the degree of parallelism or the throughput that you want to achieve for your workload | | spark.cassandra.output.throughput_mb_per_sec | None | Defines the total write throughput per executor. This parameter can be used as an upper limit for your spark job throughput, and base it on the provisioned throughput of your Cosmos container. |
The following table lists Azure Cosmos DB Cassandra API-specific throughput conf
| spark.cassandra.output.batch.grouping.buffer.size | 1000 | Defines the number of batches per single spark task that can be stored in memory before sending to Cassandra API | | spark.cassandra.connection.keep_alive_ms | 60000 | Defines the period of time until which unused connections are available. |
-Adjust the throughput and degree of parallelism of these parameters based on the workload you expect for your spark jobs, and the throughput you have provisioned for your Cosmos DB account.
+Adjust the throughput and degree of parallelism of these parameters based on the workload you expect for your spark jobs, and the throughput you've provisioned for your Cosmos DB account.
## Connecting to Azure Cosmos DB Cassandra API from Spark ### cqlsh
-The following commands detail how to connect to Azure CosmosDB Cassandra API from cqlsh. This is useful for validation as you run through the samples in Spark.<br>
+The following commands detail how to connect to Azure Cosmos DB Cassandra API from cqlsh. This is useful for validation as you run through the samples in Spark.<br>
**From Linux/Unix/Mac:** ```bash
import com.microsoft.azure.cosmosdb.cassandra
#### Spark session configuration: ```scala
-//Connection-related
-spark.conf.set("spark.cassandra.connection.host","YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com")
-spark.conf.set("spark.cassandra.connection.port","10350")
-spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
-spark.conf.set("spark.cassandra.auth.username","YOUR_ACCOUNT_NAME")
-spark.conf.set("spark.cassandra.auth.password","YOUR_ACCOUNT_KEY")
-spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
-
-//Throughput-related. You can adjust the values as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
+ spark.cassandra.connection.host YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
+ spark.cassandra.connection.port 10350
+ spark.cassandra.connection.ssl.enabled true
+ spark.cassandra.auth.username YOUR_ACCOUNT_NAME
+ spark.cassandra.auth.password YOUR_ACCOUNT_KEY
+// if using Spark 2.x
+// spark.cassandra.connection.factory com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory
+
+//Throughput-related...adjust as needed
+ spark.cassandra.output.batch.size.rows 1
+// spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
``` ## Next steps
cosmos-db Manage Data Cqlsh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-cqlsh.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
## Prerequisites-- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](/azure/cosmos-db/try-free) without an Azure subscription.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription.
## Create a database account
In the Azure portal, open **Data Explorer** to query, modify, and work with this
In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API using CQLSH that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account. > [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet-core.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites In addition, you need: * Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
Now go back to the Azure portal to get your connection string information and co
In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Cosmos DB account. > [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites In addition, you need: * Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
Now go back to the Azure portal to get your connection string information and co
In this quickstart, you've learned how to create an Azure Cosmos DB account, create a container using the Data Explorer, and run a web app. You can now import other data to your Cosmos DB account. > [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-java-v4-sdk.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](/azure/cosmos-db/try-free) without an Azure subscription.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription.
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
Now go back to the Azure portal to get your connection string information and co
In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Java app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account. > [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-java.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](/azure/cosmos-db/try-free) without an Azure subscription.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription.
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
Now go back to the Azure portal to get your connection string information and co
In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Java app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account. > [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-nodejs.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
## Prerequisites In addition, you need:
Go to the Azure portal to get your connection string information and copy it int
In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Node.js app that creates a Cassandra database and container. You can now import more data into your Azure Cosmos DB account. > [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Manage Data Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-python.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](/azure/cosmos-db/try-free) without an Azure subscription.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription.
- [Python 2.7 or 3.6+](https://www.python.org/downloads/). - [Git](https://git-scm.com/downloads). - [Python Driver for Apache Cassandra](https://github.com/datastax/python-driver).
Now go back to the Azure portal to get your connection string information and co
In this quickstart, you learned how to create an Azure Cosmos DB account with Cassandra API, and run a Cassandra Python app that creates a Cassandra database and container. You can now import additional data into your Azure Cosmos DB account. > [!div class="nextstepaction"]
-> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
+> [Import Cassandra data into Azure Cosmos DB](migrate-data.md)
cosmos-db Materialized Views Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views-cassandra.md
New Cassandra API accounts with Materialized Views enabled can be provisioned on
### Log in to the Azure command line interface
-Install Azure CLI as mentioned at [How to install the Azure CLI | Microsoft Docs](https://docs.microsoft.com/cli/azure/install-azure-cli) and log on using the below:
+Install Azure CLI as mentioned at [How to install the Azure CLI | Microsoft Docs](/cli/azure/install-azure-cli) and log on using the below:
```azurecli-interactive
az login
```
This step is optional; you can skip this step if you don't want to use Custom
To use Customer Managed Keys feature and Materialized views together on Cosmos DB account, you must first configure managed identities with Azure Active Directory for your account and then enable support for materialized views.
-You can use the documentation [here](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-cmk) to configure your Cosmos DB Cassandra account with customer managed keys and setup managed identity access to the key Vault. Make sure you follow all the steps in [Using a managed identity in Azure key vault access policy](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-managed-identity). The next step to enable materializedViews on the account.
+You can use the documentation [here](../how-to-setup-cmk.md) to configure your Cosmos DB Cassandra account with customer-managed keys and set up managed identity access to the key vault. Make sure you follow all the steps in [Using a managed identity in Azure key vault access policy](../how-to-setup-managed-identity.md). The next step is to enable materialized views on the account.
Once your account is set up with CMK and managed identity, you can enable materialized views on the account by enabling the "enableMaterializedViews" property in the request body.
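For illustration, the relevant fragment of such a request body might look like this (a minimal sketch; all other account properties are omitted):

```json
{
  "properties": {
    "enableMaterializedViews": true
  }
}
```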
cosmos-db Spark Aggregation Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-aggregation-operations.md
This article describes basic aggregation operations against Azure Cosmos DB Cass
> Server-side filtering and server-side aggregation are currently not supported in Azure Cosmos DB Cassandra API. ## Cassandra API configuration-
+Set the following Spark configuration in your notebook cluster. It's a one-time activity.
```scala
-import org.apache.spark.sql.cassandra._
-//Spark connector
-import com.datastax.spark.connector._
-import com.datastax.spark.connector.cql.CassandraConnector
-import org.apache.spark.sql.functions._
-
-//if using Spark 2.x, CosmosDB library for multiple retry
-//import com.microsoft.azure.cosmosdb.cassandra
- //Connection-related
-spark.conf.set("spark.cassandra.connection.host","YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com")
-spark.conf.set("spark.cassandra.connection.port","10350")
-spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
-spark.conf.set("spark.cassandra.auth.username","YOUR_ACCOUNT_NAME")
-spark.conf.set("spark.cassandra.auth.password","YOUR_ACCOUNT_KEY")
+ spark.cassandra.connection.host YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
+ spark.cassandra.connection.port 10350
+ spark.cassandra.connection.ssl.enabled true
+ spark.cassandra.auth.username YOUR_ACCOUNT_NAME
+ spark.cassandra.auth.password YOUR_ACCOUNT_KEY
// if using Spark 2.x
-// spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
+// spark.cassandra.connection.factory com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory
//Throughput-related...adjust as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
+ spark.cassandra.output.batch.size.rows 1
+// spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
```

> [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector(see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization). However, when using operations that require spark context (e.g. `CassandraConnector(sc)` for some of the operations shown below), connection properties need to be defined at the cluster level.
+> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+
+> [!WARNING]
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Sample data generator

```scala
+import org.apache.spark.sql.cassandra._
+//Spark connector
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql.CassandraConnector
+import org.apache.spark.sql.functions._
+
+//if using Spark 2.x, CosmosDB library for multiple retry
+//import com.microsoft.azure.cosmosdb.cassandra
+
// Generate a simple dataset containing five values
val booksDF = Seq(
("b00001", "Arthur Conan Doyle", "A study in scarlet", 1887,11.33),
Count against dataframes is currently not supported. The sample below shows how
Choose a [storage option](https://spark.apache.org/docs/2.2.0/rdd-programming-guide.html#which-storage-level-to-choose) from the following available options to avoid running into "out of memory" issues:
-* MEMORY_ONLY: This is the default storage option. Stores RDD as deserialized Java objects in the JVM. If the RDD does not fit in memory, some partitions will not be cached and they are recomputed on the fly each time they're needed.
+* MEMORY_ONLY: It's the default storage option. Stores RDD as deserialized Java objects in the JVM. If the RDD doesn't fit in memory, some partitions won't be cached, and they're recomputed on the fly each time they're needed.
-* MEMORY_AND_DISK: Stores RDD as deserialized Java objects in the JVM. If the RDD does not fit in memory, store the partitions that don't fit on disk, and whenever required, read them from the location they are stored.
+* MEMORY_AND_DISK: Stores RDD as deserialized Java objects in the JVM. If the RDD doesn't fit in memory, store the partitions that don't fit on disk, and whenever required, read them from the location where they're stored.
-* MEMORY_ONLY_SER (Java/Scala): Stores RDD as serialized Java objects- one-byte array per partition. This option is space-efficient when compared to deserialized objects, especially when using a fast serializer, but more CPU-intensive to read.
+* MEMORY_ONLY_SER (Java/Scala): Stores RDD as serialized Java objects (one byte array per partition). This option is space-efficient when compared to deserialized objects, especially when using a fast serializer, but more CPU-intensive to read.
* MEMORY_AND_DISK_SER (Java/Scala): This storage option is like MEMORY_ONLY_SER; the only difference is that it spills partitions that don't fit in memory to disk instead of recomputing them when they're needed.
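As a sketch of the pattern described above (assuming the `books_ks.books` table used throughout these samples), you can persist the RDD before counting so that spilled partitions aren't recomputed:

```scala
import org.apache.spark.storage.StorageLevel
import com.datastax.spark.connector._

// Persist with MEMORY_AND_DISK so partitions that don't fit in memory are
// spilled to disk instead of being recomputed, then run the count.
val booksRDD = sc.cassandraTable("books_ks", "books")
booksRDD.persist(StorageLevel.MEMORY_AND_DISK)
println(booksRDD.count)
```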
cosmos-db Spark Create Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-create-operations.md
Last updated 09/24/2018
This article describes how to insert sample data into a table in Azure Cosmos DB Cassandra API from Spark. ## Cassandra API configuration
+Set the following Spark configuration in your notebook cluster. It's a one-time activity.
```scala
-import org.apache.spark.sql.cassandra._
-//Spark connector
-import com.datastax.spark.connector._
-import com.datastax.spark.connector.cql.CassandraConnector
-
-//if using Spark 2.x, CosmosDB library for multiple retry
-//import com.microsoft.azure.cosmosdb.cassandra
- //Connection-related
-spark.conf.set("spark.cassandra.connection.host","YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com")
-spark.conf.set("spark.cassandra.connection.port","10350")
-spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
-spark.conf.set("spark.cassandra.auth.username","YOUR_ACCOUNT_NAME")
-spark.conf.set("spark.cassandra.auth.password","YOUR_ACCOUNT_KEY")
+ spark.cassandra.connection.host YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
+ spark.cassandra.connection.port 10350
+ spark.cassandra.connection.ssl.enabled true
+ spark.cassandra.auth.username YOUR_ACCOUNT_NAME
+ spark.cassandra.auth.password YOUR_ACCOUNT_KEY
// if using Spark 2.x
-// spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
+// spark.cassandra.connection.factory com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory
//Throughput-related...adjust as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
+ spark.cassandra.output.batch.size.rows 1
+// spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
```

> [!NOTE]
-> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING]
-> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Dataframe API

### Create a Dataframe with sample data

```scala
+import org.apache.spark.sql.cassandra._
+//Spark connector
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql.CassandraConnector
+
+//if using Spark 2.x, CosmosDB library for multiple retry
+//import com.microsoft.azure.cosmosdb.cassandra
+
// Generate a dataframe containing five records
val booksDF = Seq(
("b00001", "Arthur Conan Doyle", "A study in scarlet", 1887),
select * from books;
## Resilient Distributed Database (RDD) API
-### Create a RDD with sample data
+### Create an RDD with sample data
```scala
//Drop and re-create table to delete records created in the previous section
val cdbConnector = CassandraConnector(sc)
When saving data to Cassandra API, you can also set time-to-live and consistency
```scala
import com.datastax.spark.connector.writer._
+import com.datastax.oss.driver.api.core.ConsistencyLevel
//Persist
booksRDD.saveToCassandra("books_ks", "books", SomeColumns("book_id", "book_author", "book_name", "book_pub_year"),
  writeConf = WriteConf(ttl = TTLOption.constant(900000), consistencyLevel = ConsistencyLevel.ALL))
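// Note (added for clarity, not part of the original sample): the TTL above is
// expressed in seconds, and the requested ConsistencyLevel is honored according
// to the Cassandra API's consistency-level mapping.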
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
This article details how to work with Azure Cosmos DB Cassandra API from Spark o
* **Cassandra API instance configuration for Cassandra connector:**
- The connector for Cassandra API requires the Cassandra connection details to be initialized as part of the spark context. When you launch a Databricks notebook, the spark context is already initialized and it is not advisable to stop and reinitialize it. One solution is to add the Cassandra API instance configuration at a cluster level, in the cluster spark configuration. This is a one-time activity per cluster. Add the following code to the Spark configuration as a space separated key value pair:
+ The connector for Cassandra API requires the Cassandra connection details to be initialized as part of the spark context. When you launch a Databricks notebook, the spark context is already initialized, and it isn't advisable to stop and reinitialize it. One solution is to add the Cassandra API instance configuration at a cluster level, in the cluster spark configuration. It's a one-time activity per cluster. Add the following code to the Spark configuration as space-separated key-value pairs:
```scala
spark.cassandra.connection.host YOUR_COSMOSDB_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
This article details how to work with Azure Cosmos DB Cassandra API from Spark o
* **Cassandra Spark connector:** - To integrate Azure Cosmos DB Cassandra API with Spark, the Cassandra connector should be attached to the Azure Databricks cluster. To attach the cluster:
- * Review the Databricks runtime version, the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See ["Upload a Maven package or Spark package"](https://docs.databricks.com/user-guide/libraries.html) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 7.5, which supports Spark 3.0. To add the Apache Spark Cassandra Connector, your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0` in Maven coordinates. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
+ * Review the Databricks runtime version and the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See the ["Upload a Maven package or Spark package"](https://docs.databricks.com/user-guide/libraries.html) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector to your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using the spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
-* **Azure Cosmos DB Cassandra API-specific library:** - If you are using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB Cassandra API. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0`[maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster.
+* **Azure Cosmos DB Cassandra API-specific library:** - If you're using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB Cassandra API. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0`[maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster.
> [!NOTE]
-> If you are using Spark 3.0, you do not need to install the Cosmos DB Cassandra API-specific library mentioned above.
+> If you are using Spark 3.x, you do not need to install the Cosmos DB Cassandra API-specific library mentioned above.
> [!WARNING]
-> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Sample notebooks
-A list of Azure Databricks [sample notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/notebooks/scala) are available in GitHub repo for you to download. These samples include how to connect to Azure Cosmos DB Cassandra API from Spark and perform different CRUD operations on the data. You can also [import all the notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/dbc) into your Databricks cluster workspace and run it.
+A list of Azure Databricks [sample notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/notebooks/scala) is available in the GitHub repo for you to download. These samples include how to connect to Azure Cosmos DB Cassandra API from Spark and perform different CRUD operations on the data. You can also [import all the notebooks](https://github.com/Azure-Samples/azure-cosmos-db-cassandra-api-spark-notebooks-databricks/tree/main/dbc) into your Databricks cluster workspace and run them.
## Accessing Azure Cosmos DB Cassandra API from Spark Scala programs
cosmos-db Spark Ddl Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-ddl-operations.md
This article details keyspace and table DDL operations against Azure Cosmos DB C
## Spark context
- The connector for Cassandra API requires the Cassandra connection details to be initialized as part of the spark context. When you launch a notebook, the spark context is already initialized and it is not advisable to stop and reinitialize it. One solution is to add the Cassandra API instance configuration at a cluster level, in the cluster spark configuration. This is a one-time activity per cluster. Add the following code to the Spark configuration as a space separated key value pair:
+ The connector for Cassandra API requires the Cassandra connection details to be initialized as part of the spark context. When you launch a notebook, the spark context is already initialized, and it isn't advisable to stop and reinitialize it. One solution is to add the Cassandra API instance configuration at a cluster level, in the cluster spark configuration. It's a one-time activity per cluster. Add the following code to the Spark configuration as space-separated key-value pairs:
```scala
spark.cassandra.connection.host YOUR_COSMOSDB_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
This article details keyspace and table DDL operations against Azure Cosmos DB C
spark.cassandra.connection.ssl.enabled true
spark.cassandra.auth.username YOUR_COSMOSDB_ACCOUNT_NAME
spark.cassandra.auth.password YOUR_COSMOSDB_KEY
+
+ //Throughput-related...adjust as needed
+ spark.cassandra.output.batch.size.rows 1
+ // spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
```

## Cassandra API-related configuration
import com.datastax.spark.connector.cql.CassandraConnector
//if using Spark 2.x, CosmosDB library for multiple retry
//import com.microsoft.azure.cosmosdb.cassandra
//spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
-
-//Throughput-related...adjust as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
```

> [!NOTE]
-> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
+> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING]
-> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.1**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Keyspace DDL operations
USE books_ks;
DESCRIBE books;
```
-Provisioned throughput and default TTL values are not shown in the output of the previous command, you can get these values from the portal.
+Provisioned throughput and default TTL values aren't shown in the output of the previous command; you can get these values from the portal.
### Alter table
After creating the keyspace and the table, proceed to the following articles for
* [Upsert operations](spark-upsert-operations.md) * [Delete operations](spark-delete-operation.md) * [Aggregation operations](spark-aggregation-operations.md)
-* [Table copy operations](spark-table-copy-operations.md)
+* [Table copy operations](spark-table-copy-operations.md)
cosmos-db Spark Delete Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-delete-operation.md
ms.devlang: scala
This article describes how to delete data in Azure Cosmos DB Cassandra API tables from Spark. ## Cassandra API configuration-
+Set the following Spark configuration in your notebook cluster. It's a one-time activity.
```scala
-import org.apache.spark.sql.cassandra._
-//Spark connector
-import com.datastax.spark.connector._
-import com.datastax.spark.connector.cql.CassandraConnector
-
-//if using Spark 2.x, CosmosDB library for multiple retry
-//import com.microsoft.azure.cosmosdb.cassandra
- //Connection-related
-spark.conf.set("spark.cassandra.connection.host","YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com")
-spark.conf.set("spark.cassandra.connection.port","10350")
-spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
-spark.conf.set("spark.cassandra.auth.username","YOUR_ACCOUNT_NAME")
-spark.conf.set("spark.cassandra.auth.password","YOUR_ACCOUNT_KEY")
+ spark.cassandra.connection.host YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
+ spark.cassandra.connection.port 10350
+ spark.cassandra.connection.ssl.enabled true
+ spark.cassandra.auth.username YOUR_ACCOUNT_NAME
+ spark.cassandra.auth.password YOUR_ACCOUNT_KEY
// if using Spark 2.x
-// spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
+// spark.cassandra.connection.factory com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory
//Throughput-related...adjust as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
+ spark.cassandra.output.batch.size.rows 1
+// spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
```

> [!NOTE]
-> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization). However, when using operations that require spark context (for example, `CassandraConnector(sc)` for `delete` as shown below), connection properties need to be defined at the cluster level.
+> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING]
-> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Sample data generator
-We will use this code fragment to generate sample data:
+We'll use this code fragment to generate sample data:
```scala
+import org.apache.spark.sql.cassandra._
+//Spark connector
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql.CassandraConnector
+
+//if using Spark 2.x, CosmosDB library for multiple retry
+//import com.microsoft.azure.cosmosdb.cassandra
+
//Create dataframe
val booksDF = Seq(
("b00001", "Arthur Conan Doyle", "A study in scarlet", 1887,11.33),
cosmos-db Spark Read Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-read-operation.md
This article describes how to read data stored in Azure Cosmos DB Cassandra API from Spark. ## Cassandra API configuration
+Set the following Spark configuration in your notebook cluster. It's a one-time activity.
```scala
-import org.apache.spark.sql.cassandra._
-//Spark connector
-import com.datastax.spark.connector._
-import com.datastax.spark.connector.cql.CassandraConnector
-
-//if using Spark 2.x, CosmosDB library for multiple retry
-//import com.microsoft.azure.cosmosdb.cassandra
- //Connection-related
-spark.conf.set("spark.cassandra.connection.host","YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com")
-spark.conf.set("spark.cassandra.connection.port","10350")
-spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
-spark.conf.set("spark.cassandra.auth.username","YOUR_ACCOUNT_NAME")
-spark.conf.set("spark.cassandra.auth.password","YOUR_ACCOUNT_KEY")
+ spark.cassandra.connection.host YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
+ spark.cassandra.connection.port 10350
+ spark.cassandra.connection.ssl.enabled true
+ spark.cassandra.auth.username YOUR_ACCOUNT_NAME
+ spark.cassandra.auth.password YOUR_ACCOUNT_KEY
// if using Spark 2.x
-// spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
+// spark.cassandra.connection.factory com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory
//Throughput-related...adjust as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
+ spark.cassandra.output.batch.size.rows 1
+// spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
```

> [!NOTE]
-> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector(see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING]
-> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Dataframe API

### Read table using session.read.format command

```scala
+import org.apache.spark.sql.cassandra._
+//Spark connector
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql.CassandraConnector
+
+//if using Spark 2.x, CosmosDB library for multiple retry
+//import com.microsoft.azure.cosmosdb.cassandra
+
val readBooksDF = sqlContext
  .read
  .format("org.apache.spark.sql.cassandra")
cosmos-db Spark Table Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-table-copy-operations.md
ms.devlang: scala
This article describes how to copy data between tables in Azure Cosmos DB Cassandra API from Spark. The commands described in this article can also be used to copy data from Apache Cassandra tables to Azure Cosmos DB Cassandra API tables. ## Cassandra API configuration-
+Set the following Spark configuration in your notebook cluster. It's a one-time activity.
```scala
-import org.apache.spark.sql.cassandra._
-//Spark connector
-import com.datastax.spark.connector._
-import com.datastax.spark.connector.cql.CassandraConnector
-
-//if using Spark 2.x, CosmosDB library for multiple retry
-//import com.microsoft.azure.cosmosdb.cassandra
- //Connection-related
-spark.conf.set("spark.cassandra.connection.host","YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com")
-spark.conf.set("spark.cassandra.connection.port","10350")
-spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
-spark.conf.set("spark.cassandra.auth.username","YOUR_ACCOUNT_NAME")
-spark.conf.set("spark.cassandra.auth.password","YOUR_ACCOUNT_KEY")
+ spark.cassandra.connection.host YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
+ spark.cassandra.connection.port 10350
+ spark.cassandra.connection.ssl.enabled true
+ spark.cassandra.auth.username YOUR_ACCOUNT_NAME
+ spark.cassandra.auth.password YOUR_ACCOUNT_KEY
// if using Spark 2.x
-// spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
+// spark.cassandra.connection.factory com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory
//Throughput-related...adjust as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
+ spark.cassandra.output.batch.size.rows 1
+// spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
```

> [!NOTE]
-> If you are using Spark 3.0 or higher, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization).
+> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING]
-> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Insert sample data

```scala
+import org.apache.spark.sql.cassandra._
+//Spark connector
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql.CassandraConnector
+
+//if using Spark 2.x, CosmosDB library for multiple retry
+//import com.microsoft.azure.cosmosdb.cassandra
+
val booksDF = Seq(
("b00001", "Arthur Conan Doyle", "A study in scarlet", 1887,11.33),
("b00023", "Arthur Conan Doyle", "A sign of four", 1890,22.45),
sqlContext
.show
```
-### Copy data between tables (destination table does not exist)
+### Copy data between tables (destination table doesn't exist)
```scala
import com.datastax.spark.connector._
cosmos-db Spark Upsert Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-upsert-operations.md
ms.devlang: scala
This article describes how to upsert data into Azure Cosmos DB Cassandra API from Spark. ## Cassandra API configuration-
+Set the following Spark configuration in your notebook cluster. It's a one-time activity.
```scala
-import org.apache.spark.sql.cassandra._
-//Spark connector
-import com.datastax.spark.connector._
-import com.datastax.spark.connector.cql.CassandraConnector
-
-//if using Spark 2.x, CosmosDB library for multiple retry
-//import com.microsoft.azure.cosmosdb.cassandra
- //Connection-related
-spark.conf.set("spark.cassandra.connection.host","YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com")
-spark.conf.set("spark.cassandra.connection.port","10350")
-spark.conf.set("spark.cassandra.connection.ssl.enabled","true")
-spark.conf.set("spark.cassandra.auth.username","YOUR_ACCOUNT_NAME")
-spark.conf.set("spark.cassandra.auth.password","YOUR_ACCOUNT_KEY")
+ spark.cassandra.connection.host YOUR_ACCOUNT_NAME.cassandra.cosmosdb.azure.com
+ spark.cassandra.connection.port 10350
+ spark.cassandra.connection.ssl.enabled true
+ spark.cassandra.auth.username YOUR_ACCOUNT_NAME
+ spark.cassandra.auth.password YOUR_ACCOUNT_KEY
// if using Spark 2.x
-// spark.conf.set("spark.cassandra.connection.factory", "com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory")
+// spark.cassandra.connection.factory com.microsoft.azure.cosmosdb.cassandra.CosmosDbConnectionFactory
//Throughput-related...adjust as needed
-spark.conf.set("spark.cassandra.output.batch.size.rows", "1")
-//spark.conf.set("spark.cassandra.connection.connections_per_executor_max", "10") // Spark 2.x
-spark.conf.set("spark.cassandra.connection.remoteConnectionsPerExecutor", "10") // Spark 3.x
-spark.conf.set("spark.cassandra.output.concurrent.writes", "1000")
-spark.conf.set("spark.cassandra.concurrent.reads", "512")
-spark.conf.set("spark.cassandra.output.batch.grouping.buffer.size", "1000")
-spark.conf.set("spark.cassandra.connection.keep_alive_ms", "600000000")
+ spark.cassandra.output.batch.size.rows 1
+// spark.cassandra.connection.connections_per_executor_max 10 // Spark 2.x
+ spark.cassandra.connection.remoteConnectionsPerExecutor 10 // Spark 3.x
+ spark.cassandra.output.concurrent.writes 1000
+ spark.cassandra.concurrent.reads 512
+ spark.cassandra.output.batch.grouping.buffer.size 1000
+ spark.cassandra.connection.keep_alive_ms 600000000
```

> [!NOTE]
-> If you are using Spark 3.0, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above). You will see that connection related properties are defined within the notebook above. Using the syntax below, connection properties can be defined in this manner without needing to be defined at the cluster level (Spark context initialization). However, when using operations that require spark context (for example, `CassandraConnector(sc)` for `update` as shown below), connection properties need to be defined at the cluster level.
+> If you are using Spark 3.x, you do not need to install the Cosmos DB helper and connection factory. You should also use `remoteConnectionsPerExecutor` instead of `connections_per_executor_max` for the Spark 3 connector (see above).
> [!WARNING]
-> The Spark 3 samples shown in this article have been tested with Spark **version 3.0.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.0.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
+> The Spark 3 samples shown in this article have been tested with Spark **version 3.2.1** and the corresponding Cassandra Spark Connector **com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0**. Later versions of Spark and/or the Cassandra connector may not function as expected.
## Dataframe API

### Create a dataframe

```scala
+import org.apache.spark.sql.cassandra._
+//Spark connector
+import com.datastax.spark.connector._
+import com.datastax.spark.connector.cql.CassandraConnector
+
+//if using Spark 2.x, CosmosDB library for multiple retry
+//import com.microsoft.azure.cosmosdb.cassandra
+
// (1) Update: Changing author name to include prefix of "Sir"
// (2) Insert: adding a new book
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
This quickstart will create a single Azure Cosmos DB account using the MongoDB A
### Create a new .NET app
-Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-newt) to create a new console app.
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) command to create a new console app.
```console
dotnet new console -o <app-name>
cosmos-db Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-javascript.md
Remove-AzResourceGroup @parameters
In this quickstart, you learned how to create an Azure Cosmos DB MongoDB API account, create a database, and create a collection using the MongoDB driver. You can now dive deeper into the Cosmos DB MongoDB API to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources. > [!div class="nextstepaction"]
-> [Migrate MongoDB to Azure Cosmos DB API for MongoDB offline](/azure/dms/tutorial-mongodb-cosmos-db?toc=%2Fazure%2Fcosmos-db%2Ftoc.json%3Ftoc%3D%2Fazure%2Fcosmos-db%2Ftoc.json)
+> [Migrate MongoDB to Azure Cosmos DB API for MongoDB offline](../../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%3ftoc%3d%2fazure%2fcosmos-db%2ftoc.json)
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
az group delete --name $resourceGroup
## Next steps - [Prevent Azure Cosmos DB resources from being deleted or changed](../../../resource-locks.md)-- [Lock resources to prevent unexpected changes](/azure/azure-resource-manager/management/lock-resources)
+- [Lock resources to prevent unexpected changes](../../../../azure-resource-manager/management/lock-resources.md)
- [How to audit Azure Cosmos DB control plane operations](../../../audit-control-plane-logs.md) - [Azure Cosmos DB CLI documentation](/cli/azure/cosmosdb)-- [Azure Cosmos DB CLI GitHub repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb)
+- [Azure Cosmos DB CLI GitHub repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb)
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-java.md
Currently, the bulk executor library is supported only by Azure Cosmos DB SQL AP
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
-* You can [try Azure Cosmos DB for free](/azure/cosmos-db/try-free) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
+* You can [try Azure Cosmos DB for free](../try-free.md) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
* [Java Development Kit (JDK) 1.8+](/java/azure/jdk/) - On Ubuntu, run `apt-get install default-jdk` to install the JDK.
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
This quickstart shows how to get started with the Azure Cosmos DB Table API from
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-table-api-dotnet-samples) are available on GitHub as a .NET project.
-[Table API reference documentation](/azure/storage/tables) | [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
+[Table API reference documentation](../../storage/tables/index.yml) | [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
## Prerequisites
You can retrieve a specific item from a table using the [``TableEntity.GetEntity
After you insert an item, you can also run a query to get all items that match a specific filter by using the `TableClient.Query<T>` method. This example filters products by category using [Linq](/dotnet/standard/linq) syntax, which is a benefit of using strongly typed `ITableEntity` models like the `Product` class. > [!NOTE]
-> You can also query items using [OData](/rest/api/storageservices/querying-tables-and-entities) syntax. You can see an example of this approach in the [Query Data](/azure/cosmos-db/table/tutorial-query-table) tutorial.
+> You can also query items using [OData](/rest/api/storageservices/querying-tables-and-entities) syntax. You can see an example of this approach in the [Query Data](./tutorial-query-table.md) tutorial.
:::code language="csharp" source="~/azure-cosmos-tableapi-dotnet/001-quickstart/Program.cs" id="query_items" :::
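As an illustrative sketch of that query pattern (the `Product` properties and the category value here are assumptions for illustration, not the exact sample source):

```csharp
// Query items whose Category matches a filter, using a LINQ predicate
// against the strongly typed ITableEntity model.
Pageable<Product> matches = tableClient.Query<Product>(
    product => product.Category == "gear-surf-surfboards");

foreach (Product product in matches)
{
    Console.WriteLine($"{product.RowKey}\t{product.Name}");
}
```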
Remove-AzResourceGroup @parameters
In this quickstart, you learned how to create an Azure Cosmos DB Table API account, create a table, and manage entries using the .NET SDK. You can now dive deeper into the SDK to learn how to perform more advanced data queries and management tasks in your Azure Cosmos DB Table API resources. > [!div class="nextstepaction"]
-> [Get started with Azure Cosmos DB Table API and .NET](/azure/cosmos-db/table/how-to-dotnet-get-started)
+> [Get started with Azure Cosmos DB Table API and .NET](./how-to-dotnet-get-started.md)
cost-management-billing Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/group-filter.md
The following table lists some of the most common grouping and filtering options
| **Frequency** | Break down usage-based, one-time, and recurring costs. | |
| **Invoice ID** | Break down costs by billed invoice. | Unbilled charges don't have an invoice ID yet, and EA costs don't include invoice details, so they will show as **No invoice ID**. |
| **Location** | Break down costs by resource location or region. | Purchases and Marketplace usage may be shown as **unassigned** or **No resource location**. |
-| **Meter** | Break down costs by usage meter. | Purchases and Marketplace usage will show as **No meter**. Refer to **Charge type** to identify purchases and **Publisher type** to identify Marketplace charges. |
+| **Meter** | Break down costs by usage meter. | Purchases and Marketplace usage will show as **unassigned** or **No meter**. Refer to **Charge type** to identify purchases and **Publisher type** to identify Marketplace charges. |
| **Operation** | Break down AWS costs by operation. | Applicable only to AWS scopes and management groups. Azure data doesn't include operation and will show as **No operation** - use **Meter** instead. |
| **Pricing model** | Break down costs by on-demand, reservation, or spot usage. | Purchases show as **OnDemand**. If you see **Not applicable**, group by **Reservation** to determine whether the usage is reservation or on-demand usage and **Charge type** to identify purchases. |
| **Provider** | Break down costs by the provider type: Azure, Microsoft 365, Dynamics 365, AWS, and so on. | Identifier for product and line of business. |
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-individual-bill.md
Your usage charges are displayed at the meter level. The following terms mean th
|Resource |MeterName |
|Region |MeterRegion |
|Consumed | Quantity |
-|Included |Included Quantity |
|Billable |Overage Quantity |
|Rate | EffectivePrice|
| Value | Cost |
data-factory Data Flow Cast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cast.md
+
+ Title: Cast transformation in mapping data flow
+description: Learn how to use a mapping data flow cast transformation to easily change column data types in Azure Data Factory or Synapse Analytics pipelines.
++++++++ Last updated : 07/13/2022++
+# Cast transformation in mapping data flow
+++
+Use the cast transformation to easily modify the data types of individual columns in a data flow. The cast transformation also enables an easy way to check for casting errors.
+
+## Configuration
++
+To modify the data type for columns in your data flow, add columns to "Cast settings" using the plus (+) sign.
+
+**Column name:** Pick the column you wish to cast from your list of metadata columns.
+
+**Type:** Choose the data type to cast your column to. If you pick "complex", you can then select "Define complex type" and define structures, arrays, and maps inside the expression builder.
+
+**Format:** Some data types, like decimal and dates, will allow for additional formatting options.
+
+**Assert type check:** The cast transformation allows for type checking. If the casting fails, the row will be marked as an assertion error that you can trap later in the stream.
+
+## Data flow script
+
+### Syntax
+
+```
+<incomingStream>
+ cast(output(
+ AddressID as integer,
+ AddressLine1 as string,
+ AddressLine2 as string,
+ City as string
+ ),
+    errors: true) ~> <castTransformationName>
+```
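As a concrete illustration of this syntax, a cast that converts string columns to numeric types with error trapping turned on might look like the following. The stream, column, and transformation names here are invented, not from the article:

```
MoviesSource
    cast(output(
        movieId as integer,
        year as integer,
        title as string
    ),
    errors: true) ~> CastMovies
```

Because `errors: true` is set, a row whose values can't be converted is marked with an assertion error instead of failing the data flow, so you can trap those rows later in the stream.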
+## Next steps
+
+Modify existing columns and new columns using the [derived column transformation](data-flow-derived-column.md).
data-factory Data Flow Transformation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-transformation-overview.md
Previously updated : 12/20/2021 Last updated : 07/14/2022 # Mapping data flow transformation overview
Below is a list of the transformations currently supported in mapping data flow.
| [Aggregate](data-flow-aggregate.md) | Schema modifier | Define different types of aggregations such as SUM, MIN, MAX, and COUNT grouped by existing or computed columns. |
| [Alter row](data-flow-alter-row.md) | Row modifier | Set insert, delete, update, and upsert policies on rows. |
| [Assert](data-flow-assert.md) | Row modifier | Set assert rules for each row. |
+| [Cast](data-flow-cast.md) | Schema modifier | Change column data types with type checking. |
| [Conditional split](data-flow-conditional-split.md) | Multiple inputs/outputs | Route rows of data to different streams based on matching conditions. |
| [Derived column](data-flow-derived-column.md) | Schema modifier | Generate new columns or modify existing fields using the data flow expression language. |
| [External call](data-flow-external-call.md) | Schema modifier | Call external endpoints inline row-by-row. |
| [Exists](data-flow-exists.md) | Multiple inputs/outputs | Check whether your data exists in another source or stream. |
| [Filter](data-flow-filter.md) | Row modifier | Filter a row based upon a condition. |
| [Flatten](data-flow-flatten.md) | Formatters | Take array values inside hierarchical structures such as JSON and unroll them into individual rows. |
+| [Flowlet](concepts-data-flow-flowlet.md) | Flowlets | Build and include custom re-usable transformation logic. |
| [Join](data-flow-join.md) | Multiple inputs/outputs | Combine data from two sources or streams. |
| [Lookup](data-flow-lookup.md) | Multiple inputs/outputs | Reference data from another source. |
| [New branch](data-flow-new-branch.md) | Multiple inputs/outputs | Apply multiple sets of operations and transformations against the same data stream. |
databox Data Box Deploy Export Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-export-picked-up.md
Previously updated : 04/21/2022 Last updated : 06/16/2022
+zone_pivot_groups: data-box-shipping
# Customer intent: As an IT admin, I need to be able to return Data Box to upload on-premises data from my server onto Azure.
Before you begin, make sure:
The next steps are determined by where you are returning the device.
-## Ship Data Box back
+## Ship Data Box back
-Ensure that the data copy from the device is complete and **Prepare to ship** run is successful.
+Based on the region where you're shipping the device, the procedure is different. In many countries/regions, you can use Microsoft managed shipping or [self-managed shipping](#self-managed-shipping).
-Based on the region where you're shipping the device, the procedure is different. In many countries/regions, you can use Microsoft managed shipping or self-managed shipping.
-### Microsoft managed shipping
+If using Microsoft managed shipping, follow these steps.
-Follow the guidelines for the region you're shipping from if you're using Microsoft managed shipping.
+## Shipping in Americas
-## [US & Canada](#tab/in-us-canada)
+### US & Canada
[!INCLUDE [data-box-shipping-in-us-canada](../../includes/data-box-shipping-in-us-canada.md)]
-## [EU](#tab/in-eu)
++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Europe
+
+### [EU](#tab/in-europe)
[!INCLUDE [data-box-shipping-in-eu](../../includes/data-box-shipping-in-eu.md)]

**If you're shipping back to Azure datacenters in Germany or Switzerland,** you can also [use self-managed shipping](#self-managed-shipping).
-## [UK](#tab/in-uk)
+### [UK](#tab/in-uk)
[!INCLUDE [data-box-shipping-in-uk](../../includes/data-box-shipping-in-uk.md)]
-## [Australia](#tab/in-australia)
+### [Norway](#tab/in-norway)
++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Asia
-## [Japan](#tab/in-japan)
+### [Japan](#tab/in-japan)
[!INCLUDE [data-box-shipping-in-japan](../../includes/data-box-shipping-in-japan.md)]
-## [Singapore](#tab/in-singapore)
+### [Singapore](#tab/in-singapore)
[!INCLUDE [data-box-shipping-in-singapore](../../includes/data-box-shipping-in-singapore.md)]
-## [Hong Kong](#tab/in-hk)
+### [Hong Kong](#tab/in-hk)
[!INCLUDE [data-box-shipping-in-hk](../../includes/data-box-shipping-in-hk.md)]
-## [Korea](#tab/in-korea)
+### [Korea](#tab/in-korea)
[!INCLUDE [data-box-shipping-in-korea](../../includes/data-box-shipping-in-korea.md)]
-## [S Africa](#tab/in-sa)
+### [UAE](#tab/in-uae)
-## [UAE](#tab/in-uae)
-## [Norway](#tab/in-norway)
-
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Australia
-### Self-managed shipping
+### Australia
++++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Africa
+
+### S Africa
+++
+## Self-managed shipping
+ [!INCLUDE [data-box-shipping-self-managed](../../includes/data-box-shipping-self-managed.md)]
+
+### Shipping in Brazil
+
+To schedule a device return in Brazil, send an email to [adbops@microsoft.com](mailto:adbops@microsoft.com) with the following information:
+
+```
+Subject: Request Azure Data Box Disk drop-off for order: <ordername>
+
+- Order name
+- Contact name of the person who will drop off the Data Box Disk (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
+- Inbound Nota Fiscal (A copy of the inbound Nota Fiscal will be required at drop-off.)
+```
++
## Erasure of data from Data Box

Once the device reaches the Azure datacenter, the Data Box erases the data on its disks as per the [NIST SP 800-88 Revision 1 guidelines](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi).
databox Data Box Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-picked-up.md
Title: Tutorial to ship Azure Data Box back| Microsoft Docs
-description: In this tutorial, learn how to return Azure Data Box, including preparing to ship, shipping Data Box, verifying data upload, and erasing data from Data Box.
+ Title: Tutorial to return Azure Data Box
+description: In this tutorial, learn how to return Azure Data Box, including shipping the device, verifying data upload to Azure, and erasing data from Data Box.
Previously updated : 03/31/2022 Last updated : 06/16/2022
+zone_pivot_groups: data-box-shipping
# Customer intent: As an IT admin, I need to be able to return a Data Box to upload on-premises data from my server onto Azure. ::: zone target="docs"
-# Tutorial: Return Azure Data Box and verify data upload to Azure
+# Tutorial: Return Azure Data Box and verify data has been uploaded to Azure
::: zone-end
In this tutorial, you will learn about topics such as:
> [!div class="checklist"]
>
> * Prerequisites
-> * Prepare to ship
> * Ship Data Box to Microsoft
> * Verify data upload to Azure
> * Erasure of data from Data Box
In this tutorial, you will learn about topics such as:
Before you begin, make sure:
-* You've have completed the [Tutorial: Copy data to Azure Data Box and verify](data-box-deploy-copy-data.md).
-* Copy jobs are complete and there are no errors on the **Connect and copy** page. **Prepare to ship** can't run if copy jobs are in progress or there are errors in the **Connect and copy** page.
-
-## Prepare to ship
-
+* You've completed the [Tutorial: Prepare to ship Azure Data Box](data-box-deploy-prepare-to-ship.md).
+* The data copy to the device completed and the **Prepare to ship** run was successful.
::: zone-end -
-After the data copy is complete, you prepare and ship the device. When the device reaches Azure datacenter, data is automatically uploaded to Azure.
-
-## Prepare to ship
+## Ship Data Box back
-Before you prepare to ship, make sure that copy jobs are complete.
+Based on the region where you're shipping the device, the procedure is different. In many countries/regions, you can use Microsoft managed shipping or [self-managed shipping](#self-managed-shipping).
-1. Go to **Prepare to ship** page in the local web UI and start the ship preparation.
-2. Turn off the device from the local web UI. Remove the cables from the device.
-The next steps are determined by where you're returning the device.
+If using Microsoft managed shipping, follow these steps.
+## Shipping in Americas
+### US & Canada
-## Ship Data Box back
-
-Make sure the data copy to the device completed and the **Prepare to ship** run was successful.
-
-Based on the region where you're shipping the device, the procedure is different. In many countries/regions, you can use [Microsoft managed shipping](#microsoft-managed-shipping) or [self-managed shipping](#self-managed-shipping).
-### Microsoft managed shipping
-Follow the guidelines for the region you're shipping from if you're using Microsoft managed shipping.
-## [US & Canada](#tab/in-us-canada)
+If using Microsoft managed shipping, follow these steps.
+## Shipping in Europe
-## [EU](#tab/in-europe)
+### [EU](#tab/in-europe)
[!INCLUDE [data-box-shipping-in-eu](../../includes/data-box-shipping-in-eu.md)]

**If you're shipping back to Azure datacenters in Germany or Switzerland,** you can also [use self-managed shipping](#self-managed-shipping).
-## [UK](#tab/in-uk)
+### [UK](#tab/in-uk)
[!INCLUDE [data-box-shipping-in-uk](../../includes/data-box-shipping-in-uk.md)]
-## [Australia](#tab/in-australia)
+### [Norway](#tab/in-norway)
++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Asia
-## [Japan](#tab/in-japan)
+### [Japan](#tab/in-japan)
[!INCLUDE [data-box-shipping-in-japan](../../includes/data-box-shipping-in-japan.md)]
-## [Singapore](#tab/in-singapore)
+### [Singapore](#tab/in-singapore)
[!INCLUDE [data-box-shipping-in-singapore](../../includes/data-box-shipping-in-singapore.md)]
-## [Hong Kong](#tab/in-hk)
+### [Hong Kong](#tab/in-hk)
[!INCLUDE [data-box-shipping-in-hk](../../includes/data-box-shipping-in-hk.md)]
-## [Korea](#tab/in-korea)
+### [Korea](#tab/in-korea)
[!INCLUDE [data-box-shipping-in-korea](../../includes/data-box-shipping-in-korea.md)]
-## [S Africa](#tab/in-sa)
+### [UAE](#tab/in-uae)
-## [UAE](#tab/in-uae)
-## [Norway](#tab/in-norway)
+If using Microsoft managed shipping, follow these steps.
-
+## Shipping in Australia
+
+### Australia
++++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Africa
+
+### S Africa
++
-### Self-managed shipping
+## Self-managed shipping
+
+Self-managed shipping is available as an option when you [Order Azure Data Box](data-box-disk-deploy-ordered.md). For detailed steps, see [Use self-managed shipping](data-box-portal-customer-managed-shipping.md).
[!INCLUDE [data-box-shipping-regions](../../includes/data-box-shipping-regions.md)]

[!INCLUDE [data-box-shipping-self-managed](../../includes/data-box-shipping-self-managed.md)]
+
+### Shipping in Brazil
+
+To schedule a device return in Brazil, send an email to [adbops@microsoft.com](mailto:adbops@microsoft.com) with the following information:
+
+```
+Subject: Request Azure Data Box Disk drop-off for order: <ordername>
+
+- Order name
+- Contact name of the person who will drop off the Data Box Disk (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
+- Inbound Nota Fiscal (A copy of the inbound Nota Fiscal will be required at drop-off.)
+```
++ ::: zone target="chromeless"
-## Verify data upload to Azure
+## Verify data has been uploaded to Azure
[!INCLUDE [data-box-verify-upload](../../includes/data-box-verify-upload.md)]
-## Erasure of data from Data Box
+## Data erasure from Data Box
Once the upload to Azure is complete, the Data Box erases the data on its disks as per the [NIST SP 800-88 Revision 1 guidelines](https://csrc.nist.gov/News/2014/Released-SP-800-88-Revision-1,-Guidelines-for-Medi).
Once the upload to Azure is complete, the Data Box erases the data on its disks
::: zone target="docs"
-## Verify data upload to Azure
+## Verify data has uploaded to Azure
[!INCLUDE [data-box-verify-upload-return](../../includes/data-box-verify-upload-return.md)]
databox Data Box Deploy Prepare To Ship https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-prepare-to-ship.md
+
+ Title: Tutorial to ship Azure Data Box
+description: In this tutorial, learn how to prepare to ship Data Box for return.
+++++++ Last updated : 05/05/2022++
+# Customer intent: As an IT admin, I need to be able to return a Data Box to upload on-premises data from my server onto Azure.
+++
+# Tutorial: Prepare to ship Azure Data Box
+++
+This tutorial describes how to prepare your Azure Data Box to ship.
+
+In this tutorial, you will learn about topics such as:
+
+> [!div class="checklist"]
+>
+> * Prerequisites
+> * Prepare to ship
+
+## Prerequisites
+
+Before you begin, make sure:
+
+* You've completed the [Tutorial: Copy data to Azure Data Box and verify](data-box-deploy-copy-data.md).
+* Copy jobs are complete and there are no errors on the **Connect and copy** page. **Prepare to ship** can't run if copy jobs are in progress or there are errors in the **Connect and copy** page.
+
+## Prepare to ship
++++
+After the data copy is complete, you prepare and ship the device. When the device reaches the Azure datacenter, data is automatically uploaded to Azure.
+
+## Prepare to ship
+
+Before you prepare to ship, make sure that copy jobs are complete.
+
+1. Go to the **Prepare to ship** page in the local web UI and start the ship preparation.
+2. Turn off the device from the local web UI. Remove the cables from the device.
+
+The next steps are determined by where you're returning the device.
+++
+## Next steps
+In this tutorial, you learned about Azure Data Box topics such as:
+
+> [!div class="checklist"]
+> * Prerequisites
+> * Prepare to ship
+
+Advance to the following article to learn how to ship your Azure Data Box and verify the data uploaded to Azure.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Return Azure Data Box and verify data upload to Azure](data-box-deploy-picked-up.md)
+
databox Data Box Disk Deploy Picked Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-picked-up.md
Previously updated : 01/25/2022 Last updated : 06/16/2022 +
+zone_pivot_groups: data-box-shipping
# Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
Before you begin, make sure you've completed the [Tutorial: Copy data to Azure D
- We recommend that you pack disks using well-secured bubble wrap.
- Make sure the fit is snug to reduce any movement within the box.
-The next steps are determined by where you are returning the device. In many countries/regions, you can use [Microsoft managed shipping](#microsoft-managed-shipping) or [self-managed shipping](#self-managed-shipping).
+The next steps are determined by where you are returning the device. In many countries/regions, you can use Microsoft managed shipping or [self-managed shipping](#self-managed-shipping).
-### Microsoft managed shipping
-Follow the guidelines for the region you're shipping from if you're using Microsoft managed shipping.
+If using Microsoft managed shipping, follow these steps.
-### [US & Canada](#tab/in-us-canada)
+## Shipping in Americas
+
+### US & Canada
Take the following steps if returning the device in US or Canada.
Take the following steps if returning the device in US or Canada.
- If the tracking number isn't quoted, UPS will require you to pay an additional charge during pickup. - Instead of scheduling the pickup, you can also drop off the Data Box Disk at the nearest drop-off location.
-### [EU & UK](#tab/in-europe-uk)
++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Europe
+
+### EU & UK
Take the following steps if returning the device in Europe or the UK.
Take the following steps if returning the device in Europe or the UK.
3. Go to the country/region DHL Express website and select **Schedule a Pickup**. Under **Do you need a shipping label**, select **No** > **I have a DHL Waybill Number**. 4. Specify the waybill number, and click **Schedule Pickup** to arrange for pickup.
-### [Australia](#tab/in-australia)
++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Australia
+
+### Australia
Azure datacenters in Australia have an additional security notification. All inbound shipments must include an advance notification. Take the following steps for pickup in Australia.
Azure datacenters in Australia have an additional security notification. All the
2. Affix the label on the box.
3. Book a pickup online at the link https://mydhl.express.dhl/au/en/schedule-pickup.html#/schedule-pickup#label-reference.
++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Asia
+
### [Japan](#tab/in-japan)

1. Write your company name and address information on the consignment note as your sender information.
Take the following steps if returning the device in China.
|Phone: | 400.889.6066 ext. 3603 |
|E-mail: | [739951@fedex.com](mailto:739951@fedex.com) |
++
+If using Microsoft managed shipping, follow these steps.
+
+## Shipping in Africa
### [S Africa](#tab/in-sa)
Take the following steps if returning the device in South Africa.
5. If you come across any issues, email [Priority.Support@dhl.com](mailto:Priority.Support@dhl.com) with details of the issue(s), and put the waybill number in the Subject: line. You can also call +27(0)119213902. -
-### Self-managed shipping
+## Self-managed shipping
-If you are using Data Box Disk in US Government, Japan, Singapore, Korea, United Kingdom, West Europe, Australia, South Africa, India, or Brazil, and have selected the self-managed shipping option during order creation, follow these instructions. For detailed steps, see [Use self-managed shipping](data-box-disk-portal-customer-managed-shipping.md).
+ Self-managed shipping is available as an option when you [Order Azure Data Box](data-box-disk-deploy-ordered.md). For detailed steps, see [Use self-managed shipping](data-box-disk-portal-customer-managed-shipping.md).
+
+Self-managed shipping is available in the following regions:
+
+| Region | Region | Region | Region | Region |
+||-|-|--|--|
+| US Government | United Kingdom | West Europe | Japan | Singapore |
+| Korea | India | South Africa | Australia | Brazil |
+
+If you are using Data Box Disk and have selected the self-managed shipping option during order creation, follow these instructions.
1. Go to the **Overview** blade for your order in the Azure portal. Go through the instructions displayed when you select **Schedule pickup**. You should see an Authorization code that is used at the time of dropping off the order.
-2. Send an email to the Azure Data Box Operations team using the following template when you're ready to return the device.
+2. Send an email to the Azure Data Box Operations team using the following template when you're ready to return the device.
```
To: adbops@microsoft.com
If you are using Data Box Disk in US Government, Japan, Singapore, Korea, United
2. Contact name of the person dropping off. You will need to display a government-approved ID during the drop-off.
```
- > [!NOTE]
- > - Required information for return may vary by region.
- > - If you're returning a Data Box Disk in Brazil, see [Use self-managed shipping for Azure Data Box Disk](data-box-disk-portal-customer-managed-shipping.md) for detailed instructions.
-
- 3. Azure Data Box Operations team will work with you to arrange the drop-off to the Azure datacenter. -+
+### Shipping in Brazil
+
+To schedule a device return in Brazil, send an email to [adbops@microsoft.com](mailto:adbops@microsoft.com) with the following information:
+
+```
+Subject: Request Azure Data Box Disk drop-off for order: <ordername>
+
+- Order name
+- Contact name of the person who will drop off the Data Box Disk (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
+- Inbound Nota Fiscal (A copy of the inbound Nota Fiscal will be required at drop-off.)
+```
+
::: zone target="docs"
databox Data Box Disk Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-portal-customer-managed-shipping.md
Previously updated : 06/22/2021 Last updated : 06/07/2022
When you place a Data Box Disk order, you can choose self-managed shipping optio
5. Follow the instructions in the **Schedule pickup for Azure**. Before you can get your authorization code, you must email [adbops@microsoft.com](mailto:adbops@microsoft.com) to schedule the device pickup from your region's datacenter. ![Schedule pickup](media\data-box-disk-portal-customer-managed-shipping\data-box-disk-user-pickup-02c.png)-
+
**Instructions for Brazil:** If you're scheduling a device pickup in Brazil, include the following information in your email. The datacenter will schedule the pickup after they receive an inbound `Nota Fiscal`, which can take up to 4 business days. ```
When you place a Data Box Disk order, you can choose self-managed shipping optio
> [!NOTE]
> Do not share the authorization code over email. This is only to be verified at the datacenter during drop-off.
- **Instructions for Brazil:** To schedule a device return in Brazil, send an email to [adbops@microsoft.com](mailto:adbops@microsoft.com) with the following information:
-
- ```
- Subject: Request Azure Data Box Disk drop-off for order: <ordername>
+ If you're returning a Data Box Disk in Brazil, see [Return Azure Data Box Disk](data-box-deploy-picked-up.md) for detailed instructions.
- - Order name
- - Contact name of the person who will drop off the Data Box Disk (A government-issued photo ID will be required to validate the contactΓÇÖs identity upon arrival.)
- - Inbound Nota Fiscal (A copy of the inbound Nota Fiscal will be required at drop-off.)
- ```
10. After you receive an appointment for drop-off, the order should be in the **Ready to receive at Azure datacenter** state in the Azure portal.
databox Data Box Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-portal-customer-managed-shipping.md
Previously updated : 03/31/2022 Last updated : 06/06/2022
When you place a Data Box order, you can choose the self-managed shipping option
![Schedule pickup for Azure instructions](media\data-box-portal-customer-managed-shipping\data-box-portal-schedule-pickup-email-01.png)
- **Instructions for Brazil:** If you're scheduling a device pickup in Brazil, include the following information in your email. The datacenter will schedule the pickup after they receive an inbound `Nota Fiscal`, which can take up to 4 business days.
+ If you're returning a Data Box in Brazil, see [Return Azure Data Box](data-box-deploy-picked-up.md) for detailed instructions.
```
Subject: Request Azure Data Box Disk pickup for order: <ordername>
When you place a Data Box order, you can choose the self-managed shipping option
![An order in Picked up state](media\data-box-portal-customer-managed-shipping\data-box-portal-picked-up-boxed-01.png)
-9. After the device is picked up, copy data to the Data Box at your site. After the data copy is complete, you can prepare to ship the Data Box. For more information, see [Prepare to ship](data-box-deploy-picked-up.md#prepare-to-ship).
+9. After the device is picked up, copy data to the Data Box at your site. After the data copy is complete, you can prepare to ship the Data Box. For more information, see [Prepare to ship](data-box-deploy-prepare-to-ship.md#prepare-to-ship).
The **Prepare to ship** step needs to complete without any critical errors. Otherwise, you'll need to run this step again after making the necessary fixes. After the **Prepare to ship** step completes successfully, you can view the authorization code for the drop-off on the device local user interface.

> [!NOTE]
> Do not share the authorization code over email. This is only to be verified at the datacenter during drop off.
- **Instructions for Brazil:** To schedule a device return in Brazil, send an email to [adbops@microsoft.com](mailto:adbops@microsoft.com) with the following information:
-
- ```
- Subject: Request Azure Data Box Disk drop-off for order: <ordername>
-
- - Order name
- - Contact name of the person who will drop off the Data Box Disk (A government-issued photo ID will be required to validate the contactΓÇÖs identity upon arrival.)
- - Inbound Nota Fiscal (A copy of the inbound Nota Fiscal will be required at drop-off.)
- ```
10. If you've received an appointment for drop-off, the order should have **Ready to receive at Azure datacenter** status in the Azure portal. Follow the instructions under **Schedule drop-off** to return the device.
databox Data Box Troubleshoot Data Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-data-upload.md
Previously updated : 03/22/2022 Last updated : 06/06/2022
Other REST API errors might occur during data uploads. For more information, see
## Next steps

- [Review common REST API errors](/rest/api/storageservices/common-rest-api-error-codes).
-- [Verify data upload to Azure](data-box-deploy-picked-up.md#verify-data-upload-to-azure)
+- [Verify data upload to Azure](data-box-deploy-picked-up.md#verify-data-has-uploaded-to-azure)
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
Title: Adaptive network hardening in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Adaptive network hardening in Microsoft Defender for Cloud
description: Learn how to use actual traffic patterns to harden your network security groups (NSG) rules and further improve your security posture.
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
Title: Alert validation in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Alert validation in Microsoft Defender for Cloud
description: Learn how to validate that your security alerts are correctly configured in Microsoft Defender for Cloud Last updated 07/04/2022
You can simulate alerts for both of the control plane, and workload alerts with
**Prerequisites**

- Ensure the Defender for Containers plan is enabled.
-- **ARC only** - Ensure the defender extension is installed.
+- **ARC only** - Ensure the Defender extension is installed.
- **EKS or GKE only** - Ensure the default audit log collection auto-provisioning options are enabled.

**To simulate a Kubernetes control plane security alert**:
defender-for-cloud Cross Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/cross-tenant-management.md
Title: Cross-tenant management in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Cross-tenant management in Microsoft Defender for Cloud
description: Learn how to set up cross-tenant management to manage the security posture of multiple tenants in Defender for Cloud using Azure Lighthouse. documentationcenter: na ms.assetid: 7d51291a-4b00-4e68-b872-0808b60e6d9c
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom security policies in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Create custom security policies in Microsoft Defender for Cloud
description: Azure custom policy definitions monitored by Microsoft Defender for Cloud.
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
Title: Microsoft Defender for Cloud data security | Microsoft Docs
+ Title: Microsoft Defender for Cloud data security
description: Learn how data is managed and safeguarded in Microsoft Defender for Cloud.
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for
description: Enable the container protections of Microsoft Defender for Containers zone_pivot_groups: k8s-host Previously updated : 06/28/2022 Last updated : 07/14/2022 # Enable Microsoft Defender for Containers
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for Servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Servers. Previously updated : 07/13/2022 Last updated : 07/14/2022 # Overview of Microsoft Defender for Servers
Defender for Servers offers you a choice between two paid plans:
| [Just-in time VM access](#just-in-time-jit-virtual-machine-vm-access) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| [Adaptive network hardening](#adaptive-network-hardening-anh) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+You can learn more about the different [benefits for each server plan](#benefits-of-the-defender-for-servers-plans).
+
### Plan 1

Plan 1 includes the following benefits:
Plan 1 includes the following benefits:
- Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal
- A Microsoft Defender for Endpoint subscription that includes access to alerts, software inventory, Vulnerability Assessment and an automatic integration with Microsoft Defender for Cloud.
-The subscription to Microsoft Defender for Endpoint allows you to deploy Defender for Endpoint to your servers. Defender for Endpoint includes the following capabilities:
+The subscription to [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide) allows you to deploy Defender for Endpoint to your servers. Defender for Endpoint includes the following capabilities:
- Licenses are charged per hour instead of per seat, lowering your costs to protect virtual machines only when they are in use.
- Microsoft Defender for Endpoint deploys automatically to all cloud workloads so that you know that they're protected when they spin up.
The subscription to Microsoft Defender for Endpoint allows you to deploy Defende
### Plan 2 (formerly Defender for Servers)
-Plan 2 includes all of the benefits included with Plan 1. However, plan 2 also includes all of the other Microsoft Defender for Servers features listed in the [table above](#available-defender-for-server-plans).
+Plan 2 includes all of the benefits included with Plan 1. However, plan 2 also includes all of the following features:
+
+- Security Policy and Regulatory Compliance
+- Log-analytics (500 MB free)
+- [Vulnerability Assessment using Qualys](#vulnerability-scanner-powered-by-qualys)
+- Threat detections: OS level, network layer, control plane
+- [Adaptive application controls](#adaptive-application-controls-aac)
+- [File integrity monitoring](#file-integrity-monitoring-fim)
+- [Just-in time VM access](#just-in-time-jit-virtual-machine-vm-access)
+- [Adaptive network hardening](#adaptive-network-hardening-anh)
For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).

## Select a plan
-You can select your plan when you [Enable enhanced security features on your subscriptions and workspaces:](enable-enhanced-security.md#enable-enhanced-security-features-on-your-subscriptions-and-workspaces). By default, plan 2 is selected when you set the Defender for Servers plan to On.
+You can select your plan when you [Enable enhanced security features on your subscriptions and workspaces](enable-enhanced-security.md#enable-enhanced-security-features-from-the-azure-portal). By default, plan 2 is selected when you set the Defender for Servers plan to **On**.
If at any point, you want to change the Defender for Servers plan, you can change it on the Defender plans page by selecting **Change plan**.
If at any point, you want to change the Defender for Servers plan, you can chang
Defender for Servers offers both threat detection and protection capabilities that consist of:
-### Plan 1 & Plan 2
+### Included in plan 1 & plan 2
#### Microsoft threat and vulnerability management
Defender for Servers includes [Microsoft Defender for Endpoint](https://www.micr
When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown on Defender for Cloud's Recommendation page. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack. Learn how to [Protect your endpoints](integration-defender-for-endpoint.md).
-### Plan 2 only
+### Included in plan 2 only
#### Vulnerability scanner powered by Qualys
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Title: Defender for Cloud's integrated vulnerability assessment solution for Azure, hybrid, and multicloud machines description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Microsoft Defender for Cloud that can help you protect your Azure and hybrid machines++ Previously updated : 04/13/2022 Last updated : 07/12/2022 # Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines
-A core component of every cyber risk and security program is the identification and analysis of vulnerabilities.
-
-Defender for Cloud regularly checks your connected machines to ensure they're running vulnerability assessment tools.
-
-When a machine is found that doesn't have vulnerability assessment solution deployed, Defender for Cloud generates the following security recommendation:
-
-**Machines should have a vulnerability assessment solution**
+A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Defender for Cloud regularly checks your connected machines to ensure they're running vulnerability assessment tools.
+When a machine is found that doesn't have a vulnerability assessment solution deployed, Defender for Cloud generates the security recommendation: **Machines should have a vulnerability assessment solution**.
Use this recommendation to deploy the vulnerability assessment solution to your Azure virtual machines and your Azure Arc-enabled hybrid machines.
-Deploy the vulnerability assessment solution that best meets your needs and budget:
--- **Microsoft Defender for Endpoint's threat and vulnerability management tools** - Discover vulnerabilities and misconfigurations in real time with sensors, and without the need of agents or periodic scans. It prioritizes vulnerabilities based on the threat landscape, detections in your organization, sensitive information on vulnerable devices, and business context. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).--- **Integrated vulnerability assessment solution (powered by Qualys)** - Defender for Cloud includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. This page provides details of this scanner and instructions for how to deploy it.
+Defender for Cloud includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. This page provides details of this scanner and instructions for how to deploy it.
- > [!TIP]
- > The integrated vulnerability assessment solution supports both Azure virtual machines and hybrid machines. To deploy the vulnerability assessment scanner to your on-premises and multicloud machines, connect them to Azure first with Azure Arc as described in [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
- >
- > Defender for Cloud's integrated vulnerability assessment solution works seamlessly with Azure Arc. When you've deployed Azure Arc, your machines will appear in Defender for Cloud and no Log Analytics agent is required.
+> [!TIP]
+> The integrated vulnerability assessment solution supports both Azure virtual machines and hybrid machines. To deploy the vulnerability assessment scanner to your on-premises and multicloud machines, connect them to Azure first with Azure Arc as described in [Connect your non-Azure machines to Defender for Cloud](quickstart-onboard-machines.md).
+>
+> Defender for Cloud's integrated vulnerability assessment solution works seamlessly with Azure Arc. When you've deployed Azure Arc, your machines will appear in Defender for Cloud and no Log Analytics agent is required.
-- **Bring your own license (BYOL) solutions** - Defender for Cloud supports the integration of tools from other vendors, but you'll need to handle the licensing costs, deployment, and configuration. By deploying your tool with Defender for Cloud, you'll get information about which Azure virtual machines are missing the tool. You'll also be able to view findings within Defender for Cloud. If you'd prefer to use your organization's private Qualys or Rapid7 license instead of the Qualys license included with Defender for Cloud, see [How to deploy a BYOL solution](deploy-vulnerability-assessment-byol-vm.md).
+Deploy the vulnerability assessment solution that best meets your needs and budget:
+If you don't want to use the vulnerability assessment powered by Qualys, you can use [Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md) or [deploy a BYOL solution](deploy-vulnerability-assessment-byol-vm.md) with your own Qualys license, Rapid7 license, or another vulnerability assessment solution.
## Availability
The vulnerability scanner extension works as follows:
Your machines will appear in one or more of the following groups:
- * **Healthy resources** – Defender for Cloud has detected a vulnerability assessment solution running on these machines.
- * **Unhealthy resources** – A vulnerability scanner extension can be deployed to these machines.
- * **Not applicable resources** – these machines can't have a vulnerability scanner extension deployed. Your machine might be in this tab because it's an image in an AKS cluster, it's part of a virtual machine scale set, or it's not running one of the supported operating systems for the integrated vulnerability scanner:
-
- | **Vendor** | **OS** | **Supported versions** |
- ||--|--|
- | Microsoft | Windows | All |
- | Amazon | Amazon Linux | 2015.09-2018.03 |
- | Amazon | Amazon Linux 2 | 2017.03-2.0.2021 |
- | Red Hat | Enterprise Linux | 5.4+, 6, 7-7.9, 8-8.5, 9 beta |
- | Red Hat | CentOS | 5.4-5.11, 6-6.7, 7-7.8, 8-8.5 |
- | Red Hat | Fedora | 22-33 |
- | SUSE | Linux Enterprise Server (SLES) | 11, 12, 15, 15 SP1 |
- | SUSE | openSUSE | 12, 13, 15.0-15.3 |
- | SUSE | Leap | 42.1 |
- | Oracle | Enterprise Linux | 5.11, 6, 7-7.9, 8-8.5 |
- | Debian | Debian | 7.x-11.x |
- | Ubuntu | Ubuntu | 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS, 19.10, 20.04 LTS |
-
+ - **Healthy resources** – Defender for Cloud has detected a vulnerability assessment solution running on these machines.
+ - **Unhealthy resources** – A vulnerability scanner extension can be deployed to these machines.
+ - **Not applicable resources** – these machines [aren't supported for the vulnerability scanner extension](#why-does-my-machine-show-as-not-applicable-in-the-recommendation).
1. From the list of unhealthy machines, select the ones to receive a vulnerability assessment solution and select **Remediate**.
The vulnerability scanner extension works as follows:
> > - `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center >
- > If your machine is in a European Azure region, its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
+ > If your machine is in a region in an Azure European geography (such as Europe, UK, Germany), its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
## Automate at-scale deployments
The following commands trigger an on-demand scan:
- **Windows machines**: ```REG ADD HKLM\SOFTWARE\Qualys\QualysAgent\ScanOnDemand\Vulnerability /v "ScanOnDemand" /t REG_DWORD /d "1" /f```
- **Linux machines**: ```sudo /usr/local/qualys/cloud-agent/bin/cloudagentctl.sh action=demand type=vm```
-## FAQ - Integrated vulnerability scanner (powered by Qualys)
+## FAQ
+
+- [Are there any additional charges for the Qualys license?](#are-there-any-additional-charges-for-the-qualys-license)
+- [What prerequisites and permissions are required to install the Qualys extension?](#what-prerequisites-and-permissions-are-required-to-install-the-qualys-extension)
+- [Can I remove the Defender for Cloud Qualys extension?](#can-i-remove-the-defender-for-cloud-qualys-extension)
+- [How does the extension get updated?](#how-does-the-extension-get-updated)
+- [Why does my machine show as "not applicable" in the recommendation?](#why-does-my-machine-show-as-not-applicable-in-the-recommendation)
+- [Can the built-in vulnerability scanner find vulnerabilities on the VMs network?](#can-the-built-in-vulnerability-scanner-find-vulnerabilities-on-the-vms-network)
+- [Does the scanner integrate with my existing Qualys console?](#does-the-scanner-integrate-with-my-existing-qualys-console)
+- [How quickly will the scanner identify newly disclosed critical vulnerabilities?](#how-quickly-will-the-scanner-identify-newly-disclosed-critical-vulnerabilities)
### Are there any additional charges for the Qualys license?
+
No. The built-in scanner is free to all Microsoft Defender for Servers users. The recommendation deploys the scanner with its licensing and configuration information. No additional licenses are required.

### What prerequisites and permissions are required to install the Qualys extension?
+
You'll need write permissions for any machine on which you want to deploy the extension. The Microsoft Defender for Cloud vulnerability assessment extension (powered by Qualys), like other extensions, runs on top of the Azure Virtual Machine agent. So it runs as Local Host on Windows, and Root on Linux.
-During setup, Defender for Cloud checks to ensure that the machine can communicate with the following two Qualys data centers (via port 443 - the default for HTTPS):
+During setup, Defender for Cloud checks to ensure that the machine can communicate over HTTPS (default port 443) with the following two Qualys data centers:
- `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center
- `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center
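If the extension can't be provisioned, you can verify outbound connectivity yourself. A minimal sketch (an illustrative check, not an official diagnostic) that probes both endpoints from the machine:

```bash
# Check outbound HTTPS (port 443) reachability to the Qualys data centers.
for endpoint in qagpublic.qg3.apps.qualys.com qagpublic.qg2.apps.qualys.eu; do
  if curl --silent --output /dev/null --connect-timeout 10 "https://${endpoint}"; then
    echo "${endpoint}: reachable"
  else
    echo "${endpoint}: NOT reachable"
  fi
done
```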
If you want to remove the extension from a machine, you can do it manually or wi
You'll need the following details:
-* On Linux, the extension is called "LinuxAgent.AzureSecurityCenter" and the publisher name is "Qualys"
-* On Windows, the extension is called "WindowsAgent.AzureSecurityCenter" and the provider name is "Qualys"
+* On Linux, the extension is called "LinuxAgent.AzureSecurityCenter" and the publisher name is "Qualys".
+* On Windows, the extension is called "WindowsAgent.AzureSecurityCenter" and the provider name is "Qualys".
### How does the extension get updated?

Like the Microsoft Defender for Cloud agent itself and all other Azure extensions, minor updates of the Qualys scanner might automatically happen in the background. All agents and extensions are tested extensively before being automatically deployed.

### Why does my machine show as "not applicable" in the recommendation?
-The recommendation details page groups your machines into the following lists: **healthy**, **unhealthy**, and **not applicable**.
-
-If you have machines in the **not applicable** resources group, it means Defender for Cloud can't deploy the vulnerability scanner extension on those machines.
-Your machine might be in this tab because:
+If you have machines in the **not applicable** resources group, Defender for Cloud can't deploy the vulnerability scanner extension on those machines because:
-- It's not protected by Defender for Cloud - As explained above, the vulnerability scanner included with Microsoft Defender for Cloud is only available for machines protected by [Microsoft Defender for Servers](defender-for-servers-introduction.md).
+- The vulnerability scanner included with Microsoft Defender for Cloud is only available for machines protected by [Microsoft Defender for Servers](defender-for-servers-introduction.md).
-- It's an image in an AKS cluster or part of a virtual machine scale set - This extension doesn't support VMs that are PaaS resources.
+- It's a PaaS resource, such as an image in an AKS cluster or part of a virtual machine scale set.
- It's not running one of the supported operating systems:
Your machine might be in this tab because:
| Debian | Debian | 7.x-11.x |
| Ubuntu | Ubuntu | 12.04 LTS, 14.04 LTS, 15.x, 16.04 LTS, 18.04 LTS, 19.10, 20.04 LTS |
-### What is scanned by the built-in vulnerability scanner?
-The scanner runs on your machine to look for vulnerabilities of the machine itself. From the machine, it can't scan your network.
+### Can the built-in vulnerability scanner find vulnerabilities on the VMs network?
+
+No. The scanner runs on your machine to look for vulnerabilities of the machine itself, not for your network.
### Does the scanner integrate with my existing Qualys console?
+
The Defender for Cloud extension is a separate tool from your existing Qualys scanner. Licensing restrictions mean that it can only be used within Microsoft Defender for Cloud.

### How quickly will the scanner identify newly disclosed critical vulnerabilities?

Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates the information into their processing and can identify affected machines.
--

## Next steps

> [!div class="nextstepaction"]
Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates
Defender for Cloud also offers vulnerability analysis for your:
-- SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
-- Azure Container Registry images - see [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-usage.md)
+- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
+- Azure Container Registry images - [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-usage.md)
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
Title: Auto-deploy agents for Microsoft sign for Cloud | Microsoft Docs
+ Title: Configure auto provisioning of agents for Microsoft Defender for Cloud
description: This article describes how to set up auto provisioning of the Log Analytics agent and other agents and extensions used by Microsoft Defender for Cloud Last updated 07/06/2022
# Configure auto provisioning for agents and extensions from Microsoft Defender for Cloud
-Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to automatically provision the necessary agents and extensions used by Defender for Cloud to your resources.
+Microsoft Defender for Cloud collects data from your resources using the relevant agent or extensions for that resource and the type of data collection you've enabled. Use the procedures below to auto-provision the necessary agents and extensions used by Defender for Cloud to your resources.
:::image type="content" source="media/enable-data-collection/auto-provisioning-list-of-extensions.png" alt-text="Screenshot of Microsoft Defender for Cloud's extensions that can be auto provisioned.":::
Microsoft Defender for Cloud collects data from your resources using the relevan
> When you enable auto provisioning of any of the supported extensions, you'll potentially impact *existing* and *future* machines. But when you **disable** auto provisioning for an extension, you'll only affect the *future* machines: nothing is uninstalled by disabling auto provisioning.

## Prerequisites
+
To get started with Defender for Cloud, you must have a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).

## Availability
Data is collected using:
> As Defender for Cloud has grown, the types of resources that can be monitored have also grown, as has the number of extensions. Auto provisioning has expanded to support additional resource types by leveraging the capabilities of Azure Policy.

## Why use auto provisioning?
+
Any of the agents and extensions described on this page *can* be installed manually (see [Manual installation of the Log Analytics agent](#manual-agent)). However, **auto provisioning** reduces management overhead by installing all required agents and extensions on existing - and new - machines to ensure faster security coverage for all supported resources. We recommend enabling auto provisioning, but it's disabled by default.

## How does auto provisioning work?
+
Defender for Cloud's auto provisioning settings have a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type.

> [!TIP]
Defender for Cloud's auto provisioning settings has a toggle for each type of su
## Enable auto provisioning of the Log Analytics agent and extensions <a name="auto-provision-mma"></a>
-When automatic provisioning is on for the Log Analytics agent, Defender for Cloud deploys the agent on all supported Azure VMs and any new ones created. For the list of supported platforms, see [Supported platforms in Microsoft Defender for Cloud](security-center-os-coverage.md).
+When auto provisioning is on for the Log Analytics agent, Defender for Cloud deploys the agent on all supported Azure VMs and any new ones created. For the list of supported platforms, see [Supported platforms in Microsoft Defender for Cloud](security-center-os-coverage.md).
To enable auto provisioning of the Log Analytics agent:
To enable auto provisioning of the Log Analytics agent:
1. Set Security posture management to **on** or select **Enable all** to turn all Microsoft Defender plans on.
1. From the **Windows security events** configuration, select the amount of raw event data to store:
- - **None** ΓÇô Disable security event storage. This is the default setting.
- **None** – Disable security event storage. (Default)
- **Minimal** – A small set of events for when you want to minimize the event volume.
- **Common** – A set of events that satisfies most customers and provides a full audit trail.
- **All events** – For customers who want to make sure all events are stored.
To enable auto provisioning of the Log Analytics agent:
1. Select **Apply** in the configuration pane.
-1. To enable automatic provisioning of an extension other than the Log Analytics agent:
+1. To enable auto provisioning of an extension other than the Log Analytics agent:
1. Toggle the status to **On** for the relevant extension.
To enable auto provisioning of the Log Analytics agent:
## Windows security event options for the Log Analytics agent <a name="data-collection-tier"></a>
-Selecting a data collection tier in Microsoft Defender for Cloud only affects the *storage* of security events in your Log Analytics workspace. The Log Analytics agent will still collect and analyze the security events required for Defender for CloudΓÇÖs threat protection, regardless of the level of security events you choose to store in your workspace. Choosing to store security events enables investigation, search, and auditing of those events in your workspace.
+When you select a data collection tier in Microsoft Defender for Cloud, the security events of the selected tier are stored in your Log Analytics workspace so that you can investigate, search, and audit the events in your workspace. The Log Analytics agent also collects and analyzes the security events required for Defender for Cloud's threat protection.
+
+### Requirements
-### Requirements
The enhanced security protections of Defender for Cloud are required for storing Windows security event data. Learn more about [the enhanced protection plans](defender-for-cloud-introduction.md).
-Storing data in Log Analytics might incur additional charges for data storage. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+You may be charged for storing data in Log Analytics. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
### Information for Microsoft Sentinel users
-Users of Microsoft Sentinel: note that security events collection within the context of a single workspace can be configured from either Microsoft Defender for Cloud or Microsoft Sentinel, but not both. If you're planning to add Microsoft Sentinel to a workspace that is already getting alerts from Microsoft Defender for Cloud, and is set to collect Security Events, you have two options:
-- Leave the Security Events collection in Microsoft Defender for Cloud as is. You will be able to query and analyze these events in Microsoft Sentinel as well as in Defender for Cloud. You will not, however, be able to monitor the connector's connectivity status or change its configuration in Microsoft Sentinel. If this is important to you, consider the second option.-- Disable Security Events collection in Microsoft Defender for Cloud (by setting **Windows security events** to **None** in the configuration of your Log Analytics agent). Then add the Security Events connector in Microsoft Sentinel. As with the first option, you will be able to query and analyze events in both Microsoft Sentinel and Defender for Cloud, but you will now be able to monitor the connector's connectivity status or change its configuration in - and only in - Microsoft Sentinel.
-### What event types are stored for "Common" and "Minimal"?
-These sets were designed to address typical scenarios. Make sure to evaluate which one fits your needs before implementing it.
+Security events collection within the context of a single workspace can be configured from either Microsoft Defender for Cloud or Microsoft Sentinel, but not both. If you want to add Microsoft Sentinel to a workspace that already gets alerts from Microsoft Defender for Cloud and is set to collect Security Events, you can either:
+
+- Leave the Security Events collection in Microsoft Defender for Cloud as is. You'll be able to query and analyze these events in both Microsoft Sentinel and Defender for Cloud. If you want to monitor the connector's connectivity status or change its configuration in Microsoft Sentinel, consider the second option.
+- Disable Security Events collection in Microsoft Defender for Cloud and then add the Security Events connector in Microsoft Sentinel. You'll be able to query and analyze events in both Microsoft Sentinel and Defender for Cloud, and you'll also be able to monitor the connector's connectivity status or change its configuration in - and only in - Microsoft Sentinel. To disable Security Events collection in Defender for Cloud, set **Windows security events** to **None** in the configuration of your Log Analytics agent.
-To determine the events for the **Common** and **Minimal** options, we worked with customers and industry standards to learn about the unfiltered frequency of each event and their usage. We used the following guidelines in this process:
+### What event types are stored for "Common" and "Minimal"?
-- **Minimal** - Make sure that this set covers only events that might indicate a successful breach and important events that have a very low volume. For example, this set contains user successful and failed login (event IDs 4624, 4625), but it doesn't contain sign out which is important for auditing but not meaningful for detection and has relatively high volume. Most of the data volume of this set is the login events and process creation event (event ID 4688).-- **Common** - Provide a full user audit trail in this set. For example, this set contains both user logins and user sign outs (event ID 4634). We include auditing actions like security group changes, key domain controller Kerberos operations, and other events that are recommended by industry organizations.
+The **Common** and **Minimal** event sets were designed to address typical scenarios. They're based on what we learned from customers and industry standards about the unfiltered frequency and usage of each event.
-Events that have very low volume were included in the common set as the main motivation to choose it over all the events is to reduce the volume and not to filter out specific events.
+- **Minimal** - This set is intended to cover only events that might indicate a successful breach and important events with low volume. Most of the data volume of this set is successful user logon events (event ID 4624), failed user logon events (event ID 4625), and process creation events (event ID 4688). Sign out events are important for auditing only and have relatively high volume, so they aren't included in this event set.
+- **Common** - This set is intended to provide a full user audit trail, including events with low volume. For example, this set contains both user logon events (event ID 4624) and user logoff events (event ID 4634). We include auditing actions like security group changes, key domain controller Kerberos operations, and other events that are recommended by industry organizations.
-Here is a complete breakdown of the Security and App Locker event IDs for each set:
+Here's a complete breakdown of the Security and App Locker event IDs for each set:
| Data tier | Collected event indicators |
|--|--|
To manually install the Log Analytics agent:
> [!TIP] > For more information about onboarding, see [Automate onboarding of Microsoft Defender for Cloud using PowerShell](powershell-onboarding.md).
-## Automatic provisioning in cases of a pre-existing agent installation <a name="preexisting"></a>
+## Auto provisioning in cases of a pre-existing agent installation <a name="preexisting"></a>
-The following use cases specify how automatic provision works in cases when there is already an agent or extension installed.
+The following use cases explain how auto provisioning works when there's already an agent or extension installed.
-- **Log Analytics agent is installed on the machine, but not as an extension (Direct agent)** - If the Log Analytics agent is installed directly on the VM (not as an Azure extension), Defender for Cloud will install the Log Analytics agent extension, and might upgrade the Log Analytics agent to the latest version. The agent installed will continue to report to its already configured workspace(s), and additionally will report to the workspace configured in Defender for Cloud (Multi-homing is supported on Windows machines).
+- **Log Analytics agent is installed on the machine, but not as an extension (Direct agent)** - If the Log Analytics agent is installed directly on the VM (not as an Azure extension), Defender for Cloud will install the Log Analytics agent extension and might upgrade the Log Analytics agent to the latest version. The installed agent will continue to report to its already configured workspaces and to the workspace configured in Defender for Cloud. (Multi-homing is supported on Windows machines.)
- If the configured workspace is a user workspace (not Defender for Cloud's default workspace), then you will need to install the "Security" or "SecurityCenterFree" solution on it for Defender for Cloud to start processing events from VMs and computers reporting to that workspace.
+ If the Log Analytics agent is configured with a user workspace and not Defender for Cloud's default workspace, you'll need to install the "Security" or "SecurityCenterFree" solution on it for Defender for Cloud to start processing events from VMs and computers reporting to that workspace.
- For Linux machines, Agent multi-homing is not yet supported - hence, if an existing agent installation is detected, automatic provisioning will not occur and the machine's configuration will not be altered.
+ For Linux machines, agent multi-homing isn't yet supported. If an existing agent installation is detected, the Log Analytics agent won't be auto provisioned.
- For existing machines on subscriptions onboarded to Defender for Cloud before 17 March 2019, when an existing agent will be detected, the Log Analytics agent extension will not be installed and the machine will not be affected. For these machines, see to the "Resolve monitoring agent health issues on your machines" recommendation to resolve the agent installation issues on these machines.
+ For existing machines on subscriptions onboarded to Defender for Cloud before 17 March 2019, when an existing agent is detected, the Log Analytics agent extension won't be installed and the machine won't be affected. For these machines, see the "Resolve monitoring agent health issues on your machines" recommendation to resolve agent installation issues.
-- **System Center Operations Manager agent is installed on the machine** - Defender for Cloud will install the Log Analytics agent extension side by side to the existing Operations Manager. The existing Operations Manager agent will continue to report to the Operations Manager server normally. The Operations Manager agent and Log Analytics agent share common run-time libraries, which will be updated to the latest version during this process. If Operations Manager agent version 2012 is installed, **do not** enable automatic provisioning.
+- **System Center Operations Manager agent is installed on the machine** - Defender for Cloud will install the Log Analytics agent extension side by side with the existing Operations Manager agent. The existing Operations Manager agent will continue to report to the Operations Manager server normally. The Operations Manager agent and Log Analytics agent share common run-time libraries, which will be updated to the latest version during this process. If Operations Manager agent version 2012 is installed, **do not** enable auto provisioning.
- **A pre-existing VM extension is present**:
- - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud does not override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, provided that the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process.
+ - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process.
- To see which workspace the existing extension is sending data to, run the [Validate connectivity with Microsoft Defender for Cloud](/archive/blogs/yuridiogenes/validating-connectivity-with-azure-security-center) test. Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection. - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported. For more information, see [Existing log analytics customers](./faq-azure-monitor-logs.yml). ## Disable auto provisioning <a name="offprovisioning"></a>
-When you disable auto provisioning, agents will not be provisioned on new VMs.
+When you disable auto provisioning, agents won't be provisioned on new VMs.
-To turn off automatic provisioning of an agent:
+To turn off auto provisioning of an agent:
1. From Defender for Cloud's menu in the portal, select **Environment settings**. 1. Select the relevant subscription.
To turn off automatic provisioning of an agent:
:::image type="content" source="./media/enable-data-collection/agent-toggles.png" alt-text="Toggles to disable auto provisioning per agent type.":::
-1. Select **Save**. When auto provisioning is disabled, the default workspace configuration section is not displayed:
+1. Select **Save**. When auto provisioning is disabled, the default workspace configuration section isn't displayed:
:::image type="content" source="./media/enable-data-collection/empty-configuration-column.png" alt-text="When auto provisioning is disabled, the configuration cell is empty"::: > [!NOTE]
-> Disabling automatic provisioning does not remove the Log Analytics agent from Azure VMs where the agent was provisioned. For information on removing the OMS extension, see [How do I remove OMS extensions installed by Defender for Cloud](./faq-data-collection-agents.yml#how-do-i-remove-oms-extensions-installed-by-defender-for-cloud-).
+> Disabling auto provisioning does not remove the Log Analytics agent from Azure VMs where the agent was provisioned. For information on removing the OMS extension, see [How do I remove OMS extensions installed by Defender for Cloud](./faq-data-collection-agents.yml#how-do-i-remove-oms-extensions-installed-by-defender-for-cloud-).
>
To turn off automatic provisioning of an agent:
## Next steps
-This page explained how to enable auto provisioning for the Log Analytics agent and other Defender for Cloud extensions. It also described how to define a Log Analytics workspace in which to store the collected data. Both operations are required to enable data collection. Storing data in Log Analytics, whether you use a new or existing workspace, might incur more charges for data storage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+
+This page explained how to enable auto provisioning for the Log Analytics agent and other Defender for Cloud extensions. It also described how to define a Log Analytics workspace in which to store the collected data. Both operations are required to enable data collection. Storing data in a new or existing Log Analytics workspace might incur more charges. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
Title: Enable Microsoft Defender for Cloud's integrated workload protections+ description: Learn how to enable enhanced security features to extend the protections of Microsoft Defender for Cloud to your hybrid and multicloud resources Previously updated : 05/31/2022 Last updated : 07/14/2022 # Quickstart: Enable enhanced security features
-To learn about the benefits of enhanced security features, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
+Get started with Defender for Cloud by using its enhanced security features to protect your hybrid and multicloud environments.
-## Prerequisites
+In this quickstart, you'll learn how to turn on the enhanced security features by enabling the different Defender for Cloud plans through the Azure portal.
-For the purpose of the Defender for Cloud quickstarts and tutorials you must enable the enhanced security features.
+To learn more about the benefits of enhanced security features, see [Microsoft Defender for Cloud's enhanced security features](enhanced-security-features-overview.md).
-You can protect an entire Azure subscription with Defender for Cloud's enhanced security features and the protections will be inherited by all resources within the subscription.
+## Prerequisites
-A free 30-day trial is available. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+
+- You must have [enabled Defender for Cloud](get-started.md) on your Azure subscription.
## Enable enhanced security features from the Azure portal
-To enable all Defender for Cloud features including threat protection capabilities, you must enable enhanced security features on the subscription containing the applicable workloads. Enabling it at the workspace level doesn't enable just-in-time VM access, adaptive application controls, and network detections for Azure resources. In addition, the only Microsoft Defender plans available at the workspace level are Microsoft Defender for Servers and Microsoft Defender for SQL servers on machines.
+To enable all Defender for Cloud features including threat protection capabilities, you must enable enhanced security features on the subscription containing the applicable workloads.
-- You can enable **Microsoft Defender for Storage accounts** at either the subscription level or resource level-- You can enable **Microsoft Defender for SQL** at either the subscription level or resource level-- You can enable **Microsoft Defender for open-source relational databases** at the resource level only
+If you only enable Defender for Cloud at the workspace level, Defender for Cloud will not enable just-in-time VM access, adaptive application controls, and network detections for Azure resources. In addition, the only Microsoft Defender plans available at the workspace level are Microsoft Defender for Servers and Microsoft Defender for SQL servers on machines.
-### Enable enhanced security features on your subscriptions and workspaces:
+> [!NOTE]
+> - You can enable **Microsoft Defender for Storage accounts** at either the subscription level or resource level.
+> - You can enable **Microsoft Defender for SQL** at either the subscription level or resource level.
+> - You can enable **Microsoft Defender for open-source relational databases** at the resource level only.
-- To enable enhanced security features on one subscription:
+You can protect an entire Azure subscription with Defender for Cloud's enhanced security features, and the protections will be inherited by all resources within the subscription.
- 1. From Defender for Cloud's main menu, select **Environment settings**.
-
- 1. Select the subscription or workspace that you want to protect.
+**To enable enhanced security features on one subscription**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
+
+1. From Defender for Cloud's main menu, select **Environment settings**.
- 1. Select **Enable all** to upgrade.
+1. Select the subscription or workspace that you want to protect.
- 1. Select **Save**.
+1. Select **Enable all** to enable all of the plans for Defender for Cloud.
- :::image type="content" source="./media/enhanced-security-features-overview/pricing-tier-page.png" alt-text="Defender for Cloud's pricing page in the portal" lightbox="media/enhanced-security-features-overview/pricing-tier-page.png":::
+ :::image type="content" source="./media/enhanced-security-features-overview/pricing-tier-page.png" alt-text="Screenshot of the Defender for Cloud's pricing page in the Azure portal." lightbox="media/enhanced-security-features-overview/pricing-tier-page.png":::
-- To enable enhanced security on multiple subscriptions or workspaces:
+1. Select **Save**.
+
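If you prefer to enable a plan from the command line, the Azure CLI exposes the same per-plan pricing tiers. The following is a minimal sketch, assuming the Defender for Servers plan (the `VirtualMachines` plan name); swap in the plan name for other resource types:

```azurecli
# Enable the Defender for Servers plan (standard tier) on the current subscription.
az security pricing create --name "VirtualMachines" --tier "standard"

# Verify the plan's pricing tier.
az security pricing show --name "VirtualMachines"
```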
+**To enable enhanced security on multiple subscriptions or workspaces**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
- 1. From Defender for Cloud's menu, select **Getting started**.
+1. From Defender for Cloud's menu, select **Getting started**.
- The **Upgrade** tab lists subscriptions and workspaces eligible for onboarding.
+ The Upgrade tab lists subscriptions and workspaces eligible for onboarding.
- :::image type="content" source="./media/enable-enhanced-security/get-started-upgrade-tab.png" alt-text="Upgrade tab of the getting started page." lightbox="media/enable-enhanced-security/get-started-upgrade-tab.png":::
+ :::image type="content" source="./media/enable-enhanced-security/get-started-upgrade-tab.png" alt-text="Screenshot of the upgrade tab of the getting started page." lightbox="media/enable-enhanced-security/get-started-upgrade-tab.png":::
- 1. From the **Select subscriptions and workspaces to protect with Microsoft Defender for Cloud** list, select the subscriptions and workspaces to upgrade and select **Upgrade** to enable all Microsoft Defender for Cloud security features.
+1. Select the desired subscriptions and workspaces from the list.
- - If you select subscriptions and workspaces that aren't eligible for trial, the next step will upgrade them and charges will begin.
-
- - If you select a workspace that's eligible for a free trial, the next step will begin a trial.
+1. Select **Upgrade**.
- :::image type="content" source="./media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png" alt-text="Upgrade all selected workspaces and subscriptions from the getting started page." lightbox="media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png":::
+ :::image type="content" source="./media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png" alt-text="Screenshot that shows where the upgrade button is located on the screen." lightbox="media/enable-enhanced-security/upgrade-selected-workspaces-and-subscriptions.png":::
+
+ > [!NOTE]
+ > - If you select subscriptions and workspaces that aren't eligible for trial, the next step will upgrade them and charges will begin.
+ > - If you select a workspace that's eligible for a free trial, the next step will begin a trial.
+
+## Customize plans
+
+Certain plans allow you to customize your protection.
+
+You can learn about the differences between the [Defender for Servers plans](defender-for-servers-introduction.md#available-defender-for-server-plans) to help you choose which one you would like to apply to your subscription.
+
+Defender for Databases allows you to [select which type of resources you want to protect](quickstart-enable-database-protections.md). You can learn about the different types of protections offered.
+
+Defender for Containers is available on hybrid and multicloud environments. You can learn more about the [enablement process](defender-for-containers-enable.md) for Defender for Containers for each environment type.
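Before customizing individual plans, it can help to review which plans are already enabled on the subscription. One way to do that, assuming the Azure CLI's `az security pricing` commands, is:

```azurecli
# List every Defender plan on the current subscription with its pricing tier.
az security pricing list --output table
```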
## Disable enhanced security features
-If you need to disable enhanced security features for a subscription, the procedure is the same but you select **Enhanced security off**:
+If you choose to disable the enhanced security features for a subscription, you just need to set the relevant plans to **Off**.
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the relevant subscription.
-1. Find the plan you wish to turn off and select **off**.
+**To disable enhanced security features**:
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
+
+1. From Defender for Cloud's menu, select **Environment settings**.
+
+1. Select the relevant subscriptions and workspaces.
+
+1. Find the plan you wish to turn off and select **Off**.
- :::image type="content" source="./media/enable-enhanced-security/disable-plans.png" alt-text="Enable or disable Defender for Cloud's enhanced security features." lightbox="media/enable-enhanced-security/disable-plans.png":::
+ :::image type="content" source="./media/enable-enhanced-security/disable-plans.png" alt-text="Screenshot that shows you how to enable or disable Defender for Cloud's enhanced security features." lightbox="media/enable-enhanced-security/disable-plans.png":::
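If you script the change instead, the same `az security pricing` command used to enable a plan can return it to the free tier. A sketch, again assuming the Defender for Servers plan name:

```azurecli
# Return the Defender for Servers plan to the free tier for the current subscription.
az security pricing create --name "VirtualMachines" --tier "free"
```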
> [!NOTE] > After you disable enhanced security features - whether you disable a single plan or all plans at once - data collection may continue for a short period of time.
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the enhanced security features of Microsoft Defender for Cloud
+ Title: Understand the basic and enhanced security features of Microsoft Defender for Cloud
description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud Previously updated : 06/29/2022 Last updated : 07/11/2022
-# Microsoft Defender for Cloud's enhanced security features
+# Microsoft Defender for Cloud's basic and enhanced security features
-The enhanced security features are free for the first 30 days. At the end of 30 days, if you decide to continue using the service, we'll automatically start charging for usage.
+Defender for Cloud offers a number of enhanced security features that can help protect your organization against threats and attacks.
-You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+- **Basic security features** (Free) - When you open Defender for Cloud in the Azure portal for the first time or if you enable it through the API, Defender for Cloud is enabled for free on all your Azure subscriptions. By default, Defender for Cloud provides the [secure score](secure-score-security-controls.md), [security policy and basic recommendations](security-policy-concept.md), and [network security assessment](protect-network-resources.md) to help you protect your Azure resources.
-## What are the benefits of enabling enhanced security features?
+ If you want to try out the enhanced security features, [enable enhanced security features](enable-enhanced-security.md) for free for the first 30 days. At the end of 30 days, if you decide to continue using the service, we'll automatically start charging for usage. For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-Defender for Cloud is offered in two modes:
--- **Without enhanced security features** (Free) - Defender for Cloud is enabled for free on all your Azure subscriptions when you visit the workload protection dashboard in the Azure portal for the first time, or if enabled programmatically via API. Using this free mode provides the secure score and its related features: security policy, continuous security assessment, and actionable security recommendations to help you protect your Azure resources.--- **Defender for Cloud with all enhanced security features** - Enabling enhanced security extends the capabilities of the free mode to workloads running in private and other public clouds, providing unified security management and threat protection across your hybrid cloud workloads. Some of the major benefits include:
+- **Enhanced security features** (Paid) - When you enable the enhanced security features, Defender for Cloud can provide unified security management and threat protection across your hybrid cloud workloads, including:
- **Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender) for comprehensive endpoint detection and response (EDR). Learn more about the benefits of using Microsoft Defender for Endpoint together with Defender for Cloud in [Use Defender for Cloud's integrated EDR solution](integration-defender-for-endpoint.md). - **Vulnerability assessment for virtual machines, container registries, and SQL resources** - Easily enable vulnerability assessment solutions to discover, manage, and resolve vulnerabilities. View, investigate, and remediate the findings directly from within Defender for Cloud.
Defender for Cloud is offered in two modes:
- **Hybrid security** – Get a unified view of security across all of your on-premises and cloud workloads. Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions.
- **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyber-attacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, data stores (SQL servers hosted inside and outside Azure, Azure SQL databases, Azure SQL Managed Instance, and Azure Storage) and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence.
- **Track compliance with a range of standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in [Azure Security Benchmark](/security/benchmark/azure/introduction). When you enable the enhanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the [regulatory compliance dashboard](update-regulatory-compliance-packages.md).
- - **Access and application controls** - Block malware and other unwanted applications by applying machine learning powered recommendations adapted to your specific workloads to create allow and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application controls drastically reduce exposure to brute force and other network attacks.
+ - **Access and application controls** - Block malware and other unwanted applications by applying machine learning powered recommendations adapted to your specific workloads to create allowlists and blocklists. Reduce the network attack surface with just-in-time, controlled access to management ports on Azure VMs. Access and application controls drastically reduce exposure to brute force and other network attacks.
- **Container security features** - Benefit from vulnerability management and real-time threat protection on your containerized environments. Charges are based on the number of unique container images pushed to your connected registry. After an image has been scanned once, you won't be charged for it again unless it's modified and pushed once more. - **Breadth threat protection for resources connected to Azure** - Cloud-native threat protection for the Azure services common to all of your resources: Azure Resource Manager, Azure DNS, Azure network layer, and Azure Key Vault. Defender for Cloud has unique visibility into the Azure management layer and the Azure DNS layer, and can therefore protect cloud resources that are connected to those layers.
Defender for Cloud is offered in two modes:
- [What are the plans offered by Defender for Cloud?](#what-are-the-plans-offered-by-defender-for-cloud) - [How do I enable Defender for Cloud's enhanced security for my subscription?](#how-do-i-enable-defender-for-clouds-enhanced-security-for-my-subscription) - [Can I enable Microsoft Defender for Servers on a subset of servers in my subscription?](#can-i-enable-microsoft-defender-for-servers-on-a-subset-of-servers-in-my-subscription)-- [If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
+- [If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?](#if-i-already-have-a-license-for-microsoft-defender-for-endpoint-can-i-get-a-discount-for-defender-for-servers)
- [My subscription has Microsoft Defender for Servers enabled, do I pay for not-running servers?](#my-subscription-has-microsoft-defender-for-servers-enabled-do-i-pay-for-not-running-servers) - [Will I be charged for machines without the Log Analytics agent installed?](#will-i-be-charged-for-machines-without-the-log-analytics-agent-installed) - [If a Log Analytics agent reports to multiple workspaces, will I be charged twice?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-will-i-be-charged-twice)-- [If a Log Analytics agent reports to multiple workspaces, is the 500 MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)-- [Is the 500 MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)-- [What data types are included in the 500 MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
+- [If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?](#if-a-log-analytics-agent-reports-to-multiple-workspaces-is-the-500-mb-free-data-ingestion-available-on-all-of-them)
+- [Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?](#is-the-500-mb-free-data-ingestion-calculated-for-an-entire-workspace-or-strictly-per-machine)
+- [What data types are included in the 500-MB data daily allowance?](#what-data-types-are-included-in-the-500-mb-data-daily-allowance)
- [How can I monitor my daily usage](#how-can-i-monitor-my-daily-usage) ### How can I track who in my organization enabled a Microsoft Defender plan in Defender for Cloud?
No. When you enable [Microsoft Defender for Servers](defender-for-servers-introd
An alternative is to enable Microsoft Defender for Servers at the Log Analytics workspace level. If you do this, only servers reporting to that workspace will be protected and billed. However, several capabilities will be unavailable. These include Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more.
-### If I already have a license for Microsoft Defender for Endpoint can I get a discount for Defender for Servers?
+### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Defender for Servers?
If you've already got a license for **Microsoft Defender for Endpoint for Servers Plan 2**, you won't have to pay for that part of your Microsoft Defender for Servers license. Learn more about [this license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
No. When you enable [Microsoft Defender for Servers](defender-for-servers-introd
|--|--|--|
| Starting | VM is starting up. | Not billed |
| Running | Normal working state for a VM | Billed |
-| Stopping | This is a transitional state. When completed, it will show as Stopped. | Billed |
+| Stopping | This state is transitional. When completed, it will show as Stopped. | Billed |
| Stopped | The VM has been shut down from within the guest OS or using the PowerOff APIs. Hardware is still allocated to the VM and it remains on the host. | Billed |
-| Deallocating | Transitional state. When completed, the VM will show as Deallocated. | Not billed |
+| Deallocating | This state is transitional. When completed, the VM will show as Deallocated. | Not billed |
| Deallocated | The VM has been stopped successfully and removed from the host. | Not billed |

:::image type="content" source="media/enhanced-security-features-overview/deallocated-virtual-machines.png" alt-text="Azure Virtual Machines showing a deallocated machine.":::

### If I enable Defender for Cloud's Servers plan on the subscription level, do I need to enable it on the workspace level?
-When you enable the Servers plan on the subscription level, Defender for Cloud will enable the Servers plan on your default workspace(s) automatically when auto-provisioning is enabled. This can be accomplished on the Auto provisioning page by selecting **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
+When you enable the Servers plan on the subscription level, Defender for Cloud will enable the Servers plan on your default workspace(s) automatically when auto-provisioning is enabled. Enable auto-provisioning on the Auto provisioning page by selecting the **Connect Azure VMs to the default workspace(s) created by Defender for Cloud** option and selecting **Apply**.
-However, if you're using a custom workspace in place of the default workspace, you'll need to enable the Servers plan on all of your custom workspaces that do not have it enabled.
+However, if you're using a custom workspace in place of the default workspace, you'll need to enable the Servers plan on all of your custom workspaces that don't have it enabled.
-If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation will give you the option to enable the servers plan on the workspace level with the Fix button. Until the workspace has the Servers plan enabled, any connected VM will not benefit from the full security coverage (Microsoft Defender for Endpoint, VA solution (TVM/Qualys), just-in-time VM access, and more) offered by the Defender for Cloud, but will still incur the cost.
+If you're using a custom workspace and enable the plan on the subscription level only, the `Microsoft Defender for servers should be enabled on workspaces` recommendation will appear on the Recommendations page. This recommendation will give you the option to enable the servers plan on the workspace level with the Fix button. You're charged for all VMs in the subscription even if the Servers plan isn't enabled for the workspace. The VMs won't benefit from features that depend on the Log Analytics workspace, such as Microsoft Defender for Endpoint, VA solution (TVM/Qualys), and Just-in-Time VM access.
-Enabling the Servers plan on both the subscription and its connected workspaces, will not incur a double charge. The system will identify each unique VM.
+Enabling the Servers plan on both the subscription and its connected workspaces won't incur a double charge. The system will identify each unique VM.
-If you enable the Servers plan on cross-subscription workspaces, all connected VMs, even those from subscriptions that it was not enabled on, will be billed.
+If you enable the Servers plan on cross-subscription workspaces, connected VMs from all subscriptions will be billed, including subscriptions that don't have the Servers plan enabled.
### Will I be charged for machines without the Log Analytics agent installed?
-Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, the machines in that subscription get a range of protections even if you haven't installed the Log Analytics agent. This is applicable for Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers.
+Yes. When you enable [Microsoft Defender for Servers](defender-for-servers-introduction.md) on a subscription, you're charged for all machines in the subscription, including Azure virtual machines, Azure virtual machine scale sets instances, and Azure Arc-enabled servers. Machines that don't have Log Analytics installed are covered by protections that don't depend on the Log Analytics agent.
### If a Log Analytics agent reports to multiple workspaces, will I be charged twice? If a machine reports to multiple workspaces, and all of them have Defender for Servers enabled, the machine will be billed for each attached workspace.
-### If a Log Analytics agent reports to multiple workspaces, is the 500 MB free data ingestion available on all of them?
+### If a Log Analytics agent reports to multiple workspaces, is the 500-MB free data ingestion available on all of them?
-Yes. If you've configured your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get 500 MB free data ingestion. It's calculated per node, per reported workspace, per day, and available for every workspace that has a 'Security' or 'AntiMalware' solution installed. You'll be charged for any data ingested over the 500 MB limit.
+Yes. If you configure your Log Analytics agent to send data to two or more different Log Analytics workspaces (multi-homing), you'll get 500-MB free data ingestion for each workspace. It's calculated per node, per reported workspace, per day, and available for every workspace that has a 'Security' or 'AntiMalware' solution installed. You'll be charged for any data ingested over the 500-MB limit.
-### Is the 500 MB free data ingestion calculated for an entire workspace or strictly per machine?
+### Is the 500-MB free data ingestion calculated for an entire workspace or strictly per machine?
-You'll get 500 MB free data ingestion per day, for every VM connected to the workspace. Specifically for the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) that are directly collected by Defender for Cloud.
+You'll get 500-MB free data ingestion per day, for every VM connected to the workspace. Specifically for the [security data types](#what-data-types-are-included-in-the-500-mb-data-daily-allowance) that are directly collected by Defender for Cloud.
-This data is a daily rate averaged across all nodes. Your total daily free limit is equal to **[number of machines] x 500 MB**. So even if some machines send 100-MB and others send 800-MB, if the total doesn't exceed your total daily free limit, you won't be charged extra.
+This data is a daily rate averaged across all nodes. Your total daily free limit is equal to **[number of machines] x 500 MB**. So even if some machines send 100 MB and others send 800 MB, if the total doesn't exceed your total daily free limit, you won't be charged extra.
-### What data types are included in the 500 MB data daily allowance?
+### What data types are included in the 500-MB data daily allowance?
Defender for Cloud's billing is closely tied to the billing for Log Analytics. [Microsoft Defender for Servers](defender-for-servers-introduction.md) provides a 500 MB/node/day allocation for machines against the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security): - [SecurityAlert](/azure/azure-monitor/reference/tables/securityalert)
You can also view estimated costs under different pricing tiers by selecting :::
You can learn how to [Analyze usage in Log Analytics workspace](../azure-monitor/logs/analyze-usage.md).
-Based on your usage, you won't be billed until you've used your daily allowance. If you're receiving a bill, it's only for the data used after the 500mb has been consumed, or for other service that does not fall under the coverage of Defender for Cloud.
+Based on your usage, you won't be billed until you've used your daily allowance. If you're receiving a bill, it's only for the data used after the 500-MB limit is reached, or for other services that don't fall under the coverage of Defender for Cloud.
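To see how much billable data each data type is ingesting against the daily allowance, you can query the workspace's `Usage` table. The following is an illustrative sketch using the Azure CLI's Log Analytics query command; the workspace GUID is a placeholder:

```azurecli
# Summarize yesterday's billable ingestion (in MB) by data type.
# Replace <workspace-guid> with your Log Analytics workspace ID.
az monitor log-analytics query \
    --workspace "<workspace-guid>" \
    --analytics-query "Usage | where TimeGenerated > ago(1d) | where IsBillable == true | summarize IngestedMB = sum(Quantity) by DataType | sort by IngestedMB desc"
```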
## Next steps This article explained Defender for Cloud's pricing options. For related material, see:
defender-for-cloud Episode Fifteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fifteen.md
+
+ Title: Remediate security recommendations with governance
+
+description: Learn about the new governance feature in Defender for Cloud, and how to drive security posture improvement.
+ Last updated : 07/14/2022++
+# Remediate security recommendations with governance
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Amit Biton joins Yuri Diogenes to talk about the new governance feature in Defender for Cloud. Amit explains the rationale behind the feature, why it's important to have governance in place to drive security posture improvement, and how the feature can help with that. He also demonstrates how to create governance rules and how to monitor and take action to improve the secure score.
++
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=ceb3ef0e-257a-466a-9e90-dcfb08f54f8e" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:14](/shows/mdc-in-the-field/remediate-security-with-governance#time=01m14s) - What is the Governance feature?
+
+- [05:54](/shows/mdc-in-the-field/remediate-security-with-governance#time=05m54s) - What are the permissions required to configure Governance rules?
+
+- [06:51](/shows/mdc-in-the-field/remediate-security-with-governance#time=06m51s) - How workload owners receive notifications
+
+- [10:13](/shows/mdc-in-the-field/remediate-security-with-governance#time=10m13s) - Understanding grace period
+
+- [15:20](/shows/mdc-in-the-field/remediate-security-with-governance#time=15m20s) - Enabling Governance at scale
+
+- [16:25](/shows/mdc-in-the-field/remediate-security-with-governance#time=16m25s) - Demonstration
+
+## Recommended resources
+
+[Driving your organization to remediate security issues with recommendation governance in Microsoft Defender for Cloud](governance-rules.md).
+
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqa0ZoTml2Qm9kZ2pjRzNMUXFqVUwyNl80YVNtd3xBQ3Jtc0trVm9QM2Z0NlpOeC1KSUE2UEd1cVJ5aHQ0MTN6WjJEYmNlOG9rWC1KZ1ZqaTNmcHdOOHMtWXRLSGhUTVBhQlhhYzlUc2xmTHZtaUpkd1c4LUQzLWt1YmRTbkVQVE5EcTJIM0Foc042SGdQZU5acVRJbw&q=https%3A%2F%2Faka.ms%2FSubscribeMicrosoftSecurity)
+
+- Follow us on social media:
+ [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Fourteen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-fourteen.md
Title: Defender for Servers deployment in AWS and GCP+ description: Learn about the capabilities available for Defender for Servers deployment within AWS and GCP. Last updated 06/26/2022
Last updated 06/26/2022
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Remediate Security Recommendations with Governance](episode-fifteen.md)
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
Previously updated : 05/29/2022 Last updated : 07/03/2022 # Drive your organization to remediate security recommendations with governance
Security teams are responsible for improving the security posture of their organ
Stay on top of the progress of the recommendations in your security posture. Weekly email notifications to the owners and managers make sure that they take timely action on the recommendations that can improve your security posture.
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Remediate Security Recommendations with Governance](episode-fifteen.md)
+ ## Building an automated process for improving security with governance rules To make sure your organization is systematically improving its security posture, you can define rules that assign an owner and set the due date for resources in the specified recommendations. That way resource owners have a clear set of tasks and deadlines for remediating recommendations.
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
Title: Implement security recommendations in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Implement security recommendations in Microsoft Defender for Cloud
description: This article explains how to respond to recommendations in Microsoft Defender for Cloud to protect your resources and satisfy security policies.
defender-for-cloud Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents.md
Title: Manage security incidents in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Manage security incidents in Microsoft Defender for Cloud
description: This document helps you to use Microsoft Defender for Cloud to manage security incidents.
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
Title: Just-in-time virtual machine access in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Just-in-time virtual machine access in Microsoft Defender for Cloud
description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cloud helps you control access to your Azure virtual machines. Last updated 05/17/2022
defender-for-cloud Managing And Responding Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/managing-and-responding-alerts.md
Title: Manage security alerts in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Manage security alerts in Microsoft Defender for Cloud
description: This document helps you to use Microsoft Defender for Cloud capabilities to manage and respond to security alerts. Previously updated : 04/24/2022 Last updated : 07/14/2022 # Manage and respond to security alerts in Microsoft Defender for Cloud
This topic shows you how to view and process Defender for Cloud's alerts and pro
Advanced detections that trigger security alerts are only available with Microsoft Defender for Cloud's enhanced security features enabled. A free trial is available. To upgrade, see [Enable enhanced protections](enable-enhanced-security.md). ## What are security alerts?
-Defender for Cloud automatically collects, analyzes, and integrates log data from your Azure resources, the network, and connected partner solutions - like firewall and endpoint protection solutions - to detect real threats and reduce false positives. A list of prioritized security alerts is shown in Defender for Cloud along with the information you need to quickly investigate the problem and steps to take to remediate an attack.
+
+Defender for Cloud collects, analyzes, and integrates log data from your Azure resources, the network, and connected partner solutions, such as firewalls and endpoint agents. Defender for Cloud uses the log data to detect real threats and reduce false positives. A list of prioritized security alerts is shown in Defender for Cloud along with the information you need to quickly investigate the problem and steps to take to remediate an attack.
To learn about the different types of alerts, see [Security alerts - a reference guide](alerts-reference.md).
For an overview of how Defender for Cloud generates alerts, see [How Microsoft D
:::image type="content" source="./media/managing-and-responding-alerts/alerts-adding-filters-small.png" alt-text="Adding filters to the alerts view." lightbox="./media/managing-and-responding-alerts/alerts-adding-filters-large.png":::
- The list updates according to the filtering options you've selected. Filtering can be very helpful. For example, you might you want to address security alerts that occurred in the last 24 hours because you are investigating a potential breach in the system.
+ The list updates according to the filtering options you've selected. For example, you might want to address security alerts that occurred in the last 24 hours because you're investigating a potential breach in the system.
## Respond to security alerts
For an overview of how Defender for Cloud generates alerts, see [How Microsoft D
1. For further information, select **View full details**.
- The left pane of the security alert page shows high-level information regarding the security alert: title, severity, status, activity time, description of the suspicious activity, and the affected resource. Alongside the affected resource are the Azure tags relevant to the resource. Use these to infer the organizational context of the resource when investigating the alert.
+ The left pane of the security alert page shows high-level information regarding the security alert: title, severity, status, activity time, description of the suspicious activity, and the affected resource. The Azure tags for the affected resource help you understand the organizational context of the resource.
The right pane includes the **Alert details** tab containing further details of the alert to help you investigate the issue: IP addresses, files, processes, and more.
The alerts list includes checkboxes so you can handle multiple alerts at once. F
In this example, we've selected all alerts with severity of 'Informational' for the resource 'ASC-AKS-CLOUD-TALK'.
- :::image type="content" source="media/managing-and-responding-alerts/processing-alerts-bulk-filter.png" alt-text="Screenshot of filtering the alerts to the list of those to handle together.":::
+ :::image type="content" source="media/managing-and-responding-alerts/processing-alerts-bulk-filter.png" alt-text="Screenshot of filtering the alerts to show related alerts.":::
1. Use the checkboxes to select the alerts to be processed - or use the checkbox at the top of the list to select them all.
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Title: Platforms supported by Microsoft Defender for Cloud | Microsoft Docs
+ Title: Platforms supported by Microsoft Defender for Cloud
description: This document provides a list of platforms supported by Microsoft Defender for Cloud. Last updated 11/09/2021
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Title: Integrate security solutions in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Integrate security solutions in Microsoft Defender for Cloud
description: Learn about how Microsoft Defender for Cloud integrates with partners to enhance the overall security of your Azure resources. Previously updated : 11/09/2021 Last updated : 07/14/2022 # Integrate security solutions in Microsoft Defender for Cloud
Currently, integrated security solutions include vulnerability assessment by [Qu
> [!NOTE] > Defender for Cloud does not install the Log Analytics agent on partner virtual appliances because most security vendors prohibit external agents running on their appliances.
-To learn more about the integration of vulnerability scanning tools from Qualys, including a built-in scanner available to customers who've enabled Microsoft Defender for Servers, see [Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md).
+Learn more about the integration of [vulnerability scanning tools from Qualys](deploy-vulnerability-assessment-vm.md), including a built-in scanner available to customers who enable Microsoft Defender for Servers.
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
-* Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
+- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
## How security solutions are integrated Azure security solutions that are deployed from Defender for Cloud are automatically connected. You can also connect other security data sources, including computers running on-premises or in other clouds.
The **Add data sources** section includes other available data sources that can
## Next steps
-In this article, you learned how to integrate partner solutions in Defender for Cloud. To learn how to setup an integration with Microsoft Sentinel, or any other SIEM, see [Continuously export Defender for Cloud data](continuous-export.md).
+In this article, you learned how to integrate partner solutions in Defender for Cloud. To learn how to set up an integration with Microsoft Sentinel, or any other SIEM, see [Continuously export Defender for Cloud data](continuous-export.md).
defender-for-cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md
Title: Permissions in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Permissions in Microsoft Defender for Cloud
description: This article explains how Microsoft Defender for Cloud uses role-based access control to assign permissions to users and identify the permitted actions for each role. Last updated 05/22/2022
defender-for-cloud Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/privacy.md
Title: Manage user data in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Manage user data in Microsoft Defender for Cloud
description: Learn how to manage the user data in Microsoft Defender for Cloud. Managing user data includes the ability to access, delete, or export data. Last updated 11/09/2021
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
# Connect your AWS accounts to Microsoft Defender for Cloud
-With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same.
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
-Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+To protect your AWS-based resources, you can connect an AWS account with either:
-To protect your AWS-based resources, you can connect an account with one of two mechanisms:
+- **Native cloud connector** (recommended) - Provides an agentless connection to your AWS account that you can extend with Defender for Cloud's Defender plans to secure your AWS resources:
-- **Classic cloud connectors experience** - As part of the initial multicloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP projects. If you've already configured an AWS connector through the classic cloud connectors experience, we recommend deleting these connectors (as explained in [Remove classic connectors](#remove-classic-connectors)), and connecting the account again using the newer mechanism. If you don't do this before creating the new connector through the environment settings page, do so afterwards to avoid seeing duplicate recommendations.
+ - [**Cloud Security Posture Management (CSPM)**](overview-page.md) assesses your AWS resources according to AWS-specific security recommendations and reflects your security posture in your secure score. The [asset inventory](asset-inventory.md) gives you one place to see all of your protected AWS resources. The [regulatory compliance dashboard](regulatory-compliance-dashboard.md) shows your compliance with built-in standards specific to AWS, including AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices.
+ - [**Microsoft Defender for Servers**](defender-for-servers-introduction.md) brings threat detection and advanced defenses to [supported Windows and Linux EC2 instances](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multicloud).
+ - [**Microsoft Defender for Containers**](defender-for-containers-introduction.md) brings threat detection and advanced defenses to [supported Amazon EKS clusters](supported-machines-endpoint-solutions-clouds-containers.md).
+ - [**Microsoft Defender for SQL**](defender-for-sql-introduction.md) brings threat detection and advanced defenses to your SQL Servers running on AWS EC2 and AWS RDS Custom for SQL Server.
-- **Environment settings page** (recommended) - This page provides a greatly improved, simpler, onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your AWS resources:-
- - **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources.
- - **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Amazon EKS clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
- - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multicloud).
- - **Microsoft Defender for SQL** brings threat detection and advanced defenses to your SQL Servers running on AWS EC2, AWS RDS Custom for SQL Server. This plan includes the advanced threat protection and vulnerability assessment scanning. You can view the [full list of available features](defender-for-sql-introduction.md).
+- **Classic cloud connector** - Requires configuration in your AWS account to create a user that Defender for Cloud can use to connect to your AWS environment. If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors) and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations.
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
You can learn more by watching this video from the Defender for Cloud in the Field video series:

- [New AWS connector](episode-one.md)

::: zone pivot="env-settings"

## Availability
You can learn more by watching this video from the Defender for Cloud in the Fie
|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription. <br> **Administrator** on the AWS account.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
-
## Prerequisites
+The native cloud connector requires:
+ - Access to an AWS account.

- **To enable the Defender for Containers plan**, you'll need:
You can learn more by watching this video from the Defender for Cloud in the Fie
## Connect your AWS account
-Follow the steps below to create your AWS cloud connector.
+**To connect your AWS account to Defender for Cloud with a native connector**:
-### Remove 'classic' connectors
+1. If you have any classic connectors, [remove them](#remove-classic-connectors).
-If you have any existing connectors created with the classic cloud connectors experience, remove them first:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Defender for Cloud** > **Environment settings**.
-
-1. Select the option to switch back to the classic connectors experience.
-
- :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
-
-1. For each connector, select the three dot button **…** at the end of the row, and select **Delete**.
-
-1. On AWS, delete the role ARN, or the credentials created for the integration.
-
-### Create a new connector
-
-**To create a new connector**:
+ Using both the classic and native connectors can produce duplicate recommendations.
1. Sign in to the [Azure portal](https://portal.azure.com).
If you have any existing connectors created with the classic cloud connectors ex
Defender for Cloud will immediately start scanning your AWS resources and you'll see security recommendations within a few hours. For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
+### Remove 'classic' connectors
+
+If you have any existing connectors created with the classic cloud connectors experience, remove them first:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Environment settings**.
+
+1. Select the option to switch back to the classic connectors experience.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
+
+1. For each connector, select the three dot button **…** at the end of the row, and select **Delete**.
+
+1. On AWS, delete the role ARN or the credentials created for the integration.
+ ::: zone-end
The following IAM permissions are needed to discover AWS resources:
| DataCollector | AWS Permissions |
|--|--|
-| API Gateway | apigateway:GET |
-| Application Auto Scaling | application-autoscaling:Describe* |
-| Auto scaling | autoscaling-plans:Describe* <br> autoscaling:Describe* |
-| Certificate manager | acm-pca:Describe* <br> acm-pca:List* <br> acm:Describe* <br>acm:List* |
-| CloudFormation | cloudformation:Describe* <br> cloudformation:List* |
-| CloudFront | cloudfront:DescribeFunction <br> cloudfront:GetDistribution <br> cloudfront:GetDistributionConfig <br>cloudfront:List* |
-| CloudTrail | cloudtrail:Describe* <br> cloudtrail:GetEventSelectors <br> cloudtrail:List* <br> cloudtrail:LookupEvents |
-| CloudWatch | cloudwatch:Describe* <br> cloudwatch:List* |
-| CloudWatch logs | logs:DescribeLogGroups <br> logs:DescribeMetricFilters |
-| CodeBuild | codebuild:DescribeCodeCoverages <br> codebuild:DescribeTestCases <br> codebuild:List* |
-| Config Service | config:Describe* <br> config:List* |
-| DMS – database migration service | dms:Describe* <br> dms:List* |
-| DAX | dax:Describe* |
-| DynamoDB | dynamodb:Describe* <br> dynamodb:List* |
-| Ec2 | ec2:Describe* <br> ec2:GetEbsEncryptionByDefault |
-| ECR | ecr:Describe* <br> ecr:List* |
-| ECS | ecs:Describe* <br> ecs:List* |
-| EFS | elasticfilesystem:Describe* |
-| EKS | eks:Describe* <br> eks:List* |
-| Elastic Beanstalk | elasticbeanstalk:Describe* <br> elasticbeanstalk:List* |
-| ELB – elastic load balancing (v1/2) | elasticloadbalancing:Describe* |
-| Elastic search | es:Describe* <br> es:List* |
-| EMR – elastic map reduce | elasticmapreduce:Describe* <br> elasticmapreduce:GetBlockPublicAccessConfiguration <br> elasticmapreduce:List* <br> elasticmapreduce:View* |
-| GuardDute | guardduty:DescribeOrganizationConfiguration <br> guardduty:DescribePublishingDestination <br> guardduty:List* |
-| IAM | iam:Generate* <br> iam:Get* <br> iam:List*<br> iam:Simulate* |
-| KMS | kms:Describe* <br> kms:List* |
-| LAMDBA | lambda:GetPolicy <br> lambda:List* |
-| Network firewall | network-firewall:DescribeFirewall <br> network-firewall:DescribeFirewallPolicy <br> network-firewall:DescribeLoggingConfiguration <br> network-firewall:DescribeResourcePolicy <br> network-firewall:DescribeRuleGroup <br> network-firewall:DescribeRuleGroupMetadata <br> network-firewall:ListFirewallPolicies <br> network-firewall:ListFirewalls <br> network-firewall:ListRuleGroups <br> network-firewall:ListTagsForResource |
-| RDS | rds:Describe* <br> rds:List* |
-| RedShift | redshift:Describe* |
-| S3 and S3Control | s3:DescribeJob <br> s3:GetEncryptionConfiguration <br> s3:GetBucketPublicAccessBlock <br> s3:GetBucketTagging <br> s3:GetBucketLogging <br> s3:GetBucketAcl <br> s3:GetBucketLocation <br> s3:GetBucketPolicy <br> s3:GetReplicationConfiguration <br> s3:GetAccountPublicAccessBlock <br> s3:GetObjectAcl <br> s3:GetObjectTagging <br> s3:List* |
-| SageMaker | sagemaker:Describe* <br> sagemaker:GetSearchSuggestions <br> sagemaker:List* <br> sagemaker:Search |
-| Secret manager | secrets
-| Simple notification service – SNS | sns:Check* <br> sns:List* |
-| SSM | ssm:Describe* <br> ssm:List* |
-| SQS | sqs:List* <br> sqs:Receive* |
-| STS | sts:GetCallerIdentity |
-| WAF | waf-regional:Get* <br> waf-regional:List* <br> waf:List* <br> wafv2:CheckCapacity <br> wafv2:Describe* <br> wafv2:List* |
+| API Gateway | `apigateway:GET` |
+| Application Auto Scaling | `application-autoscaling:Describe*` |
+| Auto Scaling | `autoscaling-plans:Describe*` <br> `autoscaling:Describe*` |
+| Certificate Manager | `acm-pca:Describe*` <br> `acm-pca:List*` <br> `acm:Describe*` <br> `acm:List*` |
+| CloudFormation | `cloudformation:Describe*` <br> `cloudformation:List*` |
+| CloudFront | `cloudfront:DescribeFunction` <br> `cloudfront:GetDistribution` <br> `cloudfront:GetDistributionConfig` <br> `cloudfront:List*` |
+| CloudTrail | `cloudtrail:Describe*` <br> `cloudtrail:GetEventSelectors` <br> `cloudtrail:List*` <br> `cloudtrail:LookupEvents` |
+| CloudWatch | `cloudwatch:Describe*` <br> `cloudwatch:List*` |
+| CloudWatch Logs | `logs:DescribeLogGroups` <br> `logs:DescribeMetricFilters` |
+| CodeBuild | `codebuild:DescribeCodeCoverages` <br> `codebuild:DescribeTestCases` <br> `codebuild:List*` |
+| Config Service | `config:Describe*` <br> `config:List*` |
+| DMS – database migration service | `dms:Describe*` <br> `dms:List*` |
+| DAX | `dax:Describe*` |
+| DynamoDB | `dynamodb:Describe*` <br> `dynamodb:List*` |
+| EC2 | `ec2:Describe*` <br> `ec2:GetEbsEncryptionByDefault` |
+| ECR | `ecr:Describe*` <br> `ecr:List*` |
+| ECS | `ecs:Describe*` <br> `ecs:List*` |
+| EFS | `elasticfilesystem:Describe*` |
+| EKS | `eks:Describe*` <br> `eks:List*` |
+| Elastic Beanstalk | `elasticbeanstalk:Describe*` <br> `elasticbeanstalk:List*` |
+| ELB – elastic load balancing (v1/2) | `elasticloadbalancing:Describe*` |
+| Elasticsearch | `es:Describe*` <br> `es:List*` |
+| EMR – elastic map reduce | `elasticmapreduce:Describe*` <br> `elasticmapreduce:GetBlockPublicAccessConfiguration` <br> `elasticmapreduce:List*` <br> `elasticmapreduce:View*` |
+| GuardDuty | `guardduty:DescribeOrganizationConfiguration` <br> `guardduty:DescribePublishingDestination` <br> `guardduty:List*` |
+| IAM | `iam:Generate*` <br> `iam:Get*` <br> `iam:List*` <br> `iam:Simulate*` |
+| KMS | `kms:Describe*` <br> `kms:List*` |
+| Lambda | `lambda:GetPolicy` <br> `lambda:List*` |
+| Network Firewall | `network-firewall:DescribeFirewall` <br> `network-firewall:DescribeFirewallPolicy` <br> `network-firewall:DescribeLoggingConfiguration` <br> `network-firewall:DescribeResourcePolicy` <br> `network-firewall:DescribeRuleGroup` <br> `network-firewall:DescribeRuleGroupMetadata` <br> `network-firewall:ListFirewallPolicies` <br> `network-firewall:ListFirewalls` <br> `network-firewall:ListRuleGroups` <br> `network-firewall:ListTagsForResource` |
+| RDS | `rds:Describe*` <br> `rds:List*` |
+| Redshift | `redshift:Describe*` |
+| S3 and S3Control | `s3:DescribeJob` <br> `s3:GetEncryptionConfiguration` <br> `s3:GetBucketPublicAccessBlock` <br> `s3:GetBucketTagging` <br> `s3:GetBucketLogging` <br> `s3:GetBucketAcl` <br> `s3:GetBucketLocation` <br> `s3:GetBucketPolicy` <br> `s3:GetReplicationConfiguration` <br> `s3:GetAccountPublicAccessBlock` <br> `s3:GetObjectAcl` <br> `s3:GetObjectTagging` <br> `s3:List*` |
+| SageMaker | `sagemaker:Describe*` <br> `sagemaker:GetSearchSuggestions` <br> `sagemaker:List*` <br> `sagemaker:Search` |
+| Secret manager | `secrets
+| Simple notification service – SNS | `sns:Check*` <br> `sns:List*` |
+| SSM | `ssm:Describe*` <br> `ssm:List*` |
+| SQS | `sqs:List*` <br> `sqs:Receive*` |
+| STS | `sts:GetCallerIdentity` |
+| WAF | `waf-regional:Get*` <br> `waf-regional:List*` <br> `waf:List*` <br> `wafv2:CheckCapacity` <br> `wafv2:Describe*` <br> `wafv2:List*` |
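
As a hedged illustration only (not taken from this article), the following AWS Tools for PowerShell sketch shows how a small subset of the permissions above might be granted through an IAM policy. The policy name and the selected actions are placeholders:

```powershell
# Illustrative only: grants a few of the discovery permissions from the table above.
# Requires the AWS.Tools.IdentityManagement module; the policy name is a placeholder.
$policyDocument = @'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "eks:Describe*",
        "eks:List*",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}
'@
New-IAMPolicy -PolicyName "DefenderForCloudDiscoveryExample" -PolicyDocument $policyDocument
```
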
## Learn more
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
# Connect your GCP projects to Microsoft Defender for Cloud
-With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same.
+With cloud workloads commonly spanning multiple cloud platforms, cloud security services must do the same. Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
-Microsoft Defender for Cloud protects workloads in Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
+To protect your GCP-based resources, you can connect a GCP project with either:
-To protect your GCP-based resources, you can connect an account in two different ways:
+- **Native cloud connector** (recommended) - Provides an agentless connection to your GCP account that you can extend with Defender for Cloud's Defender plans to secure your GCP resources:
-- **Classic cloud connectors experience** - As part of the initial multicloud offering, we introduced these cloud connectors as a way to connect your AWS and GCP projects.
+ - [**Cloud Security Posture Management (CSPM)**](overview-page.md) assesses your GCP resources according to GCP-specific security recommendations and reflects your security posture in your secure score. The resources are shown in Defender for Cloud's [asset inventory](asset-inventory.md) and are assessed for compliance with built-in standards specific to GCP.
+ - [**Microsoft Defender for Servers**](defender-for-servers-introduction.md) brings threat detection and advanced defenses to [supported Windows and Linux GCP VM instances](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multicloud). This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
+ - [**Microsoft Defender for Containers**](defender-for-containers-introduction.md) brings threat detection and advanced defenses to [supported Google Kubernetes Engine (GKE) Standard clusters](supported-machines-endpoint-solutions-clouds-containers.md). This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more.
+ - [**Microsoft Defender for SQL**](defender-for-sql-introduction.md) brings threat detection and advanced defenses to your SQL Servers running on GCP compute engine instances, including advanced threat protection and vulnerability assessment scanning.
-- **Environment settings page** (Recommended) - This page provides the onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your GCP resources:-
- - **Defender for Cloud's CSPM features** extends to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your GCP resources alongside your Azure resources.
- - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md)
- - **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
- - **Microsoft Defender for SQL** brings threat detection and advanced defenses to your SQL Servers running on GCP compute engine instances. This plan includes the advanced threat protection and vulnerability assessment scanning. You can view the [full list of available features](defender-for-sql-introduction.md).
+- **Classic cloud connector** - Requires configuration in your GCP project to create a user that Defender for Cloud can use to connect to your GCP environment. If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors) and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations.
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
To protect your GCP-based resources, you can connect an account in two different
|Required roles and permissions:| **Contributor** on the relevant Azure Subscription <br> **Owner** on the GCP organization or project|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)|
-## Remove 'classic' connectors
-
-If you have any existing connectors created with the classic cloud connectors experience, remove them first:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Navigate to **Defender for Cloud** > **Environment settings**.
-
-1. Select the option to switch back to the classic connectors experience.
-
- :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
-
-1. For each connector, select the three dot button at the end of the row, and select **Delete**.
-
## Connect your GCP projects

When connecting your GCP projects to specific Azure subscriptions, consider the [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#resource-hierarchy-detail) and these guidelines:
When connecting your GCP projects to specific Azure subscriptions, consider the
Follow the steps below to create your GCP cloud connector.
-**To connect your GCP project**:
+**To connect your GCP project to Defender for Cloud with a native connector**:
1. Sign in to the [Azure portal](https://portal.azure.com).
After creating a connector, a scan will start on your GCP environment. New recom
## (Optional) Configure selected plans
-By default, all plans are toggled to `On`, on the plans select screen.
+By default, all plans are `On`. You can disable plans that you don't need.
:::image type="content" source="media/quickstart-onboard-gcp/toggle-plans-to-on.png" alt-text="Screenshot showing that all plans are toggled on.":::
Microsoft Defender for Containers brings threat detection, and advanced defenses
1. Continue from step 8 of the [Connect your GCP projects](#connect-your-gcp-projects) instructions.
+### Remove 'classic' connectors
+
+If you have any existing connectors created with the classic cloud connectors experience, remove them first:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **Environment settings**.
+
+1. Select the option to switch back to the classic connectors experience.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
+
+1. For each connector, select the three dot button at the end of the row, and select **Delete**.
+ ::: zone-end ::: zone pivot="classic-connector"
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
The new security agent is a Kubernetes DaemonSet, based on eBPF technology and i
The security agent enablement is available through auto-provisioning, recommendations flow, AKS RP or at scale using Azure Policy.
-You can [deploy the Defender profile](/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-aks#deploy-the-defender-profile) today on your AKS clusters.
+You can [deploy the Defender profile](./defender-for-containers-enable.md?pivots=defender-for-container-aks&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#deploy-the-defender-profile) today on your AKS clusters.
With this announcement, the runtime protection - threat detection (workload) is now also generally available.
Learn how to [enable your database security at the subscription level](quickstar
### Threat protection for Google Kubernetes Engine (GKE) clusters
-Following our recent announcement [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances), Microsoft Defender for Containers has extended its Kubernetes threat protection, behavioral analytics, and built-in admission control policies to Google's Kubernetes Engine (GKE) Standard clusters. You can easily onboard any existing, or new GKE Standard clusters to your environment through our Automatic onboarding capabilities. Check out [Container security with Microsoft Defender for Cloud](defender-for-containers-introduction.md#vulnerability-assessment), for a full list of available features.
+Following our recent announcement [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances), Microsoft Defender for Containers has extended its Kubernetes threat protection, behavioral analytics, and built-in admission control policies to Google's Kubernetes Engine (GKE) Standard clusters. You can easily onboard any existing, or new GKE Standard clusters to your environment through our Automatic onboarding capabilities. Check out [Container security with Microsoft Defender for Cloud](defender-for-containers-introduction.md#vulnerability-assessment), for a full list of available features.
defender-for-cloud Security Center Readiness Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-center-readiness-roadmap.md
Title: Defender for Cloud Readiness Roadmap | Microsoft Docs
+ Title: Defender for Cloud Readiness Roadmap
description: This document provides you a readiness roadmap to ramp up on Defender for Cloud. Last updated 11/09/2021
defender-for-cloud Threat Intelligence Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/threat-intelligence-reports.md
Title: Microsoft Defender for Cloud threat intelligence report | Microsoft Docs
+ Title: Microsoft Defender for Cloud threat intelligence report
description: This page helps you to use Microsoft Defender for Cloud threat intelligence reports during an investigation to find more information about security alerts Last updated 11/09/2021
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Title: Microsoft Defender for Cloud troubleshooting guide | Microsoft Docs
+ Title: Microsoft Defender for Cloud troubleshooting guide
description: This guide is for IT professionals, security analysts, and cloud admins who need to troubleshoot Microsoft Defender for Cloud related issues. Last updated 12/26/2021
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
Title: Working with security policies | Microsoft Docs
+ Title: Working with security policies
description: Learn how to work with security policies in Microsoft Defender for Cloud. Last updated 01/25/2022
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
Title: Workflow automation in Microsoft Defender for Cloud | Microsoft Docs
+ Title: Workflow automation in Microsoft Defender for Cloud
description: Learn how to create and automate workflows in Microsoft Defender for Cloud Last updated 06/26/2022
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
This procedure describes how to create a virtual machine by using Hyper-V.
**To create the virtual machine using Hyper-V**:
-1. Create a virtual disk in Hyper-V Manager.
+1. Create a virtual disk in Hyper-V Manager (Fixed size, as required by the hardware profile).
1. Select **format = VHDX**.
-1. Select **type = Dynamic Expanding**.
- 1. Enter the name and location for the VHD.
-1. Enter the required size [according to your organization's needs](../ot-appliance-sizing.md).
+1. Enter the required size [according to your organization's needs](../ot-appliance-sizing.md), and select the **Fixed size** disk type.
1. Review the summary, and select **Finish**.
This procedure describes how to create a virtual machine by using Hyper-V.
1. Select **Specify Generation** > **Generation 1**.
-1. Specify the memory allocation [according to your organization's needs](../ot-appliance-sizing.md), and select the check box for dynamic memory.
+1. Specify the memory allocation [according to your organization's needs](../ot-appliance-sizing.md) in a standard RAM denomination (for example, 8192, 16384, or 32768 MB). Do not enable **Dynamic Memory** (see the PowerShell sketch after these steps).
-1. Configure the network adaptor according to your server network topology.
+1. Configure the network adapter according to your server network topology. Under the **Hardware Acceleration** blade, disable **Virtual Machine Queue** for the monitoring (SPAN) network interface.
1. Connect the VHDX created previously to the virtual machine.
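As a hedged PowerShell sketch of the memory step above (the VM name is a placeholder; Hyper-V takes the value in bytes, so `32GB` corresponds to 32768 MB):

```powershell
# Fixed memory allocation with Dynamic Memory disabled; "VM-Sensor" is a placeholder VM name.
Set-VMMemory -VMName "VM-Sensor" -DynamicMemoryEnabled $false -StartupBytes 32GB
```
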
These commands set the name of the newly added adapter hardware to be `Monitor`.
:::image type="content" source="../media/tutorial-install-components/vswitch-span.png" alt-text="Screenshot of selecting the following options on the virtual switch screen.":::
-1. In the Hardware list, under the Network Adapter drop-down list, select **Advanced Features**.
+1. In the Hardware list, under the Network Adapter drop-down list, select **Hardware Acceleration** and disable **Virtual Machine Queue** for the monitoring (SPAN) network interface.
-1. In the Port Mirroring section, select **Destination** as the mirroring mode for the new virtual interface.
+1. In the Hardware list, under the Network Adapter drop-down list, select **Advanced Features**. Under the Port Mirroring section, select **Destination** as the mirroring mode for the new virtual interface.
:::image type="content" source="../media/tutorial-install-components/destination.png" alt-text="Screenshot of the selections needed to configure mirroring mode.":::
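
Equivalently, a hedged PowerShell sketch for these two adapter settings; the VM name is a placeholder, and `Monitor` is the adapter name used by the scripts above:

```powershell
# Disable Virtual Machine Queue and set port mirroring to Destination on the monitoring (SPAN) adapter.
Set-VMNetworkAdapter -VMName "VM-Sensor" -Name "Monitor" -VmqWeight 0 -PortMirroring Destination
```
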
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
Defender for IoT users require the following permissions:
| View details and access software, activation files and threat intelligence packages | ✓ | ✓ | ✓ | ✓ |
| Recover passwords | ✓ | ✓ | ✓ | ✓ |
-For more information, see [Azure roles](/azure/role-based-access-control/rbac-and-directory-admin-roles).
+For more information, see [Azure roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
### Supported service regions
Continue with one of the following tutorials, depending on whether you're settin
For more information, see: - [Welcome to Microsoft Defender for IoT for organizations](overview.md)-- [Microsoft Defender for IoT architecture](architecture.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Business considerations may require that you apply your existing IoT sensors to
- [Activate and set up your on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md) -- [Create an additional Azure subscription](/azure/cost-management-billing/manage/create-subscription)
+- [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md)
-- [Upgrade your Azure subscription](/azure/cost-management-billing/manage/upgrade-azure-subscription)
+- [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
After generating the token, add an HTTP header titled **Authorization** to your
- [Retrieve operational vulnerabilities - /api/v1/reports/vulnerabilities/operational](#retrieve-operational-vulnerabilitiesapiv1reportsvulnerabilitiesoperational)
-### Version 2
-
-- [Retrieve alert PCAP - /api/v2/alerts/pcap](#retrieve-alert-pcapapiv2alertspcap)
-
### Validate user credentials - /api/external/authentication/validation

Use this API to validate a Defender for IoT username and password. All Defender for IoT user roles can work with the API.
JSON object that represents assessed results. Each key contains a JSON array of
|--|--|--|
| GET | `curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/reports/vulnerabilities/operational` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/api/v1/reports/vulnerabilities/operational` |
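For scripted access, here's a hedged PowerShell equivalent of the curl example above. The token and address are the same placeholders, and `-SkipCertificateVerification` (PowerShell 7 and later) mirrors curl's `-k`:

```powershell
# Placeholder token and sensor address, as in the curl example above.
$headers = @{ Authorization = "<AUTH_TOKEN>" }
Invoke-RestMethod -Method Get -Uri "https://<IP_ADDRESS>/api/v1/reports/vulnerabilities/operational" -Headers $headers -SkipCertificateVerification
```
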
-### Retrieve alert PCAP - /api/v2/alerts/pcap
-
-Use this API to retrieve a PCAP file related to an alert.
-
-This endpoint does not use a regular access token for authorization. Instead, it requires a special token created by the `/external/v2/alerts/pcap` API endpoint on the CM.
-
-#### Method
-
-- **GET**
-
-#### Query Parameters
-
-- id: Xsense Alert ID
-Example:
-`/api/v2/alerts/pcap/<id>`
-
-#### Response type
-
-- **JSON**
-
-#### Response content
-
-- **Success**: Binary file containing PCAP data
-- **Failure**: JSON object that contains error message
-
-#### Response example
-
-#### Error
-
-```json
-{
- "error": "PCAP file is not available"
-}
-```
-
-#### Curl command
-
-|Type|APIs|Example|
-|-|-|-|
-|GET|`curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v2/alerts/pcap/<ID>'`|`curl -k -H "Authorization: d2791f58-2a88-34fd-ae5c-2651fe30a63c" 'https://10.1.0.2/api/v2/alerts/pcap/1'`|
-
## Management console API

This section describes on-premises management console APIs for:
The below API's can be used with the ServiceNow integration via the ServiceNow's
- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md) -- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+- [Investigate all enterprise sensor detections in a device inventory](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
In such cases:
1. Verify that your sensors are connected successfully.
-1. Delete any private IoT Hubs that are no longer needed. For more information, see the [IoT Hub documentation](/azure/iot-hub/iot-hub-create-through-portal).
+1. Delete any private IoT Hubs that are no longer needed. For more information, see the [IoT Hub documentation](../../iot-hub/iot-hub-create-through-portal.md).
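As a hedged example with the Az PowerShell module (the resource group and hub names below are placeholders, not values from this article):

```azurepowershell
# List the IoT hubs in the current subscription, then remove one that's no longer needed.
Get-AzIotHub
Remove-AzIotHub -ResourceGroupName "example-rg" -Name "example-private-hub"
```
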
## Next steps
For more information, see:
- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md) - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md)-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/workbooks.md
Use the Defender for IoT **Workbooks** page to create custom Azure Monitor workb
|**Query** | Add a query to use when creating your workbook graphs and charts. <br><br>- Make sure to select **Azure Resource Graph** as your **Data source** and select all of your relevant subscriptions. <br>- Add a graphical representation for your data by selecting a type from the **Visualization** options. |
|**Metric** | Add metrics to use when creating workbook graphs and charts. |
|**Group** | Add groups to organize your workbooks into sub-areas. |
- | | |
For each option, after you've defined all available settings, select the **Add...** or **Run...** button to create that workbook element. For example, **Add parameter** or **Run Query**.
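If you want to try a query outside the workbook first, here's a hedged sketch using the Az.ResourceGraph PowerShell module; the query is a generic illustration rather than one taken from this article:

```azurepowershell
# Requires the Az.ResourceGraph module. Generic example: count resources by type.
Search-AzGraph -Query "resources | summarize count() by type | order by count_ desc | take 5"
```
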
Learn more about Azure Monitor workbooks and Azure Resource Graph:
- [Azure Resource Graph documentation](../../governance/resource-graph/index.yml) - [Azure Monitor workbook documentation](../../azure-monitor/visualize/workbooks-overview.md)-- [Kusto Query Language (KQL) documentation](/azure/data-explorer/kusto/query/)
+- [Kusto Query Language (KQL) documentation](/azure/data-explorer/kusto/query/)
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md
Once historical data is being collected, you can query this data in Azure Data E
You can also use data history in combination with [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) to aggregate data from disparate sources. This can be useful in many scenarios. Here are two examples: * Combine information technology (IT) data from ERP or CRM systems (like Dynamics 365, SAP, or Salesforce) with operational technology (OT) data from IoT devices and production management systems. For an example that illustrates how a company might combine this data, see the following blog post: [Integrating IT and OT Data with Azure Digital Twins, Azure Data Explorer, and Azure Synapse](https://techcommunity.microsoft.com/t5/internet-of-things-blog/integrating-it-and-ot-data-with-azure-digital-twins-azure-data/ba-p/3401981).
-* Integrate with the Azure AI and Cognitive Services [Multivariate Anomaly Detector](/azure/cognitive-services/anomaly-detector/overview-multivariate), to quickly connect your Azure Digital Twins data with a downstream AI/machine learning solution that specializes in anomaly detection. The [Azure Digital Twins Multivariate Anomaly Detection Toolkit](/samples/azure-samples/digital-twins-mvad-integration/adt-mvad-integration/) is a sample project that provides a workflow for training multiple Multivariate Anomaly Detector models for several scenario analyses, based on historical digital twin data. It then leverages the trained models to detect abnormal operations and anomalies in modeled Azure Digital Twins environments, in near real-time.
+* Integrate with the Azure AI and Cognitive Services [Multivariate Anomaly Detector](../cognitive-services/anomaly-detector/overview-multivariate.md), to quickly connect your Azure Digital Twins data with a downstream AI/machine learning solution that specializes in anomaly detection. The [Azure Digital Twins Multivariate Anomaly Detection Toolkit](/samples/azure-samples/digital-twins-mvad-integration/adt-mvad-integration/) is a sample project that provides a workflow for training multiple Multivariate Anomaly Detector models for several scenario analyses, based on historical digital twin data. It then leverages the trained models to detect abnormal operations and anomalies in modeled Azure Digital Twins environments, in near real-time.
## Next steps
Learn more about endpoints and routing events to external
* [Endpoints and event routes](concepts-route-events.md) See how to set up Azure Digital Twins to ingest data from IoT Hub:
-* [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)
+* [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)
digital-twins How To Integrate Time Series Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-time-series-insights.md
If you allow a simulation to run for much longer, your visualization will look s
## Next steps
-The digital twins are stored by default as a flat hierarchy in Time Series Insights, but they can be enriched with model information and a multi-level hierarchy for organization. To learn more about this process, read:
-
-* [Define and apply a model](../time-series-insights/tutorial-set-up-environment.md#define-and-apply-a-model)
-
-You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following references:
-
-* [Manage a digital twin](./how-to-manage-twin.md)
-* [Query the twin graph](./how-to-query-graph.md)
+After establishing a data pipeline to send time series data from Azure Digital Twins to Time Series Insights, you might want to think about how to translate asset models designed for Azure Digital Twins into asset models for Time Series Insights. For a tutorial on this next step in the integration process, see [Model synchronization between Azure Digital Twins and Time Series Insights Gen2](../time-series-insights/tutorials-model-sync.md).
event-grid Blob Event Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-bicep.md
+
+ Title: Send Blob storage events to web endpoint - Bicep
+description: Use Azure Event Grid and a Bicep file to create a Blob storage account, subscribe to events, and send events to a Webhook.
Last updated : 07/13/2022+++
+# Quickstart: Route Blob storage events to web endpoint by using Bicep
+
+Azure Event Grid is an eventing service for the cloud. In this article, you use a Bicep file to create a Blob storage account, subscribe to events for that blob storage, and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+### Create a message endpoint
+
+Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+
+1. Select **Deploy to Azure** to deploy the solution to your subscription. In the Azure portal, provide values for the parameters.
+
+ [Deploy to Azure](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json)
+1. The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
+`https://<your-site-name>.azurewebsites.net`
+
+1. You see the site but no events have been posted to it yet.
+
+ ![Screenshot that shows how to view the new site.](./media/blob-event-quickstart-portal/view-site.png)
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.eventgrid/event-grid-subscription-and-storage).
++
+Three Azure resources are defined in the Bicep file:
+
+* [**Microsoft.Storage/storageAccounts**](/azure/templates/microsoft.storage/storageaccounts): create an Azure Storage account.
+* [**Microsoft.EventGrid/systemTopics**](/azure/templates/microsoft.eventgrid/systemtopics): create a system topic with the specified name for the storage account.
+* [**Microsoft.EventGrid/systemTopics/eventSubscriptions**](/azure/templates/microsoft.eventgrid/systemtopics/eventsubscriptions): create an Azure Event Grid subscription for the system topic.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters endpoint=<endpoint>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -endpoint "<endpoint>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<endpoint\>** with the URL of your web app and append `api/updates` to the URL.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+> [!NOTE]
+> You can find more Azure Event Grid template samples [here](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Eventgrid&pageNumber=1&sort=Popular).
+
+## Validate the deployment
+
+View your web app again, and notice that a subscription validation event has been sent to it. Select the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can verify that it wants to receive event data. The web app includes code to validate the subscription.
+
+![Screenshot that shows how to view a subscription event.](./media/blob-event-quickstart-portal/view-subscription-event.png)
+
+Now, let's trigger an event to see how Event Grid distributes the message to your endpoint.
+
+You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content. This article assumes you have a file named testfile.txt, but you can use any file.
+
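+If you prefer to script the upload, here's a hedged Azure PowerShell sketch; the storage account and container names are placeholders for the values from your deployment:
+
+```azurepowershell
+# Placeholder names: substitute the storage account and container from your deployment.
+$ctx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -UseConnectedAccount
+Set-AzStorageBlobContent -Container "<container-name>" -Blob "testfile.txt" -File "./testfile.txt" -Context $ctx
+```
+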
+When you upload the file to Azure Blob storage, Event Grid sends a message to the endpoint you configured when subscribing. The message is in JSON format and contains an array with one or more events. In the following example, the JSON message contains an array with one event. View your web app and notice that a blob created event was received.
+
+![Screenshot that shows how to view the results.](./media/blob-event-quickstart-portal/view-results.png)
+
+## Clean up resources
+
+When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group).
+
+## Next steps
+
+For more information about Azure Resource Manager templates and Bicep, see the following articles:
+
+* [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
+* [Define resources in Azure Resource Manager templates](/azure/templates/)
+* [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/)
+* [Azure Event Grid templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Eventgrid).
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
If you rely on ExpressRoute connectivity between your on-premises network and Mi
- using diverse service provider network(s) for different ExpressRoute circuit - designing each of the ExpressRoute circuit for [high availability][HA] - terminating the different ExpressRoute circuit in different location on the customer network-- using [Availability zone aware ExpressRoute Virtual Network Gateways](/azure/vpn-gateway/about-zone-redundant-vnet-gateways)
+- using [Availability zone aware ExpressRoute Virtual Network Gateways](../vpn-gateway/about-zone-redundant-vnet-gateways.md)
## Challenges of using multiple ExpressRoute circuits
In this article, we discussed how to design for disaster recovery of an ExpressR
[Enterprise DR]: https://azure.microsoft.com/solutions/architecture/disaster-recovery-enterprise-scale-dr/ [SMB DR]: https://azure.microsoft.com/solutions/architecture/disaster-recovery-smb-azure-site-recovery/ [con wgt]: ./expressroute-optimize-routing.md#solution-assign-a-high-weight-to-local-connection
-[AS Path Pre]: ./expressroute-optimize-routing.md#solution-use-as-path-prepending
+[AS Path Pre]: ./expressroute-optimize-routing.md#solution-use-as-path-prepending
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported | Cape Town, Johannesburg| | **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported | Montreal, Toronto, Quebec City, Vancouver | | **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported | Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
-| **[BSNL](https://www.bsnl.co.in/opencms/bsnl/BSNL/services/enterprises/cloudway.html)** |Supported |Supported | Chennai, Mumbai |
+| **BSNL** |Supported |Supported | Chennai, Mumbai |
| **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported | Miami | | **CDC** | Supported | Supported | Canberra, Canberra2 | | **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported | Amsterdam2, Bogota, Chicago, Dallas, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Singapore2, Tokyo, Toronto, Washington DC, Washington DC2 |
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
This metric shows the number of virtual machines that are using the ExpressRoute
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/number-of-virtual-machines-virtual-network.png" alt-text="Screenshot of number of virtual machines in the virtual network metric.":::
+>[!NOTE]
+> To maintain reliability of the service, Microsoft often performs platform or OS maintenance on the gateway service. During this time, this metric may fluctuate and report inaccurately.
+>
+ ## <a name = "connectionbandwidth"></a>ExpressRoute gateway connections in bits/seconds Aggregation type: *Avg*
firewall-manager Quick Firewall Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-bicep.md
+
+ Title: 'Quickstart: Create an Azure Firewall and a firewall policy - Bicep'
+description: In this quickstart, you deploy an Azure Firewall and a firewall policy using Bicep.
+++ Last updated : 07/05/2022+++++
+# Quickstart: Create an Azure Firewall and a firewall policy - Bicep
+
+In this quickstart, you use Bicep to create an Azure Firewall and a firewall policy. The firewall policy has an application rule that allows connections to `www.microsoft.com` and a rule that allows connections to Windows Update using the **WindowsUpdate** FQDN tag. A network rule allows UDP connections to a time server at 13.86.101.172.
+
+Also, IP Groups are used in the rules to define the **Source** IP addresses.
++
+For information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
+
+For information about Azure Firewall, see [What is Azure Firewall?](../firewall/overview.md).
+
+For information about IP Groups, see [IP Groups in Azure Firewall](../firewall/ip-groups.md).
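+
+As a minimal, hedged illustration of the IP Group concept (separate from this quickstart's Bicep file; all names and ranges below are placeholders), you can also create an IP Group with Azure PowerShell:
+
+```azurepowershell
+# Placeholder resource group, name, location, and address ranges.
+New-AzIpGroup -ResourceGroupName "exampleRG" -Name "example-ip-group" -Location "eastus" -IpAddress "10.0.1.0/24","10.0.2.0/24"
+```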
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Review the Bicep file
+
+This Bicep file creates a hub virtual network, along with the necessary resources to support the scenario.
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/azurefirewall-create-with-firewallpolicy-apprule-netrule-ipgroups/).
++
+Multiple Azure resources are defined in the Bicep file:
+
+- [**Microsoft.Network/ipGroups**](/azure/templates/microsoft.network/ipGroups)
+- [**Microsoft.Network/firewallPolicies**](/azure/templates/microsoft.network/firewallPolicies)
+- [**Microsoft.Network/firewallPolicies/ruleCollectionGroups**](/azure/templates/microsoft.network/firewallPolicies/ruleCollectionGroups)
+- [**Microsoft.Network/azureFirewalls**](/azure/templates/microsoft.network/azureFirewalls)
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/microsoft.network/publicipaddresses)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters firewallName=<firewall-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -firewallName "<firewall-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<firewall-name\>** with the name of the Azure Firewall.
+
+When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use Azure CLI or Azure PowerShell to review the deployed resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When you no longer need the resources that you created with the firewall, delete the resource group. This removes the firewall and all the related resources.
++
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Azure Firewall Manager policy overview](policy-overview.md)
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Azure Firewall Standard has the following known issues:
|Availability zones can only be configured during deployment.|Availability zones can only be configured during deployment. You can't configure Availability Zones after a firewall has been deployed.|This is by design.|
|SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This requirement exists today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall.|
|SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules.|
-|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 are blocked by Azure Firewall. This is the default platform behavior in Azure. |Use authenticated SMTP relay services, which typically connect through TCP port 587, but also supports other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). Currently, Azure Firewall may be able to communicate to public IPs by using outbound TCP 25, but it's not guaranteed to work, and it's not supported for all subscription types. For private IPs like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection of TCP port 25.
+|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 can be blocked by the Azure platform. This is the default platform behavior in Azure; Azure Firewall doesn't introduce any additional restriction. |Use authenticated SMTP relay services, which typically connect through TCP port 587 but also support other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). Currently, Azure Firewall may be able to communicate to public IPs by using outbound TCP 25, but it's not guaranteed to work, and it's not supported for all subscription types. For private IPs like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection of TCP port 25.|
|SNAT port exhaustion|Azure Firewall currently supports 2496 ports per Public IP address per backend virtual machine scale set instance. By default, there are two virtual machine scale set instances. So, there are 4992 ports per flow (destination IP, destination port, and protocol (TCP or UDP)). The firewall scales up to a maximum of 20 instances. |This is a platform limitation. You can work around the limits by configuring Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions. For a more permanent solution, you can deploy a NAT gateway to overcome the SNAT port limits. This approach is supported for VNET deployments. <br /><br /> For more information, see [Scale SNAT ports with Azure Virtual Network NAT](integrate-with-nat-gateway.md).|
|DNAT isn't supported with Forced Tunneling enabled|Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing.|This is by design because of asymmetric routing. The return path for inbound connections goes via the on-premises firewall, which hasn't seen the connection established.|
|Outbound Passive FTP may not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.|
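As a hedged sketch of the public IP prefix workaround described in the SNAT port exhaustion row above (the resource names here are illustrative, not part of the article):

```azurecli
# Create a /29 prefix (8 addresses) and allocate a Standard SKU firewall public IP from it
az network public-ip prefix create --resource-group exampleRG --name fwPublicIpPrefix --length 29 --location eastus
az network public-ip create --resource-group exampleRG --name fwPublicIp1 --sku Standard --public-ip-prefix fwPublicIpPrefix
```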
firewall Protect Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-virtual-desktop.md
Previously updated : 06/22/2022 Last updated : 07/14/2022
You will need to create an Azure Firewall Policy and create Rule Collections for
| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 80 | IP Address | 169.254.169.254, 168.63.129.16 |
| Rule Name | IP Address | VNet or Subnet IP Address | TCP | 443 | Service Tag | AzureCloud, WindowsVirtualDesktop, AzureFrontDoor.Frontend |
| Rule Name | IP Address | VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * |
+|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 20.118.99.244, 40.83.235.53 (azkms.core.windows.net)|
|Rule name | IP Address | VNet or Subnet IP Address | TCP | 1688 | IP address | 23.102.135.246 (kms.core.windows.net)|

> [!NOTE]
guides Azure Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/developer/azure-developer-guide.md
You can migrate your existing applications to App Service by using the [online m
> **When to use**: Use App Service when you're migrating existing web applications to Azure, and when you need a fully-managed hosting platform for your web apps. You can also use App Service when you need to support mobile clients or expose REST APIs with your app. > > **Get started**: App Service makes it easy to create and deploy your first [web app](../../app-service/quickstart-dotnetcore.md), [mobile app](/previous-versions/azure/app-service-mobile/app-service-mobile-ios-get-started), or [API app](../../app-service/app-service-web-tutorial-rest-api.md).
->
-> **Try it now**: App Service lets you provision a short-lived app to try the platform without having to sign up for an Azure account. Try the platform and [create your Azure App Service app](https://tryappservice.azure.com/).
#### Azure Virtual Machines
Rather than worrying about building out and managing a whole application or the
> > **Get started**: Follow the Functions quickstart tutorial to [create your first function](../../azure-functions/functions-get-started.md) from the portal. >
-> **Try it now**: Azure Functions lets you run your code without having to sign up for an Azure account. Try it now at and [create your first Azure Function](https://tryappservice.azure.com/).
+> **Try it now**: Azure Functions lets you run your code without having to sign up for an Azure account. Try it now and create your first Azure Function.
#### Azure Service Fabric
You develop these deployments by using an Azure Resource Manager template, which
## Understanding accounts, subscriptions, and billing
-As developers, we like to dive right into the code and try to get started as fast as possible with making our applications run. We certainly want to encourage you to start working in Azure as easily as possible. To help make it easy, Azure offers a [free trial](https://azure.microsoft.com/free/). Some services even have a "Try it for free" functionality, like [Azure App Service](https://tryappservice.azure.com/), which doesn't require you to even create an account. As fun as it is to dive into coding and deploying your application to Azure, it's also important to take some time to understand how Azure works. Specifically, you should understand how it works from a standpoint of user accounts, subscriptions, and billing.
+As developers, we like to dive right into the code and get started as fast as possible with making our applications run. We certainly want to encourage you to start working in Azure as easily as possible. To help make it easy, Azure offers a [free trial](https://azure.microsoft.com/free/). Some services even offer "Try it for free" functionality, like Azure App Service, which doesn't even require you to create an account. As fun as it is to dive into coding and deploying your application to Azure, it's also important to take some time to understand how Azure works. Specifically, you should understand how it works from a standpoint of user accounts, subscriptions, and billing.
### What is an Azure account?
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate telemetry from your devices. To get started, select **Data explorer** on the left pane.
-> [!NOTE]
-> Only users in a role that have the necessary permissions can view, create, edit, and delete queries. To learn more, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
-
## Understand the data explorer UI

The analytics user interface has three main components:
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
When you define a custom role, you choose the set of permissions that a user is
| Delete | View |
| Full Control | View, Update, Create, Delete |
-**Data explorer permissions**
-
-| Name | Dependencies |
-| - | -- |
-| View | None <br/> Other dependencies: View device groups, device templates, device instances |
-| Update | View <br/> Other dependencies: View device groups, device templates, device instances |
-| Create | View, Update <br/> Other dependencies: View device groups, device templates, device instances |
-| Delete | View <br/> Other dependencies: View device groups, device templates, device instances |
-| Full Control | View, Update, Create, Delete <br/> Other dependencies: View device groups, device templates, device instances |
**Branding, favicon, and colors permissions**

| Name | Dependencies |
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-update-iot-edge.md
Some of the key differences between the latest release and version 1.1 and earli
* The default config file has a new name and location. Formerly `/etc/iotedge/config.yaml`, your device configuration information is now expected to be in `/etc/aziot/config.toml` by default. The `iotedge config import` command can be used to help migrate configuration information from the old location and syntax to the new one. * The import command cannot detect or modify access rules to a device's trusted platform module (TPM). If your device uses TPM attestation, you need to manually update the /etc/udev/rules.d/tpmaccess.rules file to give access to the aziottpm service. For more information, see [Give IoT Edge access to the TPM](how-to-auto-provision-simulated-device-linux.md?view=iotedge-2020-11&preserve-view=true#give-iot-edge-access-to-the-tpm). * The workload API in the latest version saves encrypted secrets in a new format. If you upgrade from an older version to latest version, the existing master encryption key is imported. The workload API can read secrets saved in the prior format using the imported encryption key. However, the workload API can't write encrypted secrets in the old format. Once a secret is re-encrypted by a module, it is saved in the new format. Secrets encrypted in the latest version are unreadable by the same module in version 1.1. If you persist encrypted data to a host-mounted folder or volume, always create a backup copy of the data *before* upgrading to retain the ability to downgrade if necessary.
-* For backward compatibility when connecting devices that do not support TLS 1.2, you can configure Edge Hub to still accept TLS 1.0 or 1.1 via the [SslProtocols environment variable](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md#edgehub).  Please note that support for [TLS 1.0 and 1.1 in IoT Hub is considered legacy](/azure/iot-hub/iot-hub-tls-support) and may also be removed from Edge Hub in future releases.  To avoid future issues, use TLS 1.2 as the only TLS version when connecting to Edge Hub or IoT Hub.
+* For backward compatibility when connecting devices that do not support TLS 1.2, you can configure Edge Hub to still accept TLS 1.0 or 1.1 via the [SslProtocols environment variable](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md#edgehub).  Please note that support for [TLS 1.0 and 1.1 in IoT Hub is considered legacy](../iot-hub/iot-hub-tls-support.md) and may also be removed from Edge Hub in future releases.  To avoid future issues, use TLS 1.2 as the only TLS version when connecting to Edge Hub or IoT Hub.
* The preview for the experimental MQTT broker in Edge Hub 1.2 has ended and is not included in Edge Hub 1.3. We are continuing to refine our plans for an MQTT broker based on feedback received. In the meantime, if you need a standards-compliant MQTT broker on IoT Edge, consider deploying an open-source broker like Mosquitto as an IoT Edge module. Before automating any update processes, validate that it works on test machines.
iot-edge Iot Edge For Linux On Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-security.md
Because you may need write access to `/etc`, `/home`, `/root`, `/var` for specif
| Partition | Size | Description |
| - | - | - |
| BootEFIA | 8 MB | Firmware partition A for future GRUBless boot |
-| BootEFIB | 8 MB | Firmware partition B for future GRUBless boot |
| BootA | 192 MB | Contains the bootloader for A partition |
-| BootB | 192 MB | Contains the bootloader for B partition |
| RootFS A | 4 GB | One of two active/passive partitions holding the root file system |
+| BootEFIB | 8 MB | Firmware partition B for future GRUBless boot |
+| BootB | 192 MB | Contains the bootloader for B partition |
| RootFS B | 4 GB | One of two active/passive partitions holding the root file system |
| Unused | 4 GB | This partition is reserved for future use |
| Log | 1 GB or 6 GB | Log-specific partition mounted under /logs |
iot-edge Tutorial Configure Est Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md
The Dockerfile uses Ubuntu 18.04, a [Cisco library called `libest`](https://gith
1. You should see `--BEGIN CERTIFICATE--` midway through the output. Retrieving the certificate verifies that the server is reachable and can present its certificate.

> [!TIP]
-> To run this container in the cloud, build the image and [push the image to Azure Container Registry](../container-registry/container-registry-get-started-portal.md). Then, follow the [quickstart to deploy to Azure Container Instance](/azure/container-instances/container-instances-quickstart-portal).
+> To run this container in the cloud, build the image and [push the image to Azure Container Registry](../container-registry/container-registry-get-started-portal.md). Then, follow the [quickstart to deploy to Azure Container Instance](../container-instances/container-instances-quickstart-portal.md).
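As a rough sketch of that workflow, `az acr build` both builds the image in the cloud and pushes it to the registry, so no separate `docker push` is needed (the registry and image names below are placeholders, not from the tutorial):

```azurecli
# Build the EST server image from the local Dockerfile and push it to Azure Container Registry
az acr build --registry <your-registry-name> --image est-server:v1 .
```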
## Download CA certificate
You can keep the resources and configurations that you created in this tutorial
* To use EST server to issue IoT Edge CA certificates, see [example configuration](https://github.com/Azure/iotedge/blob/main/edgelet/doc/est.md#edge-ca-certificate).
* EST server can be used to issue certificates for all devices in a hierarchy as well. Depending on whether you have ISA-95 requirements, it may be necessary to run a chain of EST servers with one at every layer or use the API proxy module to forward the requests. To learn more, see [Kevin's blog](https://kevinsaye.wordpress.com/2021/07/21/deep-dive-creating-hierarchies-of-azure-iot-edge-devices-isa-95-part-3/).
* For enterprise-grade solutions, consider [GlobalSign IoT Edge Enroll](https://www.globalsign.com/en/iot-edge-enroll) or [DigiCert IoT Device Manager](https://www.digicert.com/iot/iot-device-manager)
-* To learn more about certificates, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
+* To learn more about certificates, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-provisioning.md
Follow these instructions to provision the Device Update agent on [IoT Edge enab
sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
```
- - For any 'rc' i.e. release candidate agent versions from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) : Download the .dep file to the machine you want to install the Device Update agent on, then:
+ - For any 'rc' (release candidate) agent versions from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases): download the .deb file to the machine you want to install the Device Update agent on, then:
```shell
- sudo apt-get install -y ./"<PATH TO FILE>"/"<.DEP FILE NAME>"
+ sudo apt-get install -y ./"<PATH TO FILE>"/"<.DEB FILE NAME>"
```

1. You are now ready to start the Device Update agent on your IoT Edge device.
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-event-grid.md
Azure IoT Hub integrates with Azure Event Grid so that you can send event notifi
[Azure Event Grid](../event-grid/overview.md) is a fully managed event routing service that uses a publish-subscribe model. Event Grid has built-in support for Azure services like [Azure Functions](../azure-functions/functions-overview.md) and [Azure Logic Apps](../logic-apps/logic-apps-overview.md), and can deliver event alerts to non-Azure services using webhooks. For a complete list of the event handlers that Event Grid supports, see [An introduction to Azure Event Grid](../event-grid/overview.md).
+To watch a video discussing this integration, see [Azure IoT Hub integration with Azure Event Grid](/shows/internet-of-things-show/iot-devices-and-event-grid).
+
![Azure Event Grid architecture](./media/iot-hub-event-grid/event-grid-functional-model.png)

## Regional availability
-The Event Grid integration is available for IoT hubs located in the regions where Event Grid is supported. For the latest list of regions, see [An introduction to Azure Event Grid](../event-grid/overview.md).
+The Event Grid integration is available for IoT hubs located in the regions where Event Grid is supported. For the latest list of regions, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid&regions=all).
## Event types
The following example shows the schema of a device connected event:
}] ```
+### Device telemetry schema
+Device telemetry messages must be in a valid JSON format with the contentType set to **application/json** and contentEncoding set to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these properties are case insensitive. If the content encoding is not set, then IoT Hub will write the messages in base64-encoded format.
-### Device Telemetry schema
-
-Device telemetry message must be in a valid JSON format with the contentType set to **application/json** and contentEncoding set to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these properties are case insensitive. If the content encoding is not set, then IoT Hub will write the messages in base 64 encoded format.
-
-You can enrich device telemetry events before they are published to Event Grid by selecting the endpoint as Event Grid. For more information, see [Message Enrichments Overview](iot-hub-message-enrichments-overview.md).
+You can enrich device telemetry events before they are published to Event Grid by selecting the endpoint as Event Grid. For more information, see [message enrichments](iot-hub-message-enrichments-overview.md).
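To see what a conforming message looks like in practice, the following hedged sketch sends a simulated device-to-cloud message with the required system properties set, using the Azure CLI IoT extension (the hub and device names are placeholders):

```azurecli
# Send a JSON telemetry message with contentType and contentEncoding set,
# so Event Grid can filter on the message body
az iot device send-d2c-message --hub-name <your-hub-name> --device-id <your-device-id> --data '{"temperature": 25}' --props '$.ct=application/json;$.ce=utf-8'
```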
The following example shows the schema of a device telemetry event:
The following example shows the schema of a device created event:
}] ``` - > [!WARNING]
-> *Twin data* associated with a device creation event is a default configuration and *shouldn't* be relied on for actual `authenticationType` and other device properties in a newly created device. For `authenticationType` and other device properties in a newly created device, use the Register Manager API provided in Azure IoT SDKs.
+> *Twin data* associated with a device creation event is a default configuration and shouldn't be relied on for actual `authenticationType` and other device properties in a newly created device. For `authenticationType` and other device properties in a newly created device, use the register manager API provided in the Azure IoT SDKs.
For a detailed description of each property, see [Azure Event Grid event schema for IoT Hub](../event-grid/event-schema-iot-hub.md). ## Filter events
-IoT Hub event subscriptions can filter events based on event type, data content and subject, which is the device name.
-
-Event Grid enables [filtering](../event-grid/event-filtering.md) on event types, subjects and data content. While creating the Event Grid subscription, you can choose to subscribe to selected IoT events. Subject filters in Event Grid work based on **Begins With** (prefix) and **Ends With** (suffix) matches. The filter uses an `AND` operator, so events with a subject that match both the prefix and suffix are delivered to the subscriber.
-
-The subject of IoT Events uses the format:
-
-```json
-devices/{deviceId}
-```
-
-Event Grid also allows for filtering on attributes of each event, including the data content. This allows you to choose what events are delivered based contents of the telemetry message. Please see [advanced filtering](../event-grid/event-filtering.md#advanced-filtering) to view examples. For filtering on the telemetry message body, you must set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](./iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these properties are case insensitive.
+Event Grid enables [filtering](../event-grid/event-filtering.md) on event types, subjects, and data content. While creating the Event Grid subscription, you can choose to subscribe to selected IoT events.
-For non-telemetry events like DeviceConnected, DeviceDisconnected, DeviceCreated and DeviceDeleted, the Event Grid filtering can be used when creating the subscription.
+- Event type: For the list of IoT Hub event types, see [event types](#event-types).
+- Subject: For IoT Hub events, the subject is the device name. The subject takes the format `devices/{deviceId}`. You can filter subjects based on **Begins With** (prefix) and **Ends With** (suffix) matches. The filter uses an `AND` operator, so events with a subject that match both the prefix and suffix are delivered to the subscriber.
+- Data content: The data content is populated by IoT Hub using the message format. You can choose what events are delivered based on the contents of the telemetry message. For examples, see [advanced filtering](../event-grid/event-filtering.md#advanced-filtering). For filtering on the telemetry message body, you must set the contentType to **application/json** and contentEncoding to **UTF-8** in the message [system properties](./iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these properties are case insensitive.
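Putting these together, a hedged Azure CLI sketch of a subscription that combines an event type filter, a subject prefix filter, and an advanced filter on the telemetry body might look like the following (the resource IDs, endpoint, device naming scheme, and `temperature` field are illustrative assumptions):

```azurecli
# Subscribe to telemetry events from devices whose IDs start with "building1-",
# delivering only messages whose body reports a temperature above 50
az eventgrid event-subscription create \
  --name TelemetrySubscription \
  --source-resource-id /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Devices/IotHubs/<hub-name> \
  --endpoint <your-webhook-endpoint> \
  --included-event-types Microsoft.Devices.DeviceTelemetry \
  --subject-begins-with devices/building1- \
  --advanced-filter data.body.temperature NumberGreaterThan 50
```

As noted above, filtering on the message body only works when the device sets the contentType and contentEncoding system properties.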
## Limitations for device connected and device disconnected events
-### Device State Events
+### Device state events
+
Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications.

* Devices connecting using the Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [MQTT protocol](iot-hub-mqtt-support.md) will have connection states sent automatically.
Device connection state events are available for devices connecting using either
* Devices connecting using the .NET [Azure IoT SDK](iot-hub-devguide-sdks.md) with the [MQTT](iot-hub-mqtt-support.md) or [AMQP](iot-hub-amqp-support.md) protocol won't send a device connected event until an initial device-to-cloud or cloud-to-device message is sent or received.
* Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP they equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
-### Device State Interval
-IoT Hub does not report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic 60 second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60 second window.
-![image](https://user-images.githubusercontent.com/94493443/178398214-7423f7ca-8dfe-4202-8e9a-46cc70974b5e.png)
+### Device state interval
+IoT Hub does not report each individual device connect and disconnect action, but rather publishes the current connection state taken at a periodic 60-second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events means that there was a change in the device connection state during the 60-second window.
+![image](https://user-images.githubusercontent.com/94493443/178398214-7423f7ca-8dfe-4202-8e9a-46cc70974b5e.png)
## Tips for consuming events

Applications that handle IoT Hub events should follow these suggested practices:

* Multiple subscriptions can be configured to route events to the same event handler, so don't assume that events are from a particular source. Always check the message topic to ensure that it comes from the IoT hub that you expect.
* Don't assume that all events you receive are the types that you expect. Always check the eventType before processing the message.
* Messages can arrive out of order or after a delay. Use the etag field to understand whether your information about objects is up to date for device-created or device-deleted events.

## Next steps

* [Try the IoT Hub events tutorial](../event-grid/publish-iot-hub-events-to-logic-apps.md)
* [Learn how to order device connected and disconnected events](iot-hub-how-to-order-connection-state-events.md)
* [Learn more about Event Grid](../event-grid/overview.md)
* [Compare the differences between routing IoT Hub events and messages](iot-hub-event-grid-routing-comparison.md)
* [Learn how to use IoT telemetry events to implement IoT spatial analytics using Azure Maps](../azure-maps/tutorial-iot-hub-maps.md)
* [Learn more about how to use Event Grid and Azure Monitor to monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
RFC 2396-encoded(<PropertyName1>)=RFC 2396-encoded(<PropertyValue1>)&RFC 2396-en
> If you are routing D2C messages to a Storage account and you want to leverage JSON encoding, you need to specify the Content Type and Content Encoding
> information, including `$.ct=application%2Fjson&$.ce=utf-8`, as part of the `{property_bag}` mentioned above. For example, a device would publish its telemetry to `devices/{device_id}/messages/events/$.ct=application%2Fjson&$.ce=utf-8`.
>
-> These attributes format are protocol-specific and are translated by IoT Hub into the relative System Properties as described [here](/azure/iot-hub/iot-hub-devguide-routing-query-syntax#system-properties)
+> The format of these attributes is protocol-specific. IoT Hub translates them into the corresponding system properties as described [here](./iot-hub-devguide-routing-query-syntax.md#system-properties).
The following is a list of IoT Hub implementation-specific behaviors:
To learn more about planning your IoT Hub deployment, see:
To further explore the capabilities of IoT Hub, see: * [IoT Hub developer guide](iot-hub-devguide.md)
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
+* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
key-vault Mhsm Control Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/mhsm-control-data.md
Last updated 06/13/2022
# Azure Key Vault Managed HSM - Control your data in the cloud
-At Microsoft, we value, protect, and defend privacy. We believe in transparency, so that people and organizations can control their data and have meaningful choices in how it is used. We empower and defend the privacy choices of every person who uses our products and services. In this blog, we will take a deep dive on Microsoft's [Azure Key Vault Managed HSM's](/azure/key-vault/managed-hsm/overview) security controls for encryption and how it provides additional safeguards and technical measures to help our customers meet compliance. Encryption is one of the key technical measures to achieve sole control of your data.
+At Microsoft, we value, protect, and defend privacy. We believe in transparency, so that people and organizations can control their data and have meaningful choices in how it is used. We empower and defend the privacy choices of every person who uses our products and services. In this blog, we will take a deep dive into Microsoft's [Azure Key Vault Managed HSM's](./overview.md) security controls for encryption and how it provides additional safeguards and technical measures to help our customers meet compliance. Encryption is one of the key technical measures to achieve sole control of your data.
Microsoft's Azure fortifies your data through state-of-the-art encryption technologies for both data at rest and in transit. Our encryption products erect barriers against unauthorized access to the data, including two or more independent encryption layers to protect against compromises of any one layer. In addition, Azure has clearly defined, well-established responses, policies and processes, strong contractual commitments, and strict physical, operational, and infrastructure security controls to provide our customers the ultimate control of their data in the cloud. The fundamental premise of Azure's key management strategy is to give our customers more control over their data with a [Zero Trust](https://www.microsoft.com/security/business/zero-trust) posture, using advanced enclave technologies, hardware security modules, and identity isolation that reduce Microsoft's access to customer keys and data.
Microsoft's Azure fortifies your data through state-of-the-art encryption tech
- **Azure Key Vault (AKV Premium)** encrypts with a FIPS 140-2 Level 2 hardware security module (HSM) protected keys - **Azure Key Vault Managed HSM** encrypts with a single tenant FIPS 140-2 Level 3 hardware security module (HSM) protected keys and is fully managed by Microsoft and provides customers with the sole control of the cryptographic keys
-For added assurance, AKV Premium and AKV Managed HSM support importing HSM-protected keys from an on-premises HSM commonly referred to as [*Bring your own key (BYOK)*](/azure/key-vault/keys/hsm-protected-keys-byok)
+For added assurance, AKV Premium and AKV Managed HSM support importing HSM-protected keys from an on-premises HSM commonly referred to as [*Bring your own key (BYOK)*](../keys/hsm-protected-keys-byok.md)
## Portfolio of Azure Key Management products
For added assurance, AKV Premium and AKV Managed HSM support importing HSM-prote
- Managed Hardware Security Module (HSM) - Managed HSM only supports HSM-backed keys
-See [Azure Key Vault Concepts](/azure/key-vault/general/basic-concepts) and [Azure Key Vault REST API overview](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/key-vault/general/about-keys-secrets-certificates.md) for details.
+See [Azure Key Vault Concepts](../general/basic-concepts.md) and [Azure Key Vault REST API overview](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/key-vault/general/about-keys-secrets-certificates.md) for details.
## What is Azure Key Vault Managed HSM?
Microsoft designs, builds, and operates datacenters in a way that strictly contr
- **Contractual obligations** around security and customer data protection as discussed in [Microsoft Trust Center](https://www.microsoft.com/trust-center?rtc=1)
- **[Cross region replication](../../availability-zones/cross-region-replication-azure.md)** - Managed HSM will soon introduce geo-replication, which will allow you to deploy HSMs in a secondary region
- **Disaster Recovery** - Azure offers an end-to-end backup and disaster recovery solution that is simple, secure, scalable, and cost-effective
- - [Business continuity management program](/azure/availability-zones/business-continuity-management-program)
- - [Azure Site Recovery](/azure/site-recovery)
- - [Azure backup](/azure/backup/) - Planned integration with the Managed HSM
+ - [Business continuity management program](../../availability-zones/business-continuity-management-program.md)
+ - [Azure Site Recovery](../../site-recovery/index.yml)
+ - [Azure backup](../../backup/index.yml) - Planned integration with the Managed HSM
- [Azure well-architected framework](/azure/architecture/framework/) - **[Microsoft Security Response Center](https://www.microsoft.com/msrc) (MSRC)** - Managed HSM service administration tightly integrated with MSRC - Security monitoring for unexpected administrative operations with full 24/7 security response
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-python.md
Get started with the Azure Key Vault secret client library for Python. Follow th
- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Python 2.7+ or 3.6+](/azure/developer/python/configure-local-development-environment).-- [Azure CLI](/cli/azure/install-azure-cli).
+- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps).
-This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) in a Linux terminal window.
## Set up your local environment
-This quickstart is using Azure Identity library with Azure CLI to authenticate user to Azure Services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls, for more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
+This quickstart uses the Azure Identity library with Azure CLI or Azure PowerShell to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
### Sign in to Azure
-1. Run the `login` command.
+### [Azure CLI](#tab/azure-cli)
- ```azurecli-interactive
+1. Run the `az login` command.
+
+ ```azurecli
az login
```
This quickstart is using Azure Identity library with Azure CLI to authenticate u
2. Sign in with your account credentials in the browser.
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Run the `Connect-AzAccount` command.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+ If PowerShell can open your default browser, it will do so and load an Azure sign-in page.
+
+ Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the
+ authorization code displayed in your terminal.
+
+2. Sign in with your account credentials in the browser.
+++

### Install the packages

1. In a terminal or command prompt, create a suitable project folder, and then create and activate a Python virtual environment as described on [Use Python virtual environments](/azure/developer/python/configure-local-development-environment?tabs=cmd#use-python-virtual-environments).
This quickstart is using Azure Identity library with Azure CLI to authenticate u
Create an access policy for your key vault that grants secret permission to your user account.
+### [Azure CLI](#tab/azure-cli)
+
```azurecli
az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --secret-permissions delete get list set
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -UserPrincipalName "user@domain.com" -PermissionsToSecrets delete,get,list,set
+```
+++

## Create the sample code

The Azure Key Vault secret client library for Python allows you to manage secrets. The following code sample demonstrates how to create a client, set a secret, retrieve a secret, and delete a secret.
Make sure the code in the previous section is in a file named *kv_secrets.py*. T
python kv_secrets.py
```

-- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` command](#grant-access-to-your-key-vault).
+- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` or `Set-AzKeyVaultAccessPolicy` command](#grant-access-to-your-key-vault).
- Re-running the code with the same secret name may produce the error, "(Conflict) Secret \<name\> is currently in a deleted but recoverable state." Use a different secret name. ## Code details
retrieved_secret = client.get_secret(secretName)
The secret value is contained in `retrieved_secret.value`.
-You can also retrieve a secret with the the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show).
+You can also retrieve a secret with the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show) or the Azure PowerShell command [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
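For example, a quick check from the CLI might look like the following sketch (`mySecretName` stands in for whatever name the sample code used):

```azurecli
# Print the value of the secret stored by the sample
az keyvault secret show --vault-name <your-unique-keyvault-name> --name mySecretName --query value
```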
### Delete a secret
deleted_secret = poller.result()
The `begin_delete_secret` method is asynchronous and returns a poller object. Calling the poller's `result` method waits for its completion.
-You can verify that the secret had been removed with the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show).
+You can verify that the secret has been removed with the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show) or the Azure PowerShell command [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
Once deleted, a secret remains in a deleted but recoverable state for a time. If you run the code again, use a different secret name.
If you want to also experiment with [certificates](../certificates/quick-create-
Otherwise, when you're finished with the resources created in this article, use the following command to delete the resource group and all its contained resources:
+### [Azure CLI](#tab/azure-cli)
+
```azurecli
az group delete --resource-group myResourceGroup
```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name myResourceGroup
+```
+++

## Next steps

- [Overview of Azure Key Vault](../general/overview.md)
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
Egress charges might be waived for [Education Solutions](https://www.microsoft.c
For more information, see "What data transfer programs exist for academic customers and how do I qualify?" in the FAQ section of the [Programs for educational institutions](https://azure.microsoft.com/pricing/details/bandwidth/) page.
-For information about costs to store images and their replications, see [billing in an Azure Compute Gallery](/azure/virtual-machines/shared-image-galleries).
+For information about costs to store images and their replications, see [billing in an Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
#### Cost management
For more information about setting up and managing labs, see:
- [Configure a lab plan](lab-plan-setup-guide.md) - [Configure a lab](setup-guide.md)-- [Manage costs for labs](cost-management-guide.md)
+- [Manage costs for labs](cost-management-guide.md)
lighthouse Cross Tenant Management Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/cross-tenant-management-experience.md
With all scenarios, please be aware of the following current limitations:
- Requests handled by Azure Resource Manager can be performed using Azure Lighthouse. The operation URIs for these requests start with `https://management.azure.com`. However, requests that are handled by an instance of a resource type (such as Key Vault secrets access or storage data access) aren't supported with Azure Lighthouse. The operation URIs for these requests typically start with an address that is unique to your instance, such as `https://myaccount.blob.core.windows.net` or `https://mykeyvault.vault.azure.net/`. The latter are also typically data operations rather than management operations.
- Role assignments must use [Azure built-in roles](../../role-based-access-control/built-in-roles.md). All built-in roles are currently supported with Azure Lighthouse, except for Owner or any built-in roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission. The User Access Administrator role is supported only for limited use in [assigning roles to managed identities](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) are not supported.
+- Role assignments from Azure Lighthouse are not shown under Access Control (IAM) or with CLI tools such as `az role assignment list`. They are only visible in Azure Lighthouse under the Delegations section.
- While you can onboard subscriptions that use Azure Databricks, users in the managing tenant can't launch Azure Databricks workspaces on a delegated subscription at this time. - While you can onboard subscriptions and resource groups that have resource locks, those locks will not prevent actions from being performed by users in the managing tenant. [Deny assignments](../../role-based-access-control/deny-assignments.md) that protect system-managed resources, such as those created by Azure managed applications or Azure Blueprints (system-assigned deny assignments), do prevent users in the managing tenant from acting on those resources; however, at this time users in the customer tenant can't create their own deny assignments (user-assigned deny assignments). - Delegation of subscriptions across a [national cloud](../../active-directory/develop/authentication-national-cloud.md) and the Azure public cloud, or across two separate national clouds, is not supported.
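The CLI counterpart of the Delegations view mentioned above is the `az managedservices` command group. A minimal sketch, assuming your CLI context is set to the delegated (customer) subscription:

```azurecli
# List Azure Lighthouse registration assignments (delegations), including their definitions
az managedservices assignment list --include-definition true
```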
lighthouse Migration At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/migration-at-scale.md
The workflow for this model will be similar to the following:
1. When the target customer subscription is ready, proceed with the migration through the access granted by Azure Lighthouse. The migration project containing assessment results and migrated resources will be created in the customer tenant under the target subscription. > [!TIP]
-> Prior to migration, a landing zone must be deployed to provision the foundation infrastructure resources and prepare the subscription to which virtual machines will be migrated. To access or create some resources in this landing zone, the Owner built-in role may be required, which is not currently supported in Azure Lighthouse. With these scenarios, the customer may need to provide [guest access](/azure/active-directory/external-identities/what-is-b2b) or delegate admin access via the [Cloud Solution Provider (CSP) subscription model](/partner-center/customers-revoke-admin-privileges). For an approach to creating multi-tenant landing zones, see the [Multi-tenant-Landing-Zones demo solution](https://github.com/Azure/Multi-tenant-Landing-Zones) on GitHub.
+> Prior to migration, a landing zone must be deployed to provision the foundation infrastructure resources and prepare the subscription to which virtual machines will be migrated. To access or create some resources in this landing zone, the Owner built-in role may be required, which is not currently supported in Azure Lighthouse. With these scenarios, the customer may need to provide [guest access](../../active-directory/external-identities/what-is-b2b.md) or delegate admin access via the [Cloud Solution Provider (CSP) subscription model](/partner-center/customers-revoke-admin-privileges). For an approach to creating multi-tenant landing zones, see the [Multi-tenant-Landing-Zones demo solution](https://github.com/Azure/Multi-tenant-Landing-Zones) on GitHub.
## Create an Azure Migrate project in the managing tenant
For more information, see [Link your partner ID to track your impact on delegate
## Next steps - Learn more about [Azure Migrate](../../migrate/migrate-services-overview.md).-- Learn about other [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md) supported by Azure Lighthouse.
+- Learn about other [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md) supported by Azure Lighthouse.
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
-# Gateway Load Balancer (Preview)
+# Gateway Load Balancer
Gateway Load Balancer is a SKU of the Azure Load Balancer portfolio catered for high performance and high availability scenarios with third-party Network Virtual Appliances (NVAs). With the capabilities of Gateway Load Balancer, you can easily deploy, scale, and manage NVAs. Chaining a Gateway Load Balancer to your public endpoint only requires one click.
With Gateway Load Balancer, you can easily add or remove advanced network functi
The health probe listens across all ports and routes traffic to the backend instances using the HA ports rule. Traffic sent to and from Gateway Load Balancer uses the VXLAN protocol.
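As a hedged illustration of the chaining step, the following sketch updates an existing Standard load balancer's public frontend to point at a Gateway Load Balancer frontend IP configuration (all names and the resource ID are placeholders, and the exact parameter set may vary by CLI version):

```azurecli
# Chain an existing public frontend to a Gateway Load Balancer frontend IP configuration
az network lb frontend-ip update \
  --resource-group <resource-group> \
  --lb-name <standard-lb-name> \
  --name <frontend-ip-name> \
  --public-ip-address <public-ip-name> \
  --gateway-lb <gateway-lb-frontend-ip-config-resource-id>
```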
-> [!IMPORTANT]
-> Gateway Load Balancer is currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Benefits Gateway Load Balancer has the following benefits:
For pricing see [Load Balancer pricing](https://azure.microsoft.com/pricing/deta
* Gateway Load Balancer doesn't work with the Global Load Balancer tier. * Cross-tenant chaining is not supported through the Azure portal. * Gateway Load Balancer does not currently support IPv6
+* Gateway Load Balancer does not currently support zone-redundant frontends due to a known issue. All frontends configured as zone-redundant will be allocated no-zone or non-zonal IPs. Frontends configured in the Azure portal will automatically be created as no-zone.
## Next steps
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
Azure Load Balancer has three SKUs.
## <a name="skus"></a> SKU comparison

Azure Load Balancer has three SKUs - Basic, Standard, and Gateway. Each SKU is catered towards a specific scenario and has differences in scale, features, and pricing.
-To compare and understand the differences between Basic and Standard SKU, see the following table. For more information, see [Azure Standard Load Balancer overview](./load-balancer-overview.md). For information on Gateway SKU - catered for third-party network virtual appliances (NVAs) currently in preview, see [Gateway Load Balancer overview](gateway-overview.md)
+To compare and understand the differences between the Basic and Standard SKUs, see the following table. For more information, see [Azure Standard Load Balancer overview](./load-balancer-overview.md). For information on the Gateway SKU, which caters to third-party network virtual appliances (NVAs), see [Gateway Load Balancer overview](gateway-overview.md).
>[!NOTE]
> Microsoft recommends Standard load balancer. See [Upgrade from Basic to Standard Load Balancer](upgrade-basic-standard.md) for guided instructions on upgrading SKUs, along with an upgrade script.
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
You can also find the latest Azure Load Balancer updates and subscribe to the RS
| Type |Name |Description |Date added |
| --- | --- | --- | --- |
+| SKU | [Gateway Load Balancer now generally available](https://azure.microsoft.com/updates/generally-available-azure-gateway-load-balancer/) | Gateway Load Balancer is a new SKU of Azure Load Balancer targeted for scenarios requiring transparent NVA (network virtual appliance) insertion. Learn more about [Gateway Load Balancer](gateway-overview.md) or our supported [third party partners](gateway-partners.md). | July 2022 |
| SKU | [Gateway Load Balancer public preview](https://azure.microsoft.com/updates/gateway-load-balancer-preview/) | Gateway Load Balancer is a fully managed service enabling you to deploy, scale, and enhance the availability of third party network virtual appliances (NVAs) in Azure. You can add your favorite third party appliance whether it is a firewall, inline DDoS appliance, deep packet inspection system, or even your own custom appliance into the network path transparently, all with a single click.| November 2021 |
| Feature | [Support for IP-based backend pools (General Availability)](https://azure.microsoft.com/updates/iplbg)|March 2021 |
-| Feature | [Instance Metadata support for Standard SKU Load Balancers and Public IPs](https://azure.microsoft.com/updates/standard-load-balancer-and-ip-addresses-metadata-now-available-through-azure-instance-metadata-service-imds/)|Metadata of Standard Public IP addresses and Standard Load Balancer can now be retrieved through Azure Instance Metadata Service (IMDS). The metadata is available from within the running instances of virtual machines (VMs) and virtual machine scale sets instances. You can leverage the metadata to manage your virtual machines. Lean more [here](instance-metadata-service-load-balancer.md)| February 2021 |
+| Feature | [Instance Metadata support for Standard SKU Load Balancers and Public IPs](https://azure.microsoft.com/updates/standard-load-balancer-and-ip-addresses-metadata-now-available-through-azure-instance-metadata-service-imds/)|Metadata of Standard Public IP addresses and Standard Load Balancer can now be retrieved through Azure Instance Metadata Service (IMDS). The metadata is available from within the running instances of virtual machines (VMs) and virtual machine scale sets instances. You can leverage the metadata to manage your virtual machines. Learn more [here](instance-metadata-service-load-balancer.md)| February 2021 |
| Feature | [Public IP SKU upgrade from Basic to Standard without losing IP address](https://azure.microsoft.com/updates/public-ip-sku-upgrade-generally-available/) | As you move from Basic to Standard Load Balancers, retain your public IP address. Learn more [here](../virtual-network/ip-services/public-ip-upgrade-portal.md)| January 2021|
| Feature | Support for moves across resource groups | Standard Load Balancer and Standard Public IP support for [resource group moves](https://azure.microsoft.com/updates/standard-resource-group-move/). | October 2020 |
| Feature | [Cross-region load balancing with Global tier on Standard LB](https://azure.microsoft.com/updates/preview-azure-load-balancer-now-supports-crossregion-load-balancing/) | Azure Load Balancer supports Cross Region Load Balancing. Previously, Standard Load Balancer had a regional scope. With this release, you can load balance across multiple Azure regions via a single, static, global anycast Public IP address. | September 2020 |
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration Previously updated : 06/17/2022 Last updated : 07/14/2022
The following table lists the operations where you can use either the system-ass
| Operation type | Supported operations |
|-|-|
| Built-in | - Azure API Management <br>- Azure App Services <br>- Azure Functions <br>- HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. However, they don't support the user-assigned managed identity for authenticating the same connections. |
-| Managed connector | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure Key Vault <br>- Azure Resource Manager <br>- Azure Service Bus <br>- HTTP with Azure AD <br>- SQL Server |
+| Managed connector | - Azure AD <br>- Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
|||

### [Standard](#tab/standard)
The following table lists the operations where you can use both the system-assig
| Operation type | Supported operations |
|-|-|
| Built-in | - HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
-| Managed connector | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure Key Vault <br>- Azure Resource Manager <br>- Azure Service Bus <br>- HTTP with Azure AD <br>- SQL Server |
+| Managed connector | - Azure AD <br>- Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
|||
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Previously updated : 05/26/2022 Last updated : 07/13/2022 #Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Azure uses limits and quotas to prevent budget overruns due to fraud, and to hon
> + Viewing your quotas and limits. > + Requesting quota increases.
-Along with managing quotas, you can learn how to [plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md) or learn about the [service limits in Azure Machine Learning](resource-limits-quotas-capacity.md).
+Along with managing quotas, you can learn how to [plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md) or learn about the [service limits in Azure Machine Learning](resource-limits-capacity.md).
## Special considerations
In this section, you learn about the default and maximum quota limits for the fo
+ Azure Storage > [!IMPORTANT]
-> Limits are subject to change. For the latest information, see [Service limits in Azure Machine Learning](resource-limits-quotas-capacity.md).
+> Limits are subject to change. For the latest information, see [Service limits in Azure Machine Learning](resource-limits-capacity.md).
When you're requesting a quota increase, select the service that you have in min
## Next steps + [Plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md)
-+ [Service limits in Azure Machine Learning](resource-limits-quotas-capacity.md)
++ [Service limits in Azure Machine Learning](resource-limits-capacity.md) + [Troubleshooting managed online endpoints deployment and scoring](./how-to-troubleshoot-online-endpoints.md)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
When you access online endpoints with REST requests, the returned status codes a
| 401 | Unauthorized | You don't have permission to do the requested action, such as score, or your token is expired. | | 404 | Not found | Your URL isn't correct. | | 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config.|
-| 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](/azure/azure-monitor/essentials/metrics-getting-started). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
+| 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
| 429 | Rate-limiting | You attempted to send more than 100 requests per second to your endpoint. | | 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow 2 * `max_concurrent_requests_per_instance` * `instance_count` requests at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you are using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. | | 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
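As a rough illustration of the retry guidance in the table above, the following Python sketch resends a scoring request with exponential backoff on 429 responses and surfaces the model-error headers on a 424. The endpoint URL and key are placeholders, and the helper itself is hypothetical, not part of the Azure ML SDK.

```python
import time
import requests

def score_with_backoff(endpoint_url, api_key, payload, max_retries=5):
    """Score a managed online endpoint, backing off exponentially on 429s."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    for attempt in range(max_retries):
        response = requests.post(endpoint_url, json=payload, headers=headers)
        if response.status_code == 429:
            # Rate-limited or too many pending requests: wait 1s, 2s, 4s, ...
            time.sleep(2 ** attempt)
            continue
        if response.status_code == 424:
            # Model error: the container returned a non-200; show its details.
            print(response.headers.get("ms-azureml-model-error-statuscode"))
            print(response.headers.get("ms-azureml-model-error-reason"))
        return response
    raise RuntimeError("Endpoint still rate-limited after all retries")
```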
When you access online endpoints with REST requests, the returned status codes a
- [Deploy and score a machine learning model with a managed online endpoint](how-to-deploy-managed-online-endpoints.md) - [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)-- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
If you have an MLflow Project to train with Azure Machine Learning, see [Train M
## Prerequisites
-* An [Azure Synapse Analytics workspace and cluster](/azure/synapse-analytics/quickstart-create-workspace.md).
+* An [Azure Synapse Analytics workspace and cluster](../synapse-analytics/quickstart-create-workspace.md).
* An [Azure Machine Learning Workspace](quickstart-create-resources.md). ## Install libraries
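Once the libraries are installed, connecting a Synapse Spark session to the workspace's MLflow tracking store typically looks like the following sketch. It assumes `azureml-core` and the MLflow packages are available in the session; the workspace name, subscription ID, and resource group are placeholders.

```python
import mlflow
from azureml.core import Workspace

# Connect to the Azure Machine Learning workspace (values are placeholders).
ws = Workspace.get(
    name="my-aml-workspace",
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
)

# Point MLflow at the workspace tracking store and log a quick test run.
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("synapse-experiment")

with mlflow.start_run():
    mlflow.log_param("source", "synapse-spark")
    mlflow.log_metric("sample_metric", 1.0)
```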
If you wish to keep your Azure Synapse Analytics workspace, but no longer need t
## Next steps * [Track experiment runs with MLflow and Azure Machine Learning](how-to-use-mlflow.md). * [Deploy MLflow models in Azure Machine Learning](how-to-deploy-mlflow-models.md).
-* [Manage your models with MLflow](how-to-manage-models-mlflow.md).
+* [Manage your models with MLflow](how-to-manage-models-mlflow.md).
machine-learning Resource Limits Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md
+
+ Title: Service limits
+
+description: Service limits used for capacity planning and maximum limits on requests and responses for Azure Machine Learning.
+++++++ Last updated : 07/13/2022++
+# Service limits in Azure Machine Learning
+
+This section lists basic limits and throttling thresholds in Azure Machine Learning.
+
+To learn how to increase resource quotas, see ["Manage and increase quotas for resources"](how-to-manage-quotas.md).
+
+> [!IMPORTANT]
+> Azure Machine Learning doesn't store or process your data outside of the region where you deploy.
++
+## Workspaces
+| Limit | Value |
+| | |
+| Workspace name | 2-32 characters |
+
+## Runs
+| Limit | Value |
+| | |
+| Runs per workspace | 10 million |
+| RunId/ParentRunId | 256 characters |
+| DataContainerId | 261 characters |
+| DisplayName |256 characters|
+| Description |5,000 characters|
+| Number of properties |50 |
+| Length of property key |100 characters |
+| Length of property value |1,000 characters |
+| Number of tags |50 |
+| Length of tag key |100 characters |
+| Length of tag value |1,000 characters |
+| CancelUri / CompleteUri / DiagnosticsUri |1,000 characters |
+| Error message length |3,000 characters |
+| Warning message length |300 characters |
+| Number of input datasets |200 |
+| Number of output datasets |20 |
++
+## Metrics
+| Limit | Value |
+| | |
+| Metric names per run |50|
+| Metric rows per metric name |10 million|
+| Columns per metric row |15|
+| Metric column name length |255 characters |
+| Metric column value length |255 characters |
+| Metric rows per batch uploaded | 250 |
+
+> [!NOTE]
+> If you're hitting the limit of metric names per run because you're formatting variables into the metric name, consider instead using a row metric, where one column holds the variable value and a second column holds the metric value.
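As a minimal sketch of that suggestion, assuming the v1 `azureml-core` SDK inside a submitted run, a single row metric keeps the variable in a column instead of spending one of the 50 metric names per value:

```python
from azureml.core import Run

run = Run.get_context()  # the active run when executed inside an Azure ML job

# Instead of one metric name per fold, e.g. run.log(f"accuracy_fold_{i}", acc),
# log a single row metric with the fold index as a column:
for fold, acc in enumerate([0.91, 0.93, 0.90]):  # illustrative values
    run.log_row("accuracy_by_fold", fold=fold, accuracy=acc)
```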
+
+## Artifacts
+
+| Limit | Value |
+| | |
+| Number of artifacts per run |10 million|
+| Max length of artifact path |5,000 characters |
+
+## Limit increases
+
+Some limits can be increased for individual workspaces by [contacting support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/).
+
+## Next steps
+
+- [Configure your Azure Machine Learning environment](how-to-configure-environment.md)
+- Learn how to increase resource quotas in ["Manage and increase quotas for resources"](how-to-manage-quotas.md).
+
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
You need an Azure Machine Learning workspace to use the designer. The workspace
1. Select **Easy-to-use prebuilt components**.
-1. At the top of the canvas, select the default pipeline name **Pipeline-Created-on**. Rename it to *Automobile price prediction*. The name doesn't need to be unique.
+1. Open the ![Screenshot of the gear icon that is in the UI.](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) **Settings** pane to the right of the canvas, and scroll to the **Draft name** text box. Rename it to *Automobile price prediction*. The name doesn't need to be unique.
## Set the default compute target
A pipeline runs on a compute target, which is a compute resource that's attached
You can set a **Default compute target** for the entire pipeline, which will tell every component to use the same compute target by default. However, you can specify compute targets on a per-module basis.
-1. Next to the pipeline name, select the **Gear icon** ![Screenshot of the gear icon that is in the UI.](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) at the top of the canvas to open the **Settings** pane.
+1. Select ![Screenshot of the gear icon that is in the UI.](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) **Settings** to the right of the canvas to open the **Settings** pane.
-1. In the **Settings** pane to the right of the canvas, select **Select compute target**.
+1. Select **Create Azure ML compute instance**.
- If you already have an available compute target, you can select it to run this pipeline.
+ If you already have an available compute target, you can select it from the **Select Azure ML compute instance** drop-down to run this pipeline.
1. Enter a name for the compute resource.
-1. Select **Save**.
+1. Select **Create**.
> [!NOTE] > It takes approximately five minutes to create a compute resource. After the resource is created, you can reuse it and skip this wait time for future runs.
You can set a **Default compute target** for the entire pipeline, which will tel
There are several sample datasets included in the designer for you to experiment with. For this tutorial, use **Automobile price data (Raw)**.
-1. To the left of the pipeline canvas is a palette of datasets and components. Select **Sample datasets** to view the available sample datasets.
+1. To the left of the pipeline canvas is a palette of datasets and components. Select **Data**.
1. Select the dataset **Automobile price data (Raw)**, and drag it onto the canvas.
Datasets typically require some preprocessing before analysis. You might have no
When you train a model, you have to do something about the data that's missing. In this dataset, the **normalized-losses** column is missing many values, so you'll exclude that column from the model altogether.
-1. In the component palette to the left of the canvas, expand the **Component** section and find the **Select Columns in Dataset** component.
+1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Select Columns in Dataset** component.
1. Drag the **Select Columns in Dataset** component onto the canvas. Drop the component below the dataset component.
When you train a model, you have to do something about the data that's missing.
1. Select the **Select Columns in Dataset** component.
-1. In the component details pane to the right of the canvas, select **Edit column**.
+1. Select the arrow icon under **Settings** to the right of the canvas to open the component details pane. Alternatively, you can double-click the **Select Columns in Dataset** component to open the details pane.
+
+1. Select **Edit column** to the right of the pane.
1. Expand the **Column names** drop down next to **Include**, and select **All columns**.
When you train a model, you have to do something about the data that's missing.
:::image type="content" source="./media/tutorial-designer-automobile-price-train-score/exclude-column.png" alt-text="Screenshot of select columns with exclude highlighted.":::
-1. Select the **Select Columns in Dataset** component.
+1. In the **Select Columns in Dataset** component details pane, expand **Node info**.
-1. In the component details pane to the right of the canvas, select the **Comment** text box and enter *Exclude normalized losses*.
+1. Select the **Comment** text box and enter *Exclude normalized losses*.
Comments will appear on the graph to help you organize your pipeline.
Your dataset still has missing values after you remove the **normalized-losses**
> [!TIP] > Cleaning the missing values from input data is a prerequisite for using most of the components in the designer.
-1. In the component palette to the left of the canvas, expand the section **Component**, and find the **Clean Missing Data** component.
+1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Clean Missing Data** component.
1. Drag the **Clean Missing Data** component to the pipeline canvas. Connect it to the **Select Columns in Dataset** component. 1. Select the **Clean Missing Data** component.
-1. In the component details pane to the right of the canvas, select **Edit Column**.
+1. Select the arrow icon under **Settings** to the right of the canvas to open the component details pane. Alternatively, you can double-click the **Clean Missing Data** component to open the details pane.
+
+1. Select **Edit column** to the right of the pane.
1. In the **Columns to be cleaned** window that appears, expand the drop-down menu next to **Include**. Select **All columns**. 1. Select **Save**.
-1. In the component details pane to the right of the canvas, select **Remove entire row** under **Cleaning mode**.
+1. In the **Clean Missing Data** component details pane, under **Cleaning mode**, select **Remove entire row**.
-1. In the component details pane to the right of the canvas, select the **Comment** box, and enter *Remove missing value rows*.
+1. In the **Clean Missing Data** component details pane, expand **Node info**.
+
+1. Select the **Comment** text box and enter *Remove missing value rows*.
Your pipeline should now look something like this:
Because you want to predict price, which is a number, you can use a regression a
Splitting data is a common task in machine learning. You'll split your data into two separate datasets. One dataset will train the model and the other will test how well the model performed.
-1. In the component palette, expand the section **Component** and find the **Split Data** component.
+1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Split Data** component.
1. Drag the **Split Data** component to the pipeline canvas. 1. Connect the left port of the **Clean Missing Data** component to the **Split Data** component. > [!IMPORTANT]
- > Be sure that the left output ports of **Clean Missing Data** connects to **Split Data**. The left port contains the cleaned data. The right port contains the discarded data.
+ > Make sure that the left output port of **Clean Missing Data** connects to **Split Data**. The left port contains the cleaned data. The right port contains the discarded data.
1. Select the **Split Data** component.
-1. In the component details pane to the right of the canvas, set the **Fraction of rows in the first output dataset** to 0.7.
+1. Select the arrow icon under **Settings** to the right of the canvas to open the component details pane. Alternatively, you can double-click the **Split Data** component to open the details pane.
+
+1. In the **Split Data** details pane, set the **Fraction of rows in the first output dataset** to 0.7.
This option splits 70 percent of the data to train the model and 30 percent for testing it. The 70 percent dataset will be accessible through the left output port. The remaining data will be available through the right output port.
-1. In the component details pane to the right of the canvas, select the **Comment** box, and enter *Split the dataset into training set (0.7) and test set (0.3)*.
+1. In the **Split Data** details pane, expand **Node info**.
+
+1. Select the **Comment** text box and enter *Split the dataset into training set (0.7) and test set (0.3)*.
### Train the model Train the model by giving it a dataset that includes the price. The algorithm constructs a model that explains the relationship between the features and the price as presented by the training data.
-1. In the component palette, expand **Components**.
+1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Linear Regression** component.
- This option displays several categories of components that you can use to initialize learning algorithms.
+1. Drag the **Linear Regression** component to the pipeline canvas.
-1. Select **Regression** > **Linear Regression**, and drag it to the pipeline canvas.
+1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Train Model** component.
-1. In the component palette, expand the section **Module training**, and drag the **Train Model** component to the canvas.
+1. Drag the **Train Model** component to the pipeline canvas.
1. Connect the output of the **Linear Regression** component to the left input of the **Train Model** component. 1. Connect the training data output (left port) of the **Split Data** component to the right input of the **Train Model** component. > [!IMPORTANT]
- > Be sure that the left output ports of **Split Data** connects to **Train Model**. The left port contains the training set. The right port contains the test set.
+ > Make sure that the left output port of **Split Data** connects to **Train Model**. The left port contains the training set. The right port contains the test set.
:::image type="content" source="./media/tutorial-designer-automobile-price-train-score/pipeline-train-model.png" alt-text="Screenshot showing the Linear Regression connects to left port of Train Model and the Split Data connects to right port of Train Model."::: 1. Select the **Train Model** component.
-1. Double click to open the component details, select **Edit column** selector.
+1. Select the arrow icon under **Settings** to the right of the canvas to open the component details pane. Alternatively, you can double-click the **Train Model** component to open the details pane.
+
+1. Select **Edit column** to the right of the pane.
-1. In the **Label column** dialog box, expand the drop-down menu and select **Column names**.
+1. In the **Label column** window that appears, expand the drop-down menu and select **Column names**.
1. In the text box, enter *price* to specify the value that your model is going to predict.
Train the model by giving it a dataset that includes the price. The algorithm co
After you train your model by using 70 percent of the data, you can use it to score the other 30 percent to see how well your model functions.
-1. Enter *score model* in the search box to find the **Score Model** component. Drag the component to the pipeline canvas.
+1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Score Model** component.
+
+1. Drag the **Score Model** component to the pipeline canvas.
1. Connect the output of the **Train Model** component to the left input port of **Score Model**. Connect the test data output (right port) of the **Split Data** component to the right input port of **Score Model**.
After you train your model by using 70 percent of the data, you can use it to sc
Use the **Evaluate Model** component to evaluate how well your model scored the test dataset.
-1. Enter *evaluate* in the search box to find the **Evaluate Model** component. Drag the component to the pipeline canvas.
+1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Evaluate Model** component.
+
+1. Drag the **Evaluate Model** component to the pipeline canvas.
1. Connect the output of the **Score Model** component to the left input of **Evaluate Model**.
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
Azure Machine Learning makes it easy to connect to your data in the cloud. It pr
## Data workflow
-When you're ready to use the data in your cloud-based storage solution, we recommend the following data delivery workflow. This workflow assumes you have an [Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) and data in a cloud-based storage service in Azure.
+When you're ready to use the data in your cloud-based storage solution, we recommend the following data delivery workflow. This workflow assumes you have an [Azure storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) and data in a cloud-based storage service in Azure.
1. Create an [Azure Machine Learning datastore](#connect-to-storage-with-datastores) to store connection information to your Azure storage.
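For step 1, a minimal sketch of registering a blob container as a datastore with the v1 Python SDK follows; the datastore, container, and account names are placeholders, and the account key should come from a secure location rather than source code.

```python
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()  # reads config.json for an existing workspace

# Register a blob container as a datastore (all names below are placeholders).
datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="my_blob_datastore",
    container_name="training-data",
    account_name="mystorageaccount",
    account_key="<storage-account-key>",
)

print(datastore.name, datastore.datastore_type)
```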
See the [Create a dataset monitor](../how-to-monitor-datasets.md) article, to le
## Next steps + Create a dataset in Azure Machine Learning studio or with the Python SDK [using these steps.](how-to-create-register-datasets.md)
-+ Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
++ Try out dataset training examples with our [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
managed-grafana How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-permissions.md
In this article, you'll learn how to manually edit permissions for a specific re
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).-- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](/azure/managed-grafana/quickstart-managed-grafana-portal).
+- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
- An Azure resource with monitoring data and write permissions, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner) ## Sign in to Azure
For more information about how to use Managed Grafana with Azure Monitor, go to
## Next steps > [!div class="nextstepaction"]
-> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
+> [How to configure data sources for Azure Managed Grafana](./how-to-data-source-plugins-managed-identity.md)
migrate Common Questions Discovery Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md
The Azure SQL assessment only includes databases that are in online status. In c
## I want to compare costs for running my SQL instances on Azure VM vs Azure SQL Database/Azure SQL Managed Instance
-You can create an assessment with type **Azure VM** on the same group that was used in your **Azure SQL** assessment. You can then compare the two reports side by side. Though, Azure VM assessments in Azure Migrate are currently lift-and-shift focused and will not consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine. When you run an Azure VM assessment on a server, the recommended size and cost estimates will be for all instances running on the server and can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md) for SQL Server on Azure virtual machines.
+You can create an assessment with type **Azure VM** on the same group that was used in your **Azure SQL** assessment, and then compare the two reports side by side. However, Azure VM assessments in Azure Migrate are currently lift-and-shift focused and don't consider the specific performance metrics for running SQL instances and databases on the Azure virtual machine. When you run an Azure VM assessment on a server, the recommended size and cost estimates cover all instances running on the server, which can be migrated to an Azure VM using the Server Migration tool. Before you migrate, [review the performance guidelines](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist) for SQL Server on Azure virtual machines.
## The storage cost in my Azure SQL assessment is zero
migrate Concepts Azure Sql Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md
Assessment criteria | **Comfort factor** | You can indicate the buffer you want
Azure SQL Managed Instance sizing | **Service Tier** | You can choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Managed Instance:<br/><br/>Select *Recommended* if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.<br/><br/>Select *General Purpose* if you want an Azure SQL configuration designed for budget-oriented workloads.<br/><br/>Select *Business Critical* if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers. Azure SQL Managed Instance sizing | **Instance type** | Defaulted to *Single instance*. Azure SQL Managed Instance sizing | **Pricing Tier** | Defaulted to *Standard*.
-SQL Server on Azure VM sizing | **VM series** | You can specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment will recommend a VM size from the selected list of VM series. <br/>You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list.<br/> As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+SQL Server on Azure VM sizing | **VM series** | You can specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment will recommend a VM size from the selected list of VM series. <br/>You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list.<br/> As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?preserve-view=true&view=azuresql#vm-size).
SQL Server on Azure VM sizing | **Storage Type** | Defaulted to *Recommended*, which means the assessment will recommend the best suited Azure Managed Disk based on the chosen environment type, on-premises disk size, IOPS and throughput. Azure SQL Database sizing | **Service Tier** | You can choose the most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database:<br/><br/>Select **Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical.<br/><br/>Select **General Purpose** if you want an Azure SQL configuration designed for budget-oriented workloads.<br/><br/>Select **Business Critical** if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers. Azure SQL Database sizing | **Instance type** | Defaulted to *Single database*.
After the assessment determines the readiness and the recommended Azure SQL depl
### Instances to SQL Server on Azure VM configuration
-*Instance to SQL Server on Azure VM* assessment report covers the ideal approach for migrating SQL Server instances and databases to SQL Server on Azure VM, adhering to the best practices. [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+The *Instance to SQL Server on Azure VM* assessment report covers the ideal approach for migrating SQL Server instances and databases to SQL Server on Azure VM, adhering to best practices. [Learn more](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?preserve-view=true&view=azuresql#vm-size).
#### Storage sizing For storage sizing, the assessment maps each of the instance disk to an Azure disk. Sizing works as follows:
After it calculates storage requirements, the assessment considers CPU and RAM r
- If a suitable size is found, Azure Migrate applies the storage calculations. It then applies location and pricing-tier settings for the final VM size recommendation. - If there are multiple eligible Azure VM sizes, the one with the lowest cost is recommended. > [!NOTE]
->As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+>As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?preserve-view=true&view=azuresql#vm-size).
### Servers to SQL Server on Azure VM configuration
After sizing recommendations are complete, Azure SQL assessment calculates the c
## Next steps - [Review](best-practices-assessment.md) best practices for creating assessments. -- Learn how to run an [Azure SQL assessment](how-to-create-azure-sql-assessment.md).
+- Learn how to run an [Azure SQL assessment](how-to-create-azure-sql-assessment.md).
migrate Concepts Migration Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md
Support | Details
- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md). - Once you have successfully completed migration, you may explore the following steps based on web app specific requirement(s):
- - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
- - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
- - [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview).
- - [Deployment best practices](/azure/app-service/deploy-best-practices).
- - [Security recommendations](/azure/app-service/security-recommendations).
- - [Networking features](/azure/app-service/networking-features).
- - [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service).
- - [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).
-- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App service.
+ - [Map existing custom DNS name](../app-service/app-service-web-tutorial-custom-domain.md).
+ - [Secure a custom DNS with a TLS/SSL binding](../app-service/configure-ssl-bindings.md).
+ - [Securely connect to Azure resources](../app-service/tutorial-connect-overview.md).
+ - [Deployment best practices](../app-service/deploy-best-practices.md).
+ - [Security recommendations](../app-service/security-recommendations.md).
+ - [Networking features](../app-service/networking-features.md).
+ - [Monitor App Service with Azure Monitor](../app-service/monitor-app-service.md).
+ - [Configure Azure AD authentication](../app-service/configure-authentication-provider-aad.md).
+- [Review best practices](../app-service/deploy-best-practices.md) for deploying to Azure App service.
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
Run an assessment as follows:
- In **VM series**, specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment will recommend a VM size from the selected list of VM series. - You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list. > [!NOTE]
- > As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+ > As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?preserve-view=true&view=azuresql#vm-size).
- **Storage Type** is defaulted to *Recommended*, which means the assessment will recommend the best suited Azure Managed Disk based on the chosen environment type, on-premises disk size, IOPS, and throughput. 1. In **Assessment settings** > **Azure SQL Database sizing**,
The confidence rating helps you estimate the reliability of size recommendations
## Next steps - [Learn more](concepts-azure-sql-assessment-calculation.md) about how Azure SQL assessments are calculated.-- Start migrating SQL instances and databases using [Azure Database Migration Service](../dms/dms-overview.md).
+- Start migrating SQL instances and databases using [Azure Database Migration Service](../dms/dms-overview.md).
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
Set up an account that the appliance can use to access the physical servers.
**Windows servers**
-For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. The user account can be created in one of the two ways:
+For Windows servers, use a domain account for domain-joined servers, and a local account for servers that aren't domain-joined. The user account can be created in one of two ways:
### Option 1
For Linux servers, based on the features you want to perform, you can create a u
### Option 2 - To discover the configuration and performance metadata from Linux servers, you can provide a user account with sudo permissions.-- The support to add a user account with sudo access is provided by default with the new appliance installer script downloaded from portal after July 20,2021.
+- Support for adding a user account with sudo access is provided by default with the new appliance installer script downloaded from the portal after July 20, 2021.
- For older appliances, you can enable the capability by following these steps: 1. On the server running the appliance, open the Registry Editor. 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance.
For Linux servers, based on the features you want to perform, you can create a u
:::image type="content" source="./media/tutorial-discover-physical/issudo-reg-key.png" alt-text="Screenshot that shows how to enable sudo support."::: -- You need to enable sudo access for the commands listed [here](discovered-metadata.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account to run the required commands without prompting for a password every time sudo command is invoked.
+- For the sudo user, you need to grant NOPASSWD permission for /bin/bash in the sudoers file, in addition to the commands mentioned in the table [here](discovered-metadata.md#linux-server-metadata).
- The following Linux OS distributions are supported for discovery by Azure Migrate using an account with sudo access: Operating system | Versions
For Linux servers, based on the features you want to perform, you can create a u
> 'Sudo' account is currently not supported to perform software inventory (discovery of installed applications) and enable agentless dependency analysis. ### Option 3-- If you cannot provide root account or user account with sudo access, then you can set 'isSudo' registry key to value '0' in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry and provide a non-root account with the required capabilities using the following commands:
+- If you can't provide a root account or a user account with sudo access, then you can set the 'isSudo' registry key to value '0' in the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance registry and provide a non-root account with the required capabilities using the following commands:
**Command** | **Purpose** | |
Support | Details
## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in an Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
Support | Details |
Support | Details
**Operating systems** | Servers running all Windows and Linux versions that meet the server requirements and have the required access permissions are supported. **Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date **Windows server access** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | A root user account, or an account that has these permissions on /bin/netstat and /bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Linux server access** | A root user account, or an account that has these permissions on the /usr/bin/netstat and /usr/bin/ls files: <br />CAP_DAC_READ_SEARCH<br /> CAP_SYS_PTRACE<br /><br /> Set these capabilities by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /usr/bin/ls</code><br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /usr/bin/netstat</code>
**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP). **Discovery method** | Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
Support | Details
**Log Analytics** | Azure Migrate uses the [Service Map](../azure-monitor/vm/service-map.md) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after it's added. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> In Log Analytics, the workspace associated with Azure Migrate is tagged with the Migration Project key, and the project name. **Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/agents/agents-overview.md#dependency-agent).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma). **Log Analytics workspace** | The workspace must be in the same subscription a project.<br/><br/> Azure Migrate supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions).<br/><br/> The workspace for a project can't be modified after it's added.
-**Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project)/<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace is not deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of Log Analytics workspace/<br/><br/>If you have projects that you created before Azure Migrate general availability (GA- 28 February 2018), you might have incurred additional Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project, since existing workspaces before GA are still chargeable.
-**Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality will not work as expected.
+**Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project).<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace isn't deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of Log Analytics workspace.<br/><br/>If you have projects that you created before Azure Migrate general availability (GA, 28 February 2018), you might have incurred additional Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project, since existing workspaces before GA are still chargeable.
+**Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality won't work as expected.
**Internet connectivity** | If servers aren't connected to the internet, you need to install the Log Analytics gateway on them. **Azure Government** | Agent-based dependency analysis isn't supported.
migrate Troubleshoot Webapps Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md
UnableToConnectToServer | Connecting to the remote server failed. | Check error
- Continue to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md). - Once you have successfully completed migration, you may explore the following steps based on web app specific requirement(s):
- - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
- - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
- - [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview).
- - [Deployment best practices](/azure/app-service/deploy-best-practices).
- - [Security recommendations](/azure/app-service/security-recommendations).
- - [Networking features](/azure/app-service/networking-features).
- - [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service).
- - [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).
-- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App service.
+ - [Map existing custom DNS name](../app-service/app-service-web-tutorial-custom-domain.md).
+ - [Secure a custom DNS with a TLS/SSL binding](../app-service/configure-ssl-bindings.md).
+ - [Securely connect to Azure resources](../app-service/tutorial-connect-overview.md).
+ - [Deployment best practices](../app-service/deploy-best-practices.md).
+ - [Security recommendations](../app-service/security-recommendations.md).
+ - [Networking features](../app-service/networking-features.md).
+ - [Monitor App Service with Azure Monitor](../app-service/monitor-app-service.md).
+ - [Configure Azure AD authentication](../app-service/configure-authentication-provider-aad.md).
+- [Review best practices](../app-service/deploy-best-practices.md) for deploying to Azure App service.
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
Run an assessment as follows:
- In **VM series**, specify the Azure VM series you want to consider for *SQL Server on Azure VM* sizing. Based on the configuration and performance requirements of your SQL Server or SQL Server instance, the assessment will recommend a VM size from the selected list of VM series. - You can edit settings as needed. For example, if you don't want to include D-series VM, you can exclude D-series from this list. > [!NOTE]
- > As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](https://docs.microsoft.com/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?view=azuresql#vm-size&preserve-view=true).
+ > As Azure SQL assessments are intended to give the best performance for your SQL workloads, the VM series list only has VMs that are optimized for running your SQL Server on Azure Virtual Machines (VMs). [Learn more](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist?preserve-view=true&view=azuresql#vm-size).
- **Storage Type** is defaulted to *Recommended*, which means the assessment will recommend the best suited Azure Managed Disk based on the chosen environment type, on-premises disk size, IOPS, and throughput. 1. In **Assessment settings** > **Azure SQL Database sizing**,
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
Run the following commands on each host, as described below:
1. Register the Hyper-V host to Azure Migrate. > [!NOTE]
- > If your Hyper-V host was previously registered with another Azure Migrate project that you are no longer using or have deleted, you'll need to de-register it from that project and register it in the new one. Follow the [Remove servers and disable protection](https://docs.microsoft.com/azure/site-recovery/site-recovery-manage-registration-and-protection?WT.mc_id=modinfra-39236-thmaure#unregister-a-connected-configuration-server) guide to do so.
+ > If your Hyper-V host was previously registered with another Azure Migrate project that you are no longer using or have deleted, you'll need to de-register it from that project and register it in the new one. Follow the [Remove servers and disable protection](../site-recovery/site-recovery-manage-registration-and-protection.md?WT.mc_id=modinfra-39236-thmaure) guide to do so.
``` "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r /Credentials <key file path>
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
If you don't have an Azure subscription, create a [free account](https://azure.m
Before you begin this tutorial, you should: - [Review](./agent-based-migration-architecture.md) the migration architecture.-- [Review](/azure/site-recovery/migrate-tutorial-windows-server-2008#limitations-and-known-issues) the limitations related to migrating Windows Server 2008 servers to Azure.
+- [Review](../site-recovery/migrate-tutorial-windows-server-2008.md#limitations-and-known-issues) the limitations related to migrating Windows Server 2008 servers to Azure.
## Prepare Azure
Assign the Virtual Machine Contributor role to the Azure account. This provides
### Create an Azure network > [!IMPORTANT]
-> Virtual Networks (VNets) are a regional service, so make sure you create your VNet in the desired target Azure Region. For example: if you are planning on replicating and migrating Virtual Machines from your on-premises environment to the East US Azure Region, then your target VNet **must be created** in the East US Region. To connect VNets in different regions refer to the [Virtual network peering](/azure/virtual-network/virtual-network-peering-overview) guide.
+> Virtual Networks (VNets) are a regional service, so make sure you create your VNet in the desired target Azure region. For example, if you plan to replicate and migrate virtual machines from your on-premises environment to the East US Azure region, your target VNet **must be created** in the East US region. To connect VNets in different regions, refer to the [Virtual network peering](../virtual-network/virtual-network-peering-overview.md) guide.
[Set up](../virtual-network/manage-virtual-network.md#create-a-virtual-network) an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are created and joined to the Azure VNet that you specify when you set up migration.
After you've verified that the test migration works as expected, you can migrate
## Next steps
-Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
migrate Tutorial Migrate Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-webapps.md
Once the migration is initiated, you can track the status using the Azure Resour
Once you have successfully completed migration, you may explore the following steps based on web app specific requirement(s): -- [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).-- [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).-- [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview)-- [Deployment best practices](/azure/app-service/deploy-best-practices).-- [Security recommendations](/azure/app-service/security-recommendations).-- [Networking features](/azure/app-service/networking-features).-- [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service).-- [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).
+- [Map existing custom DNS name](../app-service/app-service-web-tutorial-custom-domain.md).
+- [Secure a custom DNS with a TLS/SSL binding](../app-service/configure-ssl-bindings.md).
+- [Securely connect to Azure resources](../app-service/tutorial-connect-overview.md).
+- [Deployment best practices](../app-service/deploy-best-practices.md).
+- [Security recommendations](../app-service/security-recommendations.md).
+- [Networking features](../app-service/networking-features.md).
+- [Monitor App Service with Azure Monitor](../app-service/monitor-app-service.md).
+- [Configure Azure AD authentication](../app-service/configure-authentication-provider-aad.md).
## Next steps - Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.-- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App service.
+- [Review best practices](../app-service/deploy-best-practices.md) for deploying to Azure App service.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Also, when a NSG is deleted, by default the associated flow log resource is dele
- [Logic Apps](https://azure.microsoft.com/services/logic-apps/) > [!NOTE]
-> App services deployed under App Service Plan do not support NSG Flow Logs. Please refer [this documentaion](/azure/app-service/overview-vnet-integration#how-regional-virtual-network-integration-works.md) for additional details.
+> App services deployed under an App Service plan don't support NSG Flow Logs. For more details, refer to [this documentation](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
## Best practices
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Interxion](https://www.interxion.com/products/interconnection/cloud-connect/support-your-cloud-strategy/)|[Azure Networking Assessment - Five Days](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/interxionhq.inxn_azure_networking_assessment)||||| |[IX Reach](https://www.ixreach.com/services/sdn-cloud-connect/)||[ExpressRoute by IX Reach, a BSO company](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ixreach.cloudconnect?tab=Overview)|||| |[KoçSistem](https://azure.kocsistem.com.tr/en)|[KoçSistem Managed Cloud Services for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.kocsistemcloudmanagementtool?tab=Overview)|[KoçSistem Azure ExpressRoute Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_express_route?tab=Overview)|[KoçSistem Azure Virtual WAN Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_virtual_wan?tab=Overview)||[`KoçSistem Azure Security Center Managed Service`](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_security_center?tab=Overview)|
-|[Liquid Telecom](https://liquidcloud.africa/)|[Cloud Readiness - Two Hour Assessment](https://azuremarketplace.microsoft.com/marketplace/consulting-services/incremental_group_ltd.cloud-readiness-assess);[Liquid Azure Expert Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.5dab29ab-bb14-4df8-8978-9a8608a41ad7?tab=Overview)|||||
+|[Liquid Telecom](https://liquidcloud.africa/)|[Liquid Managed ExpressRoute for Azure (Microsoft preferred solution badge)](https://azuremarketplace.microsoft.com/marketplace/apps/liquidtelecommunicationsoperationslimited.42cfee0b-8f07-4948-94b0-c9fc3e1ddc42?tab=Overview); [Liquid Azure Expert Services](https://azuremarketplace.microsoft.com/marketplace/apps/liquidtelecommunicationsoperationslimited.5dab29ab-bb14-4df8-8978-9a8608a41ad7?tab=Overview)|||||
|[Lumen](https://www.lumen.com/en-us/solutions/hybrid-cloud.html)||[ExpressRoute Consulting
|[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview); [SD-WAN Virtual Edge offer by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview)||[Managed Security by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)|
|[Megaport](https://www.megaport.com/services/microsoft-expressroute/)||[Managed Routing Service for ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/megaport1582290752989.megaport_mcr?tab=Overview)||||
orbital Concepts Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact-profile.md
Title: Ground station contact profile - Azure Orbital GSaaS
+ Title: Ground station contact profile - Azure Orbital
description: Learn more about the contact profile object, including how to create, modify, and delete the profile. Previously updated : 06/21/2022 Last updated : 07/13/2022
-#Customer intent: As a satellite operator or user, I want to understand how to use the contact profile so that I can take passes using the GSaaS service.
+#Customer intent: As a satellite operator or user, I want to understand how to use the contact profile so that I can take passes using the Azure Orbital Ground Station (AOGS) service.
# Ground station contact profile
See [how to configure a contact profile](contact-profile.md) for the full list o
## Prerequisites

-- Subnet that is created in the VNET and resource group you desire. See [Prepare network for Orbital GSaaS integration.](prepare-network.md)
+- Subnet that is created in the VNET and resource group you desire. See [Prepare network for Azure Orbital Ground Station integration.](prepare-network.md)
## Creating a contact profile
When you onboard a third party network, you'll receive a token that identifies y
## Next steps

-- [Quickstart: Schedule a contact](schedule-contact.md)
-- [How to: Update the Spacecraft TLE](update-tle.md)
+- [Schedule a contact](schedule-contact.md)
+- [Update the Spacecraft TLE](update-tle.md)
orbital Concepts Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact.md
Title: Ground station contact - Azure Orbital GSaSS
+ Title: Ground station contact - Azure Orbital
description: Learn more about the contact object and how to schedule a contact. Previously updated : 06/21/2022 Last updated : 07/13/2022 #Customer intent: As a satellite operator or user, I want to understand how to what the contact object is so I can manage my mission operations. # Ground station contact
-A contact occurs when the spacecraft is over a specified ground station. You can find available passes on the system and schedule them for use through Azure Orbital GSaaS. A contact and ground station pass mean the same thing.
+A contact occurs when the spacecraft is over a specified ground station. You can find available passes on the system and schedule them for use through Azure Orbital Ground Station (AOGS). A contact and ground station pass mean the same thing.
When you schedule a contact, a contact object is created under your spacecraft object in your resource group. The contact is only associated with this spacecraft and can't be transferred to another spacecraft, resource group, or region.
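For illustration, a contact can also be created programmatically. The sketch below assumes the preview `Microsoft.Orbital/spacecrafts/contacts` request-body shape, with placeholder values throughout; verify property names against the current API reference before relying on it:

```json
{
  "properties": {
    "groundStationName": "<ground-station-name>",
    "reservationStartTime": "2022-07-20T01:00:00Z",
    "reservationEndTime": "2022-07-20T01:10:00Z",
    "contactProfile": {
      "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Orbital/contactProfiles/<contact-profile-name>"
    }
  }
}
```

Consistent with the behavior above, the contact is created as a child resource of the spacecraft it's scheduled for.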
See [how-to schedule a contact](schedule-contact.md) for the Portal method. The
## Next steps

-- [Quickstart: Schedule a contact](schedule-contact.md)
-- [How to: Update the Spacecraft TLE](update-tle.md)
+- [Schedule a contact](schedule-contact.md)
+- [Update the Spacecraft TLE](update-tle.md)
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/contact-profile.md
Title: 'Configure a contact profile on Azure Orbital Earth Observation service'
-description: 'Quickstart: Configure a contact profile'
+ Title: Configure a contact profile on Azure Orbital Ground Station service
+description: Learn how to configure a contact profile
Previously updated : 06/01/2022 Last updated : 07/13/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
Configure a contact profile with Azure Orbital to save and reuse contact configu
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- To complete the onboarding process for the preview. [Onboard to the Azure Orbital Preview](orbital-preview.md)
- To collect telemetry during the contact, create an event hub. [Learn more about Azure Event Hubs](../event-hubs/event-hubs-about.md)
- An IP address (private or public) for data retrieval/delivery. [Create a VM and use its private IP](../virtual-machines/windows/quick-create-portal.md)
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Next steps

- [How-to Receive real-time telemetry](receive-real-time-telemetry.md)
-- [Quickstart: Schedule a contact](schedule-contact.md)
-- [Tutorial: Cancel a contact](delete-contact.md)
+- [Schedule a contact](schedule-contact.md)
+- [Cancel a contact](delete-contact.md)
orbital Delete Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/delete-contact.md
Title: Cancel a scheduled contact on Azure Orbital Earth Observation service
+ Title: Cancel a scheduled contact on Azure Orbital Ground Station service
description: Learn how to cancel a scheduled contact. -+ Previously updated : 06/13/2022 Last updated : 07/13/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
-# Tutorial: Cancel a scheduled contact
+# Cancel a scheduled contact
To cancel a scheduled contact, the contact entry must be deleted on the **Contacts** page.
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Delete a scheduled contact entry
-1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
-2. In the **Spacecrafts** page, select the name of the spacecraft for the scheduled contact.
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the **Spacecraft** page, select the name of the spacecraft for the scheduled contact.
3. Select **Contacts** from the left menu bar in the spacecraft's overview page.

   :::image type="content" source="media/orbital-eos-delete-contact.png" alt-text="Select a scheduled contact" lightbox="media/orbital-eos-delete-contact.png":::
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Next steps

-- [Quickstart: Schedule a contact](schedule-contact.md)
-- [Tutorial: Update the spacecraft TLE](update-tle.md)
+- [Schedule a contact](schedule-contact.md)
+- [Update the spacecraft TLE](update-tle.md)
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
Title: Schedule a contact with NASA's AQUA public satellite using Azure Orbital Earth Observation Service
-description: How to schedule a contact with NASA's AQUA public satellite using Azure Orbital Earth Observation Service
+ Title: Schedule a contact with NASA's AQUA public satellite using Azure Orbital Ground Station service
+description: How to schedule a contact with NASA's AQUA public satellite using Azure Orbital Ground Station service
-+ Last updated 07/12/2022
# Tutorial: Downlink data from NASA's AQUA public satellite
-You can communicate with satellites directly from Azure using Azure Orbital's ground station service. Once downlinked, this data can be processed and analyzed in Azure. In this guide you'll learn how to:
+You can communicate with satellites directly from Azure using the Azure Orbital Ground Station (AOGS) service. Once downlinked, this data can be processed and analyzed in Azure. In this guide, you'll learn how to:
> [!div class="checklist"]
> * Create & authorize a spacecraft for AQUA
You can communicate with satellites directly from Azure using Azure Orbital's gr
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Complete the onboarding process for the preview. [Onboard to the Azure Orbital Preview](orbital-preview.md).

## Sign in to Azure
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
> These steps must be followed as is or you won't be able to find the resources. Please use the specific link above to sign in directly to the Azure Orbital Preview page.

## Create & authorize a spacecraft for AQUA
-1. In the Azure portal search box, enter **Spacecrafts*. Select **Spacecrafts** in the search results.
-2. In the **Spacecrafts** page, select Create.
+
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the **Spacecraft** page, select **Create**.
3. Obtain an up-to-date Two-Line Element (TLE) for AQUA by checking CelesTrak at https://celestrak.com/NORAD/elements/active.txt

   > [!NOTE]
   > You will want to periodically update this TLE value to ensure that it is up-to-date prior to scheduling a contact. A TLE that is more than one or two weeks old may result in an unsuccessful downlink.
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Prepare a virtual machine (VM) to receive the downlinked AQUA data
+
1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint virtual machine (VM)
2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network above. Ensure that this VM has the following specifications:
   - Operating System: Linux (Ubuntu 18.04 or higher)
sudo apt install socat
9. Select the **Create** button

## Schedule a contact with AQUA using Azure Orbital and save the downlinked data
-1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
-2. In the **Spacecrafts** page, select **AQUA**.
+
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the **Spacecraft** page, select **AQUA**.
3. Select **Schedule contact** on the top bar of the spacecraft's overview.
4. In the **Schedule contact** page, specify this information from the top of the page:
socat -u tcp-listen:56001,fork create:/media/aqua/out.bin
of the tmpfs and into your home directory to avoid being overwritten when another contact is executed.

> [!NOTE]
- > For a 10 minute long contact with AQUA while it is transmitting with 15MHz of bandwidth, you should expect to receive somewhere in the order of 450MB of data.
   > For a 10-minute contact with AQUA while it is transmitting with 15 MHz of bandwidth, you should expect to receive on the order of 450 MB of data.
## Next steps

-- [Quickstart: Configure a contact profile](contact-profile.md)
-- [Quickstart: Schedule a contact](schedule-contact.md)
+- [Configure a contact profile](contact-profile.md)
+- [Schedule a contact](schedule-contact.md)
orbital Geospatial Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/geospatial-reference-architecture.md
Title: End-to-end geospatial reference architecture
+ Title: Geospatial reference architecture - Azure Orbital
description: Show how to architect end-to-end geospatial data on Azure.
The next architecture involves Azure Orbital and Esri's ArcGIS Image. With this
## COTS/Open-source software geospatial imagery architecture: Azure Space to Analysis Ready Dataset
-When Analysis Ready Datasets are made available through APIs that enable search and query capabilities, like with Microsoft&#39;s Planetary Computer, there's no need to first download the data from a satellite. However, if low lead times are required for imagery, acquiring the data directly from Azure Space is ideal because a satellite operator or mission driven organization can schedule a contact with a satellite via Azure Orbital. The process for going from Level 0 to Level 2 Analysis Ready Dataset varies by the satellite and the imagery products. Multiple tools and intermediate steps are often required. Azure Batch or another compute resource can process the data in a cluster and store the resulting data. The data may go through multiple steps before it's ready for being used in ArcGIS or QGIS or some other geovisualization tool. For example, once the data is in a [Cloud Optimized GeoTIFF](https://www.cogeo.org/) (COG) format, it's served up via a Storage Account or Azure Data Lake and made accessible and queryable via the [STAC API](https://stacspec.org/), which can be deployed on Azure as a service, with AKS among others. Alternatively, the data is published as a Web Mapping Tile Service with GeoServer. Consumers can then access the data in ArcGIS Pro or QGIS or via a web app with Azure Maps or Esri&#39;s mobile and web SDKs.
+When Analysis Ready Datasets are made available through APIs that enable search and query capabilities, like with Microsoft's Planetary Computer, there's no need to first download the data from a satellite. However, if low lead times are required for imagery, acquiring the data directly from Azure Space is ideal because a satellite operator or mission driven organization can schedule a contact with a satellite via Azure Orbital. The process for going from Level 0 to Level 2 Analysis Ready Dataset varies by the satellite and the imagery products. Multiple tools and intermediate steps are often required. Azure Batch or another compute resource can process the data in a cluster and store the resulting data. The data may go through multiple steps before it's ready for being used in ArcGIS or QGIS or some other geovisualization tool. For example, once the data is in a [Cloud Optimized GeoTIFF](https://www.cogeo.org/) (COG) format, it's served up via a Storage Account or Azure Data Lake and made accessible and queryable via the [STAC API](https://stacspec.org/), which can be deployed on Azure as a service, with AKS among others. Alternatively, the data is published as a Web Mapping Tile Service with GeoServer. Consumers can then access the data in ArcGIS Pro or QGIS or via a web app with Azure Maps or Esri's mobile and web SDKs.
:::image type="content" source="media/geospatial-space-ard.png" alt-text="Diagram of Azure Space to Analysis Ready Dataset." lightbox="media/geospatial-space-ard.png":::
When Analysis Ready Datasets are made available through APIs that enable search
- [Azure Batch](https://azure.microsoft.com/services/batch/) allows you to run large-scale parallel and high-performance computing jobs. - [Azure Data Lake Storage](../data-lake-store/data-lake-store-overview.md) is a scalable and secure data lake for high-performance analytics workloads. This service can manage multiple petabytes of information while sustaining hundreds of gigabits of throughput. The data typically comes from multiple, heterogeneous sources and can be structured, semi-structured, or unstructured. - [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/) is a PaaS version of SQL Server and is an intelligent, scalable, relational database service.-- [Azure Database for PostgreSQL](../postgresql/overview.md) is a fully managed relational database service that&#39;s based on the community edition of the open-source [PostgreSQL](https://www.postgresql.org/) database engine.
+- [Azure Database for PostgreSQL](../postgresql/overview.md) is a fully managed relational database service that's based on the community edition of the open-source [PostgreSQL](https://www.postgresql.org/) database engine.
- [PostGIS](https://www.postgis.net/) is an extension for the PostgreSQL database that integrates with GIS servers. PostGIS can run SQL location queries that involve geographic objects. - [Power BI](/power-bi/fundamentals/power-bi-overview) is a collection of software services and apps. You can use Power BI to connect unrelated sources of data and create visuals of them. - The [Azure Maps visual for Power BI](../azure-maps/power-bi-visual-get-started.md) provides a way to enhance maps with spatial data. You can use this visual to show how location data affects business metrics.
But [other solutions also exist for processing and scaling geospatial workloads
- [Vector tiles](https://github.com/mapbox/vector-tile-spec) provide an efficient way to display GIS data on maps. A solution could use PostGIS to dynamically query vector tiles. This approach works well for simple queries and result sets that contain well under 1 million records. But in the following cases, a different approach may be better: - Your queries are computationally expensive.
- - Your data doesn&#39;t change frequently.
- - You&#39;re displaying large data sets.
+ - Your data doesn't change frequently.
+ - You're displaying large data sets.
In these situations, consider using [Tippecanoe](https://github.com/mapbox/tippecanoe) to generate vector tiles. You can run Tippecanoe as part of your data processing flow, either as a container or with [Azure Functions](../azure-functions/functions-overview.md). You can make the resulting tiles available through APIs.
orbital License Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/license-spacecraft.md
Title: License your spacecraft - Azure Orbital
-description: Learn how to license your spacecraft with Orbital.
+description: Learn how to license your spacecraft with Azure Orbital Ground Station.
The process starts by initiating the licensing request via the Azure portal.
1. Issue type: Technical.
1. Subscription: Choose your current subscription.
1. Service: My Service
- 1. Service Type: Azure orbital
+ 1. Service Type: Azure Orbital
1. Problem type: Spacecraft Management and Setup
1. Problem subtype: Spacecraft Registration
1. Click next to Solutions
Based on the details provided in the steps above, our regulatory team will make
Once the determination is made, we'll confirm the cost with you and ask you to authorize before proceeding.
-## Step 4 - Orbital requests the relevant licensing
+## Step 4 - Azure Orbital requests the relevant licensing
Upon authorization, you'll be billed and our regulatory team will seek the relevant licenses to enable your spacecraft with the desired ground stations. This step will take 2 to 6 months to execute.

## Step 5 - Spacecraft is authorized
-Once the licenses are in place, the spacecraft object will be updated by Orbital to represent the licenses held at the specified ground stations. Refer to (to add link to spacecraft concept) to understand how the authorizations are applied.
+Once the licenses are in place, the spacecraft object will be updated by Azure Orbital to represent the licenses held at the specified ground stations. Refer to (to add link to spacecraft concept) to understand how the authorizations are applied.
## FAQ
Q. Are third party ground stations such as KSAT included in this process?
A. No, the process on this page applies to Microsoft sites only. For more information, see (to add link to third party page).

## Next steps
+- [Integrate partner network ground stations](./partner-network-integration.md)
+- [Receive real-time telemetry](receive-real-time-telemetry.md)
orbital Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md
Title: Why use Azure Orbital?
-description: Azure Orbital is a cloud-based Ground Station as a Service that allows you to streamline your operations by ingesting space data directly into Azure.
+description: Azure Orbital is a cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
- Previously updated : 11/16/2021+ Last updated : 07/13/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
Azure Orbital offers two main
## Next steps

-- [Quickstart: Onboard to the Azure Orbital Preview](orbital-preview.md)
-- [Quickstart: Register Spacecraft](register-spacecraft.md)
-- [Quickstart: Configure a Contact Profile](contact-profile.md)
+- [Register Spacecraft](register-spacecraft.md)
+- [Configure a Contact Profile](contact-profile.md)
orbital Partner Network Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/partner-network-integration.md
Title: Integrate partner network ground stations into your Azure Orbital Ground Station as a Service solution
+ Title: Integrate partner network ground stations into your Azure Orbital Ground Station solution
description: Leverage partner network ground station locations through Azure Orbital.
Last updated 07/06/2022
-# Integrate partner network ground stations into your Azure Orbital Ground Station as a Service solution
+# Integrate partner network ground stations into your Azure Orbital Ground Station solution
This article describes how to integrate partner network ground stations.
This article describes how to integrate partner network ground stations.
## Request integration resource information
-1. Email the Azure Orbital Ground Station as a Service (GSaaS) team at **azorbitalpm@microsoft.com** to initiate partner network integration by providing the details below:
+1. Email the Azure Orbital Ground Station (AOGS) team at **azorbitalpm@microsoft.com** to initiate partner network integration by providing the details below:
   - Azure Subscription ID
   - List of partner networks you've contracted with
   - List of ground station locations included in partner contracts
-2. The Azure Orbital GSaaS team will reply to your message with additional requested information, or, the Contact Profile resource parameters that will enable your partner network integration.
-3. Create a contact profile resource with the parameters provided by the Azure Orbital GSaaS team.
+2. The AOGS team will reply to your message with additional requested information, or, the Contact Profile resource parameters that will enable your partner network integration.
+3. Create a contact profile resource with the parameters provided by the AOGS team.
4. Await integration confirmation prior to scheduling Contacts with the newly integrated partner network(s).

> [!NOTE]
-> It is important that the contact profile resource parameters match those provided by the Azure Orbital GSaaS team.
+> It is important that the contact profile resource parameters match those provided by the AOGS team.
## Next steps
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
Last updated 07/12/2022
-# Prepare the network for Azure Orbital GSaaS integration
+# Prepare the network for Azure Orbital Ground Station integration
-The Orbital GSaaS platform interfaces with your resources using VNET injection, which is used in both uplink and downlink directions. This page describes how to ensure your Subnet and Orbital GSaaS objects are configured correctly.
+The Azure Orbital Ground Station platform interfaces with your resources using VNET injection, which is used in both uplink and downlink directions. This page describes how to ensure your subnet and Azure Orbital Ground Station objects are configured correctly.
Ensure the objects comply with the recommendations in this article. Note that these steps don't have to be followed in order.
Here's how to set up the link flows based on direction and TCP or UDP preference.
## Next steps

-- [Quickstart: Register Spacecraft](register-spacecraft.md)
-- [Quickstart: Schedule a contact](schedule-contact.md)
+- [Register Spacecraft](register-spacecraft.md)
+- [Schedule a contact](schedule-contact.md)
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Title: Register Spacecraft on Azure Orbital Earth Observation service
-description: 'Quickstart: Register Spacecraft'
+description: Learn how to register a spacecraft.
Previously updated : 06/03/2022 Last updated : 07/13/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
Sign in to the [Azure portal](https://aka.ms/orbital/portal).
## Next steps

-- [Quickstart: Configure a contact profile](contact-profile.md)
-- [Quickstart: Schedule a contact](schedule-contact.md)
+- [Configure a contact profile](contact-profile.md)
+- [Schedule a contact](schedule-contact.md)
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
Title: 'Collect and process Aqua satellite payload'
-description: 'An end-to-end walk-through of using the Azure Orbital Ground Station as-a-Service (GSaaS) to capture and process satellite imagery'
+ Title: Collect and process Aqua satellite payload - Azure Orbital
+description: An end-to-end walk-through of using the Azure Orbital Ground Station (AOGS) to capture and process satellite imagery.
- Previously updated : 07/06/2022+ Last updated : 07/13/2022
-# Collect and process Aqua satellite payload using Azure Orbital Ground Station as-a-Service (GSaaS)
+# Tutorial: Collect and process Aqua satellite payload using Azure Orbital Ground Station (AOGS)
-This topic is a comprehensive walk-through showing how to use the [Azure Orbital Ground Station as-a-Service (GSaaS)](https://azure.microsoft.com/services/orbital/) to capture and process satellite imagery. It introduces the Azure Orbital GSaaS and its core concepts and shows how to schedule contacts. The topic also steps through an example in which we collect and process NASA Aqua satellite data in an Azure virtual machine (VM) using NASA-provided tools.
+This article is a comprehensive walk-through showing how to use the [Azure Orbital Ground Station (AOGS)](https://azure.microsoft.com/services/orbital/) to capture and process satellite imagery. It introduces the AOGS and its core concepts and shows how to schedule contacts. The article also steps through an example in which we collect and process NASA Aqua satellite data in an Azure virtual machine (VM) using NASA-provided tools.
-Aqua is a polar-orbiting spacecraft launched by NASA in 2002. Data from all science instruments aboard Aqua is downlinked to the Earth using direct broadcast over the X-band in near real-time. More information about Aqua can be found on the [Aqua Project Science](https://aqua.nasa.gov/) website. With Azure Orbital Ground Station as-a-Service (GSaaS), we can capture the Aqua broadcast when the satellite is within line of sight of a ground station.
+Aqua is a polar-orbiting spacecraft launched by NASA in 2002. Data from all science instruments aboard Aqua is downlinked to the Earth using direct broadcast over the X-band in near real-time. More information about Aqua can be found on the [Aqua Project Science](https://aqua.nasa.gov/) website. With AOGS, we can capture the Aqua broadcast when the satellite is within line of sight of a ground station.
A *contact* is time reserved at an orbital ground station to communicate with a satellite. During the contact, the ground station orients its antenna towards Aqua and captures the broadcast payload. The captured data is sent to an Azure VM as a data stream that is processed using the [RT-STPS](http://directreadout.sci.gsfc.nasa.gov/index.cfm?section=technology&page=NISGS&subpage=NISFES&sub2page=RT-STPS&sub3Page=overview) (Real-Time Software Telemetry Processing System) provided by [Direct Readout Laboratory](http://directreadout.sci.gsfc.nasa.gov/) at NASA to generate a level 0 product. Further processing of level 0 data is done using IPOPP (International Planetary Observation Processing Package) tool, also provided by DRL.
-Processing the Aqua data stream involves the following steps in order:
+In this tutorial, you'll follow these steps to process the Aqua data stream:
-1. [Prerequisites](#step-1-prerequisites).
-2. [Process RAW data using RT-STPS](#step-2-process-raw-data-using-rt-stps).
-3. [Prepare a virtual machine (processor-vm) to process higher level products](#step-3-prepare-a-virtual-machine-processor-vm-to-create-higher-level-products).
-4. [Create higher level products using IPOPP](#step-4-create-higher-level-products-using-ipopp).
+> [!div class="checklist"]
+> * [Prerequisites](#step-1-prerequisites).
+> * [Process RAW data using RT-STPS](#step-2-process-raw-data-using-rt-stps).
+> * [Prepare a virtual machine (processor-vm) to process higher level products](#step-3-prepare-a-virtual-machine-processor-vm-to-create-higher-level-products).
+> * [Create higher level products using IPOPP](#step-4-create-higher-level-products-using-ipopp).
-Optional setup for capturing the ground station telemetry are included in the [Appendix](#appendix)
+Optional setup steps for capturing the ground station telemetry are included in the [Appendix](#appendix).
## Step 1: Prerequisites
IPOPP will produce output products in the following directories:
### Capture ground station telemetry
-An Azure Orbital Ground station emits telemetry events that can be used to analyze the ground station operation for the duration of the contact. You can configure your contact profile to send such telemetry events to Azure Event Hubs. The steps below describe how to create an Event Hub and grant Azure Orbital access to send events to it.
+An Azure Orbital ground station emits telemetry events that can be used to analyze the ground station operation during the contact. You can configure your contact profile to send such telemetry events to Azure Event Hubs. The steps below describe how to create an Event Hubs instance and grant Azure Orbital access to send events to it.
1. In your subscription, go to **Resource Provider** settings and register Microsoft.Orbital as a provider.  
-2. [Create an Azure Event Hub](../event-hubs/event-hubs-create.md) in your subscription.
+2. [Create Azure Event Hubs](../event-hubs/event-hubs-create.md) in your subscription.
3. From the left menu, select **Access Control (IAM)**. Under **Grant Access to this Resource**, select **Add Role Assignment**.
4. Select **Azure Event Hubs Data Sender**.
5. Assign access to '**User, group, or service principal**'.
Congrats! Orbital can now communicate with your hub. 
### Enable telemetry for a contact profile in the Azure portal

1. Go to the **Contact Profile** resource, and click **Create**.
-2. Choose a namespace using the **Event Hub Namespace** dropdown. 
-3. Choose an instance using the **Event Hub Instance** dropdown that appears after namespace selection. 
+2. Choose a namespace using the **Event Hubs Namespace** dropdown. 
+3. Choose an instance using the **Event Hubs Instance** dropdown that appears after namespace selection. 
### Test telemetry on a contact

1. Schedule a contact using the Contact Profile that you previously configured for Telemetry.
-2. Once the contact begins, you should begin to see data in your Event Hub soon after. 
+2. Once the contact begins, you should see data arrive in your Event Hubs instance soon after.
-To verify that events are being received in your Event Hub, you can check the graphs present on the Event Hub namespace **Overview** page. The graphs show data across all Event Hub instances within a namespace. You can navigate to the Overview page of a specific instance to see the graphs for that instance. 
+To verify that events are being received in your Event Hubs instance, you can check the graphs on the Event Hubs namespace **Overview** page. The graphs show data across all Event Hubs instances within a namespace. You can navigate to the Overview page of a specific instance to see the graphs for that instance.
-You can enable an Event Hub's [Capture feature](../event-hubs/event-hubs-capture-enable-through-portal.md) that will automatically deliver the telemetry data to an Azure Blob storage account of your choosing. 
+You can enable the Event Hubs [Capture feature](../event-hubs/event-hubs-capture-enable-through-portal.md) to automatically deliver the telemetry data to an Azure Blob storage account of your choosing.
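If you script this step instead of using the portal, Capture is configured through the event hub's `captureDescription` block in an ARM template. The following is a sketch under that assumption, with the storage account ID and container name as placeholders:

```json
{
  "properties": {
    "captureDescription": {
      "enabled": true,
      "encoding": "Avro",
      "intervalInSeconds": 300,
      "sizeLimitInBytes": 314572800,
      "destination": {
        "name": "EventHubArchive.AzureBlockBlob",
        "properties": {
          "storageAccountResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
          "blobContainer": "telemetry",
          "archiveNameFormat": "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}"
        }
      }
    }
  }
}
```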
Once enabled, you can check your container and view or download the data.   
orbital Schedule Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/schedule-contact.md
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Next steps

-- [Quickstart: Register Spacecraft](register-spacecraft.md)
-- [Quickstart: Configure a contact profile](contact-profile.md)
+- [Register Spacecraft](register-spacecraft.md)
+- [Configure a contact profile](contact-profile.md)
orbital Space Partner Program Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/space-partner-program-overview.md
Title: What is the Space Partner Community?
+ Title: What is the Azure Orbital Space Partner Community?
description: Overview of the Azure Space Partner Community - Previously updated : 3/21/2022+ Last updated : 07/13/2022 # Customer intent: Educate potential partners on how to engage with the Azure Space partner Communities.
orbital Spacecraft Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/spacecraft-object.md
Previously updated : 07/07/2022 Last updated : 07/13/2022 #Customer intent: As a satellite operator or user, I want to understand what the spacecraft object does so I can manage my mission. # Spacecraft object
-Learn about how you can represent your spacecraft details in Azure Orbital GSaaS.
+Learn about how you can represent your spacecraft details in Azure Orbital Ground Station (AOGS).
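As a rough, hedged sketch of how those details might be expressed in a `Microsoft.Orbital/spacecrafts` request body (property names assumed from the preview API; the TLE values are placeholders to fill in):

```json
{
  "location": "westus2",
  "properties": {
    "noradId": "27424",
    "titleLine": "AQUA",
    "tleLine1": "<TLE line 1>",
    "tleLine2": "<TLE line 2>"
  }
}
```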
## Spacecraft details
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Previously updated : 06/29/2022 Last updated : 07/14/2022 # Supported PostgreSQL major versions in Azure Database for PostgreSQL - Flexible Server
Azure Database for PostgreSQL - Flexible Server currently supports the following
The current minor release is **14.3**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/14/static/release-14-3.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
->[!NOTE]
-> If you are deploying Postgres 14 in a Private Access (VNET), in some cases, your deployment may fail. This will be addressed shortly. Meanwhile, to explore Postgres 14, consider deploying in Public access.
-
## PostgreSQL version 13

The current minor release is **13.7**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/13/static/release-13-7.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
Use the VM that you created earlier to connect to the web app across the private
8. In the bastion connection to **myVM**, open the web browser.
-9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
+9. Enter the URL of your web app, `https://mywebapp1979.azurewebsites.net`.
If your web app hasn't been deployed, you'll get the following default web app page:
private-link Troubleshoot Private Link Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/troubleshoot-private-link-connectivity.md
Title: Troubleshoot Azure Private Link connectivity problems
+ Title: Troubleshoot Azure Private Link Service connectivity problems
description: Step-by-step guidance to diagnose private link connectivity documentationcenter: na
-# Troubleshoot Azure Private Link connectivity problems
+# Troubleshoot Azure Private Link Service connectivity problems
This article provides step-by-step guidance to validate and diagnose connectivity for your Azure Private Link setup.
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Microsoft Purview's solutions in the governance portal provide a unified data go
- Enable data curators to manage and secure your data estate.
- Empower data consumers to find valuable, trustworthy data.
+ Chart showing the high-level architecture of Microsoft Purview. Multi-cloud and on-premises sources flow into Microsoft Purview, and Microsoft Purview's apps (Data Catalog, Map, Data Estate Insights, Policy, and Data Sharing) allow data consumers and data curators to view and manage metadata, share data, and protect assets. This metadata is also being ported to external analytics services from Microsoft Purview for more processing.
>[!TIP]
> Looking to govern your data in Microsoft 365 by keeping what you need and deleting what you don't? Use [Microsoft Purview Data Lifecycle Management](/microsoft-365/compliance/data-lifecycle-management).
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Previously updated : 06/10/2022 Last updated : 07/14/2022 # Create a skillset in Azure Cognitive Search
Last updated 06/10/2022
A skillset defines the operations that extract and enrich data to make it searchable. It executes after text and images are extracted, and after [field mappings](search-indexer-field-mappings.md) are processed.
-This article explains how to create a skillset with the [Create Skillset (REST API)](/rest/api/searchservice/create-skillset). Rules for skillset definition include:
+This article explains how to create a skillset with the [Create Skillset (REST API)](/rest/api/searchservice/create-skillset), but the same concepts and steps apply to other programming languages.
-+ A skillset is a named top-level resource, which means it can be created once and referenced by many indexers.
-+ A skillset must contain at least one skill.
+Rules for skillset definition include:
+
++ A skillset must have a unique name within the skillset collection. When you define a skillset, you're creating a top-level resource that can be used by any indexer.
++ A skillset must contain at least one skill. A typical skillset has three to five. The maximum is 30.
+ A skillset can repeat skills of the same type (for example, multiple Shaper skills).
++ A skillset supports chained operations, looping, and branching.

Indexers drive skillset execution. You'll need an [indexer](search-howto-create-indexers.md), [data source](search-data-sources-gallery.md), and [index](search-what-is-an-index.md) before you can test your skillset.
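As a minimal sketch (resource names and the key are placeholders, and the skill shown is just one arbitrary choice), the body sent to Create Skillset has this overall shape; its main properties are described next:

```json
{
  "name": "example-skillset",
  "description": "Minimal skillset sketch with a single built-in skill",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
      "context": "/document",
      "inputs": [
        { "name": "text", "source": "/document/content" }
      ],
      "outputs": [
        { "name": "keyPhrases", "targetName": "keyPhrases" }
      ]
    }
  ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
    "key": "<cognitive-services-key>"
  }
}
```

The optional `knowledgeStore` and `encryptionKey` properties would sit alongside these at the top level.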
After the name and description, a skillset has four main properties:
+ `knowledgeStore` (optional) specifies an Azure Storage account and settings for projecting skillset output into tables, blobs, and files in Azure Storage. Remove this section if you don't need it, otherwise [specify a knowledge store](knowledge-store-create-rest.md).
-+ `encryptionKey` (optional) specifies an Azure Key Vault and [customer-managed keys](search-security-manage-encryption-keys.md) used to encrypt sensitive content in a skillset definition. Remove this property if you aren't using customer-managed encryption.
++ `encryptionKey` (optional) specifies an Azure Key Vault and [customer-managed keys](search-security-manage-encryption-keys.md) used to encrypt sensitive content (descriptions, connection strings, keys) in a skillset definition. Remove this property if you aren't using customer-managed encryption.

## Add skills

Inside the skillset definition, the skills array specifies which skills to execute. Three to five skills are common, but you can add as many skills as necessary, subject to [service limits](search-limits-quotas-capacity.md#indexer-limits).
-The end result of an enrichment pipeline is textual content in either a search index or knowledge store. For this reason, most skills either create text from images (OCR text, captions, tags), or analyze existing text to create new information (entities, key phrases, sentiment). Skills that operate independently are processed in parallel. Skills that depend on each other specify the output of one skill (such as key phrases) as the input of second skill (such as text translation). The search service determines the order of skill execution.
+The end result of an enrichment pipeline is textual content in either a search index or a knowledge store. For this reason, most skills either create text from images (OCR text, captions, tags), or analyze existing text to create new information (entities, key phrases, sentiment). Skills that operate independently are processed in parallel. Skills that depend on each other specify the output of one skill (such as key phrases) as the input of second skill (such as text translation). The search service determines the order of skill execution and the execution environment.
All skills have a type, context, inputs, and outputs. A skill might optionally have a name and description. The following example shows two unrelated [built-in skills](cognitive-search-predefined-skills.md) so that you can compare the basic structure.
Each skill is unique in terms of its input values and the parameters that it tak
## Set skill context
-Each skill has a [context property](cognitive-search-working-with-skillsets.md#context) that determines the level at which operations take place. If the "context" property isn't explicitly set, the default is `"/document"`, where the context is the whole document (the skill is called once per document).
+Each skill has a [context property](cognitive-search-working-with-skillsets.md#skill-context) that determines the level at which operations take place. If the "context" property isn't explicitly set, the default is `"/document"`, where the context is the whole document (the skill is called once per document).
```json "skills":[
Context also determines where outputs are produced in the [enrichment tree](cogn
Skills read from and write to an enriched document. Skill inputs specify the origin of the incoming data. It's often the root node of the enriched document. For blobs, a typical skill input is the document's content property.
-[Skill reference documentation](cognitive-search-predefined-skills.md) for each skill describes the inputs it can produce. Each input has a "name" and a "source". The following example is from the Entity Recognition skill:
+[Skill reference documentation](cognitive-search-predefined-skills.md) for each skill describes the inputs it can consume. Each input has a "name" that identifies a specific input, and a "source" that specifies the location of the data in the enriched document. The following example is from the Entity Recognition skill:
```json "inputs": [
Although skill output can be optionally cached for reuse purposes, it's usually
+ To send output to a knowledge store, [create a projection](knowledge-store-projection-overview.md).
-+ To send output to a downstream skill, reference the output by its node name, such as `"/document/organization"`, in the downstream skill's input source property.
++ To send output to a downstream skill, reference the output by its node name, such as `"/document/organization"`, in the downstream skill's input source property. See [Reference an annotation](cognitive-search-concept-annotations-syntax.md) for examples.

## Tips for a first skillset
search Cognitive Search Working With Skillsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-working-with-skillsets.md
Previously updated : 08/10/2021 Last updated : 07/13/2022 # Skillset concepts in Azure Cognitive Search
-This article is for developers who need a deeper understanding of skillset concepts and composition, and assumes familiarity with the high-level concepts and workflows of [AI enrichment](cognitive-search-concept-intro.md).
+This article is for developers who need a deeper understanding of skillset concepts and composition, and assumes familiarity with the high-level concepts of [AI enrichment](cognitive-search-concept-intro.md).
-A skillset is a reusable resource in Azure Cognitive Search that is attached to [an indexer](search-indexer-overview.md). It contains one or more skills, which are atomic operations that call built-in AI or external custom processing over documents retrieved from an external data source.
+A skillset is a reusable resource in Azure Cognitive Search that's attached to [an indexer](search-indexer-overview.md). It contains one or more skills that call built-in AI or external custom processing over documents retrieved from an external data source.
-From the onset of skillset processing to its conclusion, skills read and write to an enriched document. An enriched document is initially just the raw content extracted from a data source, but with each skill execution, it gains structure and substance. Ultimately, nodes from an enriched document are then [mapped to fields](cognitive-search-output-field-mapping.md) in a search index, or [mapped to projections](knowledge-store-projection-overview.md) in a knowledge store, so that the content can be routed appropriately, where it will be queried or consumed by other apps.
+The following diagram illustrates the basic data flow of skillset execution.
+
+From the onset of skillset processing to its conclusion, skills read from and write to an [*enriched document*](#enrichment-tree). Initially, an enriched document is just the raw content extracted from a data source (articulated as the `"/document"` root node). With each skill execution, the enriched document gains structure and substance as each skill writes its output as nodes in the graph.
+
+After skillset execution is done, the output of an enriched document finds its way into an index through *output field mappings*. Any raw content that you want transferred intact, from source to an index, is defined through *field mappings*.
+
+To configure enrichment, you'll specify settings in a skillset and indexer.
## Skillset definition
-A skillset is an array of one or more *skills* that represent an atomic enrichment operation, like translating text, extracting key phrases, or performing optical character recognition from an image file. Skills can be the [built-in skills](cognitive-search-predefined-skills.md) from Microsoft, or [custom skills](cognitive-search-create-custom-skill-example.md) that contain models or processing logic that you provide. It produces enriched documents that are either consumed during indexing or projected to a knowledge store.
+A skillset is an array of one or more *skills* that perform an enrichment, such as translating text or OCR on an image file. Skills can be the [built-in skills](cognitive-search-predefined-skills.md) from Microsoft, or [custom skills](cognitive-search-create-custom-skill-example.md) for processing logic that you host externally. A skillset produces enriched documents that are either consumed during indexing or projected to a knowledge store.
+
+Skills have a context, inputs, and outputs:
+
++ [Context](#skill-context) refers to the scope of the operation, which could be once per document or once for each item in a collection.
+
++ Inputs originate from nodes in an enriched document, where a "source" and "name" identify a given node.
+
++ Output is sent back to the enriched document as a new node. Values are the node "name" and node content. If a node name is duplicated, you can set a target name for disambiguation.
+
+### Skill context
+
+Each skill has a context, which can be the entire document (`/document`) or a node lower in the tree (`/document/countries/*`). A context determines:
+
++ The number of times the skill executes, over a single value (once per field, per document), or for context values of type collection, where adding an `/*` results in skill invocation, once for each instance in the collection.
+
++ Output declaration, or where in the enrichment tree the skill outputs are added. Outputs are always added to the tree as children of the context node.
+
++ Shape of the inputs. For multi-level collections, setting the context to the parent collection will affect the shape of the input for the skill. For example, if you have an enrichment tree with a list of countries/regions, each enriched with a list of states containing a list of ZIP codes, how you set the context will determine how the input is interpreted.
+
+ |Context|Input|Shape of Input|Skill Invocation|
+ |-|--|--|-|
+ |`/document/countries/*` |`/document/countries/*/states/*/zipcodes/*` |A list of all ZIP codes in the country/region |Once per country/region |
+ |`/document/countries/*/states/*` |`/document/countries/*/states/*/zipcodes/*` |A list of ZIP codes in the state | Once per combination of country/region and state|
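To make the table concrete, here's a sketch of the relevant slice of an enriched document (all names and values hypothetical). A skill whose context is `/document/countries/*/states/*` would be invoked three times here, once per state node; with context `/document/countries/*`, it would be invoked twice, each time receiving every ZIP code in that country:

```json
{
  "countries": [
    {
      "name": "CountryA",
      "states": [
        { "name": "State1", "zipcodes": [ "11111", "11112" ] },
        { "name": "State2", "zipcodes": [ "22221" ] }
      ]
    },
    {
      "name": "CountryB",
      "states": [
        { "name": "State3", "zipcodes": [ "33331" ] }
      ]
    }
  ]
}
```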
-Skills have a type, a context, and inputs and outputs that are often chained together. The following example demonstrates two [built-in skills](cognitive-search-predefined-skills.md) that work together and introduces some of the terminology of skillset definition.
+### Skill dependencies
-+ Skill #1 is a [Text Split skill](cognitive-search-skill-textsplit.md) that accepts the contents of the "reviews_text" source field as input, and splits that content into "pages" of 5000 characters as output. Splitting large text into smaller chunks can produce better outcomes during natural language processing.
+Skills can execute independently and in parallel, or sequentially if you feed the output of one skill into another skill. The following example demonstrates two [built-in skills](cognitive-search-predefined-skills.md) that execute in sequence:
+
++ Skill #1 is a [Text Split skill](cognitive-search-skill-textsplit.md) that accepts the contents of the "reviews_text" source field as input, and splits that content into "pages" of 5000 characters as output. Splitting large text into smaller chunks can produce better outcomes for skills like sentiment detection.
+ Skill #2 is a [Sentiment Detection skill](cognitive-search-skill-sentiment.md) that accepts "pages" as input, and produces a new field called "Sentiment" as output that contains the results of sentiment analysis.
+Notice how the output of the first skill ("pages") is used in sentiment analysis, where "/document/reviews_text/pages/*" is both the context and input. For more information about path formulation, see [How to reference annotations](cognitive-search-concept-annotations-syntax.md).
+ ```json { "skills": [
Skills have a type, a context, and inputs and outputs that are often chained tog
], "outputs": [ {
- "name": "score",
- "targetName": "Sentiment"
+ "name": "sentiment",
+ "targetName": "sentiment"
+ },
+ {
+ "name": "confidenceScores",
+ "targetName": "confidenceScores"
+ },
+ {
+ "name": "sentences",
+ "targetName": "sentences"
} ] }
-. . .
+ . . .
+ ]
} ```
-Key points to notice about the above example are that inputs and outputs are name-value pairs, you can match the outputs of one skill to the inputs of downstream skills, and that all skills have context that determines where in the enrichment tree the processing occurs.
-
-For more detail about how inputs and outputs are formulated, see [How to reference annotations](cognitive-search-concept-annotations-syntax.md).
-
## Enrichment tree
-An enriched document is a temporary tree-like data structure created during skillset execution that collects all of the changes introduced through skills, and represents them in a hierarchy of addressable nodes. Nodes also include any un-enriched fields that are passed in verbatim from the external data source. An enriched document exists for the duration of skillset execution, but can be cached or persisted to a knowledge store.
+An enriched document is a temporary, tree-like data structure created during skillset execution that collects all of the changes introduced through skills. Collectively, enrichments are represented as a hierarchy of addressable nodes. Nodes also include any unenriched fields that are passed in verbatim from the external data source.
+
+An enriched document exists for the duration of skillset execution, but can be [cached](cognitive-search-incremental-indexing-conceptual.md) or sent to a [knowledge store](knowledge-store-concept-intro.md).
Initially, an enriched document is simply the content extracted from a data source during [*document cracking*](search-indexer-overview.md#document-cracking), where text and images are extracted from the source and made available for language or image analysis.
-The initial content is the *root node* (`document\content`) and is usually a whole document or a normalized image that is extracted from a data source during document cracking. How it's articulated in an enrichment tree varies for each data source type. The following table shows the state of a document entering into the enrichment pipeline for several supported data sources:
+The initial content is metadata and the *root node* (`document\content`). The root node is usually a whole document or a normalized image that is extracted from a data source during document cracking. How it's articulated in an enrichment tree varies for each data source type. The following table shows the state of a document entering into the enrichment pipeline for several supported data sources:
|Data Source\Parsing Mode|Default|JSON, JSON Lines & CSV|
||||
The initial content is the *root node* (`document\content`) and is usually a who
|Azure SQL|/document/{column1}<br>/document/{column2}<br>…|N/A |
|Cosmos DB|/document/{key1}<br>/document/{key2}<br>…|N/A|
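For example, a blob indexed with the default parsing mode might enter the pipeline looking like the sketch below (values hypothetical), with the extracted text at `/document/content` alongside standard blob metadata fields:

```json
{
  "metadata_storage_name": "review.pdf",
  "metadata_storage_path": "<base64-encoded blob URI>",
  "content": "Full text extracted from review.pdf during document cracking."
}
```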
-As skills execute, output is added to the enrichment tree as new nodes. These nodes can then be used as inputs for downstream skills, and will eventually be projected into a knowledge store, or mapped to index fields. Skills that create content, such as translated strings, will write their output to the enriched document. Likewise, skills that consume the output of upstream skills will read from the enriched document to get the necessary inputs.
+As skills execute, output is added to the enrichment tree as new nodes. If skill execution is over the entire document, nodes are added at the first level under the root.
+
+Nodes can be used as inputs for downstream skills. For example, skills that create content, such as translated strings, could become input for skills that recognize entities or extract key phrases.
:::image type="content" source="media/cognitive-search-working-with-skillsets/skillset-def-enrichment-tree.png" alt-text="Skills read and write from enrichment tree" border="false":::
-An enrichment tree consists of extracted content and metadata pulled from the source, plus any new nodes that are created by a skill, such as `translated_text` from the [Text Translation skill](cognitive-search-skill-text-translation.md), `locations` from [Entity Recognition skill](cognitive-search-skill-entity-recognition-v3.md), or `keyPhrases` from the [Key Phrase Extraction skill](cognitive-search-skill-keyphrases.md). Although you can [visualize and work with an enrichment tree](cognitive-search-debug-session.md) through a visual editor, it's mostly an internal structure.
+Although you can visualize and work with an enrichment tree through the [Debug Sessions visual editor](cognitive-search-debug-session.md), it's mostly an internal structure.
-Enrichments aren't mutable: once created, nodes cannot be edited. As your skillsets get more complex, so will your enrichment tree, but not all nodes in the enrichment tree need to make it to the index or the knowledge store. You can selectively persist just a subset of the enrichment outputs so that you are only keeping what you intend to use.
+Enrichments are immutable: once created, nodes can't be edited. As your skillsets get more complex, so will your enrichment tree, but not all nodes in the enrichment tree need to make it to the index or the knowledge store.
-Because a skill's inputs and outputs are reading from and writing to enrichment trees, one of tasks you'll complete as part of skillset design is creating [output field mappings](cognitive-search-output-field-mapping.md) that move content out of the enrichment tree and into a field in a search index. Likewise, if you are creating a knowledge store, you can map outputs into [shapes](knowledge-store-projection-shape.md) that are assigned to projections.
+You can selectively persist just a subset of the enrichment outputs so that you're only keeping what you intend to use. The [output field mappings](cognitive-search-output-field-mapping.md) in your indexer definition will determine what content actually gets ingested in the search index. Likewise, if you're creating a knowledge store, you can map outputs into [shapes](knowledge-store-projection-shape.md) that are assigned to projections.
> [!NOTE]
-> The enrichment tree format enables the enrichment pipeline to attach metadata to even primitive data types. The metadata will not be a valid JSON object, but can be projected into a valid JSON format in projection definitions in a knowledge store. For more information, see [Shaper skill](cognitive-search-skill-shaper.md).
+> The enrichment tree format enables the enrichment pipeline to attach metadata to even primitive data types. The metadata won't be a valid JSON object, but can be projected into a valid JSON format in projection definitions in a knowledge store. For more information, see [Shaper skill](cognitive-search-skill-shaper.md).
-## Context
+## Indexer definition
-Each skill has a context, which can be the entire document (`/document`) or a node lower in the tree (`/document/countries/*`). A context determines:
+An indexer has properties and parameters used to configure indexer execution. Among those properties are mappings that set the data path to fields in a search index.
-+ The number of times the skill executes, over a single value (once per field, per document), or for context values of type collection, where adding an `/*` results in skill invocation, once for each instance in the collection.
-+ Output declaration, or where in the enrichment tree the skill outputs are added. Outputs are always added to the tree as children of the context node.
+There are two sets of mappings:
-+ Shape of the inputs. For multi-level collections, setting the context to the parent collection will affect the shape of the input for the skill. For example if you have an enrichment tree with a list of countries/regions, each enriched with a list of states containing a list of ZIP codes, how you set the context will determine how the input is interpreted.
++ ["fieldMappings"](search-indexer-field-mappings.md) map a source field to a search field.+++ ["outputFieldMappings"](cognitive-search-output-field-mapping.md) map a node in an enriched document to a search field.
-|Context|Input|Shape of Input|Skill Invocation|
-|-|--|--|-|
-|`/document/countries/*` |`/document/countries/*/states/*/zipcodes/*` |A list of all ZIP codes in the country/region |Once per country/region |
-|`/document/countries/*/states/*` |`/document/countries/*/states/*/zipcodes/*` |A list of ZIP codes in the state | Once per combination of country/region and state|
+The "sourceFieldName" property specifies either a field in your data source or a node in an enrichment tree. The "targetFieldName" property specifies the search field in an index that receives the content.
## Enrichment example
Conceptually, the initial enrichment tree looks as follows:
![enrichment tree after document cracking](media/cognitive-search-working-with-skillsets/enrichment-tree-doc-cracking.png "Enrichment tree after document cracking and before skill execution")
-The root node for all enrichments is `"/document"`. When working with blob indexers, the `"/document"` node will have child nodes of `"/document/content"` and `"/document/normalized_images"`. When working with CSV data, as we are in this example, the column names will map to nodes beneath `"/document"`.
+The root node for all enrichments is `"/document"`. When you're working with blob indexers, the `"/document"` node will have child nodes of `"/document/content"` and `"/document/normalized_images"`. When the data is CSV, as in this example, the column names will map to nodes beneath `"/document"`.
### Skill #1: Split skill
A text split skill is typically first in a skillset.
} ```
-With the skill context of `"/document/reviews_text"`, the split skill executes once for the `reviews_text`. The skill output is a list where the `reviews_text` is chunked into 5000 character segments. The output from the split skill is named `pages` and it is added to the enrichment tree. The `targetName` feature allows you to rename a skill output before being added to the enrichment tree.
+With the skill context of `"/document/reviews_text"`, the split skill executes once for the `reviews_text`. The skill output is a list where the `reviews_text` is chunked into 5000 character segments. The output from the split skill is named `pages` and it's added to the enrichment tree. The `targetName` feature allows you to rename a skill output before being added to the enrichment tree.
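The skill definition itself is truncated in the excerpt above. As a sketch, assuming the standard Text Split skill schema, it would look something like the following; `targetName` renames the skill's `textItems` output to `pages`.

```json
{
    "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
    "name": "#1",
    "description": "Split reviews_text into pages of 5000 characters",
    "context": "/document/reviews_text",
    "textSplitMode": "pages",
    "maximumPageLength": 5000,
    "inputs": [
        { "name": "text", "source": "/document/reviews_text" }
    ],
    "outputs": [
        { "name": "textItems", "targetName": "pages" }
    ]
}
```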
The enrichment tree now has a new node placed under the context of the skill. This node is available to any skill, projection, or output field mapping.

![enrichment tree after skill #1](media/cognitive-search-working-with-skillsets/enrichment-tree-skill1.png "Enrichment tree after skill #1 executes")
-To access any of the enrichments added to a node by a skill, the full path for the enrichment is needed. For example, if you want to use the text from the ```pages``` node as an input to another skill, you will need to specify it as ```"/document/reviews_text/pages/*"```. For more information about paths, see [Reference annotations](cognitive-search-concept-annotations-syntax.md).
+To access any of the enrichments added to a node by a skill, the full path for the enrichment is needed. For example, if you want to use the text from the ```pages``` node as an input to another skill, you'll need to specify it as ```"/document/reviews_text/pages/*"```. For more information about paths, see [Reference annotations](cognitive-search-concept-annotations-syntax.md).
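For example, a key phrase skill that processes each page individually might reference the node as in this hypothetical fragment; the `languageCode` input assumes a language detection skill elsewhere in the skillset writes to `/document/language`.

```json
{
    "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
    "context": "/document/reviews_text/pages/*",
    "inputs": [
        { "name": "text", "source": "/document/reviews_text/pages/*" },
        { "name": "languageCode", "source": "/document/language" }
    ],
    "outputs": [
        { "name": "keyPhrases", "targetName": "keyPhrases" }
    ]
}
```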
### Skill #2 Language detection

Hotel review documents include customer feedback expressed in multiple languages. The language detection skill determines which language is used. The result will then be passed to key phrase extraction and sentiment detection (not shown), taking language into consideration when detecting sentiment and phrases.
-While the language detection skill is the third (skill #3) skill defined in the skillset, it is the next skill to execute. Since it is not blocked by requiring any inputs, it will execute in parallel with the previous skill. Like the split skill that preceded it, the language detection skill is also invoked once for each document. The enrichment tree now has a new node for language.
+While the language detection skill is the third (skill #3) skill defined in the skillset, it's the next skill to execute. It doesn't require any inputs so it executes in parallel with the previous skill. Like the split skill that preceded it, the language detection skill is also invoked once for each document. The enrichment tree now has a new node for language.
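A minimal language detection skill might be declared as follows (a sketch; renaming the `languageCode` output to `language` matches the node shown in the tree):

```json
{
    "@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
    "name": "#3",
    "description": "Detect the language of each review",
    "context": "/document",
    "inputs": [
        { "name": "text", "source": "/document/reviews_text" }
    ],
    "outputs": [
        { "name": "languageCode", "targetName": "language" }
    ]
}
```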
![enrichment tree after skill #2](media/cognitive-search-working-with-skillsets/enrichment-tree-skill2.png "Enrichment tree after skill #2 executes")
You should now be able to look at the rest of the skills in the skillset and vis
![enrichment tree after all skills](media/cognitive-search-working-with-skillsets/enrichment-tree-final.png "Enrichment tree after all skills")
-The colors of the connectors in the tree above indicate that the enrichments were created by different skills and the nodes will need to be addressed individually and will not be part of the object returned when selecting the parent node.
+The colors of the connectors in the tree above indicate that the enrichments were created by different skills and the nodes will need to be addressed individually and won't be part of the object returned when selecting the parent node.
### Skill #5 Shaper skill

If output includes a [knowledge store](knowledge-store-concept-intro.md), add a [Shaper skill](cognitive-search-skill-shaper.md) as a last step. The Shaper skill creates data shapes out of nodes in an enrichment tree. For example, you might want to consolidate multiple nodes into a single shape. You can then project this shape as a table (nodes become the columns in a table), passing the shape by name to a table projection.
-The Shaper skill is easy to work with because it focuses shaping under one skill. Alternatively, you can opt for in-line shaping within individual projections. The Shaper Skill does not add or detract from an enrichment tree, so it's not visualized. Instead, you can think of a Shaper skill as the means by which you re-articulate the enrichment tree you already have. Conceptually, this is similar to creating views out of tables in a database.
+The Shaper skill is easy to work with because it focuses shaping under one skill. Alternatively, you can opt for in-line shaping within individual projections. The Shaper Skill doesn't add or detract from an enrichment tree, so it's not visualized. Instead, you can think of a Shaper skill as the means by which you rearticulate the enrichment tree you already have. Conceptually, this is similar to creating views out of tables in a database.
```json {
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
Previously updated : 02/28/2022 Last updated : 07/12/2022 # Index data from Azure Cosmos DB using the Gremlin API
Last updated 02/28/2022
> [!IMPORTANT]
> The Gremlin API indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB and makes it searchable in Azure Cognitive Search.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB using the Gremlin API.
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [Gremlin API](../cosmos-db/choose-api.md#gremlin-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
Previously updated : 06/10/2022 Last updated : 07/12/2022 # Index data from Azure Cosmos DB using the MongoDB API
Last updated 06/10/2022
> [!IMPORTANT]
> MongoDB API support is currently in public preview under [supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB and makes it searchable in Azure Cognitive Search.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB using the MongoDB API.
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [MongoDB API](../cosmos-db/choose-api.md#api-for-mongodb). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
Because terminology can be confusing, it's worth noting that [Cosmos DB indexing
+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
+## Limitations
+
+These are the limitations of this feature:
+
++ Custom queries aren't supported for specifying the data set.
+
++ The column name `_ts` is a reserved word. If you need this field, consider alternative solutions for populating an index. You could use the [push API](search-what-is-data-import.md). Or, you could use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) with an Azure Cognitive Search index as the sink.
+

## Define the data source

The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
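For example, a MongoDB API data source definition might look like the following sketch. The account, database, and collection names are placeholders, and the preview REST API version shown is an assumption; the `ApiKind=MongoDb` segment of the connection string is what routes the indexer to the MongoDB API.

```http
POST https://[service name].search.windows.net/datasources?api-version=2021-04-30-Preview
Content-Type: application/json
api-key: [Search service admin key]

{
    "name": "mycosmosdb-mongo-ds",
    "type": "cosmosdb",
    "credentials": {
        "connectionString": "AccountEndpoint=https://[account].documents.azure.com;AccountKey=[account key];Database=[database name];ApiKind=MongoDb"
    },
    "container": { "name": "[collection name]" }
}
```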
In a [search index](search-what-is-an-index.md), add fields to accept the source
{ "name": "mysearchindex", "fields": [{
- "name": "id",
+ "name": "doc_id",
"type": "Edm.String", "key": true, "retrievable": true,
In a [search index](search-what-is-an-index.md), add fields to accept the source
} ```
-1. Create a document key field ("key": true). For MongoDB collections, Azure Cognitive Search automatically renames the `_id` property to `id` because field names canΓÇÖt start with an underscore character. If `_id` contains characters that are invalid for search document keys, the `id` values are Base64 encoded.
+1. Create a document key field ("key": true). For a search index based on a MongoDB collection, the document key can be "doc_id", "rid", or some other string field that contains unique values. As long as field names and data types are the same on both sides, no field mappings are required.
+
+ + "doc_id" represents "_id" for the object identifier. If you specify a field of "doc_id" in the index, the indexer populates it with the values of the object identifier.
+
+ + "rid" is a system property in Cosmos DB. If you specify a field of "rid" in the index, the indexer populates it with the base64-encoded value of the "rid" property.
+
+ + For any other field, your search field should have the same name as defined in the collection.
1. Create additional fields for more searchable content. See [Create an index](search-how-to-create-search-index.md) for details.
api-key: [Search service admin key]
} ```
-## Limitations
-
-These are the limitations of this feature:
-
-+ Custom queries are not supported.
-
-+ In this feature, the column name `_ts` is a reserved word. If there is a column called `_ts` in the Mongo database, the indexer will fail. If this is the case, it is recommended an alternate method to index is used, such as [Push API](search-what-is-data-import.md) or through [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) by selecting an Azure Cognitive Search index sink.
-
-
## Next steps

You can now control how you [run the indexer](search-howto-run-reset-indexers.md), [monitor status](search-howto-monitor-indexers.md), or [schedule indexer execution](search-howto-schedule-indexers.md). The following articles apply to indexers that pull content from Azure Cosmos DB:
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
Previously updated : 02/28/2022 Last updated : 07/12/2022 # Index data from Azure Cosmos DB using the SQL API
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB and makes it searchable in Azure Cognitive Search.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Cosmos DB using the SQL API.
This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB [SQL API](../cosmos-db/choose-api.md#coresql-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
search Search Indexer Field Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-field-mappings.md
Field mappings also provide light-weight data conversion through mapping functio
Field mappings enable the following scenarios:
-+ Rename fields or handle name discrepancies. Suppose your data source has a field named `_id`. Given that Azure Cognitive Search doesn't allow field names that start with an underscore, a field mapping lets you effectively rename a field.
++ Rename fields or handle name discrepancies. Suppose your data source has a field named `_city`. Given that Azure Cognitive Search doesn't allow field names that start with an underscore, a field mapping lets you effectively rename a field.

+ Data type discrepancies. Cognitive Search has a smaller set of [supported data types](/rest/api/searchservice/supported-data-types) than many data sources. If you're importing SQL data, a field mapping allows you to [map the SQL data type](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#mapping-data-types) you want in a search index.
api-key: [admin key]
{ "dataSourceName" : "mydatasource", "targetIndexName" : "myindex",
- "fieldMappings" : [ { "sourceFieldName" : "_id", "targetFieldName" : "id" } ]
+ "fieldMappings" : [ { "sourceFieldName" : "_city", "targetFieldName" : "city" } ]
} ```
var indexer = new SearchIndexer("hotels-sql-idxr", dataSource.Name, searchIndex.
    Parameters = parameters,
    FieldMappings = {
- new FieldMapping("_id") {TargetFieldName = "HotelId", FieldMappingFunction.Base64Encode()},
+ new FieldMapping("_hotelId") {TargetFieldName = "HotelId", FieldMappingFunction.Base64Encode()},
new FieldMapping("Amenities") {TargetFieldName = "Tags"} } };
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
| **Log Analytics table(s)** | ESETEnterpriseInspector_CL |
| **DCR support** | Not currently supported |
| **API credentials** | <li>EEI Username<li>EEI Password<li>Base URL |
-| **Vendor documentation/<br>installation instructions** | <li>[ESET Enterprise Inspector REST API documentation](https://help.eset.com/eei/1.5/en-US/api.html) |
+| **Vendor documentation/<br>installation instructions** | <li>[ESET Enterprise Inspector REST API documentation](https://help.eset.com/eei/1.6/en-US/api.html) |
| **Connector deployment instructions** | [Single-click deployment](connect-azure-functions-template.md?tabs=ARM) via Azure Resource Manager (ARM) template |
| **Supported by** | [ESET](https://support.eset.com/en) |
sentinel Entities Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entities-reference.md
Title: Microsoft Sentinel entity types reference | Microsoft Docs
description: This article displays the Microsoft Sentinel entity types and their required identifiers. Previously updated : 11/09/2021 Last updated : 07/06/2022
For best results - for guaranteed unique identification - you should use identif
| [**Mail cluster**](#mail-cluster) | NetworkMessageIds<br>CountByDeliveryStatus<br>CountByThreatType<br>CountByProtectionStatus<br>Threats<br>Query<br>QueryTime<br>MailCount<br>IsVolumeAnomaly<br>Source<br>ClusterSourceIdentifier<br>ClusterSourceType<br>ClusterQueryStartTime<br>ClusterQueryEndTime<br>ClusterGroup | Query<br>Source | Query + Source |
| [**Mail message**](#mail-message) | Recipient<br>Urls<br>Threats<br>Sender<br>P1Sender<br>P1SenderDisplayName<br>P1SenderDomain<br>SenderIP<br>P2Sender<br>P2SenderDisplayName<br>P2SenderDomain<br>ReceivedDate<br>NetworkMessageId<br>InternetMessageId<br>Subject<br>BodyFingerprintBin1<br>BodyFingerprintBin2<br>BodyFingerprintBin3<br>BodyFingerprintBin4<br>BodyFingerprintBin5<br>AntispamDirection<br>DeliveryAction<br>DeliveryLocation<br>Language<br>ThreatDetectionMethods | NetworkMessageId<br>Recipient | NetworkMessageId + Recipient |
| [**Submission mail**](#submission-mail) | SubmissionId<br>SubmissionDate<br>Submitter<br>NetworkMessageId<br>Timestamp<br>Recipient<br>Sender<br>SenderIp<br>Subject<br>ReportType | SubmissionId<br>NetworkMessageId<br>Recipient<br>Submitter | |
-|
+| [**Sentinel entities**](#sentinel-entities) | Entities | Entities | |
## Entity type schemas
-The following is a more in-depth look at the full schemas of each entity type. You'll notice that many of these schemas include links to other entity types - for example, the User account schema includes a link to the Host entity type, as one attribute of a user account is the host it's defined on. These externally-linked entities can't be used as identifiers for entity mapping, but they are very useful in giving a complete picture of entities on entity pages and the investigation graph.
+The following is a more in-depth look at the full schemas of each entity type. You'll notice that many of these schemas include links to other entity types - for example, the User account schema includes a link to the Host entity type, as one attribute of a user account is the host it's defined on. These externally linked entities can't be used as identifiers for entity mapping, but they are very useful in giving a complete picture of entities on entity pages and the investigation graph.
> [!NOTE]
> A question mark following the value in the **Type** column indicates the field is nullable.
The following is a more in-depth look at the full schemas of each entity type. Y
| IsDomainJoined | Bool? | Determines whether this is a domain account. |
| DisplayName | String | The display name of the account. |
| ObjectGuid | Guid? | The objectGUID attribute is a single-value attribute that is the unique identifier for the object, assigned by Active Directory. |
-|
Strong identifiers of an account entity:
Weak identifiers of an account entity:
| OSFamily | Enum? | One of the following values: <li>Linux<li>Windows<li>Android<li>IOS |
| OSVersion | String | A free-text representation of the operating system.<br>This field is meant to hold specific versions that are more fine-grained than OSFamily, or future values not supported by the OSFamily enumeration. |
| IsDomainJoined | Bool | Determines whether this host belongs to a domain. |
-|
Strong identifiers of a host entity:
- HostName + NTDomain
Weak identifiers of a host entity:
| Type | String | 'ip' |
| Address | String | The IP address as string, e.g. 127.0.0.1 (either in IPv4 or IPv6). |
| Location | GeoLocation | The geo-location context attached to the IP entity. <br><br>For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md). |
-|
Strong identifiers of an IP entity:
- Address
Strong identifiers of an IP entity:
| Category | String | The malware category by the vendor, e.g. Trojan. |
| Files | List\<Entity> | List of linked file entities on which the malware was found. Can contain the File entities inline or as reference.<br>See the File entity for additional details on structure. |
| Processes | List\<Entity> | List of linked process entities on which the malware was found. This would often be used when the alert triggered on fileless activity.<br>See the [Process](#process) entity for additional details on structure. |
-|
Strong identifiers of a malware entity:
Strong identifiers of a malware entity:
| Name | String | The file name without the path (some alerts might not include path). |
| Host | Entity | The host on which the file was stored. |
| FileHashes | List&lt;Entity&gt; | The file hashes associated with this file. |
-|
Strong identifiers of a file entity:
- Name + Directory
Strong identifiers of a file entity:
| ParentProcess | Entity (Process) | The parent process entity. <br>Can contain partial data, i.e. only the PID. |
| Host | Entity | The host on which the process was running. |
| LogonSession | Entity (HostLogonSession) | The session in which the process was running. |
-|
Strong identifiers of a process entity:
Weak identifiers of a process entity:
| AppId | Int | The technical identifier of the application. This should be one of the values defined in the list of [cloud application identifiers](#cloud-application-identifiers). The value for the AppId field is optional. |
| Name | String | The name of the related cloud application. The value of the application name is optional. |
| InstanceName | String | The user-defined instance name of the cloud application. It is often used to distinguish between several applications of the same type that a customer has. |
-|
Strong identifiers of a cloud application entity:
- AppId (without InstanceName)
Strong identifiers of a cloud application entity:
| IpAddress | List&lt;Entity (IP)&gt; | Entities corresponding to the resolved IP addresses. |
| DnsServerIp | Entity (IP) | An entity representing the DNS server resolving the request. |
| HostIpAddress | Entity (IP) | An entity representing the DNS request client. |
-|
Strong identifiers of a DNS entity:
- DomainName + DnsServerIp + HostIpAddress
Weak identifiers of a DNS entity:
| TryGetResourceGroup | Bool | The resource group value if it exists. |
| TryGetProvider | Bool | The provider value if it exists. |
| TryGetName | Bool | The name value if it exists. |
-|
Strong identifiers of an Azure resource entity:
- ResourceId
Strong identifiers of an Azure resource entity:
| Type | String | 'filehash' |
| Algorithm | Enum | The hash algorithm type. Possible values:<li>Unknown<li>MD5<li>SHA1<li>SHA256<li>SHA256AC |
| Value | String | The hash value. |
-|
Strong identifiers of a file hash entity:
- Algorithm + Value
Strong identifiers of a file hash entity:
| Type | String | 'registry-key' |
| Hive | Enum? | One of the following values:<li>HKEY_LOCAL_MACHINE<li>HKEY_CLASSES_ROOT<li>HKEY_CURRENT_CONFIG<li>HKEY_USERS<li>HKEY_CURRENT_USER_LOCAL_SETTINGS<li>HKEY_PERFORMANCE_DATA<li>HKEY_PERFORMANCE_NLSTEXT<li>HKEY_PERFORMANCE_TEXT<li>HKEY_A<li>HKEY_CURRENT_USER |
| Key | String | The registry key path. |
-|
Strong identifiers of a registry key entity:
- Hive + Key
Strong identifiers of a registry key entity:
| Name | String | The registry value name. |
| Value | String | String-formatted representation of the value data. |
| ValueType | Enum? | One of the following values:<li>String<li>Binary<li>DWord<li>Qword<li>MultiString<li>ExpandString<li>None<li>Unknown<br>Values should conform to the Microsoft.Win32.RegistryValueKind enumeration. |
-|
Strong identifiers of a registry value entity:
- Key + Name
Weak identifiers of a registry value entity:
| DistinguishedName | String | The group distinguished name. |
| SID | String | The SID attribute is a single-value attribute that specifies the security identifier (SID) of the group. |
| ObjectGuid | Guid? | The objectGUID attribute is a single-value attribute that is the unique identifier for the object, assigned by Active Directory. |
-|
Strong identifiers of a security group entity:
- DistinguishedName
Strong identifiers of a security group entity:
| -- | - | -- |
| Type | String | 'url' |
| Url | Uri | A full URL the entity points to. |
-|
Strong identifiers of a URL entity:
- Url (when an absolute URL)
Weak identifiers of a URL entity:
| MacAddress | String | The MAC address of the device. |
| Protocols | List&lt;String&gt; | A list of protocols that the device supports. |
| SerialNumber | String | The serial number of the device. |
-|
Strong identifiers of an IoT device entity:
- IoTHub + DeviceId
Weak identifiers of an IoT device entity:
| Upn | String | The mailbox's UPN. |
| RiskLevel | Enum? | The risk level of this mailbox. Possible values:<li>None<li>Low<li>Medium<li>High |
| ExternalDirectoryObjectId | Guid? | The AzureAD identifier of the mailbox. Similar to AadUserId in the Account entity, but this property is specific to the mailbox object on the Office side. |
-|
Strong identifiers of a mailbox entity:
- MailboxPrimaryAddress
Strong identifiers of a mailbox entity:
| ClusterQueryStartTime | DateTime? | Cluster start time - used as the start time for the cluster counts query. Usually dates to the End time minus the DaysToLookBack setting from Microsoft Defender for Office 365 (see note above). |
| ClusterQueryEndTime | DateTime? | Cluster end time - used as the end time for the cluster counts query. Usually the mail data's received time. |
| ClusterGroup | String | Corresponds to the Kusto query key used on Microsoft Defender for Office 365 (see note above). |
-|
Strong identifiers of a mail cluster entity:
- Query + Source
Strong identifiers of a mail cluster entity:
| DeliveryLocation | Enum? | The delivery location of this mail message. Possible values:<li>Unknown<li>Inbox<li>JunkFolder<li>DeletedFolder<li>Quarantine<li>External<li>Failed<li>Dropped<li>Forwarded |
| Language | String | The language in which the contents of the mail are written. |
| ThreatDetectionMethods | IList&lt;String&gt; | The list of Threat Detection Methods applied on this mail. |
-|
Strong identifiers of a mail message entity:
- NetworkMessageId + Recipient
Strong identifiers of a mail message entity:
| SenderIp | String | The sender's IP. |
| Subject | String | The subject of submission mail. |
| ReportType | String | The submission type for the given instance. This maps to Junk, Phish, Malware, or NotJunk. |
-|
Strong identifiers of a SubmissionMail entity:
- SubmissionId, Submitter, NetworkMessageId, Recipient
+## Sentinel entities
+
+| Field | Type | Description |
+| -- | - | -- |
+| Entities | String | A list of the entities identified in the alert. This list is the **entities** column from the SecurityAlert schema ([see documentation](security-alert-schema.md)). |
+
## Cloud application identifiers

The following list defines identifiers for known cloud applications. The App ID value is used as a [cloud application](#cloud-application) entity identifier.
-|App ID|Name|
-||-|
-|10026|DocuSign|
-|10395|Anaplan|
-|10489|Box|
-|10549|Cisco Webex|
-|10618|Atlassian|
-|10915|Cornerstone OnDemand|
-|10921|Zendesk|
-|10980|Okta|
-|11042|Jive Software|
-|11114|Salesforce|
-|11161|Office 365|
-|11162|Microsoft OneNote Online|
-|11394|Microsoft Online Services|
-|11522|Yammer|
-|11599|Amazon Web Services|
-|11627|Dropbox|
-|11713|Expensify|
-|11770|G Suite|
-|12005|SuccessFactors|
-|12260|Microsoft Azure|
-|12275|Workday|
-|13843|LivePerson|
-|13979|Concur|
-|14509|ServiceNow|
-|15570|Tableau|
-|15600|Microsoft OneDrive for Business|
-|15782|Citrix ShareFile|
-|17152|Amazon|
-|17865|Ariba Inc|
-|18432|Zscaler|
-|19688|Xactly|
-|20595|Microsoft Defender for Cloud Apps|
-|20892|Microsoft SharePoint Online|
-|20893|Microsoft Exchange Online|
-|20940|Active Directory|
-|20941|Adallom CPanel|
-|22110|Google Cloud Platform|
-|22930|Gmail|
-|23004|Autodesk Fusion Lifecycle|
-|23043|Slack|
-|23233|Microsoft Office Online|
-|25275|Microsoft Skype for Business|
-|25988|Google Docs|
-|26055|Microsoft Office 365 admin center|
-|26060|OPSWAT Gears|
-|26061|Microsoft Word Online|
-|26062|Microsoft PowerPoint Online|
-|26063|Microsoft Excel Online|
-|26069|Google Drive|
-|26206|Workiva|
-|26311|Microsoft Dynamics|
-|26318|Microsoft Azure AD|
-|26320|Microsoft Office Sway|
-|26321|Microsoft Delve|
-|26324|Microsoft Power BI|
-|27548|Microsoft Forms|
-|27592|Microsoft Flow|
-|27593|Microsoft PowerApps|
-|28353|Workplace by Facebook|
-|28373|CAS Proxy Emulator|
-|28375|Microsoft Teams|
-|32780|Microsoft Dynamics 365|
-|33626|Google|
-|34127|Microsoft AppSource|
-|34667|HighQ|
-|35395|Microsoft Dynamics Talent|
-|
+| App ID | Name |
+| | |
+| 10026 | DocuSign |
+| 10395 | Anaplan |
+| 10489 | Box |
+| 10549 | Cisco Webex |
+| 10618 | Atlassian |
+| 10915 | Cornerstone OnDemand |
+| 10921 | Zendesk |
+| 10980 | Okta |
+| 11042 | Jive Software |
+| 11114 | Salesforce |
+| 11161 | Office 365 |
+| 11162 | Microsoft OneNote Online |
+| 11394 | Microsoft Online Services |
+| 11522 | Yammer |
+| 11599 | Amazon Web Services |
+| 11627 | Dropbox |
+| 11713 | Expensify |
+| 11770 | G Suite |
+| 12005 | SuccessFactors |
+| 12260 | Microsoft Azure |
+| 12275 | Workday |
+| 13843 | LivePerson |
+| 13979 | Concur |
+| 14509 | ServiceNow |
+| 15570 | Tableau |
+| 15600 | Microsoft OneDrive for Business |
+| 15782 | Citrix ShareFile |
+| 17152 | Amazon |
+| 17865 | Ariba Inc |
+| 18432 | Zscaler |
+| 19688 | Xactly |
+| 20595 | Microsoft Defender for Cloud Apps |
+| 20892 | Microsoft SharePoint Online |
+| 20893 | Microsoft Exchange Online |
+| 20940 | Active Directory |
+| 20941 | Adallom CPanel |
+| 22110 | Google Cloud Platform |
+| 22930 | Gmail |
+| 23004 | Autodesk Fusion Lifecycle |
+| 23043 | Slack |
+| 23233 | Microsoft Office Online |
+| 25275 | Microsoft Skype for Business |
+| 25988 | Google Docs |
+| 26055 | Microsoft 365 admin center |
+| 26060 | OPSWAT Gears |
+| 26061 | Microsoft Word Online |
+| 26062 | Microsoft PowerPoint Online |
+| 26063 | Microsoft Excel Online |
+| 26069 | Google Drive |
+| 26206 | Workiva |
+| 26311 | Microsoft Dynamics |
+| 26318 | Microsoft Azure AD |
+| 26320 | Microsoft Office Sway |
+| 26321 | Microsoft Delve |
+| 26324 | Microsoft Power BI |
+| 27548 | Microsoft Forms |
+| 27592 | Microsoft Flow |
+| 27593 | Microsoft PowerApps |
+| 28353 | Workplace by Facebook |
+| 28373 | CAS Proxy Emulator |
+| 28375 | Microsoft Teams |
+| 32780 | Microsoft Dynamics 365 |
+| 33626 | Google |
+| 34127 | Microsoft AppSource |
+| 34667 | HighQ |
+| 35395 | Microsoft Dynamics Talent |
## Next steps
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
To deploy the CRs, follow the steps outlined below:
1. Repeat the procedure in the preceding 5 steps to add the remaining Change Requests to be deployed.
-1. In the **Import Queue** window, select the **Import All Requests** icon:
+1. In the **Import Queue** window, select the relevant Transport Request once, and then select **F9** or the **Select/Deselect Request** icon.
- :::image type="content" source="media/preparing-sap/import-all-requests.png" alt-text="Screenshot of importing all requests." lightbox="media/preparing-sap/import-all-requests-lightbox.png":::
+1. To add the remaining Transport Requests to the deployment, repeat step 9.
+
+1. Select the Import Requests icon:
+
+ :::image type="content" source="media/preparing-sap/import-requests.png" alt-text="Screenshot of importing all requests." lightbox="media/preparing-sap/import-requests-lightbox.png":::
1. In the **Start Import** window, select the **Target Client** field.
The required authorizations are listed here by log type. Only the authorizations
| | |
+## Remove the user role and the optional CR installed on your ABAP system
+
+To remove the user role and optional CR imported to your system, import the deletion CR *NPLK900259* into your ABAP system.
+
## Next steps

You have now fully prepared your SAP environment. The required CRs have been deployed, a role and profile have been provisioned, and a user account has been created and assigned the proper role profile.
sentinel Work With Threat Indicators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-threat-indicators.md
IMPORTANT: Microsoft Sentinel refreshes indicators every 14 days to make sure th
> [!IMPORTANT]
> Matching analytics is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-[Create a rule](detect-threats-built-in.md#use-built-in-analytics-rules) using the built-in **Microsoft Threat Intelligence Matching Analytics** analytics rule template to have Microsoft Sentinel match Microsoft-generated threat intelligence data with the logs you've ingested in to Microsoft Sentinel.
+[Create a rule](detect-threats-built-in.md#use-built-in-analytics-rules) using the built-in **Microsoft Threat Intelligence Analytics** analytics rule template to have Microsoft Sentinel match Microsoft-generated threat intelligence data with the logs you've ingested in to Microsoft Sentinel.
Matching threat intelligence data with your logs helps to generate high-fidelity alerts and incidents, with appropriate severities applied. When a match is found, any alerts generated are grouped into incidents.
Alerts are grouped on a per-observable basis, over a 24-hour timeframe. So, for
If a match is found, any alerts generated are grouped into incidents.
-Use the following steps to triage through the incidents generated by the **Microsoft Threat Intelligence Matching Analytics** rule:
+Use the following steps to triage through the incidents generated by the **Microsoft Threat Intelligence Analytics** rule:
-1. In the Microsoft Sentinel workspace where you've enabled the **Microsoft Threat Intelligence Matching Analytics** rule, select **Incidents** and search for **Microsoft Threat Intelligence Analytics**.
+1. In the Microsoft Sentinel workspace where you've enabled the **Microsoft Threat Intelligence Analytics** rule, select **Incidents** and search for **Microsoft Threat Intelligence Analytics**.
Any incidents found are shown in the grid.
service-fabric Backup Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/backup-explorer.md
Title: Read and update a Reliable Collections backup locally description: Use Backup Explorer in Azure Service Fabric to read and update a local Reliable Collections backup.-- Previously updated : 07/01/2020-++++ Last updated : 07/11/2022 # Read and update a Reliable Collections backup by using Backup Explorer
service-fabric Cluster Resource Manager Subclustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/cluster-resource-manager-subclustering.md
Title: Balancing of subclustered metrics description: The effect of placement constraints on balancing and how to handle it-- Previously updated : 03/15/2020-++++ Last updated : 07/11/2022 # Balancing of subclustered metrics
service-fabric Cluster Security Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/cluster-security-certificate-management.md
Title: Manage certificates in a Service Fabric cluster description: Learn about managing certificates in a Service Fabric cluster that's secured with X.509 certificates. Previously updated : 04/10/2020-++++ Last updated : 07/11/2022 # Manage certificates in Service Fabric clusters
We describe *certificate management* as the processes and procedures that are us
Some management operations, such as enrollment, policy setting, and authorization controls, are beyond the scope of this article. Other operations, such as provisioning, renewal, re-keying, or revocation, are related only incidentally to Service Fabric. Nonetheless, the article addresses them somewhat, because understanding these operations can help you secure your cluster properly.
-Your immediate goal is likely to be to automate certificate management as much as possible to ensure uninterrupted availability of the cluster. Because the process is user-touch-free, you'll also want to offer security assurances. With Service Fabric clusters, this goal is attainable.
+Your immediate goal is likely to be to automate certificate management as much as possible to ensure uninterrupted availability of the cluster. Because the process is user-touch-free, you'll also want to offer security assurances. With Service Fabric clusters, this goal is attainable.
The rest of the article first deconstructs certificate management, and later focuses on enabling autorollover.
Let's quickly outline the progression of a certificate from issuance to consumpt
For the purposes of this article, the first two steps in the preceding sequence are mostly unrelated. Their only connection is that the subject common name of the certificate is the DNS name that's declared in the cluster definition.
-Certificate issuance and provisioning flow is illustrated in the following diagrams:
+Certificate issuance and provisioning flow are illustrated in the following diagrams:
**For certificates that are declared by thumbprint**
Continuing with Azure as the context, and using Key Vault as the secret-manageme
- Under `{vaultUri}/secrets/{name}`: The certificate, including its private key, available for downloading as an unprotected PFX or PEM file. Recall that a certificate in the key vault contains a chronological list of certificate instances that share a policy. Certificate versions will be created according to the lifetime and renewal attributes of this policy. We highly recommend that vault certificates not share subjects or domains or DNS names, because it can be disruptive in a cluster to provision certificate instances from different vault certificates, with identical subjects but substantially different other attributes, such as issuer, key usages, and so on.
-
At this point, a certificate exists in the key vault, ready for consumption. Now let's explore the rest of the process.

### Certificate provisioning
-We mentioned a *provisioning agent*, which is the entity that retrieves the certificate, including its private key, from the key vault and installs it on each of the hosts of the cluster. (Recall that Service Fabric doesn't provision certificates.)
+We mentioned a *provisioning agent*, which is the entity that retrieves the certificate, including its private key, from the key vault and installs it on each of the hosts of the cluster. (Recall that Service Fabric doesn't provision certificates.)
-In the context of this article, the cluster will be hosted on a collection of Azure virtual machines (VMs) or virtual machine scale sets (VMSS). In Azure, you can provision a certificate from a vault to a VM/VMSS by using the following mechanisms. This assumes, as before, that the provisioning agent was previously granted *secret get* permissions on the key vault by the key vault owner.
+In the context of this article, the cluster will be hosted on a collection of Azure virtual machines (VMs) or virtual machine scale sets. In Azure, you can provision a certificate from a vault to a VM/VMSS by using the following mechanisms. This assumes, as before, that the provisioning agent was previously granted *secret get* permissions on the key vault by the key vault owner.
- Ad-hoc: An operator retrieves the certificate from the key vault (as PFX/PKCS #12 or PEM) and installs it on each node.
In the context of this article, the cluster will be hosted on a collection of Az
- By using the [Key Vault VM extension](../virtual-machines/extensions/key-vault-windows.md). This lets you provision certificates by using version-less declarations, with periodic refreshing of observed certificates. In this case, the VM/VMSS is expected to have a [managed identity](../virtual-machines/security-policy.md#managed-identities-for-azure-resources), an identity that has been granted access to the key vaults that contain the observed certificates.
-VMSS/compute-based provisioning presents security and availability advantages, but it also presents restrictions. It requires, by design, that you declare certificates as versioned secrets, which makes it suitable only for clusters secured with certificates declared by thumbprint.
+VMSS/compute-based provisioning presents security and availability advantages, but it also presents restrictions. It requires, by design, that you declare certificates as versioned secrets. This requirement makes VMSS/compute-based provisioning suitable only for clusters secured with certificates declared by thumbprint.
-In contrast, Key Vault VM extension-based provisioning always installs the latest version of each observed certificate, which makes it suitable only for clusters secured with certificates declared by subject common name. To emphasize, do not use an autorefresh provisioning mechanism (such as the Key Vault VM extension) for certificates that are declared by instance (that is, by thumbprint). The risk of losing availability is considerable.
+In contrast, Key Vault VM extension-based provisioning always installs the latest version of each observed certificate, which makes it suitable only for clusters secured with certificates declared by subject common name. To emphasize, do not use an autorefresh provisioning mechanism (such as the Key Vault VM extension) for certificates that are declared by instance (that is, by thumbprint). The risk of losing availability is considerable.
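For reference, a Key Vault VM extension declaration on a virtual machine scale set might look like the following sketch; the vault URL and certificate name are placeholders, and `linkOnRenewal` is shown explicitly because it's discussed later in this article.

```json
{
    "name": "KVVMExtensionForWindows",
    "properties": {
        "publisher": "Microsoft.Azure.KeyVault",
        "type": "KeyVaultForWindows",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "secretsManagementSettings": {
                "pollingIntervalInS": "3600",
                "certificateStoreName": "MY",
                "certificateStoreLocation": "LocalMachine",
                "linkOnRenewal": false,
                "observedCertificates": [
                    "https://myvault.vault.azure.net/secrets/mycluster-cert"
                ]
            }
        }
    }
}
```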
Other provisioning mechanisms exist, but the approaches mentioned here are the currently accepted options for Azure Service Fabric clusters.
Next, let's set up the additional resources that are needed to ensure the autoro
### Set up the prerequisite resources
-As mentioned earlier, a certificate that's provisioned as a virtual machine scale set secret is retrieved from the key vault by the Microsoft.Compute Resource Provider service. It does so by using its first-party identity on behalf of the deployment operator. For autorollover, that will change. You'll switch to using a managed identity that's assigned to the virtual machine scale set and that has been granted GET permissions on the secrets in that vault.
+As mentioned earlier, a certificate that's provisioned as a virtual machine scale set secret is retrieved from the key vault by the Microsoft Compute Resource Provider service. It does so by using its first-party identity on behalf of the deployment operator. That process will change for autorollover. You'll switch to using a managed identity that's assigned to the virtual machine scale set and that has been granted GET permissions on the secrets in that vault.
You should deploy the next excerpts at the same time. They're listed individually only for play-by-play analysis and explanation.
This indicates to the Key Vault VM extension that, on the first run (after deplo
#### Certificate linking, explained
-You might have noticed the Key Vault VM extension `linkOnRenewal` flag, and the fact that it is set to false. This setting addresses in depth the behavior controlled by this flag and its implications on the functioning of a cluster. This behavior is specific to Windows.
+You might have noticed the Key Vault VM extension `linkOnRenewal` flag, and the fact that it is set to false. This section addresses the behavior controlled by this flag and its implications for the functioning of a cluster. This behavior is specific to Windows.
According to its [definition](../virtual-machines/extensions/key-vault-windows.md#extension-schema):
service-fabric Cluster Security Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/cluster-security-certificates.md
Title: X.509 Certificate-based Authentication in a Service Fabric Cluster description: Learn about certificate-based authentication in Service Fabric clusters, and how to detect, mitigate and fix certificate-related problems. Previously updated : 03/16/2020++++ Last updated : 07/11/2022 +

# X.509 Certificate-based authentication in Service Fabric clusters

This article complements the introduction to [Service Fabric cluster security](service-fabric-cluster-security.md), and goes into the details of certificate-based authentication in Service Fabric clusters. We assume the reader is familiar with fundamental security concepts, and also with the controls that Service Fabric exposes to control the security of a cluster.
As alluded to above, the Service Fabric runtime defines two levels of privilege
The security settings of a Service Fabric cluster describe, in principle, the following aspects:
- the authentication type; this is a creation-time, immutable characteristic of the cluster. Examples of such settings are 'ClusterCredentialType', 'ServerCredentialType', and allowed values are 'none', 'x509' or 'windows'. This article focuses on the x509-type authentication.
- the (authentication) validation rules; these settings are set by the cluster owner and describe which credentials shall be accepted for a given role. Examples will be examined in depth immediately below.
-- settings used to tweak or subtly alter the result of authentication; examples here include flags (de-)restricting enforcement of certificate revocation lists etc.
+- settings used to tweak or subtly alter the result of authentication; examples here include flags restricting or unrestricting enforcement of certificate revocation lists, etc.
> [!NOTE]
> Cluster configuration examples provided below are excerpts from the cluster manifest in XML format, as the most-digested format which supports directly the Service Fabric functionality described in this article. The same settings can be expressed directly in the JSON representations of a cluster definition, whether a standalone JSON cluster manifest, or an Azure Resource Management template.
The declarations above correspond to the admin and user identities, respectively
Tying it all together, upon receiving a request for a connection in a cluster secured with X.509 certificates, the Service Fabric runtime will use the cluster's security settings to validate the credentials of the remote party as described above; if successful, the caller/remote party is considered to be authenticated. If the credential matches multiple validation rules, the runtime will grant the caller the highest-privileged role of any of the matched rules.

### Presentation rules
-The previous section described how authentication works in a certificate-secured cluster; this section will explain how the Service Fabric runtime itself discovers and loads the certificates it uses for in-cluster communication; we call these the "presentation" rules.
+The previous section described how authentication works in a certificate-secured cluster; this section will explain how the Service Fabric runtime itself discovers and loads the certificates it uses for in-cluster communication; we call these "presentation" rules.
As with the validation rules, the presentation rules specify a role and the associated credential declaration, expressed either by thumbprint or common name. Unlike the validation rules, common name-based declarations do not have provisions for issuer pinning; this allows for greater flexibility as well as improved performance. The presentation rules are declared in the 'NodeType' section(s) of the cluster manifest, for each distinct node type; the settings are split from the Security sections of the cluster to allow each node type to have its full configuration in a single section. In Azure Service Fabric clusters, the node type certificate declarations default to their corresponding settings in the Security section of the definition of the cluster.
Note that, for common-name based presentation declarations, a certificate is con
### Miscellaneous certificate configuration settings

It was mentioned previously that the security settings of a Service Fabric cluster also allow for subtly changing the behavior of the authentication code. While the article on [Service Fabric cluster settings](service-fabric-cluster-fabric-settings.md) represents the comprehensive and most up to date list of settings, we'll expand on the meaning of a select few of the security settings here, to complete the full exposé on certificate-based authentication. For each setting, we'll explain the intent, default value/behavior, how it affects authentication and which values are acceptable.
-As mentioned, certificate validation always implies building and evaluating the certificate's chain. For CA-issued certificates, this apparently-simple OS API call typically entails several outbound calls to various endpoints of the issuing PKI, caching of responses and so on. Given the prevalence of certificate validation calls in a Service Fabric cluster, any issues in the PKI's endpoints can result in reduced availability of the cluster, or outright breakdown. While the outbound calls cannot be suppressed (see below in the FAQ section for more on this), the following settings can be used to mask out validation errors caused by failing CRL calls.
+As mentioned, certificate validation always implies building and evaluating the certificate's chain. For CA-issued certificates, this apparently simple OS API call typically entails several outbound calls to various endpoints of the issuing PKI, caching of responses and so on. Given the prevalence of certificate validation calls in a Service Fabric cluster, any issues in the PKI's endpoints can result in reduced availability of the cluster, or outright breakdown. While the outbound calls cannot be suppressed (see below in the FAQ section for more on this), the following settings can be used to mask out validation errors caused by failing CRL calls.
* CrlCheckingFlag - under the "Security" section, string converted to UINT. The value of this setting is used by Service Fabric to mask out certificate chain status errors by changing the behavior of chain building; it is passed in to the Win32 CryptoAPI [CertGetCertificateChain](/windows/win32/api/wincrypt/nf-wincrypt-certgetcertificatechain) call as the 'dwFlags' parameter, and can be set to any valid combination of flags accepted by the function. A value of 0 forces the Service Fabric runtime to ignore any trust status errors - this is not recommended, as its use would constitute a significant security exposure. The default value is 0x40000000 (CERT_CHAIN_REVOCATION_CHECK_CHAIN_EXCLUDE_ROOT).
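As an illustration, in an Azure Resource Manager template this setting would sit under the cluster resource's `fabricSettings`; the following is a sketch showing the documented default value.

```json
"fabricSettings": [
    {
        "name": "Security",
        "parameters": [
            { "name": "CrlCheckingFlag", "value": "0x40000000" }
        ]
    }
]
```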
Typical symptoms that manifest themselves in a cluster experiencing authenticati
- connection attempts are rejected
- connection attempts are timing out
-Each of the symptoms may be caused by different problems, and the same root cause may show different manifestations; as such, we'll just list a small sample of typical problems, with recommendations for fixing them.
+Each of the symptoms may be caused by different problems, and the same root cause may show different manifestations; as such, we'll just list a small sample of typical problems, with recommendations for fixing them.
-* Nodes can exchange messages but cannot connect. A possible cause for connection attempts to be terminated is the 'certificate not matched' error - one of the parties in a Service Fabric-to- Service Fabric connections is presenting a certificate which fails the recipient's validation rules. May be accompanied by either of the following errors:
+* Nodes can exchange messages but cannot connect. A possible cause for connection attempts to be terminated is the 'certificate not matched' error - one of the parties in a Service Fabric-to-Service Fabric connection is presenting a certificate which fails the recipient's validation rules. May be accompanied by either of the following errors:
```C++
0x80071c44 -2147017660 FABRIC_E_SERVER_AUTHENTICATION_FAILED
```
service-fabric Concepts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/concepts-managed-identity.md
Title: Managed identities for Azure description: Learn about using Managed identities for Azure with Service Fabric. Previously updated : 05/28/2022++++ Last updated : 07/11/2022 # Using Managed identities for Azure with Service Fabric
service-fabric Configure Container Repository Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/configure-container-repository-credentials.md
Title: Azure Service Fabric - Configure container repository credentials description: Configure repository credentials to download images from container registry Previously updated : 12/09/2019++++ Last updated : 07/11/2022 # Configure repository credentials for your application to download container images
service-fabric Configure Existing Cluster Enable Managed Identity Token Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/configure-existing-cluster-enable-managed-identity-token-service.md
Title: Configure managed identity support in an existing Service Fabric cluster description: Here's how to enable managed identities support in an existing Azure Service Fabric cluster- Previously updated : 03/11/2019+++++ Last updated : 07/11/2022 # Configure managed identity support in an existing Service Fabric cluster
To enable the Managed Identity Token Service in an existing cluster, you will ne
] ```
-In order for the changes to take effect, you will also need to change the upgrade policy to specify a forceful restart of the Service Fabric runtime on each node as the upgrade progresses through the cluster. This restart ensures that the newly enabled system service is started and running on each node. In the snippet below, `forceRestart` is the essential setting to enable restart. For the remaining parameters, use values described below or use existing custom values already specified for the cluster resource. Custom settings for Fabric Upgrade Policy ('upgradeDescription') can be viewed from Azure Portal by selecting 'Fabric Upgrades' option on the Service Fabric resource or resources.azure.com. Default options for the upgrade policy ('upgradeDescription') are not viewable from powershell or resources.azure.com. See [ClusterUpgradePolicy](/dotnet/api/microsoft.azure.management.servicefabric.models.clusterupgradepolicy) for additional information.
+In order for the changes to take effect, you will also need to change the upgrade policy to specify a forceful restart of the Service Fabric runtime on each node as the upgrade progresses through the cluster. This restart ensures that the newly enabled system service is started and running on each node. In the snippet below, `forceRestart` is the essential setting to enable restart. For the remaining parameters, use values described below or use existing custom values already specified for the cluster resource. Custom settings for Fabric Upgrade Policy ('upgradeDescription') can be viewed from Azure portal by selecting 'Fabric Upgrades' option on the Service Fabric resource or resources.azure.com. Default options for the upgrade policy ('upgradeDescription') are not viewable from PowerShell or resources.azure.com. See [ClusterUpgradePolicy](/dotnet/api/microsoft.azure.management.servicefabric.models.clusterupgradepolicy) for additional information.
```json
"upgradeDescription": {
```
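As a rough sketch (not the documented procedure), the same `forceRestart` flag could also be toggled on an existing cluster resource with the generic Az.Resources cmdlets; all names below are placeholders, and the cluster resource is assumed to already carry an `upgradeDescription` object.

```powershell
# Sketch: flip forceRestart on an existing cluster resource via generic
# Az.Resources cmdlets. Placeholder resource group and cluster names.
$cluster = Get-AzResource -ResourceGroupName "myRG" `
    -ResourceType "Microsoft.ServiceFabric/clusters" -Name "myCluster"
$cluster.Properties.upgradeDescription.forceRestart = $true  # assumes the property exists
$cluster | Set-AzResource -Force
```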
service-fabric Configure New Azure Service Fabric Enable Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/configure-new-azure-service-fabric-enable-managed-identity.md
Title: Configure managed identity support for a new Service Fabric cluster description: Here's how to enable managed identities support in a new Azure Service Fabric cluster- Previously updated : 12/09/2019+++++ Last updated : 07/11/2022 # Configure managed identity support for a new Service Fabric cluster
service-fabric Create Load Balancer Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/create-load-balancer-rule.md
Title: Create an Azure Load Balancer rule for a cluster description: Configure an Azure Load Balancer to open ports for your Azure Service Fabric cluster.-- Previously updated : 12/06/2017 -+++++ Last updated : 07/11/2022 # Open ports for a Service Fabric cluster
service-fabric How To Deploy Service Fabric Application System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-service-fabric-application-system-assigned-managed-identity.md
Title: Deploy a Service Fabric app with system-assigned MI description: This article shows you how to assign a system-assigned managed identity to an Azure Service Fabric application-- Previously updated : 05/25/2022+++++ Last updated : 07/11/2022 # Deploy Service Fabric application with system-assigned managed identity
service-fabric How To Deploy Service Fabric Application User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-service-fabric-application-user-assigned-managed-identity.md
Title: Deploy app with a user-assigned managed identity description: This article shows you how to deploy Service Fabric application with a user-assigned managed identity-- Previously updated : 12/09/2019+++++ Last updated : 07/11/2022 # Deploy Service Fabric application with a User-Assigned Managed Identity
service-fabric How To Grant Access Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-grant-access-other-resources.md
Title: Grant an application access to other Azure resources description: This article explains how to grant your managed-identity-enabled Service Fabric application access to other Azure resources supporting Azure Active Directory-based authentication.-- Previously updated : 12/09/2019-+++++ Last updated : 07/11/2022 # Granting a Service Fabric application's managed identity access to Azure resources
service-fabric How To Managed Cluster App Deployment Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-app-deployment-template.md
Title: Deploy an application to a managed cluster using Azure Resource Manager description: Learn how to deploy, upgrade, or delete a Service Fabric application on an Azure Service Fabric managed cluster using Azure Resource Manager Previously updated : 8/23/2021 -++++ Last updated : 07/11/2022 # Manage application lifecycle on a managed cluster using Azure Resource Manager
To delete a service fabric application that was deployed by using the applicatio
If you are migrating applications from classic to managed clusters, make sure to validate that types are correctly specified, or you will encounter errors.
-The following items are called out specifically due to frequency of usage, but not not meant to be an exclusive list of differences.
+The following items are called out specifically due to frequency of usage, but not meant to be an exclusive list of differences.
* upgradeReplicaSetCheckTimeout is now an integer for managed, but a string on classic SFRP.
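To illustrate the type difference, here is a hedged sketch of the two shapes; the values are illustrative, and managed clusters are assumed to take the timeout in seconds.

```powershell
# Illustrative fragments only, expressed as PowerShell hashtables.
$managedUpgradePolicy = @{ upgradeReplicaSetCheckTimeout = 600 }         # integer (assumed seconds)
$classicUpgradePolicy = @{ upgradeReplicaSetCheckTimeout = "00:10:00" }  # string (hh:mm:ss)
```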
service-fabric How To Managed Cluster Application Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-managed-identity.md
Title: Configure and use applications with managed identity on a Service Fabric managed cluster description: Learn how to configure, and use an application with managed identity on an Azure Resource Manager (ARM) template deployed Azure Service Fabric managed cluster. Previously updated : 8/23/2021++++ Last updated : 07/11/2022 # Deploy an application with Managed Identity to a Service Fabric managed cluster
service-fabric How To Managed Cluster Application Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-secrets.md
Title: Deploy application secrets to a Service Fabric managed cluster description: Learn about Azure Service Fabric application secrets and how to deploy them to a managed cluster Previously updated : 8/23/2021++++ Last updated : 07/11/2022 # Deploy application secrets to a Service Fabric managed cluster
service-fabric How To Managed Cluster Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-autoscale.md
Title: Configure autoscaling for Service Fabric managed cluster nodes description: Learn how to configure autoscaling policies on Service Fabric managed cluster. Previously updated : 2/14/2022++++ Last updated : 07/11/2022 # Introduction to Autoscaling on Service Fabric managed clusters
-[Autoscaling](../azure-monitor/autoscale/autoscale-overview.md) gives great elasticity and enables addition or reduction of nodes on demand on a secondary node type. This automated and elastic behavior reduces the management overhead and potential business impact by monitoring and optimizing the amount of nodes servicing your workload. You configure rules for your workload and let autoscaling handle the rest. When those defined thresholds are met, autoscale rules take action to adjust the capacity of your node type. Autoscaling can be enabled, disabled, or configured at any time. This article provides an example deployment, how to enable or disable autoscaling, and how to configure an example autoscale policy.
+[Autoscaling](../azure-monitor/autoscale/autoscale-overview.md) gives great elasticity and enables addition or reduction of nodes on demand on a secondary node type. This automated and elastic behavior reduces the management overhead and potential business impact by monitoring and optimizing the number of nodes servicing your workload. You configure rules for your workload and let autoscaling handle the rest. When those defined thresholds are met, autoscale rules take action to adjust the capacity of your node type. Autoscaling can be enabled, disabled, or configured at any time. This article provides an example deployment, how to enable or disable autoscaling, and how to configure an example autoscale policy.
**Requirements and supported metrics:**
Last updated 2/14/2022
A common scenario where autoscaling is useful is when the load on a particular service varies over time. For example, a service such as a gateway can scale based on the amount of resources necessary to handle incoming requests. Let's take a look at an example of what those scaling rules could look like and we'll use them later in the article:
-* If all instances of my gateway are using more than 70% on average, then scale the gateway service out by adding two more instance. Do this every 30 minutes, but never have more than twenty instances in total.
+* If all instances of my gateway are using more than 70% on average, then scale the gateway service out by adding two more instances. Do this every 30 minutes, but never have more than twenty instances in total.
* If all instances of my gateway are using less than 40% cores on average, then scale the service in by removing one instance. Do this every 30 minutes, but never have fewer than three instances in total.

## Example autoscale deployment
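As a sketch of how the first rule above might be expressed with the older Az.Monitor autoscale cmdlets (deprecated in newer module versions); the scale set resource ID, names, and location are placeholders.

```powershell
# Scale out by two instances when average CPU exceeds 70% over a 30-minute
# window, with a 30-minute cooldown and a cap of twenty instances.
$vmssId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/" +
          "Microsoft.Compute/virtualMachineScaleSets/<nodeTypeName>"
$scaleOut = New-AzAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId $vmssId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 70 `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) -TimeWindow ([TimeSpan]::FromMinutes(30)) `
    -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount `
    -ScaleActionValue "2" -ScaleActionCooldown ([TimeSpan]::FromMinutes(30))
$autoscaleProfile = New-AzAutoscaleProfile -Name "gateway-autoscale" `
    -DefaultCapacity "3" -MinimumCapacity "3" -MaximumCapacity "20" -Rule $scaleOut
Add-AzAutoscaleSetting -Location "westus2" -Name "sfmc-gateway-autoscale" `
    -ResourceGroupName "<rg>" -TargetResourceId $vmssId -AutoscaleProfile $autoscaleProfile
```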
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Title: Deploy a Service Fabric managed cluster across Availability Zones description: Learn how to deploy Service Fabric managed cluster across Availability Zones and how to configure in an ARM template. Previously updated : 1/20/2022++++ Last updated : 07/11/2022 # Deploy a Service Fabric managed cluster across availability zones Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
service-fabric How To Managed Cluster Azure Active Directory Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-azure-active-directory-client.md
Title: How to configure Azure Service Fabric managed cluster for Azure active directory client access description: Learn how to configure an Azure Service Fabric managed cluster for Azure active directory client access-- Previously updated : 03/1/2022+++++ Last updated : 07/11/2022 # How to configure Azure Service Fabric managed cluster for Active Directory client access
service-fabric How To Managed Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-configuration.md
Title: Configure your Service Fabric managed cluster description: Learn how to configure your Service Fabric managed cluster for automatic OS upgrades, NSG rules, and more. Previously updated : 10/25/2021++++ Last updated : 07/11/2022 # Service Fabric managed cluster configuration options
service-fabric How To Managed Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-connect.md
Title: Connect to a Service Fabric managed cluster description: Learn how to connect to a Service Fabric managed cluster Previously updated : 10/25/2021++++ Last updated : 07/11/2022 # Connect to a Service Fabric managed cluster
To navigate to SFX for your managed cluster
## Use PowerShell Modules
-There following PowerShell Modules are available to connect, view, and modify configurations for your cluster.
+The following PowerShell Modules are available to connect, view, and modify configurations for your cluster.
* Install the [Service Fabric SDK and PowerShell module](service-fabric-get-started.md).
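Once the module is installed, a typical connection looks like the following sketch; the endpoint and thumbprints are placeholders, and the client certificate is assumed to be installed in CurrentUser\My.

```powershell
# Connect with mutual certificate authentication (placeholder values).
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus2.cloudapp.azure.com:19000" `
    -X509Credential -FindType FindByThumbprint -FindValue "<client-cert-thumbprint>" `
    -StoreLocation CurrentUser -StoreName My `
    -ServerCertThumbprint "<cluster-cert-thumbprint>"
```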
service-fabric How To Managed Cluster Enable Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-enable-disk-encryption.md
Title: Enable Disk Encryption for Service Fabric managed cluster nodes description: Learn how to enable disk encryption for Azure Service Fabric managed cluster nodes in Windows using an ARM template. Previously updated : 2/14/2022-++++ Last updated : 07/11/2022 # Enable disk encryption for Service Fabric managed cluster nodes
service-fabric How To Managed Cluster Grant Access Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-grant-access-other-resources.md
Title: Grant access to Azure resources on a Service Fabric cluster description: Learn how to grant a managed-identity-enabled Service Fabric application access to other Azure resources that support Azure Active Directory authentication.--- Previously updated : 06/01/2022++++ Last updated : 07/11/2022 # Grant a Service Fabric application access to Azure resources on a Service Fabric cluster
service-fabric How To Managed Cluster Large Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-large-virtual-machine-scale-sets.md
Title: Configure a secondary node type for large virtual machine scale sets on a Service Fabric managed cluster description: This article walks through how to configure a secondary node type as a large virtual machine scale set Previously updated : 8/23/2021 ++++ Last updated : 07/11/2022 # Service Fabric managed cluster node type scaling
service-fabric How To Managed Cluster Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-managed-disk.md
Title: Select managed disk types for Service Fabric managed cluster nodes description: Learn how to select managed disk types for Service Fabric managed cluster nodes and configure in an ARM template. Previously updated : 2/14/2022++++ Last updated : 07/11/2022 # Select managed disk types for Service Fabric managed cluster nodes
service-fabric How To Managed Cluster Managed Identity Service Fabric App Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-managed-identity-service-fabric-app-code.md
Title: Use managed identity with an application on a Service Fabric managed cluster description: How to use managed identities in Azure Service Fabric application code to access Azure Services on a Service Fabric managed cluster.-- Previously updated : 5/10/2021+++++ Last updated : 07/11/2022 # How to leverage a Service Fabric application's managed identity to access Azure services on a Service Fabric managed cluster
See a companion sample application that demonstrates using system-assigned and u
> [!IMPORTANT]
> Prior to using the managed identity of a Service Fabric application, the client application must be granted access to the protected resource. Please refer to the list of [Azure services which support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources) to check for support, and then to the respective service's documentation for specific steps to grant an identity access to resources of interest.
-
## Leverage a managed identity using Azure.Identity
The 'status code' field of the HTTP response header indicates the success status
| Status Code | Error Reason | How To Handle |
| --- | --- | --- |
| 404 Not found. | Unknown authentication code, or the application was not assigned a managed identity. | Rectify the application setup or token acquisition code. |
-| 429 Too many requests. | Throttle limit reached, imposed by AAD or SF. | Retry with Exponential Backoff. See guidance below. |
+| 429 Too many requests. | Throttle limit reached, imposed by Azure AD or SF. | Retry with Exponential Backoff. See guidance below. |
| 4xx Error in request. | One or more of the request parameters was incorrect. | Do not retry. Examine the error details for more information. 4xx errors are design-time errors.|
| 5xx Error from service. | The managed identity subsystem or Azure Active Directory returned a transient error. | It is safe to retry after a short while. You may hit a throttling condition (429) upon retrying.|
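The following is a minimal exponential backoff sketch for the 429 case, assuming `$uri` holds a token request URL built from the 'IDENTITY_ENDPOINT' environment variable; the attempt count and delays are illustrative rather than prescribed limits.

```powershell
# Retry only on 429, doubling the wait between attempts.
$delaySeconds = 1
foreach ($attempt in 1..5) {
    try {
        $response = Invoke-RestMethod -Uri $uri -Headers @{ Secret = $env:IDENTITY_HEADER }
        break   # success: stop retrying
    } catch {
        $status = $_.Exception.Response.StatusCode.value__
        if ($status -ne 429) { throw }   # only throttling responses are retried
        Start-Sleep -Seconds $delaySeconds
        $delaySeconds *= 2               # exponential backoff
    }
}
```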
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
Title: Configure or modify a Service Fabric managed cluster node type description: This article walks through how to modify a managed cluster node type Previously updated : 5/12/2022 ++++ Last updated : 07/11/2022 # Service Fabric managed cluster node types
New-AzServiceFabricManagedNodeType -ResourceGroupName $resourceGroup -ClusterNam
You can remove a Service Fabric managed cluster node type using the portal or PowerShell.

> [!NOTE]
-> To remove a primary node type from a Service Fabric managed cluster, you must use PowerShell and there must be more then one primary node type available.
+> To remove a primary node type from a Service Fabric managed cluster, you must use PowerShell and there must be more than one primary node type available.
### Remove with portal

1) Log in to [Azure portal](https://portal.azure.com/)
Set-AzServiceFabricManagedNodeType -ResourceGroupName $rgName -ClusterName $clus
## Modify the VM SKU for a node type
-Service Fabric managed cluster does not support in-place modification of the VM SKU, but is simpler then classic. In order to accomplish this you'll need to do the following:
+Service Fabric managed cluster does not support in-place modification of the VM SKU, but is simpler than classic. In order to accomplish this you'll need to do the following:
* [Create a new node type via portal, ARM template, or PowerShell](how-to-managed-cluster-modify-node-type.md#add-a-node-type) with the required VM SKU. You'll need to use a template or PowerShell for adding a primary or stateless node type.
* Migrate your workload over. One way is to use a [placement property to ensure that certain workloads run only on certain types of nodes in the cluster](./service-fabric-cluster-resource-manager-cluster-description.md#node-properties-and-placement-constraints).
* [Delete old node type via portal or PowerShell](how-to-managed-cluster-modify-node-type.md#remove-a-node-type). To remove a primary node type you will have to use PowerShell.
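A sketch of this flow with the Az.ServiceFabric cmdlets follows; node type names, instance count, and SKU are placeholders.

```powershell
# Add a new node type with the required VM SKU...
New-AzServiceFabricManagedNodeType -ResourceGroupName "myRG" -ClusterName "myCluster" `
    -Name "NT2" -InstanceCount 6 -VMSize "Standard_D4s_v3"
# ...migrate workloads to NT2 (for example via placement constraints), then
# remove the old node type.
Remove-AzServiceFabricManagedNodeType -ResourceGroupName "myRG" -ClusterName "myCluster" -Name "NT1"
```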
Service Fabric managed clusters by default configure one managed disk. By config
Configure more managed disks by declaring the `additionalDataDisks` property and required parameters in your Resource Manager template as follows:

**Feature Requirements**
-* Lun must be unique per disk and can not use reserved lun 0
+* Lun must be unique per disk and can't use reserved lun 0
* Disk letter cannot use reserved letters C or D and cannot be modified once created. S will be used as default if not specified.
* Must specify a [supported disk type](how-to-managed-cluster-managed-disk.md)
* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
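For illustration, one additional data disk entry might take the following shape, expressed here as a PowerShell hashtable; the property names are assumed to mirror the ARM schema described by the requirements above.

```powershell
# Assumed property names; values chosen to satisfy the listed requirements.
$additionalDataDisks = @(
    @{
        lun        = 1                  # lun 0 is reserved
        diskLetter = "S"                # C and D are reserved; S is the default
        diskType   = "StandardSSD_LRS"  # must be a supported disk type
        diskSizeGB = 256
    }
)
```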
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-networking.md
Title: Configure network settings for Service Fabric managed clusters description: Learn how to configure your Service Fabric managed cluster for NSG rules, RDP port access, load-balancing rules, and more. Previously updated : 2/14/2022++++ Last updated : 07/11/2022 + # Configure network settings for Service Fabric managed clusters Service Fabric managed clusters are created with a default networking configuration. This configuration consists of an [Azure Load Balancer](../load-balancer/load-balancer-overview.md) with a public IP, a VNet with one subnet allocated, and an NSG configured for essential cluster functionality. There are also optional NSG rules applied, such as allowing all outbound traffic by default, intended to make customer configuration easier. This document walks through how to modify the following networking configuration options and more:
Managed clusters do not enable IPv6 by default. This feature will enable full du
This feature allows customers to use an existing virtual network by specifying a dedicated subnet the managed cluster will deploy its resources into. This can be useful if you already have a configured VNet and subnet with related security policies and traffic routing that you want to use. After you deploy to an existing virtual network, it's easy to use or incorporate other networking features, like Azure ExpressRoute, Azure VPN Gateway, a network security group, and virtual network peering. Additionally, you can [bring your own Azure Load balancer](#byolb) if needed.

> [!NOTE]
-> When using BYOVNET, managed cluster resources will be deployed in one subnet.
+> When using BYOVNET, managed cluster resources will be deployed in one subnet.
> [!NOTE]
-> This setting cannot be changed once the cluster is created and the managed cluster will assign a NSG to the provided subnet. Do not override the NSG assignment or traffic may break.
+> This setting cannot be changed once the cluster is created and the managed cluster will assign an NSG to the provided subnet. Do not override the NSG assignment or traffic may break.
**To bring your own virtual network:**
This feature allows customers to use an existing virtual network by specifying a
<a id="byolb"></a>
## Bring your own Azure Load Balancer

Managed clusters create an Azure public Standard Load Balancer and fully qualified domain name with a static public IP for both the primary and secondary node types. Bring your own load balancer allows you to use an existing Azure Load Balancer for secondary node types for both inbound and outbound traffic. When you bring your own Azure Load Balancer, you can:

* Use a pre-configured Load Balancer static IP address for either private or public traffic
Managed clusters create an Azure public Standard Load Balancer and fully qualifi
Here are a couple of example scenarios customers may use this for:
-In this example, a customer wants to route traffic through an existing Azure Load Balancer configured with an existing static ip address to two node types.
+In this example, a customer wants to route traffic through an existing Azure Load Balancer configured with an existing static IP address to two node types.
![Bring your own Load Balancer example 1][sfmc-byolb-example-1]
In this example, a customer wants to route traffic through existing internal Azu
![Bring your own Load Balancer example 3][sfmc-byolb-example-3]
-To configure bring your own load balancer:
+To configure with your own load balancer:
1. Get the service `Id` from your subscription for Service Fabric Resource Provider application:
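One way to perform this lookup is with the `Get-AzADServicePrincipal` cmdlet; the display name used here is an assumption.

```powershell
# Look up the Service Fabric Resource Provider service principal and keep its Id.
$sfrp = Get-AzADServicePrincipal -DisplayName "Azure Service Fabric Resource Provider"
$sfrp.Id
```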
service-fabric How To Managed Cluster Stateless Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-stateless-node-type.md
Title: Deploy a Service Fabric managed cluster with stateless node types description: Learn how to create and deploy stateless node types in Service Fabric managed clusters- Previously updated : 4/11/2022--+++ + Last updated : 07/11/2022 + # Deploy a Service Fabric managed cluster with stateless node types
-Service Fabric node types come with an inherent assumption that at some point of time, stateful services might be placed on the nodes. Stateless node types change this assumption for a node type. This allows the node type to benefit from features such as faster scale out operations, support for Automatic OS Upgrades, Spot VMs, and scaling out to more than 100 nodes in a node type.
+Service Fabric node types come with an inherent assumption that at some point of time, stateful services might be placed on the nodes. Stateless node types change this assumption for a node type. This allows the node type to benefit from features such as faster scale-out operations, support for Automatic OS Upgrades, Spot VMs, and scaling out to more than 100 nodes in a node type.
* Primary node types can't be configured to be stateless.
* Stateless node types require an API version of **2021-05-01** or later.
service-fabric How To Managed Cluster Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-upgrades.md
Title: Upgrading Azure Service Fabric managed clusters description: Learn about options for upgrading your Azure Service Fabric managed cluster Previously updated : 08/23/2021++++ Last updated : 07/11/2022 # Manage Service Fabric managed cluster upgrades
service-fabric How To Managed Cluster Vmss Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-vmss-extension.md
Title: Add a virtual machine scale set extension to a Service Fabric managed cluster node type description: Here's how to add a virtual machine scale set extension a Service Fabric managed cluster node type- Previously updated : 2/14/2022+++++ Last updated : 07/11/2022 # Virtual machine scale set extension support on Service Fabric managed cluster node type(s)
service-fabric How To Managed Identity Managed Cluster Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md
Title: Add a managed identity to a Service Fabric managed cluster node type description: This article shows how to add a managed identity to a Service Fabric managed cluster node type Previously updated : 5/10/2021 -++++ Last updated : 07/11/2022 # Add a managed identity to a Service Fabric managed cluster node type
service-fabric How To Managed Identity Service Fabric App Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-identity-service-fabric-app-code.md
Title: Use managed identity with an application description: How to use managed identities in Azure Service Fabric application code to access Azure Services.-- Previously updated : 10/09/2019+++++ Last updated : 07/11/2022 # How to leverage a Service Fabric application's managed identity to access Azure services
See a companion sample application that demonstrates using system-assigned and u
> [!IMPORTANT]
> Prior to using the managed identity of a Service Fabric application, the client application must be granted access to the protected resource. Please refer to the list of [Azure services which support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources) to check for support, and then to the respective service's documentation for specific steps to grant an identity access to resources of interest.
-
## Leverage a managed identity using Azure.Identity
In clusters enabled for managed identity, the Service Fabric runtime exposes a l
Specifically, the environment of a managed-identity-enabled Service Fabric service will be seeded with the following variables:
- 'IDENTITY_ENDPOINT': the localhost endpoint corresponding to service's managed identity
-- 'IDENTITY_HEADER': an unique authentication code representing the service on the current node
-- 'IDENTITY_SERVER_THUMBPRINT' : Thumbprint of service fabric managed identity server
+- 'IDENTITY_HEADER': a unique authentication code representing the service on the current node
+- 'IDENTITY_SERVER_THUMBPRINT': Thumbprint of service fabric managed identity server
> [!IMPORTANT]
> The application code should consider the value of the 'IDENTITY_HEADER' environment variable as sensitive data - it should not be logged or otherwise disseminated. The authentication code has no value outside of the local node, or after the process hosting the service has terminated, but it does represent the identity of the Service Fabric service, and so should be treated with the same precautions as the access token itself.
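Putting the three variables together, a token request might look like the following sketch. The api-version and resource URI are assumptions, and `-SkipCertificateCheck` (PowerShell 7 and later) is used only to keep the sketch short; production code should instead validate the server certificate against 'IDENTITY_SERVER_THUMBPRINT'.

```powershell
# Request a token for an example resource from the local managed identity endpoint.
$resource = [uri]::EscapeDataString("https://vault.azure.net")
$uri = "$($env:IDENTITY_ENDPOINT)?api-version=2019-07-01-preview&resource=$resource"
$token = Invoke-RestMethod -Method Get -Uri $uri `
    -Headers @{ Secret = $env:IDENTITY_HEADER } -SkipCertificateCheck
$token.access_token   # assumed response field, mirroring the IMDS token shape
```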
service-fabric How To Patch Cluster Nodes Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-patch-cluster-nodes-windows.md
Title: Patch the Windows operating system in your Service Fabric cluster description: Here's how to enable automatic OS image upgrades to patch Service Fabric cluster nodes running on Windows. Previously updated : 04/26/2022++++ Last updated : 07/11/2022 # Patch the Windows operating system in your Service Fabric cluster
service-fabric Infrastructure Service Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/infrastructure-service-faq.md
Title: Introduction to the Service Fabric Infrastructure Service description: Frequently asked questions about Service Fabric Infrastructure Service-- Previously updated : 1/21/2022-+++++ Last updated : 07/11/2022 # Introduction to the Service Fabric Infrastructure Service
The Service Fabric Infrastructure Service is a system service for Azure clusters
The rest of this document covers frequently asked questions about Infrastructure Service:
-## FAQs
+## FAQs
### What are the different kinds of updates that are managed by Infrastructure Service?

* Platform Update - An update to the underlying host for the virtual machine scale set initiated by the Azure platform and performed in a safe manner by Upgrade Domain (UD).
service-fabric Initializer Codepackages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/initializer-codepackages.md
Title: Initializer CodePackages in Service Fabric description: Describes Initializer CodePackages in Service Fabric.-- Previously updated : 03/10/2020-++++ Last updated : 07/11/2022 + # Initializer CodePackages Starting with version 7.1, Service Fabric supports **Initializer CodePackages** for [containers][containers-introduction-link] and [guest executable][guest-executables-introduction-link] applications. Initializer CodePackages provide the opportunity to perform initializations at the ServicePackage scope before other CodePackages begin execution. Their relationship to a ServicePackage is analogous to what a [SetupEntryPoint][setup-entry-point-link] is for a CodePackage.
Before proceeding with this article, we recommend getting familiar with the [Ser
> [!NOTE]
> Initializer CodePackages are currently not supported for services written using the [Reliable Services][reliable-services-link] programming model.
-
+
## Semantics

An Initializer CodePackage is expected to run to **successful completion (exit code 0)**. A failed Initializer CodePackage is restarted until it successfully completes. Multiple Initializer CodePackages are allowed and are executed to **successful completion**, **sequentially**, **in a specified order** before other CodePackages in the ServicePackage begin execution.
service-fabric Overview Managed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/overview-managed-cluster.md
Title: Service Fabric managed clusters description: Service Fabric managed clusters are an evolution of the Azure Service Fabric cluster resource model that streamlines deployment and cluster management. Previously updated : 10/22/2021++++ Last updated : 07/11/2022 # Service Fabric managed clusters
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Title: Built-in policy definitions for Azure Service Fabric description: Lists Azure Policy built-in policy definitions for Azure Service Fabric. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 07/06/2022 -++++ Last updated : 07/11/2022 + # Azure Policy built-in definitions for Azure Service Fabric This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
service-fabric Probes Codepackage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/probes-codepackage.md
Title: Azure Service Fabric probes description: How to model a liveness and readiness probe in Azure Service Fabric by using application and service manifest files.- -- Previously updated : 3/12/2020++++ Last updated : 07/11/2022 # Service Fabric Probes
service-fabric Quickstart Classic Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-classic-cluster-portal.md
Previously updated : 03/02/2022 Last updated : 07/11/2022 # Quickstart: Deploy a Service Fabric cluster using the Azure portal
service-fabric Quickstart Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-cluster-template.md
Title: Create a Service Fabric cluster using Azure Resource Manager template description: In this quickstart, you will create an Azure Service Fabric test cluster by using Azure Resource Manager template.-- Previously updated : 05/10/2021 ++ -+ Last updated : 07/11/2022 # Quickstart: Create a Service Fabric cluster using ARM template
service-fabric Quickstart Guest App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-guest-app.md
Title: Quickly deploy an existing app to a cluster description: Use an Azure Service Fabric cluster to host an existing Node.js application with Visual Studio.-- Previously updated : 12/06/2017-+++++ Last updated : 07/14/2022 # Host a Node.js application on Azure Service Fabric
This quickstart helps you deploy an existing application (Node.js in this exampl
## Prerequisites
-Before you get started, make sure that you have [set up your development environment](service-fabric-get-started.md). Which includes installing the Service Fabric SDK and Visual Studio 2019 or 2015.
+Before you get started, make sure that you have [set up your development environment](service-fabric-get-started.md), which includes installing the Service Fabric SDK and Visual Studio 2019 or 2015.
You also need to have an existing Node.js application for deployment. This quickstart uses a simple Node.js website that can be downloaded [here][download-sample]. Extract this file to your `<path-to-project>\ApplicationPackageRoot\<package-name>\Code\` folder after you create the project in the next step.
service-fabric Quickstart Managed Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-managed-cluster-template.md
Title: Deploy a Service Fabric managed cluster using Azure Resource Manager description: Learn how to create a Service Fabric managed cluster with an Azure Resource Manager template Previously updated : 5/10/2021-++++ Last updated : 07/14/2022 # Quickstart: Deploy a Service Fabric managed cluster with an Azure Resource Manager template
-Service Fabric managed clusters are an evolution of the Azure Service Fabric cluster resource model that streamlines your deployment and cluster management experience. Service Fabric managed clusters are a fully encapsulated resource that enable you to deploy a single Service Fabric cluster resource rather than having to deploy all of the underlying resources that make up a Service Fabric cluster. This article describes how to do deploy a Service Fabric managed cluster for testing in Azure using an Azure Resource Manager template (ARM template).
+Service Fabric managed clusters are an evolution of the Azure Service Fabric cluster resource model that streamlines your deployment and cluster management experience. A Service Fabric managed cluster is a fully encapsulated resource that enables you to deploy a single Service Fabric cluster resource rather than having to deploy all of the underlying resources that make up a Service Fabric cluster. This article describes how to deploy a Service Fabric managed cluster for testing in Azure using an Azure Resource Manager template (ARM template).
The three-node Basic SKU cluster deployed in this tutorial is only intended to be used for instructional purposes (rather than production workloads). For further information, see [Service Fabric managed cluster SKUs](overview-managed-cluster.md#service-fabric-managed-cluster-skus).
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
Title: Azure Service Fabric releases description: Release notes for Azure Service Fabric. Includes information on the latest features and improvements in Service Fabric. Previously updated : 04/13/2021- Last updated : 07/14/2022+++++ hide_comments: true hideEdit: true
We are excited to announce that 9.0 release of the Service Fabric runtime has st
### Key announcements
- **General Availability** Support for .NET 6.0
- **General Availability** Support for Ubuntu 20.04
-- **General Availability** Support for Multi-AZ within a single VM Scale Set (VMSS)
+- **General Availability** Support for Multi-AZ within a single virtual machine scale set
- Added support for IHost, IHostBuilder and Minimal Hosting Model
- Enabling opt-in option for Data Contract Serialization (DCS) based remoting exception
- Support creation of End-to-End Developer Experience for Linux development on Windows using WSL2
service-fabric Run To Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/run-to-completion.md
Title: RunToCompletion semantics and specifications description: Learn about Service Fabric RunToCompletion semantics and specifications, and see complete code examples and considerations.- Previously updated : 06/08/2022--++++ Last updated : 07/14/2022 + # RunToCompletion Starting with version 7.1, Service Fabric supports **RunToCompletion** semantics for [containers][containers-introduction-link] and [guest executable applications][guest-executables-introduction-link]. These semantics enable applications and services that complete a task and exit, in contrast to always running applications and services.
service-fabric Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/samples-cli.md
Title: Azure CLI (az) and Azure Service Fabric CLI (sfctl) Samples description: Azure CLI (az) and Azure Service Fabric CLI (sfctl) Samples on managing clusters, applications, and services.- Previously updated : 04/09/2018 -++++ Last updated : 07/14/2022 # Azure CLI (az) and Azure Service Fabric CLI (sfctl) Samples
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Fabric description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Fabric. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/06/2022---+++ -+ Last updated : 07/14/2022 # Azure Policy Regulatory Compliance controls for Azure Service Fabric
service-fabric Service Fabric Api Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-api-management-overview.md
Title: Azure Service Fabric with API Management overview description: This article is an introduction to using Azure API Management as a gateway to your Service Fabric applications. Previously updated : 06/22/2017++++ Last updated : 07/14/2022 # Service Fabric with Azure API Management overview
In the simplest case, traffic is forwarded to a stateless service instance. To a
**Example**
-In the following scenario, a Service Fabric application contains a stateless service named `fabric:/app/fooservice`, that exposes an internal HTTP API. The service instance name is well known and can be hard-coded directly in the API Management inbound processing policy.
+In the following scenario, a Service Fabric application contains a stateless service named `fabric:/app/fooservice` that exposes an internal HTTP API. The service instance name is well known and can be hard-coded directly in the API Management inbound processing policy.
![Diagram that shows a Service Fabric application contains a stateless service that exposes an internal HTTP API.][sf-apim-static-stateless]
service-fabric Service Fabric Application And Service Manifests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-and-service-manifests.md
Title: Describing Azure Service Fabric apps and services description: Describes how manifests are used to describe Service Fabric applications and services.- Previously updated : 8/12/2019++++ Last updated : 07/14/2022 + # Service Fabric application and service manifests This article describes how Service Fabric applications and services are defined and versioned using the ApplicationManifest.xml and ServiceManifest.xml files. For more detailed examples, see [application and service manifest examples](service-fabric-manifest-examples.md). The XML schema for these manifest files is documented in [ServiceFabricServiceModel.xsd schema documentation](service-fabric-service-model-schema.md).
service-fabric Service Fabric Application And Service Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-and-service-security.md
Title: Learn about Azure Service Fabric application security description: An overview of how to securely run microservices applications on Service Fabric. Learn how to run services and startup script under different security accounts, authenticate and authorize users, manage application secrets, secure service communications, use an API gateway, and secure application data at rest. - Previously updated : 03/16/2018++++ Last updated : 07/14/2022 + # Service Fabric application and service security A microservices architecture can bring [many benefits](service-fabric-overview-microservices.md). Managing the security of microservices, however, is a challenge and different than managing traditional monolithic applications security.
service-fabric Service Fabric Application Arm Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-arm-resource.md
Title: Deploy and upgrade with Azure Resource Manager description: Learn how to deploy applications and services to a Service Fabric cluster using an Azure Resource Manager template.-- Previously updated : 12/06/2017+++++ Last updated : 07/14/2022 + # Manage applications and services as Azure Resource Manager resources You can deploy applications and services onto your Service Fabric cluster via Azure Resource Manager. This means that instead of deploying and managing applications via PowerShell or CLI after having to wait for the cluster to be ready, you can now express applications and services in JSON and deploy them in the same Resource Manager template as your cluster. The process of application registration, provisioning, and deployment all happens in one step.
service-fabric Service Fabric Application Lifecycle Sfctl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-lifecycle-sfctl.md
Title: Manage Azure Service Fabric applications using sfctl description: Learn how to deploy and remove applications from an Azure Service Fabric cluster by using Azure Service Fabric CLI--- Previously updated : 07/31/2018-+++++ Last updated : 07/14/2022 + # Manage an Azure Service Fabric application by using Azure Service Fabric CLI (sfctl) Learn how to create and delete applications that are running in an Azure Service Fabric cluster.
service-fabric Service Fabric Application Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-lifecycle.md
Last updated 05/25/2022 + # Service Fabric application lifecycle As with other platforms, an application on Azure Service Fabric usually goes through the following phases: design, development, testing, deployment, upgrading, maintenance, and removal. Service Fabric provides first-class support for the full application lifecycle of cloud applications, from development through deployment, daily management, and maintenance to eventual decommissioning. The service model enables several different roles to participate independently in the application lifecycle. This article provides an overview of the APIs and how they are used by the different roles throughout the phases of the Service Fabric application lifecycle.
service-fabric Service Fabric Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-model.md
Title: Azure Service Fabric application model description: How to model and describe applications and services in Azure Service Fabric using application and service manifest files.- Previously updated : 2/23/2018++++ Last updated : 07/14/2022 + # Model an application in Service Fabric This article provides an overview of the Azure Service Fabric application model and how to define an application and service via manifest files.
service-fabric Service Fabric Application Runas Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-runas-security.md
Title: Run a service under system and local security accounts description: Learn how to run a Service Fabric application under system and local security accounts. Create security principals and apply the Run-As policy to securely run your services.-- Previously updated : 03/29/2018+++++ Last updated : 07/14/2022 + # Run a service as a local user account or local system account By using Azure Service Fabric, you can secure applications that are running in the cluster under different user accounts. By default, Service Fabric applications run under the account that the Fabric.exe process runs under. Service Fabric also provides the capability to run applications under a local user or system account. Supported local system account types are **LocalUser**, **NetworkService**, **LocalService**, and **LocalSystem**. If you're running Service Fabric on a Windows standalone cluster, you can run a service under [Active Directory domain accounts](service-fabric-run-service-as-ad-user-or-group.md) or [group managed service accounts](service-fabric-run-service-as-gmsa.md).
service-fabric Service Fabric Application Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-scenarios.md
Title: Application scenarios and design description: Overview of categories of cloud applications in Service Fabric. Discusses application design that uses stateful and stateless services. Previously updated : 01/08/2020++++ Last updated : 07/14/2022 + # Service Fabric application scenarios Azure Service Fabric offers a reliable and flexible platform where you can write and run many types of business applications and services. These applications and microservices can be stateless or stateful, and they're resource-balanced across virtual machines to maximize efficiency.
service-fabric Service Fabric Application Secret Management Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-secret-management-linux.md
Title: Set up an encryption cert on Linux clusters description: Learn how to set up an encryption certificate and encrypt secrets on Linux clusters.--- Previously updated : 01/04/2019-+++++ Last updated : 07/14/2022 + # Set up an encryption certificate and encrypt secrets on Linux clusters This article shows how to set up an encryption certificate and use it to encrypt secrets on Linux clusters. For Windows clusters, see [Set up an encryption certificate and encrypt secrets on Windows clusters][secret-management-windows-specific-link].
service-fabric Service Fabric Application Secret Management Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-secret-management-windows.md
Title: Set up an encryption cert on Windows clusters description: Learn how to set up an encryption certificate and encrypt secrets on Windows clusters.-- Previously updated : 01/04/2019+++++ Last updated : 07/14/2022 + # Set up an encryption certificate and encrypt secrets on Windows clusters This article shows how to set up an encryption certificate and use it to encrypt secrets on Windows clusters. For Linux clusters, see [Set up an encryption certificate and encrypt secrets on Linux clusters.][secret-management-linux-specific-link]
service-fabric Service Fabric Application Secret Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-secret-management.md
Title: Manage Azure Service Fabric application secrets description: Learn how to secure secret values in a Service Fabric application (platform-agnostic).-- Previously updated : 01/04/2019-+++++ Last updated : 07/14/2022 + # Manage encrypted secrets in Service Fabric applications This guide walks you through the steps of managing secrets in a Service Fabric application. Secrets can be any sensitive information, such as storage connection strings, passwords, or other values that should not be handled in plain text.
service-fabric Service Fabric Application Secret Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-secret-store.md
Title: Azure Service Fabric Central Secret Service description: This article describes how to use Central Secret Service in Azure Service Fabric-- Previously updated : 04/06/2021-+++++ Last updated : 07/14/2022 + # Central Secret Service in Azure Service Fabric Central Secret Service (CSS), also known as Central Secret Store, is a Service Fabric system service meant to safeguard secrets within a cluster. CSS eases the management of secrets for SF applications, eliminating the need to rely on encrypted parameters.
service-fabric Service Fabric Application Upgrade Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-advanced.md
Title: Advanced Application Upgrade Topics description: This article covers some advanced topics pertaining to upgrading a Service Fabric application.- Previously updated : 03/11/2020++++ Last updated : 07/14/2022 + # Service Fabric application upgrade: Advanced topics ## Add or remove service types during an application upgrade
HealthState : Ok
ApplicationParameters : { "ImportantParameter" = "1"; "NewParameter" = "testBefore" }
```
-Now, upgrade the application using the **Start-ServiceFabricApplicationUpgrade** cmdlet. This example shows an monitored upgrade, but an unmonitored upgrade can also be used. To see a full description of flags accepted by this cmdlet, see the [Azure Service Fabric PowerShell module reference](/powershell/module/servicefabric/start-servicefabricapplicationupgrade#parameters)
+Now, upgrade the application using the **Start-ServiceFabricApplicationUpgrade** cmdlet. This example shows a monitored upgrade, but an unmonitored upgrade can also be used. To see a full description of flags accepted by this cmdlet, see the [Azure Service Fabric PowerShell module reference](/powershell/module/servicefabric/start-servicefabricapplicationupgrade#parameters).
```PowerShell
PS C:\> $appParams = @{ "ImportantParameter" = "2"; "NewParameter" = "testAfter"}
```
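The upgrade itself can then be started with the new parameter values. The following sketch uses an illustrative application name and target version; only the parameters shown are required for a monitored upgrade with automatic rollback.

```PowerShell
PS C:\> Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/MyApp `
    -ApplicationTypeVersion "2.0.0" -ApplicationParameter $appParams `
    -Monitored -FailureAction Rollback
```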
service-fabric Service Fabric Application Upgrade Data Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-data-serialization.md
Title: 'Application upgrade: data serialization' description: Best practices for data serialization and how it affects rolling application upgrades. Previously updated : 11/02/2017++++ Last updated : 07/14/2022 # How data serialization affects an application upgrade
service-fabric Service Fabric Application Upgrade Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-parameters.md
Title: 'Application upgrade: upgrade parameters' description: Describes parameters related to upgrading a Service Fabric application, including health checks to perform and policies to automatically undo the upgrade.- Previously updated : 11/08/2018++++ Last updated : 07/14/2022 + # Application upgrade parameters This article describes the various parameters that apply during the upgrade of an Azure Service Fabric application. Application upgrade parameters control the time-outs and health checks that are applied during the upgrade, and they specify the policies that must be applied when an upgrade fails. Application parameters apply to upgrades using: - PowerShell
Service Fabric application upgrades using PowerShell use the [Start-ServiceFabri
Visual Studio Service Fabric application upgrade parameters are set via the Visual Studio Upgrade Settings dialog. The Visual Studio upgrade mode is selected using the **Upgrade Mode** dropdown box to either **Monitored**, **UnmonitoredAuto**, or **UnmonitoredManual**. For more information, see [Configure the upgrade of a Service Fabric application in Visual Studio](service-fabric-visualstudio-configure-upgrade.md).

### Required parameters
-(PS=PowerShell, VS=Visual Studio)
| Parameter | Applies To | Description |
| --- | --- | --- |
-ApplicationName |PS| Name of the application that is being upgraded. Examples: fabric:/VisualObjects, fabric:/ClusterMonitor. |
-ApplicationTypeVersion|PS|The version of the application type that the upgrade targets. |
-FailureAction |PS, VS|Allowed values are **Rollback**, **Manual**, and **Invalid**. The compensating action to perform when a *Monitored* upgrade encounters monitoring policy or health policy violations. <br>**Rollback** specifies that the upgrade will automatically roll back to the pre-upgrade version. <br>**Manual** indicates that the upgrade will switch to the *UnmonitoredManual* upgrade mode. <br>**Invalid** indicates that the failure action is invalid.|
-Monitored |PS|Indicates that the upgrade mode is monitored. After the cmdlet finishes an upgrade for an upgrade domain, if the health of the upgrade domain and the cluster meet the health policies that you define, Service Fabric upgrades the next upgrade domain. If the upgrade domain or cluster fails to meet health policies, the upgrade fails and Service Fabric rolls back the upgrade for the upgrade domain or reverts to manual mode per the policy specified. This is the recommended mode for application upgrades in a production environment. |
-UpgradeMode | VS | Allowed values are **Monitored** (default), **UnmonitoredAuto**, or **UnmonitoredManual**. See PowerShell parameters for each mode in this article for details. |
-UnmonitoredAuto | PS | Indicates that the upgrade mode is unmonitored automatic. After Service Fabric upgrades an upgrade domain, Service Fabric upgrades the next upgrade domain irrespective of the application health state. This mode is not recommended for production, and is only useful during development of an application. |
-UnmonitoredManual | PS | Indicates that the upgrade mode is unmonitored manual. After Service Fabric upgrades an upgrade domain, it waits for you to upgrade the next upgrade domain by using the *Resume-ServiceFabricApplicationUpgrade* cmdlet. |
+ApplicationName | PowerShell | Name of the application that is being upgraded. Examples: fabric:/VisualObjects, fabric:/ClusterMonitor. |
+ApplicationTypeVersion| PowerShell |The version of the application type that the upgrade targets. |
+FailureAction | PowerShell, Visual Studio |Allowed values are **Rollback**, **Manual**, and **Invalid**. The compensating action to perform when a *Monitored* upgrade encounters monitoring policy or health policy violations. <br>**Rollback** specifies that the upgrade will automatically roll back to the pre-upgrade version. <br>**Manual** indicates that the upgrade will switch to the *UnmonitoredManual* upgrade mode. <br>**Invalid** indicates that the failure action is invalid.|
+Monitored | PowerShell |Indicates that the upgrade mode is monitored. After the cmdlet finishes an upgrade for an upgrade domain, if the health of the upgrade domain and the cluster meet the health policies that you define, Service Fabric upgrades the next upgrade domain. If the upgrade domain or cluster fails to meet health policies, the upgrade fails and Service Fabric rolls back the upgrade for the upgrade domain or reverts to manual mode per the policy specified. This is the recommended mode for application upgrades in a production environment. |
+UpgradeMode | Visual Studio | Allowed values are **Monitored** (default), **UnmonitoredAuto**, or **UnmonitoredManual**. See PowerShell parameters for each mode in this article for details. |
+UnmonitoredAuto | PowerShell | Indicates that the upgrade mode is unmonitored automatic. After Service Fabric upgrades an upgrade domain, Service Fabric upgrades the next upgrade domain irrespective of the application health state. This mode is not recommended for production, and is only useful during development of an application. |
+UnmonitoredManual | PowerShell | Indicates that the upgrade mode is unmonitored manual. After Service Fabric upgrades an upgrade domain, it waits for you to upgrade the next upgrade domain by using the *Resume-ServiceFabricApplicationUpgrade* cmdlet. |
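To ground these required parameters, the following is a minimal sketch of starting a monitored upgrade from PowerShell. The cluster endpoint and target version are illustrative placeholders, not values taken from this article:

```powershell
# Connect to the cluster first (endpoint is a placeholder).
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000"

# Start a monitored upgrade that rolls back automatically on health policy violations.
Start-ServiceFabricApplicationUpgrade `
    -ApplicationName "fabric:/VisualObjects" `
    -ApplicationTypeVersion "2.0.0.0" `
    -Monitored `
    -FailureAction Rollback
```

If you start the upgrade with *UnmonitoredManual* instead of *-Monitored*, you advance each upgrade domain yourself with the *Resume-ServiceFabricApplicationUpgrade* cmdlet, as described in the table above.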
### Optional parameters
The health evaluation parameters are optional. If the health evaluation criteria
> [!div class="mx-tdBreakAll"]
> | Parameter | Applies To | Description |
> | --- | --- | --- |
-> | ApplicationParameter |PS, VS| Specifies the overrides for application parameters.<br>PowerShell application parameters are specified as hashtable name/value pairs. For example, @{ "VotingData_MinReplicaSetSize" = "3"; "VotingData_PartitionCount" = "1" }.<br>Visual Studio application parameters can be specified in the Publish Service Fabric Application dialog in the **Application Parameters File** field.
-> | Confirm |PS| Allowed values are **True** and **False**. Prompts for confirmation before running the cmdlet. |
-> | ConsiderWarningAsError |PS, VS |Allowed values are **True** and **False**. Default value is **False**. Treat the warning health events for the application as errors when evaluating the health of the application during upgrade. By default, Service Fabric does not evaluate warning health events to be failures (errors), so the upgrade can proceed even if there are warning events. |
-> | DefaultServiceTypeHealthPolicy | PS, VS |Specifies the health policy for the default service type to use for the monitored upgrade in the format MaxPercentUnhealthyPartitionsPerService, MaxPercentUnhealthyReplicasPerPartition, MaxPercentUnhealthyServices. For example, 5,10,15 indicates the following values: MaxPercentUnhealthyPartitionsPerService = 5, MaxPercentUnhealthyReplicasPerPartition = 10, MaxPercentUnhealthyServices = 15. |
-> | Force | PS, VS | Allowed values are **True** and **False**. Indicates that the upgrade process skips the warning message and forces the upgrade even when the version number hasn't changed. This is useful for local testing but is not recommended for use in a production environment as it requires removing the existing deployment which causes down-time and potential data loss. |
-> | ForceRestart |PS, VS |If you update a configuration or data package without updating the service code, the service is restarted only if the ForceRestart property is set to **True**. When the update is complete, Service Fabric notifies the service that a new configuration package or data package is available. The service is responsible for applying the changes. If necessary, the service can restart itself. |
-> | HealthCheckRetryTimeoutSec |PS, VS |The duration (in seconds) that Service Fabric continues to perform health evaluation before declaring the upgrade as failed. The default is 600 seconds. This duration starts after *HealthCheckWaitDurationSec* is reached. Within this *HealthCheckRetryTimeout*, Service Fabric might perform multiple health checks of the application health. The default value is 10 minutes and should be customized appropriately for your application. |
-> | HealthCheckStableDurationSec |PS, VS |The duration (in seconds) to verify that the application is stable before moving to the next upgrade domain or completing the upgrade. This wait duration is used to prevent undetected changes of health right after the health check is performed. The default value is 120 seconds, and should be customized appropriately for your application. |
-> | HealthCheckWaitDurationSec |PS, VS | The time to wait (in seconds) after the upgrade has finished on the upgrade domain before Service Fabric evaluates the health of the application. This duration can also be considered as the time an application should be running before it can be considered healthy. If the health check passes, the upgrade process proceeds to the next upgrade domain. If the health check fails, Service Fabric waits for [UpgradeHealthCheckInterval](./service-fabric-cluster-fabric-settings.md#clustermanager) before retrying the health check again until the *HealthCheckRetryTimeoutSec* is reached. The default and recommended value is 0 seconds. |
-> | MaxPercentUnhealthyDeployedApplications|PS, VS |Default and recommended value is 0. Specify the maximum number of deployed applications (see the [Health section](service-fabric-health-introduction.md)) that can be unhealthy before the application is considered unhealthy and fails the upgrade. This parameter defines the application health on the node and helps detect issues during upgrade. Typically, the replicas of the application get load-balanced to the other node, which allows the application to appear healthy, thus allowing the upgrade to proceed. By specifying a strict *MaxPercentUnhealthyDeployedApplications* health, Service Fabric can detect a problem with the application package quickly and help produce a fail fast upgrade. |
-> | MaxPercentUnhealthyServices |PS, VS |A parameter to *DefaultServiceTypeHealthPolicy* and *ServiceTypeHealthPolicyMap*. Default and recommended value is 0. Specify the maximum number of services in the application instance that can be unhealthy before the application is considered unhealthy and fails the upgrade. |
-> | MaxPercentUnhealthyPartitionsPerService|PS, VS |A parameter to *DefaultServiceTypeHealthPolicy* and *ServiceTypeHealthPolicyMap*. Default and recommended value is 0. Specify the maximum number of partitions in a service that can be unhealthy before the service is considered unhealthy. |
-> | MaxPercentUnhealthyReplicasPerPartition|PS, VS |A parameter to *DefaultServiceTypeHealthPolicy* and *ServiceTypeHealthPolicyMap*. Default and recommended value is 0. Specify the maximum number of replicas in partition that can be unhealthy before the partition is considered unhealthy. |
-> | ServiceTypeHealthPolicyMap | PS, VS | Represents the health policy used to evaluate the health of services belonging to a service type. Takes a hash table input in the following format: @ {"ServiceTypeName" : "MaxPercentUnhealthyPartitionsPerService,MaxPercentUnhealthyReplicasPerPartition,MaxPercentUnhealthyServices"} For example: @{ "ServiceTypeName01" = "5,10,5"; "ServiceTypeName02" = "5,5,5" } |
-> | TimeoutSec | PS, VS | Specifies the time-out period in seconds for the operation. |
-> | UpgradeDomainTimeoutSec |PS, VS |Maximum time (in seconds) for upgrading a single upgrade domain. If this time-out is reached, the upgrade stops and proceeds based on the setting for *FailureAction*. The default value is never (Infinite) and should be customized appropriately for your application. |
-> | UpgradeReplicaSetCheckTimeoutSec |PS, VS |Measured in seconds.<br>**Stateless service**--Within a single upgrade domain, Service Fabric tries to ensure that additional instances of the service are available. If the target instance count is more than one, Service Fabric waits for more than one instance to be available, up to a maximum time-out value. This time-out is specified by using the *UpgradeReplicaSetCheckTimeoutSec* property. If the time-out expires, Service Fabric proceeds with the upgrade, regardless of the number of service instances. If the target instance count is one, Service Fabric does not wait, and immediately proceeds with the upgrade.<br><br>**Stateful service**--Within a single upgrade domain, Service Fabric tries to ensure that the replica set has a quorum. Service Fabric waits for a quorum to be available, up to a maximum time-out value (specified by the *UpgradeReplicaSetCheckTimeoutSec* property). If the time-out expires, Service Fabric proceeds with the upgrade, regardless of quorum. This setting is set as never (infinite) when rolling forward, and 1200 seconds when rolling back. |
-> | UpgradeTimeoutSec |PS, VS |A time-out (in seconds) that applies for the entire upgrade. If this time-out is reached, the upgrade stops and *FailureAction* is triggered. The default value is never (Infinite) and should be customized appropriately for your application. |
-> | WhatIf | PS | Allowed values are **True** and **False**. Shows what would happen if the cmdlet runs. The cmdlet is not run. |
+> | ApplicationParameter | PowerShell, Visual Studio | Specifies the overrides for application parameters.<br>PowerShell application parameters are specified as hashtable name/value pairs. For example, @{ "VotingData_MinReplicaSetSize" = "3"; "VotingData_PartitionCount" = "1" }.<br>Visual Studio application parameters can be specified in the Publish Service Fabric Application dialog in the **Application Parameters File** field.
+> | Confirm | PowerShell | Allowed values are **True** and **False**. Prompts for confirmation before running the cmdlet. |
+> | ConsiderWarningAsError | PowerShell, Visual Studio |Allowed values are **True** and **False**. Default value is **False**. Treats warning health events for the application as errors when evaluating the health of the application during upgrade. By default, Service Fabric does not evaluate warning health events to be failures (errors), so the upgrade can proceed even if there are warning events. |
+> | DefaultServiceTypeHealthPolicy | PowerShell, Visual Studio |Specifies the health policy for the default service type to use for the monitored upgrade in the format MaxPercentUnhealthyPartitionsPerService, MaxPercentUnhealthyReplicasPerPartition, MaxPercentUnhealthyServices. For example, 5,10,15 indicates the following values: MaxPercentUnhealthyPartitionsPerService = 5, MaxPercentUnhealthyReplicasPerPartition = 10, MaxPercentUnhealthyServices = 15. |
+> | Force | PowerShell, Visual Studio | Allowed values are **True** and **False**. Indicates that the upgrade process skips the warning message and forces the upgrade even when the version number hasn't changed. This is useful for local testing, but is not recommended in a production environment because it requires removing the existing deployment, which causes downtime and potential data loss. |
+> | ForceRestart | PowerShell, Visual Studio |If you update a configuration or data package without updating the service code, the service is restarted only if the ForceRestart property is set to **True**. When the update is complete, Service Fabric notifies the service that a new configuration package or data package is available. The service is responsible for applying the changes. If necessary, the service can restart itself. |
+> | HealthCheckRetryTimeoutSec | PowerShell, Visual Studio |The duration (in seconds) that Service Fabric continues to perform health evaluation before declaring the upgrade as failed. This duration starts after *HealthCheckWaitDurationSec* is reached. Within this *HealthCheckRetryTimeout*, Service Fabric might perform multiple health checks of the application health. The default value is 600 seconds (10 minutes) and should be customized appropriately for your application. |
+> | HealthCheckStableDurationSec | PowerShell, Visual Studio |The duration (in seconds) to verify that the application is stable before moving to the next upgrade domain or completing the upgrade. This wait duration is used to prevent undetected changes of health right after the health check is performed. The default value is 120 seconds, and should be customized appropriately for your application. |
+> | HealthCheckWaitDurationSec | PowerShell, Visual Studio | The time to wait (in seconds) after the upgrade has finished on the upgrade domain before Service Fabric evaluates the health of the application. This duration can also be considered as the time an application should be running before it can be considered healthy. If the health check passes, the upgrade process proceeds to the next upgrade domain. If the health check fails, Service Fabric waits for [UpgradeHealthCheckInterval](./service-fabric-cluster-fabric-settings.md#clustermanager) before retrying the health check, until *HealthCheckRetryTimeoutSec* is reached. The default and recommended value is 0 seconds. |
+> | MaxPercentUnhealthyDeployedApplications| PowerShell, Visual Studio |Default and recommended value is 0. Specify the maximum percentage of deployed applications (see the [Health section](service-fabric-health-introduction.md)) that can be unhealthy before the application is considered unhealthy and fails the upgrade. This parameter defines the application health on the node and helps detect issues during upgrade. Typically, the replicas of the application get load-balanced to the other node, which allows the application to appear healthy, thus allowing the upgrade to proceed. By specifying a strict *MaxPercentUnhealthyDeployedApplications* health policy, Service Fabric can detect a problem with the application package quickly and help produce a fail-fast upgrade. |
+> | MaxPercentUnhealthyServices | PowerShell, Visual Studio |A parameter to *DefaultServiceTypeHealthPolicy* and *ServiceTypeHealthPolicyMap*. Default and recommended value is 0. Specify the maximum percentage of services in the application instance that can be unhealthy before the application is considered unhealthy and fails the upgrade. |
+> | MaxPercentUnhealthyPartitionsPerService| PowerShell, Visual Studio |A parameter to *DefaultServiceTypeHealthPolicy* and *ServiceTypeHealthPolicyMap*. Default and recommended value is 0. Specify the maximum percentage of partitions in a service that can be unhealthy before the service is considered unhealthy. |
+> | MaxPercentUnhealthyReplicasPerPartition| PowerShell, Visual Studio |A parameter to *DefaultServiceTypeHealthPolicy* and *ServiceTypeHealthPolicyMap*. Default and recommended value is 0. Specify the maximum percentage of replicas in a partition that can be unhealthy before the partition is considered unhealthy. |
+> | ServiceTypeHealthPolicyMap | PowerShell, Visual Studio | Represents the health policy used to evaluate the health of services belonging to a service type. Takes a hash table input in the following format: @{ "ServiceTypeName" = "MaxPercentUnhealthyPartitionsPerService,MaxPercentUnhealthyReplicasPerPartition,MaxPercentUnhealthyServices" }. For example: @{ "ServiceTypeName01" = "5,10,5"; "ServiceTypeName02" = "5,5,5" } |
+> | TimeoutSec | PowerShell, Visual Studio | Specifies the time-out period in seconds for the operation. |
+> | UpgradeDomainTimeoutSec | PowerShell, Visual Studio |Maximum time (in seconds) for upgrading a single upgrade domain. If this time-out is reached, the upgrade stops and proceeds based on the setting for *FailureAction*. The default value is never (Infinite) and should be customized appropriately for your application. |
+> | UpgradeReplicaSetCheckTimeoutSec | PowerShell, Visual Studio |Measured in seconds.<br>**Stateless service**--Within a single upgrade domain, Service Fabric tries to ensure that additional instances of the service are available. If the target instance count is more than one, Service Fabric waits for more than one instance to be available, up to a maximum time-out value. This time-out is specified by using the *UpgradeReplicaSetCheckTimeoutSec* property. If the time-out expires, Service Fabric proceeds with the upgrade, regardless of the number of service instances. If the target instance count is one, Service Fabric does not wait, and immediately proceeds with the upgrade.<br><br>**Stateful service**--Within a single upgrade domain, Service Fabric tries to ensure that the replica set has a quorum. Service Fabric waits for a quorum to be available, up to a maximum time-out value (specified by the *UpgradeReplicaSetCheckTimeoutSec* property). If the time-out expires, Service Fabric proceeds with the upgrade, regardless of quorum. This setting is set as never (infinite) when rolling forward, and 1200 seconds when rolling back. |
+> | UpgradeTimeoutSec | PowerShell, Visual Studio |A time-out (in seconds) that applies for the entire upgrade. If this time-out is reached, the upgrade stops and *FailureAction* is triggered. The default value is never (Infinite) and should be customized appropriately for your application. |
+> | WhatIf | PowerShell | Allowed values are **True** and **False**. Shows what would happen if the cmdlet runs. The cmdlet is not run. |
The *MaxPercentUnhealthyServices*, *MaxPercentUnhealthyPartitionsPerService*, and *MaxPercentUnhealthyReplicasPerPartition* criteria can be specified per service type for an application instance. Setting these parameters per service type allows an application to contain different service types with different evaluation policies. For example, a stateless gateway service type can have a *MaxPercentUnhealthyPartitionsPerService* that is different from a stateful engine service type for a particular application instance.
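As a hedged illustration of per-service-type policies, the sketch below combines a strict default policy with a looser policy for a hypothetical gateway service type; the type names and percentages are placeholders, not recommendations:

```powershell
# Allow up to 20% unhealthy partitions per service for the gateway type;
# keep the default policy (and the engine type) strict.
$policyMap = @{ "GatewayServiceType" = "20,0,0"; "EngineServiceType" = "0,0,0" }

Start-ServiceFabricApplicationUpgrade `
    -ApplicationName "fabric:/VisualObjects" `
    -ApplicationTypeVersion "2.0.0.0" `
    -Monitored `
    -FailureAction Rollback `
    -DefaultServiceTypeHealthPolicy "0,0,0" `
    -ServiceTypeHealthPolicyMap $policyMap
```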
service-fabric Service Fabric Application Upgrade Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-troubleshooting.md
Title: Troubleshooting application upgrades
description: This article covers some common issues around upgrading a Service Fabric application and how to resolve them.
- Previously updated : 2/23/2018
+ Last updated : 07/14/2022

# Troubleshoot application upgrades
This article covers some of the common issues around upgrading an Azure Service Fabric application and how to resolve them.
service-fabric Service Fabric Application Upgrade Tutorial Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-tutorial-powershell.md
Title: Service Fabric App upgrade using PowerShell
description: This article walks through the experience of deploying a Service Fabric application, changing the code, and rolling out an upgrade using PowerShell.
- Previously updated : 8/5/2020
+ Last updated : 07/14/2022

# Service Fabric application upgrade using PowerShell
> [!div class="op_single_selector"]
> * [PowerShell](service-fabric-application-upgrade-tutorial-powershell.md)
UpgradeTimeout = 3000
## Step 4: Prepare application for upgrade
Now the application is built and ready to be upgraded. Open a PowerShell window as an administrator and run [Get-ServiceFabricApplication](/powershell/module/servicefabric/get-servicefabricapplication); it should show that application type version 1.0.0.0 of **VisualObjects** is deployed.
-The application package is stored under the following relative path where you uncompressed the Service Fabric SDK - *Samples\Services\Stateful\VisualObjects\VisualObjects\obj\x64\Debug*. You should find a "Package" folder in that directory, where the application package is stored. Check the timestamps to ensure that it is the latest build (you may need to modify the paths appropriately as well).
+The application package is stored under the following relative path where you uncompressed the Service Fabric SDK: *Samples\Services\Stateful\VisualObjects\VisualObjects\obj\x64\Debug*. You should find a "Package" folder in that directory, where the application package is stored. Check the timestamps to ensure that it is the latest build (you may need to modify the paths appropriately as well).
Now let's copy the updated application package to the Service Fabric ImageStore (where the application packages are stored by Service Fabric). The parameter *ApplicationPackagePathInImageStore* informs Service Fabric where it can find the application package. We have put the updated application in "VisualObjects\_V2" with the following command (you may need to modify paths again appropriately).
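The copy command itself isn't reproduced in this digest; a hedged sketch of that step, assuming the SDK sample layout described above and a default image store connection string, might look like the following (both paths are example values):

```powershell
# Assumes you're already connected to the cluster with Connect-ServiceFabricCluster.
# Copy the new package into the image store under "VisualObjects_V2".
Copy-ServiceFabricApplicationPackage `
    -ApplicationPackagePath ".\Samples\Services\Stateful\VisualObjects\VisualObjects\obj\x64\Debug\Package" `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore "VisualObjects_V2"
```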
service-fabric Service Fabric Application Upgrade Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-tutorial.md
Title: Service Fabric app upgrade tutorial
description: This article walks through the experience of deploying a Service Fabric application, changing the code, and rolling out an upgrade by using Visual Studio.
- Previously updated : 2/23/2018
+ Last updated : 07/14/2022

# Service Fabric application upgrade tutorial using Visual Studio
> [!div class="op_single_selector"]
> * [PowerShell](service-fabric-application-upgrade-tutorial-powershell.md)
service-fabric Service Fabric Application Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade.md
Title: Service Fabric application upgrade
description: This article provides an introduction to upgrading a Service Fabric application, including choosing upgrade modes and performing health checks.
- Previously updated : 8/5/2020
+ Last updated : 07/14/2022

# Service Fabric application upgrade
An Azure Service Fabric application is a collection of services. During an upgrade, Service Fabric compares the new [application manifest](service-fabric-application-and-service-manifests.md) with the previous version and determines which services in the application require updates. Service Fabric compares the version numbers in the service manifests with the version numbers in the previous version. If a service has not changed, that service is not upgraded.
service-fabric Service Fabric Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-architecture.md
Title: Architecture of Azure Service Fabric
-description: This article explains the architecture of Service Fabric, a distributed systems platform used to build scalable, reliable, and easily-managed applications for the cloud.
-
+description: This article explains the architecture of Service Fabric, a distributed systems platform used to build scalable, reliable, and easily managed applications for the cloud.
- Previously updated : 01/09/2020
+ Last updated : 07/14/2022

# Service Fabric architecture
Service Fabric is built with layered subsystems. These subsystems enable you to write applications that are:
service-fabric Service Fabric Assign Policy To Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-assign-policy-to-endpoint.md
Title: Assign access policies to service endpoints
-description: Learn how to assign security access polices to HTTP or HTTPS endpoints in your Service Fabric service.
- Previously updated : 03/21/2018
+description: Learn how to assign security access policies to HTTP or HTTPS endpoints in your Service Fabric service.
+ Last updated : 07/14/2022

# Assign a security access policy for HTTP and HTTPS endpoints
service-fabric Service Fabric Availability Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-availability-services.md
Title: Availability of Service Fabric services
description: Describes fault detection, failover, and recovery of a service in an Azure Service Fabric application.
- Previously updated : 08/18/2017
+ Last updated : 07/14/2022

# Availability of Service Fabric services
There can be only one Primary replica, but there can be multiple Active Secondar
If the Primary replica goes down, Service Fabric makes one of the Active Secondary replicas the new Primary replica. This Active Secondary replica already has the updated version of the state, via *replication*, and it can continue processing further read/write operations. This process is known as *reconfiguration* and is described further in the [Reconfiguration](service-fabric-concepts-reconfiguration.md) article.
-The concept of a replica being either a Primary or Active Secondary, is known as the *replica role*. These replicas are described further in the [Replicas and instances](service-fabric-concepts-replica-lifecycle.md) article.
+The concept of a replica being either a Primary or Active Secondary is known as the *replica role*. These replicas are described further in the [Replicas and instances](service-fabric-concepts-replica-lifecycle.md) article.
## Next steps
For more information on Service Fabric concepts, see the following articles:
service-fabric Service Fabric Azure Clusters Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-azure-clusters-overview.md
Title: Create clusters on Windows Server and Linux
description: Service Fabric clusters run on Windows Server and Linux. You can deploy and host Service Fabric applications anywhere you can run Windows Server or Linux.
- documentationcenter: .net
- Previously updated : 02/01/2019
+ Last updated : 07/14/2022

# Overview of Service Fabric clusters on Azure
A Service Fabric cluster on Azure is an Azure resource that uses and interacts w
![Service Fabric Cluster][Image]

### Virtual machine
-A [virtual machine](../virtual-machines/index.yml) that's part of a cluster is called a node though, technically, a cluster node is a Service Fabric runtime process. Each node is assigned a node name (a string). Nodes have characteristics, such as [placement properties](service-fabric-cluster-resource-manager-cluster-description.md#node-properties-and-placement-constraints). Each machine or VM has an auto-start service, *FabricHost.exe*, that starts running at boot time and then starts two executables, *Fabric.exe* and *FabricGateway.exe*, which make up the node. A production deployment is one node per physical or virtual machine. For testing scenarios, you can host multiple nodes on a single machine or VM by running multiple instances of *Fabric.exe* and *FabricGateway.exe*.
+A [virtual machine](../virtual-machines/index.yml) that's part of a cluster is called a node, though technically a cluster node is a Service Fabric runtime process. Each node is assigned a node name (a string). Nodes have characteristics, such as [placement properties](service-fabric-cluster-resource-manager-cluster-description.md#node-properties-and-placement-constraints). Each machine or VM has an auto-start service, *FabricHost.exe*, which starts running at boot time and then starts two executables, *Fabric.exe* and *FabricGateway.exe*, which make up the node. A production deployment is one node per physical or virtual machine. For testing scenarios, you can host multiple nodes on a single machine or VM by running multiple instances of *Fabric.exe* and *FabricGateway.exe*.
Each VM is associated with a virtual network interface card (NIC) and each NIC is assigned a private IP address. A VM is assigned to a virtual network and load balancer through the NIC.
For more information, read [Service Fabric node types and virtual machine scale sets](service-fabric-cluster-nodetypes.md).
### Azure Load Balancer
VM instances are joined behind an [Azure load balancer](../load-balancer/load-balancer-overview.md), which is associated with a [public IP address](../virtual-network/ip-services/public-ip-addresses.md) and DNS label. When you provision a cluster with *&lt;clustername&gt;*, the DNS name *&lt;clustername&gt;.&lt;location&gt;.cloudapp.azure.com* is the DNS label associated with the load balancer in front of the scale set.
-VMs in a cluster have only [private IP addresses](../virtual-network/ip-services/private-ip-addresses.md). Management traffic and service traffic are routed through the public facing load balancer. Network traffic is routed to these machines through NAT rules (clients connect to specific nodes/instances) or load-balancing rules (traffic goes to VMs round robin). A load balancer has an associated public IP with a DNS name in the format: *&lt;clustername&gt;.&lt;location&gt;.cloudapp.azure.com*. A public IP is another Azure resource in the resource group. If you define multiple node types in a cluster, a load balancer is created for each node type/scale set. Or, you can setup a single load balancer for multiple node types. The primary node type has the DNS label *&lt;clustername&gt;.&lt;location&gt;.cloudapp.azure.com*, other node types have the DNS label *&lt;clustername&gt;-&lt;nodetype&gt;.&lt;location&gt;.cloudapp.azure.com*.
+VMs in a cluster have only [private IP addresses](../virtual-network/ip-services/private-ip-addresses.md). Management traffic and service traffic are routed through the public facing load balancer. Network traffic is routed to these machines through NAT rules (clients connect to specific nodes/instances) or load-balancing rules (traffic goes to VMs round robin). A load balancer has an associated public IP with a DNS name in the format: *&lt;clustername&gt;.&lt;location&gt;.cloudapp.azure.com*. A public IP is another Azure resource in the resource group. If you define multiple node types in a cluster, a load balancer is created for each node type/scale set. Or, you can set up a single load balancer for multiple node types. The primary node type has the DNS label *&lt;clustername&gt;.&lt;location&gt;.cloudapp.azure.com*, other node types have the DNS label *&lt;clustername&gt;-&lt;nodetype&gt;.&lt;location&gt;.cloudapp.azure.com*.
### Storage accounts
Each cluster node type is supported by an [Azure storage account](../storage/common/storage-introduction.md) and managed disks.
In addition to client certificates, Azure Active Directory can also be configure
For more information, read [Client-to-node security](service-fabric-cluster-security.md#client-to-node-security).

### Role-based access control
-Azure role-based access control (Azure RBAC) allows you to assign fine-grained access controls on Azure resources. You can assign different access rules to subscriptions, resource groups, and resources. Azure RBAC rules are inherited along the resource hierarchy unless overridden at a lower level. You can assign any user or user groups on your AAD with Azure RBAC rules so that designated users and groups can modify your cluster. For more information, read the [Azure RBAC overview](../role-based-access-control/overview.md).
+
+Azure role-based access control (Azure RBAC) allows you to assign fine-grained access controls on Azure resources. You can assign different access rules to subscriptions, resource groups, and resources. Azure RBAC rules are inherited along the resource hierarchy unless overridden at a lower level. You can assign any user or user groups on your Azure AD with Azure RBAC rules so that designated users and groups can modify your cluster. For more information, read the [Azure RBAC overview](../role-based-access-control/overview.md).
Service Fabric also supports access control to limit access to certain cluster operations for different groups of users. This helps make the cluster more secure. Two access control types are supported for clients that connect to a cluster: Administrator role and User role.
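A minimal, hedged sketch of connecting as a client that presents a certificate follows; whether that certificate maps to the Administrator or User role depends on how your cluster security is configured, and every endpoint and thumbprint below is a placeholder:

```powershell
# Connect with a client certificate from the current user's certificate store.
# All thumbprints and the endpoint are placeholder values.
Connect-ServiceFabricCluster `
    -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" `
    -X509Credential `
    -ServerCertThumbprint "AA11BB22CC33DD44EE55FF66AA77BB88CC99DD00" `
    -FindType FindByThumbprint `
    -FindValue "0011223344556677889900AABBCCDDEEFF001122" `
    -StoreLocation CurrentUser `
    -StoreName My
```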
service-fabric Service Fabric Azure Resource Manager Guardrails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-azure-resource-manager-guardrails.md
Title: Service Fabric Azure Resource Manager deployment guardrails
description: This article provides an overview of common mistakes made when deploying a Service Fabric cluster through Azure Resource Manager and how to avoid them.
- documentationcenter: .net
- Previously updated : 02/13/2020
+ Last updated : 07/14/2022

# Service Fabric guardrails
When deploying a Service Fabric cluster, guardrails are put in place, which will fail an Azure Resource Manager deployment in the case of an invalid cluster configuration. The following sections provide an overview of common cluster configuration issues and the steps required to mitigate these issues.
service-fabric Service Fabric Backup Restore Service Ondemand Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-backup-restore-service-ondemand-backup.md
Title: On-demand backup in Azure Service Fabric
description: Use the backup and restore feature in Service Fabric to back up your application data on an as-needed basis.
- Previously updated : 10/30/2018
+ Last updated : 07/14/2022

# On-demand backup in Azure Service Fabric
You can back up data of Reliable Stateful services and Reliable Actors to address disaster or data loss scenarios.
service-fabric Service Fabric Backup Restore Service Trigger Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-backup-restore-service-trigger-restore.md
Title: Restoring backup in Azure Service Fabric
description: Use the periodic backup and restore feature in Service Fabric for restoring data from a backup of your application data.
- Previously updated : 10/30/2018
+ Last updated : 07/14/2022

# Restoring backup in Azure Service Fabric
service-fabric Service Fabric Backuprestoreservice Configure Periodic Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-backuprestoreservice-configure-periodic-backup.md
Title: Understanding periodic backup configuration
description: Use Service Fabric's periodic backup and restore feature to configure periodic backup of your Reliable stateful services or Reliable Actors.
- Previously updated : 2/01/2019
+ Last updated : 07/14/2022

# Understanding periodic backup configuration in Azure Service Fabric
Configuring periodic backup of your Reliable stateful services or Reliable Actors consists of the following steps:
service-fabric Service Fabric Backuprestoreservice Quickstart Azurecluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-backuprestoreservice-quickstart-azurecluster.md
Title: Periodic backup and restore in Azure Service Fabric
description: Use Service Fabric's periodic backup and restore feature for enabling periodic data backup of your application data.
- Previously updated : 5/20/2022
+ Last updated : 07/14/2022

# Periodic backup and restore in an Azure Service Fabric cluster
> [!div class="op_single_selector"]
> * [Clusters on Azure](service-fabric-backuprestoreservice-quickstart-azurecluster.md)
service-fabric Service Fabric Backuprestoreservice Quickstart Standalonecluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-backuprestoreservice-quickstart-standalonecluster.md
Title: Periodic backup/restore in standalone Azure Service Fabric
description: Use a standalone Service Fabric cluster's periodic backup and restore feature for enabling periodic data backup of your application data.
- Previously updated : 5/24/2019
+ Last updated : 07/14/2022

# Periodic backup and restore in a standalone Service Fabric
> [!div class="op_single_selector"]
> * [Clusters on Azure](service-fabric-backuprestoreservice-quickstart-azurecluster.md)
service-fabric Service Fabric Best Practices Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-applications.md
Title: Azure Service Fabric application design best practices
description: Best practices and design considerations for developing applications and services using Azure Service Fabric.
- Previously updated : 06/18/2019
+ Last updated : 07/14/2022

# Azure Service Fabric application design best practices
service-fabric Service Fabric Best Practices Capacity Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-capacity-scaling.md
Title: Capacity planning and scaling for Azure Service Fabric
description: Best practices for planning and scaling Service Fabric clusters and applications.
- Previously updated : 04/25/2019
+ Last updated : 07/14/2022

# Capacity planning and scaling for Azure Service Fabric
service-fabric Service Fabric Best Practices Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-infrastructure-as-code.md
Title: Azure Service Fabric infrastructure as Code Best Practices
description: Best practices and design considerations for managing Azure Service Fabric as infrastructure as code.
- Previously updated : 01/23/2019
+ Last updated : 07/14/2022

# Infrastructure as code
service-fabric Service Fabric Best Practices Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-monitoring.md
Title: Azure Service Fabric monitoring best practices
description: Best practices and design considerations for monitoring clusters and applications using Azure Service Fabric.
- Previously updated : 01/23/2019
+ Last updated : 07/14/2022

# Monitoring and diagnostic best practices for Azure Service Fabric
service-fabric Service Fabric Best Practices Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-networking.md
Title: Azure Service Fabric networking best practices
description: Rules and design considerations for managing network connectivity using Azure Service Fabric.
- Previously updated : 03/01/2022
+ Last updated : 07/14/2022

# Networking
service-fabric Service Fabric Best Practices Replica Set Size Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-replica-set-size-configuration.md
Title: Stateful service replica set size configuration
description: Best practices for TargetReplicaSetSize and MinReplicaSetSize configuration.
- Previously updated : 02/04/2022
+ Last updated : 07/14/2022

# Stateful service replica set size configuration
service-fabric Service Fabric Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md
Title: Azure Service Fabric security best practices
description: Best practices and design considerations for keeping Azure Service Fabric clusters and applications secure.
- Previously updated : 01/23/2019
+ Last updated : 07/14/2022

# Azure Service Fabric security
service-fabric Service Fabric Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-capacity-planning.md
Title: Capacity planning for Service Fabric apps
description: Describes how to identify the number of compute nodes required for a Service Fabric application.
- Previously updated : 2/23/2018
+ Last updated : 07/14/2022

# Capacity planning for Service Fabric applications
This document teaches you how to estimate the amount of resources (CPUs, RAM, disk storage) you need to run your Azure Service Fabric applications. It is common for your resource requirements to change over time. You typically require few resources as you develop/test your service, and then require more resources as you go into production and your application grows in popularity. When you design your application, think through the long-term requirements and make choices that allow your service to scale to meet high customer demand.
service-fabric Service Fabric Choose Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-choose-framework.md
Title: Service Fabric programming model overview
description: 'Service Fabric offers two frameworks for building
- Previously updated : 01/07/2020
+ Last updated : 07/14/2022

# Service Fabric programming model overview
service-fabric Service Fabric Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cli.md
Title: Get started with Azure Service Fabric CLI
description: Learn how to use the Azure Service Fabric CLI. Learn how to connect to a cluster and how to manage applications.
- Previously updated : 5/19/2020
+ Last updated : 07/14/2022

# Azure Service Fabric CLI
The Azure Service Fabric command-line interface (CLI) is a command-line utility for interacting with and managing Service Fabric entities. The Service Fabric CLI can be used with either Windows or Linux clusters. The Service Fabric CLI runs on any platform where Python is supported.
service-fabric Service Fabric Cloud Services Migration Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cloud-services-migration-differences.md
Title: Differences between Cloud Services and Service Fabric
description: A conceptual overview for migrating applications from Cloud Services to Service Fabric.
- Previously updated : 11/02/2017
+ Last updated : 07/14/2022

# Learn about the differences between Cloud Services and Service Fabric before migrating applications
Microsoft Azure Service Fabric is the next-generation cloud application platform for highly scalable, highly reliable distributed applications. It introduces many new features for packaging, deploying, upgrading, and managing distributed cloud applications.
service-fabric Service Fabric Cloud Services Migration Worker Role Stateless Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cloud-services-migration-worker-role-stateless-service.md
Title: Convert Azure Cloud Services apps to Service Fabric
description: This guide compares Cloud Services Web and Worker Roles and Service Fabric stateless services to help migrate from Cloud Services to Service Fabric.
- Previously updated : 11/02/2017
+ Last updated : 07/14/2022

# Guide to converting Web and Worker Roles to Service Fabric stateless services
This article describes how to migrate your Cloud Services Web and Worker Roles to Service Fabric stateless services. This is the simplest migration path from Cloud Services to Service Fabric for applications whose overall architecture is going to stay roughly the same.
service-fabric Service Fabric Cluster Azure Deployment Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-azure-deployment-preparation.md
Title: Plan an Azure Service Fabric cluster deployment
description: Learn about planning and preparing for a production Service Fabric cluster deployment to Azure.
- Previously updated : 03/20/2019
+ Last updated : 07/14/2022

# Plan and prepare for a cluster deployment
Planning and preparing for a production cluster deployment is very important. There are many factors to consider. This article walks you through the steps of preparing your cluster deployment.
service-fabric Service Fabric Cluster Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-capacity.md
Title: Service Fabric cluster capacity planning considerations
description: Node types, durability, reliability, and other things to consider when planning your Service Fabric cluster.
- Previously updated : 06/23/2022
+ Last updated : 07/14/2022

# Service Fabric cluster capacity planning considerations
Cluster capacity planning is important for every Service Fabric production environment. Key considerations include:
service-fabric Service Fabric Cluster Change Cert Thumbprint To Cn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-change-cert-thumbprint-to-cn.md
Title: Update a cluster to use certificate common name
description: Learn how to convert an Azure Service Fabric cluster certificate from thumbprint-based declarations to common names.
- Previously updated : 09/06/2019
+ Last updated : 07/14/2022

# Convert cluster certificates from thumbprint-based declarations to common names
The signature of a certificate (commonly known as a thumbprint) is unique. A cluster certificate declared by thumbprint refers to a specific instance of a certificate. This specificity makes certificate rollover, and management in general, difficult and explicit. Each change requires orchestrating upgrades of the cluster and the underlying computing hosts.
service-fabric Service Fabric Cluster Config Upgrade Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-config-upgrade-azure.md
Title: Upgrade the configuration of an Azure Service Fabric cluster
description: Learn how to upgrade the configuration that runs a Service Fabric cluster in Azure using a Resource Manager template.
- Previously updated : 11/09/2018
+ Last updated : 07/14/2022

# Upgrade the configuration of a cluster in Azure
This article describes how to customize the various fabric settings for your Service Fabric cluster. For clusters hosted in Azure, you can customize settings through the [Azure portal](https://portal.azure.com) or by using an Azure Resource Manager template.
service-fabric Service Fabric Cluster Config Upgrade Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-config-upgrade-windows-server.md
Title: Upgrade the configuration of a standalone cluster
description: Learn how to upgrade the configuration that runs a standalone Service Fabric cluster.
- Previously updated : 11/09/2018
+ Last updated : 07/14/2022

# Upgrade the configuration of a standalone cluster
For any modern system, the ability to upgrade is key to the long-term success of your product. An Azure Service Fabric cluster is a resource that you own. This article describes how to upgrade the configuration settings of your standalone Service Fabric cluster.
service-fabric Service Fabric Cluster Creation Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-create-template.md
Title: Create an Azure Service Fabric cluster template
description: Learn how to create a Resource Manager template for a Service Fabric cluster. Configure security, Azure Key Vault, and Azure Active Directory (Azure AD) for client authentication.
- Previously updated : 08/16/2018
+ Last updated : 07/14/2022

# Create a Service Fabric cluster Resource Manager template
An [Azure Service Fabric cluster](service-fabric-deploy-anywhere.md) is a network-connected set of virtual machines into which your microservices are deployed and managed. A Service Fabric cluster running in Azure is an Azure resource and is deployed, managed, and monitored using the Resource Manager. This article describes how to create a Resource Manager template for a Service Fabric cluster running in Azure. When the template is complete, you can [deploy the cluster on Azure](service-fabric-cluster-creation-via-arm.md).
service-fabric Service Fabric Cluster Creation For Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-for-windows-server.md
Title: Create a standalone Azure Service Fabric cluster
description: Create an Azure Service Fabric cluster on any machine (physical or virtual) running Windows Server, whether it's on-premises or in any cloud.
- Previously updated : 2/21/2019
+ Last updated : 07/14/2022

# Create a standalone cluster running on Windows Server
You can use Azure Service Fabric to create Service Fabric clusters on any virtual machines or computers running Windows Server. This means you can deploy and run Service Fabric applications in any environment that contains a set of interconnected Windows Server computers, be it on premises or with any cloud provider. Service Fabric provides a setup package to create Service Fabric clusters called the standalone Windows Server package. Traditional Service Fabric clusters on Azure are available as a managed service, while standalone Service Fabric clusters are self-service. For more on the differences, see [Comparing Azure and standalone Service Fabric clusters](./service-fabric-deploy-anywhere.md).
service-fabric Service Fabric Cluster Creation Setup Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-aad.md
Title: Set up Azure Active Directory for client authentication
description: Learn how to set up Azure Active Directory (Azure AD) to authenticate clients for Service Fabric clusters.
- Previously updated : 5/18/2022
+ Last updated : 07/14/2022

# Set up Azure Active Directory for client authentication
service-fabric Service Fabric Cluster Creation Via Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-via-arm.md
Title: Create an Azure Service Fabric cluster
description: Learn how to set up a secure Service Fabric cluster in Azure using Azure Resource Manager. You can create a cluster using a default template or using your own cluster template.
- Previously updated : 08/16/2018
+ Last updated : 07/14/2022

# Create a Service Fabric cluster using Azure Resource Manager
> [!div class="op_single_selector"]
> * [Azure Resource Manager](service-fabric-cluster-creation-via-arm.md)
service-fabric Service Fabric Cluster Creation Via Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-via-portal.md
Title: Create a Service Fabric cluster in the Azure portal
description: Learn how to set up a secure Service Fabric cluster in Azure using the Azure portal and Azure Key Vault.
- Previously updated : 06/06/2022
+ Last updated : 07/14/2022

# Create a Service Fabric cluster in Azure using the Azure portal
> [!div class="op_single_selector"]
> * [Azure Resource Manager](service-fabric-cluster-creation-via-arm.md)
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-fabric-settings.md
Title: Change Azure Service Fabric cluster settings
description: This article describes the fabric settings and the fabric upgrade policies that you can customize.
- Previously updated : 08/30/2019
+ Last updated : 07/14/2022

# Customize Service Fabric cluster settings
This article describes the various fabric settings for your Service Fabric cluster that you can customize. For clusters hosted in Azure, you can customize settings through the [Azure portal](https://portal.azure.com) or by using an Azure Resource Manager template. For more information, see [Upgrade the configuration of an Azure cluster](service-fabric-cluster-config-upgrade-azure.md). For standalone clusters, you customize settings by updating the *ClusterConfig.json* file and performing a configuration upgrade on your cluster. For more information, see [Upgrade the configuration of a standalone cluster](service-fabric-cluster-config-upgrade-windows-server.md).
service-fabric Service Fabric Cluster Manifest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-manifest.md
Title: Configure your Azure Service Fabric standalone cluster
description: Learn how to configure your standalone or on-premises Azure Service Fabric cluster.
- Previously updated : 11/12/2018
+ Last updated : 07/14/2022

# Configuration settings for a standalone Windows cluster
This article describes configuration settings of a standalone Azure Service Fabric cluster that can be set in the *ClusterConfig.json* file. You will use this file to specify information about the cluster's nodes, security configurations, as well as the network topology in terms of fault and upgrade domains. After changing or adding configuration settings, you can either [create a standalone cluster](service-fabric-cluster-creation-for-windows-server.md) or [upgrade the configuration of a standalone cluster](service-fabric-cluster-config-upgrade-windows-server.md).
service-fabric Service Fabric Cluster Nodetypes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-nodetypes.md
Title: Node types and virtual machine scale sets
description: Learn how Azure Service Fabric node types relate to virtual machine scale sets and how to remotely connect to a scale set instance or cluster node.
- Previously updated : 03/23/2018
+ Last updated : 07/14/2022

# Azure Service Fabric node types and virtual machine scale sets
[Virtual machine scale sets](../virtual-machine-scale-sets/index.yml) are an Azure compute resource. You can use scale sets to deploy and manage a collection of virtual machines as a set. Each node type that you define in an Azure Service Fabric cluster sets up exactly one scale set: multiple node types cannot be backed by the same scale set and one node type should not be backed by multiple scale sets.
The following are the property descriptions:
| nicPrefixOverride | string | Subnet Prefix like "10.0.0.0/24" |
| commonNames | string[] | Common Names of installed cluster certificates |
| x509StoreName | string | Name of Store where installed cluster certificate is located |
-| typeHandlerVersion | 1.1 | Version of Extension. 1.0 classic version of extension are recommended to upgrade to 1.1 |
+| typeHandlerVersion | 1.1 | Version of the extension. Upgrading from the 1.0 classic version of the extension to 1.1 is recommended |
| dataPath | string | Path to the drive used to save state for Service Fabric system services and application data. |

## Next steps
service-fabric Service Fabric Cluster Programmatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-programmatic-scaling.md
Title: Azure Service Fabric Programmatic Scaling
description: Scale an Azure Service Fabric cluster in or out programmatically, according to custom triggers.
- Previously updated : 01/23/2018
+ Last updated : 07/14/2022

# Scale a Service Fabric cluster programmatically
service-fabric Service Fabric Cluster Region Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-region-move.md
Title: Move an Azure Service Fabric cluster to a new region
description: How to migrate an Azure Service Fabric cluster and applications to another region.
- Previously updated : 07/20/2021
+ Last updated : 07/14/2022

# Move an Azure Service Fabric cluster to a new region
Before engaging in any regional migration, we recommend establishing a testbed a
- For all * <p>Ensure that any communication stages between clients and the services are configured similarly to the source cluster. For example, this validation may include ensuring that intermediaries like Event Hubs, Network Load Balancers, App Gateways, or API Management are set up with the rules necessary to allow traffic to flow to the cluster.</p>
-3. Redirect traffic from the old region to the new region. We recommend using [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) for migration as it offers a range of [routing methods](../traffic-manager/traffic-manager-routing-methods.md). How exactly you update your traffic routing rules will depend on whether you desire to keep the existing region or deprecate it, and will also depend on how traffic flows within your application. You may need to investigate whether private/public IPs or DNS names can be moved between different Azure resources in different regions. Service Fabric is not aware of this part of your system, so please investigate and if necessary involve the Azure teams involved in your traffic flow, particularly if it is more complex or if your workload is latency-critical. Documents such as [Configure Custom Domain](../api-management/configure-custom-domain.md), [Public IP Addresses](../virtual-network/ip-services/public-ip-addresses.md), and [DNS Zones and Records](../dns/dns-zones-records.md) may be useful, and are examples of the information you will need depending on your traffic flows and protocols. Here are two example scenarios demonstrating how one could approach updating traffic routing:
+3. Redirect traffic from the old region to the new region. We recommend using [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) for migration as it offers a range of [routing methods](../traffic-manager/traffic-manager-routing-methods.md). How exactly you update your traffic routing rules will depend on whether you desire to keep the existing region or deprecate it, and will also depend on how traffic flows within your application. You may need to investigate whether private/public IPs or DNS names can be moved between different Azure resources in different regions. Service Fabric is not aware of this part of your system, so please investigate and if necessary involve the Azure teams involved in your traffic flow, particularly if it is more complex or if your workload is latency-critical. Documents such as [Configure Custom Domain](../api-management/configure-custom-domain.md), [Public IP Addresses](../virtual-network/ip-services/public-ip-addresses.md), and [DNS Zones and Records](../dns/dns-zones-records.md) may be useful to review. Here are two example scenarios demonstrating how one could approach updating traffic routing:
   * If you do not plan to keep the existing source region, and you have a DNS/CNAME associated with the public IP of a Network Load Balancer that is delivering calls to your original source cluster, update the DNS/CNAME to be associated with the new public IP of the new Network Load Balancer in the new region. Completing that transfer causes clients using the existing cluster to switch to using the new cluster.    * If you do plan to keep the existing source region, and you have a DNS/CNAME associated with the public IP of a Network Load Balancer that was delivering calls to your original source cluster, set up an instance of Azure Traffic Manager and then associate the DNS name with that Traffic Manager instance. Traffic Manager can then be configured to route to the individual Network Load Balancers within each region, as sketched below.
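For the second scenario, a minimal sketch of fronting both regions with Traffic Manager might look like the following. It assumes the Az.TrafficManager PowerShell module; the profile, resource group, and endpoint names are hypothetical, and the two `$...PublicIpId` variables are assumed to hold the resource IDs of public IPs (with DNS names assigned) fronting each cluster.

```powershell
# Create a Traffic Manager profile to sit in front of both regions.
$tmProfile = New-AzTrafficManagerProfile -Name "sf-migration" -ResourceGroupName "sf-rg" `
    -TrafficRoutingMethod Weighted -RelativeDnsName "sf-migration" -Ttl 30 `
    -MonitorProtocol HTTPS -MonitorPort 443 -MonitorPath "/"

# Register each region's public IP as an endpoint. Shifting the weights over
# time drains traffic from the old region toward the new one.
New-AzTrafficManagerEndpoint -Name "old-region" -ProfileName "sf-migration" `
    -ResourceGroupName "sf-rg" -Type AzureEndpoints `
    -TargetResourceId $oldRegionPublicIpId -EndpointStatus Enabled -Weight 100
New-AzTrafficManagerEndpoint -Name "new-region" -ProfileName "sf-migration" `
    -ResourceGroupName "sf-rg" -Type AzureEndpoints `
    -TargetResourceId $newRegionPublicIpId -EndpointStatus Enabled -Weight 1
```

Once the new region is validated, raising its weight (and eventually disabling the old endpoint) completes the cutover without a further DNS change.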
-4. If you do plan to keep both regions, then you will usually have some sort of "back sync", where the source of truth is kept in some remote store, such as SQL, CosmosDB, or Blob or File Storage, which is then synced between the regions. If this applies to your workload, then it is recommended to confirm that data is flowing between the regions as expected.
+4. If you do plan to keep both regions, then you will usually have some sort of "back sync", where the source of truth is kept in some remote store, such as SQL, Cosmos DB, or Blob or File Storage, which is then synced between the regions. If this applies to your workload, then it is recommended to confirm that data is flowing between the regions as expected.
## Final Validation 1. Verify that traffic is flowing as expected and that the services in the new region (and potentially the old region) are operating correctly.
Before engaging in any regional migration, we recommend establishing a testbed a
2. If you do not plan to keep the original source region, then at this point the resources in that region can be removed. We recommend waiting for some time before deleting resources, in case some issue is discovered that requires a rollback to the original source region. ## Next Steps
-Now that you've moved your cluster and applications to a new region you should validate backups are setup to protect any required data.
+Now that you've moved your cluster and applications to a new region, you should validate that backups are set up to protect any required data.
> [!div class="nextstepaction"] > [Set up backups after migration](service-fabric-backuprestoreservice-quickstart-azurecluster.md)
service-fabric Service Fabric Cluster Remote Connect To Azure Cluster Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-remote-connect-to-azure-cluster-node.md
Title: Remote connect to an Azure Service Fabric cluster node description: Learn how to remotely connect to a scale set instance (a Service Fabric cluster node).-- Previously updated : 03/23/2018+++++ Last updated : 07/14/2022 + # Remote connect to a virtual machine scale set instance or a cluster node In a Service Fabric cluster running in Azure, each cluster node type that you define [sets up a separate virtual machine scale set](service-fabric-cluster-nodetypes.md). You can remote connect to specific scale set instances (cluster nodes). Unlike single-instance VMs, scale set instances don't have their own virtual IP addresses. This can be challenging when you are looking for an IP address and port that you can use to remotely connect to a specific instance.
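As a hedged illustration of finding the right address and port: the scale set is typically fronted by a load balancer whose inbound NAT rules map a distinct front-end port (3389, 3390, and so on) to RDP on each instance. The resource names below are hypothetical, and the sketch assumes the Az.Network module.

```powershell
# Inspect the inbound NAT rules to see which front-end port maps to which instance.
$lb = Get-AzLoadBalancer -ResourceGroupName "sf-rg" -Name "LB-mycluster-nt1vm"
$lb.InboundNatRules | Select-Object Name, FrontendPort, BackendPort

# Then connect to a specific instance through the cluster's public address:
mstsc /v:mycluster.westus.cloudapp.azure.com:3389
```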
service-fabric Service Fabric Cluster Resource Manager Advanced Placement Rules Affinity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-affinity.md
Title: Service Fabric Cluster Resource Manager - Affinity description: Overview of service affinity for Azure Service Fabric services and guidance on service affinity configuration.- documentationcenter: .net-- Previously updated : 08/18/2017--++++ Last updated : 07/14/2022 + # Configuring and using service affinity in Service Fabric Affinity is a control that is provided mainly to help ease the transition of larger monolithic applications into the cloud and microservices world. It is also used as an optimization for improving the performance of services, although doing so can have side effects.
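As a sketch only, affinity is expressed as a correlation when the child service is created. The application, service, and type names are hypothetical, and the `-Correlation` "serviceName,scheme" string form is an assumption about the cmdlet's syntax.

```powershell
# Create a child service whose replicas are kept together with the parent's.
New-ServiceFabricService -ApplicationName fabric:/otherApplication `
    -ServiceName fabric:/otherApplication/childService `
    -ServiceTypeName "ChildServiceType" -Stateless -PartitionSchemeSingleton `
    -InstanceCount 1 `
    -Correlation @("fabric:/otherApplication/parentService,AlignedAffinity")
```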
service-fabric Service Fabric Cluster Resource Manager Advanced Placement Rules Placement Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-placement-policies.md
Title: Service Fabric Cluster Resource Manager - Placement Policies description: Overview of additional placement policies and rules for Service Fabric Services-- Previously updated : 08/18/2017--++++ Last updated : 07/14/2022 + # Placement policies for service fabric services Placement policies are additional rules that can be used to govern service placement in some specific, less-common scenarios. Some examples of those scenarios are:
Replicas are _normally_ distributed across fault and upgrade domains when the cl
> For more information on constraints and constraint priorities generally, check out [this topic](service-fabric-cluster-resource-manager-management-integration.md#constraint-priorities). >
-If you've ever seen a health message such as "`The Load Balancer has detected a Constraint Violation for this Replica:fabric:/<some service name> Secondary Partition <some partition ID> is violating the Constraint: FaultDomain`", then you've hit this condition or something like it. Usually only one or two replicas are packed together temporarily. So long as there are fewer than a quorum of replicas in a given domain, you're safe. Packing is rare, but it can happen, and usually these situations are transient since the nodes come back. If the nodes do stay down and the Cluster Resource Manager needs to build replacements, usually there are other nodes available in the ideal fault domains.
+If you've ever seen a health message such as "`The Load Balancer has detected a Constraint Violation for this Replica:fabric:/<some service name> Secondary Partition <some partition ID> is violating the Constraint: FaultDomain`", then you've hit this condition or something like it. Usually only one or two replicas are packed together temporarily. So long as there is fewer than a quorum of replicas in a given domain, you're safe. Packing is rare, but it can happen, and usually these situations are transient since the nodes come back. If the nodes do stay down and the Cluster Resource Manager needs to build replacements, usually there are other nodes available in the ideal fault domains.
Some workloads would prefer always having the target number of replicas, even if they are packed into fewer domains. These workloads are betting against total simultaneous permanent domain failures and can usually recover local state. Other workloads would rather take the downtime earlier than risk correctness or loss of data. Most production workloads run with more than three replicas, more than three fault domains, and many valid nodes per fault domain. Because of this, the default behavior allows domain packing: normal balancing and failover handle these extreme cases, even if that means temporary domain packing.
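For workloads that would rather take the downtime than pack, a minimal sketch of requiring domain distribution at service creation time follows; the application, service, and type names are hypothetical.

```powershell
# Require replicas to stay spread across domains, even if that means running
# below the target replica count while a domain is down.
New-ServiceFabricService -ApplicationName fabric:/app -ServiceName fabric:/app/svc `
    -ServiceTypeName "SvcType" -Stateful -MinReplicaSetSize 3 -TargetReplicaSetSize 3 `
    -PartitionSchemeSingleton -PlacementPolicy @("RequiredDomainDistribution")
```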
service-fabric Service Fabric Cluster Resource Manager Advanced Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-advanced-throttling.md
Title: Throttling in the Service Fabric cluster resource manager description: Learn to configure the throttles provided by the Service Fabric Cluster Resource Manager.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 # Throttling the Service Fabric Cluster Resource Manager
service-fabric Service Fabric Cluster Resource Manager Application Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-application-groups.md
Title: Service Fabric Cluster Resource Manager - Application Groups description: Overview of the Application Group functionality in the Service Fabric Cluster Resource Manager-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Introduction to Application Groups Service Fabric's Cluster Resource Manager typically manages cluster resources by spreading the load (represented via [Metrics](service-fabric-cluster-resource-manager-metrics.md)) evenly throughout the cluster. Service Fabric manages the capacity of the nodes in the cluster and the cluster as a whole via [capacity](service-fabric-cluster-resource-manager-cluster-description.md). Metrics and capacity work great for many workloads, but patterns that make heavy use of different Service Fabric Application Instances sometimes bring in additional requirements. For example, you may want to:
service-fabric Service Fabric Cluster Resource Manager Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-architecture.md
Title: Resource Manager Architecture description: An overview of and architectural information about the Azure Service Fabric Cluster Resource Manager service.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Cluster resource manager architecture overview The Service Fabric Cluster Resource Manager is a central service that runs in the cluster. It manages the desired state of the services in the cluster, particularly with respect to resource consumption and any placement rules.
service-fabric Service Fabric Cluster Resource Manager Autoscaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-autoscaling.md
Title: Azure Service Fabric Auto Scaling Services and Containers description: Azure Service Fabric allows you to set auto scaling policies for services and containers.-- Previously updated : 04/17/2018--++++ Last updated : 07/14/2022 + # Introduction to Auto Scaling Auto scaling is an additional capability of Service Fabric to dynamically scale your services based on the load that services are reporting, or based on their usage of resources. Auto scaling gives great elasticity and enables provisioning of additional instances or partitions of your service on demand. The entire auto scaling process is automated and transparent, and once you set up your policies on a service there is no need for manual scaling operations at the service level. Auto scaling can be turned on either at service creation time, or at any time by updating the service.
service-fabric Service Fabric Cluster Resource Manager Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-balancing.md
Title: Balance your Azure Service Fabric cluster description: An introduction to balancing your cluster with the Service Fabric Cluster Resource Manager.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Balancing your service fabric cluster The Service Fabric Cluster Resource Manager supports dynamic load changes, reacting to additions or removals of nodes or services. It also automatically corrects constraint violations, and proactively rebalances the cluster. But how often are these actions taken, and what triggers them?
There are three different categories of work that the Cluster Resource Manager p
3. Balancing – this stage checks to see if rebalancing is necessary based on the configured desired level of balance for different metrics. If so, it attempts to find an arrangement in the cluster that is more balanced. ## Configuring Cluster Resource Manager Timers
-The first set of controls around balancing are a set of timers. These timers govern how often the Cluster Resource Manager examines the cluster and takes corrective actions.
+The first set of controls around balancing is a set of timers. These timers govern how often the Cluster Resource Manager examines the cluster and takes corrective actions.
Each of these different types of corrections the Cluster Resource Manager can make is controlled by a different timer that governs its frequency. When each timer fires, the task is scheduled. By default the Resource
via ClusterConfig.json for Standalone deployments or Template.json for Azure hos
] ```
-Balancing and activity thresholds are both tied to a specific metric - balancing is triggered only if both the Balancing Threshold and Activity Threshold is exceeded for the same metric.
+Balancing and activity thresholds are both tied to a specific metric - balancing is triggered only if both the Balancing Threshold and Activity Threshold are exceeded for the same metric.
> [!NOTE] > When not specified, the Balancing Threshold for a metric is 1, and the Activity Threshold is 0. This means that the Cluster Resource Manager will try to keep that metric perfectly balanced for any given load. If you are using custom metrics it is recommended that you explicitly define your own balancing and activity thresholds for your metrics.
service-fabric Service Fabric Cluster Resource Manager Cluster Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-cluster-description.md
Title: Describe a cluster by using Cluster Resource Manager description: Describe a Service Fabric cluster by specifying fault domains, upgrade domains, node properties, and node capacities for Cluster Resource Manager.-- Previously updated : 07/28/2020--++++ Last updated : 07/14/2022 # Describe a Service Fabric cluster by using Cluster Resource Manager
The following diagram shows three upgrade domains striped across three fault dom
There are pros and cons to having large numbers of upgrade domains. More upgrade domains mean each step of the upgrade is more granular and affects a smaller number of nodes or services. Fewer services have to move at a time, introducing less churn into the system. This tends to improve reliability, because less of the service is affected by any issue introduced during the upgrade. More upgrade domains also mean that you need less available buffer on other nodes to handle the impact of the upgrade.
-For example, if you have five upgrade domains, the nodes in each are handling roughly 20 percent of your traffic. If you need to take down that upgrade domain for an upgrade, that load usually needs to go somewhere. Because you have four remaining upgrade domains, each must have room for about 25 percent of the total traffic. More upgrade domains mean that you need less buffer on the nodes in the cluster.
+For example, if you have five upgrade domains, the nodes in each are handling roughly 20 percent of your traffic. If you need to take down that upgrade domain for an upgrade, the load usually needs to go somewhere. Because you have four remaining upgrade domains, each must have room for about 25 percent of the total traffic. More upgrade domains mean that you need less buffer on the nodes in the cluster.
Consider if you had 10 upgrade domains instead. In that case, each upgrade domain would be handling only about 10 percent of the total traffic. When an upgrade steps through the cluster, each domain would need to have room for only about 11 percent of the total traffic. More upgrade domains generally allow you to run your nodes at higher utilization, because you need less reserved capacity. The same is true for fault domains.
service-fabric Service Fabric Cluster Resource Manager Configure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-configure-services.md
Title: Specify metrics and placement settings description: Learn how to describe a Service Fabric service by specifying metrics, placement constraints, and other placement policies.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Configuring cluster resource manager settings for Service Fabric services The Service Fabric Cluster Resource Manager allows fine-grained control over the rules that govern every individual named service. Each named service can specify rules for how it should be allocated in the cluster. Each named service can also define the set of metrics that it wants to report, including how important they are to that service. Configuring services breaks down into three different tasks:
service-fabric Service Fabric Cluster Resource Manager Defragmentation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-defragmentation-metrics.md
Title: Defragmentation of Metrics in Azure Service Fabric description: Learn about using defragmentation, or packing, as a strategy for metrics in Service Fabric. This technique is useful for very large services.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Defragmentation of metrics and load in Service Fabric The Service Fabric Cluster Resource Manager's default strategy for managing load metrics in the cluster is to distribute the load. Ensuring that nodes are evenly utilized avoids hot and cold spots that lead to both contention and wasted resources. Distributing workloads in the cluster is also the safest in terms of surviving failures since it ensures that a failure doesn't take out a large percentage of a given workload.
If there are many services and state to move around, then it could take a long t
You can configure defragmentation metrics to have the Cluster Resource Manager proactively try to condense the load of the services into fewer nodes. This helps ensure that there is almost always room for large services without reorganizing the cluster. Not having to reorganize the cluster allows creating large workloads quickly.
-Most people don't need defragmentation. Services are usually be small, so it's not hard to find room for them in the cluster. When reorganization is possible, it goes quickly, again because most services are small and can be moved quickly and in parallel. However, if you have large services and need them created quickly then the defragmentation strategy is for you. We'll discuss the tradeoffs of using defragmentation next.
+Most people don't need defragmentation. Services are usually small, so it's not hard to find room for them in the cluster. When reorganization is possible, it goes quickly, again because most services are small and can be moved quickly and in parallel. However, if you have large services and need them created quickly then the defragmentation strategy is for you. We'll discuss the tradeoffs of using defragmentation next.
## Defragmentation tradeoffs Defragmentation can increase the impact of failures, since more services are running on the nodes that fail. Defragmentation can also increase costs, since resources in the cluster must be held in reserve, waiting for the creation of large workloads.
So what are those other conceptual tradeoffs? Here's a quick table of things t
| Enables lower data movement during creation |Failures can impact more services and cause more churn | | Allows rich description of requirements and reclamation of space |More complex overall Resource Management configuration |
-You can mix defragmented and normal metrics in the same cluster. The Cluster Resource Manager tries to consolidate the defragmentation metrics as much as possible while spreading out the others. The results of mixing defragmentation and balancing strategies depends on several factors, including:
+You can mix defragmented and normal metrics in the same cluster. The Cluster Resource Manager tries to consolidate the defragmentation metrics as much as possible while spreading out the others. The results of mixing defragmentation and balancing strategies depend on several factors, including:
- The number of balancing metrics vs. the number of defragmentation metrics - Whether any service uses both types of metrics - The metric weights
service-fabric Service Fabric Cluster Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-introduction.md
Title: Introducing the Service Fabric Cluster Resource Manager description: Learn about the Service Fabric Cluster Resource Manager, a way to manage orchestration of your application's services.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Introducing the Service Fabric cluster resource manager Traditionally managing IT systems or online services meant dedicating specific physical or virtual machines to those specific services or systems. Services were architected as tiers. There would be a "web" tier and a "data" or "storage" tier. Applications would have a messaging tier where requests flowed in and out, as well as a set of machines dedicated to caching. Each tier or type of workload had specific machines dedicated to it: the database got a couple machines dedicated to it, the web servers a few. If a particular type of workload caused the machines it was on to run too hot, then you added more machines with that same configuration to that tier. However, not all workloads could be scaled out so easily - particularly with the data tier you would typically replace machines with larger machines. Easy. If a machine failed, that part of the overall application ran at lower capacity until the machine could be restored. Still fairly easy (if not necessarily fun).
Suddenly managing your environment is not so simple as managing a few machines d
Because your app is no longer a series of monoliths spread across several tiers, you now have many more combinations to deal with. Who decides what types of workloads can run on which hardware, or how many? Which workloads work well on the same hardware, and which conflict? When a machine goes down how do you know what was running there on that machine? Who is in charge of making sure that workload starts running again? Do you wait for the (virtual?) machine to come back or do your workloads automatically fail over to other machines and keep running? Is human intervention required? What about upgrades in this environment?
-As developers and operators dealing in this environment, we're going to want help managing this complexity. A hiring binge and trying to hide the complexity with people is probably not the right answer, so what do we do?
+As developers and operators dealing in this environment, we're going to want help with managing this complexity. A hiring binge and trying to hide the complexity with people is probably not the right answer, so what do we do?
## Introducing orchestrators An "Orchestrator" is the general term for a piece of software that helps administrators manage these types of environments. Orchestrators are the components that take in requests like "I would like five copies of this service running in my environment." They try to make the environment match the desired state, no matter what happens.
service-fabric Service Fabric Cluster Resource Manager Management Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-management-integration.md
Title: Cluster Resource Manager - Management Integration description: An overview of the integration points between the Cluster Resource Manager and Service Fabric Management.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Cluster resource manager integration with Service Fabric cluster management The Service Fabric Cluster Resource Manager doesn't drive upgrades in Service Fabric, but it is involved. The first way that the Cluster Resource Manager helps with management is by tracking the desired state of the cluster and the services inside it. The Cluster Resource Manager sends out health reports when it cannot put the cluster into the desired configuration. For example, if there is insufficient capacity the Cluster Resource Manager sends out health warnings and errors indicating the problem. Another piece of integration has to do with how upgrades work. The Cluster Resource Manager alters its behavior slightly during upgrades.
Here's what this health message is telling us:
5. The distribution policy for this service: "Distribution Policy -- Packing". This is governed by the `RequireDomainDistribution` [placement policy](service-fabric-cluster-resource-manager-advanced-placement-rules-placement-policies.md#requiring-replica-distribution-and-disallowing-packing). *Packing* indicates that in this case DomainDistribution was _not_ required, so we know that placement policy was not specified for this service. 6. When the report happened - 8/10/2015 7:13:02 PM
-Information like this powers alerts that fire in production to let you know something has gone wrong and is also used to detect and halt bad upgrades. In this case, we'd want to see if we can figure out why the Resource Manager had to pack the replicas into the Upgrade Domain. Usually packing is transient because the nodes in the other Upgrade Domains were down, for example.
+Information like this powers alerting. You can use alerts in production to let you know something has gone wrong. Alerting is also used to detect and halt bad upgrades. In this case, we'd want to see if we can figure out why the Resource Manager had to pack the replicas into the Upgrade Domain. Usually packing is transient because the nodes in the other Upgrade Domains were down, for example.
Let's say the Cluster Resource Manager is trying to place some services, but there aren't any solutions that work. When services can't be placed, it is usually for one of the following reasons:
service-fabric Service Fabric Cluster Resource Manager Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-metrics.md
Title: Manage Azure Service Fabric app load using metrics description: Learn about how to configure and use metrics in Service Fabric to manage service resource consumption.-- Previously updated : 08/18/2017--++++ Last updated : 07/14/2022 + # Managing resource consumption and load in Service Fabric with metrics *Metrics* are the resources that your services care about and which are provided by the nodes in the cluster. A metric is anything that you want to manage in order to improve or monitor the performance of your services. For example, you might watch memory consumption to know if your service is overloaded. Another use is to figure out whether the service could move elsewhere where memory is less constrained in order to get better performance.
The whole point of defining metrics is to represent some load. *Load* is how muc
All of these strategies can be used within the same service over its lifetime. ## Default load
-*Default load* is how much of the metric each service object (stateless instance or stateful replica) of this service consumes. The Cluster Resource Manager uses this number for the load of the service object until it receives other information, such as a dynamic load report. For simpler services, the default load is a static definition. The default load is never updated and is used for the lifetime of the service. Default loads works great for simple capacity planning scenarios where certain amounts of resources are dedicated to different workloads and do not change.
+*Default load* is how much of the metric each service object (stateless instance or stateful replica) of this service consumes. The Cluster Resource Manager uses this number for the load of the service object until it receives other information, such as a dynamic load report. For simpler services, the default load is a static definition. The default load is never updated and is used for the lifetime of the service. Default loads work great for simple capacity planning scenarios where certain amounts of resources are dedicated to different workloads and do not change.
> [!NOTE] > For more information on capacity management and defining capacities for the nodes in your cluster, please see [this article](service-fabric-cluster-resource-manager-cluster-description.md#capacity).
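As a hedged sketch (all names are hypothetical), default loads for a stateful service are supplied at creation time as "MetricName,Weight,PrimaryDefaultLoad,SecondaryDefaultLoad" strings:

```powershell
# Primary replicas default to 21 units of MemoryInMb, secondaries to 11.
New-ServiceFabricService -ApplicationName fabric:/app -ServiceName fabric:/app/svc `
    -ServiceTypeName "SvcType" -Stateful -MinReplicaSetSize 3 -TargetReplicaSetSize 3 `
    -PartitionSchemeSingleton `
    -Metric @("MemoryInMb,High,21,11","DiskInMb,Medium,1024,800")
```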
service-fabric Service Fabric Cluster Resource Manager Movement Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-movement-cost.md
Title: 'Service Fabric Cluster Resource Manager - Movement Cost' description: Learn about the movement cost for Service Fabric services, and how it can be specified to fit any architectural need, including dynamic configuration.-- Previously updated : 08/18/2017--++++ Last updated : 07/14/2022 + # Service movement cost A factor that the Service Fabric Cluster Resource Manager considers when trying to determine what changes to make to a cluster is the cost of those changes. The notion of "cost" is traded off against how much the cluster can be improved. Cost is factored in when moving services for balancing, defragmentation, and other requirements. The goal is to meet the requirements in the least disruptive or expensive way.
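As a minimal sketch (service names are hypothetical), the default move cost can be set when a service is created and changed later without redeploying:

```powershell
# Create the service with a medium default move cost.
New-ServiceFabricService -ApplicationName fabric:/app -ServiceName fabric:/app/svc `
    -ServiceTypeName "SvcType" -Stateful -MinReplicaSetSize 3 -TargetReplicaSetSize 3 `
    -PartitionSchemeSingleton -DefaultMoveCost Medium

# Later, make the service more expensive (less attractive) to move.
Update-ServiceFabricService -Stateful -ServiceName fabric:/app/svc -DefaultMoveCost High
```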
service-fabric Service Fabric Cluster Resource Manager Node Tagging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-resource-manager-node-tagging.md
Title: Azure Service Fabric dynamic node tags description: Azure Service Fabric allows you to dynamically add and remove node tags.-- Previously updated : 04/05/2021--++++ Last updated : 07/14/2022 + # Introduction to dynamic node tags Node tags allow you to dynamically add and remove tags from nodes in order to influence the placement of services. Node tagging is very flexible and allows changes to service placement without application or cluster upgrades. Tags can be added or removed from nodes at any time, and services can specify requirements for certain tags when they are created. A service can also have its tag requirements updated dynamically while it is running.
service-fabric Service Fabric Cluster Rollover Cert Cn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-rollover-cert-cn.md
Title: Roll over an Azure Service Fabric cluster certificate description: Learn how to roll over a Service Fabric cluster certificate identified by the certificate common name.-- Previously updated : 09/06/2019 -+++++ Last updated : 07/14/2022 + # Manually roll over a Service Fabric cluster certificate When a Service Fabric cluster certificate is close to expiring, you need to update the certificate. Certificate rollover is simple if the cluster was [set up to use certificates based on common name](service-fabric-cluster-change-cert-thumbprint-to-cn.md) (instead of thumbprint). Get a new certificate from a certificate authority with a new expiration date. Self-signed certificates are not supported for production Service Fabric clusters, including certificates generated during the Azure portal cluster creation workflow. The new certificate must have the same common name as the older certificate.
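A common-name rollover then amounts to deploying the new certificate to each node type's scale set. The following is a sketch, assuming the certificate already resides in Key Vault; the vault ID and certificate URL variables, and the resource names, are hypothetical.

```powershell
# Add the new certificate to the scale set model and push it to all instances.
$vmss = Get-AzVmss -ResourceGroupName "sf-rg" -VMScaleSetName "nt1vm"
$cert = New-AzVmssVaultCertificateConfig -CertificateUrl $newCertUrl -CertificateStore "My"
$vmss = Add-AzVmssSecret -VirtualMachineScaleSet $vmss -SourceVaultId $keyVaultId -VaultCertificate $cert
Update-AzVmss -ResourceGroupName "sf-rg" -VMScaleSetName "nt1vm" -VirtualMachineScaleSet $vmss
```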
service-fabric Service Fabric Cluster Scale In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-scale-in-out.md
Title: Scale a Service Fabric cluster in or out description: Scale a Service Fabric cluster in or out to match demand by setting auto-scale rules for each node type/virtual machine scale set. Add or remove nodes to a Service Fabric cluster-- Previously updated : 03/12/2019 -+++++ Last updated : 07/14/2022 + # Scale a cluster in or out > [!WARNING]
Virtual machine scale sets are an Azure compute resource that you can use to dep
## Choose the node type/Virtual Machine scale set to scale Currently, you cannot specify auto-scale rules for virtual machine scale sets through the portal when creating a Service Fabric cluster, so let us use Azure PowerShell (1.0+) to list the node types and then add auto-scale rules to them.
-To get the list of virtual machine scale set that make up your cluster, run the following cmdlets:
+To get the list of virtual machine scale sets that make up your cluster, run the following cmdlets:
```powershell Get-AzResource -ResourceGroupName <RGname> -ResourceType Microsoft.Compute/VirtualMachineScaleSets
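From there, one way to attach an auto-scale rule to a node type's scale set is with the classic Az.Monitor autoscale cmdlets (recent module versions supersede these with *ScaleRuleObject equivalents). A sketch with hypothetical names:

```powershell
# Scale out by one instance when average CPU across the scale set exceeds 70%.
$vmssId = (Get-AzResource -ResourceGroupName "sf-rg" `
    -ResourceType Microsoft.Compute/VirtualMachineScaleSets -Name "nt1vm").ResourceId

$rule = New-AzAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId $vmssId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 70 `
    -TimeGrain 00:01:00 -TimeWindow 00:05:00 `
    -ScaleActionCooldown 00:05:00 -ScaleActionDirection Increase -ScaleActionValue 1

$asProfile = New-AzAutoscaleProfile -Name "cpu-scale-out" -DefaultCapacity 5 `
    -MinimumCapacity 5 -MaximumCapacity 10 -Rule $rule

Add-AzAutoscaleSetting -Location "westus" -Name "nt1vm-autoscale" `
    -ResourceGroupName "sf-rg" -TargetResourceId $vmssId -AutoscaleProfile $asProfile
```

Keep the minimum capacity at or above what the node type's durability tier and seed node requirements demand.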
When you scale out a cluster the Service Fabric Explorer will reflect the number
Here is the explanation for this behavior.
-The nodes listed in Service Fabric Explorer are a reflection of what the Service Fabric system services (FM specifically) knows about the number of nodes the cluster had/has. When you scale the virtual machine scale set in, the VM was deleted but FM system service still thinks that the node (that was mapped to the VM that was deleted) will come back. So Service Fabric Explorer continues to display that node (though the health state may be error or unknown).
+The nodes listed in Service Fabric Explorer are a reflection of what the Service Fabric system services (FM specifically) know about the number of nodes the cluster had/has. When you scale the virtual machine scale set in, the VM was deleted but FM system service still thinks that the node (that was mapped to the VM that was deleted) will come back. So Service Fabric Explorer continues to display that node (though the health state may be error or unknown).
In order to make sure that a node is removed when a VM is removed, you have two options:
-1. Choose a durability level of Gold or Silver for the node types in your cluster, which gives you the infrastructure integration. Which will then automatically remove the nodes from our system services (FM) state when you scale in.
+1. Choose a durability level of Gold or Silver for the node types in your cluster, which gives you the infrastructure integration. When you scale in, nodes will be automatically removed from our system services (FM) state.
Refer to [the details on durability levels here](service-fabric-cluster-capacity.md) > [!NOTE]
service-fabric Service Fabric Cluster Scaling Standalone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-scaling-standalone.md
Title: Azure Service Fabric standalone cluster scaling description: Learn about scaling Service Fabric standalone clusters in or out and up or down. Previously updated : 11/13/2018++++ Last updated : 07/14/2022 + # Scaling Service Fabric standalone clusters A Service Fabric cluster is a network-connected set of virtual or physical machines into which your microservices are deployed and managed. A machine or VM that's part of a cluster is called a node. Clusters can contain potentially thousands of nodes. After creating a Service Fabric cluster, you can scale the cluster horizontally (change the number of nodes) or vertically (change the resources of the nodes). You can scale the cluster at any time, even when workloads are running on the cluster. As the cluster scales, your applications automatically scale as well.
Changes the number of nodes in the cluster. Once the new nodes join the cluster
- Advantages: Infinite scale, in theory. If your application is designed for scalability, you can enable limitless growth by adding more nodes. The tooling in cloud environments makes it easy to add or remove nodes, so it's easy to adjust capacity and you only pay for the resources you use. - Disadvantages: Applications must be [designed for scalability](service-fabric-concepts-scalability.md). Application databases and persistence may require additional architectural work to scale as well. [Reliable collections](service-fabric-reliable-services-reliable-collections.md) in Service Fabric stateful services, however, make it much easier to scale your application data.
-Standalone clusters allow you to deploy Service Fabric cluster on-premises or in the cloud provider of your choice. Node types are comprised of physical machines or virtual machines, depending on your deployment. Compared to clusters running in Azure, the process of scaling a standalone cluster is a little more involved. You must manually change the number of nodes in the cluster and then run a cluster configuration upgrade.
+Standalone clusters allow you to deploy Service Fabric cluster on-premises or in the cloud provider of your choice. Node types are composed of physical machines or virtual machines, depending on your deployment. Compared to clusters running in Azure, the process of scaling a standalone cluster is a little more involved. You must manually change the number of nodes in the cluster and then run a cluster configuration upgrade.
Removal of nodes may initiate multiple upgrades. Some nodes are marked with the `IsSeedNode="true"` tag and can be identified by querying the cluster manifest using [Get-ServiceFabricClusterManifest](/powershell/module/servicefabric/get-servicefabricclustermanifest). Removal of such nodes may take longer than others since the seed nodes have to be moved around in such scenarios. The cluster must maintain a minimum of three primary node type nodes.
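To see which nodes are currently seed nodes before planning a removal, a sketch along these lines can help. It assumes a standalone cluster, whose manifest uses the WindowsServer infrastructure section, and a management endpoint on localhost.

```powershell
# Connect and parse the cluster manifest XML for seed nodes.
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"
[xml]$manifest = Get-ServiceFabricClusterManifest
$manifest.ClusterManifest.Infrastructure.WindowsServer.NodeList.Node |
    Where-Object { $_.IsSeedNode -eq "true" } |
    Select-Object NodeName, IPAddressOrFQDN
```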
service-fabric Service Fabric Cluster Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-scaling.md
Title: Azure Service Fabric cluster scaling description: Learn about scaling Azure Service Fabric clusters in or out and up or down. As application demands change, so can Service Fabric clusters.- Previously updated : 11/13/2018-++++ Last updated : 07/14/2022 + # Scaling Azure Service Fabric clusters A Service Fabric cluster is a network-connected set of virtual or physical machines into which your microservices are deployed and managed. A machine or VM that's part of a cluster is called a node. Clusters can contain potentially thousands of nodes. After creating a Service Fabric cluster, you can scale the cluster horizontally (change the number of nodes) or vertically (change the resources of the nodes). You can scale the cluster at any time, even when workloads are running on the cluster. As the cluster scales, your applications automatically scale as well.
Create a new node type with the resources you need. Update the placement constr
### Scaling the primary node type Deploy a new primary node type with updated VM SKU, then disable the original primary node type instances one at a time so that the system services migrate to the new scale set. Verify the cluster and new nodes are healthy, then remove the original scale set and node state for the deleted nodes.
-If that not possible, you can create a new cluster and [restore application state](service-fabric-reliable-services-backup-restore.md) (if applicable) from your old cluster. You do not need to restore any system service state, they are recreated when you deploy your applications to your new cluster. If you were just running stateless applications on your cluster, then all you do is deploy your applications to the new cluster, you have nothing to restore.
+If that's not possible, you can create a new cluster and [restore application state](service-fabric-reliable-services-backup-restore.md) (if applicable) from your old cluster. You do not need to restore any system service state; it is recreated when you deploy your applications to your new cluster. If you were just running stateless applications on your cluster, then all you do is deploy your applications to the new cluster; you have nothing to restore.
## Next steps * Learn about [application scalability](service-fabric-concepts-scalability.md).
service-fabric Service Fabric Cluster Security Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-security-roles.md
Title: 'Service Fabric cluster security: client roles' description: This article describes the two client roles and the permissions provided to the roles.-- Previously updated : 2/23/2018+++++ Last updated : 07/14/2022 + # Role-based access control for Service Fabric clients Azure Service Fabric supports two different access control types for clients that are connected to a Service Fabric cluster: administrator and user. Access control allows the cluster administrator to limit access to certain cluster operations for different groups of users, making the cluster more secure.
service-fabric Service Fabric Cluster Security Update Certs Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-security-update-certs-azure.md
Title: Manage certificates in an Azure Service Fabric cluster description: Describes how to add new certificates, roll over certificates, and remove certificates to or from a Service Fabric cluster.-- Previously updated : 11/13/2018 -+++++ Last updated : 07/14/2022 + # Add or remove certificates for a Service Fabric cluster in Azure It is recommended that you familiarize yourself with how Service Fabric uses X.509 certificates and with the [Cluster security scenarios](service-fabric-cluster-security.md). You must understand what a cluster certificate is and what it is used for before you proceed further. The Azure Service Fabric SDK's default certificate load behavior is to deploy and use the defined certificate with the expiry date furthest into the future, regardless of its primary or secondary configuration definition. Falling back to the classic behavior is an advanced action that is not recommended, and requires setting the "UseSecondaryIfNewer" parameter value to false within your `Fabric.Code` configuration.
-Service fabric lets you specify two cluster certificates, a primary and a secondary, when you configure certificate security during cluster creation, in addition to client certificates. Refer to [creating an azure cluster via portal](service-fabric-cluster-creation-via-portal.md) or [creating an azure cluster via Azure Resource Manager](service-fabric-cluster-creation-via-arm.md) for details on setting them up at create time. If you specify only one cluster certificate at create time, then that is used as the primary certificate. After cluster creation, you can add a new certificate as a secondary.
+Service fabric lets you specify two cluster certificates, a primary and a secondary, when you configure certificate security during cluster creation, in addition to client certificates. Refer to [creating an Azure cluster via portal](service-fabric-cluster-creation-via-portal.md) or [creating an Azure cluster via Azure Resource Manager](service-fabric-cluster-creation-via-arm.md) for details on setting them up at create time. If you specify only one cluster certificate at create time, then that is used as the primary certificate. After cluster creation, you can add a new certificate as a secondary.
> [!NOTE] > For a secure cluster, you will always need at least one valid (not revoked and not expired) cluster certificate (primary or secondary) deployed (if not, the cluster stops functioning). 90 days before all valid certificates reach expiration, the system generates a warning trace and a warning health event on the node. These are currently the only notifications Service Fabric sends regarding certificate expiration.
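A sketch of adding a secondary certificate with the Az.ServiceFabric module; the cluster, resource group, and Key Vault names are hypothetical.

```powershell
# Add a Key Vault certificate to the cluster as a secondary cluster certificate.
# The secret identifier's version segment is a placeholder.
Add-AzServiceFabricClusterCertificate -ResourceGroupName "sf-rg" -Name "mycluster" `
    -SecretIdentifier "https://myvault.vault.azure.net/secrets/sf-cert/<version>"
```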
Read these articles for more information on cluster management:
* [Certificate management in Service Fabric clusters](cluster-security-certificate-management.md)
-* [Service Fabric Cluster upgrade process and expectations from you](service-fabric-cluster-upgrade.md)
+* [Service Fabric Cluster upgrade process and expectations](service-fabric-cluster-upgrade.md)
* [Setup role-based access for clients](service-fabric-cluster-security-roles.md) <!--Image references-->
service-fabric Service Fabric Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-security.md
Title: Secure an Azure Service Fabric cluster description: Learn about security scenarios for an Azure Service Fabric cluster, and the various technologies you can use to implement them. Previously updated : 08/14/2018++++ Last updated : 07/14/2022 + # Service Fabric cluster security scenarios An Azure Service Fabric cluster is a resource that you own. It is your responsibility to secure your clusters to help prevent unauthorized users from connecting to them. A secure cluster is especially important when you are running production workloads on the cluster. It is possible to create an unsecured cluster; however, if the cluster exposes management endpoints to the public internet, anonymous users can connect to it. Unsecured clusters are not supported for production workloads.
Some important things to consider:
* To create certificates for clusters that are running production workloads, use a correctly configured Windows Server certificate service, or one from an approved [certificate authority (CA)](https://en.wikipedia.org/wiki/Certificate_authority). * Never use any temporary or test certificates that you create by using tools like MakeCert.exe in a production environment. * You can use a self-signed certificate, but only in a test cluster. Do not use a self-signed certificate in production.
-* When generating the certificate thumbprint, be sure to generate a SHA1 thumbprint. SHA1 is what's used when configuring the Client and Cluster certificate thumbprints.
+* When generating the certificate thumbprint, be sure to generate an SHA1 thumbprint. SHA1 is what's used when configuring the Client and Cluster certificate thumbprints.
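One quick way to read a certificate's SHA1 thumbprint is from the local certificate store; the Thumbprint property PowerShell surfaces is the SHA1 value that cluster and client certificate configuration expects.

```powershell
# List certificates in the local machine's personal store with their SHA1 thumbprints.
Get-ChildItem -Path Cert:\LocalMachine\My | Format-Table Subject, Thumbprint
```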
### Cluster and server certificate (required)
service-fabric Service Fabric Cluster Standalone Deployment Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-standalone-deployment-preparation.md
Title: Standalone Cluster Deployment Preparation description: Documentation related to preparing the environment and creating the cluster configuration, to be considered prior to deploying a cluster intended for handling a production workload.- Previously updated : 5/19/2022+++++ Last updated : 07/14/2022 + # Plan and prepare your Service Fabric Standalone cluster deployment <a id="preparemachines"></a>Perform the following steps before you create your cluster.
Open one of the ClusterConfig.json files from the package you downloaded and mod
| **Configuration Setting** | **Description** | | | |
-| **NodeTypes** |Node types allow you to separate your cluster nodes into various groups. A cluster must have at least one NodeType. All nodes in a group have the following common characteristics: <br> **Name** - This is the node type name. <br>**Endpoint Ports** - These are various named end points (ports) that are associated with this node type. You can use any port number that you wish, as long as they do not conflict with anything else in this manifest and are not already in use by any other application running on the machine/VM. <br> **Placement Properties** - These describe properties for this node type that you use as placement constraints for the system services or your services. These properties are user-defined key/value pairs that provide extra meta data for a given node. Examples of node properties would be whether the node has a hard drive or graphics card, the number of spindles in its hard drive, cores, and other physical properties. <br> **Capacities** - Node capacities define the name and amount of a particular resource that a particular node has available for consumption. For example, a node may define that it has capacity for a metric called "MemoryInMb" and that it has 2048 MB available by default. These capacities are used at runtime to ensure that services that require particular amounts of resources are placed on the nodes that have those resources available in the required amounts.<br>**IsPrimary** - If you have more than one NodeType defined ensure that only one is set to primary with the value *true*, which is where the system services run. All other node types should be set to the value *false* |
+| **NodeTypes** |Node types allow you to separate your cluster nodes into various groups. A cluster must have at least one NodeType. All nodes in a group have the following common characteristics: <br> **Name** - This is the node type name. <br>**Endpoint Ports** - These are named endpoints (ports) that are associated with this node type. You can use any port number that you wish, as long as they do not conflict with anything else in this manifest and are not already in use by any other application running on the machine/VM. <br> **Placement Properties** - These describe properties for this node type that you use as placement constraints for the system services or your services. These properties are user-defined key/value pairs that provide extra meta data for a given node. Examples of node properties would be whether the node has a hard drive or graphics card, the number of spindles in its hard drive, cores, and other physical properties. <br> **Capacities** - Node capacities define the name and amount of a particular resource that a particular node has available for consumption. For example, a node may define that it has capacity for a metric called "MemoryInMb" and that it has 2048 MB available by default. These capacities are used at runtime to ensure that services that require particular amounts of resources are placed on the nodes that have those resources available in the required amounts.<br>**IsPrimary** - If you have more than one NodeType defined ensure that only one is set to primary with the value *true*, which is where the system services run. All other node types should be set to the value *false* |
| **Nodes** |These are the details for each of the nodes that are part of the cluster (node type, node name, IP address, fault domain, and upgrade domain of the node). The machines you want the cluster to be created on need to be listed here with their IP addresses. <br> If you use the same IP address for all the nodes, then a one-box cluster is created, which you can use for testing purposes. Do not use one-box clusters for deploying production workloads. | After all settings in the cluster configuration have been adjusted for the environment, it can be tested against the cluster environment (step 7).
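The test in step 7 is run from the extracted standalone package directory using the bundled script; the configuration file name below is one of the samples shipped in the package.

```powershell
# Validate the machines and the edited configuration before creating the cluster.
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
```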
service-fabric Service Fabric Cluster Standalone Package Contents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-standalone-package-contents.md
Title: Azure Service Fabric Standalone Package for Windows Server description: Description and contents of the Azure Service Fabric Standalone package for Windows Server.--- Previously updated : 8/10/2017-+++++ Last updated : 07/14/2022 # Contents of Service Fabric Standalone package for Windows Server
service-fabric Service Fabric Cluster Upgrade Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-os.md
Title: Upgrade Linux OS for Azure Service Fabric description: Learn about options for migrating your Azure Service Fabric cluster to another Linux operating system.-- - Previously updated : 06/01/2022++++ Last updated : 07/14/2022 # Upgrade Linux OS for Azure Service Fabric
service-fabric Service Fabric Cluster Upgrade Standalone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-standalone.md
Title: Upgrade an Azure Service Fabric standalone cluster description: Learn about upgrading the version or configuration of an Azure Service Fabric standalone cluster. T- Previously updated : 11/12/2018++++ Last updated : 07/14/2022 + # Upgrading and updating a Service Fabric standalone cluster For any modern system, designing for upgradability is key to achieving long-term success of your product. An Azure Service Fabric standalone cluster is a resource that you own. This article describes what can be upgraded or updated.
service-fabric Service Fabric Cluster Upgrade Version Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-version-azure.md
Title: Manage Service Fabric cluster upgrades description: Manage when and how your Service Fabric cluster runtime is updated Previously updated : 03/26/2021++++ Last updated : 07/14/2022 + # Manage Service Fabric cluster upgrades An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. Here's how to manage when and how Microsoft updates your Azure Service Fabric cluster.
service-fabric Service Fabric Cluster Upgrade Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade-windows-server.md
Title: Upgrade the version of a standalone cluster description: Upgrade the Azure Service Fabric code that runs a standalone Service Fabric cluster.- Previously updated : 11/09/2018+++++ Last updated : 07/14/2022 + # Upgrade the Service Fabric version that runs on your cluster For any modern system, the ability to upgrade is key to the long-term success of your product. An Azure Service Fabric cluster is a resource that you own. This article describes how to upgrade the version of Service Fabric running on your standalone cluster.
service-fabric Service Fabric Cluster Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-upgrade.md
Title: Upgrading Azure Service Fabric clusters description: Learn about options for updating your Azure Service Fabric cluster Previously updated : 03/26/2021++++ Last updated : 07/14/2022 + # Upgrading and updating Azure Service Fabric clusters An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. This article describes the options for when and how to update your Azure Service Fabric cluster.
service-fabric Service Fabric Cluster Windows Server Add Remove Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-windows-server-add-remove-nodes.md
Title: Add or remove nodes to a standalone Service Fabric cluster description: Learn how to add or remove nodes to an Azure Service Fabric cluster on a physical or virtual machine running Windows Server, which could be on-premises or in any cloud.- Previously updated : 11/02/2017+++++ Last updated : 07/14/2022 + # Add or remove nodes to a standalone Service Fabric cluster running on Windows Server After you have [created your standalone Service Fabric cluster on Windows Server machines](service-fabric-cluster-creation-for-windows-server.md), your (business) needs may change and you will need to add or remove nodes to your cluster, as described in this article.
service-fabric Service Fabric Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-common-questions.md
Title: Common questions about Microsoft Azure Service Fabric description: Frequently asked questions about Service Fabric, including capabilities, use cases, and common scenarios.-- Previously updated : 08/18/2017-+++++ Last updated : 07/14/2022
service-fabric Service Fabric Concept Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concept-resource-model.md
Title: Azure Service Fabric application resource model description: This article provides an overview of managing an Azure Service Fabric application by using Azure Resource Manager.- Previously updated : 5/18/2022-+++++ Last updated : 07/14/2022 # Service Fabric application resource model
service-fabric Service Fabric Concepts Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-partitioning.md
Title: Partitioning Service Fabric services description: Learn how to partition Service Fabric stateless and stateful services Previously updated : 06/16/2022-++++ Last updated : 07/14/2022 + # Partition Service Fabric reliable services This article provides an introduction to the basic concepts of partitioning Azure Service Fabric reliable services. Partitioning enables data storage on the local machines so data and compute can be scaled together.
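As a hedged sketch (names and key range are hypothetical), a ranged (UniformInt64) partition scheme is declared when a stateful service is created:

```powershell
# 26 partitions covering keys 0..25 (for example, one bucket per letter).
New-ServiceFabricService -ApplicationName fabric:/app -ServiceName fabric:/app/processor `
    -ServiceTypeName "ProcessorType" -Stateful -HasPersistedState `
    -MinReplicaSetSize 3 -TargetReplicaSetSize 3 `
    -PartitionSchemeUniformInt64 -LowKey 0 -HighKey 25 -PartitionCount 26
```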
service-fabric Service Fabric Concepts Reconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-reconfiguration.md
Title: Reconfiguration in Azure Service Fabric description: Learn about configurations for stateful service replicas and the process of reconfiguration Service Fabric uses to maintain consistency and availability during the change.-- Previously updated : 01/10/2018-++++ Last updated : 07/14/2022 # Reconfiguration in Azure Service Fabric
service-fabric Service Fabric Concepts Replica Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-replica-lifecycle.md
Title: Replicas and instances in Azure Service Fabric description: Learn about replicas and instances in Service Fabric, including an overview of their lifecycles and functions.-- Previously updated : 01/10/2018-++++ Last updated : 07/14/2022 # Replicas and instances
service-fabric Service Fabric Concepts Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-scalability.md
Title: Scalability of Service Fabric services description: Learn about scaling in Azure Service Fabric and the various techniques used to scale applications.-- Previously updated : 08/26/2019--++++ Last updated : 07/14/2022 + # Scaling in Service Fabric Azure Service Fabric makes it easy to build scalable applications by managing the services, partitions, and replicas on the nodes of a cluster. Running many workloads on the same hardware enables maximum resource utilization, but also provides flexibility in terms of how you choose to scale your workloads. This Channel 9 video describes how you can build scalable microservices applications:
service-fabric Service Fabric Concepts State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-state.md
Title: Manage state in Azure Service Fabric services description: Learn about state in Azure Service Fabric, including how to define and manage service state in Service Fabric services.-- Previously updated : 08/18/2017-++++ Last updated : 07/14/2022 + # Service state **Service state** refers to the in-memory or on-disk data that a service requires to function. It includes, for example, the data structures and member variables that the service reads and writes to do work. Depending on how the service is architected, it could also include files or other resources that are stored on disk. For example, the files a database would use to store data and transaction logs.
service-fabric Service Fabric Concepts Unit Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-unit-testing.md
Title: Unit testing stateful services in Azure Service Fabric description: Learn about the concepts and practices of unit testing Service Fabric Stateful Services.- Previously updated : 09/04/2018++++ Last updated : 07/14/2022 + # Unit testing stateful services in Service Fabric This article covers the concepts and practices of unit testing Service Fabric Stateful Services. Unit testing within Service Fabric deserves its own considerations because the application code actively runs under multiple different contexts. This article describes the practices used to ensure application code is covered under each of the contexts in which it can run.
service-fabric Service Fabric Configure Certificates Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-configure-certificates-linux.md
Title: Configure certificates for applications on Linux description: Configure certificates for your app with the Service Fabric runtime on a Linux cluster-- Previously updated : 09/06/2019--+++++ Last updated : 07/14/2022 + # Certificates and security on Linux clusters This article provides information about configuring X.509 certificates on Linux clusters.
service-fabric Service Fabric Connect And Communicate With Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-connect-and-communicate-with-services.md
Title: Connect and communicate with services in Azure Service Fabric description: Learn how to resolve, connect, and communicate with services in Service Fabric. Previously updated : 11/01/2017-++++ Last updated : 07/14/2022 + # Connect and communicate with services in Service Fabric In Service Fabric, a service runs somewhere in a Service Fabric cluster, typically distributed across multiple VMs. It can be moved from one place to another, either by the service owner, or automatically by Service Fabric. Services are not statically tied to a particular machine or address.
service-fabric Service Fabric Connect To Secure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-connect-to-secure-cluster.md
Title: Connect securely to an Azure Service Fabric cluster description: Describes how to authenticate client access to a Service Fabric cluster and how to secure communication between clients and a cluster.-- Previously updated : 06/22/2022-+++++ Last updated : 07/14/2022 + # Connect to a secure cluster When a client connects to a Service Fabric cluster node, the client can be authenticated and secure communication established using certificate security or Azure Active Directory (AAD). This authentication ensures that only authorized users can access the cluster and deployed applications and perform management tasks. Certificate or AAD security must have been previously enabled on the cluster when the cluster was created. For more information on cluster security scenarios, see [Cluster security](service-fabric-cluster-security.md). If you are connecting to a cluster secured with certificates, [set up the client certificate](service-fabric-connect-to-secure-cluster.md#connectsecureclustersetupclientcert) on the computer that connects to the cluster.
service-fabric Service Fabric Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-containers-overview.md
Title: Overview of Service Fabric and containers description: An overview of Service Fabric and the use of containers to deploy microservice applications. This article provides an overview of how containers can be used and the available capabilities in Service Fabric.- Previously updated : 7/9/2020++++ Last updated : 07/14/2022 # Service Fabric and containers
service-fabric Service Fabric Containers View Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-containers-view-logs.md
Title: View container logs in Azure Service Fabric description: Describes how to view container logs for a running Service Fabric container service using Service Fabric Explorer.-- Previously updated : 05/15/2018+++++ Last updated : 07/14/2022 + # View logs for a Service Fabric container service Azure Service Fabric is a container orchestrator and supports both [Linux and Windows containers](service-fabric-containers-overview.md). This article describes how to view container logs of a running container service or a dead container so that you can diagnose and troubleshoot problems.
service-fabric Service Fabric Containers Volume Logging Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-containers-volume-logging-drivers.md
Title: Azure Files volume driver for Service Fabric description: Service Fabric supports using Azure Files to back up volumes from your container.- Previously updated : 6/10/2018+++++ Last updated : 07/14/2022 # Azure Files volume driver for Service Fabric
service-fabric Service Fabric Content Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-content-roadmap.md
Title: Learn more about Azure Service Fabric description: Learn about the core concepts and major areas of Azure Service Fabric. Provides an extended overview of Service Fabric and how to create microservices. - Previously updated : 12/08/2017++++ Last updated : 07/14/2022 + # So you want to learn about Service Fabric? Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices. Service Fabric has a large surface area, however, and there's a lot to learn. This article provides a synopsis of Service Fabric and describes the core concepts, programming models, application lifecycle, testing, clusters, and health monitoring. Read the [Overview](service-fabric-overview.md) and [What are microservices?](service-fabric-overview-microservices.md) for an introduction and how Service Fabric can be used to create microservices. This article does not contain a comprehensive content list, but does link to overview and getting started articles for every area of Service Fabric.
service-fabric Service Fabric Controlled Chaos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-controlled-chaos.md
Title: Induce Chaos in Service Fabric clusters description: Using Fault Injection and Cluster Analysis Service APIs to manage Chaos in the cluster.- Previously updated : 05/31/2022-+++++ Last updated : 07/14/2022 + # Induce controlled Chaos in Service Fabric clusters Large-scale distributed systems like cloud infrastructures are inherently unreliable. Azure Service Fabric enables developers to write reliable distributed services on top of an unreliable infrastructure. To write robust distributed services on top of an unreliable infrastructure, developers need to be able to test the stability of their services while the underlying unreliable infrastructure is going through complicated state transitions due to faults.
service-fabric Service Fabric Create Cluster Using Cert Cn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-create-cluster-using-cert-cn.md
Title: Create a cluster using certificate common name description: Learn how to create a Service Fabric cluster using certificate common name from a template.-- Previously updated : 09/06/2019 -+++++ Last updated : 07/14/2022 + # Deploy a Service Fabric cluster that uses certificate common name instead of thumbprint No two certificates can have the same thumbprint, which makes cluster certificate rollover or management difficult. Multiple certificates, however, can have the same common name or subject. A cluster using certificate common names makes certificate management much simpler. This article describes how to deploy a Service Fabric cluster to use the certificate common name instead of the certificate thumbprint.
service-fabric Service Fabric Create Your First Linux Application With Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-create-your-first-linux-application-with-csharp.md
Title: Create your first Azure Service Fabric app on Linux using C# description: Learn how to create and deploy a Service Fabric application using C# and .NET Core 2.0.-- Previously updated : 04/11/2018+++++ Last updated : 07/14/2022 + # Create your first Azure Service Fabric application > [!div class="op_single_selector"] > * [Java - Linux (Preview)](service-fabric-create-your-first-linux-application-with-java.md)
service-fabric Service Fabric Create Your First Linux Application With Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-create-your-first-linux-application-with-java.md
Title: Create an Azure Service Fabric reliable actors Java application on Linux description: Learn how to create and deploy a Java Service Fabric reliable actors application in five minutes.-- Previously updated : 06/18/2018-+++++ Last updated : 07/14/2022 # Create your first Java Service Fabric Reliable Actors application on Linux
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cross-availability-zones.md
Title: Deploy a cluster across Availability Zones description: Learn how to create an Azure Service Fabric cluster across Availability Zones.--- Previously updated : 05/13/2022-+++++ Last updated : 07/14/2022 # Deploy an Azure Service Fabric cluster across Availability Zones
service-fabric Service Fabric Debugging Your Application Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-debugging-your-application-java.md
Title: Debug your application in Eclipse description: Improve the reliability and performance of your services by developing and debugging them in Eclipse on a local development cluster.-- Previously updated : 11/02/2017-+++++ Last updated : 07/14/2022 + # Debug your Java Service Fabric application using Eclipse > [!div class="op_single_selector"] > * [Visual Studio/CSharp](service-fabric-debugging-your-application.md)
service-fabric Service Fabric Debugging Your Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-debugging-your-application.md
Title: Debug your application in Visual Studio description: Improve the reliability and performance of your services by developing and debugging them in Visual Studio on a local development cluster.- Previously updated : 05/02/2022-+++++ Last updated : 07/14/2022 + # Debug your Service Fabric application by using Visual Studio > [!div class="op_single_selector"] > * [Visual Studio/CSharp](service-fabric-debugging-your-application.md)
service-fabric Service Fabric Deploy Anywhere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-deploy-anywhere.md
Title: Overview of Azure and standalone Service Fabric clusters description: You can create Service Fabric clusters on any VMs or computers running Windows Server or Linux. This means you are able to deploy and run Service Fabric applications in any environment where you have a set of interconnected Windows Server or Linux computers: on-premises, in Microsoft Azure, or with any cloud provider. Previously updated : 01/07/2020++++ Last updated : 07/14/2022 + # Comparing Azure and standalone Service Fabric clusters on Windows Server and Linux A Service Fabric cluster is a network-connected set of virtual or physical machines into which your microservices are deployed and managed. A machine or VM that is part of a cluster is called a cluster node. Clusters can scale to thousands of nodes. If you add new nodes to the cluster, Service Fabric rebalances the service partition replicas and instances across the increased number of nodes. Overall application performance improves and contention for access to memory decreases. If the nodes in the cluster are not being used efficiently, you can decrease the number of nodes in the cluster. Service Fabric again rebalances the partition replicas and instances across the decreased number of nodes to make better use of the hardware on each node.
service-fabric Service Fabric Deploy Existing App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-deploy-existing-app.md
Title: Deploy an existing executable to Azure Service Fabric description: Learn how to package an existing application as a guest executable, so it can be deployed to a Service Fabric cluster.-- Previously updated : 03/30/2020+++++ Last updated : 07/14/2022 + # Package and deploy an existing executable to Service Fabric When packaging an existing executable as a [guest executable](service-fabric-guest-executables-introduction.md), you can choose either to use a Visual Studio project template or to [create the application package manually](#manually). Using Visual Studio, the application package structure and manifest files are created by the new project template for you.
service-fabric Service Fabric Deploy Remove Applications Fabricclient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-deploy-remove-applications-fabricclient.md
Title: Azure Service Fabric deployment with FabricClient description: Use the FabricClient APIs to deploy and remove applications in Service Fabric.-- Previously updated : 01/19/2018-+++++ Last updated : 07/14/2022 + # Deploy and remove applications using FabricClient > [!div class="op_single_selector"] > * [Resource Manager](service-fabric-application-arm-resource.md)
service-fabric Service Fabric Deploy Remove Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-deploy-remove-applications.md
Title: Azure Service Fabric deployment with PowerShell description: Learn about removing and deploying applications in Azure Service Fabric and how to perform these actions in PowerShell.-- Previously updated : 01/19/2018+++++ Last updated : 07/14/2022 + # Deploy and remove applications using PowerShell > [!div class="op_single_selector"]
service-fabric Service Fabric Develop Csharp Applications With Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-develop-csharp-applications-with-vs-code.md
Title: Develop .NET Core applications with Visual Studio Code description: This article shows how to build, deploy, and debug .NET Core Service Fabric applications using Visual Studio Code. --- Previously updated : 06/29/2018-+++++ Last updated : 07/14/2022 # Develop C# Service Fabric applications with Visual Studio Code
service-fabric Service Fabric Develop Java Applications With Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-develop-java-applications-with-vs-code.md
Title: Develop Java applications with Visual Studio Code description: This article shows how to build, deploy, and debug Java Service Fabric applications using Visual Studio Code. --- Previously updated : 06/29/2018--+++++ Last updated : 07/14/2022 # Develop Java Service Fabric applications with Visual Studio Code
service-fabric Service Fabric Diagnostics Code Package Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-code-package-errors.md
Title: Diagnose common code package errors by using Service Fabric description: Learn how to troubleshoot common code package errors with Azure Service Fabric--- Previously updated : 05/09/2019-+++++ Last updated : 07/14/2022 # Diagnose common code package errors by using Service Fabric
service-fabric Service Fabric Diagnostics Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-common-scenarios.md
Title: Azure Service Fabric Diagnose Common Scenarios description: Learn about troubleshooting common monitoring and diagnostic scenarios within Azure Service Fabric applications.- Previously updated : 02/25/2019+++++ Last updated : 07/14/2022 # Diagnose common scenarios with Service Fabric
service-fabric Service Fabric Diagnostics Event Aggregation Eventflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-aggregation-eventflow.md
Title: Azure Service Fabric Event Aggregation with EventFlow description: Learn about aggregating and collecting events using EventFlow for monitoring and diagnostics of Azure Service Fabric clusters.- Previously updated : 2/25/2019-+++++ Last updated : 07/14/2022 # Event aggregation and collection using EventFlow
service-fabric Service Fabric Diagnostics Event Aggregation Lad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-aggregation-lad.md
Title: Event Aggregation with Linux Azure Diagnostics description: Learn about aggregating and collecting events using LAD for monitoring and diagnostics of Azure Service Fabric clusters.- Previously updated : 2/25/2019+++++ Last updated : 07/14/2022 # Event aggregation and collection using Linux Azure Diagnostics
service-fabric Service Fabric Diagnostics Event Aggregation Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-aggregation-wad.md
Title: Event Aggregation with Windows Azure Diagnostics description: Learn about aggregating and collecting events using WAD for monitoring and diagnostics of Azure Service Fabric clusters.- Previously updated : 04/03/2018 -+++++ Last updated : 07/14/2022 # Event aggregation and collection using Windows Azure Diagnostics
service-fabric Service Fabric Diagnostics Event Analysis Appinsights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-analysis-appinsights.md
Title: Azure Service Fabric Event Analysis with Application Insights description: Learn about visualizing and analyzing events using Application Insights for monitoring and diagnostics of Azure Service Fabric clusters.- Previously updated : 11/21/2018+++++ Last updated : 07/14/2022 # Event analysis and visualization with Application Insights
service-fabric Service Fabric Diagnostics Event Analysis Oms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-analysis-oms.md
Title: Azure Service Fabric Event Analysis with Azure Monitor logs description: Learn about visualizing and analyzing events using Azure Monitor logs for monitoring and diagnostics of Azure Service Fabric clusters.- Previously updated : 02/21/2019+++++ Last updated : 07/14/2022 # Event analysis and visualization with Azure Monitor logs
service-fabric Service Fabric Diagnostics Event Generation App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-app.md
Title: Azure Service Fabric Application Level Monitoring description: Learn about application and service level events and logs used to monitor and diagnose Azure Service Fabric clusters. Previously updated : 11/21/2018++++ Last updated : 07/14/2022 # Application logging
service-fabric Service Fabric Diagnostics Event Generation Infra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-infra.md
Title: Azure Service Fabric Platform Level Monitoring description: Learn about platform level events and logs used to monitor and diagnose Azure Service Fabric clusters. Previously updated : 11/21/2018++++ Last updated : 07/14/2022 # Monitoring the cluster
service-fabric Service Fabric Diagnostics Event Generation Operational https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-operational.md
Title: Azure Service Fabric Event List description: Comprehensive list of events provided by Azure Service Fabric to help monitor clusters. Previously updated : 2/25/2019++++ Last updated : 07/14/2022 # List of Service Fabric events
service-fabric Service Fabric Diagnostics Event Generation Perf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-generation-perf.md
Title: Azure Service Fabric Performance Monitoring description: Learn about performance counters for monitoring and diagnostics of Azure Service Fabric clusters. Previously updated : 11/21/2018++++ Last updated : 07/14/2022 # Performance metrics
service-fabric Service Fabric Diagnostics Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-events.md
Title: Azure Service Fabric Events description: Learn about the Service Fabric events provided out of the box to help you monitor your Azure Service Fabric cluster. Previously updated : 11/21/2018++++ Last updated : 07/14/2022
# Service Fabric events
The Service Fabric platform writes several structured events for key operational activities happening within your cluster. These range from cluster upgrades to replica placement decisions. Each event that Service Fabric exposes maps to one of the following entities in the cluster: * Cluster * Application * Service * Partition
* Replica
* Container To see a full list of events exposed by the platform, see [List of Service Fabric events](service-fabric-diagnostics-event-generation-operational.md).
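As a rough sketch of how these events can be consumed programmatically (this example is not part of the article above), the following Java snippet reads cluster-scoped events from the EventStore REST endpoint. The address, time window, and `api-version` value are assumptions to adjust for your own cluster, and a secured cluster would additionally require certificate or Azure AD credentials:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ClusterEventsQuery {
    public static void main(String[] args) throws Exception {
        // Assumed: an unsecured local development cluster on the default
        // HTTP gateway port 19080, queried for a two-week window.
        String url = "http://localhost:19080/EventsStore/Cluster/Events"
                + "?api-version=6.4"
                + "&StartTimeUtc=2022-07-01T00:00:00Z"
                + "&EndTimeUtc=2022-07-14T00:00:00Z";

        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");

        // The response is a JSON array; each event carries a Kind field
        // identifying its type and the entity it maps to.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            in.lines().forEach(System.out::println);
        }
    }
}
```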
service-fabric Service Fabric Diagnostics Eventstore Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-eventstore-query.md
Title: Query for cluster events using the EventStore APIs description: Learn how to use the Azure Service Fabric EventStore APIs to query for platform events- Previously updated : 02/25/2019-+++++ Last updated : 07/14/2022 # Query EventStore APIs for cluster events
service-fabric Service Fabric Diagnostics Eventstore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-eventstore.md
Title: Azure Service Fabric Event Store description: Learn about Azure Service Fabric's EventStore, a way to understand and monitor the state of a cluster or workloads at any time. Previously updated : 6/6/2019++++ Last updated : 07/14/2022 # EventStore Overview
service-fabric Service Fabric Diagnostics How To Monitor And Diagnose Services Locally Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally-linux.md
Title: Debug Azure Service Fabric apps in Linux description: Learn how to monitor and diagnose your Service Fabric services on a local Linux development machine.-- Previously updated : 2/23/2018-+++++ Last updated : 07/14/2022 # Monitor and diagnose services in a local Linux machine development setup - > [!div class="op_single_selector"] > * [Windows](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md) > * [Linux](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally-linux.md)
service-fabric Service Fabric Diagnostics How To Monitor And Diagnose Services Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md
Title: Debug Azure Service Fabric apps in Windows description: Learn how to monitor and diagnose your services written using Microsoft Azure Service Fabric on a local development machine.- Previously updated : 02/25/2019+++++ Last updated : 07/14/2022 + # Monitor and diagnose services in a local machine development setup+ > [!div class="op_single_selector"] > * [Windows](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md) > * [Linux](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally-linux.md)
Monitoring, detecting, diagnosing, and troubleshooting allow services to continue with minimal disruption to the user experience. While monitoring and diagnostics are critical in a deployed production environment, their effectiveness depends on adopting a similar model during development so that services are proven to work when you move to a real-world setup. Service Fabric makes it easy for service developers to implement diagnostics that work seamlessly across both single-machine local development setups and real-world production cluster setups.
service-fabric Service Fabric Diagnostics How To Report And Check Service Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-how-to-report-and-check-service-health.md
Title: Report and check health with Azure Service Fabric description: Learn how to send health reports from your service code and how to check the health of your service by using the health monitoring tools that Azure Service Fabric provides.- Previously updated : 02/25/2019-+++++ Last updated : 07/14/2022 # Report and check service health When your services encounter problems, your ability to respond to and fix incidents and outages depends on your ability to detect the issues quickly. If you report problems and failures to the Azure Service Fabric health manager from your service code, you can use standard health monitoring tools that Service Fabric provides to check the health status.
service-fabric Service Fabric Diagnostics Oms Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-agent.md
Title: Performance Monitoring with Azure Monitor logs description: Learn how to set up the Log Analytics Agent for monitoring containers and performance counters for your Azure Service Fabric clusters.- Previously updated : 04/16/2018+++++ Last updated : 07/14/2022 # Performance Monitoring with Azure Monitor logs
service-fabric Service Fabric Diagnostics Oms Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-containers.md
Title: Monitor containers with Azure Monitor logs description: Use Azure Monitor logs for monitoring containers running on Azure Service Fabric clusters.- Previously updated : 02/25/2019+++++ Last updated : 07/14/2022 # Monitor containers with Azure Monitor logs
service-fabric Service Fabric Diagnostics Oms Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-setup.md
Title: Set up monitoring with Azure Monitor logs description: Learn how to set up Azure Monitor logs for visualizing and analyzing events to monitor your Azure Service Fabric clusters.- Previously updated : 02/20/2019 -+++++ Last updated : 07/14/2022 # Set up Azure Monitor logs for a cluster
service-fabric Service Fabric Diagnostics Oms Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-syslog.md
Title: Monitor Linux cluster events in Azure Service Fabric description: Learn how to monitor Service Fabric Linux cluster events by writing Service Fabric platform events to Syslog.- Previously updated : 10/23/2018+++++ Last updated : 07/14/2022 # Service Fabric Linux cluster events in Syslog
service-fabric Service Fabric Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-overview.md
Title: Azure Service Fabric Monitoring and Diagnostics Overview description: Learn about monitoring and diagnostics for Azure Service Fabric clusters, applications, and services. Previously updated : 1/17/2019++++ Last updated : 07/14/2022 # Monitoring and diagnostics for Azure Service Fabric
service-fabric Service Fabric Diagnostics Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-partners.md
Title: Azure Service Fabric Monitoring Partners description: Learn how to monitor Azure Service Fabric applications, clusters, and infrastructure with partner monitoring solutions.- Previously updated : 10/16/2018+++++ Last updated : 07/14/2022 # Azure Service Fabric Monitoring Partners
service-fabric Service Fabric Diagnostics Perf Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-perf-wad.md
Title: Performance monitoring with Windows Azure Diagnostics description: Use Windows Azure Diagnostics to collect performance counters for your Azure Service Fabric clusters.- Previously updated : 11/21/2018+++++ Last updated : 07/14/2022 # Performance monitoring with the Windows Azure Diagnostics extension
service-fabric Service Fabric Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-disaster-recovery.md
Title: Azure Service Fabric disaster recovery description: Azure Service Fabric offers capabilities to deal with disasters. This article describes the types of disasters that can occur and how to deal with them.--- Previously updated : 08/18/2017-+++++ Last updated : 07/14/2022 # Disaster recovery in Azure Service Fabric A critical part of delivering high availability is ensuring that services can survive all different types of failures. This is especially important for failures that are unplanned and outside your control.
service-fabric Service Fabric Dnsservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-dnsservice.md
Title: Azure Service Fabric DNS service description: Use Service Fabric's DNS service for discovering microservices from inside the cluster.- Previously updated : 7/20/2018-++++ Last updated : 07/14/2022 + # DNS Service in Azure Service Fabric The DNS Service is an optional system service that you can enable in your cluster to discover other services using the DNS protocol.
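For example, once the DNS Service is enabled and a service has a DNS name configured in its manifest, other code running in the cluster can locate it with an ordinary DNS lookup. A minimal Java sketch, where `statefulsvc.myapp` is a hypothetical service DNS name:

```java
import java.net.InetAddress;

public class ResolveServiceDns {
    public static void main(String[] args) throws Exception {
        // Hypothetical DNS name; it must match the DNS name configured for
        // the target service, and the lookup only works from inside a
        // cluster where the DNS Service is enabled.
        InetAddress address = InetAddress.getByName("statefulsvc.myapp");
        System.out.println("Resolved to " + address.getHostAddress());
    }
}
```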
service-fabric Service Fabric Docker Compose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-docker-compose.md
Title: Azure Service Fabric Docker Compose Deployment Preview description: Azure Service Fabric accepts Docker Compose format to make it easier to orchestrate existing containers using Service Fabric. This support is currently in preview.- Previously updated : 2/23/2018++++ Last updated : 07/14/2022 + # Docker Compose deployment support in Azure Service Fabric (Preview) Docker uses the [docker-compose.yml](https://docs.docker.com/compose) file for defining multi-container applications. To make it easy for customers familiar with Docker to orchestrate existing container applications on Azure Service Fabric, we have included preview support for Docker Compose deployment natively in the platform. Service Fabric can accept version 3 and later of `docker-compose.yml` files.
service-fabric Service Fabric Enable Azure Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-enable-azure-disk-encryption-linux.md
Title: Enable disk encryption for Linux clusters description: This article describes how to enable disk encryption for Azure Service Fabric cluster nodes in Linux by using Azure Resource Manager and Azure Key Vault.-- Previously updated : 03/22/2019 -+++++ Last updated : 07/14/2022 + # Enable disk encryption for Azure Service Fabric cluster nodes in Linux > [!div class="op_single_selector"] > * [Disk Encryption for Linux](service-fabric-enable-azure-disk-encryption-linux.md)
service-fabric Service Fabric Enable Azure Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-enable-azure-disk-encryption-windows.md
Title: Enable disk encryption for Windows clusters description: This article describes how to enable disk encryption for Azure Service Fabric cluster nodes by using Azure Key Vault in Azure Resource Manager.-- Previously updated : 03/22/2019 -+++++ Last updated : 07/14/2022 + # Enable disk encryption for Azure Service Fabric cluster nodes in Windows > [!div class="op_single_selector"] > * [Disk Encryption for Windows](service-fabric-enable-azure-disk-encryption-windows.md)
service-fabric Service Fabric Environment Variables Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-environment-variables-reference.md
Title: Azure Service Fabric environment variables description: Learn about environment variables in Azure Service Fabric. Contains a reference of a full list of variables and their uses. Previously updated : 12/07/2017++++ Last updated : 07/14/2022 + # Service Fabric environment variables Service Fabric has built-in environment variables set for each service instance. The full list of environment variables is below:
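For example, a service can read the built-in variables through the standard environment API. A minimal Java sketch follows; the names shown are a small illustrative subset rather than the full list, and each lookup returns null when the code runs outside a Service Fabric node:

```java
public class FabricEnvironment {
    public static void main(String[] args) {
        // A few of the built-in variables (illustrative subset).
        String[] names = {
            "Fabric_ApplicationName",
            "Fabric_ServiceName",
            "Fabric_NodeName",
            "Fabric_CodePackageName"
        };
        for (String name : names) {
            System.out.println(name + " = " + System.getenv(name));
        }
    }
}
```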
service-fabric Service Fabric Errors And Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-errors-and-exceptions.md
Title: Common FabricClient exceptions thrown description: Describes the common exceptions and errors that can be thrown by the FabricClient APIs while performing application and cluster management operations.- Previously updated : 06/20/2018+++++ Last updated : 07/14/2022 + # Common exceptions and errors when working with the FabricClient APIs The [FabricClient](/dotnet/api/system.fabric.fabricclient) APIs enable cluster and application administrators to perform administrative tasks on a Service Fabric application, service, or cluster. For example, application deployment, upgrade, and removal, checking the health of a cluster, or testing a service. Application developers and cluster administrators can use the FabricClient APIs to develop tools for managing the Service Fabric cluster and applications.
service-fabric Service Fabric Get Started Containers Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-containers-linux.md
Title: Create an Azure Service Fabric container application on Linux description: Create your first Linux container application on Azure Service Fabric. Build a Docker image with your application, push the image to a container registry, build and deploy a Service Fabric container application.-- Previously updated : 1/4/2019-+++++ Last updated : 07/14/2022 # Create your first Service Fabric container application on Linux
service-fabric Service Fabric Get Started Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-containers.md
Title: Create an Azure Service Fabric container application description: Create your first Windows container application on Azure Service Fabric. Build a Docker image with a Python application, push the image to a container registry, then build and deploy the container to Azure Service Fabric.-- Previously updated : 01/25/2019-+++++ Last updated : 07/14/2022 # Create your first Service Fabric container application on Windows
service-fabric Service Fabric Get Started Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-eclipse.md
Title: Azure Service Fabric plug-in for Eclipse description: Learn about getting started with Azure Service Fabric in Java using eclipse and the Service Fabric provided plug-in. - Previously updated : 04/06/2018-+++++ Last updated : 07/14/2022 # Service Fabric plug-in for Eclipse Java application development
service-fabric Service Fabric Get Started Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-linux.md
Title: Set up your development environment on Linux description: Install the runtime and SDK and create a local development cluster on Linux. After completing this setup, you'll be ready to build applications.-- Previously updated : 10/16/2020-+++++ Last updated : 07/14/2022 # Maintainer notes: Keep these documents in sync: # service-fabric-get-started-linux.md
# service-fabric-local-linux-cluster-windows.md # service-fabric-local-linux-cluster-windows-wsl2.md + # Prepare your development environment on Linux > [!div class="op_single_selector"] > * [Windows](service-fabric-get-started.md)
service-fabric Service Fabric Get Started Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-mac.md
Title: Set up your dev environment on macOS description: Install the runtime, SDK, and tools and create a local development cluster. After completing this setup, you'll be ready to build applications on macOS.-- Previously updated : 10/16/2020-+++++ Last updated : 07/14/2022 # Maintainer notes: Keep these documents in sync: # service-fabric-get-started-linux.md # service-fabric-get-started-mac.md # service-fabric-local-linux-cluster-windows.md + # Set up your development environment on Mac OS X > [!div class="op_single_selector"] > * [Windows](service-fabric-get-started.md)
service-fabric Service Fabric Get Started Tomcat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-tomcat.md
Title: Create a container for Apache Tomcat on Linux description: Create Linux container to expose an application running on Apache Tomcat server on Azure Service Fabric. Build a Docker image with your application and Apache Tomcat server, push the image to a container registry, build and deploy a Service Fabric container application.-- Previously updated : 6/08/2018-+++++ Last updated : 07/14/2022 # Create Service Fabric container running Apache Tomcat server on Linux
service-fabric Service Fabric Get Started Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-vs-code.md
Title: Azure Service Fabric with VS Code Getting Started description: This article is an overview of creating Service Fabric applications using Visual Studio Code. --- Previously updated : 06/29/2018--+++++ Last updated : 07/14/2022 # Service Fabric for Visual Studio Code
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
Title: Set up a Windows development environment description: Install the runtime, SDK, and tools and create a local development cluster. After completing this setup, you will be ready to build applications on Windows.-- Previously updated : 06/16/2020-+++++ Last updated : 07/14/2022 + # Prepare your development environment on Windows > [!div class="op_single_selector"]
If you only need the SDK, you can install this package:
The current versions are:
-* Service Fabric SDK and Tools 6.0.1028
-* Service Fabric runtime 9.0.1028
+* Service Fabric SDK and Tools 6.0.1048
+* Service Fabric runtime 9.0.1048
For a list of supported versions, see [Service Fabric versions](service-fabric-versions.md).
service-fabric Service Fabric Guest Executables Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-guest-executables-introduction.md
Title: Package an existing executable to Azure Service Fabric description: Learn about packaging an existing application as a guest executable, so it can be deployed to a Service Fabric cluster.- Previously updated : 03/15/2018++++ Last updated : 07/14/2022 + # Deploy an existing executable to Service Fabric You can run any type of code, such as Node.js, Java, or C++ in Azure Service Fabric as a service. Service Fabric refers to these types of services as guest executables.
service-fabric Service Fabric Health Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-health-introduction.md
Title: Health monitoring in Service Fabric description: An introduction to the Azure Service Fabric health monitoring model, which provides monitoring of the cluster and its applications and services. Previously updated : 2/28/2018++++ Last updated : 07/14/2022 + # Introduction to Service Fabric health monitoring Azure Service Fabric introduces a health model that provides rich, flexible, and extensible health evaluation and reporting. The model allows near-real-time monitoring of the state of the cluster and the services running in it. You can easily obtain health information and correct potential issues before they cascade and cause massive outages. In the typical model, services send reports based on their local views, and that information is aggregated to provide an overall cluster-level view.
service-fabric Service Fabric Host App In A Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-host-app-in-a-container.md
Title: Deploy a .NET app in a container to Azure Service Fabric description: Learn how to containerize an existing .NET application using Visual Studio and debug containers in Service Fabric locally. The containerized application is pushed to an Azure container registry and deployed to a Service Fabric cluster. When deployed to Azure, the application uses Azure SQL DB to persist data.- Previously updated : 07/08/2019 -++++ Last updated : 07/14/2022 # Tutorial: Deploy a .NET application in a Windows container to Azure Service Fabric
service-fabric Service Fabric Hosting Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-hosting-lifecycle.md
Title: Azure Service Fabric hosting activation and deactivation life cycle description: Learn about the life cycle of an application and a ServicePackage on a node. -- Previously updated : 05/01/2020-++++ Last updated : 07/14/2022 + # Azure Service Fabric hosting life cycle This article provides an overview of events that happen in Azure Service Fabric when an application is activated on a node. It explains various cluster configurations that control the behavior.
service-fabric Service Fabric Hosting Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-hosting-model.md
Title: Azure Service Fabric hosting model description: Describes the relationship between replicas (or instances) of a deployed Service Fabric service and the service-host process.-- Previously updated : 04/15/2017--++++ Last updated : 07/14/2022 + # Azure Service Fabric hosting model This article provides an overview of application hosting models provided by Azure Service Fabric, and describes the differences between the **Shared Process** and **Exclusive Process** models. It describes how a deployed application looks on a Service Fabric node, and the relationship between replicas (or instances) of the service and the service-host process.
service-fabric Service Fabric How To Debug Windows Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-debug-windows-containers.md
Title: Debug Windows containers with Service Fabric and VS description: Learn how to debug Windows containers in Azure Service Fabric using Visual Studio 2019.-- Previously updated : 02/14/2019+++++ Last updated : 07/14/2022 + # How to: Debug Windows containers in Azure Service Fabric using Visual Studio 2019 With Visual Studio 2019, you can debug .NET applications in containers as Service Fabric services. This article shows you how to configure your environment and then debug a .NET application in a container running in a local Service Fabric cluster.
service-fabric Service Fabric How To Diagnostics Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-diagnostics-log.md
Title: Generate log events from a .NET app description: Learn how to add logging to your .NET Service Fabric application hosted on an Azure cluster or a standalone cluster.- Previously updated : 03/27/2018-+++++ Last updated : 07/14/2022 # Add logging to your Service Fabric application
service-fabric Service Fabric How To Parameterize Configuration Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-parameterize-configuration-files.md
Title: Parameterize config files in Azure Service Fabric description: Learn how to parameterize configuration files in Service Fabric, a useful technique when managing multiple environments.- Previously updated : 10/09/2018+++++ Last updated : 07/14/2022 + # How to parameterize configuration files in Service Fabric This article shows you how to parameterize a configuration file in Service Fabric. If you're not already familiar with the core concepts of managing applications for multiple environments, read [Manage applications for multiple environments](service-fabric-manage-multiple-environment-app-configuration.md).
service-fabric Service Fabric How To Publish Linux App Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-publish-linux-app-vs.md
Title: Create and publish a .NET Core app to a remote Linux cluster description: Create and publish .NET Core apps targeting a remote Linux cluster from Visual Studio--- Previously updated : 5/20/2019-+++++ Last updated : 07/14/2022 + # Use Visual Studio to create and publish .NET Core applications targeting a remote Linux Service Fabric cluster With Visual Studio tooling you can develop and publish Service Fabric .NET Core applications targeting a Linux Service Fabric cluster. The SDK version must be 3.4 or above to deploy a .NET Core application targeting Linux Service Fabric clusters from Visual Studio.
service-fabric Service Fabric How To Remove Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-remove-node-type.md
Title: Remove a node type in Azure Service Fabric | Microsoft Docs description: Learn how to remove a node type from a Service Fabric cluster running in Azure.--- Previously updated : 08/11/2020-+++++ Last updated : 07/14/2022 # How to remove a Service Fabric node type
service-fabric Service Fabric How To Specify Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-specify-environment-variables.md
Title: Specify environment variables for services description: Shows you how to use environment variables for applications in Service Fabric- Previously updated : 12/06/2017+++++ Last updated : 07/14/2022 + # How to specify environment variables for services in Service Fabric This article shows you how to specify environment variables for a service or container in Service Fabric.
service-fabric Service Fabric How To Specify Port Number Using Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-specify-port-number-using-parameters.md
Title: Specify port number of a service using parameters description: Shows you how to use parameters to specify the port for an application in Service Fabric- Previously updated : 12/06/2017+++++ Last updated : 07/14/2022 + # How to specify the port number of a service using parameters in Service Fabric This article shows you how to use parameters to specify the port number of a service in Service Fabric with Visual Studio.
service-fabric Service Fabric How To Unit Test Stateful Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-how-to-unit-test-stateful-services.md
Title: Develop unit tests for stateful services description: Learn about unit testing in Azure Service Fabric for stateful services, and special considerations to keep in mind during development.-- Previously updated : 09/04/2018-+++++ Last updated : 07/14/2022 # Create unit tests for Stateful Services
service-fabric Service Fabric Image Store Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-image-store-connection-string.md
Title: Azure Service Fabric image store connection string description: Learn about the image store connection string, including its uses and applications to a Service Fabric cluster.-- Previously updated : 02/27/2018-++++ Last updated : 07/14/2022 + # Understand the ImageStoreConnectionString setting In some of our documentation, we briefly mention the existence of an "ImageStoreConnectionString" parameter without describing what it really means. And after going through an article like [Deploy and remove applications using PowerShell][10], it looks like all you do is copy/paste the value as shown in the cluster manifest of the target cluster. So the setting must be configurable per cluster, but when you create a cluster through the [Azure portal][11], there's no option to configure this setting and it's always "fabric:ImageStore". What's the purpose of this setting then?
service-fabric Service Fabric Java Rest Api Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-java-rest-api-usage.md
+
+ Title: Azure Service Fabric Java Client APIs
+description: Generate and use Service Fabric Java client APIs using Service Fabric client REST API specification
+++++ Last updated : 07/14/2022++
+# Azure Service Fabric Java Client APIs
+
+Service Fabric client APIs allow deploying and managing microservices-based applications and containers in a Service Fabric cluster on Azure, on-premises, on a local development machine, or in other clouds. This article describes how to generate and use Service Fabric Java client APIs on top of the Service Fabric client REST APIs.
+
+## Generate the client code using AutoRest
+
+[AutoRest](https://github.com/Azure/autorest) is a tool that generates client libraries for accessing RESTful web services. Input to AutoRest is a specification that describes the REST API using the OpenAPI Specification format. [Service Fabric client REST APIs](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/servicefabric/data-plane) follow this specification.
+
+Follow these steps to generate Service Fabric Java client code using the AutoRest tool.
+
+1. Install Node.js and NPM on your machine.
+
+    If you're using Linux:
+ ```bash
+ sudo apt-get install npm
+ sudo apt install nodejs
+ ```
+    If you're using macOS:
+ ```bash
+ brew install node
+ ```
+
+2. Install AutoRest using NPM.
+ ```bash
+ npm install -g autorest
+ ```
+
+3. Fork and clone the [azure-rest-api-specs](https://github.com/Azure/azure-rest-api-specs) repository to your local machine, and go to the cloned location in a terminal.
++
+4. Go to the following location in your cloned repo.
+    ```bash
+    cd specification/servicefabric/data-plane/Microsoft.ServiceFabric/stable/6.0
+    ```
+
+ > [!NOTE]
+ > If your cluster version is not 6.0.* then go to the appropriate directory in the stable folder.
+ >
+
+5. Run the following autorest command to generate the Java client code.
+
+ ```bash
+    autorest --input-file=servicefabric.json --java --output-folder=[output-folder-name] --namespace=[namespace-of-generated-client]
+ ```
+    The following example demonstrates the usage of AutoRest.
+
+ ```bash
+ autorest --input-file=servicefabric.json --java --output-folder=java-rest-api-code --namespace=servicefabricrest
+ ```
+
+    The preceding command takes the ``servicefabric.json`` specification file as input, generates Java client code in the ``java-rest-api-code`` folder, and encloses the code in the ``servicefabricrest`` namespace. After this step, you'll find two folders, ``models`` and ``implementation``, and two files, ``ServiceFabricClientAPIs.java`` and ``package-info.java``, generated in the ``java-rest-api-code`` folder.
++
+## Include and use the generated client in your project
+
+1. Add the generated code into your project. We recommend that you create a library from the generated code and include that library in your project.
+2. If you're creating a library, include the following dependency in your library's project. If you're following a different approach, include the dependency appropriately.
+
+ ```
+ GroupId: com.microsoft.rest
+    ArtifactId: client-runtime
+ Version: 1.2.1
+ ```
+    For example, if you're using the Maven build system, include the following in your ``pom.xml`` file:
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.rest</groupId>
+ <artifactId>client-runtime</artifactId>
+ <version>1.2.1</version>
+ </dependency>
+ ```
+
+3. Create a RestClient using the following code:
+
+ ```java
+ RestClient simpleClient = new RestClient.Builder()
+ .withBaseUrl("http://<cluster-ip or name:port>")
+ .withResponseBuilderFactory(new ServiceResponseBuilder.Factory())
+ .withSerializerAdapter(new JacksonAdapter())
+ .build();
+ ServiceFabricClientAPIs client = new ServiceFabricClientAPIsImpl(simpleClient);
+ ```
+4. Use the client object to make the appropriate calls as required. The following examples demonstrate the usage of the client object. They assume that the application package is built and uploaded into the image store before you use these APIs.
+ * Provision an application
+
+ ```java
+ ApplicationTypeImageStorePath imageStorePath = new ApplicationTypeImageStorePath();
+ imageStorePath.withApplicationTypeBuildPath("<application-path-in-image-store>");
+ client.provisionApplicationType(imageStorePath);
+ ```
+ * Create an application
+
+ ```java
+ ApplicationDescription applicationDescription = new ApplicationDescription();
+ applicationDescription.withName("<application-uri>");
+ applicationDescription.withTypeName("<application-type>");
+ applicationDescription.withTypeVersion("<application-version>");
+ client.createApplication(applicationDescription);
+ ```
+
+## Understanding the generated code
+For every API, you'll find four overloads of the implementation. If there are optional parameters, you'll find four more variations that include those optional parameters. For example, consider the ``removeReplica`` API.
+ 1. **public void removeReplica(String nodeName, UUID partitionId, String replicaId, Boolean forceRemove, Long timeout)**
+    * This is the synchronous variant of the removeReplica API call.
+ 2. **public ServiceFuture\<Void> removeReplicaAsync(String nodeName, UUID partitionId, String replicaId, Boolean forceRemove, Long timeout, final ServiceCallback\<Void> serviceCallback)**
+    * Use this variant if you want future-based asynchronous programming with callbacks.
+ 3. **public Observable\<Void> removeReplicaAsync(String nodeName, UUID partitionId, String replicaId)**
+    * Use this variant if you want reactive asynchronous programming.
+ 4. **public Observable\<ServiceResponse\<Void>> removeReplicaWithServiceResponseAsync(String nodeName, UUID partitionId, String replicaId)**
+    * Use this variant if you want reactive asynchronous programming and need to deal with the raw REST response.
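+
+As a rough sketch of consuming the reactive variant (assuming the ``client`` object created earlier and the RxJava 1 ``Observable`` type that the ``client-runtime`` library uses; the node, partition, and replica values are placeholders):
+
+```java
+import java.util.UUID;
+
+UUID partitionId = UUID.fromString("<partition-id>");
+client.removeReplicaAsync("<node-name>", partitionId, "<replica-id>")
+    .subscribe(
+        ignored -> { },                                           // a Void stream emits no items
+        error -> System.err.println("removeReplica failed: " + error),
+        () -> System.out.println("Replica removal requested"));
+```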
+
+## Next steps
+* Learn about [Service Fabric REST APIs](/rest/api/servicefabric/)
service-fabric Service Fabric Keyvault References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-keyvault-references.md
Title: Azure Service Fabric - Using Service Fabric application KeyVault references description: This article explains how to use service fabric KeyVaultReference support for application secrets.-- Previously updated : 09/20/2019+++++ Last updated : 07/14/2022 # KeyVaultReference support for Azure-deployed Service Fabric Applications
service-fabric Service Fabric Linux Windows Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-linux-windows-differences.md
Title: Azure Service Fabric differences between Linux and Windows description: Differences between the Azure Service Fabric on Linux and Azure Service Fabric on Windows. Previously updated : 2/23/2018++++ Last updated : 07/14/2022 + # Differences between Service Fabric on Linux and Windows There are some features that are supported on Windows, but not yet on Linux. Eventually, the feature sets will be at parity and with each release this feature gap will shrink. The following differences exist between the latest available releases.
service-fabric Service Fabric Local Linux Cluster Windows Wsl2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-local-linux-cluster-windows-wsl2.md
Title: Set up Azure Service Fabric Linux cluster on WSL2 linux distribution inside Windows description: This article covers how to set up Service Fabric Linux clusters inside WSL2 linux distribution running on Windows development machines. This approach is useful for cross-platform development.-- Previously updated : 10/31/2021+++++ Last updated : 07/14/2022 # Maintainer notes: Keep these documents in sync: # service-fabric-get-started-linux.md
Last updated 10/31/2021
# service-fabric-local-linux-cluster-windows.md # service-fabric-local-linux-cluster-windows-wsl2.md + # Set up a Linux Service Fabric cluster via WSL2 on your Windows developer machine This document covers how to set up a local Linux Service Fabric cluster via WSL2 on a Windows development machine. Setting up a local Linux cluster is useful to quickly test applications targeted for Linux clusters but are developed on a Windows machine.
service-fabric Service Fabric Local Linux Cluster Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-local-linux-cluster-windows.md
Title: Set up Azure Service Fabric Linux cluster on Windows description: This article covers how to set up Service Fabric Linux clusters running on Windows development machines. This approach is useful for cross platform development. -- Previously updated : 10/16/2020+++++ Last updated : 07/14/2022 # Maintainer notes: Keep these documents in sync: # service-fabric-get-started-linux.md
Last updated 10/16/2020
# service-fabric-local-linux-cluster-windows.md # service-fabric-local-linux-cluster-windows-wsl2.md + # Set up a Linux Service Fabric cluster on your Windows developer machine This document covers how to set up a local Linux Service Fabric cluster on a Windows development machine. Setting up a local Linux cluster is useful to quickly test applications targeted for Linux clusters but are developed on a Windows machine.
service-fabric Service Fabric Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-overview.md
Title: Overview of Azure Service Fabric description: Service Fabric is a distributed systems platform for building scalable, reliable, and easily managed microservices. Previously updated : 09/22/2020-++++ Last updated : 07/11/2022 # Overview of Azure Service Fabric
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support |
| --- | --- | --- | --- | --- | --- | --- |
+| 9.0 CU2<br>9.0.1048.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
| 9.0 CU1<br>9.0.1028.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
| 9.0 RTO<br>9.0.1017.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 6.0 | .NET 6.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
+| 8.2 CU4<br>8.2.1659.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU3<br>8.2.1620.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU2.1<br>8.2.1571.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 6.0 (Preview), .NET 5.0, >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 1, 2022 |
Support for Service Fabric on a specific OS ends when support for the OS version
| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support |
| --- | --- | --- | --- | --- | --- | --- |
+| 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
| 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
| 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
+| 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2022 |
The following table lists the version names of Service Fabric and their correspo
| Version name | Windows version number | Linux version number |
| --- | --- | --- |
+| 9.0 CU2 | 9.0.1048.9590 | 9.0.1056.1 |
| 9.0 CU1 | 9.0.1028.9590 | 9.0.1035.1 |
| 9.0 RTO | 9.0.1017.9590 | 9.0.1018.1 |
+| 8.2 CU4 | 8.2.1659.9590 | 8.2.1458.1 |
| 8.2 CU3 | 8.2.1620.9590 | 8.2.1434.1 |
| 8.2 CU2.1 | 8.2.1571.9590 | 8.2.1397.1 |
| 8.2 CU2 | 8.2.1486.9590 | 8.2.1285.1 |
storage Access Tiers Online Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-online-manage.md
description: Learn how to specify a blob's access tier when you upload it, or ho
Previously updated : 03/02/2022 Last updated : 07/13/2022
When you upload a blob to Azure Storage, you have two options for setting the bl
- You can explicitly specify the tier in which the blob will be created. This setting overrides the default access tier for the storage account. You can set the tier for a blob or set of blobs on upload to Hot, Cool, or Archive.
- You can upload a blob without specifying a tier. In this case, the blob will be created in the default access tier specified for the storage account (either Hot or Cool).
+If you are uploading a new blob that uses an encryption scope, you cannot change the access tier for that blob.
+ The following sections describe how to specify that a blob is uploaded to either the Hot or Cool tier. For more information about archiving a blob on upload, see [Archive blobs on upload](archive-blob.md#archive-blobs-on-upload). ### Upload a blob to a specific online tier
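+
+As an illustrative sketch (not part of this article's portal steps; it assumes the v12 Azure Blob Storage SDK for Java, and the connection string, container, and blob names are placeholders), the tier can be set at upload time:
+
+```java
+import com.azure.core.util.BinaryData;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobClientBuilder;
+import com.azure.storage.blob.models.AccessTier;
+import com.azure.storage.blob.options.BlobParallelUploadOptions;
+
+BlobClient blobClient = new BlobClientBuilder()
+    .connectionString("<connection-string>")
+    .containerName("<container-name>")
+    .blobName("sample-blob.txt")
+    .buildClient();
+
+// Create the blob directly in the Cool tier instead of the account's default tier.
+blobClient.uploadWithResponse(
+    new BlobParallelUploadOptions(BinaryData.fromString("sample data"))
+        .setTier(AccessTier.COOL),
+    null, null);
+```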
Use PowerShell, Azure CLI, or one of the Azure Storage client libraries to move
When you change a blob's tier, you move that blob and all of its data to the target tier. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you are changing a blob's tier from a hotter tier to a cooler one.
+You cannot change the access tier for an existing blob that uses an encryption scope. For more information about encryption scopes, see [Encryption scopes for Blob storage](encryption-scope-overview.md).
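+
+For reference, a minimal sketch of calling Set Blob Tier through the v12 Java SDK, assuming a ``blobClient`` built as in the earlier sketch:
+
+```java
+import com.azure.storage.blob.models.AccessTier;
+
+// Move an existing blob to the Cool tier; per the note above, this
+// isn't permitted for blobs that use an encryption scope.
+blobClient.setAccessTier(AccessTier.COOL);
+```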
+ # [Portal](#tab/azure-portal) To change a blob's tier from Hot to Cool in the Azure portal, follow these steps:
storage Access Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md
description: Azure storage offers different access tiers so that you can store y
Previously updated : 06/16/2022 Last updated : 07/13/2022
To explicitly set a blob's tier when you create it, specify the tier when you up
After a blob is created, you can change its tier in either of the following ways: -- By calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, either directly or via a [lifecycle management](#blob-lifecycle-management) policy. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you're changing a blob's tier from a hotter tier to a cooler one.
+- By calling the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation, either directly or via a [lifecycle management](#blob-lifecycle-management) policy. Calling [Set Blob Tier](/rest/api/storageservices/set-blob-tier) is typically the best option when you're changing a blob's tier from a hotter tier to a cooler one.
- By calling the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy a blob from one tier to another. Calling [Copy Blob](/rest/api/storageservices/copy-blob) is recommended for most scenarios where you're rehydrating a blob from the Archive tier to an online tier, or moving a blob from Cool to Hot. By copying a blob, you can avoid the early deletion penalty, if the required storage interval for the source blob hasn't yet elapsed. However, copying a blob results in capacity charges for two blobs, the source blob and the destination blob. Changing a blob's tier from Hot to Cool or Archive is instantaneous, as is changing from Cool to Hot. Rehydrating a blob from the Archive tier to either the Hot or Cool tier can take up to 15 hours.
-Keep in mind the following points when moving a blob between the Cool and Archive tiers:
+Keep in mind the following points when changing a blob's tier:
+- You cannot call **Set Blob Tier** on a blob that uses an encryption scope. For more information about encryption scopes, see [Encryption scopes for Blob storage](encryption-scope-overview.md).
- If a blob's tier is inferred as Cool based on the storage account's default access tier and the blob is moved to the Archive tier, there's no early deletion charge. - If a blob is explicitly moved to the Cool tier and then moved to the Archive tier, the early deletion charge applies.
storage Data Lake Storage Integrate With Services Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-integrate-with-services-tutorials.md
This article contains links to tutorials that show you how to use various Azure
| --- | --- |
| Azure Synapse Analytics | [Get Started with Azure Synapse Analytics](../../synapse-analytics/get-started.md) |
| Azure Data Factory | [Load data into Azure Data Lake Storage Gen2 with Azure Data Factory](../../data-factory/load-azure-data-lake-storage-gen2.md) |
-| Azure Databricks | [Use with Azure Databricks](https://docs.azuredatabricks.net/data/data-sources/azure/azure-datalake-gen2.html) |
+| Azure Databricks | [Use with Azure Databricks](/azure/databricks/data/data-sources/azure/adls-gen2/) |
| Azure Databricks | [Extract, transform, and load data by using Azure Databricks](/azure/databricks/scenarios/databricks-extract-load-sql-data-warehouse) |
| Azure Databricks | [Access Data Lake Storage Gen2 data with Azure Databricks using Spark](data-lake-storage-use-databricks-spark.md) |
| Azure Event Grid | [Implement the data lake capture pattern to update a Databricks Delta table](data-lake-storage-events.md) |
This article contains links to tutorials that show you how to use various Azure
## See also
-[Best practices for using Azure Data Lake Storage Gen2](data-lake-storage-best-practices.md)
+[Best practices for using Azure Data Lake Storage Gen2](data-lake-storage-best-practices.md)
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md
Previously updated : 05/10/2021 Last updated : 07/13/2022
az storage account encryption-scope create \
To learn how to configure Azure Storage encryption with customer-managed keys in a key vault or managed HSM, see the following articles: - [Configure encryption with customer-managed keys stored in Azure Key Vault](../common/customer-managed-keys-configure-key-vault.md)-- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](../common/customer-managed-keys-configure-key-vault-hsm.md).
+- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](../common/customer-managed-keys-configure-key-vault-hsm.md)
To learn more about infrastructure encryption, see [Enable infrastructure encryption for double encryption of data](../common/infrastructure-encryption-enable.md).
If a client attempts to specify a scope when uploading a blob to a container tha
When you upload a blob, you can specify an encryption scope for that blob, or use the default encryption scope for the container, if one has been specified.
+When you upload a new blob with an encryption scope, you cannot change the default access tier for that blob.
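+
+As an illustrative sketch (assuming the v12 Azure Blob Storage SDK for Java; all names below are placeholders), an encryption scope can be bound to the client at build time so that uploads use it:
+
+```java
+import com.azure.core.util.BinaryData;
+import com.azure.storage.blob.BlobClient;
+import com.azure.storage.blob.BlobClientBuilder;
+
+BlobClient scopedBlobClient = new BlobClientBuilder()
+    .connectionString("<connection-string>")
+    .containerName("<container-name>")
+    .blobName("scoped-blob.txt")
+    .encryptionScope("<encryption-scope-name>")
+    .buildClient();
+
+// Writes through this client are encrypted with the specified scope.
+scopedBlobClient.upload(BinaryData.fromString("sample data"));
+```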
+ # [Portal](#tab/portal) To upload a blob with an encryption scope via the Azure portal, first create the encryption scope as described in [Create an encryption scope](#create-an-encryption-scope). Next, follow these steps to create the blob:
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-overview.md
Previously updated : 07/19/2021 Last updated : 07/13/2022
A default encryption scope must be specified for a container at the time that th
If no default encryption scope is specified for the container, then you can upload a blob using any encryption scope that you've defined for the storage account. The encryption scope must be specified at the time that the blob is uploaded.
+When you upload a new blob with an encryption scope, you cannot change the default access tier for that blob. You also cannot change the access tier for an existing blob that uses an encryption scope. For more information about access tiers, see [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
+ ## Disabling an encryption scope When you disable an encryption scope, any subsequent read or write operations made with the encryption scope will fail with HTTP error code 403 (Forbidden). If you re-enable the encryption scope, read and write operations will proceed normally again.
storage Scalability Targets Premium Block Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-block-blobs.md
Last updated 12/18/2019-+
storage Scalability Targets Premium Page Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-page-blobs.md
Last updated 09/24/2021-+
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets.md
Last updated 04/01/2021-+
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Last updated 05/12/2022-+
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
description: Achieve lower and consistent latencies for Azure Storage workloads that require fast and consistent response times. -+ Last updated 10/14/2021
storage Storage Blob Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-encryption-status.md
Last updated 11/26/2019-+
storage Storage Blob Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-reserved-capacity.md
Last updated 05/17/2021-+
storage Storage Blob User Delegation Sas Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-cli.md
Last updated 12/18/2019-+
storage Storage Blob User Delegation Sas Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md
Last updated 12/18/2019-+
storage Storage Blobs Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-latency.md
Last updated 09/05/2019-+
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
Last updated 04/28/2022-+
storage Last Sync Time Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/last-sync-time-get.md
Last updated 05/28/2020-+
storage Lock Account Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/lock-account-resource.md
Last updated 03/09/2021-+
storage Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/network-routing-preference.md
Last updated 02/11/2021-+
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
description: Lists Azure Policy built-in policy definitions for Azure Storage. T
Last updated 07/06/2022 -+
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Last updated 06/14/2022-+
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
description: Sample Azure Resource Graph queries for Azure Storage showing use o
Last updated 07/07/2022 -+
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Last updated 04/18/2022-+
storage Scalability Targets Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-resource-provider.md
Last updated 12/18/2019-+
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-standard-account.md
Last updated 05/25/2022-+
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
description: Lists Azure Policy Regulatory Compliance controls available for Azu
Last updated 07/06/2022 -+
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Last updated 04/01/2022-+ ms.devlang: azurecli
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Last updated 05/26/2022-+
storage Storage Account Get Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md
description: Use the Azure portal, PowerShell, or Azure CLI to retrieve storage
-+ Last updated 05/26/2022
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Last updated 04/14/2022-+
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
Last updated 06/15/2022-+
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
Last updated 06/28/2022-+
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
Last updated 06/23/2022-+
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
Last updated 04/29/2021-+
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-attributes.md
Last updated 05/24/2022-+
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-cli.md
-+ Last updated 11/16/2021
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-examples.md
-+ Last updated 05/24/2022
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-portal.md
-+ Last updated 11/16/2021
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-powershell.md
-+ Last updated 11/16/2021
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-security.md
Last updated 05/06/2021-+
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac.md
Last updated 05/16/2022-+
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Last updated 05/26/2022-+
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Last updated 03/01/2022-+
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Last updated 05/07/2021-+
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Last updated 03/31/2022-+
storage Storage Powershell Independent Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md
Last updated 12/04/2019-+
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Last updated 03/16/2021-+
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Last updated 05/24/2022-+
storage Storage Require Secure Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-require-secure-transfer.md
Last updated 06/01/2021-+
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
Last updated 12/28/2021-+
storage Storage Service Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-service-encryption.md
The Azure Blob Storage client libraries for .NET, Java, and Python support encry
The Blob Storage and Queue Storage client libraries uses [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in order to encrypt user data. There are two versions of client-side encryption available in the client libraries: -- Version 2 uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES.-- Version 1 uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
+- Version 2 uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES. The Blob Storage and Queue Storage SDKs support client-side encryption with v2.
+- Version 1 uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES. The Blob Storage, Queue Storage, and Table Storage SDKs support client-side encryption with v1.
> [!WARNING]
-> Using version 1 of client-side encryption is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using version 1, we recommend that you update your application to use version 2 and migrate your data.
+> Using client-side encryption v1 is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using v1, we recommend that you update your application to use client-side encryption v2 and migrate your data.
>
-> The Azure Table Storage SDK supports only version 1 of client-side encryption. Using client-side encryption with Table Storage is not recommended.
+> The Azure Table Storage SDK supports only client-side encryption v1. Using client-side encryption with Table Storage is not recommended.
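+
+As a hedged sketch of opting into client-side encryption v2 from Java (assuming the ``azure-storage-blob-cryptography`` package; the key object, key-wrap algorithm, and names are placeholders):
+
+```java
+import com.azure.core.cryptography.AsyncKeyEncryptionKey;
+import com.azure.storage.blob.specialized.cryptography.EncryptedBlobClient;
+import com.azure.storage.blob.specialized.cryptography.EncryptedBlobClientBuilder;
+import com.azure.storage.blob.specialized.cryptography.EncryptionVersion;
+
+AsyncKeyEncryptionKey keyEncryptionKey = null; // placeholder: supply your key-wrapping key here
+
+// Request v2 (AES-GCM) rather than the deprecated v1 (AES-CBC).
+EncryptedBlobClient encryptedClient = new EncryptedBlobClientBuilder(EncryptionVersion.V2)
+    .key(keyEncryptionKey, "<key-wrap-algorithm>")
+    .connectionString("<connection-string>")
+    .containerName("<container-name>")
+    .blobName("encrypted-blob.bin")
+    .buildEncryptedBlobClient();
+```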
The following table shows which client libraries support which versions of client-side encryption and provides guidelines for migrating to client-side encryption v2.
storage Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-client-version.md
Last updated 07/08/2020-+
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
Last updated 07/07/2021-+
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
The recommended steps to onboard on Azure File Sync for the first time with zero
1. Redirect users and applications to this new share.
1. You can optionally delete any duplicate shares on the servers.
-If you don't have extra storage for initial onboarding and would like to attach to the existing shares, you can pre-seed the data in the Azure files shares. This approach is suggested, if and only if you can accept downtime and absolutely guarantee no data changes on the server shares during the initial onboarding process.
+If you don't have extra storage for initial onboarding and would like to attach to the existing shares, you can pre-seed the data in the Azure file shares using another data transfer tool instead of using the Storage Sync Service to upload the data. The pre-seeding approach is only suggested if you can accept downtime and absolutely guarantee no data changes on the server shares during the initial onboarding process.
1. Ensure that data on any of the servers can't change during the onboarding process.
-1. Pre-seed Azure file shares with the server data using any data transfer tool over the SMB. Robocopy, for example. You can also use AzCopy over REST. Be sure to use AzCopy with the appropriate switches to preserve ACLs timestamps and attributes.
+1. Pre-seed Azure file shares with the server data using any data transfer tool over SMB, such as Robocopy, or AzCopy over REST. If using Robocopy, make sure you mount the Azure file share using the storage account access key; don't use a domain identity. If using AzCopy, be sure to set the appropriate switches to preserve ACLs, timestamps, and attributes.
1. Create the Azure File Sync topology with the desired server endpoints pointing to the existing shares.
1. Let sync finish the reconciliation process on all endpoints.
1. Once reconciliation is complete, you can open shares for changes.
-Currently, pre-seeding approach has a few limitations -
+Currently, the pre-seeding approach has a few limitations:
- Data changes on the server before the sync topology is fully up and running can cause conflicts on the server endpoints.
- After the cloud endpoint is created, Azure File Sync runs a process to detect the files in the cloud before starting the initial sync. The time taken to complete this process varies depending on factors like network speed, available bandwidth, and the number of files and folders. As a rough estimate for the preview release, the detection process runs at approximately 10 files per second. As a result, even if pre-seeding runs fast, the overall time to get a fully running system may be significantly longer when data is pre-seeded in the cloud.
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs
description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 06/06/2022 Last updated : 07/12/2022
- If users are accessing the Azure file share via a Windows Server that has the Azure File Sync agent installed, use an [audit policy](/windows/security/threat-protection/auditing/apply-a-basic-audit-policy-on-a-file-or-folder) or third-party product to track file changes and user access on the Windows Server. * <a id="access-based-enumeration"></a>
-**Does Azure Files support using Access-Based Enumeration (ABE) to control the visibility of the files and folders in SMB Azure file shares?**
+**Does Azure Files support using Access-Based Enumeration (ABE) to control the visibility of the files and folders in SMB Azure file shares? Can I use DFS Namespaces (DFS-N) as a workaround to use ABE with Azure Files?**
- No, this scenario isn't supported.
+ Using ABE with Azure Files isn't currently a supported scenario, but you can [use DFS-N with SMB Azure file shares](files-manage-namespaces.md). ABE is a feature of DFS-N, so it's possible to configure identity-based authentication and enable the ABE feature. However, this only applies to the DFS-N folder targets; it doesn't retroactively apply to the targeted file shares themselves. This is because DFS-N works by referral, rather than as a proxy in front of the folder target.
+ For example, if the user types in the path \\mydfsnserver\share, the SMB client gets the referral of \\mydfsnserver\share => \\server123\share and makes the mount against the latter.
+
+ Because of this, ABE will only work in cases where the DFS-N server is hosting the list of usernames before the redirection:
+
+ \\DFSServer\users\contosouser1 => \\SA.file.core.windows.net\contosouser1
+ \\DFSServer\users\contosouser1 => \\SA.file.core.windows.net\users\contosouser1
+
+ (Where **contosouser1** is a subfolder of the **users** share)
+
+ If each user is a subfolder *after* the redirection, ABE won't work:
+
+ \\DFSServer\SomePath\users --> \\SA.file.core.windows.net\users
### AD DS & Azure AD DS Authentication * <a id="ad-support-devices"></a>
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 05/24/2022 Last updated : 07/14/2022
First, you must check the state of your environment. Specifically, you must chec
### Create an identity representing the storage account in your AD manually
-To create this account manually, create a new Kerberos key for your storage account. Then, use that Kerberos key as the password for your account with the PowerShell cmdlets below. This key is only used during setup and cannot be used for any control or data plane operations against the storage account.
+To create this account manually, first create a new Kerberos key for your storage account and retrieve the key value using the PowerShell cmdlets below. This key is only used during setup. It can't be used for any control or data plane operations against the storage account.
```PowerShell # Create the Kerberos key on the storage account and get the Kerb1 key as the password for the AD identity to represent the storage account
New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAcco
Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -ListKerbKey | where-object{$_.Keyname -contains "kerb1"} ```
-Once you have that key, create either a service or computer account under your OU. Use the following specification (remember to replace the example text with your storage account name):
+The cmdlets above should return the key value. Once you have the kerb1 key, create either a service account or computer account in AD under your OU, and use the key as the password for the AD identity.
-SPN: "cifs/your-storage-account-name-here.file.core.windows.net"
-Password: Kerberos key for your storage account.
+1. Set the SPN to **cifs/your-storage-account-name-here.file.core.windows.net** either in the AD GUI or by running the `Setspn` command from the Windows command line as administrator (remember to replace the example text with your storage account name and the name of the AD account you created):
+
+ ```shell
+    Setspn -S cifs/your-storage-account-name-here.file.core.windows.net your-ad-account-name-here
+ ```
+
+2. Use PowerShell to set the AD account password to the value of the kerb1 key (you must have AD PowerShell cmdlets installed):
+
+ ```powershell
+ Set-ADAccountPassword -Identity servername$ -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "kerb1_key_value_here" -Force)
+ ```
If your OU enforces password expiration, you must update the password before the maximum password age to prevent authentication failures when accessing Azure file shares. See [Update the password of your storage account identity in AD](storage-files-identity-ad-ds-update-password.md) for details.
-Keep the SID of the newly created identity, you'll need it for the next step. The identity you've created that represent the storage account doesn't need to be synced to Azure AD.
+Keep the SID of the newly created identity; you'll need it for the next step. The identity you've created that represents the storage account doesn't need to be synced to Azure AD.
### Enable the feature on your storage account
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md
Title: Update AD DS storage account password
-description: Learn how to update the password of the Active Directory Domain Services account that represents your storage account. This prevents the storage account from being cleaned up when the password expires, preventing authentication failures.
+description: Learn how to update the password of the Active Directory Domain Services computer or service account that represents your storage account. This prevents authentication failures and keeps the storage account from being deleted when the password expires.
Previously updated : 06/22/2020 Last updated : 07/13/2022 # Update the password of your storage account identity in AD DS
-If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage account in an organizational unit or domain that enforces password expiration time, you must change the password before the maximum password age. Your organization may run automated cleanup scripts that delete accounts once their password expires. Because of this, if you do not change your password before it expires, your account could be deleted, which will cause you to lose access to your Azure file shares.
+If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage account in an organizational unit or domain that enforces password expiration time, you must change the password before the maximum password age. Your organization may run automated cleanup scripts that delete accounts once their password expires. Because of this, if you don't change your password before it expires, your account could be deleted, which will cause you to lose access to your Azure file shares.
+
+> [!NOTE]
+> A storage account identity in AD DS can be either a service account or a computer account. Service account passwords can expire in AD; however, because computer account password changes are driven by the client machine and not AD, they don't expire in AD.
To trigger password rotation, you can run the `Update-AzStorageAccountADObjectPassword` command from the [AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). This command must be run in an on-premises AD DS-joined environment using a hybrid user with owner permission to the storage account and AD DS permissions to change the password of the identity representing the storage account. The command performs actions similar to storage account key rotation. Specifically, it gets the second Kerberos key of the storage account, and uses it to update the password of the registered account in AD DS. Then, it regenerates the target Kerberos key of the storage account, and updates the password of the registered account in AD DS. You must run this command in an on-premises AD DS-joined environment.
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/scalability-targets.md
description: Learn about scalability and performance targets for Queue Storage. -+ Last updated 12/18/2019
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/security-recommendations.md
description: Learn about security recommendations for Queue Storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model. -+ Last updated 05/12/2022
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/scalability-targets.md
Last updated 03/09/2020-+
stream-analytics Debug Locally Using Job Diagram Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/debug-locally-using-job-diagram-vs-code.md
Streaming jobs that output no result or unexpected results often need troublesho
## Debug a query using job diagram
-An Azure Stream Analytics script is used to transform input data to output data. The job diagram shows how data flows from input sources, like Event Hub or IoT Hub, through multiple query steps to output sinks. Each query step is mapped to a temporary result set defined in the script using a `WITH` statement. You can view the data as well as metrics of each query step in each intermediate result set to find the source of an issue.
+An Azure Stream Analytics script is used to transform input data to output data. The job diagram shows how data flows from input sources, like Event Hubs or IoT Hub, through multiple query steps to output sinks. Each query step is mapped to a temporary result set defined in the script using a `WITH` statement. You can view the data as well as metrics of each query step in each intermediate result set to find the source of an issue.
> [!NOTE] > This job diagram only shows the data and metrics for local testing in a single node. It should not be used for performance tuning and troubleshooting.
Select **Job Summary** at the top-right of the job diagram to see properties and
## Limitations
-* Live output sinks aren't supported in local run.
- * Run job locally with JavaScript function is only supported on the Windows operating system.
-* C# custom code and Azure Machine Learning functions aren't supported.
+* Azure Machine Learning functions aren't supported.
* Only cloud input options have [time policies](./stream-analytics-time-handling.md) support, while local input options don't.
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-outputs.md
Previously updated : 06/06/2022 Last updated : 07/13/2022 # Outputs from Azure Stream Analytics
Some outputs types support [partitioning](#partitioning), and [output batch size
|[Azure Cosmos DB](azure-cosmos-db-output.md)|Yes|Access key|
|[Azure Functions](azure-functions-output.md)|Yes|Access key|
+> [!IMPORTANT]
+> Azure Stream Analytics uses the Insert or Replace API by design. This operation replaces an existing entity, or inserts a new entity if it doesn't exist in the table.
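+
+To illustrate what insert-or-replace semantics mean (a sketch using the Azure Tables SDK for Java, not the Stream Analytics output path itself; all names are placeholders):
+
+```java
+import com.azure.data.tables.TableClient;
+import com.azure.data.tables.TableClientBuilder;
+import com.azure.data.tables.models.TableEntity;
+import com.azure.data.tables.models.TableEntityUpdateMode;
+
+TableClient tableClient = new TableClientBuilder()
+    .connectionString("<connection-string>")
+    .tableName("<table-name>")
+    .buildClient();
+
+TableEntity entity = new TableEntity("<partition-key>", "<row-key>")
+    .addProperty("Temperature", 72);
+
+// Replace the entity if it exists; insert it if it doesn't.
+tableClient.upsertEntityWithResponse(entity, TableEntityUpdateMode.REPLACE, null, null);
+```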
+ ## Partitioning Stream Analytics supports partitions for all outputs except for Power BI. For more information on partition keys and the number of output writers, see the article for the specific output type you're interested in. All output articles are linked in the previous section.
stream-analytics Stream Analytics Job Diagram With Metrics New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-diagram-with-metrics-new.md
+
+ Title: Debugging with the job diagram (preview) in Azure portal
+description: This article describes how to troubleshoot your Azure Stream Analytics job with job diagram and metrics in the Azure portal.
+++++ Last updated : 07/01/2022++
+# Debugging with the job diagram (preview) in Azure portal
+
+The job diagram in the Azure portal can help you visualize your job's query steps with its input source, output destination, and metrics. You can use the job diagram to examine the metrics for each step and quickly identify the source of a problem when you troubleshoot issues.
+
+The job diagram is also available in the Stream Analytics extension for VS Code. It provides similar functionality, with more metrics, when you debug a job that runs locally on your device. For more information, see [Debug Azure Stream Analytics queries locally using job diagram](./debug-locally-using-job-diagram-vs-code.md).
+
+## Using the job diagram
+
+In the Azure portal, while in a Stream Analytics job, under **Support + troubleshooting**, select **Job diagram (preview)**:
+++
+The job-level default metrics, such as Watermark delay, Input events, Output events, and Backlogged input events, are shown in the chart section for the latest 30 minutes. You can visualize other metrics in a chart by selecting them in the left pane.
++
+If you select one of the nodes in the diagram section, the metrics data and options in the chart section are filtered by the selected node's properties. For example, if you select the input node, only the input-related metrics and options are shown:
++
+To see the query script snippet that maps to the corresponding query step, select **'{}'** in the query step node:
++
+To see a summary of the job overview information, select the **Job Summary** button on the right side.
++
+The menu section also provides job operation actions. You can use them to stop the job (**Stop** button), refresh the metrics data (**Refresh** button), and change the metrics time range (**Time range**).
++
+## Troubleshoot with metrics
+
+A job's metrics provide many insights into the job's health. You can view these metrics through the job diagram in its chart section, at the job level or at the step level. To learn about Stream Analytics job metrics definitions, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). The job diagram integrates these metrics into the query steps. You can use these metrics within steps to monitor and analyze your job.
+
+### Is the job running well with its computation resource?
+
+* **SU (Memory) % utilization** is the percentage of memory utilized by your job. If SU (Memory) % utilization is consistently over 80%, the job is approaching the maximum allocated memory.
+* **CPU % utilization** is the percentage of CPU utilized by your job. This metric might spike intermittently, so it's often best to check its average. High CPU utilization indicates a possible CPU bottleneck if the number of backlogged input events or the watermark delay increases at the same time.
+
+
+### How much data is being read?
+
+The input data related metrics can be viewed under the **Input** category in the chart section. They're available in the input step.
+* **Input events** is the number of data events read.
+* **Input events bytes** is the number of event bytes read. This can be used to validate that events are being sent to the input source.
+* **Input source received** is the number of messages read by the job.
+
+### Are there any errors in data processing?
+
+* **Deserialization errors** is the number of input events that couldn't be deserialized.
+* **Data conversion errors** is the number of output events that couldn't be converted to the expected output schema.
+* **Runtime errors** is the total number of errors related to query processing (excluding errors found while ingesting events or outputting results).
+
+### Are there any events out of order that are being dropped or adjusted?
+
+* **Out of order events** is the number of events received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. This can be impacted by the configuration of the **"Out of order events"** setting under the **Event ordering** section in the Azure portal.
+
+### Is the job falling behind in processing input data streams?
+
+* **Backlogged input events** tells you how many more messages from the input need to be processed. When this number is consistently greater than 0, your job can't process the data as fast as it's coming in. In this case, you may need to increase the number of Streaming Units and/or make sure your job can be parallelized. For more information, see the [query parallelization page](./stream-analytics-parallelization.md).
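+
+As a hedged sketch of retrieving the same job metrics programmatically (using the Azure Monitor Query SDK for Java; the resource ID and metric name are placeholders rather than values from this article):
+
+```java
+import com.azure.core.credential.TokenCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.monitor.query.MetricsQueryClient;
+import com.azure.monitor.query.MetricsQueryClientBuilder;
+import com.azure.monitor.query.models.MetricsQueryResult;
+
+import java.util.Arrays;
+
+TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+MetricsQueryClient metricsClient = new MetricsQueryClientBuilder()
+    .credential(credential)
+    .buildClient();
+
+// "<metric-name>" stands in for the Stream Analytics metric you want to chart.
+MetricsQueryResult result = metricsClient.queryResource(
+    "<stream-analytics-job-resource-id>", Arrays.asList("<metric-name>"));
+```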
++
+## Get help
+For more assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html).
+
+## Next steps
+* [Introduction to Stream Analytics](stream-analytics-introduction.md)
+* [Get started with Stream Analytics](stream-analytics-real-time-fraud-detection.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md)
+* [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-create-workspace.md
To complete this tutorial's steps, you need to have access to a resource group f
### Start the process
1. Open the [Azure portal](https://portal.azure.com). In the search bar, enter **Synapse** without pressing Enter.
1. In the search results, under **Services**, select **Azure Synapse Analytics**.
-1. Select **Add** to create a workspace.
+1. Select **Create** to create a workspace.
## Basics tab > Project Details Fill in the following fields:
synapse-analytics Security White Paper Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-network-security.md
These endpoints are automatically created when the Synapse workspace is created.
[*Synapse Studio*](/learn/modules/explore-azure-synapse-studio/) is a secure web front-end development environment for Azure Synapse. It supports various roles, including the data engineer, data scientist, data developer, data analyst, and Synapse administrator.
-Use Synapse Studio to performing various data and management operations in Azure Synapse, such as:
+Use Synapse Studio to perform various data and management operations in Azure Synapse, such as:
- Connecting to dedicated SQL pools, serverless SQL pools, and running SQL scripts. - Developing and running notebooks on Apache Spark pools.
Dedicated SQL pool and serverless SQL pool don't use managed private endpoints f
## Private link hubs for Synapse Studio
-[Azure Private Link Hubs](../security/synapse-private-link-hubs.md) allows securely connecting to Synapse Studio from the customer's VNet using Azure Private Link. This feature is useful for customers who want to access the Synapse workspace using the Synapse Studio from a controlled and restricted environment, where the outbound internet traffic is restricted to a limited set of Azure services.
+[Synapse Private Link Hubs](../security/synapse-private-link-hubs.md) allow you to connect securely to Synapse Studio from the customer's VNet using Azure Private Link. This feature is useful for customers who want to access the Synapse workspace through Synapse Studio from a controlled and restricted environment, where outbound internet traffic is restricted to a limited set of Azure services.
It's achieved by creating a private link hub resource and a private endpoint to this hub from the VNet. This private endpoint is then used to access the studio using its fully qualified domain name (FQDN), *web.azuresynapse.net*, with a private IP address from the VNet. The private link hub resource downloads the static contents of Synapse Studio over Azure Private Link to the user's workstation. In addition, separate private endpoints must be created for the individual workspace endpoints to ensure that communication to the workspace endpoints is private.
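As a sketch of how this setup can be scripted, the following assumes the Az.Synapse and Az.Network modules; the resource names and subnet are hypothetical placeholders. It creates the private link hub and then a private endpoint from the VNet to that hub, using the `Web` group ID for Synapse Studio traffic.

```powershell
# Minimal sketch: a Synapse private link hub plus a private endpoint
# from an existing VNet. All names are placeholders.
Import-Module Az.Synapse, Az.Network

$hub = New-AzSynapsePrivateLinkHub -ResourceGroupName "myRG" -Name "myPrivateLinkHub" -Location "eastus"

$vnet   = Get-AzVirtualNetwork -ResourceGroupName "myRG" -Name "myVNet"
$subnet = $vnet.Subnets | Where-Object Name -eq "endpoints"

# "Web" targets the Synapse Studio sub-resource of the hub.
$conn = New-AzPrivateLinkServiceConnection -Name "studio-connection" `
  -PrivateLinkServiceId $hub.Id -GroupId "Web"

New-AzPrivateEndpoint -ResourceGroupName "myRG" -Name "studio-pe" -Location "eastus" `
  -Subnet $subnet -PrivateLinkServiceConnection $conn
```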
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/
Content-Type: application/json; charset=UTF-8 {
- location: "West Central US",
+ "location": "West Central US",
"sku": {
- "name": "DW200c"
+ "name": "DW200c"
} } ```
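One way to send this request without hand-building the authorization header is `Invoke-AzRestMethod` from the Az.Accounts module. A minimal sketch follows; the subscription, resource group, server, and pool names are placeholders, and the `api-version` is an assumption to check against the current REST reference.

```powershell
# Minimal sketch: scale a dedicated SQL pool through the management API.
# Path segments and api-version are placeholders/assumptions.
$path = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql" +
        "/servers/<server-name>/databases/<pool-name>?api-version=2021-06-01"

$body = @{
    location = "West Central US"
    sku      = @{ name = "DW200c" }
} | ConvertTo-Json

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```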
update-center Enable Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/enable-machines.md
Register the periodic assessment and scheduled patching feature resource provide
## Prerequisites -- Azure subscription - if you don't have one yet, you can [activate your MSDN subscriber benefits](/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](/free/?WT.mc_id=A261C142F).
+- Azure subscription - if you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Your account must be a member of the Azure [Owner](/azure/role-based-access-control/built-in-roles#owner) or [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role in the subscription.
update-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md
Before you enable your machines for update management center (preview), make sur
> [!IMPORTANT] > Update management center (preview) can manage machines that are currently managed by Azure Automation [Update management](/azure/automation/update-management/overview) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA). >
-> While update management center is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> While update management center is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Key benefits
update-center Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md
To review the logs related to all actions performed by the extension, check for
### Arc-enabled servers
-For Arc-enabled servers, review the [troubleshoot VM extensions](/azure-arc/servers/troubleshoot-vm-extensions) article for general troubleshooting steps.
+For Arc-enabled servers, review the [troubleshoot VM extensions](/azure/azure-arc/servers/troubleshoot-vm-extensions) article for general troubleshooting steps.
To review the logs related to all actions performed by the extension, on Windows check for more details in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest:
For Windows service pack updates, we check for 20 minutes + 10 minutes for rebo
More details can be found by reviewing the logs in the file path provided in the error message of the deployment run.
->[!NOTE]
-> For [Azure Arc-enabled servers](/azure/azure-arc/servers/overview), it can take up to five minutes to trigger a deployment job on the machine. If you have configured 30 minutes as the maximum duration, there is a high chance that the scan for missing updates will not occur. At least 25 minutes is required in the maintenance window to start the operation.
- #### Resolution Setting a longer time range for maximum duration when triggering an [on-demand update deployment](deploy-updates.md) helps avoid the problem.
virtual-desktop Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-file-share.md
In this article, you'll learn how to create an Azure file share authenticated by a domain controller on an existing Azure Virtual Desktop host pool. You can use this file share to store storage profiles.
-This process uses Active Directory Domain Services (AD DS), which is an on-prem directory service. If you're looking for information about how to create an FSLogix profile container with Azure AD DS, see [Create an FSLogix profile container with Azure Files](create-profile-container-adds.md).
+This process uses Active Directory Domain Services (AD DS), which is an on-premises directory service. If you're looking for information about how to create an FSLogix profile container with Azure AD DS, see [Create an FSLogix profile container with Azure Files](create-profile-container-adds.md).
## Prerequisites
virtual-desktop Manual Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manual-migration.md
# Migrate manually from Azure Virtual Desktop (classic)
-Azure Virtual Desktop (classic) creates its service environment with PowerShell cmdlets, REST APIs, and service objects. An "object" in a Azure Virtual Desktop service environment is a thing that Azure Virtual Desktop creates. Service objects include tenants, host pools, application groups, and session hosts.
+Azure Virtual Desktop (classic) creates its service environment with PowerShell cmdlets, REST APIs, and service objects. An *object* in an Azure Virtual Desktop service environment is a thing that Azure Virtual Desktop creates. Service objects include tenants, host pools, application groups, and session hosts.
However, Azure Virtual Desktop (classic) isn't integrated with Azure. Without Azure integration, any objects you create aren't automatically managed by the Azure portal because they're not connected to your Azure subscription.
virtual-desktop Troubleshoot Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-powershell-2019.md
Title: Azure Virtual Desktop (classic) PowerShell - Azure
-description: How to troubleshoot issues with PowerShell when you set up a Azure Virtual Desktop (classic) tenant environment.
+description: How to troubleshoot issues with PowerShell when you set up an Azure Virtual Desktop (classic) tenant environment.
Last updated 04/05/2022
Remove-RdsHostPool -TenantName <TenantName> -Name <HostPoolName>
**Fix:** Run the following command to delete the session host. ```powershell
-Get-RdsSessionHost-TenantName <TenantName> -Hostpook <HostPoolName> | remove-RdsSessionhost -Force
+Get-RdsSessionHost -TenantName <TenantName> -HostPoolName <HostPoolName> | Remove-RdsSessionHost -Force
``` Using the **-Force** parameter lets you delete the session host even if it has assigned users.
Using the force command will let you delete the session host even if it has assi
## Next steps - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview-2019.md).-- To troubleshoot issues while creating a tenant and host pool in a Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
+- To troubleshoot issues while creating a tenant and host pool in an Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md).
- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md). - To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md)
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegr
## Force Delete for VMs
-Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. For VMs that do not require graceful shutdown, Force Delete aims to delete the VM as fast as possible while relieving the logical resources from the VM, bypassing some of the cleanup operations. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
+Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. For VMs that do not require graceful shutdown, Force Delete deletes the VM as fast as possible while relieving the logical resources from the VM, bypassing the graceful shutdown and some of the cleanup operations. Force Delete does not immediately free the MAC address associated with the VM, because the MAC address is a physical resource that may take up to 10 minutes to free. If you need to immediately re-use the MAC address on a new VM, Force Delete is not recommended. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through the Portal, CLI, PowerShell, and REST API.
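As a quick sketch of the PowerShell form, force deletion is the `-ForceDeletion` parameter on `Remove-AzVM`; the resource group and VM names below are placeholders.

```powershell
# Force delete a VM; names are placeholders.
Remove-AzVM -ResourceGroupName "myRG" -Name "myVM" -ForceDeletion $true
```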
### [Portal](#tab/portal3)
You can use the Azure REST API to apply force delete to your virtual machines. U
## Force Delete for virtual machine scale sets
-Force delete allows you to forcefully delete your **Uniform** virtual machine scale sets, reducing delete latency and immediately freeing up attached resources. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
+Force delete allows you to forcefully delete your **Uniform** virtual machine scale sets, reducing delete latency and immediately freeing up attached resources. Force Delete does not immediately free the MAC address associated with a VM, because the MAC address is a physical resource that may take up to 10 minutes to free. If you need to immediately re-use the MAC address on a new VM, Force Delete is not recommended. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through the Portal, CLI, PowerShell, and REST API.
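The scale set equivalent in Azure PowerShell is sketched below, again with placeholder names; `-ForceDeletion` is the relevant parameter on `Remove-AzVmss`.

```powershell
# Force delete a Uniform scale set; names are placeholders.
Remove-AzVmss -ResourceGroupName "myRG" -VMScaleSetName "myScaleSet" -ForceDeletion $true
```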
### [Portal](#tab/portal4)
virtual-machines Maintenance Configurations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-portal.md
With Maintenance Configurations, you can now take more control over when to appl
1. In the Basics tab, choose a subscription and resource group, provide a name for the configuration, choose a region, and select one of the scopes we offer which you wish to apply updates for. Click **Add a schedule** to add or modify the schedule for your configuration. > [!IMPORTANT]
- > There are different **scopes** which support certain machine types and schedules, so please ensure you are selecting the right scope for your virtual machine.
+ > Certain virtual machine types and schedules will require a specific kind of scope. Check out [maintenance configuration scopes](maintenance-configurations.md#scopes) to find the right one for your virtual machine.
:::image type="content" source="media/virtual-machines-maintenance-control-portal/maintenance-configurations-basics-tab.png" alt-text="Screenshot showing Maintenance Configuration basics":::
You can verify that the configuration was applied correctly or check to see any
![Screenshot showing how to check a maintenance configuration](media/virtual-machines-maintenance-control-portal/maintenance-configurations-host-type.png)
-<!-- You can also check the configuration for a specific virtual machine on its properties page. Click **Maintenance** to see the configuration assigned to that virtual machine.
-
-![Screenshot showing how to check Maintenance for a host](media/virtual-machines-maintenance-control-portal/maintenance-configurations-check-config.png) -->
- ## Check for pending updates You can check if there are any updates pending for a maintenance configuration. In **Maintenance Configurations**, on the details for the configuration, click **Machines** and check **Maintenance status**. ![Screenshot showing how to check pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-pending.png)
-<!-- You can also check a specific host using **Virtual Machines** or properties of the dedicated host.
-
-![Screenshot that shows the highlighted maintenance state.](media/virtual-machines-maintenance-control-portal/maintenance-configurations-pending-vm.png) -->
-
-<!-- ## Apply updates
-
-You can apply pending updates on demand. On the VM or Azure Dedicated Host details, click **Maintenance** and click **Apply maintenance now**. Apply update calls can take upto 2 hours to complete.
-
-![Screenshot showing how to apply pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-apply-updates-now.png)
-
-## Check the status of applying updates
-
-You can check on the progress of the updates for a configuration in **Maintenance Configurations** or using **Virtual Machines**. On the VM details, click **Maintenance**. In the following example, the **Maintenance state** shows an update is **Pending**.
-
-![Screenshot showing how to check status of pending updates](media/virtual-machines-maintenance-control-portal/maintenance-configurations-status.png) -->
- ## Delete a maintenance configuration To delete a configuration, open the configuration details and click **Delete**.
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
Last updated 10/06/2021
-#pmcontact: shants
+#pmcontact: pphillips
-# Managing platform updates with Maintenance Configurations
+# Managing VM updates with Maintenance Configurations
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Manage platform updates, that don't require a reboot, using Maintenance Configurations. Azure frequently updates its infrastructure to improve reliability, performance, security or launch new features. Most updates are transparent to users. Some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even few seconds of a VM freezing or disconnecting for maintenance. Creating a Maintenance Configuration gives you the option to wait on platform updates and apply them within a 35-day rolling window.
+Maintenance Configurations give you the ability to control and manage updates for many Azure virtual machine resources, since Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Most updates are transparent to users, but some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even a few seconds of a VM freezing or disconnecting for maintenance. Maintenance Configurations is integrated with Azure Resource Graph (ARG) for a low-latency, high-scale customer experience; a sample ARG query is sketched below.
+>[!IMPORTANT]
+> Users are required to have at least the Contributor role in order to use maintenance configurations.
-With Maintenance Configurations, you can:
-- Batch updates into one update package.-- Wait up to 35 days to apply updates for **Host** machines. -- Automate platform updates by configuring your maintenance schedule.-- Maintenance Configurations work across subscriptions and resource groups.
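Because the service is integrated with Azure Resource Graph, you can inventory your maintenance configurations at scale. A minimal sketch follows, assuming the Az.ResourceGraph module; the query uses the standard `resources` table, and the projected columns are choices you may want to adapt.

```powershell
# Minimal sketch: list maintenance configurations across subscriptions
# with Azure Resource Graph.
Import-Module Az.ResourceGraph

Search-AzGraph -Query @"
resources
| where type =~ 'microsoft.maintenance/maintenanceconfigurations'
| project name, location, resourceGroup, subscriptionId
"@
```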
+## Scopes
-## Limitations
+Maintenance Configurations currently supports three scopes: Host, OS image, and Guest. While each scope allows scheduling and managing updates, the major difference lies in the resources that each one supports. This section outlines the details of the various scopes and their supported types:
-- Maintenance window duration can vary month over month and sometimes it can take up to 2 hours to apply the pending updates once it is initiated by the user. -- After 35 days, an update will automatically be applied to your **Host** machines.-- Rack level maintenance cannot be controlled through maintenance configurations.-- User must have **Resource Contributor** access.-- Users need to know the nuances of the scopes required for their machine.
+| Scope | Supported Resources |
+|-|-|
+| Host | Isolated Virtual Machines, Isolated Virtual Machine Scale Sets, Dedicated Hosts |
+| OS Image | Virtual Machine Scale Sets |
+| Guest | Virtual Machines, Azure Arc Servers |
++
+### Host
+With this scope, you can manage platform updates that do not require a reboot on your *isolated VMs*, *isolated Virtual Machine Scale Set instances* and *dedicated hosts*. Some features and limitations unique to the host scope are:
+
+- Schedules can be set anytime within 35 days. After 35 days, updates are automatically applied.
+- A minimum maintenance window of 2 hours is required for this scope.
+
+[Learn more about Azure Dedicated Hosts](dedicated-hosts.md)
+
+### OS image
+Using this scope with maintenance configurations lets you decide when to apply upgrades to OS disks in your *virtual machine scale sets* through an easier and more predictable experience. An upgrade works by replacing the OS disk of a VM with a new disk created using the latest image version. Any configured extensions and custom data scripts are run on the OS disk, while data disks are retained. Some features and limitations unique to this scope are:
+
+- Scale sets need to have [automatic OS upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) enabled in order to use maintenance configurations.
+- Schedule recurrence defaults to daily.
+- A minimum maintenance window of 5 hours is required.
+
+### Guest
+This scope is integrated with [update management center](../update-center/overview.md), which allows you to save recurring deployment schedules to install updates for your Windows Server and Linux machines in Azure, in on-premises environments, and in other cloud environments connected using Azure Arc-enabled servers. Some features and limitations unique to this scope include:
+
+- [Patch orchestration](automatic-vm-guest-patching.md#patch-orchestration-modes) for virtual machines needs to be set to AutomaticByPlatform.
+- A minimum maintenance window of 1 hour and 10 minutes is required.
+- There is no limit to the recurrence of your schedule.
+
+To learn more about this topic, check out [update management center and scheduled patching](../update-center/scheduled-patching.md)
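As a concrete sketch of how a scope is chosen when creating a configuration, the following uses Azure PowerShell with the Az.Maintenance module. The resource names, start time, and recurrence string are placeholder assumptions; this example creates a Host-scope configuration with a monthly schedule and a 5-hour window.

```powershell
# Minimal sketch: a Host-scope maintenance configuration.
# All names and schedule values are placeholders to adapt.
Import-Module Az.Maintenance

New-AzMaintenanceConfiguration `
  -ResourceGroupName "myRG" `
  -Name "myHostConfig" `
  -Location "eastus" `
  -MaintenanceScope "Host" `
  -StartDateTime "2022-08-01 00:00" `
  -TimeZone "UTC" `
  -Duration "05:00" `
  -RecurEvery "Month Fourth Monday"
```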
+
## Management options You can create and manage maintenance configurations using any of the following options:
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
The following versions of Windows are supported:
- **Windows Server 2019 Standard/Datacenter** - **Windows Server 2016 Standard/Datacenter** - **Windows Server 2012 R2 Standard/Datacenter**-- **Windows 10, version 21H2 or later**-- **Windows 11**
+- **Windows 10, version 21H2 or later** _(includes Windows 10 Enterprise multi-session)_
+- **Windows 11** _(includes Windows 11 Enterprise multi-session)_
The following distributions are supported out of the box from the Azure Gallery: - **Ubuntu 14.04 with the linux-azure kernel**
virtual-wan Scenario Bgp Peering Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-bgp-peering-hub.md
The virtual hub router now also exposes the ability to peer with it, thereby exc
| Number of routes each BGP peer can advertise to the virtual hub.| The hub can only accept a maximum number of 10,000 routes (total) from its connected resources. For example, if a virtual hub has a total of 6000 routes from the connected virtual networks, branches, virtual hubs etc., then when a new BGP peering is configured with an NVA, the NVA can only advertise up to 4000 routes. | * Routes from NVA in a virtual network that are more specific than the virtual network address space, when advertised to the virtual hub through BGP are not propagated further to on-premises. * Traffic destined for addresses in the virtual network directly connected to the virtual hub cannot be configured to route through the NVA using BGP peering between the hub and NVA. This is because the virtual hub automatically learns about system routes associated with addresses in the spoke virtual network when the spoke virtual network connection is created. These automatically learned system routes are preferred over routes learned by the hub through BGP.
+* This feature is not supported for setting up BGP peering between an NVA in a spoke VNet and a virtual hub that has Azure Firewall.
## BGP peering scenarios
virtual-wan Virtual Wan Point To Site Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-azure-ad.md
Previously updated : 06/16/2022 Last updated : 07/13/2022
A User VPN configuration defines the parameters for connecting remote clients. I
:::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/basics.png" alt-text="Screenshot of the Basics page.":::
- * **Configuration name** - Enter the name you want to call your User VPN Configuration.
+ * **Configuration name** - Enter the name you want to call your User VPN Configuration.
* **Tunnel type** - Select OpenVPN from the dropdown menu.+ 1. Click **Azure Active Directory** to open the page. :::image type="content" source="./media/virtual-wan-point-to-site-azure-ad/values.png" alt-text="Screenshot of the Azure Active Directory page.":::
A User VPN configuration defines the parameters for connecting remote clients. I
* **Authentication method** - Select Azure Active Directory. * **Audience** - Type in the Application ID of the [Azure VPN](openvpn-azure-ad-tenant.md) Enterprise Application registered in your Azure AD tenant. * **Issuer** - `https://sts.windows.net/<your Directory ID>/`
- * **AAD Tenant** - `https://login.microsoftonline.com/<your Directory ID>`
+ * **AAD Tenant:** TenantID for the Azure AD tenant
+
+ * Enter `https://login.microsoftonline.com/{AzureAD TenantID}/` for Azure Public AD
+ * Enter `https://login.microsoftonline.us/{AzureAD TenantID}/` for Azure Government AD
+ * Enter `https://login-us.microsoftonline.de/{AzureAD TenantID}/` for Azure Germany AD
+ * Enter `https://login.chinacloudapi.cn/{AzureAD TenantID}/` for China 21Vianet AD
+ 1. Click **Create** to create the User VPN configuration. You'll select this configuration later in the exercise. ## <a name="site"></a>Create an empty hub
This section shows you how to add a gateway to an already existing virtual hub.
* **Gateway scale units**: Select the Gateway scale units. Scale units represent the aggregate capacity of the User VPN gateway. If you select 40 or more gateway scale units, plan your client address pool accordingly. For information about how this setting impacts the client address pool, see [About client address pools](about-client-address-pools.md). For information about gateway scale units, see the [FAQ](virtual-wan-faq.md#for-user-vpn-point-to-site--how-many-clients-are-supported). * **User VPN configuration**: Select the configuration that you created earlier.
- * **Client address pool**: Specify the client address pool from which the VPN clients will be assigned IP addresses. This setting corresponds to the gateway scale units that you
+ * **Client address pool**: Specify the client address pool from which the VPN clients will be assigned IP addresses. This setting corresponds to the gateway scale units that you set.
1. Click **Confirm**. It can take up to 30 minutes to update the hub. ## <a name="connect-vnet"></a>Connect VNet to hub
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
1. Enable Azure AD authentication on the VPN gateway by navigating to **Point-to-site configuration** and picking **OpenVPN (SSL)** as the **Tunnel type**. Select **Azure Active Directory** as the **Authentication type**, then fill in the information under the **Azure Active Directory** section. Replace {AzureAD TenantID} with your tenant ID. * **Tenant:** TenantID for the Azure AD tenant
- * Enter `https://login.microsoftonline.com/{AzureAD TenantID}/` for Azure Public AD
- * Enter `https://login.microsoftonline.us/{AzureAD TenantID/` for Azure Government AD
- * Enter `https://login-us.microsoftonline.de/{AzureAD TenantID/` for Azure Germany AD
- * Enter `https://login.chinacloudapi.cn/{AzureAD TenantID/` for China 21Vianet AD
+
+ * Enter `https://login.microsoftonline.com/{AzureAD TenantID}/` for Azure Public AD
+ * Enter `https://login.microsoftonline.us/{AzureAD TenantID}/` for Azure Government AD
+ * Enter `https://login-us.microsoftonline.de/{AzureAD TenantID}/` for Azure Germany AD
+ * Enter `https://login.chinacloudapi.cn/{AzureAD TenantID}/` for China 21Vianet AD
* **Audience:** Application ID of the "Azure VPN" Azure AD Enterprise App
- * Enter 41b23e61-6c1e-4545-b367-cd054e0ed4b4 for Azure Public
- * Enter 51bb15d4-3a4f-4ebf-9dca-40096fe32426 for Azure Government
- * Enter 538ee9e6-310a-468d-afef-ea97365856a9 for Azure Germany
- * Enter 49f817b6-84ae-4cc0-928c-73f27289b3aa for Azure China 21Vianet
+ * Enter 41b23e61-6c1e-4545-b367-cd054e0ed4b4 for Azure Public
+ * Enter 51bb15d4-3a4f-4ebf-9dca-40096fe32426 for Azure Government
+ * Enter 538ee9e6-310a-468d-afef-ea97365856a9 for Azure Germany
+ * Enter 49f817b6-84ae-4cc0-928c-73f27289b3aa for Azure China 21Vianet
* **Issuer**: URL of the Secure Token Service `https://sts.windows.net/{AzureAD TenantID}/`
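For reference, these values map to parameters on `Set-AzVirtualNetworkGateway` in Azure PowerShell. A minimal sketch for the Azure Public cloud follows; it assumes the gateway's point-to-site configuration already uses the OpenVPN tunnel type, and the gateway name, resource group, and tenant ID are placeholders.

```powershell
# Minimal sketch: enable Azure AD authentication on an existing gateway.
# Gateway/resource group names and the tenant ID are placeholders.
$tenantId = "<AzureAD TenantID>"
$gw = Get-AzVirtualNetworkGateway -ResourceGroupName "myRG" -Name "myP2SGateway"

Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw `
  -AadTenantUri "https://login.microsoftonline.com/$tenantId/" `
  -AadAudienceId "41b23e61-6c1e-4545-b367-cd054e0ed4b4" `
  -AadIssuerUri "https://sts.windows.net/$tenantId/"
```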
web-application-firewall Application Gateway Waf Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-metrics.md
Previously updated : 03/15/2022 Last updated : 07/14/2022
WAF with Application Gateway provides detailed reporting on each threat it detec
For additional information on diagnostics log, visit [Application Gateway WAF resource logs](../ag/web-application-firewall-logs.md)
-## Application Gateway WAF V2 Metrics
+## Application Gateway WAF v2 Metrics
New WAF metrics are only available for Core Rule Set 3.2 or greater, or with bot protection and geo-filtering. The metrics can be further filtered on the supported dimensions. |**Metrics**|**Description**|**Dimension**| | :| :-| :--|
-|**WAF Total Requests**|Count of successful requests that WAF engine has served.| Action, Country/Region, Method, Mode|
-|**WAF Managed Rule Matches**|Count of total requests that a managed rule has matched.| Action, Country/Region, Mode, Rule Group, Rule Id |
-|**WAF Custom Rule Matches**|Count of total requests that match a specific custom rule. | Action, Country/Region, Mode, Rule Group, Rule Name|
-|**WAF Bot Protection Matches**|Count of total requests that have been blocked or logged from malicious IP addresses. The IP addresses are sourced from the Microsoft Threat Intelligence feed.| Action, Country/Region, Bot Type, Mode|
+|**WAF Total Requests**|Count of successful requests that the WAF engine has served| Action, Country/Region, Method, Mode|
+|**WAF Managed Rule Matches**|Count of total managed rule matches| Action, Country/Region, Mode, Rule Group, Rule Id |
+|**WAF Custom Rule Matches**|Count of custom rule matches| Action, Country/Region, Mode, Rule Group, Rule Name|
+|**WAF Bot Protection Matches**|Count of total bot protection rule matches that have been blocked or logged from malicious IP addresses. The IP addresses are sourced from the Microsoft Threat Intelligence feed.| Action, Country/Region, Bot Type, Mode|
For metrics supported by Application Gateway V2 SKU, see [Application Gateway v2 metrics](../../application-gateway/application-gateway-metrics.md#metrics-supported-by-application-gateway-v2-sku)
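To read these metrics outside the portal, `Get-AzMetric` works against the Application Gateway resource. A sketch follows; note that the portal display names above differ from the API-level metric names, so the name `AzwafTotalRequests` used here is an assumption to confirm with `Get-AzMetricDefinition`.

```powershell
# Minimal sketch: query a WAF v2 metric on an Application Gateway.
# The resource ID is a placeholder; confirm API metric names first.
$agwId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<agw-name>"

# Discover the exact metric names exposed by the resource.
Get-AzMetricDefinition -ResourceId $agwId | Select-Object -ExpandProperty Name

# Query the WAF total requests metric (API name assumed).
Get-AzMetric -ResourceId $agwId -MetricName "AzwafTotalRequests" -AggregationType Total
```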
-## Application Gateway WAF V1 Metrics
+## Application Gateway WAF v1 Metrics
|**Metrics**|**Description**|**Dimension**| | :| :-| :--|