Updates from: 07/16/2022 01:11:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication In Azure Web App File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app-file-based.md
# Configure authentication in an Azure Web App configuration file by using Azure AD B2C
-This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [File-based configuration in Azure App Service authentication](/azure/app-service/configure-authentication-file-based) article.
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [File-based configuration in Azure App Service authentication](../app-service/configure-authentication-file-based.md) article.
## Overview
From your server code, the provider-specific tokens are injected into the reques
## Next steps
-* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](/azure/app-service/configure-authentication-user-identities).
-* Lear how to [Work with OAuth tokens in Azure App Service authentication](/azure/app-service/configure-authentication-oauth-tokens).
-
+* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](../app-service/configure-authentication-user-identities.md).
+* Learn how to [Work with OAuth tokens in Azure App Service authentication](../app-service/configure-authentication-oauth-tokens.md).
active-directory-b2c Configure Authentication In Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app.md
# Configure authentication in an Azure Web App by using Azure AD B2C
-This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [configure your App Service or Azure Functions app to login using an OpenID Connect provider](/azure/app-service/configure-authentication-provider-openid-connect) article.
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [configure your App Service or Azure Functions app to login using an OpenID Connect provider](../app-service/configure-authentication-provider-openid-connect.md) article.
## Overview
To register your application, follow these steps:
1. For the **Client Secret**, provide the Web App (client) secret from [step 2.2](#step-22-create-a-client-secret).
   > [!TIP]
- > Your client secret will be stored as an app setting to ensure secrets are stored in a secure fashion. You can update that setting later to use [Key Vault references](/azure/app-service/app-service-key-vault-references) if you wish to manage the secret in Azure Key Vault.
+ > Your client secret will be stored as an app setting to ensure secrets are stored in a secure fashion. You can update that setting later to use [Key Vault references](../app-service/app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
1. Keep the rest of the settings with the default values.
1. Press the **Add** button to finish setting up the identity provider.
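To illustrate the tip above, here's a minimal sketch of repointing the secret at Key Vault with Az PowerShell. The setting name, app, resource group, and vault names are placeholders, and the `@Microsoft.KeyVault(...)` syntax comes from the linked Key Vault references article:

```azurepowershell-interactive
# Repoint the client-secret app setting at a Key Vault secret instead of a literal value.
# Note: -AppSettings replaces the whole collection, so include every setting you need.
Set-AzWebApp -ResourceGroupName "my-rg" -Name "my-webapp" -AppSettings @{
    "MICROSOFT_PROVIDER_AUTHENTICATION_SECRET" = "@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/b2c-client-secret/)"
}
```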
From your server code, the provider-specific tokens are injected into the reques
## Next steps
-* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](/azure/app-service/configure-authentication-user-identities).
-* Lear how to [Work with OAuth tokens in Azure App Service authentication](/azure/app-service/configure-authentication-oauth-tokens).
-
+* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](../app-service/configure-authentication-user-identities.md).
+* Learn how to [Work with OAuth tokens in Azure App Service authentication](../app-service/configure-authentication-oauth-tokens.md).
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md
Determines whether one string claim is equal to another. The result is a new boo
| InputClaim | inputClaim1 | string | First claim type, which is to be compared. |
| InputClaim | inputClaim2 | string | Second claim type, which is to be compared. |
| InputParameter | operator | string | Possible values: `EQUAL` or `NOT EQUAL`. |
-| InputParameter | ignoreCase | boolean | Specifies whether this comparison should ignore the case of the strings being compared. |
+| InputParameter | ignoreCase | string | Specifies whether this comparison should ignore the case of the strings being compared. |
| OutputClaim | outputClaim | boolean | The claim that is produced after this claims transformation has been invoked. |

### Example of CompareClaims
Use this claims transformation to check if a claim is equal to another claim. T
  </InputClaims>
  <InputParameters>
    <InputParameter Id="operator" DataType="string" Value="NOT EQUAL" />
- <InputParameter Id="ignoreCase" DataType="boolean" Value="true" />
+ <InputParameter Id="ignoreCase" DataType="string" Value="true" />
  </InputParameters>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="SameEmailAddress" TransformationClaimType="outputClaim" />
active-directory Workday Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-attribute-reference.md
To configure additional XPATHs, refer to the section [Tutorial: Managing your co
| 12 | Company | wd:Worker/wd:Worker\_Data/wd:Organization\_Data/wd:Worker\_Organization\_Data\[translate\(string\(wd:Organization\_Data/wd:Organization\_Type\_Reference/wd:ID\[@wd:type='Organization\_Type\_ID'\]\),'abcdefghijklmnopqrstuvwxyz','ABCDEFGHIJKLMNOPQRSTUVWXYZ'\)='COMPANY'\]/wd:Organization\_Data/wd:Organization\_Name/text\(\) |
| 13 | ContingentWorkerID | wd:Worker/wd:Worker\_Reference/wd:ID\[@wd:type='Contingent\_Worker\_ID'\]/text\(\) |
| 14 | CountryReference | wd:Worker/wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary_Job=1]/wd:Position\_Data/wd:Business\_Site\_Summary\_Data/wd:Address\_Data/wd:Country\_Reference/wd:ID\[@wd:type='ISO\_3166\-1\_Alpha\-3\_Code'\]/text\(\) |
-| 15 | CountryReferenceFriendly | Not supported\. |
+| 15 | CountryReferenceFriendly | wd:Worker/wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary\_Job=1\]/wd:Position\_Data/wd:Business\_Site\_Summary\_Data/wd:Address\_Data/wd:Country\_Reference/@wd:Descriptor |
| 16 | CountryReferenceNumeric | wd:Worker/wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary_Job=1]/wd:Position\_Data/wd:Business\_Site\_Summary\_Data/wd:Address\_Data/wd:Country\_Reference/wd:ID\[@wd:type='ISO\_3166\-1\_Numeric\-3\_Code'\]/text\(\) |
| 17 | CountryReferenceTwoLetter | wd:Worker/wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary_Job=1]/wd:Position\_Data/wd:Business\_Site\_Summary\_Data/wd:Address\_Data/wd:Country\_Reference/wd:ID\[@wd:type='ISO\_3166\-1\_Alpha\-2\_Code'\]/text\(\) |
| 18 | CountryRegionReference | wd:Worker/wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary_Job=1]/wd:Position\_Data/wd:Business\_Site\_Summary\_Data/wd:Address\_Data/wd:Country\_Region\_Descriptor/text\(\) |
| 19 | EmailAddress | wd:Worker/wd:Worker\_Data/wd:Personal\_Data/wd:Contact\_Data/wd:Email\_Address\_Data\[wd:Usage\_Data/@wd:Public='1' and string\(wd:Usage\_Data/wd:Type\_Data/wd:Type\_Reference/wd:ID\[@wd:type='Communication\_Usage\_Type\_ID'\]\)='WORK'\]/wd:Email\_Address/text\(\) |
| 20 | EmployeeID | wd:Worker/wd:Worker\_Reference/wd:ID\[@wd:type='Employee\_ID'\]/text\(\) |
-| 21 | FacilityLocation | wd:Worker/wd:Worker\_Data/wd:Organization\_Data/wd:Worker\_Organization\_Data\[translate\(string\(wd:Organization\_Data/wd:Organization\_Type\_Reference/wd:ID\[@wd:type='Organization\_Type\_ID'\]\),'abcdefghijklmnopqrstuvwxyz','ABCDEFGHIJKLMNOPQRSTUVWXYZ'\)='FACILITY'\]/wd:Organization\_Reference/@wd:Descriptor |
+| 21 | FacilityLocation | wd:Worker/wd:Worker\_Data/wd:Organization\_Data/wd:Worker\_Organization\_Data/wd:Organization\_Data\[translate(string(wd:Organization\_Type\_Reference/wd:ID\[@wd:type='Organization\_Type\_ID'\]),'abcdefghijklmnopqrstuvwxyz','ABCDEFGHIJKLMNOPQRSTUVWXYZ')='LOCATION\_HIERARCHY'\]/wd:Organization\_Name/text\(\) |
| 22 | Fax | wd:Worker/wd:Worker\_Data/wd:Personal\_Data/wd:Contact\_Data/wd:Phone\_Data\[wd:Usage\_Data/@wd:Public='1' and string\(wd:Usage\_Data/wd:Type\_Data/wd:Type\_Reference/wd:ID\[@wd:type='Communication\_Usage\_Type\_ID'\]\)='WORK' and string\(wd:Phone\_Device\_Type\_Reference/wd:ID\[@wd:type='Phone\_Device\_Type\_ID'\]\)='Fax'\]/@wd:Workday_Traditional_Formatted_Phone |
| 23 | FirstName | wd:Worker/wd:Worker\_Data/wd:Personal\_Data/wd:Name\_Data/wd:Legal\_Name\_Data/wd:Name\_Detail\_Data/wd:First\_Name/text\(\) |
| 24 | JobClassificationID | wd:Worker/wd:Worker\_Data/wd:Employment\_Data/wd:Worker\_Job\_Data\[@wd:Primary_Job=1]/wd:Position\_Data/wd:Job\_Classification\_Summary\_Data/wd:Job\_Classification\_Reference/wd:ID\[@wd:type='Job\_Classification\_Reference\_ID'\]/text\(\) |
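These XPATHs can be sanity-checked outside of provisioning. The following is a hedged sketch that evaluates the EmployeeID expression (row 20) against a saved Get_Workers SOAP response; the file name is a placeholder, and the `wd` prefix maps to the standard Workday namespace `urn:com.workday/bsvc`:

```azurepowershell-interactive
# Evaluate a Workday XPath locally against a saved SOAP response for testing.
$ns = @{ wd = 'urn:com.workday/bsvc' }
Select-Xml -Path .\GetWorkersResponse.xml `
    -XPath "//wd:Worker/wd:Worker_Reference/wd:ID[@wd:type='Employee_ID']/text()" `
    -Namespace $ns |
    ForEach-Object { $_.Node.Value }
```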
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 07/14/2022 Last updated : 07/15/2022
Microsoft Authenticator can be used to sign in to any Azure AD account without u
This authentication technology can be used on any device platform, including mobile. This technology can also be used with any app or website that integrates with Microsoft Authentication Libraries. People who enabled phone sign-in from Microsoft Authenticator see a message that asks them to tap a number in their app. No username or password is asked for. To complete the sign-in process in the app, a user must next take the following actions:
People who enabled phone sign-in from Microsoft Authenticator see a message that
1. Choose **Approve**.
1. Provide their PIN or biometric.
-## Multiple accounts on iOS (preview)
-
-You can enable passwordless phone sign-in for multiple accounts in Microsoft Authenticator on any supported iOS device. Consultants, students, and others with multiple accounts in Azure AD can add each account to Microsoft Authenticator and use passwordless phone sign-in for all of them from the same iOS device.
-
-Previously, admins might not require passwordless sign-in for users with multiple accounts because it requires them to carry more devices for sign-in. By removing the limitation of one user sign-in from a device, admins can more confidently encourage users to register passwordless phone sign-in and use it as their default sign-in method.
-
-The Azure AD accounts can be in the same tenant or different tenants. Guest accounts aren't supported for multiple account sign-in from one device.
-
->[!NOTE]
->Multiple accounts on iOS is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
## Prerequisites

To use passwordless phone sign-in with Microsoft Authenticator, the following prerequisites must be met:

- Recommended: Azure AD Multi-Factor Authentication, with push notifications allowed as a verification method. Push notifications to your smartphone or tablet help the Authenticator app to prevent unauthorized access to accounts and stop fraudulent transactions. The Authenticator app automatically generates codes when set up to do push notifications so a user has a backup sign-in method even if their device doesn't have connectivity.
- Latest version of Microsoft Authenticator installed on devices running iOS 12.0 or greater, or Android 6.0 or greater.
-- For Android, the device that runs Microsoft Authenticator must be registered to an individual user. We're actively working to enable multiple accounts on Android.
-- For iOS, the device must be registered with each tenant where it's used to sign in. For example, the following device must be registered with Contoso and Wingtiptoys to allow all accounts to sign in:
- - balas@contoso.com
- - balas@wingtiptoys.com and bsandhu@wingtiptoys
-- For iOS, the option in Microsoft Authenticator to allow Microsoft to gather usage data must be enabled. It's not enabled by default. To enable it in Microsoft Authenticator, go to **Settings** > **Usage Data**.
-
- :::image type="content" border="true" source="./media/howto-authentication-passwordless-phone/telemetry.png" alt-text="Screenshot os Usage Data in Microsoft Authenticator.":::
+- The device that runs Microsoft Authenticator must be registered to an individual user. We're actively working to enable multiple accounts on Android.
To use passwordless authentication in Azure AD, first enable the combined registration experience, then enable users for the passwordless method.
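As a sketch of that second step, the Microsoft Authenticator method can be enabled through the Microsoft Graph authentication methods policy. The endpoint, scope, and payload below follow the public Graph API but should be treated as assumptions rather than the article's own procedure:

```azurepowershell-interactive
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

# Enable the Microsoft Authenticator method in the tenant's authentication methods policy.
$body = @{
    "@odata.type" = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    state         = "enabled"
} | ConvertTo-Json

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/MicrosoftAuthenticator" `
    -Body $body
```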
An end user can be enabled for multifactor authentication (MFA) through an on-pr
If the user attempts to upgrade multiple installations (5+) of Microsoft Authenticator with the passwordless phone sign-in credential, this change might result in an error.
+### Device registration
+
+Before you can create this new strong credential, there are prerequisites. One prerequisite is that the device on which Microsoft Authenticator is installed must be registered within the Azure AD tenant to an individual user.
+
+Currently, a device can only be enabled for passwordless sign-in in a single tenant. This limit means that only one work or school account in Microsoft Authenticator can be enabled for phone sign-in.
+
+> [!NOTE]
+> Device registration is not the same as device management or mobile device management (MDM). Device registration only associates a device ID and a user ID together, in the Azure AD directory.
## Next steps
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
For information about permissions usage reports, see [Generate and download the
## Does Permissions Management integrate with third-party ITSM (Information Technology Service Management) tools?
-Permissions Management integrates with ServiceNow.
+Integration with ITSM tools, such as ServiceNow, is on the future roadmap.
## How is Permissions Management being deployed?
Where xx-XX is one of the following available language parameters: 'cs-CZ', 'de-
- [Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
- For more information about Microsoft's privacy and security terms, see [Commercial Licensing Terms](https://www.microsoft.com/licensing/terms/product/ForallOnlineServices/all).
- For more information about Microsoft's data processing and security terms when you subscribe to a product, see [Microsoft Products and Services Data Protection Addendum (DPA)](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA).
-- For more information about Microsoft's policy and practices for Data Subject Requests for GDPR and CCPA: [https://docs.microsoft.com/en-us/compliance/regulatory/gdpr-dsr-azure](https://docs.microsoft.com/compliance/regulatory/gdpr-dsr-azure).
+- For more information about Microsoft's policy and practices for Data Subject Requests for GDPR and CCPA: [https://docs.microsoft.com/en-us/compliance/regulatory/gdpr-dsr-azure](/compliance/regulatory/gdpr-dsr-azure).
## Next steps

- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
-- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
+- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
To enable Permissions Management in your organization:
> [!NOTE]
> During public preview, Permissions Management doesn't perform a license check.
+> The public preview environment will only be available until October 7th, 2022. You will no longer be able to view or access your configuration and data in the public preview environment after that date.
+> Once you complete all the steps and confirm that you want to use Microsoft Entra Permissions Management, access to the public preview environment will be lost. Take a note of your configuration before you start.
+> To start using generally available Microsoft Entra Permissions Management, you must purchase a license or begin a trial. From the public preview console, initiate the workflow by selecting Start.
+
+
+## How to enable Permissions Management on your Azure AD tenant
active-directory Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-groups.md
When you select **Groups**, the **Usage Analytics** dashboard provides a high-le
- **Authorization System Type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).
- **Authorization System**: Select from a **List** of accounts and **Folders**.
- - **Group Type**: Select **All**, **ED**, or **Local**.
+ - **Group Type**: Select **All**, **ED** (enterprise directory), or **Local**.
- **Group Activity Status**: Select **All**, **Active**, or **Inactive**.
- **Tasks Type**: Select **All**, **High Risk Tasks**, or **Delete Tasks**.
- **Search**: Enter a group name to find a specific group.
The **Groups** table displays the results of your query:
- **Group Name**: Provides the name of the group.
  - To view details about the group, select the down arrow.
-- A **Group Type** icon displays to the left of the group name to describe the type of group (**ED** or **Local**).
+- A **Group Type** icon displays to the left of the group name to describe the type of group (**ED** (enterprise directory) or **Local**).
- The **Domain/Account** name.
- The **Permission Creep Index (PCI)**: Provides the following information:
  - **Index**: A numeric value assigned to the PCI.
You can filter user details by type of user, user role, app, or service used, or
1. From the **Authorization System Type** dropdown, select the authorization system you want to use: **AWS**, **Azure**, or **GCP**.
1. From the **Authorization System** dropdown, select from a **List** of accounts and **Folders**.
-1. From the **Group Type** dropdown, select the type of user: **All**, **ED**, or **Local**.
+1. From the **Group Type** dropdown, select the type of user: **All**, **ED** (enterprise directory), or **Local**.
1. Select **Apply** to run your query and display the information you selected. Select **Reset Filter** to discard your changes.
active-directory Msal Client Application Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md
Previously updated : 11/20/2020 Last updated : 07/15/2022
The authority is a URL that indicates a directory that MSAL can request tokens f
Common authorities are:
-| Common authority URLs | When to use |
-|--|--|
-| `https://login.microsoftonline.com/<tenant>/` | Sign in users of a specific organization only. The `<tenant>` in the URL is the tenant ID of the Azure Active Directory (Azure AD) tenant (a GUID), or its tenant domain. |
-| `https://login.microsoftonline.com/common/` | Sign in users with work and school accounts or personal Microsoft accounts. |
-| `https://login.microsoftonline.com/organizations/` | Sign in users with work and school accounts. |
-| `https://login.microsoftonline.com/consumers/` | Sign in users with personal Microsoft accounts (MSA) only. |
+| Common authority URLs | When to use |
+| -- | - |
+| `https://login.microsoftonline.com/<tenant>/` | Sign in users of a specific organization only. The `<tenant>` in the URL is the tenant ID of the Azure Active Directory (Azure AD) tenant (a GUID), or its tenant domain. |
+| `https://login.microsoftonline.com/common/` | Sign in users with work and school accounts or personal Microsoft accounts. |
+| `https://login.microsoftonline.com/organizations/` | Sign in users with work and school accounts. |
+| `https://login.microsoftonline.com/consumers/` | Sign in users with personal Microsoft accounts (MSA) only. |
The authority you specify in your code needs to be consistent with the **Supported account types** you specified for the app in **App registrations** in the Azure portal.
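For illustration only, here's a minimal sketch using the community MSAL.PS PowerShell module (an assumption; the article itself targets the MSAL libraries) showing how the authority choice maps to a token request. The `organizations` authority limits sign-in to work and school accounts, and the client ID is a placeholder:

```azurepowershell-interactive
# Acquire a token interactively against the 'organizations' authority.
Get-MsalToken -ClientId "00000000-0000-0000-0000-000000000000" `
    -TenantId "organizations" `
    -Scopes "https://graph.microsoft.com/User.Read" `
    -Interactive
```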
The authority can be:
Azure AD cloud authorities have two parts:
-- The identity provider *instance*
-- The sign-in *audience* for the app
+- The identity provider _instance_
+- The sign-in _audience_ for the app
The instance and audience can be concatenated and provided as the authority URL. This diagram shows how the authority URL is composed:
The instance and audience can be concatenated and provided as the authority URL.
## Cloud instance
-The *instance* is used to specify if your app is signing users from the Azure public cloud or from national clouds. Using MSAL in your code, you can set the Azure cloud instance by using an enumeration or by passing the URL to the [national cloud instance](authentication-national-cloud.md#azure-ad-authentication-endpoints) as the `Instance` member (if you know it).
+The _instance_ is used to specify if your app is signing users from the Azure public cloud or from national clouds. Using MSAL in your code, you can set the Azure cloud instance by using an enumeration or by passing the URL to the [national cloud instance](authentication-national-cloud.md#azure-ad-authentication-endpoints) as the `Instance` member.
MSAL.NET will throw an explicit exception if both `Instance` and `AzureCloudInstance` are specified.
Currently, the only way to get an app to sign in users with only personal Micros
## Client ID
-The client ID is the unique application (client) ID assigned to your app by Azure AD when the app was registered.
+The client ID is the unique **Application (client) ID** assigned to your app by Azure AD when the app was registered.
## Redirect URI
The redirect URI is the URI the identity provider will send the security tokens
If you're a public client app developer who's using MSAL:
-- You'd want to use `.WithDefaultRedirectUri()` in desktop or UWP applications (MSAL.NET 4.1+). This method will set the public client application's redirect URI property to the default recommended redirect URI for public client applications.
+- You'd want to use `.WithDefaultRedirectUri()` in desktop or Universal Windows Platform (UWP) applications (MSAL.NET 4.1+). The `.WithDefaultRedirectUri()` method will set the public client application's redirect URI property to the default recommended redirect URI for public client applications.
- | Platform | Redirect URI |
- |--|--|
- | Desktop app (.NET FW) | `https://login.microsoftonline.com/common/oauth2/nativeclient` |
- | UWP | value of `WebAuthenticationBroker.GetCurrentApplicationCallbackUri()`. This enables SSO with the browser by setting the value to the result of WebAuthenticationBroker.GetCurrentApplicationCallbackUri() which you need to register |
- | .NET Core | `https://localhost`. This enables the user to use the system browser for interactive authentication since .NET Core doesn't have a UI for the embedded web view at the moment. |
+ | Platform | Redirect URI |
+ | | |
+ | Desktop app (.NET FW) | `https://login.microsoftonline.com/common/oauth2/nativeclient` |
+ | UWP | value of `WebAuthenticationBroker.GetCurrentApplicationCallbackUri()`. This enables single sign-on (SSO) with the browser by setting the value to the result of WebAuthenticationBroker.GetCurrentApplicationCallbackUri(), which you need to register |
+ | .NET Core | `https://localhost` enables the user to use the system browser for interactive authentication since .NET Core doesn't have a UI for the embedded web view at the moment. |
-- You don't need to add a redirect URI if you're building a Xamarin Android and iOS application that doesn't support the broker redirect URI. It is automatically set to `msal{ClientId}://auth` for Xamarin Android and iOS.
+- You don't need to add a redirect URI if you're building a Xamarin Android and iOS application that doesn't support the broker redirect URI. It's automatically set to `msal{ClientId}://auth` for Xamarin Android and iOS.
- Configure the redirect URI in [App registrations](https://aka.ms/appregistrations):
- ![Redirect URI in App registrations](media/msal-client-application-configuration/redirect-uri.png)
+ ![Redirect URI in App registrations](media/msal-client-application-configuration/redirect-uri.png)
You can override the redirect URI by using the `RedirectUri` property (for example, if you use brokers). Here are some examples of redirect URIs for that scenario:
- `RedirectUriOnAndroid` = "msauth-5a434691-ccb2-4fd1-b97b-b64bcfbc03fc://com.microsoft.identity.client.sample";
- `RedirectUriOnIos` = $"msauth.{Bundle.ID}://auth";
-For additional iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Leveraging-the-broker-on-iOS).
-For additional Android details, see [Brokered auth in Android](msal-android-single-sign-on.md).
+For more iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Leveraging-the-broker-on-iOS).
+For more Android details, see [Brokered auth in Android](msal-android-single-sign-on.md).
### Redirect URI for confidential client apps
-For web apps, the redirect URI (or reply URL) is the URI that Azure AD will use to send the token back to the application. This URI can be the URL of the web app/web API if the confidential app is one of these. The redirect URI needs to be registered in app registration. This registration is especially important when you deploy an app that you've initially tested locally. You then need to add the reply URL of the deployed app in the application registration portal.
+For web apps, the redirect URI (or reply URL) is the URI that Azure AD will use to send the token back to the application. The URI can be the URL of the web app/web API if the confidential app is one of them. The redirect URI needs to be registered in app registration. The registration is especially important when you deploy an app that you've initially tested locally. You then need to add the reply URL of the deployed app in the application registration portal.
For daemon apps, you don't need to specify a redirect URI.

## Client secret
-This option specifies the client secret for the confidential client app. This secret (app password) is provided by the application registration portal or provided to Azure AD during app registration with PowerShell AzureAD, PowerShell AzureRM, or Azure CLI.
+This option specifies the client secret for the confidential client app. The client secret (app password) is provided by the application registration portal or provided to Azure AD during app registration with PowerShell AzureAD, PowerShell AzureRM, or Azure CLI.
## Logging
-To help in debugging and authentication failure troubleshooting scenarios, the Microsoft Authentication Library provides built-in logging support. Logging is each library is covered in the following articles:
+
+To help in debugging and authentication failure troubleshooting scenarios, MSAL provides built-in logging support. Logging in each library is covered in the following articles:
:::row::: :::column:::
active-directory Clean Up Unmanaged Azure Ad Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-unmanaged-azure-ad-accounts.md
Azure Active Directory (Azure AD) supports self-service sign-up for
email-verified users. Users can create Azure AD accounts if they can verify email ownership. To learn more, see, [What is self-service sign-up for Azure Active
-Directory?](https://docs.microsoft.com/azure/active-directory/enterprise-users/directory-self-service-signup)
+Directory?](./directory-self-service-signup.md)
However, if a user creates an account, and the domain isn't verified in an Azure AD tenant, the user is created in an unmanaged, or viral
You can remove unmanaged Azure AD accounts from your Azure AD tenants
and prevent these types of accounts from redeeming future invitations.

1. Enable [email one-time
- passcode](https://docs.microsoft.com/azure/active-directory/external-identities/one-time-passcode#enable-email-one-time-passcode)
+ passcode](../external-identities/one-time-passcode.md#enable-email-one-time-passcode)
(OTP).
2. Use the sample application in [Azure-samples/Remove-unmanaged-guests](https://github.com/Azure-Samples/Remove-Unmanaged-Guests) or
and prevent these types of accounts from redeeming future invitations.
PowerShell module to identify viral users in an Azure AD tenant and reset user redemption status.
-Once the above steps are complete, when users with unmanaged Azure AD accounts try to access your tenant, they'll re-redeem their invitations. However, because Email OTP is enabled, Azure AD will prevent users from redeeming with an existing unmanaged Azure AD account and theyΓÇÖll redeem with another account type. Google Federation and SAML/WS-Fed aren't enabled by default. So by default, these users will redeem with either an MSA or Email OTP, with MSA taking precedence. For a full explanation on the B2B redemption precedence, refer to the [redemption precedence flow chart](https://docs.microsoft.com/azure/active-directory/external-identities/redemption-experience#invitation-redemption-flow).
+Once the above steps are complete, when users with unmanaged Azure AD accounts try to access your tenant, they'll re-redeem their invitations. However, because Email OTP is enabled, Azure AD will prevent users from redeeming with an existing unmanaged Azure AD account and they'll redeem with another account type. Google Federation and SAML/WS-Fed aren't enabled by default. So by default, these users will redeem with either an MSA or Email OTP, with MSA taking precedence. For a full explanation on the B2B redemption precedence, refer to the [redemption precedence flow chart](../external-identities/redemption-experience.md#invitation-redemption-flow).
## Overtaken tenants and domains

Some tenants created as unmanaged tenants can be taken over and converted to a managed tenant. See [take over an unmanaged directory as
-administrator in Azure AD](https://docs.microsoft.com/azure/active-directory/enterprise-users/domains-admin-takeover).
+administrator in Azure AD](./domains-admin-takeover.md).
In some cases, overtaken domains might not be updated, for example, missing a DNS TXT record and therefore become flagged as unmanaged. Implications are:
To delete unmanaged Azure AD accounts, run:
## Next steps

Examples of using
-[Get-MSIdUnmanagedExternalUser](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MsIdUnmanagedExternalUser)
+[Get-MSIdUnmanagedExternalUser](https://github.com/AzureAD/MSIdentityTools/wiki/Get-MsIdUnmanagedExternalUser)
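A minimal sketch of the flow those next steps describe, assuming the MSIdentityTools module from the linked wiki and a Microsoft Graph connection with sufficient read permissions:

```azurepowershell-interactive
# Install the Microsoft identity tools module and connect to Microsoft Graph.
Install-Module MSIdentityTools -Scope CurrentUser
Connect-MgGraph -Scopes "User.Read.All"

# List unmanaged (viral) external users in the tenant.
Get-MsIdUnmanagedExternalUser
```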
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
Title: Group membership for Azure AD dynamic groups with memberOf - Azure AD | M
description: How to create a dynamic membership group that can contain members of other groups in Azure Active Directory. documentationcenter: ''--++ Previously updated : 06/23/2022- Last updated : 07/15/2022+
# Group membership in a dynamic group (preview) in Azure Active Directory
-This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignment and role-based access control. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
+This feature preview in Azure Active Directory (Azure AD), part of Microsoft Entra, enables admins to create dynamic groups that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignments. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
:::image type="content" source="./media/groups-dynamic-rule-member-of/member-of-diagram.png" alt-text="Diagram showing how the memberOf attribute works.":::
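A hedged sketch of the diagram's scenario using the Microsoft Graph PowerShell SDK; the two object IDs are placeholders for Security-Group-X and Security-Group-Y, and the rule follows the memberOf preview syntax:

```azurepowershell-interactive
Connect-MgGraph -Scopes "Group.ReadWrite.All"

# Create Dynamic-Group-A whose membership is the members of two existing groups.
New-MgGroup -DisplayName "Dynamic-Group-A" `
    -MailEnabled:$false -MailNickname "DynamicGroupA" -SecurityEnabled `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule "user.memberof -any (group.objectId -in ['<group-x-object-id>', '<group-y-object-id>'])" `
    -MembershipRuleProcessingState "On"
```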
active-directory Active Directory How To Find Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-how-to-find-tenant.md
Azure subscriptions have a trust relationship with Azure Active Directory (Azure
1. Select **Properties**.
-1. Then, scroll down to the **Tenant ID** field. Your tenant ID will be in the box.
+1. Scroll down to the **Tenant ID** field. Your tenant ID will be in the box.
:::image type="content" source="media/active-directory-how-to-find-tenant/portal-tenant-id.png" alt-text="Azure Active Directory - Properties - Tenant ID - Tenant ID field":::

## Find tenant ID with PowerShell
-You can also find the tenant programmatically. To find the tenant ID with Azure PowerShell, use the cmdlet `Get-AzTenant`.
+To find the tenant ID with Azure PowerShell, use the cmdlet `Get-AzTenant`.
```azurepowershell-interactive
Connect-AzAccount
```
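Continuing the snippet, a hedged sketch (cmdlet from the Az.Accounts module; output property names assumed):

```azurepowershell-interactive
# List every tenant visible to the signed-in account; the Id property is the tenant ID.
Get-AzTenant | Select-Object Id, Name
```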
For more information, see this Azure PowerShell cmdlet reference for [Get-AzTena
## Find tenant ID with CLI
-If you want to use a command-line interface to find the tenant ID, you can do so with [Azure CLI](/cli/azure/install-azure-cli) or [Microsoft 365 CLI](https://pnp.github.io/cli-microsoft365/).
+The [Azure CLI](/cli/azure/install-azure-cli) or [Microsoft 365 CLI](https://pnp.github.io/cli-microsoft365/) can be used to find the tenant ID.
For Azure CLI, use one of the commands **az login**, **az account list**, or **az account tenant list** as shown in the following example. Notice the **tenantId** property for each of your subscriptions in the output from each command.
active-directory Active Directory Troubleshooting Support Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
If you are unable to find answers by using self-help resources, you can open an
### How to open a support ticket for Azure AD in the Azure portal

> [!NOTE]
-> * For billing or subscription issues, you must use the [Microsoft 365 admin center](https://admin.microsoft.com).
-> * If you're using Azure AD B2C, open a support ticket by first switching to an Azure AD tenant that has an Azure subscription associated with it. Typically, this is your employee tenant or the default tenant created for you when you signed up for an Azure subscription. To learn more, see [how an Azure subscription is related to Azure AD](active-directory-how-subscriptions-associated-directory.md).
+> If you're using Azure AD B2C, open a support ticket by first switching to an Azure AD tenant that has an Azure subscription associated with it. Typically, this is your employee tenant or the default tenant created for you when you signed up for an Azure subscription. To learn more, see [how an Azure subscription is related to Azure AD](active-directory-how-subscriptions-associated-directory.md).
1. Sign in to [the Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
active-directory Active Directory Users Profile Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-profile-azure-portal.md
As you'll see, there's more information available in a user's profile than what
>[!Note]
>You must use Windows Server Active Directory to update the identity, contact info, or job info for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes.
+ >
+ > If you're having issues updating a user's profile picture, ensure that the Office 365 Exchange Online enterprise app is enabled for users to sign in.
## Next steps

After you've updated your users' profiles, you can perform the following basic processes:
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
A typical migration workstream has the following stages:
## Users and Groups
-### Move password self-service
+### Enable password self-service
We recommend a [passwordless environment](../authentication/concept-authentication-passwordless.md). Until then, you can migrate password self-service workflows from on-premises systems to Azure AD to simplify your environment. Azure AD [self-service password reset (SSPR)](../authentication/concept-sspr-howitworks.md) gives users the ability to change or reset their password, with no administrator or help desk involvement.
-To enable self-service capabilities, your authentication methods must be updated to a [level that supported by self-service capabilities](../authentication/tutorial-enable-sspr.md). Once authentication methods are updated, you'll want to enable user self-service password capability for your Azure AD authentication environment.
+To enable self-service capabilities, choose the appropriate [authentication methods](../authentication/concept-authentication-methods.md) for your organization. Once the authentication methods are updated, you can enable user self-service password capability for your Azure AD authentication environment. For deployment guidance, see [Deployment considerations for Azure Active Directory self-service password reset](../authentication/howto-sspr-deployment.md).
-### To evaluate and pilot SSPR
-
-* Enable [combined registration (multi-factor authentication (MFA) +SSPR)](../authentication/concept-registration-mfa-sspr-combined.md) for a target group of users
-
-* Deploy [SSPR](../authentication/tutorial-enable-sspr.md) for a target group of users
-
-* For that group of users with Azure AD and Hybrid Azure AD joined devices (Windows devices - 7, 8, 8.1 and 10), enable [Windows password reset](../authentication/howto-sspr-windows.md) for those users.
+**Additional considerations include**:
* Deploy [Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md) in a subset of DCs with *Audit Mode* to gather information about impact of modern policies. For more guidance, see [Enable on-premises Azure Active Directory Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md).-
-### To scale out
-
-Gradually register and enable SSPR. For example, roll out by region, subsidiary, department, etc. for all users. This enables both MFA and SSPR. Refer to [Sample SSPR rollout materials](https://www.microsoft.com/download/details.aspx?id=56768) to assist with required end-user communications and evangelizing.
-
-**Key points:**
-
-* Use Azure AD password policies on the domain.
-
+* Gradually register and enable [Combined registration for SSPR and Azure AD Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md). This enables both MFA and SSPR. For example, roll out by region, subsidiary, department, etc. for all users.
* Go through a cycle of password change for all users to flush out weak passwords.- * Once the cycle is complete, implement the policy expiration time.
-* Enable Windows 10 password reset ([Self-service password reset for Windows devices - Azure Active Directory](../authentication/howto-sspr-windows.md)) for all users
-
-For Windows down-level devices, follow [these instructions](../authentication/howto-sspr-windows.md)
-
-* Add monitoring information like workbooks, for reset activity ([Self-service password reset reports - Azure Active Directory](../authentication/howto-sspr-reporting.md)) - Authentication Methods Insights and reporting ([Authentication Methods Activity - Azure Active Directory](../authentication/howto-authentication-methods-activity.md))
- * Switch the "Password Protection" configuration in the DCs that have "Audit Mode" set to "Enforced mode" ([Enable on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md))-
-* For customers with Azure AD Identity Protection, enable [password reset as a control in Conditional Access policies](../identity-protection/howto-identity-protection-configure-risk-policies.md)for risky users (users marked as risky through Identity Protection). [Investigate risk Azure Active Directory Identity Protection](../identity-protection/howto-identity-protection-investigate-risk.md)
+>[!NOTE]
+>* End user communications and evangelizing are recommended for a smooth deployment. See [Sample SSPR rollout materials](https://www.microsoft.com/download/details.aspx?id=56768) to assist with required end-user communications and evangelizing.
+>* For customers with Azure AD Identity Protection, enable [password reset as a control in Conditional Access policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) for risky users (users marked as risky through Identity Protection).
### Move groups management
active-directory Secure With Azure Ad Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-best-practices.md
+
+ Title: Best practices to secure with Azure Active Directory
+description: Best practices we recommend you follow to secure your isolated environments in Azure Active Directory.
+++++++ Last updated : 7/5/2022++++++
+# Best practices for all isolation architectures
+
+The following are design considerations for all isolation configurations. Throughout this content, there are many links. We link to content, rather than duplicate it here, so you'll always have access to the most up-to-date information.
+
+For general guidance on how to configure Azure Active Directory (Azure AD) tenants (isolated or not), refer to the [Azure AD feature deployment guide](../fundamentals/active-directory-deployment-checklist-p2.md).
+
+>[!NOTE]
+>For all isolated tenants we suggest you use clear and differentiated branding to help avoid human error of working in the wrong tenant.
+
+## Isolation security principles
+
+When designing isolated environments, it's important to consider the following principles:
+
+* **Use only modern authentication** - Applications deployed in isolated environments must use claims-based modern authentication (for example, SAML, WS-Fed, OAuth2, and OpenID Connect) to use capabilities such as federation, Azure AD B2B collaboration, delegation, and the consent framework. This way, legacy applications that have a dependency on legacy authentication methods such as NT LAN Manager (NTLM) won't carry forward in isolated environments.
+
+* **Enforce strong authentication** - Strong authentication must always be used when accessing the isolated environment services and infrastructure. Whenever possible, [passwordless authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-authentication-passwordless) such as [Windows Hello for Business](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-overview) or [FIDO2 security keys](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-passwordless-security-key) should be used.
+
+* **Deploy secure workstations** - [Secure workstations](/security/compass/privileged-access-devices) provide the mechanism to ensure that the platform and the identity that the platform represents are properly attested and secured against exploitation. Two other approaches to consider are:
+
+ * Use Windows 365 Cloud PCs (Cloud PC) with the Microsoft Graph API.
+
+ * Use [Conditional Access](../conditional-access/concept-condition-filters-for-devices.md) and filter for devices as a condition.
+
+* **Eliminate legacy trust mechanisms** - Isolated directories and services shouldn't establish trust relationships with other environments through legacy mechanisms such as Active Directory trusts. All trusts between environments should be established with modern constructs such as federation and claims-based identity.
+
+* **Isolate services** - Minimize the attack surface area by protecting underlying identities and service infrastructure from exposure. Enable access only through modern authentication for services and secure remote access (also protected by modern authentication) for the infrastructure.
+
+* **Directory-level role assignments** - Avoid or reduce the number of directory-level role assignments (for example, User Administrator scoped to the directory instead of to an administrative unit) and of service-specific directory roles with control plane actions (for example, Knowledge Admin with permissions to manage security group memberships).
+
+In addition to the guidance in the [Azure Active Directory general operations guide](../fundamentals/active-directory-ops-guide-ops.md), we also recommend the following considerations for isolated environments.
+
+## Human identity provisioning
+
+### Privileged Accounts
+
+Provision accounts in the isolated environment for administrative personnel and IT teams who will be operating the environment. This will enable you to add stronger security policies such as device-based access control for [secure workstations](https://docs.microsoft.com/security/compass/privileged-access-deployment). As discussed in previous sections, non-production environments can potentially utilize Azure AD B2B collaboration to onboard privileged accounts to the non-production tenants using the same posture and security controls designed for privileged access in their production environment.
+
+Cloud-only accounts are the simplest way to provision human identities in an Azure AD tenant and it's a good fit for greenfield environments. However, if there's an existing on-premises infrastructure that corresponds to the isolated environment (for example, pre-production or management Active Directory forest), you could consider synchronizing identities from there. This holds especially true if the on-premises infrastructure described above is also used for IaaS solutions that require server access to manage the solution data plane. For more information on this scenario, see [Protecting Microsoft 365 from on-premises attacks](../fundamentals/protect-m365-from-on-premises-attacks.md). Synchronizing from isolated on-premises environments might also be needed if there are specific regulatory compliance requirements such as smart-card only authentication.
+
+![Diagram that shows synchronizing admin identities from on-premises.](media/secure-with-azure-ad-best-practices/admin-id-sync.png)
+
+>[!NOTE]
+>There are no technical controls to do identity proofing for Azure AD B2B accounts. External identities provisioned with Azure AD B2B are bootstrapped with a single factor. The mitigation is for the organization to have a process to proof the required identities prior to a B2B invitation being issued, and regular access reviews of external identities to manage the lifecycle. Consider enabling a Conditional Access policy to control the MFA registration.
+
+### Outsourcing high risk roles
+
+To mitigate inside threats, it's possible to outsource the Global Administrator and Privileged Role Administrator roles to a managed service provider by using Azure AD B2B collaboration, or by delegating access through a CSP partner or Azure Lighthouse. This access can be tightly controlled by in-house staff via approval flows in Azure Privileged Identity Management (PIM). This approach can greatly reduce inside threats and help meet compliance demands for customers that have concerns.
+
+## Non-human identity provisioning
+
+### Emergency access accounts
+
+Provision [emergency access accounts](../roles/security-emergency-access.md) for "break glass" scenarios where normal administrative accounts can't be used in the event you're accidentally locked out of your Azure AD organization. For on-premises environments using federation systems such as Active Directory Federation Services (AD FS) for authentication, maintain alternate cloud-only credentials for your global administrators to ensure service delivery in the event of an on-premises infrastructure outage.
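A minimal sketch of provisioning such an account with the Microsoft Graph PowerShell SDK; all values are placeholders, and the account should be excluded from Conditional Access and MFA policies per the linked guidance:

```azurepowershell-interactive
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Create a cloud-only "break glass" account; store its credentials securely offline.
$passwordProfile = @{
    Password                      = "<generate-a-long-random-password>"
    ForceChangePasswordNextSignIn = $false
}
New-MgUser -DisplayName "Emergency Access" `
    -UserPrincipalName "breakglass@contoso.onmicrosoft.com" `
    -MailNickname "breakglass" -AccountEnabled `
    -PasswordProfile $passwordProfile
```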
+
+### Azure managed identities
+
+Use [Azure managed identities](../managed-identities-azure-resources/overview.md) for Azure resources that require a service identity. Check the [list of services that support managed identities](../managed-identities-azure-resources/managed-identities-status.md) when designing your Azure solutions.
+
+If managed identities aren't supported or not possible, consider [provisioning service principal objects](https://docs.microsoft.com/azure/active-directory/develop/app-objects-and-service-principals).
+
+### Hybrid service accounts
+
+Some hybrid solutions might require access to both on-premises and cloud resources. An example of a use case would be an Identity Governance solution that uses a service account on premises for access to AD DS and requires access to Azure AD.
+
+On-premises service accounts typically don't have the ability to sign in interactively, which means that in cloud scenarios they can't fulfill strong credential requirements such as multi-factor authentication (MFA). In this scenario, don't use a service account that has been synced from on-premises; instead, use a managed identity or a service principal. For a service principal (SP), use a certificate as a credential, or [protect the SP with Conditional Access](../conditional-access/workload-identity.md).
+
+If there are technical constraints that don't make this possible and the same account must be used for both on-premises and cloud, then implement compensating controls such as Conditional Access to lock down the hybrid account to come from a specific network location.
+
+## Resource assignment
+
+An enterprise solution may consist of multiple Azure resources, and its access should be managed and governed as a logical unit of assignment: a resource group. In that scenario, Azure AD security groups can be created and associated with the proper permissions and role assignment across all solution resources, so that adding or removing users from those groups results in allowing or denying access to the entire solution.
+
+We recommend you use security groups to grant access to Microsoft services that rely on licensing to provide access (for example, Dynamics 365, Power BI).
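To make the resource-group pattern concrete, a sketch with placeholder names that grants a security group access to every resource in a resource group:

```azurepowershell-interactive
# Adding or removing members of this group then allows or denies access to the whole solution.
$group = Get-AzADGroup -DisplayName "Solution-X-Operators"
New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName "Reader" `
    -ResourceGroupName "solution-x-rg"
```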
+
+Azure AD cloud native groups can be natively governed from the cloud when combined with [Azure AD access reviews](../governance/access-reviews-overview.md) and [Azure AD entitlement management](../governance/access-reviews-overview.md). Organizations who already have on-premises group governance tools can continue to use those tools and rely on identity synchronization with Azure AD Connect to reflect group membership changes.
+
+Azure AD also supports direct user assignment to third-party SaaS services (for example, Salesforce, ServiceNow) for single sign-on and identity provisioning. Direct assignments to resources can be natively governed from the cloud when combined with [Azure AD access reviews](../governance/access-reviews-overview.md) and [Azure AD entitlement management](../governance/entitlement-management-overview.md). Direct assignment might be a good fit for end-user facing assignment.
+
+Some scenarios might require granting access to on-premises resources through on-premises Active Directory security groups. For those cases, consider the synchronization cycle to Azure AD when designing process SLAs.
+
+## Authentication management
+
+This section describes the checks to perform and actions to take for credential management and access policies based on your organization's security posture.
+
+### Credential management
+
+#### Strong credentials
+
+All human identities (local accounts and external identities provisioned through B2B collaboration) in the isolated environment must be provisioned with strong authentication credentials such as multi-factor authentication or a FIDO key. Environments with an underlying on-premises infrastructure with strong authentication such as smart card authentication can continue using smart card authentication in the cloud.
+
+#### Passwordless credentials
+
+A [passwordless solution](../authentication/concept-authentication-passwordless.md) is the most convenient and secure method of authentication. Passwordless credentials such as [FIDO security keys](../authentication/howto-authentication-passwordless-security-key.md) and [Windows Hello for Business](https://docs.microsoft.com/windows/security/identity-protection/hello-for-business/hello-overview) are recommended for human identities with privileged roles.
+
+#### Password protection
+
+If the environment is synchronized from an on-premises Active Directory forest, you should deploy [Azure AD password protection](../authentication/concept-password-ban-bad-on-premises.md) to eliminate weak passwords in your organization. [Azure AD smart lockout](../authentication/howto-password-smart-lockout.md) should also be used in hybrid or cloud-only environments to lock out bad actors who are trying to guess your users' passwords or use brute-force methods to get in.
+
+#### Self-service password management
+
+Users needing to change or reset their passwords is one of the biggest sources of help desk call volume and cost. In addition to cost, changing the password as a tool to mitigate a user risk is a fundamental step in improving the security posture of your organization. At a minimum, deploy [Self-Service Password Management](../authentication/howto-sspr-deployment.md) for human and test accounts with passwords to deflect help desk calls.
+
+#### External identities passwords
+
+By using Azure AD B2B collaboration, an [invitation and redemption process](../external-identities/what-is-b2b.md) lets external users such as partners, developers, and subcontractors use their own credentials to access your company's resources. This mitigates the need to introduce more passwords into the isolated tenants.
+
+>[!Note]
+>Some applications, infrastructure, or workflows might require a local credential. Evaluate this on a case-by-case basis.
+
+#### Service principals credentials
+
+For scenarios where service principals are needed, use certificate credentials for service principals or [Conditional Access for workload identities](../conditional-access/workload-identity.md). If necessary, use client secrets as an exception to organizational policy.
+
+In both cases, Azure Key Vault can be used with Azure managed identities, so that the runtime environment (for example, an Azure function) can retrieve the credential from the key vault.
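+
+The following is a minimal sketch of that pattern, assuming the Azure SDK for Python; the vault URL and secret name are placeholders. A workload running in Azure (for example, an Azure function) uses its managed identity to retrieve a client secret at runtime, so no credential is stored in code or configuration.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+# DefaultAzureCredential picks up the managed identity when running in Azure.
+credential = DefaultAzureCredential()
+client = SecretClient(vault_url="https://contoso-vault.vault.azure.net", credential=credential)
+
+# Retrieve the service principal's client secret by name at runtime.
+secret = client.get_secret("sp-client-secret")
+print(f"Retrieved secret '{secret.name}'; authenticate the workload with secret.value")
+```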
+
+For an example of authenticating service principals with certificate credentials, see [Create a service principal with a self-signed certificate](../develop/howto-authenticate-service-principal-powershell.md).
+
+### Access policies
+
+Below are some specific recommendations for Azure solutions. For general guidance on Conditional Access policies for individual environments, check the [Conditional Access best practices](../conditional-access/overview.md), [Azure AD Operations Guide](../fundamentals/active-directory-ops-guide-auth.md), and [Conditional Access for Zero Trust](https://docs.microsoft.com/azure/architecture/guide/security/conditional-access-zero-trust):
+
+* Define [Conditional Access policies](../conditional-access/overview.md) for the [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md) cloud app to enforce identity security posture when accessing Azure Resource Manager. These policies should include MFA controls and device-based controls to enable access only through secure workstations (more on this in the Privileged Roles section under Identity Governance). Additionally, use [Conditional Access to filter for devices](../conditional-access/concept-condition-filters-for-devices.md).
+
+* All applications onboarded to isolated environments must have explicit Conditional Access policies applied as part of the onboarding process.
+
+* Define Conditional Access policies for [security information registration](../conditional-access/howto-conditional-access-policy-registration.md) that reflects a secure root of trust process on-premises (for example, for workstations in physical locations, identifiable by IP addresses, that employees must visit in person for verification).
+
+* Consider managing Conditional Access policies at scale with automation using the [MS Graph CA API](https://docs.microsoft.com/azure/active-directory/conditional-access/howto-conditional-access-apis). For example, you can use the API to configure, manage, and monitor CA policies consistently across tenants, as shown in the sketch after this list.
+
+* Consider using Conditional Access to restrict workload identities. Create a policy to limit or better control access based on location or other relevant circumstances.
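+
+As referenced above, the following is a hedged sketch of monitoring Conditional Access policies across tenants through the Microsoft Graph conditionalAccess endpoint. The tenant, app, and secret values are placeholders, and the app registration needs the Policy.Read.All permission in each tenant.
+
+```python
+import requests
+from azure.identity import ClientSecretCredential
+
+# Placeholder app registration; repeat per tenant in scope.
+credential = ClientSecretCredential(
+    tenant_id="<tenant-id>",
+    client_id="<client-id>",
+    client_secret="<client-secret>",
+)
+token = credential.get_token("https://graph.microsoft.com/.default").token
+
+# List every Conditional Access policy in the tenant, for example to detect drift.
+response = requests.get(
+    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
+    headers={"Authorization": f"Bearer {token}"},
+)
+response.raise_for_status()
+for policy in response.json()["value"]:
+    print(policy["displayName"], policy["state"])
+```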
+
+### Authentication challenges
+
+* External identities provisioned with Azure AD B2B might need to reprovision multi-factor authentication (MFA) credentials in the resource tenant if a cross-tenant access policy hasn't been set up with the resource tenant. In that case, onboarding to the system is bootstrapped with a single factor. To mitigate this risk, establish a process to vet the user and credential risk profile before a B2B invitation is issued, and define Conditional Access for the registration process as described previously.
+
+* Use [External identities cross-tenant access settings](../external-identities/cross-tenant-access-overview.md) to manage how your users collaborate with other Azure AD organizations and other Microsoft Azure clouds through B2B collaboration and [B2B direct connect](../external-identities/cross-tenant-access-settings-b2b-direct-connect.md).
+
+* For specific device configuration and control, you can use device filters in Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md). This enables you to restrict access to Azure management tools from a designated secure admin workstation (SAW). Other approaches you can take include using [Azure Virtual desktop](../../virtual-desktop/environment-setup.md), [Azure Bastion](../../bastion/bastion-overview.md), or [Cloud PC](/graph/cloudpc-concept-overview).
+
+* Billing management applications such as Azure EA portal or MCA billing accounts aren't represented as cloud applications for Conditional Access targeting. As a compensating control, define separate administration accounts and target Conditional Access policies to those accounts using an "All Apps" condition.
+
+## Identity Governance
+
+### Privileged roles
+
+Below are some identity governance principles to consider across all the tenant configurations for isolation.
+
+* **No standing access** - No human identities should have standing access to perform privileged operations in isolated environments. Azure Role-based access control (RBAC) integrates with [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM). PIM provides just-in-time activation determined by security gates such as Multi-Factor Authentication, approval workflow, and limited duration.
+
+* **Number of admins** - Organizations should define the minimum and maximum number of humans holding a privileged role to mitigate business continuity risks. With too few privileged role holders, there may not be enough time-zone coverage; mitigate security risks by having as few administrators as possible, following the least-privilege principle.
+
+* **Limit privileged access** - Create cloud-only and role-assignable groups for highly privileged or sensitive roles. This offers protection of the assigned users and the group object (see the sketch after this list).
+
+* **Least privileged access** - Identities should only be granted the permissions needed to perform the privileged operations per their role in the organization.
+
+   * Azure RBAC [custom roles](../../role-based-access-control/custom-roles.md) allow designing least privileged roles based on organizational needs. We recommend that custom role definitions are authored or reviewed by specialized security teams to mitigate the risk of unintended excessive privileges. Authoring of custom roles can be audited through [Azure Policy](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json).
+
+ * To mitigate accidental use of roles that aren't meant for wider use in the organization, use Azure Policy to define explicitly which role definitions can be used to assign access. Learn more from this [GitHub Sample](https://github.com/Azure/azure-policy/tree/master/samples/Authorization/allowed-role-definitions).
+
+* **Privileged access from secure workstations** - All privileged access should occur from secure, locked down devices. Separating these sensitive tasks and accounts from daily use workstations and devices protects privileged accounts from phishing attacks, application and OS vulnerabilities, various impersonation attacks, and credential theft attacks such as keystroke logging, [Pass-the-Hash](https://aka.ms/AzureADSecuredAzure/27a), and Pass-the-Ticket.
+
+Some approaches you can use for [using secure devices as part of your privileged access story](/security/compass/privileged-access-devices) include using Conditional Access policies to [target or exclude specific devices](../conditional-access/concept-condition-filters-for-devices.md), using [Azure Virtual desktop](../../virtual-desktop/environment-setup.md), [Azure Bastion](../../bastion/bastion-overview.md), or [Cloud PC](/graph/cloudpc-concept-overview), or creating Azure-managed workstations or privileged access workstations.
+
+* **Privileged role process guardrails** - Organizations must define processes and technical guardrails to ensure that privileged operations can be executed whenever needed while complying with regulatory requirements. Examples of guardrails criteria include:
+
+ * Qualification of humans with privileged roles (for example, full-time employee/vendor, clearance level, citizenship)
+
+   * Explicit incompatibility of roles (also known as separation of duties). For example, teams that hold Azure AD directory roles shouldn't also manage Azure Resource Manager privileged roles.
+
+   * Whether direct user assignment or group assignment is preferred for each role.
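+
+A minimal sketch of creating the cloud-only, role-assignable group recommended in the *Limit privileged access* item above, using Microsoft Graph. All names and credential values are placeholders; note that `isAssignableToRole` can only be set at creation time, and the caller needs a privileged role plus appropriate Graph permissions (for example, Group.ReadWrite.All).
+
+```python
+import requests
+from azure.identity import ClientSecretCredential
+
+credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
+token = credential.get_token("https://graph.microsoft.com/.default").token
+
+group = {
+    "displayName": "Privileged-Azure-Admins",
+    "mailNickname": "privileged-azure-admins",
+    "mailEnabled": False,
+    "securityEnabled": True,
+    "isAssignableToRole": True,  # can only be set when the group is created
+}
+response = requests.post(
+    "https://graph.microsoft.com/v1.0/groups",
+    headers={"Authorization": f"Bearer {token}"},
+    json=group,
+)
+response.raise_for_status()
+print("Created role-assignable group", response.json()["id"])
+```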
+
+### Resource access
+
+* **Attestation** - Identities that hold privileged roles should be reviewed periodically to keep membership current and justified. [Azure AD Access Reviews](../governance/access-reviews-overview.md) integrate with Azure RBAC roles, group memberships and Azure AD B2B external identities.
+
+* **Lifecycle** - Privileged operations might require access to multiple resources such as line-of-business applications, SaaS applications, and Azure resource groups and subscriptions. [Azure AD Entitlement Management](../governance/entitlement-management-overview.md) allows defining access packages that represent a set of resources assigned to users as a unit, with a validity period, approval workflows, and so on.
+
+### Governance challenges
+
+* The Azure Enterprise (Azure EA) Agreement portal doesn't integrate with Azure RBAC or Conditional Access. The mitigation for this is to use dedicated administration accounts that can be targeted with policies and additional monitoring.
+
+* The Azure EA Enterprise portal doesn't provide an audit log. To mitigate this, consider an automated governed process to provision subscriptions with the considerations described above and use dedicated EA accounts and audit the authentication logs.
+
+* [Microsoft Customer Agreement](../../cost-management-billing/understand/mca-overview.md) (MCA) roles don't integrate natively with PIM. To mitigate this, use dedicated MCA accounts and monitor usage of these accounts.
+
+* Monitoring IAM assignments outside Azure AD PIM isn't automated through Azure Policies. The mitigation is to not grant Subscription Owner or User Access Administrator roles to engineering teams. Instead create groups assigned to least privileged roles such as Contributor and delegate the management of those groups to engineering teams.
+
+* Privileged roles in Azure AD B2C tenants aren't integrated with Azure AD PIM. The mitigation is to create dedicated accounts in the organization's Azure AD tenant, onboard them in the Azure AD B2C tenant and apply conditional access policies to these dedicated administration accounts.
+
+* Azure AD B2C tenant privileged roles aren't integrated with Azure AD Access Reviews. The mitigation is to create dedicated accounts in the organization's Azure AD tenant, add these accounts to a group and perform regular access reviews on this group.
+
+* There are no technical controls to subordinate the creation of tenants to an organization; however, the activity is recorded in the audit log. Onboarding to the billing plane is a compensating control at the gate, and it needs to be complemented with monitoring and alerts.
+
+* There's no out-of-the-box product to implement the subscription provisioning workflow recommended above. Organizations need to implement their own workflow.
+
+## Tenant and subscription lifecycle management
+
+### Tenant lifecycle
+
+* We recommend implementing a process to request a new corporate Azure AD tenant. The process should account for:
+
+ * Business justification to create it. Creating a new Azure AD tenant will increase complexity significantly, so it's key to ascertain if a new tenant is necessary.
+
+ * The Azure cloud in which it should be created (for example, Commercial, Government, etc.).
+
+   * Whether the tenant is production or non-production
+
+ * Directory data residency requirements
+
+ * Global Administrators who will manage it
+
+ * Training and understanding of common security requirements.
+
+* Upon approval, the Azure AD tenant will be created, configured with necessary baseline controls, and onboarded in the billing plane, monitoring, etc.
+
+* Regular review of the Azure AD tenants in the billing plane needs to be implemented to detect and discover tenant creation outside the governed process. Refer to the *Inventory and Visibility* section of this document for further details.
+
+* Azure AD B2C tenant creation can be controlled using Azure Policy. The policy executes when an Azure subscription is associated with the B2C tenant (a prerequisite for billing). Customers can limit the creation of Azure AD B2C tenants to specific management groups.
+
+### Subscription lifecycle
+
+Below are some considerations when designing a governed subscription lifecycle process:
+
+* Define a taxonomy of applications and solutions that require Azure resources. All teams requesting subscriptions should supply their "product identifier" when requesting subscriptions. This information taxonomy will determine:
+
+ * Azure AD tenant to provision the subscription
+
+ * Azure EA account to use for subscription creation
+
+ * Naming convention
+
+ * Management group assignment
+
+ * Other aspects such as tagging, cross-charging, product-view usage, etc.
+
+* Don't allow ad-hoc subscription creation through the portals or by other means. Instead, consider managing [subscriptions programmatically using Azure Resource Manager](../../cost-management-billing/manage/programmatically-create-subscription.md) and pulling consumption and billing reports [programmatically](/rest/api/consumption/). This can help limit subscription provisioning to authorized users and enforce your policy and taxonomy goals (see the sketch after this list). Guidance on following [AZOps principles](https://github.com/azure/azops/wiki/introduction) can be used to help create a practical solution.
+
+* When a subscription is provisioned, create Azure AD cloud groups to hold standard Azure Resource Manager Roles needed by application teams such as Contributor, Reader and approved custom roles. This enables you to manage Azure RBAC role assignments with governed privileged access at scale.
+
+ 1. Configure the groups to become eligible for Azure RBAC roles using Azure AD PIM with the corresponding controls such as activation policy, access reviews, approvers, etc.
+
+ 1. Then [delegate the management of the groups](../enterprise-users/groups-self-service-management.md) to solution owners.
+
+ 1. As a guardrail, don't assign product owners to User Access Administrator or Owner roles to avoid inadvertent direct assignment of roles outside Azure AD PIM, or potentially changing the subscription to a different tenant altogether.
+
+ 1. For customers who choose to enable cross-tenant subscription management in non-production tenants through Azure Lighthouse, make sure that the same access policies from the production privileged account (for example, privileged access only from [secured workstations](/security/compass/privileged-access-deployment)) are enforced when authenticating to manage subscriptions.
+
+* If your organization has pre-approved reference architectures, the subscription provisioning can be integrated with resource deployment tools such as [Azure Blueprints](../../governance/blueprints/overview.md) or [Terraform](https://www.terraform.io).
+
+* Given the tenant affinity to Azure Subscriptions, subscription provisioning should be aware of multiple identities for the same human actor (employee, partner, vendor, etc.) across multiple tenants and assign access accordingly.
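+
+The following is a hedged sketch of the programmatic subscription creation mentioned above, using the subscription alias REST API (`Microsoft.Subscription/aliases`). The alias name, billing scope, and workload values are placeholders that depend on your taxonomy and EA/MCA billing account structure.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+
+alias_name = "contoso-prod-app1"  # derived from your product taxonomy
+body = {
+    "properties": {
+        "displayName": "Contoso Prod App1",
+        "billingScope": "/providers/Microsoft.Billing/billingAccounts/<account>/enrollmentAccounts/<enrollment>",
+        "workload": "Production",  # or "DevTest" for non-production
+    }
+}
+response = requests.put(
+    f"https://management.azure.com/providers/Microsoft.Subscription/aliases/{alias_name}?api-version=2021-10-01",
+    headers={"Authorization": f"Bearer {token}"},
+    json=body,
+)
+response.raise_for_status()
+print(response.json()["properties"])
+```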
+
+### Azure AD B2C tenants
+
+* In an Azure AD B2C tenant, the built-in roles don't support PIM. To increase security, we recommend using Azure AD B2B collaboration to onboard the engineering teams managing Customer Identity Access Management (CIAM) from your Azure tenant, and assign them to Azure AD B2C privileged roles.
+
+* Following the emergency access guidelines for Azure AD described earlier, consider creating equivalent [emergency access accounts](../roles/security-emergency-access.md) in addition to the external administrators.
+
+* We recommend that the logical ownership of the underlying Azure subscription of the B2C tenant aligns with the CIAM engineering teams, in the same way that the rest of the Azure subscriptions are used for the B2C solutions.
+
+## Operations
+
+The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](https://docs.microsoft.com/azure/cloud-adoption-framework/manage/), [Azure Security Benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-ops-guide-ops) for detailed guidance to operate individual environments.
+
+### Cross-environment roles and responsibilities
+
+**Enterprise-wide SecOps architecture** - Members of operations and security teams from all environments in the organization should jointly define the following:
+
+* Principles to define when environments need to be created, consolidated, or deprecated.
+
+* Principles to define management group hierarchy on each environment.
+
+* Billing plane (EA portal / MCA) security posture, operational posture, and delegation approach.
+
+* Tenant creation process.
+
+* Enterprise application taxonomy.
+
+* Azure subscription provisioning process.
+
+* Isolation and administration autonomy boundaries and risk assessment across teams and environments.
+
+* Common baseline configuration and security controls (technical and compensating) and operational baselines to be used in all environments.
+
+* Common standard operational procedures and tooling that spans multiple environments (for example, monitoring, provisioning).
+
+* Agreed upon delegation of roles across multiple environments.
+
+* Segregation of duty across environments.
+
+* Common supply chain management for privileged workstations.
+
+* Naming conventions.
+
+* Cross-environment correlation mechanisms.
+
+**Tenant creation** - A specific team should own creating the tenant following standardized procedures defined by enterprise-wide SecOps architecture. This includes:
+
+* Underlying license provisioning (for example, Microsoft 365).
+
+* Onboarding to corporate billing plan (for example, Azure EA or MCA).
+
+* Creation of Azure management group hierarchy.
+
+* Configuration of management policies for various perimeters including identity, data protection, Azure, etc.
+
+* Deployment of security stack per agreed upon cybersecurity architecture, including diagnostic settings, SIEM onboarding, CASB onboarding, PIM onboarding, etc.
+
+* Configuration of Azure AD roles based on agreed upon delegation.
+
+* Configuration and distribution of initial privileged workstations.
+
+* Provisioning emergency access accounts.
+
+* Configuration of identity provisioning stack.
+
+**Cross-environment tooling architecture** - Some tools such as identity provisioning and source control pipelines might need to work across multiple environments. These tools should be considered critical to the infrastructure and must be architected, designed, implemented, and managed as such. As a result, architects from all environments should be involved whenever cross-environment tools need to be defined.
+
+### Inventory and visibility
+
+**Azure subscription discovery** - For each discovered tenant, an Azure AD global administrator can [elevate access](../../role-based-access-control/elevate-access-global-admin.md) to gain visibility of all subscriptions in the environment. This elevation will assign the global administrator the User Access Administrator built-in role at the root management group.
+
+>[!NOTE]
+>This action is highly privileged and might give the admin access to subscriptions that hold extremely sensitive information if that data has not been properly isolated.
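+
+A minimal sketch of the elevate-access call described above. This is a highly privileged, audited action; the sketch assumes the caller is signed in to the Azure CLI as a Global Administrator.
+
+```python
+import requests
+from azure.identity import AzureCliCredential
+
+# Uses the signed-in administrator's CLI context, not an application identity.
+token = AzureCliCredential().get_token("https://management.azure.com/.default").token
+
+response = requests.post(
+    "https://management.azure.com/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-12",
+    headers={"Authorization": f"Bearer {token}"},
+)
+response.raise_for_status()  # returns 200 with an empty body on success
+```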
+
+**Enabling read access to discover resources** - Management groups enable RBAC assignment at scale across multiple subscriptions. Customers can grant a Reader role to a centralized IT team by configuring a role assignment in the root management group, which will propagate to all subscriptions in the environment.
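+
+As a hedged sketch of the assignment above, the following grants the built-in Reader role to a centralized IT team's group at the root management group (whose ID equals the tenant ID). The tenant and principal IDs are placeholders.
+
+```python
+import uuid
+import requests
+from azure.identity import AzureCliCredential
+
+tenant_id = "<tenant-id>"           # root management group ID equals the tenant ID
+principal_id = "<group-object-id>"  # centralized IT team's group
+reader_role_id = "acdd72a7-3385-48ef-bd42-f606fba81ae7"  # built-in Reader role
+
+scope = f"/providers/Microsoft.Management/managementGroups/{tenant_id}"
+token = AzureCliCredential().get_token("https://management.azure.com/.default").token
+
+response = requests.put(
+    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/roleAssignments/{uuid.uuid4()}?api-version=2022-04-01",
+    headers={"Authorization": f"Bearer {token}"},
+    json={
+        "properties": {
+            "roleDefinitionId": f"{scope}/providers/Microsoft.Authorization/roleDefinitions/{reader_role_id}",
+            "principalId": principal_id,
+        }
+    },
+)
+response.raise_for_status()
+```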
+
+**Resource discovery** - After gaining resource Read access in the environment, [Azure Resource Graph](../../governance/resource-graph/overview.md) can be used to query resources in the environment.
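+
+A minimal sketch of that query step, assuming the `azure-mgmt-resourcegraph` package; the subscription ID is a placeholder, and the caller only sees resources it has read access to.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resourcegraph import ResourceGraphClient
+from azure.mgmt.resourcegraph.models import QueryRequest
+
+client = ResourceGraphClient(DefaultAzureCredential())
+
+# Inventory resource types across the subscriptions in scope.
+request = QueryRequest(
+    subscriptions=["<subscription-id>"],
+    query="Resources | summarize count() by type | order by count_ desc",
+)
+result = client.resources(request)
+print(result.data)
+```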
+
+### Logging and monitoring
+
+**Central security log management** - Ingest logs from each environment in a [centralized way](/security/benchmark/azure/security-control-logging-monitoring), following consistent best practices across environments (for example, diagnostics settings, log retention, SIEM ingestion, etc.). [Azure Monitor](../../azure-monitor/overview.md) can be used to ingest logs from different sources such as endpoint devices, network, operating systems' security logs, etc.
+
+Detailed information on using automated or manual processes and tools to monitor logs as part of your security operations is available in the [Azure Active Directory security operations guide](../fundamentals/security-operations-introduction.md).
+
+Some environments might have regulatory requirements that limit which data (if any) can leave a given environment. If centralized monitoring across environments isn't possible, teams should have operational procedures to correlate activities of identities across environments for auditing and forensics purposes, such as cross-environment lateral movement attempts. It's recommended that the unique object identifiers of human identities belonging to the same person be discoverable, potentially as part of the identity provisioning systems.
+
+The log strategy must include the following Azure AD logs for each tenant used in the organization:
+
+* Sign-in activity
+
+* Audit logs
+
+* Risk events
+
+Azure AD provides [Azure Monitor integration](../reports-monitoring/concept-activity-logs-azure-monitor.md) for the sign-in activity log and audit logs. Risk events can be ingested through [Microsoft Graph API](/graph/tutorial-riskdetection-api).
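+
+A hedged sketch of that ingestion path: polling risk detections per tenant through Microsoft Graph. The tenant list and app registration values are placeholders; each app needs the IdentityRiskEvent.Read.All application permission.
+
+```python
+import requests
+from azure.identity import ClientSecretCredential
+
+for tenant_id in ["<corporate-tenant-id>", "<sandbox-tenant-id>"]:
+    credential = ClientSecretCredential(tenant_id, "<client-id>", "<client-secret>")
+    token = credential.get_token("https://graph.microsoft.com/.default").token
+    response = requests.get(
+        "https://graph.microsoft.com/v1.0/identityProtection/riskDetections",
+        headers={"Authorization": f"Bearer {token}"},
+    )
+    response.raise_for_status()
+    for detection in response.json()["value"]:
+        print(tenant_id, detection["riskEventType"], detection["riskLevel"])
+```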
+
+The following diagram shows the different data sources that need to be incorporated as part of the monitoring strategy:
+
+![Diagram that shows monitoring strategy.](media/secure-with-azure-ad-best-practices/human-identity-provisioning.png)
+
+Azure AD B2C tenants can be [integrated with Azure Monitor](../../active-directory-b2c/azure-monitor.md). We recommend monitoring of Azure AD B2C using the same criteria discussed above for Azure AD.
+
+Subscriptions that have enabled cross-tenant management with Azure Lighthouse can enable cross-tenant monitoring if the logs are collected by Azure Monitor. The corresponding Log Analytics workspaces can reside in the resource tenant and can be analyzed centrally in the managing tenant using Azure Monitor workbooks. To learn more, check [Monitor delegated resources at scale - Azure Lighthouse](../../lighthouse/how-to/monitor-at-scale.md).
+
+### Hybrid infrastructure OS security logs
+
+All hybrid identity infrastructure OS logs should be archived and carefully monitored as a Tier 0 system, given the surface area implications. This includes:
+
+* AD FS servers and Web Application Proxy
+
+* Azure AD Connect
+
+* Application Proxy Agents
+
+* Password write-back agents
+
+* Password Protection Gateway machines
+
+* NPS servers with the Azure AD Multi-Factor Authentication RADIUS extension
+
+[Azure AD Connect Health](../hybrid/whatis-azure-ad-connect.md) must be deployed to monitor identity synchronization and federation (when applicable) for all environments.
+
+**Log storage retention** - All environments should have a cohesive log storage retention strategy, design, and implementation to facilitate a consistent toolset (for example, SIEM systems such as Azure Sentinel), common queries, investigation, and forensics playbooks. Azure Policy can be used to set up diagnostic settings.
+
+**Monitoring and log reviewing** - The operational tasks around identity monitoring should be consistent and have owners in each environment. As described above, strive to consolidate these responsibilities across environments to the extent allowed by regulatory compliance and isolation requirements.
+
+The following scenarios must be explicitly monitored and investigated:
+
+* **Suspicious activity** - All [Azure AD risk events](../identity-protection/overview-identity-protection.md) should be monitored for suspicious activity. All tenants should define the network [named locations](../conditional-access/location-condition.md) to avoid noisy detections on location-based signals. [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) is natively integrated with Azure Security Center. It's recommended that any risk detection investigation includes all the environments the identity is provisioned in (for example, if a human identity has an active risk detection in the corporate tenant, the team operating the customer-facing tenant should also investigate the activity of the corresponding account in that environment).
+
+* **User entity behavioral analytics (UEBA) alerts** - UEBA should be used to get insightful information based on anomaly detection. [Microsoft Defender for Cloud Apps](/security/business/siem-and-xdr/microsoft-defender-cloud-apps?rtc=1) provides [UEBA in the cloud](/defender-cloud-apps/tutorial-ueba). Customers can integrate [on-premises UEBA from Microsoft Defender for Identity](/defender-cloud-apps/mdi-integration). Defender for Cloud Apps reads signals from Azure AD Identity Protection.
+
+* **Emergency access accounts activity** - Any access using [emergency access accounts](../fundamentals/security-operations-privileged-accounts.md) should be monitored and [alerts](../users-groups-roles/directory-emergency-access.md) created for investigation (see the sketch after this list). This monitoring must include:
+
+ * Sign-ins
+
+ * Credential management
+
+ * Any updates on group memberships
+
+ * Application Assignments
+
+* **Billing management accounts** - Given the sensitivity of accounts with billing management roles in Azure EA or MCA, and their significant privilege, it's recommended to monitor and alert on:
+
+   * Sign-in attempts by accounts with billing roles.
+
+ * Any attempt to authenticate to applications other than the EA Portal.
+
+ * Any attempt to authenticate to applications other than Azure Resource Management if using dedicated accounts for MCA billing tasks.
+
+ * Assignment to Azure resources using dedicated accounts for MCA billing tasks.
+
+* **Privileged role activity** - Configure and review security [alerts generated by Azure AD PIM](https://docs.microsoft.com/azure/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts). If locking down direct RBAC assignments isn't fully enforceable with technical controls (for example, the Owner role has to be granted to product teams to do their job), then monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly to access the subscription with Azure RBAC.
+
+* **Classic role assignments** - Organizations should use the modern Azure RBAC role infrastructure instead of the classic roles. As a result, the following events should be monitored:
+
+ * Assignment to classic roles at the subscription level
+
+* **Tenant-wide configurations** - Any change to a tenant-wide configuration should generate an alert in the system, including:
+
+ * Updating Custom Domains
+
+ * Updating branding
+
+ * Azure AD B2B allow/block list
+
+ * Azure AD B2B allowed identity providers (SAML IDPs through direct federation or Social Logins)
+
+ * Conditional Access Policies changes
+
+* **Application and service principal objects**
+
+ * New Applications / Service principals that might require Conditional Access policies
+
+ * Application Consent activity
+
+* **Management group activity** - The following identity aspects of management groups should be monitored:
+
+ * RBAC role assignments at the MG
+
+ * Azure Policies applied at the MG
+
+ * Subscriptions moved between MGs
+
+ * Any changes to security policies to the Root MG
+
+* **Custom roles**
+
+ * Updates of the custom role definitions
+
+ * New custom roles created
+
+* **Custom governance rules** - If your organization has established separation of duties rules (for example, the holder of a Global Administrator role can't be an owner or contributor of subscriptions), create alerts or configure periodic reviews to detect violations.
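+
+The following is a minimal sketch of the emergency access account monitoring called out earlier in this list: polling the sign-in log through Microsoft Graph for any activity by a break-glass account. The IDs are placeholders; the app needs the AuditLog.Read.All application permission.
+
+```python
+import requests
+from azure.identity import ClientSecretCredential
+
+break_glass_id = "<emergency-account-object-id>"
+credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
+token = credential.get_token("https://graph.microsoft.com/.default").token
+
+response = requests.get(
+    "https://graph.microsoft.com/v1.0/auditLogs/signIns",
+    headers={"Authorization": f"Bearer {token}"},
+    params={"$filter": f"userId eq '{break_glass_id}'"},
+)
+response.raise_for_status()
+for sign_in in response.json()["value"]:
+    # Any entry here warrants an immediate investigation.
+    print(sign_in["createdDateTime"], sign_in["ipAddress"], sign_in["appDisplayName"])
+```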
+
+**Other monitoring considerations** - Azure subscriptions that contain resources used for Log Management should be considered as critical infrastructure (Tier 0) and locked down to the Security Operations team of the corresponding environment. Consider using tools such as Azure Policy to enforce additional controls to these subscriptions.
+
+### Operational tools
+
+**Cross-environment** tooling design considerations:
+
+* Whenever possible, operational tools that will be used across multiple tenants should be designed to run as an Azure AD multi-tenant application to avoid redeployment of multiple instances on each tenant and avoid operational inefficiencies. The implementation should include authorization logic to ensure that isolation between users and data is preserved.
+
+* Add alerts and detections to monitor any cross-environment automation (for example, identity provisioning) and threshold limits for fail-safes. For example, you may want an alert if deprovisioning of user accounts reaches a specific level, as it may indicate a bug or operational error that could have broad impact.
+
+* Any automation that orchestrates cross-environment tasks should be operated as a highly privileged system. This system should be homed in the highest-security environment and pull from outside sources if data from other environments is required. Data validation and thresholds need to be applied to maintain system integrity. A common cross-environment task is identity lifecycle management to remove identities from all environments for a terminated employee.
+
+**IT service management tools** - Organizations using IT Service Management (ITSM) systems such as ServiceNow should configure [Azure AD PIM role activation settings](../privileged-identity-management/pim-how-to-change-default-settings.md) to require a ticket number as part of the activation process.
+
+Similarly, Azure Monitor can be integrated with ITSM systems through the [IT Service Management Connector](../../azure-monitor/alerts/itsmc-overview.md).
+
+**Operational practices** - Minimize operational activities that require human identities to access the environment directly. Instead, model them as Azure Pipelines that execute common operations (for example, add capacity to a PaaS solution, run diagnostics, etc.) and reserve direct access to the Azure Resource Manager interfaces for "break glass" scenarios.
+
+### Operations challenges
+
+* Monitoring of service principal activity is limited in some scenarios.
+
+* Azure AD PIM alerts don't have an API. The mitigation is to have a regular review of those PIM alerts.
+
+* Azure EA Portal doesn't provide monitoring capabilities. The mitigation is to have dedicated administration accounts and monitor the account activity.
+
+* MCA doesn't provide audit logs for billing tasks. The mitigation is to have dedicated administration accounts and monitor the account activity.
+
+* Some Azure services needed to operate the environment must be redeployed and reconfigured across environments because they can't be multi-tenant or multi-cloud.
+
+* There's no full API coverage across Microsoft Online Services to fully achieve infrastructure as code. The mitigation is to use APIs as much as possible and use portals for the remainder. This [open-source initiative](https://microsoft365dsc.com/) can help you determine an approach that might work for your environment.
+
+* There's no programmatic capability to discover resource tenants that have delegated subscription access to identities in a managing tenant. For example, if an administrator enabled a security group in the contoso.com tenant to manage subscriptions in the fabrikam.com tenant, administrators in contoso.com don't have an API to discover that this delegation took place.
+
+* Specific account activity monitoring (for example, break-glass account, billing management account) isn't provided out of the box. The mitigation is for customers to create their own alert rules.
+
+* Tenant-wide configuration monitoring isn't provided out of the box. The mitigation is for customers to create their own alert rules.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-with-azure-ad-introduction.md)
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+
+* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+
+* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
active-directory Secure With Azure Ad Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-fundamentals.md
+
+Title: Fundamentals of securing with Azure Active Directory
+description: Fundamentals of securing your tenants in Azure Active Directory.
+Last updated: 7/5/2022
+# Azure Active Directory fundamentals
+
+Azure Active Directory (Azure AD) provides an identity and access boundary for Azure resources and trusting applications. Most environment-separation requirements can be fulfilled with delegated administration in a single Azure AD tenant. This configuration reduces management overhead of your systems. However, some specific cases, for example complete resource and identity isolation, require multiple tenants.
+
+You must determine your environment separation architecture based on your needs. Areas to consider include:
+
+* **Resource separation**. If a resource can change directory objects such as user objects, and the change would interfere with other resources, the resource may need to be isolated in a multi-tenant architecture.
+
+* **Configuration separation**. Tenant-wide configurations affect all resources. The effect of some tenant-wide configurations can be scoped with conditional access (CA) policies and other methods. If you have a need for different tenant configurations that can't be scoped with CA policies, you may need a multi-tenant architecture.
+
+* **Administrative separation**. You can delegate the administration of management groups, subscriptions, resource groups, resources, and some policies within a single tenant. A Global Administrator always has access to everything within the tenant. If you need to ensure that the environment doesn't share administrators with another environment, you'll need a multi-tenant architecture.
+
+To stay secure, you must follow best practices for identity provisioning, authentication management, identity governance, lifecycle management, and operations consistently across all tenants.
+
+## Terminology
+
+This list of terms is commonly associated with Azure AD and relevant to this content:
+
+**Azure AD tenant**. A dedicated and trusted instance of Azure AD that is automatically created when your organization signs up for a Microsoft cloud service subscription. Examples of subscriptions include Microsoft Azure, Microsoft Intune, or Microsoft 365. An Azure AD tenant generally represents a single organization or security boundary. The Azure AD tenant includes the users, groups, devices, and applications used to perform identity and access management (IAM) for tenant resources.
+
+**Environment**. In the context of this content, an environment is a collection of Azure subscriptions, Azure resources, and applications that are associated with one or more Azure AD tenants. The Azure AD tenant provides the identity control plane to govern access to these resources.
+
+**Production environment**. In the context of this content, a production environment is the live environment with the infrastructure and services that end users directly interact with. For example, a corporate or customer-facing environment.
+
+**Non-production environment**. In the context of this content, a non-production environment refers to an environment used for:
+
+* Development
+
+* Testing
+
+* Lab purposes
+
+Non-production environments are commonly referred to as sandbox environments.
+
+**Identity**. An identity is a directory object that can be authenticated and authorized for access to a resource. Identity objects exist for human identities and non-human identities. Non-human entities include:
+
+* Application objects
+
+* Workload identities (formerly described as service principals)
+
+* Managed identities
+
+* Devices
+
+**Human identities** are user objects that generally represent people in an organization. These identities are either created and managed directly in Azure AD or are synchronized from an on-premises Active Directory to Azure AD for a given organization. These types of identities are referred to as **local identities**. There can also be user objects invited from a partner organization or a social identity provider using [Azure AD B2B collaboration](https://docs.microsoft.com/azure/active-directory/external-identities/what-is-b2b). In this content, we refer to these types of identity as **external identities**.
+
+**Non-human identities** include any identity not associated with a human. This type of identity is an object such as an application that requires an identity to run. In this content, we refer to this type of identity as a **workload identity**. Various terms are used to describe this type of identity, including [application objects and service principals](https://docs.microsoft.com/azure/marketplace/manage-aad-apps).
+
+* **Application object**. An Azure AD application is defined by its one and only application object. The object resides in the Azure AD tenant where the application was registered. The tenant is known as the application's "home" tenant.
+
+ * **Single-tenant** applications are created to only authorize identities coming from the "home" tenant.
+
+ * **Multi-tenant** applications allow identities from any Azure AD tenant to authenticate.
+
+* **Service principal object**. Although there are [exceptions](https://docs.microsoft.com/azure/marketplace/manage-aad-apps), application objects can be considered the *definition* of an application. Service principal objects can be considered an instance of an application. Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
+
+**Service principal objects** are also directory identities that can perform tasks independently from human intervention. The service principal defines the access policy and permissions for a user or application in the Azure AD tenant. This mechanism enables core features such as authentication of the user or application during sign-in and authorization during resource access.
+
+Azure AD allows application and service principal objects to authenticate with a password (also known as an application secret), or with a certificate. The use of passwords for service principals is discouraged and [we recommend using a certificate](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal) whenever possible.
+
+* **Managed identities for Azure resources**. Managed identities are special service principals in Azure AD. This type of service principal can be used to authenticate against services that support Azure AD authentication without needing to store credentials in your code or handle secrets management. For more information, see [What are managed identities for Azure resources?](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview)
+
+* **Device identity**: A device identity is an identity that verifies that the device being used in the authentication flow has undergone a process to attest that the device is legitimate and meets the technical requirements specified by the organization. Once the device has successfully completed this process, the associated identity can be used to further control access to an organization's resources. With Azure AD, devices can authenticate with a certificate.
+
+Some legacy scenarios required a human identity to be used in *non-human* scenarios, for example, when service accounts used in on-premises applications such as scripts or batch jobs require access to Azure AD. This pattern isn't recommended; we recommend you use [certificates](../authentication/concept-certificate-based-authentication-technical-deep-dive.md). However, if you do use a human identity with a password for authentication, protect your Azure AD accounts with [Azure Active Directory Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
+
+**Hybrid identity**. A hybrid identity is an identity that spans on-premises and cloud environments. This provides the benefit of being able to use the same identity to access on-premises and cloud resources. The source of authority in this scenario is typically an on-premises directory, and the identity lifecycle around provisioning, de-provisioning and resource assignment is also driven from on-premises. For more information, see [Hybrid identity documentation](https://docs.microsoft.com/azure/active-directory/hybrid/).
+
+**Directory objects**. An Azure AD tenant contains the following common objects:
+
+* **User objects** represent human identities and non-human identities for services that currently don't support service principals. User objects contain attributes that have the required information about the user including personal details, group memberships, devices, and roles assigned to the user.
+
+* **Device objects** represent devices that are associated with an Azure AD tenant. Device objects contain attributes that have the required information about the device. This includes the operating system, associated user, compliance state, and the nature of the association with the Azure AD tenant. This association can take multiple forms depending on the nature of the interaction and trust level of the device.
+
+   * **Hybrid Domain Joined**. Devices that are owned by the organization and [joined](../devices/concept-azure-ad-join-hybrid.md) to both the on-premises Active Directory and Azure AD. Typically a device purchased and managed by an organization using System Center Configuration Manager.
+
+ * **Azure AD Domain Joined**. Devices that are owned by the organization and joined to the organization's Azure AD tenant. Typically a device purchased and managed by an organization that is joined to Azure AD and managed by a service such as [Microsoft Intune](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/microsoft-intune).
+
+ * **Azure AD Registered**. Devices not owned by the organization, for example, a personal device, used to access company resources. Organizations may require the device be enrolled via [Mobile Device Management (MDM)](https://www.microsoft.com/itshowcase/mobile-device-management-at-microsoft), or enforced through [Mobile Application Management (MAM)](https://docs.microsoft.com/office365/enterprise/office-365-client-support-mobile-application-management) without enrollment to access resources. This capability can be provided by a service such as Microsoft Intune.
+
+* **Group objects** contain objects for the purposes of assigning resource access, applying controls, or configuration. Group objects contain attributes that have the required information about the group including the name, description, group members, group owners, and the group type. Groups in Azure AD take multiple forms based on an organization's requirements and can be mastered in Azure AD or synchronized from on-premises Active Directory Domain Services (AD DS).
+
+ * **Assigned groups**. In Assigned groups, users are added to or removed from the group manually, synchronized from on-premises AD DS, or updated as part of an automated scripted workflow. An assigned group can be synchronized from on-premises AD DS or can be homed in Azure AD.
+
+ * **Dynamic membership groups**. In Dynamic groups, users are assigned to the group automatically based on defined attributes. This allows group membership to be dynamically updated based on data held within the user objects. A dynamic group can only be homed in Azure AD.
+
+**Microsoft Account (MSA)**. You can create Azure subscriptions and tenants using Microsoft Accounts (MSA). A Microsoft Account is a personal account (as opposed to an organizational account) and is commonly used by developers and for trial scenarios. When used, the personal account is always made a guest in an Azure AD tenant.
+
+## Azure AD functional areas
+
+These are the functional areas provided by Azure AD that are relevant to isolated environments. To learn more about the capabilities of Azure AD, see [What is Azure Active Directory?](../fundamentals/active-directory-whatis.md).
+
+### Authentication
+
+**Authentication**. Azure AD provides support for authentication protocols compliant with open standards such as Open ID Connect, OAuth and SAML. Azure AD also provides capabilities to allow organizations to federate existing on-premises identity providers such as Active Directory Federation Services (AD FS) to authenticate access to Azure AD integrated applications.
+
+Azure AD provides industry-leading strong authentication options that organizations can use to secure access to resources. Azure Active Directory Multi-Factor Authentication, device authentication, and passwordless capabilities allow organizations to deploy strong authentication options that suit their workforce's requirements.
+
+**Single sign-on (SSO)**. With single sign-on, users sign in once with one account to access all resources that trust the directory such as domain-joined devices, company resources, software as a service (SaaS) applications, and all Azure AD integrated applications. For more information, see [single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
+
+### Authorization
+
+**Resource access assignment**. Azure AD provides and secures access to resources. Assigning access to a resource in Azure AD can be done in two ways:
+
+* **User assignment**: The user is directly assigned access to the resource and the appropriate role or permission is assigned to the user.
+
+* **Group assignment**: A group containing one or more users is assigned to the resource and the appropriate role or permission is assigned to the group.
+
+**Application access policies**. Azure AD provides capabilities to further control and secure access to your organization's applications.
+
+**Conditional Access**. Azure AD Conditional Access policies are tools to bring user and device context into the authorization flow when accessing Azure AD resources. Organizations should explore use of Conditional Access policies to allow, deny, or enhance authentication based on user, risk, device, and network context. For more information, see the [Azure AD Conditional Access documentation](https://docs.microsoft.com/azure/active-directory/conditional-access/).
+
+**Azure AD Identity Protection**. This feature enables organizations to automate the detection and remediation of identity-based risks, investigate risks, and export risk detection data to third-party utilities for further analysis. For more information, see [overview on Azure AD Identity Protection](../identity-protection/overview-identity-protection.md).
+
+### Administration
+
+**Identity management**. Azure AD provides tools to manage the lifecycle of user, group, and device identities. [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) enables organizations to extend their current, on-premises identity management solution to the cloud. Azure AD Connect manages the provisioning, de-provisioning, and updates to these identities in Azure AD.
+
+Azure AD also provides a portal and the Microsoft Graph API to allow organizations to manage identities or integrate Azure AD identity management into existing workflows or automation. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](/graph/use-the-api).
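+
+A hedged sketch of integrating identity management into automation through Microsoft Graph, as described above: creating a user with an initial password. All values are placeholders; the app registration needs the User.ReadWrite.All application permission.
+
+```python
+import requests
+from azure.identity import ClientSecretCredential
+
+credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
+token = credential.get_token("https://graph.microsoft.com/.default").token
+
+new_user = {
+    "accountEnabled": True,
+    "displayName": "Test User",
+    "mailNickname": "testuser",
+    "userPrincipalName": "testuser@contoso.onmicrosoft.com",
+    "passwordProfile": {
+        "forceChangePasswordNextSignIn": True,
+        "password": "<generated-initial-password>",
+    },
+}
+response = requests.post(
+    "https://graph.microsoft.com/v1.0/users",
+    headers={"Authorization": f"Bearer {token}"},
+    json=new_user,
+)
+response.raise_for_status()
+print("Created user", response.json()["id"])
+```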
+
+**Device management**. Azure AD is used to manage the lifecycle and integration with cloud and on-premises device management infrastructures. It also is used to define policies to control access from cloud or on-premises devices to your organizational data. Azure AD provides the lifecycle services of devices in the directory and the credential provisioning to enable authentication. It also manages a key attribute of a device in the system: the level of trust. This detail is important when designing a resource access policy. For more information, see [Azure AD Device Management documentation](https://docs.microsoft.com/azure/active-directory/devices/).
+
+**Configuration management**. Azure AD has service elements that need to be configured and managed to ensure the service is configured to an organization's requirements. These elements include domain management, SSO configuration, and application management to name but a few. Azure AD provides a portal and the Microsoft Graph API to allow organizations to manage these elements or integrate into existing processes. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](https://docs.microsoft.com/graph/use-the-api).
+
+### Governance
+
+**Identity lifecycle**. Azure AD provides capabilities to create, retrieve, delete, and update identities in the directory, including external identities. Azure AD also [provides services to automate the identity lifecycle](../app-provisioning/how-provisioning-works.md) to ensure it's maintained in line with your organization's needs. For example, using Access Reviews to remove external users who haven't signed in for a specified period.
+
+**Reporting and analytics**. An important aspect of identity governance is visibility into user actions. Azure AD provides insights into your environment's security and usage patterns. These insights include detailed information on:
+
+* What your users access
+
+* Where they access it from
+
+* The devices they use
+
+* Applications used to access
+
+Azure AD also provides information on the actions that are being performed within Azure AD, and reports on security risks. For more information, see [Azure Active Directory reports and monitoring](https://docs.microsoft.com/azure/active-directory/reports-monitoring/).
+
+**Auditing**. Auditing provides traceability through logs for all changes done by specific features within Azure AD. Examples of activities found in audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles, and policies. Reporting in Azure AD enables you to audit sign-in activities, risky sign-ins, and users flagged for risk. For more information, see [Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md).
+
+**Access certification**. Access certification is the process to prove that a user is entitled to have access to a resource at a point in time. Azure AD Access Reviews continually review the memberships of groups or applications and provide insight to determine whether access is required or should be removed. This enables organizations to effectively manage group memberships, access to enterprise applications, and role assignments to make sure only the right people have continued access. For more information, see [What are Azure AD access reviews?](../governance/access-reviews-overview.md)
+
+**Privileged access**. [Azure AD Privileged Identity Management](../privileged-identity-management/pim-configure.md) (PIM) provides time-based and approval-based role activation to mitigate the risks of excessive, unnecessary, or misused access permissions to Azure resources. It's used to protect privileged accounts by lowering the exposure time of privileges and increasing visibility into their use through reports and alerts.
+
+### Self-service management
+
+**Credential registration**. Azure AD provides capabilities to manage all aspects of user identity lifecycle and self-service capabilities to reduce the workload of an organization's helpdesk.
+
+**Group management**. Azure AD provides capabilities that enable users to request membership in a group for resource access and to create groups that can be used for securing resources or collaboration. These capabilities can be controlled by the organization so that appropriate controls are put in place.
+
+### Consumer Identity and Access Management (IAM)
+
+**Azure AD B2C**. Azure AD B2C is a service that can be enabled in an Azure subscription to provide identities to consumers for your organization's customer-facing applications. This is a separate island of identity and these users don't appear in the organization's Azure AD tenant. Azure AD B2C is managed by administrators in the tenant associated with the Azure subscription.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-with-azure-ad-introduction.md)
+
+* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+
+* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+
+* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Secure With Azure Ad Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-introduction.md
+
+Title: Delegated administration to secure with Azure Active Directory
+description: Introduction to delegated administration and isolated environments in Azure Active Directory.
+Last updated: 7/5/2022
+# Introduction to delegated administration and isolated environments
+
+An Azure Active Directory (Azure AD) single-tenant architecture with delegated administration is often adequate for separating environments. As detailed in other sections of this article, Microsoft provides many tools to do this. However, there may be times when your organization requires a degree of isolation beyond what can be achieved in a single tenant.
+
+Before discussing specific architectures, it's important to understand:
+
+* How a typical single tenant works.
+
+* How administrative units in Azure AD work.
+
+* The relationships between Azure resources and Azure AD tenants.
+
+* Common requirements driving isolation.
+
+## Azure AD tenant as a security boundary
+
+An Azure AD tenant provides identity and access management (IAM) capabilities to applications and resources used by the organization.
+
+An identity is a directory object that can be authenticated and authorized for access to a resource. Identity objects exist for human identities and non-human identities. To differentiate between human and non-human identities, human identities are referred to as identities and non-human identities are referred to as workload identities. Non-human entities include application objects, service principals, managed identities, and devices. The terminology is inconsistent across the industry, but generally a workload identity is something you need for your software entity to authenticate with some system.
+
+Different terms are emerging across the IT industry to distinguish between the two:
+
+* **Identity** - The term identity originally described the Active Directory (AD) and Azure AD objects used by humans to authenticate. In this series of articles, identity refers to objects that represent humans.
+
+* **Workload identity** - In Azure Active Directory (Azure AD), workload identities are applications, service principals, and managed identities. The workload identity is used to authenticate and access other services and resources.
+
+For more information on workload identities, see [What are workload identities](../develop/workload-identities-overview.md).
+
+The Azure AD tenant is an identity security boundary that is under the control of Global Administrators. Within this security boundary, administration of subscriptions, management groups, and resource groups can be delegated to segment administrative control of Azure resources. While not directly interacting, these groupings depend on tenant-wide configurations of policies and settings, which are under the control of the Azure AD Global Administrators.
+
+Azure AD is used to grant objects representing identities access to applications and Azure resources. In that sense, both Azure resources and applications trusting Azure AD are resources that can be managed with Azure AD. In the following diagram, the Azure AD tenant boundary shows the Azure AD identity objects and the configuration tools. Below the directory are the resources that use the identity objects for identity and access management. Following best practices, the environment includes a test environment to validate the proper operation of IAM.
+
+![Diagram that shows the Azure AD tenant boundary.](media/secure-with-azure-ad-introduction/tenant-boundary.png)
+
+### Access to apps that use Azure AD
+
+Identities can be granted access to many types of applications. Examples include:
+
+* Microsoft productivity services such as Exchange Online, Microsoft Teams, and SharePoint Online
+
+* Microsoft IT services such as Microsoft Sentinel, Microsoft Intune, and Microsoft 365 Defender
+
+* Microsoft Developer tools such as Azure DevOps and Microsoft Graph API
+
+* SaaS solutions such as Salesforce and ServiceNow
+
+* On-premises applications integrated with hybrid access capabilities such as Azure AD Application Proxy
+
+* Custom in-house developed applications
+
+Applications that use Azure AD require directory objects to be configured and managed in the trusted Azure AD tenant. Examples of directory objects include application registrations, service principals, groups, and [schema attribute extensions](/graph/extensibility-overview).
+
+### Access to Azure resources
+
+Users, groups, and service principal objects (workload identities) in the Azure AD tenant are granted roles by using [Azure Role Based Access Control](../../role-based-access-control/overview.md) (RBAC) and [Azure attribute-based access control](../../role-based-access-control/conditions-overview.md) (ABAC).
+
+* Azure RBAC enables you to grant access based on a role assignment, which consists of a security principal, a role definition, and a scope.
+
+* Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. A role assignment condition is another check that you can optionally add to your role assignment to provide more fine-grained access control.
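+
+For example, a minimal RBAC role assignment with the Azure CLI might look like the following sketch; the group object ID, subscription ID, and resource group are hypothetical placeholders:
+
+```azurecli
+# Assign the built-in Reader role to a group at resource group scope.
+# The assignee object ID, subscription ID, and resource group are placeholders.
+az role assignment create \
+  --assignee "<group-object-id>" \
+  --role "Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
+```
+
+The scope determines how far the assignment reaches: a management group, a subscription, a resource group, or a single resource.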
+
+Azure resources, resource groups, subscriptions, and management groups are accessed by using these assigned RBAC roles. For example, the following diagram shows the distribution of administrative capability in Azure AD by using role-based access control.
+
+![Diagram that shows Azure AD role hierarchy.](media/secure-with-azure-ad-introduction/azure-ad-role-hierarchy.png)
+
+Azure resources that [support managed identities](../managed-identities-azure-resources/overview.md) can authenticate to other resources, be granted access, and be assigned roles within the Azure AD tenant boundary.
+
+Applications that use Azure AD for sign-in may also use Azure resources, such as compute or storage, as part of their implementation. For example, a custom application that runs in Azure and trusts Azure AD for authentication has both directory objects and Azure resources.
+
+Lastly, all Azure resources in the Azure AD tenant count against the tenant-wide [Azure quotas and limits](../../azure-resource-manager/management/azure-subscription-service-limits.md).
+
+### Access to directory objects
+
+As outlined in the diagram above, identities, resources, and their relationships are represented in an Azure AD tenant as directory objects. Examples of directory objects include users, groups, service principals, and app registrations.
+
+Having a set of directory objects in the Azure AD tenant boundary provides the following capabilities:
+
+* Visibility. Identities can discover or enumerate resources, users, and groups, and can access usage reports and audit logs, based on their permissions. For example, a member of the directory can discover users in the directory per the Azure AD [default user permissions](../fundamentals/users-default-permissions.md). (A minimal enumeration sketch appears after this list.)
+
+* Applications can affect objects. Applications can manipulate directory objects through Microsoft Graph as part of their business logic. Typical examples include reading and setting user attributes, updating a user's calendar, and sending emails on behalf of the user. Consent is necessary to allow applications to affect the tenant, and administrators can consent for all users. For more information, see [Permissions and consent in the Microsoft identity platform](../develop/v2-admin-consent.md).
+
+>[!NOTE]
+>Use caution when using application permissions. For example, with Exchange Online, you should [scope application permissions to specific mailboxes and permissions](/graph/auth-limit-mailbox-access).
+
+* Throttling and service limits. Runtime behavior of a resource might trigger [throttling](/graph/throttling) to prevent overuse or service degradation. Throttling can occur at the application, tenant, or entire service level. Most commonly, it occurs when an application has a large number of requests within or across tenants. Similarly, there are [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md) that might affect the runtime behavior of applications.
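+
+As a minimal illustration of this visibility, a signed-in member user can typically enumerate basic user properties through Microsoft Graph. A sketch with the Azure CLI, assuming the signed-in identity has the default user permissions:
+
+```azurecli
+# Enumerate basic user properties; this typically succeeds for ordinary
+# member users under the tenant's default user permissions.
+az rest --method GET \
+  --url 'https://graph.microsoft.com/v1.0/users?$select=displayName,userPrincipalName'
+```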
+
+## Administrative units for role management
+
+Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](../roles/permissions-reference.md) role to regional support specialists, so they can manage users only in the region that they support. An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only:
+
+* Users
+
+* Groups
+
+* Devices
+
+In the following diagram, administrative units are used to segment the Azure AD tenant further based on the business or organizational structure. This is useful when different business units or groups have dedicated IT support staff. The administrative units can be used to provide privileged permissions that are limited to a designated administrative unit.
+
+![Diagram that shows Azure AD Administrative units.](media/secure-with-azure-ad-introduction/administrative-units.png)
+
+For more information on administrative units, see [Administrative units in Azure Active Directory](../roles/administrative-units.md).
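+
+Administrative units can be managed through the Azure portal, PowerShell, or Microsoft Graph. The following is a hedged sketch of the Graph calls involved, issued with the Azure CLI's `az rest`; all IDs are hypothetical placeholders:
+
+```azurecli
+# Create an administrative unit.
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits" \
+  --body '{"displayName": "EMEA Support"}'
+
+# Add a user to the unit by reference.
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits/<au-id>/members/\$ref" \
+  --body '{"@odata.id": "https://graph.microsoft.com/v1.0/users/<user-object-id>"}'
+
+# Scope the Helpdesk Administrator role to the unit for a support specialist.
+# <role-id> is the object ID of the activated directory role, not the template ID.
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/directory/administrativeUnits/<au-id>/scopedRoleMembers" \
+  --body '{"roleId": "<role-id>", "roleMemberInfo": {"id": "<support-user-object-id>"}}'
+```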
+
+### Common reasons for resource isolation
+
+Sometimes a group of resources should be isolated from other resources for security or other reasons, such as unique access requirements. This is a good use case for administrative units. You must determine which users and security principals should have resource access and in what roles. Reasons to isolate resources might include:
+
+* Developer teams need the flexibility to safely iterate during the software development lifecycle of apps. But the development and testing of apps that write to Azure AD can potentially affect the Azure AD tenant through write operations. Some examples of this include:
+
+    * New applications that may change Office 365 content such as SharePoint sites, OneDrive, and Microsoft Teams.
+
+    * Custom applications that can change the data of users at scale with Microsoft Graph or similar APIs (for example, applications that are granted Directory.ReadWrite.All).
+
+ * DevOps scripts that update large sets of objects as part of a deployment lifecycle.
+
+ * Developers of Azure AD integrated apps need the ability to create user objects for testing, and those user objects shouldn't have access to production resources.
+
+* Non-production Azure resources and applications that may affect other resources. For example, a new beta version of a SaaS application may need to be isolated from the production instance of the application and production user objects.
+
+* Secret resources that should be shielded from discovery, enumeration, or takeover by existing administrators for regulatory or business-critical reasons.
+
+## Configuration in a tenant
+
+Configuration settings in Azure AD can impact any resource in the Azure AD tenant through targeted or tenant-wide management actions. Examples of tenant-wide settings include:
+
+* **External identities**: Global administrators for the tenant identify and control the external identities that can be provisioned in the tenant.
+
+ * Whether to allow external identities in the tenant.
+
+ * From which domain(s) external identities can be added.
+
+ * Whether users can invite users from other tenants.
+
+* **Named Locations**: Global administrators can create named locations (a creation sketch appears at the end of this section), which can then be used to:
+
+ * Block sign-ins from specific locations.
+
+ * Trigger conditional access policies such as MFA.
+
+    * Bypass security requirements.
+
+>[!NOTE]
+>Using [Named Locations](../conditional-access/location-condition.md) can present some challenges to your [zero-trust journey](https://www.microsoft.com/security/business/zero-trust). Verify that using Named Locations fits into your security strategy and principles.
+
+* **Allowed authentication methods**: Global administrators set the authentication methods allowed for the tenant.
+
+* **Self-service options**: Global Administrators set self-service options such as self-service password reset and Microsoft 365 group creation at the tenant level.
+
+The implementation of some tenant-wide configurations can be scoped as long as they don't get overridden by global administration policies. For example:
+
+* If the tenant is configured to allow external identities, a resource administrator can still exclude those identities from accessing a resource.
+
+* If the tenant is configured to allow personal device registration, a resource administrator can exclude those devices from accessing specific resources.
+
+* If named locations are configured, a resource administrator can configure policies either allowing or excluding access from those locations.
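+
+As a concrete sketch of one of these tenant-wide settings, a named location can be created through Microsoft Graph by using the Azure CLI's `az rest`; the display name and IP range (a documentation range) are illustrative:
+
+```azurecli
+# Create a trusted IP-based named location for use in Conditional Access.
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations" \
+  --body '{
+    "@odata.type": "#microsoft.graph.ipNamedLocation",
+    "displayName": "Corporate egress",
+    "isTrusted": true,
+    "ipRanges": [
+      { "@odata.type": "#microsoft.graph.iPv4CidrRange", "cidrAddress": "203.0.113.0/24" }
+    ]
+  }'
+```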
+
+### Common reasons for configuration isolation
+
+Configurations, controlled by Global Administrators, affect resources. While some tenant-wide configurations can be scoped with policies to not apply, or to partially apply, to a specific resource, others can't. If a resource has unique configuration needs, it will likely need to be isolated in a separate tenant. Examples of configuration isolation scenarios include:
+
+* Resources with requirements that conflict with existing tenant-wide security or collaboration postures, for example allowed authentication types, device management policies, self-service capabilities, or identity proofing for external identities.
+
+* Compliance requirements that scope certification to the entire environment, including all resources and the Azure AD tenant itself, especially when those requirements conflict with or must exclude other organizational resources.
+
+* External user access requirements that conflict with production or sensitive resource policies.
+
+* Organizations that span multiple countries/regions and companies hosted in a single Azure AD tenant, for example when different settings and licenses must be used in different countries/regions or business subsidiaries.
+
+## Administration in a tenant
+
+Identities with privileged roles in the Azure AD tenant have the visibility and permissions to execute the configuration tasks described in the previous sections. Administration includes both the administration of identity objects such as users, groups, and devices, and the scoped implementation of tenant-wide configurations for authentication, authorization, etc.
+
+### Administration of directory objects
+
+Administrators manage how identity objects can access resources, and under what circumstances. They also can disable, delete, or modify directory objects based on their privileges. Identity objects include:
+
+* **Organizational identities**, such as the following, are represented by user objects:
+
+ * Administrators
+
+ * Organizational users
+
+ * Organizational developers
+
+    * Service accounts
+
+ * Test users
+
+* **External identities** represent users from outside the organization such as:
+
+ * Partners, suppliers, or vendors that are provisioned with accounts local to the organization environment
+
+ * Partners, suppliers, or vendors that are provisioned via Azure B2B collaboration
+
+* **Groups** are represented by objects such as:
+
+ * Security groups
+
+    * [Microsoft 365 groups](/microsoft-365/community/all-about-groups)
+
+    * Dynamic groups
+
+    * Administrative units
+
+* **Devices** are represented by objects such as:
+
+    * Hybrid Azure AD joined devices (on-premises computers synchronized from on-premises Active Directory)
+
+ * Azure AD joined devices
+
+ * Azure AD registered mobile devices used by employees to access their workplace applications.
+
+ * Azure AD registered down-level devices (legacy). For example, Windows 2012 R2.
+
+* **Workload identities** are represented by objects such as:
+
+    * Managed identities
+
+ * Service principals
+
+ * Applications
+
+In a hybrid environment, identities are typically synchronized from the on-premises Active Directory environment using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md).
+
+### Administration of identity services
+
+Administrators with appropriate permissions can also manage how tenant-wide policies are implemented at the level of resource groups, security groups, or applications. When considering administration of resources, keep the following in mind. Each can be a reason to keep resources together or to isolate them.
+
+* A **Global Administrator** can take control of any Azure subscription linked to the tenant.
+
+* An **identity assigned an Authentication Administrator role** can require non-administrators to reregister for MFA or FIDO authentication.
+
+* A **Conditional Access (CA) Administrator** can create CA policies that require users signing in to specific apps to do so only from organization-owned devices. They can also scope configurations. For example, even if external identities are allowed in the tenant, they can exclude those identities from accessing a resource.
+
+* A **Cloud Application Administrator** can consent to application permissions on behalf of all users.
+
+### Common reasons for administrative isolation
+
+Who should have the ability to administer the environment and its resources? There are times when administrators of one environment must not have access to another environment. Examples include:
+
+* Separation of tenant-wide administrative responsibilities to further mitigate the risk of security and operational errors affecting critical resources.
+
+* Regulations that constrain who can administer the environment based on conditions such as country of citizenship, country of residency, clearance level, etc. that can't be accommodated with existing staff.
+
+## Security and operational considerations
+
+Given the interdependence between an Azure AD tenant and its resources, it's critical to understand the security and operational risks of compromise or error. If you're operating in a federated environment with synchronized accounts, an on-premises compromise can lead to an Azure AD compromise.
+
+* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, provided the identity granting access has sufficient privileges. While the impact of compromised non-privileged identities is largely contained, compromised administrators can have broad impact. For example, if an Azure AD global administrator account is compromised, Azure resources can become compromised. To mitigate the risk of identity compromise or bad actors, implement [tiered administration](/security/compass/privileged-access-access-model) and ensure that you follow the principle of least privilege for [Azure AD administrator roles](../roles/delegate-by-task.md). Similarly, ensure that you create CA policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](/security/compass/privileged-access-strategy).
+
+* **Federated environment compromise** - If you federate with or synchronize from on-premises infrastructure, a compromise of that infrastructure, such as federation servers or the synchronization service, can lead to compromise of the corresponding identities in the tenant, as noted above. Treat this infrastructure as a critical security dependency.
+
+* **Trusting resource compromise** - Human identities aren't the only security consideration. Any compromised component of the Azure AD tenant can impact trusting resources based on its level of permissions at the tenant and resource level. The impact of a compromised component of an Azure AD trusting resource is determined by the privileges of the resource; resources that are deeply integrated with the directory to perform write operations can have a profound impact on the entire tenant. Following [guidance for zero trust](/azure/architecture/guide/security/conditional-access-zero-trust) can help limit the impact of compromise.
+
+* **Application development** - Early stages of the development lifecycle for applications with write privileges to Azure AD, where bugs can unintentionally write changes to Azure AD objects, present a risk. Follow [Microsoft identity platform best practices](../develop/identity-platform-integration-checklist.md) during development to mitigate these risks.
+
+* **Operational error** - A security incident can occur not only because of bad actors, but also because of operational errors by tenant administrators or resource owners. These risks exist in any architecture. Mitigate them with separation of duties, tiered administration, and the principle of least privilege, and follow best practices before trying to mitigate by using a separate tenant.
+
+Incorporating zero-trust principles into your Azure AD design strategy can help guide your design to mitigate these considerations. For more information, see [Embrace proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust).
+
+## Next steps
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+
+* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+
+* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Secure With Azure Ad Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-multiple-tenants.md
+
+ Title: Resource isolation with multiple tenants to secure with Azure Active Directory
+description: Introduction to resource isolation with multiple tenants in Azure Active Directory.
+Last updated: 7/5/2022
+# Resource isolation with multiple tenants
+
+There are specific scenarios when delegating administration within a single tenant boundary won't meet your needs. In this section, we discuss requirements that may drive you to create a multi-tenant architecture. Multi-tenant organizations might span two or more Azure AD tenants, which can result in unique cross-tenant collaboration and management requirements. Multi-tenant architectures increase management overhead and complexity and should be used with caution. We recommend using a single tenant if your needs can be met with that architecture. For more detailed information, see [Multi-tenant user management](../fundamentals/multi-tenant-user-management-introduction.md).
+
+A separate tenant creates a new boundary, and therefore decoupled management of Azure AD directory roles, directory objects, conditional access policies, Azure resource groups, Azure management groups, and other controls as described in previous sections.
+
+A separate tenant is useful for an organization's IT department to validate tenant-wide changes in Microsoft services such as Intune, Azure AD Connect, or a hybrid authentication configuration while protecting the organization's users and resources. This includes testing service configurations that might have tenant-wide impact and can't be scoped to a subset of users in the production tenant.
+
+Deploying a non-production environment in a separate tenant might be necessary during development of custom applications that can change the data of production user objects with Microsoft Graph or similar APIs (for example, applications that are granted Directory.ReadWrite.All or a similarly broad scope).
+
+>[!Note]
+>Azure AD Connect supports synchronization to multiple tenants (with a separate Azure AD Connect server per tenant), which might be useful when deploying a non-production environment in a separate tenant. For more information, see [Azure AD Connect: Supported topologies](../hybrid/plan-connect-topologies.md).
+
+## Outcomes
+
+In addition to the outcomes achieved with a single-tenant architecture, organizations can fully decouple resource and tenant interactions as described below:
+
+### Resource separation
+
+* **Visibility** - Resources in a separate tenant can't be discovered or enumerated by users and administrators in other tenants. Similarly, usage reports and audit logs are contained within the new tenant boundary. This separation of visibility allows organizations to manage resources needed for confidential projects.
+
+* **Object footprint** - Applications that write to Azure AD and/or other Microsoft Online services through Microsoft Graph or other management interfaces can operate in a separate object space. This enables development teams to perform tests during the software development lifecycle without affecting other tenants.
+
+* **Quotas** - Consumption of tenant-wide [Azure Quotas and Limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) is separated from that of the other tenants.
+
+### Configuration separation
+
+A new tenant provides a separate set of tenant-wide settings that can accommodate resources and trusting applications that have requirements that need different configurations at the tenant level. Additionally, a new tenant provides a new set of Microsoft Online services such as Office 365.
+
+### Administrative separation
+
+A new tenant boundary involves a separate set of Azure AD directory roles, which enables you to configure different sets of administrators.
+
+## Common usage
+
+The following diagram illustrates a common usage for resource isolation in multiple tenants: a pre-production or "sandbox" environment that requires more separation than can be achieved with delegated administration in a single tenant.
+
+![Diagram that shows common usage scenario.](media/secure-with-azure-ad-multiple-tenants/multiple-tenant-common-scenario.png)
+
+Contoso is an organization that augmented its corporate tenant architecture with a pre-production tenant called ContosoSandbox.com. The sandbox tenant is used to support ongoing development of enterprise solutions that write to Azure AD and Microsoft 365 using Microsoft Graph. These solutions will later be deployed in the corporate tenant.
+
+The sandbox tenant is brought online to prevent applications under development from impacting production systems either directly or indirectly by consuming tenant resources, affecting quotas, or triggering throttling.
+
+Developers require access to the sandbox tenant during the development lifecycle, ideally with self-service access to permissions that are restricted in the production environment. Examples of these permissions might include creating, deleting, and updating user accounts, registering applications, provisioning and deprovisioning Azure resources, and changing policies or the overall configuration of the environment.
+
+In this example, Contoso uses [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) to provision users from the corporate tenant so they can manage and access resources and applications in the sandbox tenant without maintaining multiple credentials. This capability is primarily oriented to cross-organization collaboration scenarios. However, enterprises with multiple tenants like Contoso can use it to avoid additional credential lifecycle administration and user experience complexities.
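+
+A minimal sketch of this B2B provisioning through Microsoft Graph, using the Azure CLI's `az rest` while signed in to the sandbox tenant (the invited address and redirect URL are hypothetical):
+
+```azurecli
+# Invite a corporate-tenant user into the sandbox tenant as a B2B guest.
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/invitations" \
+  --body '{
+    "invitedUserEmailAddress": "dev1@contoso.com",
+    "inviteRedirectUrl": "https://myapps.microsoft.com",
+    "sendInvitationMessage": false
+  }'
+```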
+
+Use [External Identities cross-tenant access](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) settings to manage how you collaborate with other Azure AD organizations through B2B collaboration. These settings determine both the level of inbound access users in external Azure AD organizations have to your resources, and the level of outbound access your users have to external organizations. They also let you trust multifactor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations. For details and planning considerations, see [Cross-tenant access in Azure AD External Identities](../external-identities/cross-tenant-access-overview.md).
+
+Another approach could have been to use Azure AD Connect to sync the same on-premises Active Directory credentials to multiple tenants, keeping the same password but differentiating on the users' UPN domain.
+
+## Multi-tenant resource isolation
+
+A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+
+This will allow users to continue to use their corporate credentials, while achieving the benefits of separation as described above.
+
+Azure AD B2B collaboration in sandbox tenants should be configured to allow only identities from the corporate environment to be onboarded, using Azure B2B [allow/deny lists](../external-identities/allow-deny-list.md). For tenants that you do want to allow for B2B collaboration, consider using External Identities cross-tenant access settings for cross-tenant multifactor authentication and device trust.
+
+>[!IMPORTANT]
+>Multi-tenant architectures with external identity access enabled provide only resource isolation; they don't enable identity isolation. Resource isolation using Azure AD B2B collaboration and Azure Lighthouse doesn't mitigate risks related to identities.
+
+If the sandbox environment shares identities with the corporate environment, the following scenarios are applicable to the sandbox tenant:
+
+* A malicious actor that compromises a user, a device, or hybrid infrastructure in the corporate tenant, and is invited into the sandbox tenant, might gain access to the sandbox tenant's apps and resources.
+
+* An operational error (for example, user account deletion or credential revocation) in the corporate tenant might impact the access of an invited user into the sandbox tenant.
+
+You must do a risk analysis, and potentially consider identity isolation through multiple tenants for business-critical resources that require a highly defensive approach. Azure AD Privileged Identity Management can help mitigate some of the risks by imposing extra security for accessing business-critical tenants and resources.
+
+### Directory objects
+
+The tenant you use to isolate resources may contain the same types of objects, Azure resources, and trusting applications as your primary tenant. You may need to provision the following object types:
+
+**Users and groups**: Identities needed by solution engineering teams, such as:
+
+* Sandbox environment administrators.
+
+* Technical owners of applications.
+
+* Line-of-business application developers.
+
+* Test end-user accounts.
+
+These identities might be provisioned for:
+
+* Employees who come with their corporate account through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md).
+
+* Employees who need local accounts for administration, emergency administrative access, or other technical reasons.
+
+Customers who have or require non-production Active Directory on-premises can also synchronize their on-premises identities to the sandbox tenant if needed by the underlying resources and applications.
+
+**Devices**: The non-production tenant contains only the devices that are needed in the solution engineering cycle:
+
+* Administration workstations
+
+* Non-production computers and mobile devices needed for development, testing, and documentation
+
+### Applications
+
+**Azure AD integrated applications**: Application objects and service principals for:
+
+* Test instances of the applications that are deployed in production (for example, applications that write to Azure AD and Microsoft online services).
+
+* Infrastructure services to manage and maintain the non-production tenant, potentially a subset of the solutions available in the corporate tenant.
+
+**Microsoft Online services**:
+
+* Typically, the team that owns the Microsoft Online Services in production should be the one owning the non-production instance of those services.
+
+* Administrators of non-production test environments shouldn't be provisioning Microsoft Online Services unless those services are specifically being tested. This avoids inappropriate use of Microsoft services, for example setting up production SharePoint sites in a test environment.
+
+* Similarly, provisioning of Microsoft Online services that can be initiated by end users (also known as ad-hoc subscriptions) should be locked down. For more information, see [What is self-service sign-up for Azure Active Directory?](../enterprise-users/directory-self-service-signup.md).
+
+* Generally, all non-essential license features should be disabled for the tenant using group-based licensing. This should be done by the same team that manages licenses in the production tenant, to avoid misconfiguration by developers who might not know the impact of enabling licensed features.
+
+### Azure resources
+
+Any Azure resources needed by trusting applications may also be deployed, for example, databases, virtual machines, containers, and Azure Functions. For your sandbox environment, you must weigh the cost savings of less-expensive SKUs for products and services against the reduced security features they provide.
+
+The RBAC model for access control should still be employed in a non-production environment in case changes are replicated to production after tests have concluded. Failure to do so allows security flaws in the non-production environment to propagate to your production tenant.
+
+## Resource and identity isolation with multiple tenants
+
+### Isolation outcomes
+
+There are limited situations where resource isolation alone won't meet your requirements. You can isolate both resources and identities in a multi-tenant architecture by disabling all cross-tenant collaboration capabilities and effectively building a separate identity boundary. This approach is a defense against operational errors and compromise of user identities, devices, or hybrid infrastructure in corporate tenants.
+
+### Isolation common usage
+
+A separate identity boundary is typically used for business-critical applications and resources such as customer-facing services. In this scenario, Fabrikam has decided to create a separate tenant for their customer-facing SaaS product to avoid the risk of employee identity compromise affecting their SaaS customers. The following diagram illustrates this architecture:
+
+The FabrikamSaaS tenant contains the environments used for applications that are offered to customers as part of Fabrikam's business model.
+
+### Isolation of directory objects
+
+The directory objects in the FabrikamSaaS tenant are as follows:
+
+**Users and groups**: Identities needed by solution IT teams, customer support staff, or other necessary personnel are created within the SaaS tenant. To preserve isolation, only local accounts are used, and Azure AD B2B collaboration isn't enabled.
+
+**Azure AD B2C directory objects**: If the tenant environments are accessed by customers, the tenant may contain an Azure AD B2C tenant and its associated identity objects. Subscriptions that hold these directories are good candidates for an isolated consumer-facing environment.
+
+**Devices**: This tenant contains a reduced number of devices; only those that are needed to run customer-facing solutions:
+
+* Secure administration workstations.
+
+* Support personnel workstations (this can include engineers who are "on call").
+
+### Isolation of applications
+
+**Azure AD integrated applications**: Application objects and service principals for:
+
+* Production applications (for example, multi-tenant application definitions).
+
+* Infrastructure services to manage and maintain the customer-facing environment.
+
+**Azure Resources**: Hosts the IaaS, PaaS and SaaS resources of the customer-facing production instances.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-with-azure-ad-introduction.md)
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+
+* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
+
+ Title: Resource management fundamentals in Azure Active Directory
+description: Introduction to resource management in Azure Active Directory.
+Last updated: 7/5/2022
+# Azure resource management fundamentals
+
+It's important to understand the structure and terms that are specific to Azure resources. The following image shows an example of the four levels of scope that are provided by Azure:
+
+![Diagram that shows Azure resource management model.](media/secure-with-azure-ad-resource-management/resource-management-terminology.png)
+
+## Terminology
+
+The following are some of the terms you should be familiar with:
+
+**Resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources.
+
+**Resource group** - A container that holds related resources for an Azure solution, such as a collection of virtual machines, associated VNets, and load balancers that require management by specific teams. The [resource group](../../azure-resource-manager/management/overview.md) includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. Resource groups can also be used to help with life-cycle management by deleting all resources that have the same lifespan at one time. This approach also provides a security benefit by leaving no fragments that might be exploited.
+
+**Subscription** - From an organizational hierarchy perspective, a subscription is a billing and management container of resources and resource groups. An Azure subscription has a trust relationship with Azure AD. A subscription trusts Azure AD to authenticate users, services, and devices.
+
+>[!Note]
+>A subscription may trust only one Azure AD tenant. However, each tenant may trust multiple subscriptions and subscriptions can be moved between tenants.
+
+**Management group** - [Azure management groups](../../governance/management-groups/overview.md) provide a hierarchical method of applying policies and compliance at different scopes above subscriptions. It can be at the tenant root management group (highest scope) or at lower levels in the hierarchy. You organize subscriptions into containers called "management groups" and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Note that policy definitions can be applied to a management group or subscription.
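+
+For example, a management group can be created and a subscription placed under it with the Azure CLI (the names are hypothetical):
+
+```azurecli
+# Create a management group for non-production subscriptions.
+az account management-group create --name "non-production"
+
+# Move a subscription under it; governance conditions applied to the
+# management group are then inherited by the subscription.
+az account management-group subscription add \
+  --name "non-production" \
+  --subscription "<subscription-id>"
+```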
+
+**Resource provider** - A service that supplies Azure resources. For example, a common [resource provider](../../azure-resource-manager/management/resource-providers-and-types.md) is Microsoft.Compute, which supplies the virtual machine resource. Microsoft.Storage is another common resource provider.
+
+**Resource Manager template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, tenant, or management group. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../../azure-resource-manager/templates/overview.md). Additionally, the [Bicep language](../../azure-resource-manager/bicep/overview.md) can be used instead of JSON.
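+
+A template or Bicep file is typically deployed with the Azure CLI; a minimal sketch follows (the file name and parameter are hypothetical):
+
+```azurecli
+# Deploy a Bicep (or ARM JSON) template to a resource group, repeatably.
+az deployment group create \
+  --resource-group "my-rg" \
+  --template-file ./main.bicep \
+  --parameters environment=test
+```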
+
+## Azure resource management model
+
+Each Azure subscription is associated with controls used by [Azure Resource Manager](../../azure-resource-manager/management/overview.md) (ARM). Resource Manager is the deployment and management service for Azure. It has a trust relationship with Azure AD for identity management for organizations, and with the Microsoft account (MSA) for individuals. Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. You use management features like access control, locks, and tags to secure and organize your resources after deployment.
+
+>[!NOTE]
+>Prior to ARM, there was another deployment model named Azure Service Manager (ASM) or "classic". To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md). Managing environments with the ASM model is out of scope of this content.
+
+Azure Resource Manager is the front-end service that hosts the REST APIs used by PowerShell, the Azure portal, or other clients to manage resources. When a client makes a request to manage a specific resource, Resource Manager proxies the request to the resource provider to complete the request. For example, if a client makes a request to manage a virtual machine resource, Resource Manager proxies the request to the Microsoft.Compute resource provider. Resource Manager requires the client to specify an identifier for both the subscription and the resource group to manage the virtual machine resource.
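+
+That addressing model is visible in the request URI itself. A hedged sketch of the underlying REST call, issued here with `az rest` (the IDs are placeholders, and the api-version varies by resource provider):
+
+```azurecli
+# Read a VM through Azure Resource Manager; both the subscription and the
+# resource group appear in the URI, and ARM routes the request to Microsoft.Compute.
+az rest --method GET \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>?api-version=2023-03-01"
+```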
+
+Before any resource management request can be executed by Resource Manager, a set of controls is checked.
+
+* **Valid user check** - The user requesting to manage the resource must have an account in the Azure AD tenant associated with the subscription of the managed resource.
+
+* **User permission check** - Permissions are assigned to users by using [role-based access control (RBAC)](../../role-based-access-control/overview.md). An RBAC role specifies a set of actions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
+
+* **Azure policy check** - [Azure policies](../../governance/policy/overview.md) specify the operations allowed or explicitly denied for a specific resource. For example, a policy can specify that users are only allowed (or not allowed) to deploy a specific type of virtual machine.
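+
+As a sketch of the Azure policy check in practice, the built-in "Allowed locations" policy can be assigned at subscription scope with the Azure CLI; the allowed location is illustrative:
+
+```azurecli
+# Look up the built-in "Allowed locations" policy definition by display name.
+definition=$(az policy definition list \
+  --query "[?displayName=='Allowed locations'].name" -o tsv)
+
+# Assign it at the current subscription scope; resource deployments to
+# other regions are then denied by the policy check.
+az policy assignment create \
+  --name "allowed-locations" \
+  --policy "$definition" \
+  --params '{ "listOfAllowedLocations": { "value": ["westeurope"] } }'
+```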
+
+The following diagram summarizes the resource model we just described.
+
+![Diagram that shows Azure resource management with ARM and Azure AD.](media/secure-with-azure-ad-resource-management/resource-model.png)
+
+**Azure Lighthouse** - [Azure Lighthouse](../../lighthouse/overview.md) enables resource management across tenants. Organizations can delegate roles at the subscription or resource group level to identities in another tenant.
+
+Subscriptions that enable [delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md) with Azure Lighthouse have attributes that indicate the tenant IDs that can manage subscriptions or resource groups, and mapping between the built-in RBAC role in the resource tenant to identities in the service provider tenant. At runtime, Azure Resource Manager will consume these attributes to authorize tokens coming from the service provider tenant.
+
+It's worth noting that Azure Lighthouse itself is modeled as an Azure resource provider, which means that aspects of the delegation across a tenant can be targeted through Azure Policy.
+
+**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+
+## Azure resource management with Azure AD
+
+Now that you have a better understanding of the resource management model in Azure, let's briefly examine some of the capabilities of Azure AD that can provide identity and access management for Azure resources.
+
+### Billing
+
+Billing is important to resource management because some billing roles interact with or can manage resources. Billing works differently depending on the type of agreement that you have with Microsoft.
+
+#### Azure Enterprise Agreements
+
+Azure Enterprise Agreement (Azure EA) customers are onboarded to the Azure EA Portal upon execution of their commercial contract with Microsoft. Upon onboarding, an identity is associated with a "root" Enterprise Administrator billing role. The portal provides a hierarchy of management functions:
+
+* Departments help you segment costs into logical groupings and enable you to set a budget or quota at the department level.
+
+* Accounts are used to further segment departments. You can use accounts to manage subscriptions and to access reports.
+
+The EA portal can authorize Microsoft accounts (MSA) or Azure AD accounts (identified in the portal as "Work or School Accounts"). Identities with the role of "Account Owner" in the EA portal can create Azure subscriptions.
+
+#### Enterprise billing and Azure AD tenants
+
+When an Account Owner creates an Azure subscription within an enterprise agreement, the identity and access management of the subscription is configured as follows:
+
+* The Azure subscription is associated with the same Azure AD tenant of the Account Owner.
+
+* The account owner who created the subscription is assigned the Service Administrator and Account Administrator roles. (The Azure EA Portal assigns Azure Service Manager (ASM) or "classic" roles to manage subscriptions. To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md).)
+
+An enterprise agreement can be configured to support multiple tenants by setting the authentication type of "Work or school account cross-tenant" in the Azure EA Portal. Given the above, organizations can set multiple accounts for each tenant, and multiple subscriptions for each account, as shown in the diagram below.
+
+![Diagram that shows Enterprise Agreement billing structure.](media/secure-with-azure-ad-resource-management/billing-tenant-relationship.png)
+
+It's important to note that the default configuration described above grants the Azure EA Account Owner privileges to manage the resources in any subscriptions they created. For subscriptions holding production workloads, consider decoupling billing and resource management by changing the service administrator of the subscription right after creation.
+
+To further decouple and prevent the account owner from regaining service administrator access to the subscription, the subscription's tenant can be [changed](../fundamentals/active-directory-how-subscriptions-associated-directory.md) after creation. If the account owner doesn't have a user object in the Azure AD tenant the subscription is moved to, they can't regain the service administrator role.
+
+To learn more, visit [Classic subscription administrator roles, Azure RBAC roles, and Azure AD roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
+
+#### Microsoft Customer Agreement
+
+Customers enrolled with a [Microsoft Customer Agreement](../../cost-management-billing/understand/mca-overview.md) (MCA) have a different billing management system with its own roles.
+
+A [billing account](../../cost-management-billing/manage/understand-mca-roles.md) for the Microsoft Customer Agreement contains one or more [billing profiles](../../cost-management-billing/manage/understand-mca-roles.md) that allow managing invoices and payment methods. Each billing profile contains one or more [invoice sections](../../cost-management-billing/manage/understand-mca-roles.md) to organize costs on the billing profile's invoice.
+
+In a Microsoft Customer Agreement, billing roles come from a single Azure AD tenant. To provision subscriptions for multiple tenants, the subscriptions must be initially created in the same Azure AD Tenant as the MCA, and then changed. In the diagram below, the subscriptions for the Corporate IT pre-production environment were moved to the ContosoSandbox tenant after creation.
+
+ ![Diagram that shows MCA billing structure.](media/secure-with-azure-ad-resource-management/microsoft-customer-agreement.png)
+
+## RBAC and role assignments in Azure
+
+In the Azure AD fundamentals section, you learned that Azure RBAC is the authorization system that provides fine-grained access management to Azure resources, and includes many [built-in roles](../../role-based-access-control/built-in-roles.md). You can create [custom roles](../../role-based-access-control/custom-roles.md) and assign roles at different scopes. Permissions are enforced by assigning RBAC roles to objects requesting access to Azure resources.
+
+Azure AD roles operate on concepts similar to [Azure role-based access control](../../role-based-access-control/overview.md). The [difference between these two role-based access control systems](../../role-based-access-control/rbac-and-directory-admin-roles.md) is that Azure RBAC uses Azure Resource Manager to control access to Azure resources such as virtual machines or storage, and Azure AD roles control access to Azure AD, applications, and Microsoft services such as Office 365.
+
+Both Azure AD roles and Azure RBAC roles integrate with Azure AD Privileged Identity Management to enable just-in-time activation policies such as approval workflow and MFA.
+
+## ABAC and role assignments in Azure
+
+[Attribute-based access control (ABAC)](../../role-based-access-control/conditions-overview.md) is an authorization system that defines access based on attributes associated with security principals, resources, and environment. With ABAC, you can grant a security principal access to a resource based on attributes. Azure ABAC refers to the implementation of ABAC for Azure.
+
+Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. A role assignment condition is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. A condition filters down permissions granted as a part of the role definition and role assignment. For example, you can add a condition that requires an object to have a specific tag to read the object. You can't explicitly deny access to specific resources using conditions.
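+
+A hedged sketch of the tag example above with the Azure CLI; the condition grants blob reads only when the blob carries a matching index tag (the tag value, IDs, and scope are illustrative placeholders):
+
+```azurecli
+# ABAC condition: allow blob reads only for blobs tagged Project=Cascade.
+condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEquals 'Cascade'))"
+
+az role assignment create \
+  --assignee "<user-object-id>" \
+  --role "Storage Blob Data Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
+  --condition "$condition" \
+  --condition-version "2.0"
+```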
+
+## Conditional Access
+
+Azure AD [Conditional Access](../../role-based-access-control/conditional-access-azure-management.md) (CA) can be used to manage access to Azure management endpoints. CA policies can be applied to the Microsoft Azure Management cloud app to protect the Azure resource management endpoints such as:
+
+* Azure Resource Manager Provider (services)
+
+* Azure Resource Manager APIs
+
+* Azure PowerShell
+
+* Azure CLI
+
+* Azure portal
+
+![Diagram that shows the Conditional Access policy.](media/secure-with-azure-ad-resource-management/conditional-access.jpeg)
+
+For example, an administrator may configure a Conditional Access policy that allows a user to sign in to the Azure portal only from approved locations, and that also requires either multifactor authentication (MFA) or a hybrid Azure AD domain-joined device.
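+
+A minimal sketch of such a policy, created through Microsoft Graph with the Azure CLI's `az rest`. It targets the well-known Microsoft Azure Management app ID (797f4846-ba00-4fd7-ba43-dac1f8f63013) and starts in report-only mode; adapt the user scoping to your environment:
+
+```azurecli
+# Require MFA for all users accessing Azure management endpoints (report-only).
+az rest --method POST \
+  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" \
+  --body '{
+    "displayName": "Require MFA for Azure management",
+    "state": "enabledForReportingButNotEnforced",
+    "conditions": {
+      "users": { "includeUsers": ["All"] },
+      "applications": { "includeApplications": ["797f4846-ba00-4fd7-ba43-dac1f8f63013"] }
+    },
+    "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
+  }'
+```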
+
+## Azure Managed Identities
+
+A common challenge when building cloud applications is how to manage the credentials in your code for authenticating to cloud services. Keeping the credentials secure is an important task. Ideally, the credentials never appear on developer workstations and aren't checked into source control. [Managed identities for Azure resources](../managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication without any credentials in your code.
+
+There are two types of managed identities:
+
+* A system-assigned managed identity is enabled directly on an Azure resource. When the resource is enabled, Azure creates an identity for the resource in the associated subscription's trusted Azure AD tenant. After the identity is created, the credentials are provisioned onto the resource. The lifecycle of a system-assigned identity is directly tied to the Azure resource. If the resource is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.
+
+* A user-assigned managed identity is created as a standalone Azure resource. Azure creates an identity in the Azure AD tenant that's trusted by the subscription with which the resource is associated. After the identity is created, the identity can be assigned to one or more Azure resources. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure resources to which it's assigned.
+
+Internally, managed identities are service principals of a special type that can only be used by specific Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed. Note that authorization of Graph API permissions can only be done through PowerShell, so not all features of managed identities are accessible via the portal UI.
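+
+As a short sketch of the system-assigned flow, using the Azure CLI (the resource names and scope are hypothetical):
+
+```azurecli
+# Enable a system-assigned managed identity on an existing VM.
+principalId=$(az vm identity assign \
+  --resource-group "my-rg" --name "my-vm" \
+  --query systemAssignedIdentity -o tsv)
+
+# Grant the identity read access to a storage account; code running on the
+# VM can then authenticate without any credentials stored in the code.
+az role assignment create \
+  --assignee "$principalId" \
+  --role "Storage Blob Data Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/<account>"
+```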
+
+## Azure Active Directory Domain Services
+
+Azure Active Directory Domain Services (Azure AD DS) provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. Supported servers are moved from an on-premises AD DS forest and joined to an Azure AD DS managed domain and continue to use legacy protocols for authentication (for example, Kerberos authentication).
+
+## Azure AD B2C directories and Azure
+
+An Azure AD B2C tenant is linked to an Azure subscription for billing and communication purposes. Azure AD B2C tenants have a self-contained role structure in the directory, which is independent from the Azure RBAC privileged roles of the Azure subscription.
+
+When the Azure AD B2C tenant is initially provisioned, the user creating the B2C tenant must have contributor or owner permissions in the subscription. Upon creation, that user becomes the first Azure AD B2C tenant global administrator and they can later create other accounts and assign them to directory roles.
+
+It's important to note that the owners and contributors of the linked Azure subscription can remove the link between the subscription and the directory, which affects the ongoing billing of the Azure AD B2C usage.
+
+## Identity considerations for IaaS solutions in Azure
+
+This scenario covers identity isolation requirements that organizations have for Infrastructure-as-a-Service (IaaS) workloads.
+
+There are three key options regarding isolation management of IaaS workloads:
+
+* Virtual machines joined to standalone Active Directory Domain Services (AD DS)
+
+* Azure Active Directory Domain Services (Azure AD DS) joined virtual machines
+
+* Sign-in to virtual machines in Azure using Azure AD authentication
+
+A key concept to address with the first two options is that there are two identity realms that are involved in these scenarios.
+
+* When you sign in to an Azure Windows Server VM via remote desktop protocol (RDP), you're generally logging on to the server using your domain credentials, which performs a Kerberos authentication against an on-premises AD DS domain controller or Azure AD DS. Alternatively, if the server isn't domain-joined, a local account can be used to sign in to the virtual machine.
+
+* When you sign in to the Azure portal to create or manage a VM, you're authenticating against Azure AD (potentially using the same credentials if you've synchronized the correct accounts), and this could result in an authentication against your domain controllers should you be using Active Directory Federation Services (AD FS) or pass-through authentication.
+
+### Virtual machines joined to standalone Active Directory Domain Services
+
+AD DS is the Windows Server-based directory service that organizations have largely adopted for on-premises identity services. AD DS can be deployed when IaaS workloads in Azure require identity isolation from AD DS administrators and users in another forest.
+
+![Diagram that shows AD DS virtual machine management.](media/secure-with-azure-ad-resource-management/vm-to-standalone-active-directory-domain-controller.jpeg)
+
+The following considerations need to be made in this scenario:
+
+**AD DS domain controllers** - A minimum of two AD DS domain controllers must be deployed to ensure that authentication services are highly available and performant. For more information, see [AD DS Design and Planning](/windows-server/identity/ad-ds/plan/ad-ds-design-and-planning).
+
+**AD DS Design and Planning** - A new AD DS forest must be created with the following services configured correctly:
+
+* **AD DS Domain Name System (DNS)** - AD DS DNS must be configured for the relevant zones within AD DS to ensure that name resolution operates correctly for servers and applications.
+
+* **AD DS Sites and Services** - These services must be configured to ensure that applications have low latency and performant access to domain controllers. The relevant virtual networks, subnets, and data center locations that servers are located in should be configured in sites and services.
+
+* **AD DS FSMOs** - The Flexible Single Master Operation (FSMO) roles that are required should be reviewed and assigned to the appropriate AD DS domain controllers.
+
+* **AD DS Domain Join** - All servers (excluding "jumpboxes") that require AD DS for authentication, configuration, and management need to be joined to the isolated forest.
+
+* **AD DS Group Policy (GPO)** - AD DS GPOs must be configured to ensure that the configuration meets the security requirements, and that the configuration is standardized across the forest and domain-joined machines.
+
+* **AD DS Organizational Units (OU)** - AD DS OUs must be defined to ensure grouping of AD DS resources into logical management and configuration silos for purposes of administration and application of configuration.
+
+* **Role-based access control** - RBAC must be defined for administration and access to resources joined to this forest. This includes:
+
+ * **AD DS Groups** - Groups must be created to apply appropriate permissions for users to AD DS resources.
+
+    * **Administration accounts** - As mentioned at the start of this section, there are two administration accounts required to manage this solution.
+
+ * An AD DS administration account with the least privileged access required to perform the administration required in AD DS and domain-joined servers.
+
+ * An Azure AD administration account for Azure portal access to connect, manage, and configure virtual machines, VNets, network security groups and other required Azure resources.
+
+ * **AD DS user accounts** - Relevant user accounts need to be provisioned and added to correct groups to allow user access to applications hosted by this solution.
+
+**Virtual networks (VNets)** - Apply the following configuration guidance:
+
+* **AD DS domain controller IP addresses** - The domain controllers shouldn't be configured with static IP addresses within the operating system. The IP addresses should be reserved on the Azure VNet to ensure they always stay the same, and the domain controllers should be configured to use DHCP.
+
+* **VNet DNS Server** - DNS servers must be configured on VNets that are part of this isolated solution to point to the domain controllers. This is required to ensure that applications and servers can resolve the required AD DS services or other services joined to the AD DS forest.
+
+* **Network security groups (NSGs)** - The domain controllers should be located on their own VNet or subnet with NSGs defined to only allow access to domain controllers from required servers (for example, domain-joined machines or jumpboxes). Jumpboxes should be added to an application security group (ASG) to simplify NSG creation and administration.
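+To illustrate the points above, here's a minimal Azure PowerShell sketch; it isn't from the original article, and the resource names, IP addresses, and RDP port are placeholder assumptions:
+
+```powershell
+# Reserve the domain controller's IP address on the Azure VNet (static allocation),
+# while the operating system itself keeps using DHCP.
+$nic = Get-AzNetworkInterface -Name "dc01-nic" -ResourceGroupName "identity-rg"
+$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
+$nic | Set-AzNetworkInterface
+
+# Point the VNet's DNS servers at the domain controllers so that domain-joined
+# servers can resolve AD DS services.
+$vnet = Get-AzVirtualNetwork -Name "identity-vnet" -ResourceGroupName "identity-rg"
+$vnet.DhcpOptions.DnsServers = @("10.0.0.4", "10.0.0.5")
+$vnet | Set-AzVirtualNetwork
+
+# Allow access to the domain controllers' subnet only from jumpboxes that are
+# members of an application security group (ASG).
+$asg = Get-AzApplicationSecurityGroup -Name "jumpbox-asg" -ResourceGroupName "identity-rg"
+Get-AzNetworkSecurityGroup -Name "dc-nsg" -ResourceGroupName "identity-rg" |
+    Add-AzNetworkSecurityRuleConfig -Name "AllowJumpboxRdp" -Access Allow -Protocol Tcp `
+        -Direction Inbound -Priority 200 -SourceApplicationSecurityGroup $asg `
+        -SourcePortRange * -DestinationAddressPrefix "10.0.0.0/24" -DestinationPortRange 3389 |
+    Set-AzNetworkSecurityGroup
+```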
+
+**Challenges**: The list below highlights key challenges with using this option for identity isolation:
+
+* An additional AD DS forest to administer, manage, and monitor, resulting in more work for the IT team.
+
+* Further infrastructure may be required for management of patching and software deployments. Organizations should consider deploying Azure Update Management, Group Policy (GPO) or System Center Configuration Manager (SCCM) to manage these servers.
+
+* Additional credentials for users to remember and use to access resources.
+
+>[!IMPORTANT]
+>For this isolated model, it is assumed that there is no connectivity to or from the domain controllers from the customer's corporate network and that there are no trusts configured with other forests. A jumpbox or management server should be created to allow a point from which the AD DS domain controllers can be managed and administered.
+
+### Azure Active Directory Domain Services joined virtual machines
+
+When a requirement exists to deploy IaaS workloads to Azure that require identity isolation from AD DS administrators and users in another forest, then an Azure AD Domain Services (Azure AD DS) managed domain can be deployed. Azure AD DS is a service that provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. This provides an isolated domain without the technical complexities of building and managing your own AD DS. The following considerations need to be made.
+
+![Diagram that shows Azure AD DS virtual machine management.](media/secure-with-azure-ad-resource-management/vm-to-azure-ad-domain-services.png)
+
+**Azure AD DS managed domain** - Only one Azure AD DS managed domain can be deployed per Azure AD tenant and this is bound to a single VNet. It's recommended that this VNet forms the "hub" for Azure AD DS authentication. From this hub, "spokes" can be created and linked to allow legacy authentication for servers and applications. The spokes are additional VNets on which Azure AD DS joined servers are located and are linked to the hub using Azure network gateways or VNet peering.
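+As a hedged sketch (not from the original article), linking a spoke VNet to the Azure AD DS hub with VNet peering might look like the following; the VNet and resource group names are assumptions:
+
+```powershell
+# Peer the hub VNet (hosting Azure AD DS) and a spoke VNet in both directions
+# so that servers on the spoke can reach the managed domain.
+$hub   = Get-AzVirtualNetwork -Name "aadds-hub-vnet" -ResourceGroupName "aadds-rg"
+$spoke = Get-AzVirtualNetwork -Name "app-spoke-vnet" -ResourceGroupName "app-rg"
+Add-AzVirtualNetworkPeering -Name "hub-to-spoke" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke.Id
+Add-AzVirtualNetworkPeering -Name "spoke-to-hub" -VirtualNetwork $spoke -RemoteVirtualNetworkId $hub.Id
+```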
+
+**User forest vs. resource forest** - Azure AD DS provides two options for forest configuration of the Azure AD DS managed domain. For the purposes of this section, we focus on the user forest, as the resource forest relies on a trust being configured with an AD DS forest, and this goes against the isolation principle we're addressing here.
+
+* **User forest** - By default, an Azure AD DS managed domain is created as a user forest. This type of forest synchronizes all objects from Azure AD, including any user accounts synchronized from an on-premises AD DS environment.
+
+* **Resource forest** - Resource forests only synchronize users and groups created directly in Azure AD, and require a trust to be configured with an AD DS forest for user authentication. For more information, see [Resource forest concepts and features for Azure Active Directory Domain Services](../../active-directory-domain-services/concepts-resource-forest.md).
+
+**Managed domain location** - A location must be set when deploying an Azure AD DS managed domain. The location is a physical region (data center) where the managed domain is deployed. It's recommended you:
+
+* Consider a location that is geographically close to the servers and applications that require Azure AD DS services.
+
+* Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md).
+
+**Object provisioning** - Azure AD DS synchronizes identities from the Azure AD that is associated with the subscription that Azure AD DS is deployed into. It's also worth noting that if the associated Azure AD has synchronization set up with Azure AD Connect (user forest scenario) then the life cycle of these identities can also be reflected in Azure AD DS. This service has two modes that can be used for provisioning user and group objects from Azure AD.
+
+* **All**: All users and groups are synchronized from Azure AD into Azure AD DS.
+
+* **Scoped**: Only users that are members of selected groups are synchronized from Azure AD into Azure AD DS.
+
+When you first deploy Azure AD DS, an automatic one-way synchronization is configured to replicate the objects from Azure AD. This one-way synchronization continues to run in the background to keep the Azure AD DS managed domain up to date with any changes from Azure AD. No synchronization occurs from Azure AD DS back to Azure AD. For more information, see [How objects and credentials are synchronized in an Azure AD Domain Services managed domain](../../active-directory-domain-services/synchronization.md).
+
+It's worth noting that if you need to change the type of synchronization from All to Scoped (or vice versa), the Azure AD DS managed domain must be deleted, recreated, and reconfigured. As a good practice, organizations should also consider using "scoped" provisioning to limit the synchronized identities to only those that need access to Azure AD DS resources.
+
+**Group Policy Objects (GPO)** - To configure GPO in an Azure AD DS managed domain you must use Group Policy Management tools on a server that has been domain joined to the Azure AD DS managed domain. For more information, see [Administer Group Policy in an Azure AD Domain Services managed domain](../../active-directory-domain-services/manage-group-policy.md).
+
+**Secure LDAP** - Azure AD DS provides a secure LDAP service that can be used by applications that require it. This setting is disabled by default; to enable secure LDAP, a certificate needs to be uploaded. In addition, the NSG that secures the VNet that Azure AD DS is deployed onto must allow port 636 connectivity to the Azure AD DS managed domain. For more information, see [Configure secure LDAP for an Azure Active Directory Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md).
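+For example, a minimal sketch of allowing port 636 on the NSG with Azure PowerShell; the NSG name, priority, and client address range are assumptions, not values from the original article:
+
+```powershell
+# Allow inbound secure LDAP (TCP 636) to the Azure AD DS subnet from a known client range.
+Get-AzNetworkSecurityGroup -Name "aadds-nsg" -ResourceGroupName "aadds-rg" |
+    Add-AzNetworkSecurityRuleConfig -Name "AllowLDAPS" -Access Allow -Protocol Tcp `
+        -Direction Inbound -Priority 401 -SourceAddressPrefix "203.0.113.0/24" `
+        -SourcePortRange * -DestinationAddressPrefix "*" -DestinationPortRange 636 |
+    Set-AzNetworkSecurityGroup
+```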
+
+**Administration** - To perform administration duties on Azure AD DS (for example, domain join machines or edit GPO), the account used for this task needs to be part of the Azure AD DC Administrators group. Accounts that are members of this group can't directly sign in to domain controllers to perform management tasks. Instead, you create a management VM that is joined to the Azure AD DS managed domain, then install your regular AD DS management tools. For more information, see [Management concepts for user accounts, passwords, and administration in Azure Active Directory Domain Services](../../active-directory-domain-services/administration-concepts.md).
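+For instance, after creating the management VM, joining it to the managed domain from an elevated PowerShell session on that VM might look like the sketch below; the domain name is a placeholder, and the credential must belong to a member of the Azure AD DC Administrators group:
+
+```powershell
+# Join this management VM to the Azure AD DS managed domain, then reboot.
+Add-Computer -DomainName "aaddscontoso.com" -Credential (Get-Credential) -Restart
+```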
+
+**Password hashes** - For authentication with Azure AD DS to work, password hashes for all users need to be in a format that is suitable for NT LAN Manager (NTLM) and Kerberos authentication. To ensure authentication with Azure AD DS works as expected, the following prerequisites must be met.
+
+* **Users synchronized with Azure AD Connect (from AD DS)** - The legacy password hashes need to be synchronized from on-premises AD DS to Azure AD.
+
+* **Users created in Azure AD** - Need to reset their password for the correct hashes to be generated for usage with Azure AD DS. For more information, see [Enable synchronization of password hashes](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md).
+
+**Network** - Azure AD DS is deployed onto an Azure VNet, so considerations need to be made to ensure that servers and applications are secured and can access the managed domain correctly (a sketch of creating a dedicated subnet follows the list below). For more information, see [Virtual network design considerations and configuration options for Azure AD Domain Services](../../active-directory-domain-services/network-considerations.md).
+
+* **Dedicated subnet** - Azure AD DS must be deployed in its own subnet. Don't use an existing subnet or a gateway subnet.
+
+* **Network security group (NSG)** - An NSG is created during the deployment of an Azure AD DS managed domain. It contains the required rules for correct service communication. Don't create or use an existing network security group with your own custom rules.
+
+* **Azure AD DS requires 3-5 IP addresses** - Make sure that your subnet IP address range can provide this number of addresses. Restricting the available IP addresses can prevent Azure AD DS from maintaining two domain controllers.
+
+* **VNet DNS Server** - As previously discussed about the "hub and spoke" model, it's important to have DNS configured correctly on the VNets to ensure that servers joined to the Azure AD DS managed domain have the correct DNS settings to resolve the Azure AD DS managed domain. Each VNet has a DNS server entry that is passed to servers as they obtain an IP address and these DNS entries need to be the IP addresses of the Azure AD DS managed domain. For more information, see [Update DNS settings for the Azure virtual network](../../active-directory-domain-services/tutorial-create-instance.md).
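+A hedged sketch of creating the dedicated subnet mentioned above; the names and address range are assumptions, and a /24 comfortably exceeds the 3-5 address minimum:
+
+```powershell
+# Add a dedicated subnet for the Azure AD DS managed domain; don't reuse an
+# existing subnet or a gateway subnet.
+$vnet = Get-AzVirtualNetwork -Name "aadds-hub-vnet" -ResourceGroupName "aadds-rg"
+Add-AzVirtualNetworkSubnetConfig -Name "aadds-subnet" -AddressPrefix "10.1.0.0/24" -VirtualNetwork $vnet
+$vnet | Set-AzVirtualNetwork
+```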
+
+**Challenges** - The following list highlights key challenges with using this option for identity isolation.
+
+* Some Azure AD DS configuration can only be administered from an Azure AD DS joined server.
+
+* Only one Azure AD DS managed domain can be deployed per Azure AD tenant. As described in this section, the hub-and-spoke model is recommended to provide Azure AD DS authentication to services on other VNets.
+
+* Further infrastructure may be required for management of patching and software deployments. Organizations should consider deploying Azure Update Management, Group Policy (GPO) or System Center Configuration Manager (SCCM) to manage these servers.
+
+For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the Azure AD DS managed domain from the customer's corporate network and that there are no trusts configured with other forests. A jumpbox or management server should be created to allow a point from which the Azure AD DS can be managed and administered.
+
+### Sign into virtual machines in Azure using Azure Active Directory authentication
+
+When a requirement exists to deploy IaaS workloads to Azure that require identity isolation, the final option is to use Azure AD for sign-in to the servers in this scenario. This makes Azure AD the identity realm for authentication purposes, and identity isolation is achieved by provisioning the servers into the relevant subscription, which is linked to the required Azure AD tenant. The following considerations need to be made.
+
+![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-with-azure-ad-resource-management/sign-into-vm.png)
+
+**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../../virtual-machines/linux/login-using-aad.md).
+
+**Credentials**: One of the key benefits of signing into virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine.
+
+>[!NOTE]
+>The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers.
+
+**Network Requirements**: These virtual machines need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints over port 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../../virtual-machines/linux/login-using-aad.md) for more information.
+
+**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure AD portal or via the Azure Cloud Shell experience; a sketch of assigning them follows the list below. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md).
+
+* **Virtual machine administrator logon**: Users with this role assigned to them can log into an Azure virtual machine with administrator privileges.
+
+* **Virtual machine user logon**: Users with this role assigned to them can log into an Azure virtual machine with regular user privileges.
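+As a hedged example of assigning these built-in roles with Azure PowerShell (the user names, resource group, and VM name are placeholders):
+
+```powershell
+# Scope the assignment to a single VM; use a resource group scope instead to
+# cover every VM in that resource group.
+$vm = Get-AzVM -ResourceGroupName "app-rg" -Name "app-vm-01"
+New-AzRoleAssignment -SignInName "alice@contoso.com" `
+    -RoleDefinitionName "Virtual Machine Administrator Login" -Scope $vm.Id
+New-AzRoleAssignment -SignInName "bob@contoso.com" `
+    -RoleDefinitionName "Virtual Machine User Login" -Scope $vm.Id
+```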
+
+**Conditional Access**: A key benefit of using Azure AD for signing in to Azure virtual machines is the ability to enforce Conditional Access as part of the sign-in process. This lets organizations require conditions to be met before allowing access to the virtual machine, and lets them use multifactor authentication to provide strong authentication. For more information, see [Using Conditional Access](../devices/howto-vm-sign-in-azure-ad-windows.md).
+
+>[!NOTE]
+>Remote connection to virtual machines joined to Azure AD is only allowed from Windows 10 or Windows 11 PCs, or Cloud PCs, that are Azure AD joined or hybrid Azure AD joined to the same directory as the virtual machine.
+
+**Challenges**: The list below highlights key challenges with using this option for identity isolation.
+
+* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](https://docs.microsoft.com/azure/automation/update-management/overview) to manage patching and updates of these servers.
+
+* Not suitable for multi-tiered applications that have requirements to authenticate with on-premises mechanisms, such as Windows Integrated Authentication, across these servers or services. If this is a requirement for the organization, then it's recommended that you explore the standalone Active Directory Domain Services or the Azure Active Directory Domain Services scenarios described in this section.
+
+For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the virtual machines from the customer's corporate network. A jumpbox or management server should be created to allow a point from which these servers can be managed and administered.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-with-azure-ad-introduction.md)
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+
+* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Secure With Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-single-tenant.md
+
+ Title: Resource isolation in a single tenant to secure with Azure Active Directory
+description: Introduction to resource isolation in a single tenant in Azure Active Directory.
+Last updated : 7/5/2022
+# Resource isolation in a single tenant
+
+Many separation scenarios can be achieved within a single tenant. If possible, we recommend that you delegate administration to separate environments within a single tenant to provide the best productivity and collaboration experience for your organization.
+
+## Outcomes
+
+**Resource separation** - With Azure AD directory roles, security groups, conditional access policies, Azure resource groups, Azure management groups, administrative units (AUs), and other controls, you can restrict resource access to specific users, groups, and service principals. Resources can be managed by separate administrators, and have separate users, permissions, and access requirements.
+
+If a set of resources require unique tenant-wide settings, or there is minimal risk tolerance for unauthorized access by tenant members, or critical impact could be caused by configuration changes, you must achieve isolation in multiple tenants.
+
+**Configuration separation** - In some cases, resources such as applications have dependencies on tenant-wide configurations like authentication methods or [named locations](https://docs.microsoft.com/azure/active-directory/conditional-access/location-condition#named-locations). You should consider these dependencies when isolating resources. Global administrators can configure the resource settings and tenant-wide settings that affect resources.
+
+If a set of resources require unique tenant-wide settings, or the tenant's settings must be administered by a different entity, you must achieve isolation with multiple tenants.
+
+**Administrative separation** - With Azure AD delegated administration, you can segregate the administration of resources such as applications and APIs, users and groups, resource groups, and conditional access policies.
+
+Global administrators can discover and obtain full access to any trusting resources. You can set up auditing and alerts to know when an authenticated administrator changes a resource.
+
+You can also use administrative units (AU) in Azure AD to provide some level of administrative separation. Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](../roles/permissions-reference.md) role to regional support specialists, so they can manage users only in the region that they support.
+
+![Diagram that shows administrative units.](media/secure-with-azure-ad-single-tenant/azure-ad-administrative-units.png)
+
+Administrative units can be used to separate [user, group, and device objects](../roles/administrative-units.md). Assignments of those units can be managed by [dynamic membership rules](../roles/admin-units-members-dynamic.md).
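+To make the delegation example concrete, here is a hedged Microsoft Graph PowerShell sketch of assigning the Helpdesk Administrator role scoped to one administrative unit; the AU name and user are assumptions, and the directory role must already be activated in the tenant:
+
+```powershell
+# Assign the Helpdesk Administrator role to a support specialist, scoped to a single AU.
+Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All", "RoleManagement.ReadWrite.Directory"
+$au   = Get-MgDirectoryAdministrativeUnit -Filter "displayName eq 'West Region'"
+$role = Get-MgDirectoryRole -Filter "displayName eq 'Helpdesk Administrator'"
+$user = Get-MgUser -UserId "support-specialist@contoso.com"
+New-MgDirectoryAdministrativeUnitScopedRoleMember -AdministrativeUnitId $au.Id `
+    -RoleId $role.Id -RoleMemberInfo @{ Id = $user.Id }
+```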
+
+By using Privileged Identity Management (PIM), you can define who in your organization is the best person to approve requests for highly privileged roles. For example, admins who require Global Administrator access to make tenant-wide changes.
+
+>[!NOTE]
+>Using PIM requires an Azure AD P2 license per human.
+
+If you must ensure that global administrators are unable to manage a specific resource, you must isolate that resource in a separate tenant with separate global administrators. This can be especially important for backups; see the [multi-user authorization guidance](../../backup/multi-user-authorization.md) for examples.
+
+## Common usage
+
+One of the most common uses for multiple environments in a single tenant is to segregate production from non-production resources. Within a single tenant, development teams and application owners can create and manage a separate environment with test apps, test users and groups, and test policies for those objects; similarly, they can create non-production instances of Azure resources and trusted apps.
+
+The following diagram illustrates the non-production environments and the production environment.
+
+![Diagram that shows Azure AD tenant boundary.](media/secure-with-azure-ad-single-tenant/azure-ad-tenant-boundary.png)
+
+In this diagram, there are non-production Azure resources and non-production instances of Azure AD integrated applications, with equivalent non-production directory objects. In this example, the non-production resources in the directory are used for testing purposes.
+
+>[!NOTE]
+>You cannot have more than one Microsoft 365 environment in a single Azure AD tenant. However, you can have multiple Dynamics 365 environments in a single Azure AD tenant.
+
+Another scenario for isolation within a single tenant could be separation between locations or subsidiaries, or implementation of tiered administration (according to the "[Enterprise Access Model](/security/compass/privileged-access-access-model)").
+
+Azure RBAC role assignments allow scoped administration of Azure resources. Similarly, Azure AD allows granular management of Azure AD trusting applications through multiple capabilities such as conditional access, user and group filtering, administrative unit assignments and application assignments.
+
+If you must ensure full isolation (including staging of organization-level configuration) of Microsoft 365 services, you need to choose [resource isolation in multiple tenants](secure-with-azure-ad-multiple-tenants.md).
+
+## Scoped management in a single tenant
+
+### Scoped management for Azure resources
+
+Azure RBAC allows you to design an administration model with granular scopes and surface area. Consider the management hierarchy in the following example:
+
+>[!NOTE]
+>There are multiple ways to define the management hierarchy based on an organization's individual requirements, constraints, and goals. For more information, consult the Cloud Adoption Framework guidance on how to [Organize Azure Resources](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-setup-guide/organize-resources).
+
+![Diagram that shows resource isolation in a single tenant.](media/secure-with-azure-ad-single-tenant/azure-ad-resource-hierarchy.png)
+
+* **Management group** - You can assign roles to specific management groups so that they don't impact any other management groups. In the scenario above, the HR team can define an Azure Policy to audit the regions where resources are deployed across all HR subscriptions.
+
+* **Subscription** - You can assign roles to a specific subscription to prevent them from impacting any other subscriptions. In the example above, the HR team can assign the Reader role for the Benefits subscription without granting read access to any other HR subscription, or to a subscription from any other team.
+
+* **Resource group** - You can assign roles to specific resource groups so that they don't impact any other resource groups. In the example above, the Benefits engineering team can assign the Contributor role to the test lead so they can manage the test DB and the test web app, or to add more resources.
+
+* **Individual resources** - You can assign roles to specific resources so that they don't impact any other resources. In the example above, the Benefits engineering team can assign a data analyst the Cosmos DB Account Reader role just for the test instance of the Cosmos DB, without interfering with the test web app, or any production resource.
+
+For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) and [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
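+As a hedged illustration of assigning roles at each of these scopes with Azure PowerShell; the group IDs, subscription ID, and resource names are placeholders, not values from the original article:
+
+```powershell
+# Management group scope: affects every subscription and resource beneath it.
+New-AzRoleAssignment -ObjectId "<hr-admins-group-object-id>" -RoleDefinitionName "Reader" `
+    -Scope "/providers/Microsoft.Management/managementGroups/hr"
+
+# Subscription scope: limited to the Benefits subscription.
+New-AzRoleAssignment -ObjectId "<hr-team-group-object-id>" -RoleDefinitionName "Reader" `
+    -Scope "/subscriptions/<benefits-subscription-id>"
+
+# Resource group scope: limited to the test resource group.
+New-AzRoleAssignment -SignInName "test-lead@contoso.com" -RoleDefinitionName "Contributor" `
+    -ResourceGroupName "benefits-test-rg"
+
+# Individual resource scope: just the test Cosmos DB account.
+New-AzRoleAssignment -SignInName "data-analyst@contoso.com" `
+    -RoleDefinitionName "Cosmos DB Account Reader Role" `
+    -Scope "/subscriptions/<benefits-subscription-id>/resourceGroups/benefits-test-rg/providers/Microsoft.DocumentDB/databaseAccounts/benefits-test-cosmos"
+```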
+
+This is a hierarchical structure, so the higher up in the hierarchy, the more scope, visibility, and impact there is to lower levels. Top-level scopes affect all Azure resources in the Azure AD tenant boundary. This also means that permissions can be applied at multiple levels. The risk this introduces is that assigning roles higher up the hierarchy could provide more access lower down the scope than intended. [Microsoft Entra](/security/business/identity-access/microsoft-entra-permissions-management?rtc=1) (formerly CloudKnox) is a Microsoft product that provides visibility and remediation to help reduce the risk. A few details are as follows:
+
+* The root management group defines Azure Policies and RBAC role assignments that will be applied to all subscriptions and resources.
+
+* Global Administrators can [elevate access](https://aka.ms/AzureADSecuredAzure/12a) to all subscriptions and management groups.
+
+Both top-level scopes should be strictly monitored. It is important to plan for other dimensions of resource isolation such as networking. For general guidance on Azure networking, see [Azure best practices for network security](../../security/fundamentals/network-best-practices.md). Infrastructure as a Service (IaaS) workloads have special scenarios where both identity and resource isolation need to be part of the overall design and strategy.
+
+Consider isolating sensitive or test resources according to the [Azure landing zone conceptual architecture](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/landing-zone/). For example, the Identity subscription should be assigned to a separate management group, and all subscriptions for development purposes could be separated into a "Sandbox" management group. More details can be found in the [Enterprise-Scale documentation](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/enterprise-scale/faq). Separation for testing purposes within a single tenant is also considered in the [management group hierarchy of the reference architecture](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/enterprise-scale/testing-approach).
+
+### Scoped management for Azure AD trusting applications
+
+The pattern to scope management of Azure AD trusting applications is outlined in the following section.
+
+Azure AD supports configuring multiple instances of custom and SaaS apps, but not most Microsoft services, against the same directory with [independent user assignments](../manage-apps/assign-user-or-group-access-portal.md). The above example contains both a production and a test version of the travel app. You can deploy pre-production versions against the corporate tenant to achieve app-specific configuration and policy separation that enables workload owners to perform testing with their corporate credentials. Non-production directory objects such as test users and test groups are associated to the non-production application with separate [ownership](https://aka.ms/AzureADSecuredAzure/14a) of those objects.
+
+There are tenant-wide aspects that affect all trusting applications in the Azure AD tenant boundary including:
+
+* Global Administrators can manage all tenant-wide settings.
+
+* Other [directory roles](https://aka.ms/AzureADSecuredAzure/14b), such as User Administrator and Conditional Access Administrator, can manage tenant-wide configuration within the scope of the role.
+
+Configuration settings such as allowed authentication methods, hybrid configurations, B2B collaboration domain allow-listing, and named locations are tenant-wide.
+
+>[!Note]
+>Microsoft Graph API permissions and consent permissions cannot be scoped to a group or to members of administrative units. Those permissions are assigned at the directory level; only resource-specific consent allows scoping at the resource level (currently limited to [Microsoft Teams Chat permissions](/microsoftteams/platform/graph-api/rsc/resource-specific-consent)).
+
+>[!IMPORTANT]
+>The lifecycle of Microsoft SaaS services such as Office 365, Microsoft Dynamics, and Microsoft Exchange are bound to the Azure AD tenant. As a result, multiple instances of these services necessarily require multiple Azure AD tenants. Check the documentation for individual services to learn more about specific management scoping capabilities.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-with-azure-ad-introduction.md)
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+
+* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Additionally, authentication session management used to only apply to the First
In June 2020 we have added the following 29 new applications in our App gallery with Federation support:
-[Shopify Plus](../saas-apps/shopify-plus-tutorial.md), [Ekarda](../saas-apps/ekarda-tutorial.md), [MailGates](../saas-apps/mailgates-tutorial.md), [BullseyeTDP](../saas-apps/bullseyetdp-tutorial.md), [Raketa](../saas-apps/raketa-tutorial.md), [Segment](../saas-apps/segment-tutorial.md), [Ai Auditor](https://www.mindbridge.ai/products/ai-auditor/), [Pobuca Connect](https://app.pobu.c), [MyCompliance Cloud](https://cloud.metacompliance.com/), [Smallstep SSH](https://smallstep.com/sso-ssh/)
+[Shopify Plus](../saas-apps/shopify-plus-tutorial.md), [Ekarda](../saas-apps/ekarda-tutorial.md), [MailGates](../saas-apps/mailgates-tutorial.md), [BullseyeTDP](../saas-apps/bullseyetdp-tutorial.md), [Raketa](../saas-apps/raketa-tutorial.md), [Segment](../saas-apps/segment-tutorial.md), [Ai Auditor](https://www.mindbridge.ai/products/ai-auditor/), [Pobuca Connect](https://app.pobu.c), [MyCompliance Cloud](https://cloud.metacompliance.com/), [Smallstep SSH](https://smallstep.com/sso-ssh/)
You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial. For listing your application in the Azure AD app gallery, please read the details here: https://aka.ms/AzureADAppRequest.
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
na Previously updated : 09/30/2020 Last updated : 07/11/2022
A resource directory has one or more resources to share. In this step, you creat
| **Admin1** | Global administrator<br/>-or-<br/>User administrator |
| **Requestor1** | User |
-1. Create an Azure AD security group named **Marketing resources** with a membership type of **Assigned**.
+4. Create an Azure AD security group named **Marketing resources** with a membership type of **Assigned**.
This group will be the target resource for entitlement management. The group should be empty of members to start.
In this step, you remove the changes you made and delete the **Marketing Campaig
1. Delete the **Marketing resources** group.
+## Set up group writeback in entitlement management
+To set up group writeback for Microsoft 365 groups in access packages, you must complete the following prerequisites:
+- Set up group writeback in the Azure Active Directory admin center.
+- An organizational unit (OU) that will be used to set up group writeback in the Azure AD Connect configuration.
+- Complete the [group writeback enablement steps](../hybrid/how-to-connect-group-writeback-v2.md#enable-group-writeback-using-azure-ad-connect) for Azure AD Connect.
+
+Using group writeback, you can now sync Microsoft 365 groups that are part of access packages to on-premises Active Directory. To do this, follow these steps:
+
+1. Create an Azure Active Directory M365 group.
+
+1. Set the group to be written back to on-premises Active Directory. For instructions, see [Group writeback in the Azure Active Directory admin center](../enterprise-users/groups-write-back-portal.md).
+
+1. Add the group to an access package as a resource role. See [Create a new access package](entitlement-management-access-package-create.md#resource-roles) for guidance.
+
+1. Assign the user to the access package. See [View, add, and remove assignments for an access package](entitlement-management-access-package-assignments.md#directly-assign-a-user) for instructions to directly assign a user.
+
+1. After you have assigned a user to the access package, confirm that the user is now a member of the on-premises group once the Azure AD Connect sync cycle completes. You can either:
+ 1. View the **member** property of the group in the on-premises OU, or
+ 1. Review the **memberOf** attribute on the user object.
+
+> [!NOTE]
+> Azure AD Connect's default sync cycle schedule is every 30 minutes. You may need to wait until the next cycle occurs to see results on-premises or choose to run the sync cycle manually to see results sooner.
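+If you'd rather not wait for the scheduled cycle, here is a hedged sketch of starting a delta sync manually from the Azure AD Connect server:
+
+```powershell
+# Run on the Azure AD Connect server; the ADSync module ships with Azure AD Connect.
+Import-Module ADSync
+Start-ADSyncSyncCycle -PolicyType Delta
+```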
+ ## Next steps Advance to the next article to learn about common scenario steps in entitlement management.
active-directory Entitlement Management Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md
Only users from configured connected organizations can request access packages t
> [!NOTE] > As part of rolling out this new feature, all connected organizations created before 09/09/20 were considered **configured**. If you had an access package that allowed users from any organization to sign up, you should review your list of connected organizations that were created before that date to ensure none are miscategorized as **configured**. An admin can update the **State** property as appropriate. For guidance, see [Update a connected organization](#update-a-connected-organization).
+> [!NOTE]
+> In some cases, a user might request an access package by using a personal account that has the same domain as the connected organization, resulting in a new suggested connected organization. In this case, make sure the user uses their organization account instead; the portal will then identify the user as coming from the configured connected organization's Azure AD tenant.
+ ## Next steps
active-directory Identity Governance Applications Existing Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-existing-users.md
You may also be using [Azure AD entitlement management](entitlement-management-o
- You'll need to have an appropriate administrative role. If this is the first time you're performing these steps, you'll need the `Global administrator` role to authorize the use of Microsoft Graph PowerShell in your tenant. - There needs to be a service principal for your application in your tenant.
- - If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure) through the section to Download, install, and configure the Azure AD Connect Provisioning Agent Package.
- - If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) through the section to Download, install and configure the Azure AD Connect Provisioning Agent Package.
+ - If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](../app-provisioning/on-premises-ldap-connector-configure.md) through the section to Download, install, and configure the Azure AD Connect Provisioning Agent Package.
+ - If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](../app-provisioning/on-premises-sql-connector-configure.md) through the section to Download, install and configure the Azure AD Connect Provisioning Agent Package.
## Collect existing users from an application
First, get a list of the users from the tables. Most databases provide a way to
### Collect existing users from an application's database table using PowerShell
-This section applies to applications that use another SQL database as its underlying data store, where you're using the [ECMA Connector Host](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure) to provision users into that application. If you've not yet configured the provisioning agent, use that guide to create the DSN connection file you'll use in this section.
+This section applies to applications that use another SQL database as its underlying data store, where you're using the [ECMA Connector Host](../app-provisioning/on-premises-sql-connector-configure.md) to provision users into that application. If you've not yet configured the provisioning agent, use that guide to create the DSN connection file you'll use in this section.
1. Log in to the system where the provisioning agent is or will be installed. 1. Launch PowerShell.
This section applies to applications that use another SQL database as its underl
## Confirm Azure AD has users for each user from the application
-Now that you have a list of all the users obtained from the application, you'll next match those users from the application's data store with users in Azure AD. Before proceeding, review the section on [matching users in the source and target systems](/azure/active-directory/app-provisioning/customize-application-attributes#matching-users-in-the-source-and-target--systems), as you'll configure Azure AD provisioning with equivalent mappings afterwards. That step will allow Azure AD provisioning to query the application's data store with the same matching rules.
+Now that you have a list of all the users obtained from the application, you'll next match those users from the application's data store with users in Azure AD. Before proceeding, review the section on [matching users in the source and target systems](../app-provisioning/customize-application-attributes.md#matching-users-in-the-source-and-target--systems), as you'll configure Azure AD provisioning with equivalent mappings afterwards. That step will allow Azure AD provisioning to query the application's data store with the same matching rules.
### Retrieve the IDs of the users in Azure AD
-This section shows how to interact with Azure AD using [Microsoft Graph PowerShell](https://www.powershellgallery.com/packages/Microsoft.Graph) cmdlets. The first time your organization use these cmdlets for this scenario, you'll need to be in a Global Administrator role to consent Microsoft Graph PowerShell to be used for these scenarios in your tenant. Subsequent interactions can use a lower privileged role, such as User Administrator role if you anticipate creating new users, or the Application Administrator or [Identity Governance Administrator](/azure/active-directory/roles/permissions-reference#identity-governance-administrator) role, if you're just managing application role assignments.
+This section shows how to interact with Azure AD using [Microsoft Graph PowerShell](https://www.powershellgallery.com/packages/Microsoft.Graph) cmdlets. The first time your organization uses these cmdlets for this scenario, you'll need to be in a Global Administrator role to consent Microsoft Graph PowerShell to be used for these scenarios in your tenant. Subsequent interactions can use a lower-privileged role, such as the User Administrator role if you anticipate creating new users, or the Application Administrator or [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator) role if you're just managing application role assignments.
1. Launch PowerShell. 1. If you don't have the [Microsoft Graph PowerShell modules](https://www.powershellgallery.com/packages/Microsoft.Graph) already installed, install the `Microsoft.Graph.Users` module and others using
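Because the snippet above is truncated, the exact commands are an assumption; a minimal sketch of installing the module, signing in, and looking up one user might look like this:

```powershell
# Install the Users module from the PowerShell Gallery, then sign in with
# permission to read users.
Install-Module Microsoft.Graph.Users -Scope CurrentUser
Connect-MgGraph -Scopes "User.Read.All"

# Example: look up one Azure AD user by the attribute you'll match on.
Get-MgUser -Filter "mail eq 'alice@contoso.com'" | Select-Object Id, UserPrincipalName
```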
The previous steps have confirmed that all the users in the application's data s
## Configure application provisioning
-Before creating new assignments, you'll want to configure [Azure AD provisioning](/azure/active-directory/app-provisioning/user-provisioning) of Azure AD users to the application. Configuring provisioning will enable Azure AD to match up the users in Azure AD with the application role assignments to the users already in the application's data store.
+Before creating new assignments, you'll want to configure [Azure AD provisioning](../app-provisioning/user-provisioning.md) of Azure AD users to the application. Configuring provisioning will enable Azure AD to match up the users in Azure AD with the application role assignments to the users already in the application's data store.
1. Ensure that the application is configured to require users to have application role assignments, so that only selected users will be provisioned to the application.
-1. If provisioning hasn't been configured for the application, then configure, but do not start, [provisioning](/azure/active-directory/app-provisioning/user-provisioning).
+1. If provisioning hasn't been configured for the application, then configure, but do not start, [provisioning](../app-provisioning/user-provisioning.md).
- * If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](/azure/active-directory/app-provisioning/on-premises-ldap-connector-configure).
- * If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](/azure/active-directory/app-provisioning/on-premises-sql-connector-configure).
+ * If the application uses an LDAP directory, follow the guide for [configuring Azure AD to provision users into LDAP directories](../app-provisioning/on-premises-ldap-connector-configure.md).
+ * If the application uses a SQL database, follow the guide for [configuring Azure AD to provision users into SQL based applications](../app-provisioning/on-premises-sql-connector-configure.md).
-1. Check the [attribute mappings](/azure/active-directory/app-provisioning/customize-application-attributes) for provisioning to that application. Make sure that *Match objects using this attribute* is set for the Azure AD attribute and column that you used in the sections above for matching. If these rules aren't using the same attributes as you used earlier, then when application role assignments are created, Azure AD may be unable to locate existing users in the applications' data store, and inadvertently create duplicate users.
+1. Check the [attribute mappings](../app-provisioning/customize-application-attributes.md) for provisioning to that application. Make sure that *Match objects using this attribute* is set for the Azure AD attribute and column that you used in the sections above for matching. If these rules aren't using the same attributes as you used earlier, then when application role assignments are created, Azure AD may be unable to locate existing users in the applications' data store, and inadvertently create duplicate users.
1. Check that there's an attribute mapping for **isSoftDeleted** to an attribute of the application. When a user is unassigned from the application, soft-deleted in Azure AD, or blocked from sign-in, then Azure AD provisioning will update the attribute mapped to **isSoftDeleted**. If no attribute is mapped, then users who later are unassigned from the application role will continue to exist in the application's data store.
-1. If provisioning has already been enabled for the application, check that the application provisioning is not in [quarantine](/azure/active-directory/app-provisioning/application-provisioning-quarantine-status). You'll need to resolve any issues that are causing the quarantine prior to proceeding.
+1. If provisioning has already been enabled for the application, check that the application provisioning is not in [quarantine](../app-provisioning/application-provisioning-quarantine-status.md). You'll need to resolve any issues that are causing the quarantine prior to proceeding.
## Create app role assignments in Azure AD
When an application role assignment is created in Azure AD for a user to applica
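A hedged Microsoft Graph PowerShell sketch of creating one app role assignment; the app display name, user, and use of the default (zero-GUID) app role are placeholder assumptions:

```powershell
# Resolve the application's service principal and the user, then assign the user
# to the app's default role.
$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Sample App'"
$user = Get-MgUser -UserId "alice@contoso.com"
New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id `
    -PrincipalId $user.Id -ResourceId $sp.Id `
    -AppRoleId "00000000-0000-0000-0000-000000000000"
```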
If any users aren't assigned to application roles, check the Azure AD audit log for an error from a previous step. 1. If the **Provisioning Status** of the application is **Off**, turn the **Provisioning Status** to **On**.
-1. Based on the guidance for [how long will it take to provision users](/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user#how-long-will-it-take-to-provision-users), wait for Azure AD provisioning to match the existing users of the application to those users just assigned.
-1. Monitor the [provisioning status](/azure/active-directory/app-provisioning/check-status-user-account-provisioning) to ensure that all users were matched successfully. If you don't see users being provisioned, check the troubleshooting guide for [no users being provisioned](/azure/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned). If you see an error in the provisioning status and are provisioning to an on-premises application, then check the [troubleshooting guide for on-premises application provisioning](/azure/active-directory/app-provisioning/on-premises-ecma-troubleshoot).
+1. Based on the guidance for [how long will it take to provision users](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md#how-long-will-it-take-to-provision-users), wait for Azure AD provisioning to match the existing users of the application to those users just assigned.
+1. Monitor the [provisioning status](../app-provisioning/check-status-user-account-provisioning.md) to ensure that all users were matched successfully. If you don't see users being provisioned, check the troubleshooting guide for [no users being provisioned](../app-provisioning/application-provisioning-config-problem-no-users-provisioned.md). If you see an error in the provisioning status and are provisioning to an on-premises application, then check the [troubleshooting guide for on-premises application provisioning](../app-provisioning/on-premises-ecma-troubleshoot.md).
Once the users have been matched by the Azure AD provisioning service, based on the application role assignments you've created, then subsequent changes will be sent to the application. ## Next steps
+ - [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md)
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
The following prerequisites must be met in order to enable group writeback.
- Azure AD Premium license
- Azure AD Connect version 2021 December release or later.
- Enable Azure AD Connect group writeback
-- **Optional** - On-Prem Exchange Server 2016 CU15 or later. Only needed for configuring cloud groups with Exchange hybrid. See [Configure Microsoft 365 Groups with on-premises Exchange hybrid](https://docs.microsoft.com/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites) for more information. If you don't have Exchange hybrid and/or an on-premises Exchange Server, the mail components of a group won't be written back.
+- **Optional** - On-Prem Exchange Server 2016 CU15 or later. Only needed for configuring cloud groups with Exchange hybrid. See [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites) for more information. If you don't have Exchange hybrid and/or an on-premises Exchange Server, the mail components of a group won't be written back.
The latest version of Group Writeback is enabled tenant-wide and not per Azure AD Connect server. The default values for writeback settings on cloud groups are backward compatible.
Limitations and known issues specific to Group Writeback:
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t
- We added CertificateUserIds attribute to AAD Connector static schema. - The AAD Connect wizard will now abort if write event logs permission is missing. - We updated the AADConnect health endpoints to support the US government clouds.
+ - We added new cmdlets "Get-ADSyncToolsDuplicateUsersSourceAnchor" and "Set-ADSyncToolsDuplicateUsersSourceAnchor" to fix bulk "source anchor has changed" errors. When a new forest is added to AADConnect with duplicate user objects, the objects run into bulk "source anchor has changed" errors. This happens due to the mismatch between msDsConsistencyGuid & ImmutableId. More information about this module and the new cmdlets can be found in [this article](./reference-connect-adsynctools.md).
### Bug fixes - We fixed a bug that prevented localDB upgrades in some Locales.
This is a bug fix release. There are no functional changes in this release.
## Next steps
-Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Cloudflare Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-azure-ad-integration.md
To get started, you need:
- An Azure AD tenant linked to your Azure AD subscription
- - See, [Quickstart: Create a new tenant in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant).
+ - See, [Quickstart: Create a new tenant in Azure Active Directory](../fundamentals/active-directory-access-create-new-tenant.md).
- A Cloudflare Zero Trust account
Use the instructions in the following three sections to register Cloudflare with
- [Integrate single sign-on (SSO) with Cloudflare](https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/) -- [Cloudflare integration with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/partner-cloudflare)
+- [Cloudflare integration with Azure AD B2C](../../active-directory-b2c/partner-cloudflare.md)
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
Last updated : 6/2/2022

# Submit a request to publish your application in Azure Active Directory application gallery
active-directory Overview For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md
dr.Close();
#### [Java](#tab/java)
-If you use [Azure Spring Apps](/azure/spring-cloud/), you can connect to Azure SQL Database with a managed identity without needing to make any changes to your code.
+If you use [Azure Spring Apps](../../spring-cloud/index.yml), you can connect to Azure SQL Database with a managed identity without needing to make any changes to your code.
Open the `src/main/resources/application.properties` file, and add `Authentication=ActiveDirectoryMSI;` at the end of the following line. Be sure to use the correct value for `$AZ_DATABASE_NAME` variable.
Open the `src/main/resources/application.properties` file, and add `Authenticati
spring.datasource.url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;Authentication=ActiveDirectoryMSI; ```
-Read more about how to [use a managed identity to connect Azure SQL Database to an Azure Spring Apps app](/azure/spring-cloud/connect-managed-identity-to-azure-sql/).
+Read more about how to [use a managed identity to connect Azure SQL Database to an Azure Spring Apps app](../../spring-cloud/connect-managed-identity-to-azure-sql.md).
Tokens should be treated like credentials. Don't expose them to users or other s
* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md)
-* [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
+* [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 06/30/2022 Last updated : 07/15/2022
For more information, see [Prerequisites to use PowerShell or Graph Explorer](pr
## Roles that can be assigned with administrative unit scope
-The following Azure AD roles can be assigned with administrative unit scope:
+The following Azure AD roles can be assigned with administrative unit scope. Additionally, any [custom role](custom-create.md) can be assigned with administrative unit scope as long as the custom role's permissions include at least one permission relevant to users, groups, or devices.
| Role | Description | | --| -- |
The following Azure AD roles can be assigned with administrative unit scope:
| [Teams Administrator](permissions-reference.md#teams-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. Can manage team members in the Microsoft 365 admin center for teams associated with groups in the assigned administrative unit only. Cannot use the Teams admin center. | | [Teams Devices Administrator](permissions-reference.md#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. | | [User Administrator](permissions-reference.md#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins within the assigned administrative unit only. |
+| [&lt;Custom role&gt;](custom-create.md) | Can perform actions that apply to users, groups, or devices, according to the definition of the custom role. |
Certain role permissions apply only to non-administrator users when assigned with the scope of an administrative unit. In other words, administrative unit scoped [Helpdesk Administrators](permissions-reference.md#helpdesk-administrator) can reset passwords for users in the administrative unit only if those users do not have administrator roles. The following list of permissions are restricted when the target of an action is another administrator:
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md
Previously updated : 03/16/2022 Last updated : 07/14/2022
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * A DocuSign subscription that's single sign-on (SSO) enabled.
+* Control over your domain DNS. This is needed to claim the domain in DocuSign.
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
In this section, you'll grant B.Simon access to DocuSign so that this user can u
3. If you want to set up DocuSign manually, open a new web browser window and sign in to your DocuSign company site as an administrator.
-4. In the upper-right corner of the page, select the profile logo, and then select **Go to Admin**.
+4. In the upper-left corner of the page, select the app launcher (9 dots), and then select **Admin**.
- ![Go to Admin under Profile][51]
+ ![Screenshot of Go to Admin under Profile.](media/docusign-tutorial/docusign-admin.png)
5. On your domain solutions page, select **Domains**.
- ![Domain Solutions/Domains][50]
+ ![Screenshot of Select_Domains.](media/docusign-tutorial/domains.png)
+ 6. In the **Domains** section, select **CLAIM DOMAIN**.
- ![Claim Domain option][52]
+ ![Screenshot of Claim_domain.](media/docusign-tutorial/claim-domain.png)
+ 7. In the **Claim a Domain** dialog box, in the **Domain Name** box, type your company domain, and then select **CLAIM**. Make sure you verify the domain and that its status is active.
- ![Claim a Domain/Domain Name dialog][53]
+ ![Screenshot of Claim a Domain/Domain Name dialog.](media/docusign-tutorial/claim-a-domain.png)
+
+8. In the **Domains** section, select **Get Validation Token** for the new domain added to the claim list.
+
+ ![Screenshot of pending_Identity_provider.](media/docusign-tutorial/pending-Identity-provider.png)
+
+9. Copy the **TXT Token**.
+
+ ![Screenshot of TXT_token.](media/docusign-tutorial/token.png)
+
+10. Configure your DNS provider with the **TXT Token** by following these steps (an Azure DNS example follows the list):
-8. On the domain solutions page, select **Identity Providers**.
+ a. Navigate to your domain's DNS record management page.
+ b. Add a new TXT record.
+ c. Name: @ or *
+ d. Text: paste the **TXT Token** value, which you copied from the earlier step.
+ e. TTL: Default or 1 hour / 3600 seconds
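
    For example, if your zone happens to be hosted in Azure DNS, a minimal sketch of the same TXT record with the Azure CLI might look like the following (the resource group, zone name, and token value are placeholders):

    ```azurecli
    # Add a TXT record at the zone apex (@) carrying the DocuSign validation token
    az network dns record-set txt add-record \
      --resource-group MyResourceGroup \
      --zone-name contoso.com \
      --record-set-name "@" \
      --value "<TXT token copied from DocuSign>"
    ```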
++
+11. On the domain solutions page, select **Identity Providers**.
- ![Identity Providers option][54]
+ ![Screenshot of Identity Providers option.](media/docusign-tutorial/identity-providers.png)
-9. In the **Identity Providers** section, select **ADD IDENTITY PROVIDER**.
+12. In the **Identity Providers** section, select **ADD IDENTITY PROVIDER**.
- ![Add Identity Provider option][55]
+ ![Screenshot of Add Identity Provider option.](media/docusign-tutorial/add-identity-provider-option.png)
-10. On the **Identity Provider Settings** page, follow these steps:
- ![Identity Provider Settings fields][56]
+13. On the **Identity Provider Settings** page, follow these steps:
- a. In the **Name** box, type a unique name for your configuration. Don't use spaces.
+ a. In the **Custom Name** box, type a unique name for your configuration. Don't use spaces.
+
+ ![Screenshot of name_Identity_provider.](media/docusign-tutorial/add-identity-providers.png)
 b. In the **Identity Provider Issuer** box, paste the **Azure AD Identifier** value, which you copied from the Azure portal.
+ ![Screenshot of urls_Identity_provider.](media/docusign-tutorial/idp-urls.png)
+
 c. In the **Identity Provider Login URL** box, paste the **Login URL** value, which you copied from Azure portal.

 d. In the **Identity Provider Logout URL** box, paste the value of **Logout URL**, which you copied from Azure portal.
+
+ ![Screenshot of settings_Identity_provider.](media/docusign-tutorial/settings-Identity-provider.png)
+ e. For **Send AuthN request by**, select **POST**.
In this section, you'll grant B.Simon access to DocuSign so that this user can u
g. In the **Custom Attribute Mapping** section, select **ADD NEW MAPPING**.
- ![Custom Attribute Mapping UI][62]
+ ![Screenshot of Custom Attribute Mapping UI.](media/docusign-tutorial/add-new-mapping.png)
h. Choose the field you want to map to the Azure AD claim. In this example, the **emailaddress** claim is mapped with the value of `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress`. That's the default claim name from Azure AD for the email claim. Select **SAVE**.
- ![Custom Attribute Mapping fields][57]
+ ![Screenshot of Custom Attribute Mapping fields.](media/docusign-tutorial/email-address.png)
> [!NOTE]
- > Use the appropriate **User identifier** to map the user from Azure AD to DocuSign user mapping. Select the proper field, and enter the appropriate value based on your organization settings.
+ > Use the appropriate **User identifier** to map the Azure AD user to the DocuSign user. Select the proper field, and enter the appropriate value based on your organization settings. The Custom Attribute Mapping setting is not mandatory.
i. In the **Identity Provider Certificates** section, select **ADD CERTIFICATE**, upload the certificate you downloaded from Azure AD portal, and select **SAVE**.
- ![Identity Provider Certificates/Add Certificate][58]
+ ![Screenshot of Identity Provider Certificates/Add Certificate.](media/docusign-tutorial/certificates.png)
j. In the **Identity Providers** section, select **ACTIONS**, and then select **Endpoints**.
- ![Identity Providers/Endpoints][59]
+ ![Screenshot of Identity Providers/Endpoints.](media/docusign-tutorial/identity-providers-endpoints.png)
k. In the **View SAML 2.0 Endpoints** section of the DocuSign admin portal, follow these steps:
- ![View SAML 2.0 Endpoints][60]
+ ![Screenshot of View SAML 2.0 Endpoints.](media/docusign-tutorial/saml-endpoints.png)
1. Copy the **Service Provider Issuer URL**, and then paste it into the **Identifier** box in the **Basic SAML Configuration** section in the Azure portal.
In this section, you test your Azure AD single sign-on configuration with follow
* You can use Microsoft My Apps. When you click the DocuSign tile in My Apps, you should be automatically signed in to the DocuSign instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-## Next Steps
+## Next steps
Once you configure DocuSign, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
-<!--Image references-->
-
-[50]: ./media/docusign-tutorial/tutorial-docusign-18.png
-[51]: ./media/docusign-tutorial/tutorial-docusign-21.png
-[52]: ./media/docusign-tutorial/tutorial-docusign-22.png
-[53]: ./media/docusign-tutorial/tutorial-docusign-23.png
-[54]: ./media/docusign-tutorial/tutorial-docusign-19.png
-[55]: ./media/docusign-tutorial/tutorial-docusign-20.png
-[56]: ./media/docusign-tutorial/request.png
-[57]: ./media/docusign-tutorial/tutorial-docusign-25.png
-[58]: ./media/docusign-tutorial/tutorial-docusign-26.png
-[59]: ./media/docusign-tutorial/tutorial-docusign-27.png
-[60]: ./media/docusign-tutorial/tutorial-docusign-28.png
-[61]: ./media/docusign-tutorial/tutorial-docusign-29.png
-[62]: ./media/docusign-tutorial/tutorial-docusign-30.png
active-directory Fortiweb Web Application Firewall Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortiweb-web-application-firewall-tutorial.md
Previously updated : 03/11/2020 Last updated : 07/14/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **FortiWeb Web Application Firewall** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pen icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Perform the following steps in the following page.
- ![SAML server page](./media/fortiweb-web-application-firewall-tutorial/configure-sso.png)
+ ![Screenshot for SAML server page](./media/fortiweb-web-application-firewall-tutorial/configure-sso.png)
a. In the left-hand menu, click **User**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
e. In the **Name** field, provide the value for `<fwName>` used in the Configure Azure AD section.
- f. In the **Entity ID** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+ f. In the **Entity ID** textbox, enter the **Identifier (Entity ID)** value, like `https://www.<CUSTOMER_DOMAIN>.com/samlsp`.
g. Next to **Metadata**, click **Choose File** and select the **Federation Metadata XML** file which you have downloaded from the Azure portal.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Sign-in using the administrator credentials provided during the FortiWeb VM deployment. 1. Perform the following steps in the following page.
- ![Site Publishing Rule](./media/fortiweb-web-application-firewall-tutorial/site-publish-rule.png)
+ ![Screenshot for Site Publishing Rule](./media/fortiweb-web-application-firewall-tutorial/site-publish-rule.png)
a. In the left-hand menu, click **Application Delivery**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Perform the following steps in the following page.
- ![Site Publishing Policy](./media/fortiweb-web-application-firewall-tutorial/site-publish-policy.png)
+ ![Screenshot for Site Publishing Policy](./media/fortiweb-web-application-firewall-tutorial/site-publish-policy.png)
a. In the left-hand menu, click **Application Delivery**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
8. Next to **Site Publish**, select the site publishing policy you created earlier. 9. Click **OK**.
- ![site publish](./media/fortiweb-web-application-firewall-tutorial/web-protection.png)
+ ![Screenshot for site publish](./media/fortiweb-web-application-firewall-tutorial/web-protection.png)
10. In the left-hand menu, click **Policy**. 11. Under **Policy**, click **Server Policy**.
active-directory Ziflow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ziflow-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Ziflow | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Ziflow'
description: Learn how to configure single sign-on between Azure Active Directory and Ziflow.
Previously updated : 06/15/2021 Last updated : 07/14/2022
-# Tutorial: Azure Active Directory integration with Ziflow
+# Tutorial: Azure AD SSO integration with Ziflow
In this tutorial, you'll learn how to integrate Ziflow with Azure Active Directory (Azure AD). When you integrate Ziflow with Azure AD, you can:
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Sign on URL** text box, type a URL using the following pattern: `https://ziflow-production.auth0.com/login/callback?connection=<UNIQUE_ID>`
+ c. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://ziflow-production.auth0.com/login/callback?connection=<UNIQUE_ID>`
+ > [!NOTE]
- > The preceding values are not real. You will update the unique ID value in the Identifier and Sign on URL with actual value, which is explained later in the tutorial.
+ > The preceding values are not real. You will update the unique ID value in the Identifier, Sign on URL, and Reply URL with the actual value, as explained later in the tutorial.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
2. Click on Avatar in the top right corner, and then click **Manage account**.
- ![Ziflow Configuration Manage](./media/ziflow-tutorial/manage-account.png)
+ ![Screenshot for Ziflow Configuration Manage](./media/ziflow-tutorial/manage-account.png)
3. In the top left, click **Single Sign-On**.
- ![Ziflow Configuration Sign](./media/ziflow-tutorial/configuration.png)
+ ![Screenshot for Ziflow Configuration Sign](./media/ziflow-tutorial/configuration.png)
4. On the **Single Sign-On** page, perform the following steps:
- ![Ziflow Configuration Single](./media/ziflow-tutorial/page.png)
+ ![Screenshot for Ziflow Configuration Single](./media/ziflow-tutorial/page.png)
a. Select **Type** as **SAML2.0**.
To provision a user account, perform the following steps:
2. Navigate to **People** on the top.
- ![Ziflow Configuration people](./media/ziflow-tutorial/people.png)
+ ![Screenshot for Ziflow Configuration people](./media/ziflow-tutorial/people.png)
3. Click **Add** and then click **Add user**.
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The following diagram illustrates the Azure AD Verifiable Credentials architectu
[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. The Verifiable Credentials service stores public and private keys in Azure Key Vault. These keys are used to sign and verify credentials.
-If you don't have an Azure Key Vault instance available, follow [these steps](/azure/key-vault/general/quick-create-portal) to create a key vault using the Azure portal.
+If you don't have an Azure Key Vault instance available, follow [these steps](../../key-vault/general/quick-create-portal.md) to create a key vault using the Azure portal.
>[!NOTE]
>By default, the account that creates a vault is the only one with access. The Verifiable Credentials service needs access to the key vault. You must configure the key vault with an access policy that allows the account used during configuration to create and delete keys. The account used during configuration also requires permission to sign to create the domain binding for Verifiable Credentials. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
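
As a hedged sketch, granting those key permissions with the Azure CLI might look like the following (the vault name and user principal name are placeholders):

```azurecli
# Allow the configuration account to create and delete keys, and to sign
az keyvault set-policy \
  --name <vault-name> \
  --upn <user@contoso.com> \
  --key-permissions get create delete sign
```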
Once that you have successfully completed the verification steps, you are ready
## Next steps

- [Learn how to issue Azure AD Verifiable Credentials from a web application](verifiable-credentials-configure-issuer.md).
-- [Learn how to verify Azure AD Verifiable Credentials](verifiable-credentials-configure-verifier.md).
+- [Learn how to verify Azure AD Verifiable Credentials](verifiable-credentials-configure-verifier.md).
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Like the temporary disk, an ephemeral OS disk is included in the price of the vi
When using ephemeral OS, the OS disk must fit in the VM cache. The sizes for VM cache are available in the [Azure documentation](../virtual-machines/dv3-dsv3-series.md) in parentheses next to IO throughput ("cache size in GiB").
-Using the AKS default VM size [Standard_DS2_v2](/azure/virtual-machines/dv2-dsv2-series#dsv2-series) with the default OS disk size of 100GB as an example, this VM size supports ephemeral OS but only has 86GB of cache size. This configuration would default to managed disks if the user does not specify explicitly. If a user explicitly requested ephemeral OS, they would receive a validation error.
+Using the AKS default VM size [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) with the default OS disk size of 100GB as an example, this VM size supports ephemeral OS but only has 86GB of cache size. This configuration would default to managed disks if the user does not specify explicitly. If a user explicitly requested ephemeral OS, they would receive a validation error.
-If a user requests the same [Standard_DS2_v2](/azure/virtual-machines/dv2-dsv2-series#dsv2-series) with a 60GB OS disk, this configuration would default to ephemeral OS: the requested size of 60GB is smaller than the maximum cache size of 86GB.
+If a user requests the same [Standard_DS2_v2](../virtual-machines/dv2-dsv2-series.md#dsv2-series) with a 60GB OS disk, this configuration would default to ephemeral OS: the requested size of 60GB is smaller than the maximum cache size of 86GB.
-Using [Standard_D8s_v3](/azure/virtual-machines/dv3-dsv3-series#dsv3-series) with 100GB OS disk, this VM size supports ephemeral OS and has 200GB of cache space. If a user does not specify the OS disk type, the node pool would receive ephemeral OS by default.
+Using [Standard_D8s_v3](../virtual-machines/dv3-dsv3-series.md#dsv3-series) with 100GB OS disk, this VM size supports ephemeral OS and has 200GB of cache space. If a user does not specify the OS disk type, the node pool would receive ephemeral OS by default.
-The latest generation of VM series does not have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](/azure/virtual-machines/ebdsv5-ebsv5-series#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if the user does not specify explicitly. If a user explicitly requested ephemeral OS disks, they would receive a validation error.
+The latest generation of VM series does not have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if the user does not specify explicitly. If a user explicitly requested ephemeral OS disks, they would receive a validation error.
-If a user requests the same [Standard_E2bds_v5](/azure/virtual-machines/ebdsv5-ebsv5-series#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration would default to ephemeral OS disks: the requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
+If a user requests the same [Standard_E2bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration would default to ephemeral OS disks: the requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
-Using [Standard_E4bds_v5](/azure/virtual-machines/ebdsv5-ebsv5-series#ebdsv5-series) with 100 GiB OS disk, this VM size supports ephemeral OS and has 150 GiB of temporary storage. If a user does not specify the OS disk type, the node pool would receive ephemeral OS by default.
+Using [Standard_E4bds_v5](../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) with 100 GiB OS disk, this VM size supports ephemeral OS and has 150 GiB of temporary storage. If a user does not specify the OS disk type, the node pool would receive ephemeral OS by default.
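
For instance, a minimal sketch of explicitly requesting an ephemeral OS disk sized to fit the VM cache, based on the Standard_DS2_v2 example above (cluster, group, and pool names are placeholders):

```azurecli
# Explicitly request ephemeral OS with a 60GB OS disk on Standard_DS2_v2 (86GB cache)
az aks nodepool add \
  --cluster-name myCluster \
  --resource-group myResourceGroup \
  --name ephpool \
  --node-vm-size Standard_DS2_v2 \
  --node-osdisk-type Ephemeral \
  --node-osdisk-size 60
```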
Ephemeral OS requires at least version 2.15.0 of the Azure CLI.
az aks show -n aks -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -ots
[az-feature-register]: /cli/azure/feature#az_feature_register
[az-feature-list]: /cli/azure/feature#az_feature_list
[az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd
+[aks-add-np-containerd]: windows-container-cli.md#add-a-windows-server-node-pool-with-containerd
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
You might sometimes want to create a Kubernetes secret to mirror the mounted con
When you create a `SecretProviderClass`, use the `secretObjects` field to define the desired state of the Kubernetes secret, as shown in the following example.

> [!NOTE]
-> The example here is incomplete. You'll need to modify it to support your chosen method of access to your key vault identity.
+> The YAML examples here are incomplete. You'll need to modify them to support your chosen method of access to your key vault identity. For details, see [Provide an identity to access the Azure Key Vault Provider for Secrets Store CSI Driver][identity-access-methods].
The secrets will sync only after you start a pod to mount them. To rely solely on syncing with the Kubernetes secrets feature doesn't work. When all the pods that consume the secret are deleted, the Kubernetes secret is also deleted.
spec:
After you've created the Kubernetes secret, you can reference it by setting an environment variable in your pod, as shown in the following example code:

> [!NOTE]
-> The example here is incomplete. You'll need to modify it to support the Azure key vault identity access that you've chosen.
+> The example here demonstrates access to a secret through environment variables and through a volume/volumeMount. This is for illustrative purposes; the two methods can be used independently of each other.
```yml
kind: Pod
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
az aks upgrade \
During the upgrade, check the status of the node images with the following `kubectl` command to get the labels and filter out the current node image information:
+>[!NOTE]
+> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
+
```azurecli
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
```
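
As an illustrative sketch of the shell differences the note mentions (quoting rules vary by shell, so verify against the Kubernetes JSONPath documentation), Windows command shells commonly need the template wrapped in double quotes with single-quoted literals:

```azurecli
# Same query, adapted for Windows cmd.exe quoting rules
kubectl get nodes -o jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{'\n'}{end}"
```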
az aks nodepool upgrade \
During the upgrade, check the status of the node images with the following `kubectl` command to get the labels and filter out the current node image information:
+>[!NOTE]
+> This command may differ slightly depending on the shell you use. See the [Kubernetes JSONPath documentation][kubernetes-json-path] for more information on Windows/PowerShell environments.
+
```azurecli
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.azure\.com\/node-image-version}{"\n"}{end}'
```
az aks nodepool show \
- [Automatically apply cluster and node pool upgrades with GitHub Actions][github-schedule] - Learn more about multiple node pools and how to upgrade node pools with [Create and manage multiple node pools][use-multiple-node-pools].
+<!-- LINKS - external -->
+[kubernetes-json-path]: https://kubernetes.io/docs/reference/kubectl/jsonpath/
+
<!-- LINKS - internal -->
[upgrade-cluster]: upgrade-cluster.md
[github-schedule]: node-upgrade-github-actions.md
aks Uptime Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/uptime-sla.md
This process takes several minutes to complete.
[vm-skus]: ../virtual-machines/sizes.md
[paid-sku-tier]: /rest/api/aks/managed-clusters/create-or-update#managedclusterskutier
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool
-[manage-resource-group-cli]: /azure/azure-resource-manager/management/manage-resource-groups-cli
+[manage-resource-group-cli]: ../azure-resource-manager/management/manage-resource-groups-cli.md
[faq]: ./faq.md
[availability-zones]: ./availability-zones.md
[az-aks-create]: /cli/azure/aks?#az_aks_create
This process takes several minutes to complete.
[az-aks-update]: /cli/azure/aks#az_aks_update
[az-group-delete]: /cli/azure/group#az_group_delete
[private-clusters]: private-clusters.md
-[install-azure-cli]: /cli/azure/install-azure-cli
+[install-azure-cli]: /cli/azure/install-azure-cli
api-management Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md
API Management offers a free, managed TLS certificate for your domain, if you do
#### Limitations * Currently can be used only with the Gateway endpoint of your API Management service
+* Not supported with the self-hosted gateway
* Not supported in the following Azure regions: France South and South Africa West
* Currently available only in the Azure cloud
* Does not support root domain names (for example, `contoso.com`). Requires a fully qualified name such as `api.contoso.com`.
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
The following lists show supported and unsupported Docker Compose configuration
#### Syntax Limitations

-- the "version x.x" always needs to be the first yaml statement in the file
-- the ports section must use quoted numbers
-- the image > volume section must be quoted and cannot have a permissions definitions
-- the volumes section must not have an empty curly brace after the volume name
+- "version x.x" always needs to be the first YAML statement in the file
+- ports section must use quoted numbers
+- image > volume section must be quoted and cannot have permissions definitions
+- volumes section must not have an empty curly brace after the volume name
> [!NOTE]
> Any other options not explicitly called out are ignored in Public Preview.
The following lists show supported and unsupported Docker Compose configuration
Or, see additional resources:

- [Environment variables and app settings reference](reference-app-settings.md)
-- [Load certificate in Windows/Linux containers](configure-ssl-certificate-in-code.md#load-certificate-in-linuxwindows-containers)
+- [Load certificate in Windows/Linux containers](configure-ssl-certificate-in-code.md#load-certificate-in-linuxwindows-containers)
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
CipherSuites:
## Configure a custom TLS policy
-When configuring a custom TLS policy, you pass the following parameters: PolicyType, MinProtocolVersion, CipherSuite, and ApplicationGateway. If you attempt to pass other parameters, you get an error when creating or updating the Application Gateway.
-
-> [!IMPORTANT]
-> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher &#34;TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256&#34; to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU. This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
-> - Cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 are not customizable and included by default when setting a CustomV2 policy with a minimum TLS version of 1.2 or 1.3.
-
-The following example sets a custom TLS policy on an application gateway. It sets the minimum protocol version to `TLSv1_1` and enables the following cipher suites:
+When configuring a custom TLS policy, you pass the following parameters: PolicyType, MinProtocolVersion, CipherSuite, and ApplicationGateway. If you attempt to pass other parameters, you get an error when creating or updating the Application Gateway. The following example sets a custom TLS policy on an application gateway. It sets the minimum protocol version to `TLSv1_1` and enables the following cipher suites:
* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
The following example sets a custom TLS policy on an application gateway. It set
$gw = Get-AzApplicationGateway -Name AdatumAppGateway -ResourceGroup AdatumAppGatewayRG

# set the TLS policy on the application gateway
-Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Custom -MinProtocolVersion TLSv1_1 -CipherSuite "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_RSA_WITH_AES_128_GCM_SHA256"
+Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Custom -MinProtocolVersion TLSv1_1 -CipherSuite "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
# validate the TLS policy locally
Get-AzApplicationGatewaySslPolicy -ApplicationGateway $gw
Get-AzApplicationGatewaySslPolicy -ApplicationGateway $gw
Set-AzApplicationGateway -ApplicationGateway $gw
```
+> [!IMPORTANT]
+> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher &#34;TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256&#34; to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU. This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
+> - Cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" with TLSv1.3 are not customizable and included by default when setting a CustomV2 policy with a minimum TLS version of 1.2 or 1.3. These two cipher suites will not appear in the Get Details output, except in the portal.
+
+To set minimum protocol version to 1.3, you must use the following command:
+
+```powershell
+Set-AzApplicationGatewaySslPolicy -ApplicationGateway $AppGW -MinProtocolVersion TLSv1_3 -PolicyType CustomV2 -CipherSuite @()
+```
+
+This illustration further explains the usage of CustomV2 policy with minimum protocol versions 1.2 and 1.3.
+
## Create an application gateway with a pre-defined TLS policy

When configuring a Predefined TLS policy, you pass the following parameters: PolicyType, PolicyName, and ApplicationGateway. If you attempt to pass other parameters, you get an error when creating or updating the Application Gateway.
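The article's examples use PowerShell; purely as a hedged aside, an equivalent sketch with the Azure CLI might look like the following (the gateway and resource group names are placeholders, and the policy name shown is one of the documented predefined options):

```azurecli
# Apply a predefined TLS policy to an existing application gateway
az network application-gateway ssl-policy set \
  --gateway-name myAppGateway \
  --resource-group myResourceGroup \
  --policy-type Predefined \
  --name AppGwSslPolicy20170401S
```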
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
If a TLS policy needs to be configured for your requirements, you can use a Cust
> [!IMPORTANT]
> - If you are using a custom SSL policy in Application Gateway v1 SKU (Standard or WAF), make sure that you add the mandatory cipher "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" to the list. This cipher is required to enable metrics and logging in the Application Gateway v1 SKU.
> This is not mandatory for Application Gateway v2 SKU (Standard_v2 or WAF_v2).
-> - The cipher suites ΓÇ£TLS_AES_128_GCM_SHA256ΓÇ¥ and ΓÇ£TLS_AES_256_GCM_SHA384ΓÇ¥ with TLSv1.3 are not customizable. Hence, these are included by default when choosing a CustomV2 policy with minimum protocol version 1.2 or 1.3.
+> - The cipher suites "TLS_AES_128_GCM_SHA256" and "TLS_AES_256_GCM_SHA384" are mandatory for TLSv1.3. You need not mention these explicitly when setting a CustomV2 policy with minimum protocol version 1.2 or 1.3 through [PowerShell](application-gateway-configure-ssl-policy-powershell.md) or CLI. Accordingly, these cipher suites will not appear in the Get Details output, except in the portal.
### Cipher suites
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md
Title: 'Tutorial: Hosts multiple web sites using the Azure portal'
+ Title: 'Tutorial: Create and configure an application gateway to host multiple web sites using the Azure portal'
description: In this tutorial, you learn how to create an application gateway that hosts multiple web sites using the Azure portal. Previously updated : 03/19/2021 Last updated : 07/14/2022 + #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway so I can host multiple sites. # Tutorial: Create and configure an application gateway to host multiple web sites using the Azure portal
-You can use the Azure portal to [configure the hosting of multiple web sites](multiple-site-overview.md) when you create an [application gateway](overview.md). In this tutorial, you define backend address pools using virtual machines. You then configure listeners and rules based on two domains to make sure web traffic arrives at the appropriate servers in the pools. This tutorial uses examples of *www.contoso.com* and *www.fabrikam.com*.
+You can use the Azure portal to configure the [hosting of multiple web sites](multiple-site-overview.md) when you create an [application gateway](overview.md). In this tutorial, you define backend address pools using virtual machines. You then configure listeners and rules based on two domains to make sure web traffic arrives at the appropriate servers in the pools. This tutorial uses examples of `www.contoso.com` and `www.fabrikam.com`.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Create backend pools with the backend servers
> * Create listeners
> * Create routing rules
-> * Edit Hosts file for name resolution
+> * Edit hosts file for name resolution
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

+## Prerequisites
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+- An Azure subscription
+
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
## Create an application gateway
-1. Select **Create a resource** on the left menu of the Azure portal. The **New** window appears.
+1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Application Gateway**, or search for *Application Gateway* in the portal search box.
-2. Select **Networking** and then select **Application Gateway** in the **Featured** list.
+2. Select **Create**.
### Basics tab
-1. On the **Basics** tab, enter these values for the following application gateway settings:
+1. On the **Basics** tab, enter these values:
- **Resource group**: Select **myResourceGroupAG** for the resource group. If it doesn't exist, select **Create new** to create it.
- **Application gateway name**: Enter *myAppGateway* for the name of the application gateway.
- :::image type="content" source="./media/application-gateway-create-gateway-portal/application-gateway-create-basics.png" alt-text="Create Application Gateway":::
+ :::image type="content" source="./media/create-multiple-sites-portal/application-gateway-create-basics.png" alt-text="Screenshot showing Create application gateway page.":::
-2. For Azure to communicate between the resources that you create, it needs a virtual network. You can either create a new virtual network or use an existing one. In this example, you'll create a new virtual network at the same time that you create the application gateway. Application Gateway instances are created in separate subnets. You create two subnets in this example: one for the application gateway, and another for the backend servers.
+2. For Azure to communicate between the resources that you create, it needs a virtual network. You can either create a new virtual network or use an existing one. In this example, you'll create a new virtual network at the same time that you create the application gateway. Application gateway instances are created in separate subnets. You create two subnets in this example: one for the application gateway, and another for the backend servers.
- Under **Configure virtual network**, select **Create new** to create a new virtual network . In the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets:
+ Under **Configure virtual network**, select **Create new** to create a new virtual network. In the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets:
- **Name**: Enter *myVNet* for the name of the virtual network.
- - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *Default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed.
+ - **Subnet name** (application gateway subnet): The **Subnets** grid will show a subnet named *Default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed.
- **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column.
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
1. On the **Frontends** tab, verify **Frontend IP address type** is set to **Public**. <br>You can configure the Frontend IP to be Public or Private as per your use case. In this example, you'll choose a Public Frontend IP.

   > [!NOTE]
- > For the Application Gateway v2 SKU, you can only choose **Public** frontend IP configuration. Private frontend IP configuration is currently not enabled for this v2 SKU.
+ > For the application gateway v2 SKU, you can only choose **Public** frontend IP configuration. Private frontend IP configuration is currently not enabled for this v2 SKU.
2. Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
On the **Configuration** tab, you'll connect the frontend and backend pools you
Under **Additional settings**:

   - **Listener type**: Multiple sites
- - **Host name**: **www.contoso.com**
+ - **Host name**: `www.contoso.com`
Accept the default values for the other settings on the **Listener** tab, then select the **Backend targets** tab to configure the rest of the routing rule.
Wait for the deployment to complete before proceeding to the next step.
## Edit your hosts file for name resolution
-After the application gateway is created with its public IP address, you can get the IP address and use it to edit your hosts file to resolve `www.contoso.com` and `www.fabrikam.com`. In a production environment, you could create a `CNAME` in DNS for name resolution.
+After the application gateway is created with its public IP address, you can get the IP address, and use it to edit your hosts file to resolve `www.contoso.com` and `www.fabrikam.com`. In a production environment, you could create a `CNAME` in DNS for name resolution.
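
For illustration only, the resulting hosts file entries might look like the following, where `203.0.113.10` is a placeholder for your application gateway's public IP address:

```
203.0.113.10    www.contoso.com
203.0.113.10    www.fabrikam.com
```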
-1. Click **All resources**, and then click **myAGPublicIPAddress**.
+1. Select **All resources**, and then select **myAGPublicIPAddress**.
![Record application gateway DNS address](./media/create-multiple-sites-portal/public-ip.png)
After the application gateway is created with its public IP address, you can get
## Clean up resources
-When you no longer need the resources that you created with the application gateway, remove the resource group. When you remove the resource group, you also remove the application gateway and all its related resources.
+When you no longer need the resources that you created with the application gateway, delete the resource group. When you delete the resource group, you also delete the application gateway and all its related resources.
To remove the resource group:

1. On the left menu of the Azure portal, select **Resource groups**.
2. On the **Resource groups** page, search for **myResourceGroupAG** in the list, then select it.
-3. On the **Resource group page**, select **Delete resource group**.
+3. On the **myResourceGroupAG** page, select **Delete resource group**.
4. Enter *myResourceGroupAG* for **TYPE THE RESOURCE GROUP NAME** and then select **Delete**.

To restore the hosts file:
-1. Delete the `www.contoso.com` and `www.fabrikam.com` lines from the hosts file and run `ipconfig/registerdns` and `ipconfig/flushdns` from the command prompt.
+
+1. Delete the `www.contoso.com` and `www.fabrikam.com` lines from the `hosts` file.
+1. Run `ipconfig/registerdns` and `ipconfig/flushdns` from the command prompt.
## Next steps
+In this tutorial, you:
+
+- Created an application gateway with listeners and rules based on two domains
+- Tested the application gateway after editing the host files of backend servers
+
+To learn more about hosting multiple sites, see [application gateway multiple site hosting](multiple-site-overview.md).
+
+To learn how to create and configure an application gateway with path-based routing rules using the Azure portal, advance to the next tutorial.
+ > [!div class="nextstepaction"]
-> [Learn more about what you can do with Azure Application Gateway](./overview.md)
+> [Route by URL](create-url-route-portal.md)
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
Previously updated : 07/09/2022 Last updated : 07/15/2022
In this tutorial, you learn how to:
## Create a resource group
-In Azure, you allocate related resources to a resource group. Create a resource group by using [az group create](/cli/azure/group#az-group-create). The following example creates a resource group named **myResourceGroup** in the **canadacentral** location (region).
+In Azure, you allocate related resources to a resource group. Create a resource group by using [az group create](/cli/azure/group#az-group-create). The following example creates a resource group named **myResourceGroup** in the **East US** location (region):
```azurecli-interactive
-az group create --name myResourceGroup --location canadacentral
+az group create --name myResourceGroup --location eastus
```

## Deploy a new AKS cluster
You'll now deploy a new AKS cluster, to simulate having an existing AKS cluster
In the following example, you'll be deploying a new AKS cluster named **myCluster** using [Azure CNI](../aks/concepts-network.md#azure-cni-advanced-networking) and [Managed Identities](../aks/use-managed-identity.md) in the resource group you created, **myResourceGroup**. ```azurecli-interactive
-az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity
+az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity --generate-ssh-keys
```
-To configure other parameters for the `az aks create` command, visit references [here](/cli/azure/aks#az-aks-create).
+To configure more parameters for the above command, see [az aks create](/cli/azure/aks#az-aks-create).
+
+> [!NOTE]
+> A node resource group will be created with the name **MC_resource-group-name_cluster-name_location**.
## Deploy a new application gateway
-You'll now deploy a new application gateway, to simulate having an existing application gateway that you want to use to load balance traffic to your AKS cluster, **myCluster**. The name of the application gateway will be **myApplicationGateway**, but you'll need to first create a public IP resource, named **myPublicIp**, and a new virtual network called **myVnet** with address space 11.0.0.0/8, and a subnet with address space 11.1.0.0/16 called **mySubnet**, and deploy your application gateway in **mySubnet** using **myPublicIp**.
+You'll now deploy a new application gateway, to simulate having an existing application gateway that you want to use to load balance traffic to your AKS cluster, **myCluster**. The name of the application gateway will be **myApplicationGateway**. You'll first create a public IP resource named **myPublicIp** and a new virtual network called **myVnet** with address space 10.0.0.0/16, containing a subnet called **mySubnet** with address space 10.0.0.0/24. You'll then deploy your application gateway in **mySubnet** using **myPublicIp**.
+
+> [!CAUTION]
+> When you use an AKS cluster and application gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.224.0.0/12.
-When you use an AKS cluster and application gateway in separate virtual networks, the address spaces of the two virtual networks must not overlap. The default address space that an AKS cluster deploys in is 10.0.0.0/8, so we set the application gateway virtual network address prefix to 11.0.0.0/8.
```azurecli-interactive
az network public-ip create -n myPublicIp -g myResourceGroup --allocation-method Static --sku Standard
-az network vnet create -n myVnet -g myResourceGroup --address-prefix 11.0.0.0/8 --subnet-name mySubnet --subnet-prefix 11.1.0.0/16
-az network application-gateway create -n myApplicationGateway -l canadacentral -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet
+az network vnet create -n myVnet -g myResourceGroup --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.0.0/24
+az network application-gateway create -n myApplicationGateway -l eastus -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet --priority 100
```

> [!NOTE]
Check that the sample application you created is up and running by either visiti
## Clean up resources
-When no longer needed, delete the resource group and all related resources.
+When no longer needed, delete all resources created in this tutorial by deleting **myResourceGroup** and **MC_myResourceGroup_myCluster_eastus** resource groups:
```azurecli-interactive
az group delete --name myResourceGroup
+az group delete --name MC_myResourceGroup_myCluster_eastus
```

## Next steps
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
Previously updated : 07/12/2022 Last updated : 07/15/2022
You'll now deploy a new AKS cluster with the AGIC add-on enabled. If you don't p
In the following example, you'll deploy a new AKS cluster named *myCluster* by using [Azure CNI](../aks/concepts-network.md#azure-cni-advanced-networking) and [managed identities](../aks/use-managed-identity.md). The AGIC add-on will be enabled in the resource group that you created, **myResourceGroup**.
-Deploying a new AKS cluster with the AGIC add-on enabled without specifying an existing application gateway instance will mean an automatic creation of a Standard_v2 SKU application gateway instance. So, you'll also specify the name and subnet address space of the application gateway instance. The name of the application gateway instance will be **myApplicationGateway**, and the subnet address space will be **10.225.0.0/16**.
+Deploying a new AKS cluster with the AGIC add-on enabled without specifying an existing application gateway instance will automatically create a Standard_v2 SKU application gateway instance. You'll need to specify a name and subnet address space for the new application gateway instance. The address space must be from the 10.224.0.0/12 prefix used by the AKS virtual network, without overlapping the 10.224.0.0/16 prefix used by the AKS subnet. In this tutorial, use *myApplicationGateway* for the application gateway name and *10.225.0.0/16* for its subnet address space.
```azurecli-interactive
az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
```
-To configure more parameters for the above command, got to [az aks create](/cli/azure/aks#az-aks-create).
+To configure more parameters for the above command, see [az aks create](/cli/azure/aks#az-aks-create).
> [!NOTE] > The AKS cluster that you created will appear in the resource group that you created, **myResourceGroup**. However, the automatically created application gateway instance will be in the node resource group, where the agent pools are. The node resource group is named **MC_resource-group-name_cluster-name_location** by default, but can be modified.
applied-ai-services Estimate Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/estimate-cost.md
+
+ Title: "Check my usage and estimate the cost"
+
+description: Learn how to use Azure portal to check how many pages are analyzed and estimate the total price.
+++++ Last updated : 07/14/2022+
+recommendations: false
++
+# Check my Form Recognizer usage and estimate the price
+
+ In this guide, you'll learn how to use the metrics dashboard in the Azure portal to view how many pages were processed by Azure Form Recognizer. You'll also learn how to estimate the cost of processing those pages using the Azure pricing calculator.
+
+## Check how many pages were processed
+
+We'll start by looking at the page processing data for a given time period:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Form Recognizer resource.
+
+1. From the **Overview** page, select the **Monitoring** tab located near the middle of the page.
+
+ :::image type="content" source="../media/azure-portal-overview-menu.png" alt-text="Screenshot of the Azure portal overview page menu.":::
+
+1. Select a time range and you'll see the **Processed Pages** chart displayed.
+
+ :::image type="content" source="../media/azure-portal-overview-monitoring.png" alt-text="Screenshot that shows how many pages are processed on the resource overview page." lightbox="../media/azure-portal-processed-pages.png":::
+
+### Examine analyzed pages
+
+We can now take a deeper dive to see each model's analyzed pages:
+
+1. Under the **Monitoring** section, select **Metrics** from the left navigation menu.
+
+ :::image type="content" source="../media/azure-portal-monitoring-metrics.png" alt-text="Screenshot of the monitoring menu in the Azure portal.":::
+
+1. On the **Metrics** page, select **Add metric**.
+
+1. Select the Metric dropdown menu and, under **USAGE**, choose **Processed Pages**.
+
+ :::image type="content" source="../media/azure-portal-add-metric.png" alt-text="Screenshot that shows how to add new metrics on Azure portal.":::
+
+1. From the upper right corner, configure the time range and select the **Apply** button.
+
+ :::image type="content" source="../media/azure-portal-processed-pages-timeline.png" alt-text="Screenshot of time period options for metrics in the Azure portal." lightbox="../media/azure-portal-metrics-timeline.png":::
+
+1. Select **Apply splitting**.
+
+ :::image type="content" source="../media/azure-portal-apply-splitting.png" alt-text="Screenshot of the Apply splitting option in the Azure portal.":::
+
+1. Choose **FeatureName** from the **Values** dropdown menu.
+
+ :::image type="content" source="../media/azure-portal-splitting-on-feature-name.png" alt-text="Screenshot of the Apply splitting values dropdown menu.":::
+
+1. You'll see a breakdown of the pages analyzed by each model.
+
+ :::image type="content" source="../media/azure-portal-metrics-drill-down.png" alt-text="Screenshot demonstrating how to drill down to check analyzed pages by model." lightbox="../media/azure-portal-drill-down-closeup.png":::
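+
+If you prefer to pull the same numbers from the command line, a hedged sketch with the Azure CLI follows. The metric is assumed here to be exposed as `ProcessedPages` (verify the exact name in the portal's metric picker), and the resource ID is a placeholder:
+
+```azurecli
+# List the processed-pages usage metric for a Form Recognizer resource, hourly totals
+az monitor metrics list \
+  --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<form-recognizer-name> \
+  --metric "ProcessedPages" \
+  --interval PT1H \
+  --aggregation Total
+```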
+
+## Estimate price
+
+Now that we have the page processed data from the portal, we can use the Azure pricing calculator to estimate the cost:
+
+1. Sign in to [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) with the same credentials you use for the Azure portal.
+
+ > Press Ctrl + right-click to open in a new tab!
+
+1. Search for **Azure Form Recognizer** in the **Search products** search box.
+
+1. Select **Azure Form Recognizer** and you'll see that it has been added to the page.
+
+1. Under **Your Estimate**, select the relevant **Region**, **Payment Option** and **Instance** for your Form Recognizer resource. For more information, *see* [Azure Form Recognizer pricing options](https://azure.microsoft.com/pricing/details/form-recognizer/#pricing).
+
+1. Enter the number of pages processed from the Azure portal metrics dashboard. That data can be found using the steps in the sections [Check how many pages were processed](#check-how-many-pages-were-processed) or [Examine analyzed pages](#examine-analyzed-pages), above.
+
+1. The estimated price is on the right, after the equal (**=**) sign.
+
+ :::image type="content" source="../media/azure-portal-pricing.png" alt-text="Screenshot of how to estimate the price based on processed pages.":::
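+
+As a quick worked example with a purely hypothetical rate: if the calculator showed $10 per 1,000 pages, then 25,000 processed pages would estimate to 25 × $10 = $250 for the period. Always use the live rates shown in the calculator for your region, payment option, and instance.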
+
+That's it. You now know where to find how many pages you have processed using Form Recognizer and how to estimate the cost.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>
+> [Learn more about Form Recognizer service quotas and limits](../service-limits.md)
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
# [VM's system-assigned managed identity](#tab/sa-mi)
- 1. [Configure](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss#enable-system-assigned-managed-identity-on-an-existing-vm) a System Managed Identity for the VM.
+ 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md) a System Managed Identity for the VM.
 1. Grant this identity the [required permissions](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks.
 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As Account and perform the associated account management.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
# [VM's user-assigned managed identity](#tab/ua-mi)
- 1. [Configure](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss#user-assigned-managed-identity) a User Managed Identity for the VM.
- 1. Grant this identity the [required permissions](/azure/active-directory/managed-identities-azure-resources/howto-assign-access-portal) within the Subscription to perform its tasks.
+ 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) a User Managed Identity for the VM.
+ 1. Grant this identity the [required permissions](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) within the Subscription to perform its tasks.
 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` and `AccountId` parameters to authenticate to Azure resources. This configuration reduces the need to use a Run As account and perform the associated account management.

```powershell
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Azure Automation supports three types of source control:
> If you have both a Run As account and managed identity enabled, then managed identity is given preference. If you want to use a Run As account instead, you can [create an Automation variable](./shared-resources/variables.md) of BOOLEAN type named `AUTOMATION_SC_USE_RUNAS` with a value of `true`.

> [!NOTE]
-> According to [this](https://docs.microsoft.com/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops#application-connection-policies) Azure DevOps documentation, **Third-party application access via OAuth** policy is defaulted to **off** for all new organizations. So if you try to configure source control in Azure Automation with **Azure Devops (Git)** as source control type without enabling **Third-party application access via OAuth** under Policies tile of Organization Settings in Azure DevOps then you might get **SourceControl securityToken is invalid** error. Hence to avoid this error, make sure you first enable **Third-party application access via OAuth** under Policies tile of Organization Settings in Azure DevOps.
+> According to [this](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops#application-connection-policies) Azure DevOps documentation, the **Third-party application access via OAuth** policy defaults to **off** for all new organizations. If you try to configure source control in Azure Automation with **Azure DevOps (Git)** as the source control type without enabling **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps, you might get a **SourceControl securityToken is invalid** error. To avoid this error, make sure you first enable **Third-party application access via OAuth** under the Policies tile of Organization Settings in Azure DevOps.
## Configure source control
Currently, you can't use the Azure portal to update the PAT in source control. W
## Next steps * For integrating source control in Azure Automation, see [Azure Automation: Source Control Integration in Azure Automation](https://azure.microsoft.com/blog/azure-automation-source-control-13/).
-* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
+* For integrating runbook source control with Visual Studio Codespaces, see [Azure Automation: Integrating Runbook Source Control using Visual Studio Codespaces](https://azure.microsoft.com/blog/azure-automation-integrating-runbook-source-control-using-visual-studio-online/).
automation Remove Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/remove-vms.md
Title: Remove machines from Azure Automation Update Management description: This article explains how to remove Azure and non-Azure machines managed with Update Management. ++ Last updated 10/26/2021
Sign in to the [Azure portal](https://portal.azure.com).
## To remove your machines
-1. In the Azure portal, launch **Cloud Shell** from the top navigation of the Azure portal. If you are unfamiliar with Azure Cloud Shell, see [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
+### To identify Azure VM
-2. Use the following method to identify the UUID of an Azure virtual machine or non-Azure machine that you want to remove from management.
+1. In the Azure portal, launch **Cloud Shell** from the top navigation of the Azure portal. If you are unfamiliar with Azure Cloud Shell, see [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
- # [Azure VM](#tab/azure-vm)
+2. Use the following method to identify the UUID of an Azure virtual machine that you want to remove from management.
```azurecli
az vm show -g MyResourceGroup -n MyVm -d
```
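If you prefer Azure PowerShell over the Azure CLI, a rough equivalent for retrieving the VM's UUID (using the same hypothetical resource names) is:

```powershell
# Return the UUID (VmId) of an Azure VM; names are placeholders.
(Get-AzVM -ResourceGroupName 'MyResourceGroup' -Name 'MyVm').VmId
```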
- # [Non-Azure machine](#tab/non-azure-machine)
+### To identify Non-Azure machine
+
+1. In the Azure portal, navigate to **Log Analytics workspaces**. Select your workspace from the list.
+
+2. In your Log Analytics workspace, select **Logs** from the left-hand menu.
+
+3. Use the following query to identify the UUID of a non-Azure machine that you want to remove from management.
```kusto Heartbeat
Sign in to the [Azure portal](https://portal.azure.com).
| summarize by Computer, VMUUID ```
-
+### To remove the identified Azure or Non-Azure machine
-3. In the Azure portal, navigate to **Log Analytics workspaces**. Select your workspace from the list.
+1. In the Azure portal, navigate to **Log Analytics workspaces**. Select your workspace from the list.
-4. In your Log Analytics workspace, select **Computer Groups** from the left-hand menu.
+2. In your Log Analytics workspace, select **Computer Groups** from the left-hand menu.
-5. From **Computer Groups** in the right-hand pane, the **Saved groups** tab is shown by default.
+3. From **Computer Groups** in the right-hand pane, the **Saved groups** tab is shown by default.
-6. From the table, click the icon **Run query** to the right of the item **MicrosoftDefaultComputerGroup** with the **Legacy category** value **Updates**.
+4. From the table, click the icon **Run query** to the right of the item **MicrosoftDefaultComputerGroup** with the **Legacy category** value **Updates**.
-7. In the query editor, review the query and find the UUID for the machine. Remove the UUID for the machine and repeat the steps for any other machines you want to remove.
+5. In the query editor, review the query and find the UUID for the machine. Remove the UUID for the machine and repeat the steps for any other machines you want to remove.
> [!NOTE] > For added protection, before making edits be sure to make a copy of the query. Then you can restore it if a problem occurs.
Sign in to the [Azure portal](https://portal.azure.com).
| distinct Computer ```
-8. Save the saved search when you're finished editing it by selecting **Save > Save as function** from the top bar. When prompted, specify the following:
+6. Save the saved search when you're finished editing it by selecting **Save > Save as function** from the top bar. When prompted, specify the following:
   * **Name**: Updates__MicrosoftDefaultComputerGroup
   * **Save as computer Group** is selected
   * **Legacy category**: Updates
->[!NOTE]
->Machines are still shown after you have unenrolled them because we report on all machines assessed in the last 24 hours. After removing the machine, you need to wait 24 hours before they are no longer listed.
+> [!NOTE]
+> Machines are still shown after you have unenrolled them because we report on all machines assessed in the last 24 hours. After removing a machine, you need to wait 24 hours before it is no longer listed.
## Next steps
-To re-enable managing your Azure or non-Azure machine, see [Enable Update Management by browsing the Azure portal](enable-from-portal.md) or [Enable Update Management from an Azure VM](enable-from-vm.md).
+To re-enable managing your Azure or non-Azure machine, see [Enable Update Management by browsing the Azure portal](enable-from-portal.md) or [Enable Update Management from an Azure VM](enable-from-vm.md).
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
Each month, Azure Arc-enabled data services is released on the second Tuesday of
- 14 days before the release date, the *test* pre-release version is made available.
- 7 days before the release date, the *preview* pre-release version is made available.
-The main difference between the test and preview pre-release versions is usually just quality and stability, but in some exceptional cases there may be new features introduced in between the test and preview releases.
+Normally, the main difference between the test and preview pre-release versions is quality and stability, but in some exceptional cases there may be new features introduced between the test and preview releases.
Normally, pre-release version binaries are available around 10:00 AM Pacific Time. Documentation follows later in the day.
If you use the Azure Data Studio extension to install:
#### Indirect connectivity mode
-If you install using the Azure CLI, follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md). Once created, edit this custom configuration profile file enter the `docker` property values as required based on the information provided in the version history table on this page.
+If you install using the Azure CLI:
-For example:
+1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md).
+1. Edit this custom configuration profile file. Enter the `docker` property values as required based on the information provided in the version history table on this page.
-```json
+ For example:
- "docker": {
- "registry": "mcr.microsoft.com",
- "repository": "arcdata/test",
- "imageTag": "v1.8.0_2022-06-07_5ba6b837",
- "imagePullPolicy": "Always"
- },
-```
+ ```json
+
+ "docker": {
+ "registry": "mcr.microsoft.com",
+ "repository": "arcdata/test",
+ "imageTag": "v1.8.0_2022-06-07_5ba6b837",
+ "imagePullPolicy": "Always"
+ },
+ ```
+
+1. Use the command `az arcdata dc create` as explained in [create a custom configuration profile](create-custom-configuration-template.md).
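A rough sketch of that indirect-mode create command follows (all values are placeholders, and parameter availability varies by CLI version, so verify against `az arcdata dc create --help`):

```powershell
# Create the data controller from the edited custom profile in ./custom (placeholder values).
az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s `
    --name <data-controller-name> --subscription <subscription-id> `
    --resource-group <resource-group> --location <location> --connectivity-mode indirect
```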
+
+#### Direct connectivity mode
+
+If you install using the Azure CLI:
+
+1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md).
+1. Edit this custom configuration profile file. Enter the `docker` property values as required based on the information provided in the version history table on this page.
+
+ For example:
+
+ ```json
+
+ "docker": {
+ "registry": "mcr.microsoft.com",
+ "repository": "arcdata/test",
+ "imageTag": "v1.8.0_2022-06-07_5ba6b837",
+ "imagePullPolicy": "Always"
+ },
+ ```
+1. Set environment variables for:
+
+ - `ARC_DATASERVICES_EXTENSION_VERSION_TAG`: Use the version of the **Arc enabled Kubernetes helm chart extension version** from the release details under [Current preview release information](#current-preview-release-information).
+ - `ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`: `preview`
+
+   For example, the following commands set the environment variables on Linux.
+
+ ```console
+ export ARC_DATASERVICES_EXTENSION_VERSION_TAG='1.2.20031002'
+ export ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN='preview'
+ ```
+
+   The following commands set the environment variables in PowerShell:
+
+ ```console
+ $ENV:ARC_DATASERVICES_EXTENSION_VERSION_TAG="1.2.20031002"
+ $ENV:ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN="preview"
+ ```
+
+1. Run `az arcdata dc create` as normal for the direct mode to:
+
+ - Create the extension, if it doesn't already exist
+ - Create the custom location, if it doesn't already exist
+ - Create the data controller
-Once the file is edited, use the command `az arcdata dc create` as explained in [create a custom configuration profile](create-custom-configuration-template.md).
+   For details, see [create a custom configuration profile](create-custom-configuration-template.md).
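A corresponding sketch for the direct-mode create (again with placeholder values; check `az arcdata dc create --help` for the parameters your CLI version supports):

```powershell
# Create the data controller in direct mode against an Arc-enabled cluster (placeholder values).
az arcdata dc create --path ./custom --name <data-controller-name> `
    --subscription <subscription-id> --resource-group <resource-group> `
    --location <location> --connectivity-mode direct `
    --custom-location <custom-location> --cluster-name <cluster-name>
```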
### Install using Azure Data Studio
azure-arc Resource Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resource-sync.md
In this scenario, first the Azure ARM APIs are called and the mapped Azure resou
With the resource sync rule, you can use the Kubernetes API to create the Arc-enabled SQL managed instance, as follows:

```azurecli
-az sql mi-arc create --name <name> -k <namespace> --use-k8 --storage-class-backups <RWX capable storageclass>
+az sql mi-arc create --name <name> --k8s-namespace <namespace> --use-k8s --storage-class-backups <RWX capable storageclass>
```

In this scenario, the SQL managed instance is directly created in the Kubernetes cluster. The resource sync rule ensures that the equivalent resource in Azure is created as well.
https://management.azure.com/subscriptions/{{subscription}}/resourcegroups/{{res
} ```
-### Limitations:
+## Limitations
- The resource sync rule does not hydrate the Azure Arc data controller. The Azure Arc data controller must be deployed via the ARM API.
- Resource sync only applies to data services, such as Arc-enabled SQL managed instance, after the data controller is deployed.
https://management.azure.com/subscriptions/{{subscription}}/resourcegroups/{{res
## Next steps
-[Create Azure Arc-enabled data controller using Kubernetes tools](create-data-controller-using-kubernetes-native-tools.md)
+[Create Azure Arc-enabled data controller using Kubernetes tools](create-data-controller-using-kubernetes-native-tools.md)
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
If you don't see your problem here or you can't resolve your issue, try one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* [Open an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request).
+* [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-cache-for-redis Cache Best Practices Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-client-libraries.md
+
+ Title: Best practices using client libraries
+
+description: Learn about client libraries for Azure Cache for Redis.
+++ Last updated : 07/07/2022++++
+# Client libraries
+
+Azure Cache for Redis is based on the popular open-source in-memory data store, Redis. Azure Cache for Redis can be accessed by a wide variety of Redis clients for many programming languages. Each client library has its own API that makes calls to the Redis server using Redis commands, but the client libraries are built to talk to any Redis server.
+
+Each client maintains its own reference documentation for its library. The clients also provide links to get support through the client library developer community. The Azure Cache for Redis team doesn't own the development of, or support for, any client library.
+
+Although we don't own or support any client libraries, we do recommend some libraries. Recommendations are based on popularity and whether there's an active online community to support and answer your questions. We recommend using only the latest available version and upgrading regularly as new versions become available. These libraries are under active development and often release new versions with improvements to reliability and performance.
+
+| **Client library** | **Language** | **GitHub repo** | **Documentation** |
+| -- | -- | -- | -- |
+| StackExchange.Redis | C#/.NET | [Link](https://github.com/StackExchange/StackExchange.Redis)| [More information here](https://stackexchange.github.io/StackExchange.Redis/) |
+| Lettuce | Java | [Link](https://github.com/lettuce-io/) | [More information here](https://lettuce.io/) |
+| Jedis | Java | [Link](https://github.com/redis/jedis) | |
+| node_redis | Node.js | [Link](https://github.com/redis/node-redis) | |
+| Redisson | Java | [Link](https://github.com/redisson/redisson) | [More information here](https://redisson.org/) |
+| ioredis | Node.js | [Link](https://github.com/luin/ioredis) | [More information here](https://ioredis.readthedocs.io/en/stable/API/) |
+
+> [!NOTE]
+> Your application can connect to and use your Azure Cache for Redis instance with any client library that can also communicate with open-source Redis.
+
+## Client library-specific guidance
+
+For client library-specific guidance and best practices, see the following links:
+
+- [StackExchange.Redis (.NET)](cache-best-practices-connection.md#using-forcereconnect-with-stackexchangeredis)
+- [Java - Which client should I use?](https://gist.github.com/warrenzhu25/1beb02a09b6afd41dff2c27c53918ce7#file-azure-redis-java-best-practices-md)
+- [Lettuce (Java)](https://github.com/Azure/AzureCacheForRedis/blob/main/Lettuce%20Best%20Practices.md)
+- [Jedis (Java)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-java-jedis-md)
+- [Node.js](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-node-js-md)
+- [PHP](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-php-md)
+- [HiRedisCluster](https://github.com/Azure/AzureCacheForRedis/blob/main/HiRedisCluster%20Best%20Practices.md)
+- [ASP.NET Session State Provider](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-session-state-provider-md)
+
+## How to use client libraries
+
+Besides the reference documentation, you can find tutorials showing how to get started with Azure Cache for Redis using different languages and cache clients.
+
+For more information on using some of these client libraries in tutorials, see the following articles:
+
+- [Code a .NET Framework app](cache-dotnet-how-to-use-azure-redis-cache.md)
+- [Code a .NET Core app](cache-dotnet-core-quickstart.md)
+- [Code an ASP.NET web app](cache-web-app-howto.md)
+- [Code an ASP.NET Core web app](cache-web-app-aspnet-core-howto.md)
+- [Code a Java app](cache-java-get-started.md)
+- [Code a Node.js app](cache-nodejs-get-started.md)
+- [Code a Python app](cache-python-get-started.md)
+
+## Next steps
+
+- [Azure Cache for Redis development FAQs](cache-development-faq.yml)
+- [Best practices for development](cache-best-practices-development.md)
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-development.md
If your application validates certificate in code, you need to modify it to reco
## Client library-specific guidance -- [StackExchange.Redis (.NET)](cache-best-practices-connection.md#using-forcereconnect-with-stackexchangeredis)-- [Java - Which client should I use?](https://gist.github.com/warrenzhu25/1beb02a09b6afd41dff2c27c53918ce7#file-azure-redis-java-best-practices-md)-- [Lettuce (Java)](https://github.com/Azure/AzureCacheForRedis/blob/main/Lettuce%20Best%20Practices.md)-- [Jedis (Java)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-java-jedis-md)-- [Node.js](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-node-js-md)-- [PHP](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-php-md)-- [HiRedisCluster](https://github.com/Azure/AzureCacheForRedis/blob/main/HiRedisCluster%20Best%20Practices.md)-- [ASP.NET Session State Provider](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-session-state-provider-md)
+For more information, see [Client libraries](cache-best-practices-client-libraries.md#client-libraries).
## Next steps -- [Azure Cache for Redis development FAQs](cache-development-faq.yml) - [Performance testing](cache-best-practices-performance.md) - [Failover and patching for Azure Cache for Redis](cache-failover.md)
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
There are eight inbound port range requirements. Inbound requests in these range
There are network connectivity requirements for Azure Cache for Redis that might not be initially met in a virtual network. Azure Cache for Redis requires all the following items to function properly when used within a virtual network:
+- Outbound network connectivity to Azure Key Vault endpoints worldwide. Azure Key Vault endpoints resolve under the DNS domain *vault.azure.net*.
- Outbound network connectivity to Azure Storage endpoints worldwide. Endpoints located in the same region as the Azure Cache for Redis instance and storage endpoints located in *other* Azure regions are included. Azure Storage endpoints resolve under the following DNS domains: *table.core.windows.net*, *blob.core.windows.net*, *queue.core.windows.net*, and *file.core.windows.net*.
- Outbound network connectivity to *ocsp.digicert.com*, *crl4.digicert.com*, *ocsp.msocsp.com*, *mscrl.microsoft.com*, *crl3.digicert.com*, *cacerts.digicert.com*, *oneocsp.microsoft.com*, and *crl.microsoft.com*. This connectivity is needed to support TLS/SSL functionality.
- The DNS configuration for the virtual network must be able to resolve all of the endpoints and domains mentioned in the earlier points. These DNS requirements can be met by ensuring a valid DNS infrastructure is configured and maintained for the virtual network.
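If you want to spot-check a few of these outbound requirements from a VM inside the virtual network, the following PowerShell sketch tests some of the certificate revocation endpoints listed above (a sample subset, not the full list; CRL/OCSP endpoints answer on port 80):

```powershell
# Spot-check outbound connectivity to a sample of the TLS/SSL support endpoints.
$endpoints = 'ocsp.digicert.com', 'crl4.digicert.com', 'ocsp.msocsp.com', 'crl.microsoft.com'
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 80 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```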
azure-functions Monitor Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions-reference.md
The following table lists the operations related to Azure Functions that may be
|Microsoft.Web/sites/stop/action| Function app stopped.|
|Microsoft.Web/sites/write| Change a function app setting, such as runtime version or enable remote debugging.|
-You may also find logged operations that relate to the underlying App Service behaviors. For a more complete list, see [Resource Provider Operations](/azure/role-based-access-control/resource-provider-operations#microsoftweb).
+You may also find logged operations that relate to the underlying App Service behaviors. For a more complete list, see [Resource Provider Operations](../role-based-access-control/resource-provider-operations.md#microsoftweb).
For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
recommendations: false Previously updated : 07/14/2022 Last updated : 07/15/2022 # Azure guidance for secure isolation
With Key Vault, you can import or generate encryption keys in HSMs, ensuring tha
:::image type="content" source="./media/secure-isolation-fig3.png" alt-text="Azure Key Vault support for bring your own key (BYOK)"::: **Figure 3.** Azure Key Vault support for bring your own key (BYOK)
-**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
+**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
Key Vault provides a robust solution for encryption key lifecycle management. Upon creation, every key vault or managed HSM is automatically associated with the Azure AD tenant that owns the subscription. Anyone trying to manage or retrieve content from a key vault or managed HSM must be properly authenticated and authorized:
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
recommendations: false Previously updated : 04/06/2022 Last updated : 07/15/2022 # Azure for secure worldwide public sector cloud adoption
This article addresses common data residency, security, and isolation concerns p
Established privacy regulations are silent on **data residency and data location**, and permit data transfers in accordance with approved mechanisms such as the EU Standard Contractual Clauses (also known as EU Model Clauses). Microsoft commits contractually in the Microsoft Products and Services [Data Protection Addendum](https://aka.ms/DPA) (DPA) that all potential transfers of customer data out of the EU, European Economic Area (EEA), and Switzerland shall be governed by the EU Model Clauses. Microsoft will abide by the requirements of the EEA and Swiss data protection laws regarding the collection, use, transfer, retention, and other processing of personal data from the EEA and Switzerland. All transfers of personal data are subject to appropriate safeguards and documentation requirements. However, many customers considering cloud adoption are seeking assurances about customer and personal data being kept within the geographic boundaries corresponding to customer operations or location of customer's end users.
-**Data sovereignty** implies data residency; however, it also introduces rules and requirements that define who has control over and access to customer data stored in the cloud. In many cases, data sovereignty mandates that customer data be subject to the laws and legal jurisdiction of the country or region in which data resides. These laws can have direct implications on data access even for platform maintenance or customer-initiated support requests. You can use Azure public multi-tenant cloud in combination with Azure Stack products for on-premises and edge solutions to meet your data sovereignty requirements, as described later in this article. These other products can be deployed to put you solely in control of your data, including storage, processing, transmission, and remote access.
+**Data sovereignty** implies data residency; however, it also introduces rules and requirements that define who has control over customer data stored in the cloud. In many cases, data sovereignty mandates that customer data be subject to the laws and legal jurisdiction of the country or region in which data resides. These laws can have direct implications on data access even for platform maintenance or customer-initiated support requests. You can use Azure public multi-tenant cloud in combination with Azure Stack products for on-premises and edge solutions to meet your data sovereignty requirements, as described later in this article. These other products can be deployed to put you solely in control of your data, including storage, processing, transmission, and remote access.
Among several [data categories and definitions](https://www.microsoft.com/trust-center/privacy/customer-data-definitions) that Microsoft established for cloud services, the following four categories are discussed in this article:
Proper protection and management of encryption keys is essential for data securi
Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. With Azure Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. **Keys generated inside the Azure Key Vault HSMs aren't exportable; there can be no clear-text version of the key outside the HSMs.** This binding is enforced by the underlying HSM.

> [!NOTE]
-> Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.
+> Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys. For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
For more information, see [Azure Key Vault](./azure-secure-isolation-guidance.md#azure-key-vault).
Microsoft has implemented extensive protections for the Azure cloud platform and
## Private and hybrid cloud with Azure Stack
-[Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio is an extension of Azure that enables you to build and run hybrid applications across on-premises, edge locations, and cloud. As shown in Figure 4, Azure Stack includes Azure Stack Hyperconverged Infrastructure (HCI), Azure Stack Hub (previously Azure Stack), and Azure Stack Edge (previously Azure Data Box Edge). The last two components (Azure Stack Hub and Azure Stack Edge) are discussed in this section. For more information, see [Differences between global Azure, Azure Stack Hub, and Azure Stack HCI](/azure-stack/operator/compare-azure-azure-stack).
+[Azure Stack](https://azure.microsoft.com/overview/azure-stack/) portfolio is an extension of Azure that enables you to build and run hybrid applications across on-premises, edge locations, and cloud. As shown in Figure 4, Azure Stack includes Azure Stack Hyperconverged Infrastructure (HCI), Azure Stack Hub (formerly Azure Stack), and Azure Stack Edge (formerly Azure Data Box Edge). The last two components (Azure Stack Hub and Azure Stack Edge) are discussed in this section. For more information, see [Differences between global Azure, Azure Stack Hub, and Azure Stack HCI](/azure-stack/operator/compare-azure-azure-stack).
:::image type="content" source="./media/wwps-azure-stack-portfolio.jpg" alt-text="Azure Stack portfolio" border="false"::: **Figure 4.** Azure Stack portfolio
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
For the template-based ASP.NET MVC app from this article, the file that you need
## Troubleshooting
-See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/asp-net-troubleshoot-no-data).
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/asp-net-troubleshoot-no-data).
There's a known issue in the current version of Visual Studio 2019: storing the instrumentation key or connection string in a user secret is broken for .NET Framework-based apps. The key ultimately has to be hardcoded into the *applicationinsights.config* file to work around this bug. This article is designed to avoid this issue entirely, by not using user secrets.
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
You can create up to 100 availability tests per Application Insights resource.
## Troubleshooting
-See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability).
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability).
## Next steps
See the dedicated [troubleshooting article](https://docs.microsoft.com/troublesh
* [Standard Tests](availability-standard-tests.md) * [Multi-step web tests](availability-multistep.md) * [Create and run custom availability tests using Azure Functions](availability-azure-functions.md)
-* [Web Tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
+* [Web Tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
Title: Create a new Azure Monitor Application Insights workspace-based resource description: Learn about the steps required to enable the new Azure Monitor Application Insights workspace-based resources. Previously updated : 10/06/2020 Last updated : 07/14/2022+
The `New-AzApplicationInsights` PowerShell command does not currently support cr
} ```
+> [!NOTE]
+> * For more information on resource properties, see [Property values](/azure/templates/microsoft.insights/components?tabs=bicep#property-values).
+> * `Flow_Type` and `Request_Source` are not used, but are included in this sample for completeness.
+ #### Parameters file
azure-monitor Java 2X Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-get-started.md
Application Insights can test your website at regular intervals to check that it
## Troubleshooting
-See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/java-2x-troubleshoot).
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/java-2x-troubleshoot).
## Next steps * [Monitor dependency calls](java-2x-agent.md)
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Windows provides a wide variety of [performance counters](/windows/desktop/perfc
## Prerequisites
-Grant the app pool service account permission to monitor performance counters by adding it to the [Performance Monitor Users](https://docs.microsoft.com/windows/security/identity-protection/access-control/active-directory-security-groups#bkmk-perfmonitorusers) group.
+Grant the app pool service account permission to monitor performance counters by adding it to the [Performance Monitor Users](/windows/security/identity-protection/access-control/active-directory-security-groups#bkmk-perfmonitorusers) group.
```shell
net localgroup "Performance Monitor Users" /add "IIS APPPOOL\NameOfYourPool"
```
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
# Sampling in Application Insights
-Sampling is a feature in [Azure Application Insights](./app-insights-overview.md). It is the recommended way to reduce telemetry traffic, data costs, and storage costs, while preserving a statistically correct analysis of application data. Sampling also helps you avoid Application Insights throttling your telemetry. The sampling filter selects items that are related, so that you can navigate between items when you are doing diagnostic investigations.
+Sampling is a feature in [Azure Application Insights](./app-insights-overview.md). It's the recommended way to reduce telemetry traffic, data costs, and storage costs, while preserving a statistically correct analysis of application data. Sampling also helps you avoid Application Insights throttling your telemetry. The sampling filter selects items that are related, so that you can navigate between items when you're doing diagnostic investigations.
-When metric counts are presented in the portal, they are renormalized to take into account sampling. Doing so minimizes any effect on the statistics.
+When metric counts are presented in the portal, they're re-normalized to take into account sampling. Doing so minimizes any effect on the statistics.
## Brief summary * There are three different types of sampling: adaptive sampling, fixed-rate sampling, and ingestion sampling.
-* Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). It is also used by [Azure Functions](../../azure-functions/functions-overview.md).
+* Adaptive sampling is enabled by default in all the latest versions of the Application Insights ASP.NET and ASP.NET Core Software Development Kits (SDKs). It's also used by [Azure Functions](../../azure-functions/functions-overview.md).
* Fixed-rate sampling is available in recent versions of the Application Insights SDKs for ASP.NET, ASP.NET Core, Java (both the agent and the SDK), and Python.
-* In Java, sampling overrides are available, and are useful when you need to apply different sampling rates to selected dependencies, requests, healthchecks. Use [sampling overrides](./java-standalone-sampling-overrides.md) to tune out some noisy dependencies while for example all important errors are kept at 100%. This is a form of fixed sampling that gives you a fine-grained level of control over your telemetry.
+* In Java, sampling overrides are available, and are useful when you need to apply different sampling rates to selected dependencies, requests, and health checks. Use [sampling overrides](./java-standalone-sampling-overrides.md) to tune out some noisy dependencies while, for example, all important errors are kept at 100%. This behavior is a form of fixed sampling that gives you a fine-grained level of control over your telemetry.
* Ingestion sampling works on the Application Insights service endpoint. It only applies when no other sampling is in effect. If the SDK samples your telemetry, ingestion sampling is disabled.
* For web applications, if you log custom events and need to ensure that a set of events is retained or discarded together, the events must have the same `OperationId` value.
* If you write Analytics queries, you should [take account of sampling](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#aggregations). In particular, instead of simply counting records, you should use `summarize sum(itemCount)`.
There are three different sampling methods:
* **Fixed-rate sampling** reduces the volume of telemetry sent from both your ASP.NET or ASP.NET Core or Java server and from your users' browsers. You set the rate. The client and server will synchronize their sampling so that, in Search, you can navigate between related page views and requests.
-* **Ingestion sampling** happens at the Application Insights service endpoint. It discards some of the telemetry that arrives from your app, at a sampling rate that you set. It doesn't reduce telemetry traffic sent from your app, but helps you keep within your monthly quota. The main advantage of ingestion sampling is that you can set the sampling rate without redeploying your app. Ingestion sampling works uniformly for all servers and clients, but it does not apply when any other types of sampling are in operation.
+* **Ingestion sampling** happens at the Application Insights service endpoint. It discards some of the telemetry that arrives from your app, at a sampling rate that you set. It doesn't reduce telemetry traffic sent from your app, but helps you keep within your monthly quota. The main advantage of ingestion sampling is that you can set the sampling rate without redeploying your app. Ingestion sampling works uniformly for all servers and clients, but it doesn't apply when any other types of sampling are in operation.
> [!IMPORTANT] > If adaptive or fixed rate sampling methods are enabled for a telemetry type, ingestion sampling is disabled for that telemetry. However, telemetry types that are excluded from sampling at the SDK level will still be subject to ingestion sampling at the rate set in the portal.
The volume is adjusted automatically to keep within a specified maximum rate of
To achieve the target volume, some of the generated telemetry is discarded. But like other types of sampling, the algorithm retains related telemetry items. For example, when you're inspecting the telemetry in Search, you'll be able to find the request related to a particular exception.
-Metric counts such as request rate and exception rate are adjusted to compensate for the sampling rate, so that they show approximately correct values in Metric Explorer.
+Metric counts such as request rate and exception rate are adjusted to compensate for the sampling rate, so that they show approximate values in Metric Explorer.
### Configuring adaptive sampling for ASP.NET applications
In [`ApplicationInsights.config`](./configuration-with-applicationinsights-confi
* `<SamplingPercentageDecreaseTimeout>00:02:00</SamplingPercentageDecreaseTimeout>`
- When sampling percentage value changes, how soon after are we allowed to lower the sampling percentage again to capture less data?
+  When the sampling percentage value changes, this value determines how soon afterward the sampling percentage can be lowered again to capture less data.
* `<SamplingPercentageIncreaseTimeout>00:15:00</SamplingPercentageIncreaseTimeout>`
- When sampling percentage value changes, how soon after are we allowed to increase the sampling percentage again to capture more data?
+  When the sampling percentage value changes, this value determines how soon afterward the sampling percentage can be increased again to capture more data.
* `<MinSamplingPercentage>0.1</MinSamplingPercentage>`
In [`ApplicationInsights.config`](./configuration-with-applicationinsights-confi
* `<MovingAverageRatio>0.25</MovingAverageRatio>`
- In the calculation of the moving average, this specifies the weight that should be assigned to the most recent value. Use a value equal to or less than 1. Smaller values make the algorithm less reactive to sudden changes.
+ In the calculation of the moving average, this value specifies the weight that should be assigned to the most recent value. Use a value equal to or less than 1. Smaller values make the algorithm less reactive to sudden changes.
* `<InitialSamplingPercentage>100</InitialSamplingPercentage>`
- The amount of telemetry to sample when the app has just started. Don't reduce this value while you're debugging.
+ The amount of telemetry to sample when the app has started. Don't reduce this value while you're debugging.
* `<ExcludedTypes>type;type</ExcludedTypes>`
- A semi-colon delimited list of types that you do not want to be subject to sampling. Recognized types are: `Dependency`, `Event`, `Exception`, `PageView`, `Request`, `Trace`. All telemetry of the specified types is transmitted; the types that are not specified will be sampled.
+ A semi-colon delimited list of types that you don't want to be subject to sampling. Recognized types are: `Dependency`, `Event`, `Exception`, `PageView`, `Request`, `Trace`. All telemetry of the specified types is transmitted; the types that aren't specified will be sampled.
* `<IncludedTypes>type;type</IncludedTypes>`
builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Depend
### Configuring adaptive sampling for ASP.NET Core applications
-There is no `ApplicationInsights.config` for ASP.NET Core applications, so all configuration is done via code.
+There's no `ApplicationInsights.config` for ASP.NET Core applications, so all configuration is done via code.
Adaptive sampling is enabled by default for all ASP.NET Core applications. You can disable or customize the sampling behavior. #### Turning off adaptive sampling
Follow instructions from [this page](../../azure-functions/configure-monitoring.
Fixed-rate sampling reduces the traffic sent from your web server and web browsers. Unlike adaptive sampling, it reduces telemetry at a fixed rate decided by you. Fixed-rate sampling is available for ASP.NET, ASP.NET Core, Java and Python applications.
-Like other sampling techniques, this also retains related items. It also synchronizes the client and server sampling so that related items are retained - for example, when you look at a page view in Search, you can find its related server requests.
+Like other sampling techniques, fixed-rate sampling retains related items. It also synchronizes the client and server sampling so that related items are retained. As an example, when you look at a page view in Search, you can find its related server requests.
-In Metrics Explorer, rates such as request and exception counts are multiplied by a factor to compensate for the sampling rate, so that they are approximately correct.
+In Metrics Explorer, rates such as request and exception counts are multiplied by a factor to compensate for the sampling rate, so that they're as accurate as possible.
### Configuring fixed-rate sampling for ASP.NET applications
In Metrics Explorer, rates such as request and exception counts are multiplied b
### Configuring sampling overrides and fixed-rate sampling for Java applications
-By default no sampling is enabled in the Java auto-instrumentation and SDK. Currently the Java auto-instrumentation, [sampling overrides](./java-standalone-sampling-overrides.md) and fixed rate sampling are supported. Adaptive sampling is not supported in Java.
+By default no sampling is enabled in the Java auto-instrumentation and SDK. Currently the Java auto-instrumentation, [sampling overrides](./java-standalone-sampling-overrides.md) and fixed rate sampling are supported. Adaptive sampling isn't supported in Java.
#### Configuring Java auto-instrumentation
Instrument your application with the latest [OpenCensus Azure Monitor exporters]
> Fixed-rate sampling is not available for the metrics exporter. This means custom metrics are the only types of telemetry where sampling can NOT be configured. The metrics exporter will send all telemetry that it tracks. #### Fixed-rate sampling for tracing ####
-You may specify a `sampler` as part of your `Tracer` configuration. If no explicit sampler is provided, the `ProbabilitySampler` will be used by default. The `ProbabilitySampler` would use a rate of 1/10000 by default, meaning one out of every 10000 requests will be sent to Application Insights. If you want to specify a sampling rate, see below.
+You may specify a `sampler` as part of your `Tracer` configuration. If no explicit sampler is provided, the `ProbabilitySampler` will be used by default. The `ProbabilitySampler` would use a rate of 1/10000 by default, meaning one out of every 10,000 requests will be sent to Application Insights. If you want to specify a sampling rate, see below.
To specify the sampling rate, make sure your `Tracer` specifies a sampler with a sampling rate between 0.0 and 1.0 inclusive. A sampling rate of 1.0 represents 100%, meaning all of your requests will be sent as telemetry to Application Insights.
For the sampling percentage, choose a percentage that is close to 100/N where N
#### Coordinating server-side and client-side sampling
-The client-side JavaScript SDK participates in fixed-rate sampling in conjunction with the server-side SDK. The instrumented pages will only send client-side telemetry from the same user for which the server-side SDK made its decision to include in the sampling. This logic is designed to maintain the integrity of user sessions across client- and server-side applications. As a result, from any particular telemetry item in Application Insights you can find all other telemetry items for this user or session and in Search, you can navigate between related page views and requests.
+The client-side JavaScript SDK participates in fixed-rate sampling with the server-side SDK. The instrumented pages will only send client-side telemetry from the same user for which the server-side SDK made its decision to include in the sampling. This logic is designed to maintain the integrity of user sessions across client- and server-side applications. As a result, from any particular telemetry item in Application Insights you can find all other telemetry items for this user or session and in Search, you can navigate between related page views and requests.
If your client and server-side telemetry don't show coordinated samples:
Set the sampling rate in the Usage and estimated costs page:
Like other types of sampling, the algorithm retains related telemetry items. For example, when you're inspecting the telemetry in Search, you'll be able to find the request related to a particular exception. Metric counts such as request rate and exception rate are correctly retained.
-Data points that are discarded by sampling are not available in any Application Insights feature such as [Continuous Export](./export-telemetry.md).
+Data points that are discarded by sampling aren't available in any Application Insights feature such as [Continuous Export](./export-telemetry.md).
Ingestion sampling doesn't operate while adaptive or fixed-rate sampling is in operation. Adaptive sampling is enabled by default when the ASP.NET SDK or the ASP.NET Core SDK is being used, or when Application Insights is enabled in [Azure App Service](azure-web-apps.md) or by using Status Monitor. When telemetry is received by the Application Insights service endpoint, it examines the telemetry and if the sampling rate is reported to be less than 100% (which indicates that telemetry is being sampled) then the ingestion sampling rate that you set is ignored.
The accuracy of the approximation largely depends on the configured sampling per
## Log query accuracy and high sample rates

As the application is scaled up, it may be processing dozens, hundreds, or thousands of work items per second. Logging an event for each of them is neither resource- nor cost-effective. Application Insights uses sampling to adapt to growing telemetry volume in a flexible manner and to control resource usage and cost.
-
-However, sampling can affect the accuracy of query results gained from sampled telemetry. For example, 25 different users made a single request to a web application and each of those requests generated 1 Request telemetry record, 1 Dependency telemetry record, 1 Trace Message telemetry record and 1 Exception telemetry record. This adds up to a total of 100 raw telemetry records displayed in the image below.
-
-![Sample rate at 0 percent and the itemCount is 1](./media/sampling/records-with-legend-0-sampled.png) **Sample Rate 0% 25 Requests (itemCount=1) 25 Dependencies (itemCount=1) 25 Traces (itemCount=1) 25 Exceptions (itemCount=1)**
-
-If the Application Insights SDK did **not** need to throttle the telemetry, the application would send all 100 records to the ingestion endpoint. This is the equivalent of a sample rate of 0%. The SDK would package all of the telemetry records into JSON payloads and send them to the ingestion service. Every one of those 100 telemetry records would have the `itemCount` field set to 1, that is because we donΓÇÖt need to drop any records for sampling and each single telemetry record represents a count of 1. Running a query of `sum(itemCount)` for the requests telemetry would return 25, which matches the 25 requests and is 25% of the 100 telemetry records produced by the web application.
-
-When the SDK **does** throttle the telemetry through sampling the `itemCount` is less representative of the amount of telemetry records stored. For example, if the decision was made to keep 1% of all the records and the sample rate was 99% for the 100 telemetry records in the above example. That would mean only a single record out of all the items would be stored. To illustrate this, if the SDK picks one of the request telemetry records it will have to drop all of the other 99 records (24 requests, 25 dependencies, 25 traces, 25 exceptions). Although only 1 record is stored the SDK sets the `itemCount` field for the request as 100. This is because the single ingested record represents 100 total telemetry records that executed within the web application.
-
-![Sample rate at 99 percent and the itemCount is 100 visualized](./media/sampling/sampling-with-legend-99-sampled.png)
-![Sample rate at 99 percent and the itemCount is 100 in percentages](./media/sampling/sample-rate-99.png) **Sample Rate 99% 1 Request (itemCount=100) 0 Dependencies 0 Traces 0 Exceptions**
--
-One caveat for this example is that App Insights SDK samples based on operation ID, meaning that an `operation_Id` is selected and **all** of the telemetry for that single operation are ingested and saved (not random individual records). This can also result in fluctuations based on application operation telemetry counts. If one operation has a higher amount of records and that operation is sampled it would show up as a spike in adjusted sample rates. For example if one operation produces 4000 telemetry records and the other operations only produce 1 to 3 telemetry records. The sampling based on `operation_Id` is done to enable an end-to-end view for failing operations. All telemetry for an operation can be reviewed, including exception details, to precisely diagnose application code errors.
- > [!WARNING] > A distributed operation's end-to-end view integrity may be impacted if any application in the distributed operation has turned on sampling. Different sampling decisions are made by each application in a distributed operation, so telemetry for one Operation ID may be saved by one application while other applications may decide to not sample the telemetry for that same Operation ID.
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Application Insights Agent is located here: https://www.powershellgallery.com/pa
## Troubleshooting
-See the dedicated [troubleshooting article](https://docs.microsoft.com/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot).
+See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/status-monitor-v2-troubleshoot).
## FAQ
Add more telemetry:
* [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.
* [Add web client telemetry](./javascript.md) to see exceptions from web page code and to enable trace calls.
-* [Add the Application Insights SDK to your code](./asp-net.md) so you can insert trace and log calls.
+* [Add the Application Insights SDK to your code](./asp-net.md) so you can insert trace and log calls.
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/wire-data.md
VMConnection
### More examples queries
-Refer to the [VM insights log search documentation](/azure/azure-monitor/vm/vminsights-log-query) and the [VM insights alert documentation](../vm/monitor-virtual-machine-alerts.md) for additional example queries.
+Refer to the [VM insights log search documentation](../vm/vminsights-log-query.md) and the [VM insights alert documentation](../vm/monitor-virtual-machine-alerts.md) for additional example queries.
## Uninstall Wire Data 2.0 Solution
A record with a type of _WireData_ is created for each type of input data. WireD
## Next steps - See [Deploy VM insights](../vm/vminsights-enable-overview.md) for requirements and methods to enable monitoring for your virtual machines.- [Search logs](../logs/log-query-overview.md) to view detailed wire data search records.
+- [Search logs](../logs/log-query-overview.md) to view detailed wire data search records.
azure-monitor Quick Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/quick-create-workspace.md
New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -Res
> [!NOTE] > Log Analytics was previously called Operational Insights. The PowerShell cmdlets use Operational Insights in Log Analytics commands.
-After you've created a workspace, [configure a Log Analytics workspace in Azure Monitor by using PowerShell](/azure/azure-monitor/logs/powershell-workspace-configuration).
+After you've created a workspace, [configure a Log Analytics workspace in Azure Monitor by using PowerShell](./powershell-workspace-configuration.md).
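As a small follow-on sketch, a common post-creation step is adjusting data retention (placeholder names; `Set-AzOperationalInsightsWorkspace` ships in the Az.OperationalInsights module):

```powershell
# Set data retention on an existing workspace to 90 days; names are placeholders.
Set-AzOperationalInsightsWorkspace -ResourceGroupName 'my-resource-group' `
    -Name 'my-workspace' -RetentionInDays 90
```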
## [Azure CLI](#tab/azure-cli)
Run the [az group create](/cli/azure/group#az-group-create) command to create a
--workspace-name <myWorkspace> ```
-For more information about Azure Monitor Logs in Azure CLI, see [Managing Azure Monitor Logs in Azure CLI](/azure/azure-monitor/logs/azure-cli-log-analytics-workspace-sample).
+For more information about Azure Monitor Logs in Azure CLI, see [Managing Azure Monitor Logs in Azure CLI](./azure-cli-log-analytics-workspace-sample.md).
## [Resource Manager template](#tab/azure-resource-manager)
When you create a workspace that was deleted in the last 14 days and in [soft-de
Now that you have a workspace available, you can configure collection of monitoring telemetry, run log searches to analyze that data, and add a management solution to provide more data and analytic insights. To learn more: * See [Monitor health of Log Analytics workspace in Azure Monitor](../logs/monitor-workspace.md) to create alert rules to monitor the health of your workspace.
-* See [Collect Azure service logs and metrics for use in Log Analytics](../essentials/resource-logs.md#send-to-log-analytics-workspace) to enable data collection from Azure resources with Azure Diagnostics or Azure Storage.
+* See [Collect Azure service logs and metrics for use in Log Analytics](../essentials/resource-logs.md#send-to-log-analytics-workspace) to enable data collection from Azure resources with Azure Diagnostics or Azure Storage.
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
Network administrators often deploy proxy servers, firewalls, or other devices,
> [!TIP] > For help diagnosing issues with network connections to these domains, check https://portal.azure.com/selfhelp.
-You can use [service tags](/azure/virtual-network/service-tags-overview) to define network access controls on [network security groups](/azure/virtual-network/network-security-groups-overview), [Azure Firewall](/azure/firewall/service-tags), and user-defined routes. Use service tags in place of fully qualified domain names (FQDNs) or specific IP addresses when you create security rules and routes.
+You can use [service tags](../virtual-network/service-tags-overview.md) to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md), [Azure Firewall](../firewall/service-tags.md), and user-defined routes. Use service tags in place of fully qualified domain names (FQDNs) or specific IP addresses when you create security rules and routes.
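As an illustration, here's a hedged sketch of a service-tag-based rule on an existing network security group (all resource names are hypothetical; `AzurePortal` is one of the published service tags):

```powershell
# Allow outbound HTTPS to the AzurePortal service tag on an existing NSG (placeholder names).
$nsg = Get-AzNetworkSecurityGroup -Name 'my-nsg' -ResourceGroupName 'my-resource-group'
$nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow-AzurePortal-Https' -Access Allow `
    -Protocol Tcp -Direction Outbound -Priority 100 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix 'AzurePortal' -DestinationPortRange 443 |
    Set-AzNetworkSecurityGroup
```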
## Azure portal URLs for proxy bypass
login.live.com
> [!NOTE]
-> Traffic to these endpoints uses standard TCP ports for HTTP (80) and HTTPS (443).
+> Traffic to these endpoints uses standard TCP ports for HTTP (80) and HTTPS (443).
azure-signalr Signalr Howto Troubleshoot Live Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md
Previously updated : 06/30/2021 Last updated : 07/14/2022 # How to use live trace tool for Azure SignalR service
-Live trace tool is a single web application for capturing and displaying live traces in Azure SignalR service. The live traces can be collected in real time without any dependency on other services.
-You can enable and disable the live trace feature with a single click. You can also choose any log category that you're interested.
+Live trace tool is a single web application for capturing and displaying live traces in Azure SignalR Service. The live traces can be collected in real time without any dependency on other services. You can enable and disable the live trace feature with a single selection. You can also choose any log category that you're interested in.
> [!NOTE]
-> Please note that the live traces will be counted as outbound messages.
-> Azure Active Directory access to live trace tool is not supported, please enable `Access Key` in `Keys` menu.
+> Note that the live traces will be counted as outbound messages.
+> Azure Active Directory access to the live trace tool is not supported. You will need to enable **Access Key** in **Keys** settings.
## Launch the live trace tool
-1. Go to the Azure portal.
-2. Check **Enable Live Trace**.
-3. click **Save** button in tool bar and wait for the changes take effect.
-4. On the **Diagnostic Settings** page of your Azure Web PubSub service instance, select **Open Live Trace Tool**.
+1. Go to the Azure portal and open your SignalR Service page.
+1. From the menu on the left, under **Monitoring**, select **Live trace settings**.
+1. Select **Enable Live Trace**.
+1. Select the **Save** button. It will take a moment for the changes to take effect.
+1. When updating is complete, select **Open Live Trace Tool**.
- :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/live-traces-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
+ :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/signalr-enable-live-trace.png" alt-text="Screenshot of launching the live trace tool.":::
## Capture live traces
-The live trace tool provides some fundamental functionalities to help you capture the live traces for troubleshooting.
+The live trace tool provides functionality to help you capture the live traces for troubleshooting.
-* **Capture**: Begin to capture the real time live traces from Azure Web PubSub instance with live trace tool.
+* **Capture**: Begin capturing the real time live traces from the SignalR Service instance with the live trace tool.
* **Clear**: Clear the captured real time live traces. * **Export**: Export live traces to a file. The currently supported file format is CSV.
-* **Log filter**: The live trace tool allows you filtering the captured real time live traces with one specific key word. The common separator (for example, space, comma, semicolon, and so on) will be treated as part of the key word.
-* **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance.
+* **Log filter**: The live trace tool allows you to filter the captured real time live traces with one specific keyword. Separators (for example, space, comma, or semicolon), if present, are treated as part of the keyword.
:::image type="content" source="./media/signalr-howto-troubleshoot-live-trace/live-trace-tool-capture.png" alt-text="Screenshot of capturing live traces with live trace tool.":::
-The real time live traces captured by live trace tool contain detailed information for troubleshooting.
+The real time live traces captured by the live trace tool contain detailed information for troubleshooting.
| Name | Description | | | |
The real time live traces captured by live trace tool contain detailed informati
| Status Code | The HTTP response code | | Duration | The time between when the request is received and when it is processed | | Headers | The additional information passed by the client and the server with an HTTP request or response |
-| Invocation ID | The unique identifier to represent a invocation (only available for ASP.NET SignalR) |
-| Message Type | The type of the message (BroadcastDataMessage/JoinGroupMessage/LeaveGroupMessage/...) |
+| Invocation ID | The unique identifier to represent an invocation (only available for ASP.NET SignalR) |
+| Message Type | The type of the message (BroadcastDataMessage\|JoinGroupMessage\|LeaveGroupMessage\|...) |
## Next Steps
-In this guide, you learned about how to use live trace tool. You could also learn how to handle the common issues:
-* Troubleshooting guides: For how to troubleshoot typical issues based on live traces, see our [troubleshooting guide](./signalr-howto-troubleshoot-guide.md).
-* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see our [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md).
+In this guide, you learned how to use the live trace tool. Next, learn how to handle common issues:
+* Troubleshooting guides: To troubleshoot typical issues based on live traces, see the [troubleshooting guide](./signalr-howto-troubleshoot-guide.md).
+* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see the [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md).
azure-signalr Signalr Reference Data Plane Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md
Similar to authenticating using `AccessKey`, when authenticating using Azure AD
The difference is that, in this scenario, the JWT token is generated by Azure Active Directory.
-[Learn how to generate Azure AD Tokens](/azure/active-directory/develop/reference-v2-libraries)
+[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
You can also use **role-based access control (RBAC)** to authorize requests from your client or server to SignalR Service.
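As a sketch of this flow, you can acquire an Azure AD token with the Azure CLI and pass it as a bearer token to the data plane REST API. The instance and hub names are placeholders, and the resource URI assumes the SignalR data plane audience:

```bash
# Sketch: instance and hub names are placeholders; the --resource value
# assumes the SignalR data plane audience.
TOKEN=$(az account get-access-token --resource https://signalr.azure.com --query accessToken -o tsv)

# Broadcast a message to all connections in a hub.
curl -X POST "https://my-signalr.service.signalr.net/api/v1/hubs/chat" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"target": "newMessage", "arguments": ["Hello from REST"]}'
```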
azure-vmware Enable Hcx Access Over Internet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-hcx-access-over-internet.md
Configure a Public IP block through portal by using the Public IP feature of the
After the Public IP is configured successfully, you should see it appear under the Public IP section. The provisioning state shows **Succeeded**. This Public IP block is configured as NSX-T segment on the Tier-1 router.
-For more information about how to enable a public IP to the NSX Edge for Azure VMware Solution, see [Enable Public IP to the NSX Edge for Azure VMware Solution](https://docs.microsoft.com/azure/azure-vmware/enable-public-ip-nsx-edge).
+For more information about how to enable a public IP to the NSX Edge for Azure VMware Solution, see [Enable Public IP to the NSX Edge for Azure VMware Solution](./enable-public-ip-nsx-edge.md).
## Create Public IP segment on NSX-T Before you create a Public IP segment, get your credentials for NSX-T Manager from Azure VMware Solution portal.
The HCX Network Extension service provides layer 2 connectivity between sites. T
After the network is extended to destination site, VMs can be migrated over Layer 2 Extension. ## Next steps
-[Enable Public IP to the NSX Edge for Azure VMware Solution](https://docs.microsoft.com/azure/azure-vmware/enable-public-ip-nsx-edge)
-
-For detailed information on HCX network underlay minimum requirements, see [Network Underlay Minimum Requirements](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html).
----
+[Enable Public IP to the NSX Edge for Azure VMware Solution](./enable-public-ip-nsx-edge.md)
+For detailed information on HCX network underlay minimum requirements, see [Network Underlay Minimum Requirements](https://docs.vmware.com/en/VMware-HCX/4.3/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html).
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
If you're not going to continue to use this app, delete all resources created by
In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application. > [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md)
> [!div class="nextstepaction"]
-> [Quick start: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
+> [Quick start: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md)
> [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
Open function host index page: `http://localhost:7071/api/index` to view the rea
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
+> [Tutorial: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md)
> [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md)
> [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
If you're not going to continue to use this app, delete all resources created by
In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application. > [!div class="nextstepaction"]
-> [Tutorial: Create a simple chatroom with Azure Web PubSub](/azure/azure-web-pubsub/tutorial-build-chat)
+> [Tutorial: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md)
> [!div class="nextstepaction"]
-> [Azure Web PubSub bindings for Azure Functions](/azure/azure-web-pubsub/reference-functions-bindings)
+> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md)
> [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
Title: SQL DB in Azure VM backup & restore via PowerShell description: Back up and restore SQL Databases in Azure VMs using Azure Backup and PowerShell. Previously updated : 01/17/2022 Last updated : 07/15/2022 ms.assetid: 57854626-91f9-4677-b6a2-5d12b6a866e1
PointInTime : 1/1/0001 12:00:00 AM
> [!IMPORTANT] > Make sure that the final recovery config object has all the necessary and proper values since the restore operation will be based on the config object.
+> [!NOTE]
+> If you don't want to restore the entire chain but only a subset of files, follow the steps as documented [here](restore-sql-database-azure-vm.md#partial-restore-as-files).
+ #### Alternate workload restore to a vault in secondary region > [!IMPORTANT]
backup Backup Azure Sql Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-restore-cli.md
Title: Restore SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to restore SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 07/07/2022 Last updated : 07/15/2022
The output appears as:
"duration": "0:00:04.304322", "endTime": null, "entityFriendlyName": "master [testSQLVM]",
- "errorDetails": null,
+ "errorDetails": > [!NOTE]
+> Information the user should notice even if skimmingnull,
"extendedInfo": { "dynamicErrorMessage": null, "propertyBag": {
The output appears as:
The response provides you the job name. You can use this job name to track the job status using the [az backup job show](/cli/azure/backup/job#az-backup-job-show) command.
+> [!NOTE]
+> If you don't want to restore the entire chain but only a subset of files, follow the steps as documented [here](restore-sql-database-azure-vm.md#partial-restore-as-files).
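For example, tracking the restore job might look like the following sketch; the resource group, vault, and job names are placeholders:

```bash
# Sketch: resource group, vault, and job name are placeholders taken from the restore response.
az backup job show \
  --resource-group myResourceGroup \
  --vault-name myRecoveryServicesVault \
  --name <job-name>
```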
## Next steps
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md
To resolve this issue, use the [restore disks](./backup-azure-arm-restore-vms.md
**Error message**: VM creation failed due to Market Place purchase request being not present.
-Azure Backup supports backup and restore of VMs that are available in Azure Marketplace. This error occurs when you try to restore a VM (with a specific Plan/Publisher setting), which is no longer available in Azure Marketplace. [Learn more here](/azure/marketplace/deprecate-vm).
+Azure Backup supports backup and restore of VMs that are available in Azure Marketplace. This error occurs when you try to restore a VM (with a specific Plan/Publisher setting), which is no longer available in Azure Marketplace. [Learn more here](../marketplace/deprecate-vm.md).
In this scenario, a partial failure happens where the disks are restored, but the VM isn't restored. This is because it's not possible to create a new VM from the restored disks.
DHCP must be enabled inside the guest for IaaS VM backup to work. If you need a
Get more information on how to set up a static IP through PowerShell: * [How to add a static internal IP to an existing VM](/powershell/module/az.network/set-aznetworkinterfaceipconfig#description)
-* [Change the allocation method for a private IP address assigned to a network interface](../virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md)
+* [Change the allocation method for a private IP address assigned to a network interface](../virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md)
backup Restore Sql Database Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md
Title: Restore SQL Server databases on an Azure VM description: This article describes how to restore SQL Server databases that are running on an Azure VM and that are backed up with Azure Backup. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 12/09/2021 Last updated : 07/15/2022
If you've selected **Logs (Point in Time)** as the restore type, do the followin
1. After you select a date, the timeline graph displays the available recovery points in a continuous range. 1. Specify a time for the recovery on the timeline graph, or select a time. Then select **OK**.
+### Partial restore as files
+
+The Azure Backup service decides the chain of files to be downloaded during restore as files. But there are scenarios where you might not want to download the entire content again.
+
+For example, suppose you have a backup policy of weekly fulls, daily differentials, and logs, and you've already downloaded the files for a particular differential. You then find that it isn't the right recovery point and decide to download the next day's differential. Now you only need the differential file, because you already have the starting full. With the partial restore as files capability provided by Azure Backup, you can exclude the full from the download chain and download only the differential.
+
+#### Excluding backup file types
+
+**ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SQL. For the "Partial restore as files" operation, a new JSON field, `RecoveryPointsToBeExcludedForRestoreAsFiles`, must be added. This field holds a string value that denotes which recovery point types should be excluded from the next restore as files operation.
+
+1. On the target machine where the files are to be downloaded, go to the *C:\Program Files\Azure Workload Backup\bin* folder.
+2. Create a new JSON file named *ExtensionSettingOverrides.json*, if it doesn't already exist.
+3. Add the following JSON key-value pair:
+
+ ```json
+ {
+ "RecoveryPointsToBeExcludedForRestoreAsFiles": "ExcludeFull"
+ }
+ ```
+
+4. No restart of any service is required. The Azure Backup service will attempt to exclude backup types in the restore chain as mentioned in this file.
+
+The `RecoveryPointsToBeExcludedForRestoreAsFiles` field takes only specific values, which denote the recovery points to be excluded during restore. For SQL, these values are:
+
+- ExcludeFull (Other backup types such as differential and logs will be downloaded, if they are present in the restore point chain)
+- ExcludeFullAndDifferential (Other backup types such as logs will be downloaded, if they are present in the restore point chain)
+ ### Restore to a specific restore point If you've selected **Full & Differential** as the restore type, do the following:
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 04/01/2022 Last updated : 07/15/2022
To restore the backup data as files instead of a database, choose **Restore as F
RECOVER DATABASE FOR DHI UNTIL TIMESTAMP '2022-01-12T08:51:54.023' USING SOURCE <sourceSID> USING CATALOG PATH ('/restore/catalo_gen') USING LOG PATH ('/restore/Log/') USING DATA PATH ('/restore/Data_2022-01-12_08-51-54/') USING BACKUP_ID 1641977514020 CHECK ACCESS USING FILE ```
+### Partial restore as files
+
+The Azure Backup service decides the chain of files to be downloaded during restore as files. But there are scenarios where you might not want to download the entire content again.
+
+For example, suppose you have a backup policy of weekly fulls, daily differentials, and logs, and you've already downloaded the files for a particular differential. You then find that it isn't the right recovery point and decide to download the next day's differential. Now you only need the differential file, because you already have the starting full. With the partial restore as files capability provided by Azure Backup, you can exclude the full from the download chain and download only the differential.
+
+#### Excluding backup file types
+
+**ExtensionSettingOverrides.json** is a JSON (JavaScript Object Notation) file that contains overrides for multiple settings of the Azure Backup service for SAP HANA. For the "Partial restore as files" operation, a new JSON field, `RecoveryPointsToBeExcludedForRestoreAsFiles`, must be added. This field holds a string value that denotes which recovery point types should be excluded from the next restore as files operation.
+
+1. On the target machine where the files are to be downloaded, go to the */opt/msawb/bin* folder.
+2. Create a new JSON file named *ExtensionSettingOverrides.json*, if it doesn't already exist.
+3. Add the following JSON key-value pair:
+
+ ```json
+ {
+ "RecoveryPointsToBeExcludedForRestoreAsFiles": "ExcludeFull"
+ }
+ ```
+
+4. Change the permissions and ownership of the file as follows:
+
+ ```bash
+    chmod 750 ExtensionSettingOverrides.json
+    chown root:msawb ExtensionSettingOverrides.json
+ ```
+
+5. No restart of any service is required. The Azure Backup service will attempt to exclude backup types in the restore chain as mentioned in this file.
+
+The `RecoveryPointsToBeExcludedForRestoreAsFiles` field takes only specific values, which denote the recovery points to be excluded during restore. For SAP HANA, these values are:
+
+- ExcludeFull (Other backup types such as differential, incremental, and logs will be downloaded, if they are present in the restore point chain)
+- ExcludeFullAndDifferential (Other backup types such as incremental and logs will be downloaded, if they are present in the restore point chain)
+- ExcludeFullAndIncremental (Other backup types such as differential and logs will be downloaded, if they are present in the restore point chain)
+- ExcludeFullAndDifferentialAndIncremental (Other backup types such as logs will be downloaded, if they are present in the restore point chain)
+ ### Restore to a specific point in time If you've selected **Logs (Point in Time)** as the restore type, do the following:
backup Tutorial Backup Sap Hana Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-sap-hana-db.md
If you want to throttle backup service disk IOPS consumption to a maximum value,
4. Change the permissions and ownership of the file as follows: ```bash
- chmod 750 ExtensionSettingsOverrides.json
- chown root:msawb ExtensionSettingsOverrides.json
+ chmod 750 ExtensionSettingOverrides.json
+ chown root:msawb ExtensionSettingOverrides.json
``` 5. No restart of any service is required. The Azure Backup service will attempt to limit the throughput performance as mentioned in this file.
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md
Based on the type of restore point chosen (**Point in time** or **Full & Differe
>[!Note] >If you've selected **Restore to a point in time**, the log files (dumped to the target VM) may sometimes contain logs beyond the point-in-time chosen for restore. Azure Backup does this to ensure that log backups for all HANA services are available for consistent and successful restore to the chosen point-in-time.
+> [!NOTE]
+> If you don't want to restore the entire chain but only a subset of files, follow the steps as documented [here](sap-hana-db-restore.md#partial-restore-as-files).
+ Move these restored files to the SAP HANA server where you want to restore them as a database. Then follow these steps to restore the database: 1. Set permissions on the folder / directory where the backup files are stored using the following command:
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/solution-design.md
The following table describes what's supported for each network features configuration:
|Features |Basic network features | | :- | -: | |Delegated subnet per VNet |1|
-|[Network Security Groups](/azure/virtual-network/network-security-groups-overview) on NC2 on Azure-delegated subnets|No|
-|[User-defined routes (UDRs)](/azure/virtual-network/virtual-networks-udr-overview#user-defined) on NC2 on Azure-delegated subnets|No|
-|Connectivity to [private endpoints](/azure/private-link/private-endpoint-overview)|No|
+|[Network Security Groups](../../../virtual-network/network-security-groups-overview.md) on NC2 on Azure-delegated subnets|No|
+|[User-defined routes (UDRs)](../../../virtual-network/virtual-networks-udr-overview.md#user-defined) on NC2 on Azure-delegated subnets|No|
+|Connectivity to [private endpoints](../../../private-link/private-endpoint-overview.md)|No|
|Load balancers for NC2 on Azure traffic|No| |Dual stack (IPv4 and IPv6) virtual network|IPv4 only supported|
The following table describes what's supported for each network features configuration:
Learn more: > [!div class="nextstepaction"]
-> [Architecture](architecture.md)
+> [Architecture](architecture.md)
batch Batch Linux Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-linux-nodes.md
Last updated 12/13/2021 ms.devlang: csharp, python
+zone_pivot_groups: programming-languages-batch-linux-nodes
# Provision Linux compute nodes in Batch pools
az batch pool supported-images list
For more information, you can refer to [Account - List Supported Images - REST API (Azure Batch Service) | Microsoft Docs](/rest/api/batchservice/account/list-supported-images). + ## Create a Linux pool: Batch Python The following code snippet shows an example of how to use the [Microsoft Azure Batch Client Library for Python](https://pypi.python.org/pypi/azure-batch) to create a pool of Ubuntu Server compute nodes. For more details about the Batch Python module, view the [reference documentation](/python/api/overview/azure/batch).
vmc = batchmodels.VirtualMachineConfiguration(
image_reference=image.image_reference, node_agent_sku_id=image.node_agent_sku_id) ``` ## Create a Linux pool: Batch .NET The following code snippet shows an example of how to use the [Batch .NET](https://www.nuget.org/packages/Microsoft.Azure.Batch/) client library to create a pool of Ubuntu Server compute nodes. For more details about Batch .NET, view the [reference documentation](/dotnet/api/microsoft.azure.batch).
ImageReference imageReference = new ImageReference(
sku: "18.04-LTS", version: "latest"); ``` ## Connect to Linux nodes using SSH During development or while troubleshooting, you may find it necessary to sign in to the nodes in your pool. Unlike Windows compute nodes, you can't use Remote Desktop Protocol (RDP) to connect to Linux nodes. Instead, the Batch service enables SSH access on each node for remote connection. The following Python code snippet creates a user on each node in a pool, which is required for remote connection. It then prints the secure shell (SSH) connection information for each node. ```python
tvm-1219235766_2-20160414t192511z | ComputeNodeState.idle | 13.91.7.57 | 50003
tvm-1219235766_3-20160414t192511z | ComputeNodeState.idle | 13.91.7.57 | 50002 tvm-1219235766_4-20160414t192511z | ComputeNodeState.idle | 13.91.7.57 | 50001 ```-
-Instead of a password, you can specify an SSH public key when you create a user on a node. In the Python SDK, use the **ssh_public_key** parameter on [ComputeNodeUser](/python/api/azure-batch/azure.batch.models.computenodeuser). In .NET, use the [ComputeNodeUser.SshPublicKey](/dotnet/api/microsoft.azure.batch.computenodeuser.sshpublickey#Microsoft_Azure_Batch_ComputeNodeUser_SshPublicKey) property.
+
+Instead of a password, you can specify an SSH public key when you create a user on a node.
+In the Python SDK, use the **ssh_public_key** parameter on [ComputeNodeUser](/python/api/azure-batch/azure.batch.models.computenodeuser).
+In .NET, use the [ComputeNodeUser.SshPublicKey](/dotnet/api/microsoft.azure.batch.computenodeuser.sshpublickey#Microsoft_Azure_Batch_ComputeNodeUser_SshPublicKey) property.
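If you prefer the command line, the Azure CLI can also create a node user with an SSH public key. This is a sketch; the pool ID, node ID, user name, and key path are placeholders:

```bash
# Sketch: pool ID, node ID, user name, and key path are placeholders.
az batch node user create \
  --pool-id mypool \
  --node-id tvm-1219235766_1-20160414t192511z \
  --name azureuser \
  --ssh-public-key "$(cat ~/.ssh/id_rsa.pub)"

# Retrieve the node's public IP address and SSH port.
az batch node remote-login-settings show \
  --pool-id mypool \
  --node-id tvm-1219235766_1-20160414t192511z
```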
## Pricing
cognitive-services Overview Vision Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-vision-studio.md
Each of these features has one or more try-it-out experiences in Vision Studio t
## Cleaning up resources If you want to remove a Cognitive Services resource after using Vision Studio, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can't delete your resource directly from Vision Studio, so use one of the following methods:
-* [Using the Azure portal](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#clean-up-resources)
-* [Using the Azure CLI](/azure/cognitive-services/cognitive-services-apis-create-account-cli?tabs=windows#clean-up-resources)
+* [Using the Azure portal](../cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#clean-up-resources)
+* [Using the Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows#clean-up-resources)
> [!TIP] > In Vision Studio, you can find your resource's details (such as its name and pricing tier) as well as switch resources by selecting the Settings icon in the top-right corner of the Vision Studio screen).
If you want to remove a Cognitive Services resource after using Vision Studio, y
## Next steps * Go to [Vision Studio](https://portal.vision.cognitive.azure.com/) to begin using features offered by the service.
-* For more information on the features offered, see the [Azure Computer Vision overview](overview.md).
+* For more information on the features offered, see the [Azure Computer Vision overview](overview.md).
cognitive-services How To Custom Commands Setup Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-commands-setup-speech-sdk.md
You'll also need:
## Step 2: Create a Visual Studio project
-Create a Visual Studio project for UWP development and [install the Speech SDK](/azure/cognitive-services/speech-service/quickstarts/setup-platform?pivots=programming-language-csharp&tabs=uwp).
+Create a Visual Studio project for UWP development and [install the Speech SDK](./quickstarts/setup-platform.md?pivots=programming-language-csharp&tabs=uwp).
## Step 3: Add sample code
Add the code-behind source as follows:
## Next steps > [!div class="nextstepaction"]
-> [How-to: send activity to client application (Preview)](./how-to-custom-commands-send-activity-to-client.md)
+> [How-to: send activity to client application (Preview)](./how-to-custom-commands-send-activity-to-client.md)
cognitive-services Quickstart Translator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-translator.md
To get started, you'll need an active Azure subscription. If you don't have an A
To call the Translator service via the [REST API](reference/rest-api-guide.md), you'll need to include the following headers with each request. Don't worry, we'll include the headers for you in the sample code for each programming language.
-For more information on Translator authentication options, *see* the [Translator v3 reference](/azure/cognitive-services/translator/reference/v3-0-reference#authentication) guide.
+For more information on Translator authentication options, see the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide.
|Header|Value| Condition | | |: |:|
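As a sketch, a translation request with these headers might look like the following `curl` call; the key and region values are placeholders:

```bash
# Sketch: the key and region values are placeholders; the region header is
# required only for regional and multi-service resources.
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr" \
  -H "Ocp-Apim-Subscription-Key: <your-translator-key>" \
  -H "Ocp-Apim-Subscription-Region: <your-resource-region>" \
  -H "Content-Type: application/json" \
  -d '[{"Text": "Hello, world!"}]'
```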
That's it, congratulations! You have learned to use the Translator service to tr
* [**Get sentence length**](translator-text-apis.md#get-sentence-length)
-* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-examples-translations-in-context)
-
+* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-examples-translations-in-context)
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
The following services are Limited Access:
- [Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context): All features - [Face API](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/cognitive-services/computer-vision/context/context): Identify and Verify features - [Computer Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context): Celebrity Recognition feature -- [Azure Video Indexer](/azure/azure-video-indexer/limited-access-features): Celebrity Recognition and Face Identify features
+- [Azure Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features
Features of these services that are not listed above are available without registration.
If you are an existing customer and your application for access is denied, you w
Report abuse of Limited Access services [here](https://aka.ms/reportabuse).
-## Next steps
+## Next steps
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The total number of tokens processed in a given request depends on the length of
### Resources
-The Azure OpenAI service is a new product offering on Azure. You can get started with the Azure OpenAI service the same way as any other Azure product where you [create a resource](how-to/create-resource.md), or instance of the service, in your Azure Subscription. You can read more about Azure's [resource management design](/azure/azure-resource-manager/management/overview).
+The Azure OpenAI service is a new product offering on Azure. You can get started with the Azure OpenAI service the same way as any other Azure product where you [create a resource](how-to/create-resource.md), or instance of the service, in your Azure Subscription. You can read more about Azure's [resource management design](../../azure-resource-manager/management/overview.md).
### Deployments
The use of Azure OpenAI service is governed by the terms of service that were a
## Next steps
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
cognitive-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/reference.md
This article provides details on the REST API endpoints for the Azure OpenAI Service, a service in the Azure Cognitive Services suite. The REST APIs are broken up into two categories:
-* **Management APIs**: The Azure Resource Manager (ARM) provides the management layer in Azure that allows you to create, update and delete resource in Azure. All services use a common structure for these operations. [Learn More](/azure/azure-resource-manager/management/overview)
+* **Management APIs**: Azure Resource Manager (ARM) provides the management layer in Azure that allows you to create, update, and delete resources in Azure. All services use a common structure for these operations. [Learn More](../../azure-resource-manager/management/overview.md)
* **Service APIs**: The Azure OpenAI service provides you with a set of REST APIs for interacting with the resources & models you deploy via the Management APIs. ## Management APIs
curl -X DELETE https://example_resource_name.openai.azure.com/openai/deployments
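As a sketch of a Service API call, a completion request against a deployed model might look like the following; the resource name, deployment name, and `api-version` value are placeholders to check against your resource:

```bash
# Sketch: resource name, deployment name, and api-version are placeholders.
curl "https://example_resource_name.openai.azure.com/openai/deployments/example_deployment/completions?api-version=<api-version>" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -d '{"prompt": "Once upon a time", "max_tokens": 5}'
```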
## Next steps
-Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
+Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Previously updated : 4/15/2022 Last updated : 07/15/2022
The following tables describe how to configure a collection of NSG allow rules.
|--|--|--|--| | TCP | `443` | \* | Allowing all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. | | UDP | `123` | \* | NTP server. |
+| TCP | `5671` | \* | Container Apps control plane. |
+| TCP | `5672` | \* | Container Apps control plane. |
| Any | \* | Infrastructure subnet address space | Allow communication between IPs in the infrastructure subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/23`. |
+#### Considerations
+
+- If you are running HTTP servers, you might need to add ports `80` and `443`.
+- Adding deny rules for some ports and protocols with lower priority than `65000` may cause service interruption and unexpected behavior.
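As a sketch, an explicit outbound allow rule for the control plane ports might look like the following; the resource group, NSG name, rule name, and priority are placeholders:

```bash
# Sketch: resource group, NSG, rule name, and priority are placeholders.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyContainerAppsNsg \
  --name AllowControlPlaneOutbound \
  --priority 300 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 5671 5672 \
  --destination-address-prefixes '*'
```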
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
When you deploy to a new virtual network by using this method, the deployment ca
To deploy a container group to an existing virtual network:
-1. Create a subnet within your existing virtual network, use an existing subnet in which a container group is already deployed, or use an existing subnet emptied of *all* other resources
+1. Create a subnet within your existing virtual network, use an existing subnet in which a container group is already deployed, or use an existing subnet emptied of *all* other resources and configuration.
1. Deploy a container group with [az container create][az-container-create] and specify one of the following: * Virtual network name and subnet name * Virtual network resource ID and subnet resource ID, which allows using a virtual network from a different resource group
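As a sketch, a deployment that references the virtual network and subnet by name might look like the following; all resource names are placeholders:

```bash
# Sketch: all names are placeholders; --vnet and --subnet also accept resource IDs.
az container create \
  --resource-group MyResourceGroup \
  --name mycontainergroup \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --vnet MyVnet \
  --subnet MySubnet
```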
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
This example creates a *Basic* registry, a cost-optimized option for developers
Now use Azure Container Registry to build and push an image. First, create a local working directory and then create a Dockerfile named *Dockerfile* with the single line: `FROM mcr.microsoft.com/hello-world`. This is a simple example to build a Linux container image from the `hello-world` image hosted at Microsoft Container Registry. You can create your own standard Dockerfile and build images for other platforms. If you are working at a bash shell, create the Dockerfile with the following command: ```bash
-echo FROM mcr.microsoft.com/hello-world > Dockerfile
+echo "FROM mcr.microsoft.com/hello-world" > Dockerfile
``` Run the [az acr build][az-acr-build] command, which builds the image and, after the image is successfully built, pushes it to your registry. The following example builds and pushes the `sample/hello-world:v1` image. The `.` at the end of the command sets the location of the Dockerfile, in this case the current directory.
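As a sketch, that build-and-push command might look like the following; the registry name is a placeholder:

```bash
# Sketch: the registry name is a placeholder; "." points at the directory containing the Dockerfile.
az acr build --registry myContainerRegistry --image sample/hello-world:v1 .
```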
cosmos-db Cassandra Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-support.md
You can connect to the Cassandra API in Azure Cosmos DB by using the CQLSH insta
**Windows:**
-If using windows, we recommend you enable the [Windows filesystem for Linux](/windows/wsl/install-win10#install-the-windows-subsystem-for-linux). You can then follow the linux commands below.
+<!-- If using windows, we recommend you enable the [Windows filesystem for Linux](/windows/wsl/install-win10#install-the-windows-subsystem-for-linux). You can then follow the linux commands below. -->
+
+1. Install [Python 2.7](https://www.python.org/downloads/release/python-2718/)
+ 1. Select the Windows x86-64 MSI installer version
+1. Install PIP
+   1. Before installing PIP, download the get-pip.py file.
+ 1. Launch a command prompt if it isn't already open. To do so, open the Windows search bar, type cmd and select the icon.
+ 1. Then, run the following command to download the get-pip.py file:
+ ```bash
+ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
+ ```
+1. Install PIP on Windows:
+   ```bash
+   python get-pip.py
+   ```
+1. Verify the PIP installation. (Look for a message from step 3 to confirm which folder PIP was installed in, and then navigate to that folder and run the command `pip help`.)
+1. Install CQLSH using PIP:
+   ```bash
+   pip3 install cqlsh==5.0.3
+   ```
+1. Run the [CQLSH using the authentication mechanism](manage-data-cqlsh.md#update-your-connection-string).
+
+> [!NOTE]
+> You'll need to set the environment variables to point to the Python27 folder.
**Install on Unix/Linux/Mac:**
cosmos-db Migrate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data.md
Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/too
```bash COPY exampleks.tablename FROM 'data.csv' WITH HEADER = TRUE; ```
+> [!NOTE]
+> Cassandra API supports protocol version 4, which shipped with Cassandra 3.11. There may be issues with using later protocol versions with our API. COPY FROM with a later protocol version can go into a loop and return duplicate rows.
+> Add the protocol version to the cqlsh command:
+```sql
+cqlsh <USERNAME>.cassandra.cosmos.azure.com 10350 -u <USERNAME> -p <PASSWORD> --ssl --protocol-version=4
+```
+##### Add throughput-limiting options to the CQL COPY command
+
+The COPY command in cqlsh supports various parameters to control the rate of ingestion of documents into Azure Cosmos DB.
+
+The default configuration for the COPY command tries to ingest data at a very fast pace and doesn't account for the rate-limiting behavior of Cosmos DB. You should reduce the CHUNKSIZE or INGESTRATE depending on the throughput configured on the collection.
+
+We recommend the following configuration (at minimum) for a collection at 20,000 RUs if the document or record size is 1 KB.
+
+- CHUNKSIZE = 100
+- INGESTRATE = 500
+- MAXATTEMPTS = 10
+
+###### Example commands
+
+- Copying data from the Cassandra API to a local CSV file:
+```sql
+COPY standard1 (key, "C0", "C1", "C2", "C3", "C4") TO 'backup.csv' WITH PAGESIZE=100 AND MAXREQUESTS=1 ;
+```
+
+- Copying data from a local CSV file to the Cassandra API:
+```sql
+COPY standard2 (key, "C0", "C1", "C2", "C3", "C4") FROM 'backup.csv' WITH CHUNKSIZE=100 AND INGESTRATE=100 AND MAXATTEMPTS=10;
+```
>[!IMPORTANT] > Only the open-source Apache Cassandra version of CQLSH COPY is supported. Datastax Enterprise (DSE) versions of CQLSH may encounter errors.
cost-management-billing Consumption Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/consumption-api-overview.md
Title: Azure consumption API overview
+ Title: Azure Consumption API overview
description: Learn how Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. tags: billing Previously updated : 09/15/2021 Last updated : 07/14/2022
-# Azure consumption API overview
+# Azure Consumption API overview
The Azure Consumption APIs give you programmatic access to cost and usage data for your Azure resources. These APIs currently only support Enterprise Enrollments, Web Direct Subscriptions (with a few exceptions), and CSP Azure plan subscriptions. The APIs are continually updated to support other types of Azure subscriptions.
Azure Consumption APIs provide access to:
- Budgets - Balances
+The Azure public endpoint for the Consumption APIs is `consumption.azure.com`.
+
+The Azure Government endpoint for the Consumption APIs is `consumption.azure.us`.
+ ## Usage Details API Use the Usage Details API to get charge and usage data for all Azure first-party resources. Information is in the form of usage detail records, which are currently emitted once per meter, per resource, per day. This information can be used to add up the costs across all resources or to investigate costs and usage for specific resources.
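As a sketch, querying usage details through the Azure Resource Manager endpoint might look like the following; the subscription ID is a placeholder and the `api-version` value is an assumption:

```bash
# Sketch: the subscription ID is a placeholder and the api-version is an assumption.
TOKEN=$(az account get-access-token --query accessToken -o tsv)
curl -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Consumption/usageDetails?api-version=2021-10-01"
```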
data-factory Connector Amazon Rds For Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-oracle.md
Previously updated : 09/07/2021 Last updated : 07/11/2022
This article outlines how to use the copy activity in Azure Data Factory to copy
## Supported capabilities
-This Amazon RDS for Oracle connector is supported for the following activities:
+This Amazon RDS for Oracle connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from an Amazon RDS for Oracle database to any supported sink data store. For a list of data stores that are supported as sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+ For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this Amazon RDS for Oracle connector supports:
data-factory Connector Amazon Rds For Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-sql-server.md
Previously updated : 10/18/2021 Last updated : 07/11/2022 # Copy data from Amazon RDS for SQL Server by using Azure Data Factory or Azure Synapse Analytics
This article outlines how to use the copy activity in Azure Data Factory and Azu
## Supported capabilities
-This Amazon RDS for SQL Server connector is supported for the following activities:
+This Amazon RDS for SQL Server connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|
-You can copy data from an Amazon RDS for SQL Server database to any supported sink data store. For a list of data stores that are supported as sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this Amazon RDS for SQL Server connector supports:
data-factory Connector Amazon Redshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-redshift.md
Previously updated : 09/09/2021 Last updated : 07/11/2022 # Copy data from Amazon Redshift using Azure Data Factory or Synapse Analytics
This article outlines how to use the Copy Activity in Azure Data Factory and Syn
## Supported capabilities
-This Amazon Redshift connector is supported for the following activities:
+This Amazon Redshift connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Amazon Redshift to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this Amazon Redshift connector supports retrieving data from Redshift using query or built-in Redshift UNLOAD support.
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
These properties are supported for the linked service:
| database | Specify the name of the database. | Yes | | servicePrincipalId | Specify the application's client ID. | Yes | | servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
-| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes | | azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the service's cloud environment is used. | No | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
These properties are supported for the linked service:
| url | Endpoint for Data Lake Storage Gen2 with the pattern of `https://<accountname>.dfs.core.windows.net`. | Yes | | servicePrincipalId | Specify the application's client ID. | Yes | | servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
-| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
| servicePrincipalKey | Specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> This property is still supported as-is for `servicePrincipalId` + `servicePrincipalKey`. As ADF adds new service principal certificate authentication, the new model for service principal authentication is `servicePrincipalId` + `servicePrincipalCredentialType` + `servicePrincipalCredential`. | No | | tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes | | azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-db2.md
Previously updated : 06/07/2022 Last updated : 07/11/2022 # Copy data from DB2 using Azure Data Factory or Synapse Analytics
This article outlines how to use the Copy Activity in Azure Data Factory and Syn
## Supported capabilities
-This DB2 database connector is supported for the following activities:
+This DB2 connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from DB2 database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this DB2 connector supports the following IBM DB2 platforms and versions with Distributed Relational Database Architecture (DRDA) SQL Access Manager (SQLAM) version 9, 10 and 11. It utilizes the DDM/DRDA protocol.
data-factory Connector Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-drill.md
Previously updated : 09/09/2021 Last updated : 07/11/2022 # Copy data from Drill using Azure Data Factory or Synapse Analytics
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Drill connector is supported for the following activities:
+This Drill connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Drill to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
The following properties are supported for the Dynamics linked service.
| authenticationType | The authentication type to connect to a Dynamics server. Valid values are "AADServicePrincipal", "Office365" and "ManagedIdentity". | Yes | | servicePrincipalId | The client ID of the Azure AD application. | Yes when authentication is "AADServicePrincipal" | | servicePrincipalCredentialType | The credential type to use for service-principal authentication. Valid values are "ServicePrincipalKey" and "ServicePrincipalCert". | Yes when authentication is "AADServicePrincipal" |
-| servicePrincipalCredential | The service-principal credential. <br/><br/>When you use "ServicePrincipalKey" as the credential type, `servicePrincipalCredential` can be a string that the service encrypts upon linked service deployment. Or it can be a reference to a secret in Azure Key Vault. <br/><br/>When you use "ServicePrincipalCert" as the credential, `servicePrincipalCredential` must be a reference to a certificate in Azure Key Vault. | Yes when authentication is "AADServicePrincipal" |
+| servicePrincipalCredential | The service-principal credential. <br/><br/>When you use "ServicePrincipalKey" as the credential type, `servicePrincipalCredential` can be a string that the service encrypts upon linked service deployment. Or it can be a reference to a secret in Azure Key Vault. <br/><br/>When you use "ServicePrincipalCert" as the credential, `servicePrincipalCredential` must be a reference to a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes when authentication is "AADServicePrincipal" |
| username | The username to connect to Dynamics. | Yes when authentication is "Office365" | | password | The password for the user account you specified as the username. Mark this field with "SecureString" to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes when authentication is "Office365" | | credentials | Specify the user-assigned managed identity as the credential object. <br/><br/> [Create one or multiple user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md), assign them to your data factory and [create credentials](credentials.md) for each user-assigned managed identity.| Yes when authentication is "ManagedIdentity" |
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
Previously updated : 05/30/2022 Last updated : 07/11/2022 # Copy data from Google BigQuery using Azure Data Factory or Synapse Analytics
This article outlines how to use Copy Activity in Azure Data Factory and Synapse
## Supported capabilities
-This Google BigQuery connector is supported for the following activities:
+This Google BigQuery connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Google BigQuery to any supported sink data store. For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install a driver to use this connector.
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-greenplum.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Greenplum connector is supported for the following activities:
+This Greenplum connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Greenplum to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver to use this connector.
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hbase.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This HBase connector is supported for the following activities:
+This HBase connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from HBase to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver to use this connector.
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hive.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Hive connector is supported for the following activities:
+This Hive connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Hive to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver to use this connector.
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-impala.md
This article outlines how to use Copy Activity in an Azure Data Factory or Synap
## Supported capabilities
-This Impala connector is supported for the following activities:
+This Impala connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Impala to any supported sink data store. For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install a driver to use this connector.
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-informix.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Informix connector is supported for the following activities:
+This Informix connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-You can copy data from Informix source to any supported sink data store, or copy from any supported source data store to Informix sink. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
## Prerequisites
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This MariaDB connector is supported for the following activities:
+This MariaDB connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from MariaDB to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver to use this connector.
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-access.md
This article outlines how to use the Copy Activity in Azure Data Factory and Syn
## Supported capabilities
-This Microsoft Access connector is supported for the following activities:
+This Microsoft Access connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-You can copy data from Microsoft Access source to any supported sink data store, or copy from any supported source data store to Microsoft Access sink. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
## Prerequisites
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
This article outlines how to use the Copy Activity in Azure Data Factory and Syn
## Supported capabilities
-This MySQL connector is supported for the following activities:
+This MySQL connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from MySQL database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this MySQL connector supports MySQL **version 5.6, 5.7 and 8.0**.
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-netezza.md
This article outlines how to use Copy Activity in Azure Data Factory or Synapse
## Supported capabilities
-This Netezza connector is supported for the following activities:
+This Netezza connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-You can copy data from Netezza to any supported sink data store. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
The Netezza connector supports parallel copying from source. See the [Parallel copy from Netezza](#parallel-copy-from-netezza) section for details.
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odata.md
The following properties are supported for an OData linked service:
| servicePrincipalId | Specify the Azure Active Directory application's client ID. | No |
| aadServicePrincipalCredentialType | Specify the credential type to use for service principal authentication. Allowed values are: `ServicePrincipalKey` or `ServicePrincipalCert`. | No |
| servicePrincipalKey | Specify the Azure Active Directory application's key. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
-| servicePrincipalEmbeddedCert | Specify the base64 encoded certificate of your application registered in Azure Active Directory. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
+| servicePrincipalEmbeddedCert | Specify the base64 encoded certificate of your application registered in Azure Active Directory, and ensure the certificate content type is **PKCS #12**. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
| servicePrincipalEmbeddedCertPassword | Specify the password of your certificate if your certificate is secured with a password. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the top-right corner of the Azure portal. | No |
| aadResourceId | Specify the Azure AD resource you are requesting for authorization. | No |
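
For the certificate option, a hedged PowerShell sketch like the following produces the base64 string expected by `servicePrincipalEmbeddedCert`; the file name is a placeholder, and the certificate is assumed to already be exported as a PKCS #12 (.pfx) file.

```powershell
# Base64-encode a PKCS #12 (.pfx) certificate for use as servicePrincipalEmbeddedCert.
# -AsByteStream requires PowerShell 6+; use "-Encoding Byte" on Windows PowerShell 5.1.
$pfxBytes = Get-Content '<path-to-certificate>.pfx' -AsByteStream
[System.Convert]::ToBase64String($pfxBytes)
```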
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle.md
This article outlines how to use the copy activity in Azure Data Factory to copy
## Supported capabilities
-This Oracle connector is supported for the following activities:
+This Oracle connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|
-You can copy data from an Oracle database to any supported sink data store. You also can copy data from any supported source data store to an Oracle database. For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources or sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this Oracle connector supports:
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-phoenix.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Phoenix connector is supported for the following activities:
+This Phoenix connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Phoenix to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
The service provides a built-in driver to enable connectivity; therefore, you don't need to manually install any driver to use this connector.
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md
This article outlines how to use the Copy Activity in Azure Data Factory and Syn
## Supported capabilities
-This PostgreSQL connector is supported for the following activities:
+This PostgreSQL connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities | IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from PostgreSQL database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
Specifically, this PostgreSQL connector supports PostgreSQL **version 7.4 and above**.
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
Specify base64-encoded contents of a PFX file and the password.
"password":"****" } ```
+The certificate needs to be an X.509 certificate. To convert it to a PFX file, you can use your favorite utility. For base64 encoding, you may use the following PowerShell snippet.
+```powershell
+# Read the PFX file as raw bytes (PowerShell 6+; on Windows PowerShell 5.1,
+# use "-Encoding Byte" instead of "-AsByteStream").
+$fileContentBytes = Get-Content 'enr.dev.webactivity.pfx' -AsByteStream
+
+# Write the base64-encoded bytes to a text file.
+[System.Convert]::ToBase64String($fileContentBytes) | Out-File 'pfx-encoded-bytes.txt'
+```
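+
+To sanity-check the encoding, a small round-trip sketch like the following decodes the text back into a PFX and loads it; the file names and the password placeholder are assumptions carried over from the snippet above.
+
+```powershell
+# Decode the base64 text back to bytes and write a PFX copy.
+$bytes = [System.Convert]::FromBase64String((Get-Content 'pfx-encoded-bytes.txt' -Raw))
+[System.IO.File]::WriteAllBytes("$PWD\roundtrip.pfx", $bytes)
+
+# Load the copy to confirm it's a valid certificate (supply your PFX password).
+$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("$PWD\roundtrip.pfx", '****')
+$cert.Subject
+```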
### Managed Identity

Specify the resource URI for which the access token will be requested using the managed identity for the data factory or Synapse workspace instance. To call the Azure Resource Management API, use `https://management.azure.com/`. For more information about how managed identities work, see the [managed identities for Azure resources overview page](../active-directory/managed-identities-azure-resources/overview.md).
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Previously updated : 07/08/2022 Last updated : 07/13/2022 # Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
* [Adding activities](#adding-activities)
* [Iteration & conditionals container view](#iteration-and-conditionals-container-view)
+ [**Monitoring experimental view**](#monitoring-experimental-view)
+ * [Simplified default monitoring view](#simplified-default-monitoring-view)
+### Dataflow data first experimental view
+
+UI (user interface) changes have been made to mapping data flows. These changes were made to simplify and streamline the dataflow creation process so that you can focus on what your data looks like.
You have two options to add activities to your iteration and conditional activit
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-18.png" alt-text="Screenshot of the drop-down list of activities in the right-most activity."::: Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas inside of the container.
-
+
+
+### Monitoring experimental view
+
+UI (user interface) changes have been made to the monitoring page. These changes were made to simplify and streamline your monitoring experience.
+The monitoring experience remains the same as detailed in [Visually monitor Azure Data Factory](monitor-visually.md), except for the items detailed below.
+
+#### Simplified default monitoring view
+
+The default monitoring view has been simplified with fewer default columns. You can add/remove columns if you'd like to personalize your monitoring view. Changes to the default will be cached.
++
+**Default columns**
+
+| **Column name** | **Description** |
+| | |
+| Pipeline Name | Name of the pipeline |
+| Run Start | Start date and time for the pipeline run (MM/DD/YYYY, HH:MM:SS AM/PM) |
+| Duration | Run duration (HH:MM:SS) |
+| Triggered By | The name of the trigger that started the pipeline |
+| Status | **Failed**, **Succeeded**, **In Progress**, **Cancelled**, or **Queued** |
+| Parameters | Parameters for the pipeline run (name/value pairs) |
+| Error | If the pipeline failed, the run error |
+| Run ID | ID of the pipeline run |
++
+You can edit your default view by clicking **Edit Columns**.
++
+Add columns by clicking **Add column** or remove columns by clicking the trashcan icon.
++
+
## Provide feedback

We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested.

:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-19.png" alt-text="Screenshot of the feedback survey where user can select between one and five stars.":::
+## Next steps
+
+- [What's New in Azure Data Factory](whats-new.md)
+- [How to manage Azure Data Factory Settings](how-to-manage-settings.md)
data-factory How To Use Sql Managed Instance With Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
You can now move your SQL Server Integration Services (SSIS) projects, packages,
- [Execute SSIS packages by Azure SQL Managed Instance Agent job](how-to-invoke-ssis-package-managed-instance-agent.md)
- [Clean up SSISDB logs by Azure SQL Managed Instance Agent job](#clean-up-ssisdb-logs)
- [Azure-SSIS IR failover with Azure SQL Managed Instance](configure-bcdr-azure-ssis-integration-runtime.md)
-- [Migrate on-premises SSIS workloads to SSIS in ADF with Azure SQL Managed Instance as database workload destination](scenario-ssis-migration-overview.md#azure-sql-managed-instance-as-database-workload-destination)
+- [Migrate on-premises SSIS workloads to SSIS in ADF](scenario-ssis-migration-overview.md)
## Provision Azure-SSIS IR with SSISDB hosted by Azure SQL Managed Instance
data-factory Scenario Ssis Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-overview.md
Previously updated : 10/22/2021 Last updated : 07/07/2022
-# Migrate on-premises SSIS workloads to SSIS in ADF
+# Migrate on-premises SSIS workloads to SSIS in ADF or Synapse Pipelines
## Overview

When you migrate your database workloads from SQL Server on premises to Azure database services, namely Azure SQL Database or Azure SQL Managed Instance, your ETL workloads on SQL Server Integration Services (SSIS), as one of the primary value-added services, will need to be migrated as well.
-Azure-SSIS Integration Runtime (IR) in Azure Data Factory (ADF) supports running SSIS packages. Once Azure-SSIS IR is provisioned, you can then use familiar tools, such as SQL Server Data Tools (SSDT)/SQL Server Management Studio (SSMS), and command-line utilities, such as dtinstall/dtutil/dtexec, to deploy and run your packages in Azure. For more info, see [Azure SSIS lift-and-shift overview](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview).
+Azure-SSIS Integration Runtime (IR) in Azure Data Factory (ADF) or Synapse Pipelines supports running SSIS packages. Once Azure-SSIS IR is provisioned, you can then use familiar tools, such as SQL Server Data Tools (SSDT)/SQL Server Management Studio (SSMS), and command-line utilities, such as dtinstall/dtutil/dtexec, to deploy and run your packages in Azure. For more info, see [Azure SSIS lift-and-shift overview](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview).
This article highlights the migration process of your ETL workloads from on-premises SSIS to SSIS in ADF. The migration process consists of two phases: **Assessment** and **Migration**.
Data Migration Assistant (DMA) is a freely downloadable tool for this purpose th
- Informative issues: partially supported or deprecated features that are used in source packages. DMA provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps to resolve.
+You can get a detailed list of migration blockers and informative issues from the assessment.
+ ### Four storage types for SSIS packages - SSIS catalog (SSISDB). Introduced with SQL Server 2012 and contains a set of stored procedures, views, and table-valued functions used for working with SSIS projects/packages.
Data Migration Assistant (DMA) is a freely downloadable tool for this purpose th
DMA currently supports the batch-assessment of packages stored in **File System**, **Package Store**, and **SSIS catalog** since **DMA version v5.0**.
-Get [DMA](/sql/dma/dma-overview), and [perform your package assessment with it](/sql/dma/dma-assess-ssis).
+Get [DMA](/sql/dma/dma-overview), and [perform your package assessment with it](/sql/dma/dma-assess-ssis).
## Migration
-Depending on the [storage types](#four-storage-types-for-ssis-packages) of source SSIS packages and the migration destination of database workloads, the steps to migrate **SSIS packages** and **SQL Server Agent jobs** that schedule SSIS package executions may vary. There are two scenarios:
--- [**Azure SQL Managed Instance** as database workload destination](#azure-sql-managed-instance-as-database-workload-destination)-- [**Azure SQL Database** as database workload destination](#azure-sql-database-as-database-workload-destination)
+Depending on the [storage types](#four-storage-types-for-ssis-packages) of source SSIS packages, the steps to migrate **SSIS packages** and **SQL Server Agent jobs** that schedule SSIS package executions may vary.
You can also use [SSIS DevOps Tools](/sql/integration-services/devops/ssis-devops-overview) to batch-redeploy packages to the migration destination.
-### **Azure SQL Managed Instance** as database workload destination
-
-| **Package storage type** |How to batch-migrate SSIS packages|How to batch-migrate SSIS jobs|
-|-|-|-|
-|SSISDB|<li> Redeploy packages via SSDT/SSMS to SSISDB hosted in Azure Managed Instance. For more info, see [Deploying SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial). <li> [Migrate **SSISDB**](scenario-ssis-migration-ssisdb-mi.md)|<li>[Migrate SSIS jobs to Azure SQL Managed Instance agent](scenario-ssis-migration-ssisdb-mi.md#ssis-jobs-to-sql-managed-instance-agent) <li>Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
-|File System|Redeploy them to file shares/Azure Files via dtinstall/dtutil/manual copy, or to keep in file systems to access via VNet/Self-Hosted IR. For more info, see [dtutil utility](/sql/integration-services/dtutil-utility).|<li>[Migrate SSIS jobs to Azure SQL Managed Instance agent](scenario-ssis-migration-ssisdb-mi.md#ssis-jobs-to-sql-managed-instance-agent) <li> Migrate with [SSIS Job Migration Wizard in SSMS](how-to-migrate-ssis-job-ssms.md) <li>Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
-|SQL Server (MSDB)|Export them to file systems/file shares/Azure Files via SSMS/dtutil. For more info, see [Exporting SSIS packages](/sql/integration-services/service/package-management-ssis-service#import-and-export-packages).|Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
-|Package Store|Export them to package store via SSMS/dtutil or redeploy them to package store via dtinstall/dtutil/manual copy. For more info, see [Manage packages with Azure-SSIS Integration Runtime package store](azure-ssis-integration-runtime-package-store.md).|<li>[Migrate SSIS jobs to Azure SQL Managed Instance agent](scenario-ssis-migration-ssisdb-mi.md#ssis-jobs-to-sql-managed-instance-agent) <li> Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
-
-### **Azure SQL Database** as database workload destination
-
-| **Package storage type** |How to batch-migrate SSIS packages|How to batch-migrate jobs|
+| **Package storage type** |How to migrate SSIS packages|How to migrate SSIS jobs|
|-|-|-|
-|SSISDB|Redeploy packages via SSDT/SSMS to SSISDB hosted in Azure SQL Database. For more info, see [Deploying SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial).|Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
-|File System|Redeploy them to file shares/Azure Files via dtinstall/dtutil/manual copy, or to keep in file systems to access via VNet/Self-Hosted IR. For more info, see [dtutil utility](/sql/integration-services/dtutil-utility).|<li> Migrate with [SSIS Job Migration Wizard in SSMS](how-to-migrate-ssis-job-ssms.md) <li> Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
+|SSISDB|Redeploy packages via SSDT/SSMS to SSISDB hosted in Azure Managed Instance. For more info, see [Deploying SSIS packages in Azure](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial). |<li> Migrate from SQL Server Agent on premises to SQL Managed Instance agent via scripts/manual copy. For more info, see [run SSIS packages via Azure SQL Managed Instance Agent](how-to-invoke-ssis-package-managed-instance-agent.md) <li>Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
+|File System|Redeploy them to file shares/Azure Files via dtinstall/dtutil/manual copy, or to keep in file systems to access via VNet/Self-Hosted IR. For more info, see [dtutil utility](/sql/integration-services/dtutil-utility).|<li>Migrate from SQL Server Agent on premises to SQL Managed Instance agent via scripts/manual copy. For more info, see [run SSIS packages via Azure SQL Managed Instance Agent](how-to-invoke-ssis-package-managed-instance-agent.md) <li> Migrate with [SSIS Job Migration Wizard in SSMS](how-to-migrate-ssis-job-ssms.md) <li>Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
|SQL Server (MSDB)|Export them to file systems/file shares/Azure Files via SSMS/dtutil. For more info, see [Exporting SSIS packages](/sql/integration-services/service/package-management-ssis-service#import-and-export-packages).|Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
-|Package Store|Export them to file systems/file shares/Azure Files via SSMS/dtutil or redeploy them to file shares/Azure Files via dtinstall/dtutil/manual copy or keep them in file systems to access via VNet/Self-Hosted IR. For more info, see dtutil utility. For more info, see [dtutil utility](/sql/integration-services/dtutil-utility).|Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
+|Package Store|Export them to package store via SSMS/dtutil or redeploy them to package store via dtinstall/dtutil/manual copy. For more info, see [Manage packages with Azure-SSIS Integration Runtime package store](azure-ssis-integration-runtime-package-store.md).|<li>Migrate from SQL Server Agent on premises to SQL Managed Instance agent via scripts/manual copy. For more info, see [run SSIS packages via Azure SQL Managed Instance Agent](how-to-invoke-ssis-package-managed-instance-agent.md) <li> Convert them into ADF pipelines/activities/triggers via scripts/SSMS/ADF portal. For more info, see [SSMS scheduling feature](/sql/integration-services/lift-shift/ssis-azure-schedule-packages-ssms).|
## Additional resources
It is also a practical way to use [SSIS DevOps Tools](/sql/integration-services/
- [Database Migration Assistant](/sql/dma/dma-overview)
- [Lift and shift SSIS workloads to the cloud](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview)
- [SSIS DevOps Tools](/sql/integration-services/devops/ssis-devops-overview)
-- [Migrate SSIS packages to Azure SQL Managed Instance](../dms/how-to-migrate-ssis-packages-managed-instance.md)
- [Redeploy packages to Azure SQL Database](../dms/how-to-migrate-ssis-packages.md)
- [On-premises data access from Azure-SSIS Integration Runtime](https://techcommunity.microsoft.com/t5/sql-server-integration-services/vnet-or-no-vnet-secure-data-access-from-ssis-in-azure-data/ba-p/1062056)
data-factory Scenario Ssis Migration Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-rules.md
+
+ Title: On-premises SQL Server Integration Services (SSIS) workloads to SSIS in Azure Data Factory (ADF) or Synapse Pipelines migration rules
+description: SSIS workloads migration assessment rules.
+++++ Last updated : 07/07/2022++
+# SSIS migration assessment rules
++
+When planning a migration of on-premises SSIS to SSIS in Azure Data Factory (ADF) or Synapse Pipelines, assessment will help identify issues with the source SSIS packages that would prevent a successful migration.
+
+[Data Migration Assistant (DMA) for Integration Services](/sql/dma/dma-assess-ssis) can perform an assessment of your project. The following is the full list of potential issues, also known as DMA rules.
+
+### [1001] Connection with host name may fail
+
+**Impact**
+
+A connection that contains a host name may fail, typically because the Azure virtual network requires the correct configuration to support DNS name resolution.
+
+**Recommendation**
+
+You can use the following options for the SSIS integration runtime to access these resources:
+
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network)
+- Migrate your data to Azure and use an Azure resource endpoint.
+- Use managed identity authentication if moving to Azure resources.
+- [Use self-hosted IR to connect to on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+
+### [1002] Connection with absolute or UNC path might not be accessible
+
+**Impact**
+
+A connection that contains an absolute or UNC path may fail.
+
+**Recommendation**
+
+You can use the following options for the SSIS integration runtime to access these resources:
+
+- [Change to %TEMP%](/azure/data-factory/ssis-azure-files-file-shares)
+- [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).
+- [Use self-hosted IR to connect to on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+
+### [1003] Connection with Windows authentication may fail
+
+**Impact**
+
+If a connection string uses Windows authentication, it may fail. Windows authentication requires additional configuration steps in Azure.
+
+**Recommendation**
+
+There are [four methods to access data stores with Windows authentication in the Azure-SSIS integration runtime](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth):
+
+- Set up an activity-level execution context
+- Set up a catalog-level execution context
+- Persist credentials via the cmdkey command (see the sketch after this list)
+- Mount drives at package execution time (non-persistent)
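+
+For example, the cmdkey option can be scripted roughly as follows; the target, account, and password values are placeholders for your environment.
+
+```powershell
+# Persist Windows credentials for a file share or server host so that packages
+# running on the Azure-SSIS IR can authenticate; all values below are assumptions.
+cmdkey /add:"<file share or server host name>" /user:"<domain>\<user name>" /pass:"<password>"
+```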
+
+### [1004] Connection with non-built-in provider or driver may fail
+
+**Impact**
+
+Azure-SSIS IR only includes built-in providers and drivers by default. Without customization to install the provider or driver, the connection may fail.
+
+**Recommendation**
+
+[Customize the Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) to install the non-built-in provider or driver.
+
+### [1005] Analysis Services Connection Manager cannot use an account with MFA enabled
+
+**Impact**
+
+If you use SSIS in Azure Data Factory (ADF) and want to connect to an Azure Analysis Services (AAS) instance, you cannot use an account with Multi-Factor Authentication (MFA) enabled.
+
+**Recommendation**
+
+Use an account that doesn't require any interactivity/MFA, or use a service principal instead.
+
+**Additional information**
+
+[Configuration of the Analysis Services Connection Manager](/sql/integration-services/connection-manager/analysis-services-connection-manager)
+
+### [1006] Windows environment variable in Connection Manager is discovered
+
+**Impact**
+
+A Connection Manager using a Windows environment variable is discovered.
+
+**Recommendation**
+
+You can use the following methods to have Windows environment variables work in the SSIS integration runtime:
+
+- [Customize SSIS integration runtime setup](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) with Windows environment variables.
+- [Use package or project parameters](/sql/integration-services/integration-services-ssis-package-and-project-parameters).
+
+### [1007] SQL Server Native Client (SNAC) OLE DB driver is deprecated
+
+**Recommendation**
+
+[Use the latest Microsoft OLE DB Driver](/sql/connect/oledb/oledb-driver-for-sql-server).
+
+### [2001] Component only supported in enterprise edition
+
+**Impact**
+
+The component is only supported in the Azure-SSIS integration runtime enterprise edition.
+
+**Recommendation**
+
+[Configure the Azure-SSIS integration runtime to enterprise edition](/azure/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition).
+
+### [2002] ORC and Parquet file formats aren't enabled by default
+
+**Impact**
+
+The ORC and Parquet file formats need a JRE, which isn't installed by default in the Azure-SSIS integration runtime.
+
+**Recommendation**
+
+Install a compatible JRE by [customizing setup for the Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup).
+
+### [2003] Third-party component isn't enabled by default
+
+**Impact**
+
+The Azure-SSIS Integration Runtime isn't enabled with third-party components by default, so a third-party component may fail.
+
+**Recommendation**
+
+- Contact the third party to get an SSIS integration runtime compatible version.
+- For an in-house or open-source component, [customize the Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) to install the necessary SQL Server 2017 compatible components.
+
+### [2004] Azure Blob source and destination is discovered
+
+**Recommendation**
+
+We recommend using the [Flexible File source](/sql/integration-services/data-flow/flexible-file-source) or [destination](/sql/integration-services/data-flow/flexible-file-destination), which has more advanced functions than Azure Blob.
+
+### [2005] Non-built-in log provider may not be installed by default
+
+**Impact**
+
+The Azure-SSIS integration runtime is provisioned with built-in log providers by default only, so a custom log provider may fail.
+
+**Recommendation**
+
+[Customize the Azure-SSIS integration runtime](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) to install the non-built-in log provider.
+
+### [3001] Absolute or UNC path is discovered in Execute Process Task
+
+**Impact**
+
+Azure-SSIS Integration Runtime might not be able to launch your executable(s) with an absolute or UNC path.
+
+**Recommendation**
+
+You can use the following options for the SSIS integration runtime to launch your executable(s):
+
+- [Migrate your executable(s) to Azure Files](/azure/data-factory/ssis-azure-files-file-shares).
+- [Join Azure-SSIS IR to a virtual network](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network) that connects to on-premises sources.
+- If necessary, [customize a setup script to install your executable(s)](/azure/data-factory/how-to-configure-azure-ssis-ir-custom-setup) in advance when starting the IR.
+
+### [4001] Absolute or UNC configuration path is discovered in package configuration
+
+**Impact**
+
+A package with an absolute or UNC configuration path may fail in Azure-SSIS Integration Runtime.
+
+**Recommendation**
+
+You can use the following options for the SSIS integration runtime to access these resources:
+
+- [Migrate your files to Azure Files](/azure/data-factory/ssis-azure-files-file-shares)
+- [Join Azure-SSIS IR to a virtual network that connects to on-premises sources](/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network).
+- [Use self-hosted IR to connect to on-premises sources](/azure/data-factory/self-hosted-integration-runtime-proxy-ssis).
+
+### [4002] Registry entry is discovered in package configuration
+
+**Impact**
+
+A registry entry in package configuration may fail in Azure-SSIS Integration Runtime.
+
+**Recommendation**
+
+Use other package configuration types. An XML configuration file is recommended.
+
+**Additional information**
+
+[Package Configurations](/sql/integration-services/package-configurations)
+
+### [4003] Package encrypted with user key isn't supported
+
+**Impact**
+
+A package encrypted with a user key isn't supported in Azure-SSIS Integration Runtime.
+
+**Recommendation**
+
+You can use the following options:
+
+- Change the package protection level to "Encrypt All With Password" or "Encrypt Sensitive With Password".
+- Keep or change the package protection level to "Encrypt Sensitive With User Key", and override the connection manager property during package execution.
+
+**Additional information**
+
+[Access Control for Sensitive Data in Packages](/sql/integration-services/security/access-control-for-sensitive-data-in-packages)
data-factory Scenario Ssis Migration Ssisdb Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-ssisdb-mi.md
- Title: SSIS migration with Azure SQL Managed Instance as the database workload destination
-description: SSIS migration with Azure SQL Managed Instance as the database workload destination.
----- Previously updated : 10/22/2021-
-# SSIS migration with Azure SQL Managed Instance as the database workload destination
--
-When migrating database workloads from a SQL Server instance to Azure SQL Managed Instance, you should be familiar with [Azure Data Migration Service](../dms/dms-overview.md)(DMS), and the [network topologies for SQL Managed Instance migrations using DMS](../dms/resource-network-topologies.md).
-
-This article focuses on the migration of SQL Server Integration Service (SSIS) packages stored in SSIS catalog (SSISDB) and SQL Server Agent jobs that schedule SSIS package executions.
-
-## Migrate packages in SSIS catalog (SSISDB)
-
-Database Migration Service can migrate SSIS packages stored in SSISDB, as described in the article:
-[Migrate SSIS packages to SQL Managed Instance](../dms/how-to-migrate-ssis-packages-managed-instance.md).
-
-## SSIS jobs to SQL Managed Instance agent
-
-SQL Managed Instance has a native, first-class scheduler just like SQL Server Agent on premises. You can [run SSIS packages via Azure SQL Managed Instance Agent](how-to-invoke-ssis-package-managed-instance-agent.md).
-
-Since a migration tool for SSIS jobs is not yet available, they have to be migrated from SQL Server Agent on premises to SQL Managed Instance agent via scripts/manual copy.
-
-## Additional resources
--- [Azure Data Factory](./introduction.md)-- [Azure-SSIS Integration Runtime](./create-azure-ssis-integration-runtime.md)-- [Azure Database Migration Service](../dms/dms-overview.md)-- [Network topologies for SQL Managed Instance migrations using DMS](../dms/resource-network-topologies.md)-- [Migrate SSIS packages to an SQL Managed Instance](../dms/how-to-migrate-ssis-packages-managed-instance.md)-
-## Next steps
--- [Connect to SSISDB in Azure](/sql/integration-services/lift-shift/ssis-azure-connect-to-catalog-database)-- [Run SSIS packages deployed in Azure](/sql/integration-services/lift-shift/ssis-azure-run-packages)
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Defender for Cloud includes vulnerability scanning for your machines at no extra
> > Defender for Cloud's integrated vulnerability assessment solution works seamlessly with Azure Arc. When you've deployed Azure Arc, your machines will appear in Defender for Cloud and no Log Analytics agent is required.
-Deploy the vulnerability assessment solution that best meets your needs and budget:
- If you don't want to use the vulnerability assessment powered by Qualys, you can use [Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md) or [deploy a BYOL solution](deploy-vulnerability-assessment-byol-vm.md) with your own Qualys license, Rapid7 license, or another vulnerability assessment solution.

## Availability
devops-project Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/overview.md
## Advantages of using DevOps Starter
- DevOps starter the following supports two CI/CD providers, to automate your deployments
+ DevOps starter supports the following two CI/CD providers to automate your deployments:
* [GitHub Actions](https://github.com/features/actions)
* [Azure DevOps](https://azure.microsoft.com/services/devops)
event-grid Get Topic Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/get-topic-schema.md
+
+ Title: Get schema supported by an Azure Event Grid topic
+description: This article describes how to get the type of schema (Event Grid event schema, cloud event schema, or custom input schema) supported by an Azure Event Grid topic.
+ Last updated : 07/14/2022 ++
+# Get the type of schema supported by an Azure Event Grid topic
+This article describes how to get the type of schema (Event Grid event schema, cloud event schema, or custom input schema) supported by an Azure Event Grid topic.
++
+
+## Get the schema type
+Here's a sample curl command that sends an **HTTP OPTIONS** message to the topic. The response contains the header property `aeg-input-event-schema`, which gives you the schema type supported by the topic.
+
+```bash
+curl -X OPTIONS "<TOPIC ENDPOINT>" -H "aeg-sas-key: <ACCESS KEY>"
+```
+
+Here's the sample header output from the command:
+
+```bash
+Allow: POST,OPTIONS
+Content-Length: 0
+Server: Microsoft-HTTPAPI/2.0
+Strict-Transport-Security: max-age=31536000; includeSubDomains
+api-supported-versions: 2018-01-01
+x-ms-request-id: 2dd9ca30-c2d9-4c08-b9e1-d29c5ebd9802
+aeg-input-event-schema: EventGridEvent
+Date: Wed, 13 Jul 2022 20:04:06 GMT
+```
+
+The value of the `aeg-input-event-schema` header property gives you the type of the schema supported by the topic. In this example, it's the Event Grid event schema. The value for this property is set to one of these values: `EventGridEvent`, `CustomInputEvent`, `CloudEventV10`.
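+
+If you prefer PowerShell over curl, a minimal sketch like the following reads the same header; the endpoint and access-key placeholders are assumptions for your topic.
+
+```powershell
+# Send an OPTIONS request to the topic and read the schema-type response header.
+$response = Invoke-WebRequest -Method Options -Uri "<TOPIC ENDPOINT>" `
+    -Headers @{ 'aeg-sas-key' = '<ACCESS KEY>' }
+$response.Headers['aeg-input-event-schema']
+```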
++
+## Next steps
+For information about schemas, see the following articles:
+
+- [Event Grid event schema](event-schema.md)
+- [Cloud event schema](cloud-event-schema.md)
+- [Custom input schema](input-mappings.md)
healthcare-apis Get Started With Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started-with-iot.md
Title: Get started with the MedTech service - Azure Health Data Services
+ Title: Get started with the MedTech service in Azure Health Data Services
description: This document describes how to get started with the MedTech service in Azure Health Data Services.-+ Previously updated : 05/03/2022- Last updated : 07/14/2022+
-# Get started with the MedTech service
+# Get started with MedTech service in Azure Health Data Services
-This article outlines the basic steps to get started with the MedTech service in [Azure Health Data Services](../healthcare-apis-overview.md).
+This article outlines the basic steps to get started with MedTech service in [Azure Health Data Services](../healthcare-apis-overview.md). MedTech service first processes data that has been sent to an event hub from a medical device, and then saves the data to the Fast Healthcare Interoperability Resources (FHIR&#174;) service as Observation resources. This procedure makes it possible to link the FHIR service Observation to patient and device resources.
-As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource group and deploy Azure resources.
+The following diagram shows the four development steps of the data flow needed to get MedTech service to receive data from a device and send it to FHIR service.
-You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts.
+- Step 1 introduces the subscription and permissions prerequisites needed.
-[![Get Started with DICOM](media/get-started-with-iot.png)](media/get-started-with-iot.png#lightbox)
+- Step 2 shows how Azure services are provisioned for MedTech services.
-## Create a workspace in your Azure subscription
+- Step 3 represents the flow of data sent from devices to the event hub and MedTech service.
-You can create a workspace from the [Azure portal](../healthcare-apis-quickstart.md) or using PowerShell, Azure CLI and REST API]. You can find scripts from the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+- Step 4 demonstrates the path needed to verify data sent to FHIR service.
+
+[![MedTech service data flow diagram.](media/get-started-with-iot.png)](media/get-started-with-iot.png#lightbox)
+
+Follow these four steps and you'll be able to deploy MedTech service effectively:
+
+## Step 1: Prerequisites for using Azure Health Data Services
+
+Before you can begin sending data from a device, you need to determine if you have the appropriate Azure subscription and Azure RBAC (Role-Based Access Control) roles. If you already have the appropriate subscription and roles, you can skip this step.
+
+- If you don't have an Azure subscription, see the [Subscription decision guide](https://docs.microsoft.com/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- You must have the appropriate RBAC roles for the subscription resources you want to use. The roles required for a user to complete the provisioning would be Contributor AND User Access Administrator OR Owner. The Contributor role allows the user to provision resources, and the User Access Administrator role allows the user to grant access so resources can send data between them. The Owner role can perform both. For more information, see [Azure role-based access control](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/considerations/roles).
+
+## Step 2: Provision services and obtain permissions
+
+After obtaining the required prerequisites, you must create a workspace and provision instances of Event Hubs service, FHIR service, and MedTech service. You must also give MedTech service permission to read device data from the event hub and permission to read and write to FHIR service.
+
+### Create a resource group and workspace
+
+You must first create a resource group to contain the deployed instances of workspace, Event Hubs service, FHIR service, and MedTech service. A [workspace](../workspace-overview.md) is required as a container for Azure Health Data Services. After you create a workspace from the [Azure portal](../healthcare-apis-quickstart.md), a FHIR service and MedTech service can be deployed to the workspace.
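+
+As a hedged sketch, the resource-group step can be scripted with Azure PowerShell; the group name and region below are assumptions, and the workspace itself can then be created in that group by following the portal quickstart linked above.
+
+```powershell
+# Sign in, then create the resource group that will hold the workspace,
+# Event Hubs, FHIR service, and MedTech service instances.
+Connect-AzAccount
+New-AzResourceGroup -Name "rg-medtech-demo" -Location "eastus2"
+```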
> [!NOTE]
-> There are limits to the number of workspaces and the number of MedTech service instances you can create in each Azure subscription.
+> There are limits to the number of workspaces and the number of MedTech service instances you can create in each Azure subscription. For more information, see [IoT Connector FAQs](iot-connector-faqs.md).
+
+### Provision an Event Hubs instance to a namespace
+
+In order to provision an event hub, an Event Hubs namespace must first be provisioned, because Event Hubs namespaces are logical containers for event hubs. The namespace must be associated with a resource group, and the event hub and namespace need to be provisioned in the same Azure subscription. For more information, see [Event Hubs](../../event-hubs/event-hubs-create.md).
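+
+A minimal provisioning sketch using the Az.EventHub module might look like the following; all names and the region are assumptions for your environment.
+
+```powershell
+# Create the namespace (the logical container), then an event hub inside it.
+New-AzEventHubNamespace -ResourceGroupName "rg-medtech-demo" -Name "ehns-medtech-demo" -Location "eastus2"
+New-AzEventHub -ResourceGroupName "rg-medtech-demo" -NamespaceName "ehns-medtech-demo" -Name "devicedata"
+```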
+
+Once an event hub is provisioned, you must give MedTech service permission to read data from it. MedTech service retrieves data from the event hub using a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md), which is assigned the Azure Event Hubs data receiver role. For more information on how to assign the managed-identity role to MedTech service from an Event Hubs service instance, see [Granting MedTech service access](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-the-medtech-service-access).
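+
+For illustration, the role assignment can be scripted as a hedged sketch like this; the object ID and scope path are placeholders you'd replace with your MedTech service identity and your event hub's resource ID.
+
+```powershell
+# Grant the MedTech service's system-assigned managed identity read access to
+# the event hub by assigning the built-in Azure Event Hubs Data Receiver role.
+New-AzRoleAssignment -ObjectId "<MedTech service managed identity object ID>" `
+    -RoleDefinitionName "Azure Event Hubs Data Receiver" `
+    -Scope "/subscriptions/<subscription ID>/resourceGroups/rg-medtech-demo/providers/Microsoft.EventHub/namespaces/ehns-medtech-demo/eventhubs/devicedata"
+```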
+
+### Provision a FHIR service instance to the same workspace
-## Create the FHIR service and an event hub
+You must provision a [FHIR service](../fhir/fhir-portal-quickstart.md) instance in your workspace. MedTech service persists the data to the FHIR service store using the system-assigned managed identity.
-The MedTech service works with Azure Event Hubs and the FHIR service. You can create a new [FHIR service](../fhir/get-started-with-fhir.md) or use an existing one in the same or different workspace. Similarly, you can create a new [Event Hub](../../event-hubs/event-hubs-create.md) or use an existing one.
+Once FHIR service is provisioned, you must give MedTech service permission to read and write to FHIR service. This permission enables the data to be persisted in the FHIR service store using system-assigned managed identity. See details on how to assign the role to MedTech service from [FHIR service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service).
-## Create a MedTech service in the workspace
+### Provision a MedTech service instance in the workspace
-You can create a MedTech service from the [Azure portal](deploy-iot-connector-in-azure.md) or using PowerShell, Azure CLI, or REST API. You can find scripts from the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts).
+You must provision a MedTech service instance from the [Azure portal](deploy-iot-connector-in-azure.md) in your workspace. You can make the provisioning process easier and more efficient by automating everything with Azure PowerShell, the Azure CLI, or the Azure REST API. You can find automation scripts in the [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples/tree/main/src/scripts) GitHub repository.
-Optionally, you can create a [FHIR service](../fhir/fhir-portal-quickstart.md) and [DICOM service](../dicom/deploy-dicom-services-in-azure.md) in the workspace.
+## Step 3: Send the data
-## Assign roles to allow MedTech service to access Event Hubs
+When the relevant services are provisioned, you can send event data from the device to MedTech service using an event hub. The event data is routed in the following manner:
-By design, the MedTech service retrieves data from the specified event hub using the system-managed identity. For more information on how to assign the role to the MedTech service from [Event Hubs](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#granting-the-medtech-service-access).
+- Data is sent from your device to the event hub.
-## Assign roles to allow MedTech service to access FHIR service
+- After the data is received by the event hub, MedTech service reads it. Then it transforms the data into a FHIR service [Observation](http://hl7.org/fhir/observation.html) resource using the data mapping you supplied.
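
For illustration, here's a minimal sketch of the device-to-event-hub leg using the `azure-eventhub` Python package. The connection string, event hub name, and payload fields are placeholder assumptions; the JSON shape must match whatever your device mapping expects.

```python
import json

from azure.eventhub import EventData, EventHubProducerClient

# Assumed placeholders: your Event Hubs namespace connection string and event hub name.
producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hubs-namespace-connection-string>",
    eventhub_name="<event-hub-name>",
)

# A hypothetical device reading; the field names must line up with your device mapping.
reading = {"deviceId": "device01", "heartRate": 78, "measuredAt": "2022-07-15T10:00:00Z"}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(reading)))
    producer.send_batch(batch)  # MedTech service reads the event from this event hub
```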
-The MedTech service persists the data to the FHIR store using the system-managed identity. See details on how to assign the role to the MedTech service from the [FHIR service](../../healthcare-apis/iot/deploy-iot-connector-in-azure.md#accessing-the-medtech-service-from-the-fhir-service).
+## Step 4: Verify the data
-## Sending data to the MedTech service
+If the data isn't mapped or if the mapping isn't authored properly, the data is skipped. If there are no problems with the [device mapping](./how-to-use-device-mappings.md) or the [FHIR destination mapping](./how-to-use-fhir-mappings.md), the data is persisted in the FHIR service.
-You can send data to the event hub, which is associated with the MedTech service. If you don't see any data in the FHIR service, check the mappings and role assignments for the MedTech service.
+### Metrics
-## MedTech service mappings, data flow, ML, Power BI, and Teams notifications
+You can verify that the data is correctly persisted into FHIR service by using [MedTech service metrics](./how-to-display-metrics.md) in the Azure portal.
-You can find more details about MedTech service mappings, data flow, machine-learning service, Power BI, and Teams notifications in the [MedTech service](iot-connector-overview.md) documentation.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and used with their permission.
## Next steps
-This article described the basic steps to get started using the MedTech service. For information about deploying the MedTech service in the workspace, see
+This article only described the basic steps needed to get started using MedTech service. For information about deploying MedTech service in the workspace, see
>[!div class="nextstepaction"]
>[Deploy MedTech service in the Azure portal](deploy-iot-connector-in-azure.md)
hpc-cache Hpc Cache Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-support-ticket.md
Title: Open a support ticket for Azure HPC Cache
-description: How to open a help request for Azure HPC Cache
-
+description: How to open a help request for Azure HPC Cache, including how to create a quota increase request
+
- Previously updated : 08/19/2021
+ Last updated : 07/13/2022

# Contact support for help with Azure HPC Cache
Navigate to your cache instance, then click the **New support request** link tha
To open a ticket when you do not have an active cache, use the main Help + support page from the Azure portal. Open the portal menu from the control at the top left of the screen, then scroll to the bottom and click **Help + support**.
-Choose **New support request** and select **Technical** for help specific to Azure HPC Cache.
+> [!TIP]
+> If you need a quota increase, most requests can be handled automatically. Follow the instructions below in [Request a quota increase](#request-a-quota-increase).
+
+Choose **Create a support request**. On the support request form, write a summary of the issue, and select **Technical** as the **Issue type**.
Select your subscription from the list.
-To find the Azure HPC Cache service, click the **All services** button and search for HPC.
+If you can't find the Azure HPC Cache service, click the **All services** button and search for HPC.
-![Screenshot of the support request - Basics tab, partly filled out as described](media/hpc-cache-support-request.png)
+![Screenshot of the support request - Basics tab, filled out as described.](media/hpc-cache-support-request.png)
Fill out the rest of the fields with your information and preferences, then submit the ticket when you are ready. After you submit the request, you will receive a confirmation email with a ticket number. A support staff member will contact you about the request.
+
+## Request a quota increase
+
+Use the quotas page in the Azure portal to check your current quotas and request increases.
+
+The default quota for Azure HPC Cache is four caches per subscription, and most increase requests are handled automatically. If you want to create more than six caches in the same subscription, support approval is needed. One HPC Cache uses multiple virtual machines, network resources, storage containers, and other Azure services, so it's unlikely that the number of caches per subscription will be the limiting factor in how many you can have.
+
+Use the support request form described above to request a quota increase.
+
+* For **Issue type**, select **Service and subscription limits (quotas)**.
+
+ ![Screenshot of portal "issue type" menu with the option "Service and subscription limits (quotas)" highlighted.](media/support-request-quota.png)
+
+* Select the **Subscription** for the quota increase.
+
+* Select the **Quota type** "HPC Cache".
+
+ ![Screenshot of portal "quota type" field with "hpc" typed in the search box and a matching result "HPC Cache" showing on the menu to be selected.](media/quota-type-search-hpc.png)
+
+ Click **Next** to go to the **Additional details** page.
+
+* In **Request details**, click the link that says **Enter details**. An additional form opens to the right.
+
+ ![Screenshot of Azure portal details form for HPC Cache, with options to select region and new limit.](media/quota-details.png)
+
+* For **Quota type** select **HPC Cache count**.
+
+* Select the **Region** where your cache is located.
+
+ The form shows your HPC Cache limit and current usage in this region.
+
+* Type the limit you're requesting in the **New limit** field. Click **Save and continue**.
+
+ Fill in the additional details required and create the request.
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
The following table shows the configuration settings for customizations:
| Field | Description |
| -- | -- |
-|Display name | Override display name from model. |
-|Semantic type | Override semantic type from model. |
-|Unit | Override unit from model. |
-|Display unit | Override from model. |
-|Comment | Override from model. |
-|Description | Override from model. |
|Color | IoT Central-specific option. |
|Min value | Set minimum value - IoT Central-specific option. |
|Max value | Set maximum value - IoT Central-specific option. |
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
To customize the built-in interfaces of the RuuviTag device template:
:::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png" alt-text="Azure IoT Central RuuviTag device template summary view.":::
-1. Select **Customize** in the RuuviTag device template menu.
+1. Select the **RuuviTag** model in the RuuviTag device template menu.
1. Scroll in the list of capabilities and find the `RelativeHumidity` telemetry type. It's the row item with the editable **Display name** value of *RelativeHumidity*.
For the `RelativeHumidity` telemetry type, make the following changes:
To add a cloud property to a device template in your application:
-1. Select **Cloud Properties** in the RuuviTag device template menu.
-
-1. Select **Add Cloud Property**.
-
Specify the following values to create a custom property to store the location of each device:

1. Enter the value *Location* for the **Display Name**. This value is automatically copied to the **Name** field, which is a friendly name for the property. You can use the copied value or change it.
+1. Select **Capability Type** as **Cloud Property**.
+
1. Select *String* in the **Schema** dropdown. A string type enables you to associate a location name string with any device based on the template. For instance, you could associate an area in a store with each device.

1. Set **Minimum Length** to *2*.
iot-fundamentals Iot Services And Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-services-and-technologies.md
To build an IoT solution from scratch, or extend one created using IoT Central,
### Devices
-Develop your IoT devices using a starter kit such as the [Azure MXChip IoT DevKit](/samples/azure-samples/mxchip-iot-devkit-get-started/sample/) or choose a device from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
+Develop your IoT devices using a starter kit such as the [Azure MXChip IoT DevKit](/samples/azure-samples/mxchip-iot-devkit-pnp-get-started/sample/) or choose a device from the [Azure Certified for IoT device catalog](https://devicecatalog.azure.com/). Implement your embedded code using the open-source [device SDKs](../iot-hub/iot-hub-devguide-sdks.md). The device SDKs support multiple operating systems, such as Linux, Windows, and real-time operating systems. There are SDKs for multiple programming languages, such as [C](https://github.com/Azure/azure-iot-sdk-c), [Node.js](https://github.com/Azure/azure-iot-sdk-node), [Java](https://github.com/Azure/azure-iot-sdk-java), [.NET](https://github.com/Azure/azure-iot-sdk-csharp), and [Python](https://github.com/Azure/azure-iot-sdk-python).
To further simplify how you create the embedded code for your devices, follow the [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) conventions. At the core of IoT Plug and Play, is a *device capability model* schema that describes device capabilities. Use the device capability model to configure a cloud-based solution such as an IoT Central application.
iot-hub-device-update Device Update Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-control-access.md
# Azure role-based access control (RBAC) and Device Update
-Device Update uses Azure RBAC to provide authentication and authorization for users and service APIs.
+Device Update uses Azure RBAC to provide authentication and authorization for users and service APIs. For other users and applications to have access to Device Update, they must be granted access to this resource. You must also [configure access for the Azure Device Update service principal](./device-update-control-access.md#configuring-access-for-azure-device-update-service-principal-in-the-iot-hub) to successfully deploy updates and manage your devices.
## Configure access control roles
-In order for other users and applications to have access to Device Update, users or applications must be granted access to this resource. Here are the roles that are supported by Device Update:
+The following roles are supported by Device Update:
| Role Name | Description |
| : | :- |
In order for other users and applications to have access to Device Update, users
A combination of roles can be used to provide the right level of access. For example, a developer can import and manage updates using the Device Update Content Administrator role, but needs a Device Update Deployments Reader role to view the progress of an update. Conversely, a solution operator with the Device Update Reader role can view all updates, but needs to use the Device Update Deployments Administrator role to deploy a specific update to devices.
+## Configuring access for Azure Device Update service principal in the IoT Hub
+
+Device Update for IoT Hub uses [Automatic Device Management](../iot-hub/iot-hub-automatic-device-management.md) (ADM) configurations for deployments and to perform device management operations, such as updates, at scale. To enable this, you need to grant the Azure Device Update service principal Contributor access in the IoT hub's permissions.
+
+The following actions will be blocked (after 9/28/22) if these permissions aren't set:
+* Create Deployment
+* Cancel Deployment
+* Retry Deployment
+* Get Device
+
+1. Go to the **IoT Hub** connected to your Device Update instance. Click **Access control (IAM)**.
+2. Click **+ Add** -> **Add role assignment**
+3. Under Role tab, select **Contributor**
+4. Click **Next**. For **Assign access to**, select **User, group, or service principal**. Click **+ Select Members**, search for '**Azure Device Update**'
+5. Click **Next** -> **Review + Assign**
+
+
## Authenticate to Device Update REST APIs

Device Update uses Azure Active Directory (AD) for authentication to its REST APIs. To get started, you need to create and configure a client application.
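
Once a client application is registered and granted a Device Update role, it can call the REST APIs through a client library. As a hedged sketch, here's how that might look with the `azure-identity` and `azure-iot-deviceupdate` Python packages; the tenant, client, endpoint, and instance values are placeholder assumptions.

```python
from azure.identity import ClientSecretCredential
from azure.iot.deviceupdate import DeviceUpdateClient

# Assumed placeholders for your Azure AD app registration.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<client-id>",
    client_secret="<client-secret>",
)

# Assumed placeholders for your Device Update account endpoint and instance name.
client = DeviceUpdateClient(
    endpoint="<account-name>.api.adu.microsoft.com",
    instance_id="<instance-name>",
    credential=credential,
)

# List the updates that have been imported into this instance.
for update in client.device_update.list_updates():
    print(update)
```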
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
To use the steps in this tutorial, you need an Azure subscription. If you don't
You can change the settings of an existing IoT hub after it's created from the IoT Hub pane. Here are some of the properties you can set for an IoT hub:
-**Pricing and scale**: You can use this property to migrate to a different tier or set the number of IoT Hub units.
+**Pricing and scale**: You can use this property to migrate to a different tier or set the number of IoT Hub units.
**IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub.
You can change the settings of an existing IoT hub after it's created from the I
### Shared access policies
-You can also view or modify the list of shared access policies by clicking **Shared access policies** in the **Settings** section. These policies define the permissions for devices and services to connect to IoT Hub.
+You can also view or modify the list of shared access policies by clicking **Shared access policies** in the **Security settings** section. These policies define the permissions for devices and services to connect to IoT Hub.
-Click **Add** to open the **Add a shared access policy** blade. You can enter the new policy name and the permissions that you want to associate with this policy, as shown in the following figure:
+Click **Add shared access policy** to open the **Add shared access policy** blade. You can enter the new policy name and the permissions that you want to associate with this policy, as shown in the following figure:
-![Screenshot showing adding a shared access policy](./media/iot-hub-create-through-portal/iot-hub-add-shared-access-policy.png)
-* The **Registry read** and **Registry write** policies grant read and write access rights to the identity registry. These permissions are used by back-end cloud services to manage device identities. Choosing the write option automatically chooses the read option.
+* The **Registry Read** and **Registry Write** policies grant read and write access rights to the identity registry. These permissions are used by back-end cloud services to manage device identities. Choosing the write option automatically chooses the read option.
-* The **Service connect** policy grants permission to access service endpoints. This permission is used by back-end cloud services to send and receive messages from devices as well as to update and read device twin and module twin data.
+* The **Service Connect** policy grants permission to access service endpoints. This permission is used by back-end cloud services to send and receive messages from devices as well as to update and read device twin and module twin data.
-* The **Device connect** policy grants permissions for sending and receiving messages using the IoT Hub device-side endpoints. This permission is used by devices to send and receive messages from an IoT hub, update and read device twin and module twin data, and perform file uploads.
+* The **Device Connect** policy grants permissions for sending and receiving messages using the IoT Hub device-side endpoints. This permission is used by devices to send and receive messages from an IoT hub, update and read device twin and module twin data, and perform file uploads.
-Click **Create** to add this newly created policy to the existing list.
+Click **Add** to add this newly created policy to the existing list.
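
As an illustration of what these permissions enable from code, here's a minimal sketch that reads a device identity with the `azure-iot-hub` Python package, assuming a connection string for a shared access policy that includes **Registry Read** rights; the connection string and device ID are placeholders.

```python
from azure.iot.hub import IoTHubRegistryManager

# Assumed placeholder: connection string for a policy with Registry Read rights.
registry_manager = IoTHubRegistryManager("<iothub-connection-string>")

# Look up a device identity in the identity registry.
device = registry_manager.get_device("device01")
print(device.device_id, device.status)
```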
For more detailed information about the access granted by specific permissions, see [IoT Hub permissions](./iot-hub-dev-guide-sas.md#access-control-and-permissions).
For more detailed information about the access granted by specific permissions,
## Message Routing for an IoT hub
-Click **Message Routing** under **Messaging** to see the Message Routing pane, where you define routes and custom endpoints for the hub. [Message routing](iot-hub-devguide-messages-d2c.md) enables you to manage how data is sent from your devices to your endpoints. The first step is to add a new route. Then you can add an existing endpoint to the route, or create a new one of the types supported, such as blob storage.
+Click **Message Routing** under **Messaging** to see the Message Routing pane, where you define routes and custom endpoints for the hub. [Message routing](iot-hub-devguide-messages-d2c.md) enables you to manage how data is sent from your devices to your endpoints. The first step is to add a new route. Then you can add an existing endpoint to the route, or create a new one of the types supported, such as blob storage.
![Message routing pane](./media/iot-hub-create-through-portal/iot-hub-message-routing.png)

### Routes
-Routes is the first tab on the Message Routing pane. To add a new route, click +**Add**. You see the following screen.
+Routes is the first tab on the Message Routing pane. To add a new route, click +**Add**. You see the following screen.
![Screenshot showing adding a new route](./media/iot-hub-create-through-portal/iot-hub-add-route-storage-endpoint.png)
-Name your route. The route name must be unique within the list of routes for that hub.
+Name your route. The route name must be unique within the list of routes for that hub.
For **Endpoint**, you can select one from the dropdown list, or add a new one. In this example, a storage account and container are already available. To add them as an endpoint, click +**Add** next to the Endpoint dropdown and select **Blob Storage**. The following screen shows where the storage account and container are specified.
For **Endpoint**, you can select one from the dropdown list, or add a new one. I
Click **Pick a container** to select the storage account and container. When you have selected those fields, it returns to the Endpoint pane. Use the defaults for the rest of the fields and **Create** to create the endpoint for the storage account and add it to the routing rules.
-For **Data source**, select Device Telemetry Messages.
+For **Data source**, select Device Telemetry Messages.
Next, add a routing query. In this example, the messages that have an application property called `level` with a value equal to `critical` are routed to the storage account.
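
For example, a device could set that application property on its messages with the `azure-iot-device` Python package so the query matches. This is a hedged sketch; the connection string and payload are placeholder assumptions.

```python
from azure.iot.device import IoTHubDeviceClient, Message

# Assumed placeholder: a device connection string for this IoT hub.
client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

msg = Message('{"temperature": 99.8}')
msg.custom_properties["level"] = "critical"  # application property the routing query matches on

client.send_message(msg)  # this message is routed to the storage endpoint
client.shutdown()
```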
Click **Save** to save the routing rule. You return to the Message Routing pane,
### Custom endpoints
-Click the **Custom endpoints** tab. You see any custom endpoints already created. From here, you can add new endpoints or delete existing endpoints.
+Click the **Custom endpoints** tab. You see any custom endpoints already created. From here, you can add new endpoints or delete existing endpoints.
> [!NOTE]
> If you delete a route, it does not delete the endpoints assigned to that route. To delete an endpoint, click the Custom endpoints tab, select the endpoint you want to delete, and click Delete.
Click the **Custom endpoints** tab. You see any custom endpoints already created
You can read more about custom endpoints in [Reference - IoT hub endpoints](iot-hub-devguide-endpoints.md).
-You can define up to 10 custom endpoints for an IoT hub.
+You can define up to 10 custom endpoints for an IoT hub.
To see a full example of how to use custom endpoints with routing, see [Message routing with IoT Hub](tutorial-routing.md).
lab-services How To Manage Vm Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-manage-vm-pool.md
On the **Reset virtual machine(s)** dialog box, select **Reset**.
### Redeploy VMs
-In the [April 2022 Update (preview)](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs.
-
-If students are facing difficulties accessing their VM, redeploying the VM may provide a resolution for the issue. Redeploying, unlike resetting, doesn't cause the data on the OS to be lost. When you [redeploy a VM](/troubleshoot/azure/virtual-machines/redeploy-to-new-node-windows), Azure Lab Services will shut down the VM, move it to a new host, and restart it. You can think of it as a refresh of the underlying VM for the student's machine. The student doesn't need to re-register to the lab or do any other action. Any data you saved in the OS disk (usually C: drive) of the VM will still be available after the redeploy operation. Anything saved on the temporary disk (usually D: drive) will be lost.
-
+In the [April 2022 Update (preview)](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs. For more information and instructions on how students can redeploy their VMs, see: [Redeploy VMs](how-to-reset-and-redeploy-vm.md#redeploy-vms).
## Connect to VMs
lab-services How To Reset And Redeploy Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-reset-and-redeploy-vm.md
+
+ Title: Troubleshoot a VM in Azure Lab Services
+description: Learn how to troubleshoot a VM in Azure Lab Services
+
+
+ Last updated : 01/21/2022
+
+<!-- As a student, I want to be able to troubleshoot connectivity problems with my VM so that I can get back up and running quickly, without having to escalate an issue -->
+
+# How to troubleshoot a VM
+On rare occasions, you may have problems connecting with a VM in one of your labs. There are some troubleshooting steps you can take as a student to resolve connectivity issues and get back up and running quickly. You can avoid having to escalate the issue to your educator and wait for a solution.
+
+## Reset VMs
+
+To reset one or more VMs, select them in the list, and then select **Reset** on the toolbar.
++
+On the **Reset virtual machine(s)** dialog box, select **Reset**.
++
+### Redeploy VMs
+
+In the [April 2022 Update (preview)](lab-services-whats-new.md), redeploying VMs replaces the previous reset VM behavior. In the Lab Services web portal: [https://labs.azure.com](https://labs.azure.com), the command is named **Troubleshoot** and is available in the student's view of their VMs.
+
+If you're facing difficulties accessing your VM, redeploying the VM may provide a resolution for the issue. Redeploying, unlike resetting, doesn't cause the data on the OS to be lost. When you [redeploy a VM](/troubleshoot/azure/virtual-machines/redeploy-to-new-node-windows), Azure Lab Services will shut down the VM, move it to a new host, and restart it. You can think of it as a refresh of the underlying VM for your machine. You don't need to re-register to the lab or perform any other action. Any data you saved in the OS disk (usually C: drive) of the VM will still be available after the redeploy operation. Anything saved on the temporary disk (usually D: drive) will be lost.
++
+## Next steps
+
+- As a student, learn to [access labs](how-to-use-lab.md).
+- As a student, [connect to a VM](connect-virtual-machine.md).
lab-services Quick Create Lab Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-bicep.md
Title: Azure Lab Services Quickstart - Create a lab using Bicep
description: In this quickstart, you learn how to create an Azure Lab Services lab using Bicep
Last updated 05/23/2022
-
+
# Quickstart: Create a lab using a Bicep file
Remove-AzResourceGroup -Name exampleRG
In this quickstart, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.

> [!div class="nextstepaction"]
-> [Configure a template VM](how-to-create-manage-template.md)
+> [Configure a template VM](how-to-create-manage-template.md)
lab-services Quick Create Lab Plan Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-bicep.md
Title: Azure Lab Services Quickstart - Create a lab plan using Bicep
description: In this quickstart, you learn how to create an Azure Lab Services lab plan using Bicep
Last updated 05/23/2022
-
+
# Quickstart: Create a lab plan using a Bicep file
Remove-AzResourceGroup -Name exampleRG
In this quickstart, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.

> [!div class="nextstepaction"]
-> [Managing Labs](how-to-manage-labs.md)
+> [Managing Labs](how-to-manage-labs.md)
lab-services Quick Create Lab Plan Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-portal.md
Title: Azure Lab Services Quickstart - Create a lab plan using the Azure portal
description: In this quickstart, you learn how to create an Azure Lab Services lab plan using the Azure portal.
Last updated 1/18/2022
-
+
# Quickstart: Create a lab plan using the Azure portal
lab-services Quick Create Lab Plan Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-powershell.md
Title: Azure Lab Services Quickstart - Create a lab plan using PowerShell
description: In this quickstart, you learn how to create an Azure Lab Services lab plan using PowerShell and the Az module.
- Previously updated : 02/15/2022
+ Last updated : 06/15/2022

# Quickstart: Create a lab plan using PowerShell and the Azure modules
$plan | Remove-AzLabServicesLabPlan
In this QuickStart, you created a resource group and a lab plan. As an admin, you can learn more about [Azure PowerShell module](/powershell/azure) and [Az.LabServices cmdlets](/powershell/module/az.labservices/).

> [!div class="nextstepaction"]
-> [Quickstart: Create a lab using PowerShell and the Azure module](quick-create-lab-powershell.md)
+> [Quickstart: Create a lab using PowerShell and the Azure module](quick-create-lab-powershell.md)
lab-services Quick Create Lab Plan Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-template.md
Title: Azure Lab Services Quickstart - Create a lab plan by using Azure Resource
description: In this quickstart, you'll learn how to create an Azure Lab Services lab plan by using Azure Resource Manager template (ARM template).
- Previously updated : 05/04/2022
+ Last updated : 06/04/2022

# Quickstart: Create a lab plan using an ARM template
lab-services Quick Create Lab Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-portal.md
Title: Azure Lab Services Quickstart - Create a lab using the Azure Lab Services labs.azure.com portal.
description: In this quickstart, you learn how to create an Azure Lab Services lab using the labs.azure.com portal.
- Previously updated : 1/18/2022
+ Last updated : 6/18/2022

# Quickstart: Create a lab using the Azure Lab Services portal
lab-services Quick Create Lab Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-powershell.md
Title: Azure Lab Services Quickstart - Create a lab using PowerShell
description: In this quickstart, you learn how to create an Azure Lab Services lab using PowerShell and the Az module.
- Previously updated : 02/15/2022
+ Last updated : 06/15/2022

# Quickstart: Create a lab using PowerShell and the Azure module
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
For more information, see the following articles:
* [Connect to storage services](how-to-access-data.md)
* [Use Azure Key Vault for secrets when training](how-to-use-secrets-in-runs.md)
* [Use Azure AD managed identity with Azure Machine Learning](how-to-use-managed-identities.md)
-* [Use Azure AD managed identity with your web service](how-to-use-azure-ad-identity.md)
## Network security and isolation
machine-learning Concept Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow-models.md
If you are not familiar with MLflow, you may not be aware of the difference betw
Any file generated (and captured) from an experiment's run or job is an artifact. It may represent a model serialized as a Pickle file, the weights of a PyTorch or TensorFlow model, or even a text file containing the coefficients of a linear regression. Other artifacts can have nothing to do with the model itself, but they can contain configuration to run the model, pre-processing information, sample data, etc. As you can see, an artifact can come in any format.
-You can log artifacts in MLflow in a similar way you log a file with Azure ML SDK v1:
+You may have been logging artifacts already:
+
+# [Using MLflow SDK](#tab/mlflow)
```python
filename = 'model.pkl'
with open(filename, 'wb') as f:
    pickle.dump(model, f)

mlflow.log_artifact(filename)
```
+# [Using Azure ML SDK v1](#tab/sdkv1)
++
+```python
+from azureml.core import Run
+
+filename = 'model.pkl'
+with open(filename, 'wb') as f:
+    pickle.dump(model, f)
+
+# MLflow has no log_file API; with Azure ML SDK v1, upload the file to the run.
+run = Run.get_context()
+run.upload_file(name=filename, path_or_stream=filename)
+```
+
+# [Using the outputs folder](#tab/outputs)
++
+```python
+os.mkdirs("outputs", exists_ok=True)
+
+filename = 'outputs/model.pkl'
+with open(filename, 'wb') as f:
+    pickle.dump(model, f)
+```
+
+
+
### Models

A model in MLflow is also an artifact, as it matches the definition we introduced above. However, we make stronger assumptions about this type of artifact. Such assumptions allow us to create a clear contract between the saved artifacts and what they mean. When you log your models as artifacts (simple files), you need to know what the model builder meant for each of them in order to know how to load the model for inference. When you log your models as a Model entity, you should be able to tell what it is based on the contract mentioned.
Logging models has the following advantages:
> * Models can be used as pipeline inputs directly.
> * You can use the Responsible AI dashboard.
+Models can get logged by:
+
+# [Using MLflow SDK](#tab/mlflow)
+
+```python
+mlflow.sklearn.log_model(sklearn_estimator, "classifier")
+```
+
+# [Using Azure ML SDK v1](#tab/sdkv1)
++
+Logging models using Azure ML SDK v1 isn't possible. We recommend using the MLflow SDK instead.
+
+# [Using the outputs folder](#tab/outputs)
++
+```python
+os.mkdirs("outputs/classifier", exists_ok=True)
+
+mlflow.sklearn.save_model(sklearn_estimator, "outputs/classifier")
+```
+
+
+
## The MLModel format

MLflow adopts the MLModel format as a way to create a contract between the artifacts and what they represent. The MLModel format stores assets in a folder. Among them, there is a particular file named MLModel. This file is the single source of truth about how a model can be loaded and used.
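
Because the MLModel file records how to load the model, a consumer doesn't need to know which framework produced it. As a hedged sketch, a model logged under the name `classifier` (as in the examples above) could be loaded back generically; the run ID and input data are placeholder assumptions.

```python
import mlflow

# Load the model from a run's artifacts; <run-id> is a placeholder.
model = mlflow.pyfunc.load_model("runs:/<run-id>/classifier")

# Score with a pandas DataFrame you supply.
predictions = model.predict(input_data)
```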
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
You can generate the code for automated ML experiments with task types classific
> [!WARNING]
> Computer vision models and natural language processing based models in AutoML do not currently support model's training code generation.
-The following diagram illustrates that you can enable code generation for any AutoML created model from the Azure Machine Learning studio UI or with the Azure Machine Learning SDK. After you select a model, Azure Machine Learning copies the code files used to create the model, and displays them into your notebooks shared folder. From here, you can view and customize the code as needed.
+The following diagram illustrates that you can enable code generation for any AutoML created model from the Azure Machine Learning studio UI or with the Azure Machine Learning SDK. First, select a model; the model you select is highlighted. Azure Machine Learning then copies the code files used to create the model and displays them in your notebooks shared folder. From here, you can view and customize the code as needed.
-![Code generation diagram](./media/how-to-generate-automl-training-code/code-generation-design.svg)
## Prerequisites
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
For more information on the Azure CLI extension for machine learning, see the [a
To check for problems with your workspace, see [How to use workspace diagnostics](how-to-workspace-diagnostic-api.md). To learn how to move a workspace to a new Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
+
+For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-terraform.md
There are several options to connect to your private link endpoint workspace. To
* To learn more about network configuration options, see [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](./how-to-network-security-overview.md). * For alternative Azure Resource Manager template-based deployments, see [Deploy resources with Resource Manager templates and Resource Manager REST API](../azure-resource-manager/templates/deploy-rest.md).
+* For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/pyth
-### Vulnerability scanning
-Microsoft Defender for Cloud provides unified security management and advanced threat protection across hybrid cloud workloads. You should allow Microsoft Defender for Cloud to scan your resources and follow its recommendations. For more, see [Azure Container Registry image scanning by Defender for Cloud](../security-center/defender-for-container-registries-introduction.md) and [Azure Kubernetes Services integration with Defender for Cloud](../security-center/defender-for-kubernetes-introduction.md).
### Advanced
To learn more about planning a workspace for your organization's requirements, s
To check for problems with your workspace, see [How to use workspace diagnostics](how-to-workspace-diagnostic-api.md). If you need to move a workspace to another Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
+
+For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
1. Select **+ New automated ML run** and populate the form.
-1. Select a dataset from your storage container, or create a new dataset. Datasets can be created from local files, web urls, datastores, or Azure open datasets. Learn more about [dataset creation](how-to-create-register-datasets.md).
+1. Select a data asset from your storage container, or create a new data asset. Data assets can be created from local files, web URLs, datastores, or Azure Open Datasets. Learn more about [data asset creation](how-to-create-register-data-assets.md).
>[!Important]
> Requirements for training data:
machine-learning How To Use Azure Ad Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-azure-ad-identity.md
- Title: Use Azure AD identity with your web service-
-description: Use Azure AD identity with your web service in Azure Kubernetes Service to access cloud resources during scoring.
------- Previously updated : 10/21/2021---
-# Use Azure AD identity with your machine learning web service in Azure Kubernetes Service
-
-In this how-to, you learn how to assign an Azure Active Directory (Azure AD) identity to your deployed machine learning model in Azure Kubernetes Service. The [Azure AD Pod Identity](https://github.com/Azure/aad-pod-identity) project allows applications to access cloud resources securely with Azure AD by using a [Managed Identity](../active-directory/managed-identities-azure-resources/overview.md) and Kubernetes primitives. This allows your web service to securely access your Azure resources without having to embed credentials or manage tokens directly inside your `score.py` script. This article explains the steps to create and install an Azure Identity in your Azure Kubernetes Service cluster and assign the identity to your deployed web service.
-
-## Prerequisites
-
-- The [Azure CLI extension for the Machine Learning service](v1/reference-azure-machine-learning-cli.md), the [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
-
-- Access to your AKS cluster using the `kubectl` command. For more information, see [Connect to the cluster](../aks/learn/quick-kubernetes-deploy-cli.md#connect-to-the-cluster)
-
-- An Azure Machine Learning web service deployed to your AKS cluster.
-
-## Create and install an Azure Identity
-
-1. To determine if your AKS cluster is Kubernetes RBAC enabled, use the following command:
-
- ```azurecli-interactive
- az aks show --name <AKS cluster name> --resource-group <resource group name> --subscription <subscription id> --query enableRbac
- ```
-
- This command returns a value of `true` if Kubernetes RBAC is enabled. This value determines the command to use in the next step.
-
-1. Install [Azure AD Pod Identity](https://azure.github.io/aad-pod-identity/docs/getting-started/installation/) in your AKS cluster.
-
-1. [Create an Identity on Azure](https://azure.github.io/aad-pod-identity/docs/demo/standard_walkthrough/#2-create-an-identity-on-azure) following the steps shown in Azure AD Pod Identity project page.
-
-1. [Deploy AzureIdentity](https://azure.github.io/aad-pod-identity/docs/demo/standard_walkthrough/#3-deploy-azureidentity) following the steps shown in Azure AD Pod Identity project page.
-
-1. [Deploy AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/demo/standard_walkthrough/#5-deploy-azureidentitybinding) following the steps shown in Azure AD Pod Identity project page.
-
-1. If the Azure Identity created in the previous step is not in the same node resource group for your AKS cluster, follow the [Role Assignment](https://azure.github.io/aad-pod-identity/docs/getting-started/role-assignment/#user-assigned-identities-that-are-not-within-the-node-resource-group) steps shown in Azure AD Pod Identity project page.
-
-## Assign Azure Identity to web service
-
-The following steps use the Azure Identity created in the previous section, and assign it to your AKS web service through a **selector label**.
-
-First, identify the name and namespace of your deployment in your AKS cluster that you want to assign the Azure Identity. You can get this information by running the following command. The namespaces should be your Azure Machine Learning workspace name and your deployment name should be your endpoint name as shown in the portal.
-
-```azurecli-interactive
-kubectl get deployment --selector=isazuremlapp=true --all-namespaces --show-labels
-```
-
-Add the Azure Identity selector label to your deployment by editing the deployment spec. The selector value should be the one that you defined in step 5 of [Deploy AzureIdentityBinding](https://azure.github.io/aad-pod-identity/docs/demo/standard_walkthrough/#5-deploy-azureidentitybinding).
-
-```yaml
-apiVersion: "aadpodidentity.k8s.io/v1"
-kind: AzureIdentityBinding
-metadata:
- name: demo1-azure-identity-binding
-spec:
- AzureIdentity: <a-idname>
- Selector: <label value to match>
-```
-
-Edit the deployment to add the Azure Identity selector label. Go to the following section under `/spec/template/metadata/labels`. You should see values such as `isazuremlapp: "true"`. Add the aad-pod-identity label like shown below.
-
-```azurecli-interactive
- kubectl edit deployment/<name of deployment> -n azureml-<name of workspace>
-```
-
-```yaml
-spec:
- template:
- metadata:
- labels:
- aadpodidbinding: "<value of Selector in AzureIdentityBinding>"
- ...
-```
-
-To verify that the label was correctly added, run the following command. You should also see the statuses of the newly created pods.
-
-```azurecli-interactive
- kubectl get pod -n azureml-<name of workspace> --show-labels
-```
-
-Once the pods are up and running, the web services for this deployment will now be able to access Azure resources through your Azure Identity without having to embed the credentials in your code.
-
-## Assign roles to your Azure Identity
-
-[Assign your Azure Managed Identity with appropriate roles](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) to access other Azure resources. Ensure that the roles you are assigning have the correct **Data Actions**. For example, the [Storage Blob Data Reader Role](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) will have read permissions to your Storage Blob while the generic [Reader Role](../role-based-access-control/built-in-roles.md#reader) might not.
-
-## Use Azure Identity with your web service
-
-Deploy a model to your AKS cluster. The `score.py` script can contain operations pointing to the Azure resources that your Azure Identity has access to. Ensure that you have installed your required client library dependencies for the resource that you are trying to access to. Below are a couple examples of how you can use your Azure Identity to access different Azure resources from your service.
-
-### Access Key Vault from your web service
-
-If you have given your Azure Identity read access to a secret inside a **Key Vault**, your `score.py` can access it using the following code.
-
-```python
-from azure.identity import DefaultAzureCredential
-from azure.keyvault.secrets import SecretClient
-
-my_vault_name = "yourkeyvaultname"
-my_vault_url = "https://{}.vault.azure.net/".format(my_vault_name)
-my_secret_name = "sample-secret"
-
-# This will use your Azure Managed Identity
-credential = DefaultAzureCredential()
-secret_client = SecretClient(
- vault_url=my_vault_url,
- credential=credential)
-secret = secret_client.get_secret(my_secret_name)
-```
-
-> [!IMPORTANT]
-> This example uses the DefaultAzureCredential. To grant your identity access using a specific access policy, see [Assign a Key Vault access policy using the Azure CLI](../key-vault/general/assign-access-policy-cli.md).
-
-### Access Blob from your web service
-
-If you have given your Azure Identity read access to data inside a **Storage Blob**, your `score.py` can access it using the following code.
-
-```python
-from azure.identity import DefaultAzureCredential
-from azure.storage.blob import BlobServiceClient
-
-my_storage_account_name = "yourstorageaccountname"
-my_storage_account_url = "https://{}.blob.core.windows.net/".format(my_storage_account_name)
-
-# This will use your Azure Managed Identity
-credential = DefaultAzureCredential()
-blob_service_client = BlobServiceClient(
- account_url=my_storage_account_url,
- credential=credential
-)
-blob_client = blob_service_client.get_blob_client(container="some-container", blob="some_text.txt")
-blob_data = blob_client.download_blob()
-blob_data.readall()
-```
-
-## Next steps
-
-* For more information on how to use the Python Azure Identity client library, see the [repository](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/identity/azure-identity#azure-identity-client-library-for-python) on GitHub.
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
A deployment is a set of resources required for hosting the model that does the
Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:

* `name` - Name of the endpoint
-* `input_path` - Path where input data is present
+* `input` - Path where input data is present
* `deployment_name` - Name of the specific deployment to test in an endpoint

1. Invoke the endpoint:
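
As a hedged sketch, such an invocation with the Python SDK v2 (`azure-ai-ml`) might look like the following; the endpoint, deployment, and data path names are placeholder assumptions.

```python
from azure.ai.ml import Input

# Invoke the batch endpoint against a folder of input data.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="<endpoint-name>",
    deployment_name="<deployment-name>",
    input=Input(type="uri_folder", path="<path-to-input-data>"),
)

# Stream the scoring job's logs until it completes.
ml_client.jobs.stream(job.name)
```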
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
ms.devlang: azurecli
# Track ML experiments and models with MLflow

-
-
> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning developer platform you are using:"]
> * [v1](./v1/how-to-use-mlflow.md)
> * [v2 (current version)](how-to-use-mlflow-cli-runs.md)
To track a run that is not running on Azure Machine Learning compute (from now o
# [Using the Azure ML SDK v2](#tab/azuremlsdk)

+

You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.

1. Using the workspace configuration file:
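
As a hedged sketch of that step, assuming a `config.json` workspace configuration file is present alongside your code:

```python
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Build a client from the workspace configuration file (config.json).
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Look up the workspace's MLflow tracking URI and point MLflow at it.
tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(tracking_uri)
```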
You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning S
# [Using an environment variable](#tab/environ)

+

Another option is to set one of the MLflow environment variables [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) directly in your terminal.

```Azure CLI
client.download_artifacts(run_id, "helloworld.txt", ".")
For more details about how to retrieve information from experiments and runs in Azure Machine Learning using MLflow view [Manage experiments and runs with MLflow](how-to-track-experiments-mlflow.md). - ## Manage models Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tracked with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.
To register and view a model from a run, use the following steps:
## Example files
-[Use MLflow and CLI (v2)](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/hello-mlflow.yml)
+[Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/notebooks/using-mlflow)
## Limitations
machine-learning Tutorial Auto Train Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-models.md
If you don't have an Azure subscription, create a free account before you begi
1. Select **Notebooks** in the studio.
1. Select the **Samples** tab.
1. Open the *tutorials/regression-automl-nyc-taxi-data/regression-automated-ml.ipynb* notebook.
+ 1. To run each cell in the tutorial, select **Clone this notebook**.
This tutorial is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](how-to-configure-environment.md#local). To get the required packages,
marketplace Add Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/add-manage-users.md
description: Learn how to manage users in the commercial marketplace program for
--++ Last updated 01/20/2022
marketplace Add Publishers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/add-publishers.md
description: How to add new publishers to the commercial marketplace program for
--++ Last updated 01/20/2022
marketplace Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-account.md
description: Learn how to create a Microsoft commercial marketplace account in P
--++ Last updated 06/30/2022
marketplace Manage Aad Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/manage-aad-apps.md
description: Learn how to add and manage Azure AD applications for the commercia
--++ Last updated 01/20/2022
marketplace Manage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/manage-account.md
description: Manage a commercial marketplace account in Partner Center.
--++ Last updated 1/20/2022
marketplace Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/manage-groups.md
description: Learn how to manage groups in the commercial marketplace program in
--++ Last updated 01/20/2022
marketplace Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/manage-tenants.md
description: Learn how to manage tenants for the commercial marketplace program
--++ Last updated 06/30/2022
marketplace Switch Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/switch-accounts.md
description: Learn how switch between accounts in the commercial marketplace pro
--++ Last updated 03/28/2022
marketplace User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/user-roles.md
description: Learn how to assign roles and permissions to users in the commercia
--++ Last updated 04/07/2021
open-datasets Dataset Oj Sales Simulated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-oj-sales-simulated.md
The data contains weekly sales of orange juice over 121 weeks. There are 3,991 s
```python
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
-datastore = ws.get_default_config()
+datastore = ws.get_default_datastore()
```

```python
if sys.platform == 'linux':
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logical.md
Logical replication and logical decoding have several similarities. They both:
The two technologies have their differences:

Logical replication:

* Allows you to specify a table or set of tables to be replicated.
-* Replicates data between PostgreSQL instances.
Logical decoding:

* Extracts changes across all tables in a database.
-* Can't send data between PostgreSQL instances.
>[!NOTE]
> At this time, Flexible Server does not support cross-region read replicas. Depending on the type of workload, you may choose to use the logical replication feature for cross-region disaster recovery (DR) purposes.
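
As a hedged sketch of that DR pattern, logical replication between two servers boils down to a publication on the source and a subscription on the target. Here the connection strings, table name, and object names are placeholder assumptions, shown through `psycopg2`:

```python
import psycopg2

# Assumed placeholder: connection string for the source flexible server.
src = psycopg2.connect("<source-connection-string>")
src.autocommit = True
with src.cursor() as cur:
    cur.execute("CREATE PUBLICATION orders_pub FOR TABLE orders;")

# Assumed placeholder: connection string for the target server in another region.
dst = psycopg2.connect("<target-connection-string>")
dst.autocommit = True  # CREATE SUBSCRIPTION can't run inside a transaction block
with dst.cursor() as cur:
    cur.execute(
        "CREATE SUBSCRIPTION orders_sub "
        "CONNECTION 'host=<source-host> dbname=<db> user=<user> password=<pw>' "
        "PUBLICATION orders_pub;"
    )
```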
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Azure Database for PostgreSQL - Flexible Server currently supports the following
## PostgreSQL version 14 The current minor release is **14.3**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/14/static/release-14-3.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+>[!NOTE]
+> The PostgreSQL community released an out-of-band version 14.4 to address a critical issue. See the [release notes](https://www.postgresql.org/docs/release/14.4/) and this [discussion thread](https://www.postgresql.org/message-id/165473835807.573551.1512237163040609764%40wrigleys.postgresql.org) for details and the workaround until your server is patched to 14.4.
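
To check whether your server has picked up the patched minor version yet, you can query it directly; a minimal sketch with `psycopg2`, where the connection values are placeholder assumptions:

```python
import psycopg2

# Assumed placeholder connection values for your flexible server.
conn = psycopg2.connect(
    "host=<server-name>.postgres.database.azure.com dbname=postgres "
    "user=<admin-user> password=<password> sslmode=require"
)
with conn.cursor() as cur:
    cur.execute("SHOW server_version;")
    print(cur.fetchone()[0])  # e.g. '14.3' until the 14.4 patch is applied
```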
## PostgreSQL version 13
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
- Previously updated : 07/06/2022
+ Last updated : 07/15/2022
In this article, we will provide an overview and introduction to core concepts o
## Overview
-Azure Database for PostgreSQL - Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and server configuration customizations based on the user requirements. The flexible server architecture allows users to collocate database engine with the client-tier for lower latency, choose high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with ability to stop/start your server and burstable compute tier that is ideal for workloads that do not need full compute capacity continuously. The service currently supports community version of PostgreSQL 11, 12, and 13. The service is currently available in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+Azure Database for PostgreSQL - Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and server configuration customizations based on the user requirements. The flexible server architecture allows users to collocate the database engine with the client tier for lower latency, and to choose high availability within a single availability zone or across multiple availability zones. Flexible servers also provide better cost optimization controls with the ability to stop/start your server, and a burstable compute tier that is ideal for workloads that don't need full compute capacity continuously. The service currently supports community versions of PostgreSQL 11, 12, 13, and 14, and is available in a wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
![Flexible Server - Overview](./media/overview/overview-flexible-server.png)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-high-availability.md
Previously updated : 01/12/2022 Last updated : 07/15/2022 # High availability in Azure Database for PostgreSQL – Hyperscale (Citus)
Last updated 01/12/2022
[!INCLUDE[applies-to-postgresql-hyperscale](../includes/applies-to-postgresql-hyperscale.md)] High availability (HA) avoids database downtime by maintaining standby replicas
-of every node in a server group. If a node goes down, Hyperscale (Citus) switches
-incoming connections from the failed node to its standby. Failover happens
-within a few minutes, and promoted nodes always have fresh data through
+of every node in a server group. If a node goes down, Hyperscale (Citus)
+switches incoming connections from the failed node to its standby. Failover
+happens within a few minutes, and promoted nodes always have fresh data through
PostgreSQL synchronous streaming replication.
+All primary nodes in a server group are provisioned into one availability zone
+for better latency between the nodes. The standby nodes are provisioned into
+another zone. The Azure portal
+[displays](concepts-server-group.md#node-availability-zone) the availability
+zone of each node in a server group.
+ Even without HA enabled, each Hyperscale (Citus) node has its own locally redundant storage (LRS) with three synchronous replicas maintained by the Azure Storage service. If there's a single replica failure, it's detected by Azure
promoted coordinator will be accessible with the same connection string.
Recovery can be broken into three stages: detection, failover, and full recovery. Hyperscale (Citus) runs periodic health checks on every node, and after four failed checks it determines that a node is down. Hyperscale (Citus)
-then promotes a standby to primary node status (failover), and provisions a new
+then promotes a standby to primary node status (failover), and creates a new
standby-to-be. Streaming replication begins, bringing the new node up to date. When all data has been replicated, the node has reached full recovery.
for server groups in the Azure portal.
synchronization is in progress. Once all data is replicated to the new standby, synchronous replication will be enabled between the primary and standby nodes, and the nodes' state will transition back to **Healthy**.
-* **No**: HA is not enabled on this node.
+* **No**: HA isn't enabled on this node.
## Next steps
postgresql Concepts Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-server-group.md
Previously updated : 01/13/2022 Last updated : 07/15/2022 # Hyperscale (Citus) server group
values:
states. For more information about subscription states, see [this page](../../cost-management-billing/manage/subscription-states.md).
+### Node availability zone
+
+Hyperscale (Citus) displays the [availability
+zone](../../availability-zones/az-overview.md#availability-zones) of each node
+in a server group on the Overview page in the Azure portal. The **Availability
+zone** column contains either the name of the zone, or `--` if the node isn't
+assigned to a zone. (Only [certain
+regions](https://azure.microsoft.com/global-infrastructure/geographies/#geographies)
+support availability zones.)
+
+If high availability is enabled for the server group, and a node [fails
+over](concepts-high-availability.md) to a standby, you may see that its
+availability zone differs from the other nodes. In this case, the nodes will be moved back
+into the same availability zone together during the next [maintenance
+window](concepts-maintenance.md).
+ ## Tiers The basic tier in Azure Database for PostgreSQL - Hyperscale (Citus) is a
search Cognitive Search Working With Skillsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-working-with-skillsets.md
Previously updated : 07/13/2022 Last updated : 07/14/2022 # Skillset concepts in Azure Cognitive Search
search Search Normalizers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-normalizers.md
Previously updated : 03/23/2022 Last updated : 07/14/2022 # Text normalization for case-insensitive filtering, faceting and sorting
Last updated 03/23/2022
> [!IMPORTANT] > This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this feature.
-In Azure Cognitive Search, a *normalizer* is a component of the search engine responsible for pre-processing text for keyword matching in filters, facets, and sorts. Normalizers behave similar to [analyzers](search-analyzers.md) in how they process text, except they don't tokenize the query. Some of the transformations that can be achieved using normalizers are:
+In Azure Cognitive Search, a *normalizer* is a component that pre-processes text for keyword matching over fields marked as "filterable", "facetable", or "sortable". In contrast with full text "searchable" fields that are paired with [text analyzers](search-analyzers.md), content that's created for filter-facet-sort operations doesn't undergo analysis or tokenization. Omission of text analysis can produce unexpected results when casing and character differences show up.
-+ Convert to lowercase or upper-case
+By applying a normalizer, you can achieve light text transformations that improve results:
+
++ Consistent casing (such as all lowercase or uppercase)
++ Normalize accents and diacritics like ö or ê to ASCII equivalent characters "o" and "e"
++ Map characters like `-` and whitespace into a user-specified character
-Normalizers are specified on string fields in the index and are applied during indexing and query execution.
- ## Benefits of normalizers
-Searching and retrieving documents from a search index requires matching the query to the contents of the document. The content can be analyzed to produce tokens for matching as is the case when "search" parameter is used, or can be used as-is for strict keyword matching as seen with "$filter", "facets", and "$orderby". This all-or-nothing approach covers most scenarios but falls short where simple pre-processing like casing, accent removal, asciifolding and so forth is required without undergoing through the entire analysis chain.
+Searching and retrieving documents from a search index requires matching the query input to the contents of the document. Matching is either over tokenized content, as is the case when you invoke "search", or over non-tokenized content if the request is a [filter](search-query-odata-filter.md), [facet](search-faceted-navigation.md), or [orderby](search-query-odata-orderby.md) operation.
-Consider the following examples:
+Because non-tokenized content is also not analyzed, small differences in the content are evaluated as distinctly different values. Consider the following examples:
-+ `$filter=City eq 'Las Vegas'` will only return documents that contain the exact text "Las Vegas" and exclude documents with "LAS VEGAS" and "las vegas" which is inadequate when the use-case requires all documents regardless of the casing.
++ `$filter=City eq 'Las Vegas'` will only return documents that contain the exact text "Las Vegas" and exclude documents with "LAS VEGAS" and "las vegas", which is inadequate when the use-case requires all documents regardless of the casing.
++ `search=*&facet=City,count:5` will return "Las Vegas", "LAS VEGAS" and "las vegas" as distinct values despite being the same city.
-+ `search=usa&$orderby=City` will return the cities in lexicographical order: "Las Vegas", "Seattle", "las vegas", even if the intent is to order the same cities together irrespective of the case.
++ `search=usa&$orderby=City` will return the cities in lexicographical order: "Las Vegas", "Seattle", "las vegas", even if the intent is to order the same cities together irrespective of the case.
-## Predefined and custom normalizers
+A normalizer, which is invoked during indexing and query execution, adds light transformations that smooth out minor differences in text for filter, facet, and sort scenarios. In the previous examples, the variants of "Las Vegas" would be processed according to the normalizer you select (for example, all text is lower-cased) for more uniform results.
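To see what that smoothing looks like in practice, here's a hedged sketch: the `@search.facets` response shape is the standard Search Documents output, but the index content and counts are invented for illustration. Without a normalizer on `City`, a facet request might return:

```json
{
  "@search.facets": {
    "City": [
      { "value": "Las Vegas", "count": 2 },
      { "value": "LAS VEGAS", "count": 1 },
      { "value": "las vegas", "count": 1 }
    ]
  }
}
```

With the predefined `lowercase` normalizer assigned to `City`, the same request would collapse the variants into a single bucket:

```json
{
  "@search.facets": {
    "City": [
      { "value": "las vegas", "count": 4 }
    ]
  }
}
```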
-Azure Cognitive Search provides built-in normalizers for common use-cases along with the capability to customize as required.
-
-| Category | Description |
-|-|-|
-| [Predefined normalizers](#predefined-normalizers) | Provided out-of-the-box and can be used without any configuration. |
-|[Custom normalizers](#add-custom-normalizers) <sup>1</sup> | For advanced scenarios. Requires user-defined configuration of a combination of existing elements, consisting of char and token filters.|
-
-<sup>(1)</sup> Custom normalizers don't specify tokenizers since normalizers always produce a single token.
-
-## How to specify normalizers
+## How to specify a normalizer
Normalizers are specified in an index definition, on a per-field basis, on text fields (`Edm.String` and `Collection(Edm.String)`) that have at least one of "filterable", "sortable", or "facetable" properties set to true. Setting a normalizer is optional and it's null by default. We recommend evaluating predefined normalizers before configuring a custom one.
-Normalizers can only be specified when a new field is added to the index. Try to assess the normalization needs upfront and assign normalizers in the initial stages of development when dropping and recreating indexes is routine. Normalizers can't be specified on a field that has already been created.
+Normalizers can only be specified when you add a new field to the index, so if possible, try to assess the normalization needs upfront and assign normalizers in the initial stages of development when dropping and recreating indexes is routine.
-1. When creating a field definition in the [index](/rest/api/searchservice/create-index), set the "normalizer" property to one of the following: a [predefined normalizer](#predefined-normalizers) such as "lowercase", or a custom normalizer (defined in the same index schema).
+1. When creating a field definition in the [index](/rest/api/searchservice/create-index), set the "normalizer" property to one of the following values: a [predefined normalizer](#predefined-normalizers) such as "lowercase", or a custom normalizer (defined in the same index schema).
   ```json
   "fields": [
     {
       ...
       "analyzer": "en.microsoft",
       "normalizer": "lowercase"
       ...
-    },
+    }
+  ]
   ```

1. Custom normalizers are defined in the "normalizers" section of the index first, and then assigned to the field definition as shown in the previous step. For more information, see [Create Index](/rest/api/searchservice/create-index) and also [Add custom normalizers](#add-custom-normalizers).

   ```json
   "fields": [
     {
       ...
       "normalizer": "my_custom_normalizer"
     },
   ```
-
+
> [!NOTE]
> To change the normalizer of an existing field, you'll have to rebuild the index entirely (you cannot rebuild individual fields). A good workaround for production indexes, where rebuilding indexes is costly, is to create a new field identical to the old one but with the new normalizer, and use it in place of the old one. Use [Update Index](/rest/api/searchservice/update-index) to incorporate the new field and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it. Later, as part of planned index servicing, you can clean up the index to remove obsolete fields.
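As a rough sketch of that workaround (the index name `hotels-sample-index` and replacement field `CityNormalized` are hypothetical, and the body is abbreviated to just the relevant fields), an Update Index request adds the new field alongside the old one:

```json
{
  "name": "hotels-sample-index",
  "fields": [
    { "name": "City", "type": "Edm.String", "filterable": true, "facetable": true },
    { "name": "CityNormalized", "type": "Edm.String", "filterable": true, "facetable": true, "normalizer": "lowercase" }
  ]
}
```

You'd then populate `CityNormalized` with a `mergeOrUpload` documents call and point your filters, facets, and sorts at the new field.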
+## Predefined and custom normalizers
+
+Azure Cognitive Search provides built-in normalizers for common use-cases along with the capability to customize as required.
+
+| Category | Description |
+|-|-|
+| [Predefined normalizers](#predefined-normalizers) | Provided out-of-the-box and can be used without any configuration. |
+|[Custom normalizers](#add-custom-normalizers) <sup>1</sup> | For advanced scenarios. Requires user-defined configuration of a combination of existing elements, consisting of char and token filters.|
+
+<sup>(1)</sup> Custom normalizers don't specify tokenizers since normalizers always produce a single token.
+
+## Normalizers reference
+
+### Predefined normalizers
+
+|**Name**|**Description and Options**|
+|-|-|
+|standard| Lowercases the text followed by asciifolding.|
+|lowercase| Transforms characters to lowercase.|
+|uppercase| Transforms characters to uppercase.|
+|asciifolding| Transforms characters that aren't in the Basic Latin Unicode block to their ASCII equivalent, if one exists. For example, changing à to a.|
+|elision| Removes elision from beginning of the tokens.|
+
+### Supported char filters
+
+Normalizers support two character filters that are identical to their counterparts in [custom analyzer character filters](index-add-custom-analyzers.md#CharFilter):
+
++ [mapping](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/charfilter/MappingCharFilter.html)
++ [pattern_replace](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/pattern/PatternReplaceCharFilter.html)
+
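As a hedged illustration only (the filter names here are invented, and the `@odata.type` values mirror the custom analyzer counterparts linked above), a `charFilters` section of an index definition might look like:

```json
"charFilters": [
  {
    "name": "map_dash_to_underscore",
    "@odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
    "mappings": [ "-=>_" ]
  },
  {
    "name": "strip_digits",
    "@odata.type": "#Microsoft.Azure.Search.PatternReplaceCharFilter",
    "pattern": "[0-9]",
    "replacement": ""
  }
]
```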
+### Supported token filters
+
+The list below shows the token filters supported for normalizers and is a subset of the overall [token filters used in custom analyzers](index-add-custom-analyzers.md#TokenFilters).
+
++ [arabic_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html)
++ [asciifolding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html)
++ [cjk_width](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html)
++ [elision](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html)
++ [german_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html)
++ [hindi_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html)
++ [indic_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html)
++ [persian_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizationFilter.html)
++ [scandinavian_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html)
++ [scandinavian_folding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html)
++ [sorani_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html)
++ [lowercase](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html)
++ [uppercase](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html)
+
## Add custom normalizers

Custom normalizers are [defined within the index schema](/rest/api/searchservice/create-index). The definition includes a name, a type, one or more character filters and token filters. The character filters and token filters are the building blocks for a custom normalizer and are responsible for the processing of the text. These filters are applied from left to right.
Custom normalizers are [defined within the index schema](/rest/api/searchservice
Custom normalizers can be added during index creation or later by updating an existing one. Adding a custom normalizer to an existing index requires the "allowIndexDowntime" flag to be specified in [Update Index](/rest/api/searchservice/update-index) and will cause the index to be unavailable for a few seconds.
-## Normalizers reference
-
-### Predefined normalizers
-
-|**Name**|**Description and Options**|
-|-|-|
-|standard| Lowercases the text followed by asciifolding.|
-|lowercase| Transforms characters to lowercase.|
-|uppercase| Transforms characters to uppercase.|
-|asciifolding| Transforms characters that aren't in the Basic Latin Unicode block to their ASCII equivalent, if one exists. For example, changing à to a.|
-|elision| Removes elision from beginning of the tokens.|
-
-### Supported char filters
-
-Normalizers support two character filters that are identical to their counterparts in [custom analyzer character filters](index-add-custom-analyzers.md#CharFilter):
-
-+ [mapping](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/charfilter/MappingCharFilter.html)
-+ [pattern_replace](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/pattern/PatternReplaceCharFilter.html)
-
-### Supported token filters
-
-The list below shows the token filters supported for normalizers and is a subset of the overall [token filters used in custom analyzers](index-add-custom-analyzers.md#TokenFilters).
-
-+ [arabic_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ar/ArabicNormalizationFilter.html)
-+ [asciifolding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilter.html)
-+ [cjk_width](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html)
-+ [elision](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/util/ElisionFilter.html)
-+ [german_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/de/GermanNormalizationFilter.html)
-+ [hindi_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/hi/HindiNormalizationFilter.html)
-+ [indic_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/in/IndicNormalizationFilter.html)
-+ [persian_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/fa/PersianNormalizationFilter.html)
-+ [scandinavian_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html)
-+ [scandinavian_folding](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html)
-+ [sorani_normalization](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/ckb/SoraniNormalizationFilter.html)
-+ [lowercase](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/LowerCaseFilter.html)
-+ [uppercase](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/UpperCaseFilter.html)
- ## Custom normalizer example The example below illustrates a custom normalizer definition with corresponding character filters and token filters. Custom options for character filters and token filters are specified separately as named constructs, and then referenced in the normalizer definition as illustrated below.
-* A custom normalizer named "my_custom_normalizer" is defined in the "normalizers" section of the index definition.
++ A custom normalizer named "my_custom_normalizer" is defined in the "normalizers" section of the index definition.
-* The normalizer is composed of two character filters and three token filters: elision, lowercase, and customized asciifolding filter "my_asciifolding".
++ The normalizer is composed of two character filters and three token filters: elision, lowercase, and customized asciifolding filter "my_asciifolding".
-* The first character filter "map_dash" replaces all dashes with underscores while the second one "remove_whitespace" removes all spaces.
++ The first character filter "map_dash" replaces all dashes with underscores while the second one "remove_whitespace" removes all spaces. ```json {
The example below illustrates a custom normalizer definition with corresponding
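The JSON for this example is truncated in this digest. The following is a sketch reconstructed from the bullets above, not the article's verbatim listing; the `@odata.type` values and option names follow the Create Index conventions and should be verified against the REST reference:

```json
"normalizers": [
  {
    "name": "my_custom_normalizer",
    "@odata.type": "#Microsoft.Azure.Search.CustomNormalizer",
    "charFilters": [ "map_dash", "remove_whitespace" ],
    "tokenFilters": [ "elision", "lowercase", "my_asciifolding" ]
  }
],
"charFilters": [
  {
    "name": "map_dash",
    "@odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
    "mappings": [ "-=>_" ]
  },
  {
    "name": "remove_whitespace",
    "@odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
    "mappings": [ "\\u0020=>" ]
  }
],
"tokenFilters": [
  {
    "name": "my_asciifolding",
    "@odata.type": "#Microsoft.Azure.Search.AsciiFoldingTokenFilter",
    "preserveOriginal": true
  }
]
```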
## See also

++ [Querying concepts in Azure Cognitive Search](search-query-overview.md)
+
+ [Analyzers for linguistic and text processing](search-analyzers.md)
-+ [Search Documents REST API](/rest/api/searchservice/search-documents)
++ [Search Documents REST API](/rest/api/searchservice/search-documents)
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
Azure also provides several easy-to-use features to help secure both inbound and
- [Secure traffic to your app by enabling Transport Layer Security (TLS/SSL) - HTTPS](../../app-service/configure-ssl-bindings.md)
- - [Force all incoming traffic over HTTPS connection](http://microsoftazurewebsitescheatsheet.info/)
+ - Force all incoming traffic over HTTPS connection
- - [Enable Strict Transport Security (HSTS)](http://microsoftazurewebsitescheatsheet.info/#enable-http-strict-transport-security-hsts)
+ - Enable Strict Transport Security (HSTS)
-- [Restrict access to your app by client's IP address](http://microsoftazurewebsitescheatsheet.info/#filtering-traffic-by-ip)
+- Restrict access to your app by client's IP address
-- [Restrict access to your app by client's behavior - request frequency and concurrency](http://microsoftazurewebsitescheatsheet.info/#dynamic-ip-restrictions)
+- Restrict access to your app by client's behavior - request frequency and concurrency
- [Configure TLS mutual authentication to require client certificates to connect to your web app](../../app-service/app-service-web-configure-tls-mutual-auth.md)
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/best-practices-workspace-architecture.md
Costs are one of the main considerations when determining Microsoft Sentinel arc
### Working with multiple tenants
-If you have multiple tenants, such as if you're a managed security service provider (MSSP), we recommend that you create at least one workspace for each Azure AD tenant to support built-in, [service to service data connectors](connect-data-sources.md#service-to-service-integration) that work only within their own Azure AD tenant.
+If you have multiple tenants, such as if you're a managed security service provider (MSSP), we recommend that you create at least one workspace for each Azure AD tenant to support built-in, [service to service data connectors](connect-data-sources.md#service-to-service-integration-for-data-connectors) that work only within their own Azure AD tenant.
-All connectors based on diagnostics settings, cannot be connected to a workspace that is not located in the same tenant where the resource resides. This applies to connectors such as [Azure Firewall](./data-connectors-reference.md#azure-firewall), [Azure Storage](./data-connectors-reference.md#azure-storage-account), [Azure Activity](./data-connectors-reference.md#azure-activity) or [Azure Active Directory](connect-azure-active-directory.md).
+Connectors based on diagnostics settings can't be connected to a workspace that's not located in the same tenant where the resource resides. This applies to connectors such as [Azure Firewall](./data-connectors-reference.md#azure-firewall), [Azure Storage](./data-connectors-reference.md#azure-storage-account), [Azure Activity](./data-connectors-reference.md#azure-activity) or [Azure Active Directory](connect-azure-active-directory.md).
Use [Azure Lighthouse](../lighthouse/how-to/onboard-customer.md) to help manage multiple Microsoft Sentinel instances in different tenants.
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-data-sources.md
Title: Microsoft Sentinel data connectors | Microsoft Docs
-description: Learn how to connect data sources like Microsoft 365 Defender (formerly Microsoft Threat Protection), Microsoft 365 and Office 365, Azure AD, ATP, and Defender for Cloud Apps to Microsoft Sentinel.
+ Title: Microsoft Sentinel data connectors
+description: Learn about the data connectors that Microsoft Sentinel supports, like Microsoft 365 Defender (formerly Microsoft Threat Protection), Microsoft 365 and Office 365, Azure AD, ATP, and Defender for Cloud Apps.
- Previously updated : 11/09/2021 Last updated : 07/14/2022 # Microsoft Sentinel data connectors - [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-After onboarding Microsoft Sentinel into your workspace, connect data sources to start ingesting your data into Microsoft Sentinel. Microsoft Sentinel comes with many connectors for Microsoft products, available out of the box and providing real-time integration. For example, service-to-service connectors include Microsoft 365 Defender connectors and Microsoft 365 sources, such as Office 365, Azure Active Directory (Azure AD), Microsoft Defender for Identity, and Microsoft Defender for Cloud Apps.
+After you onboard Microsoft Sentinel into your workspace, you can use data connectors to start ingesting your data into Microsoft Sentinel. Microsoft Sentinel comes with many out-of-the-box connectors for Microsoft services, which you can integrate in real time. For example, the Microsoft 365 Defender connector is a [service-to-service connector](#service-to-service-integration-for-data-connectors) that integrates data from Office 365, Azure Active Directory (Azure AD), Microsoft Defender for Identity, and Microsoft Defender for Cloud Apps.
-You can also enable out-of-the-box connectors to the broader security ecosystem for non-Microsoft products. For example, you can use [Syslog](#syslog), [Common Event Format (CEF)](#common-event-format-cef), or [REST APIs](#rest-api-integration) to connect your data sources with Microsoft Sentinel.
+You can also enable out-of-the-box connectors to the broader security ecosystem for non-Microsoft products. For example, you can use [Syslog](#syslog), [Common Event Format (CEF)](#common-event-format-cef), or [REST APIs](#rest-api-integration-using-azure-functions) to connect your data sources with Microsoft Sentinel.
-The **Data connectors** page, accessible from the Microsoft Sentinel navigation menu, shows the full list of connectors that Microsoft Sentinel provides, and their status in your workspace. Select the connector you want to connect, and then select **Open connector page**.
+Learn about [types of Microsoft Sentinel data connectors](data-connectors-reference.md) or learn about the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
-![Data connectors gallery](./media/collect-data/collect-data-page.png)
+The Microsoft Sentinel **Data connectors** page shows the full list of connectors and their status in your workspace.
-This article describes supported data connection methods. For more information, see [Microsoft Sentinel data connectors reference](data-connectors-reference.md) and the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
<a name="agent-options"></a> <a name="data-connection-methods"></a>
This article describes supported data connection methods. For more information,
## Enable a data connector
-The **Data connectors** page, accessible from the Microsoft Sentinel navigation menu, shows the full list of connectors that Microsoft Sentinel provides, and their status. Select the connector you want to connect, and then select **Open connector page**.
-
-![Data connectors gallery](./media/collect-data/collect-data-page.png)
-
-You'll need to have fulfilled all the prerequisites, and you'll see complete instructions on the connector page to ingest the data to Microsoft Sentinel. It may take some time for data to start arriving. After you connect, you see a summary of the data in the **Data received** graph, and the connectivity status of the data types.
+Select the connector you want to connect, and then select **Open connector page**.
-![Configure data connectors](./media/collect-data/opened-connector-page.png)
+- Once you fulfill all the prerequisites listed in the **Instructions** tab, the connector page describes how to ingest the data to Microsoft Sentinel. It may take some time for data to start arriving. After you connect, you see a summary of the data in the **Data received** graph, and the connectivity status of the data types.
+
+ :::image type="content" source="media/collect-data/opened-connector-page.png" alt-text="Screenshot showing how to configure data connectors." border="false":::
+
+- In the **Next steps** tab, you'll see more content for the specific data type: Sample queries, visualization workbooks, and analytics rule templates to help you detect and investigate threats.
-In the **Next steps** tab, you'll see additional content that Microsoft Sentinel provides for the specific data type - sample queries, visualization workbooks, and analytics rule templates to help you detect and investigate threats.
+ :::image type="content" source="media/collect-data/data-insights.png" alt-text="Screenshot showing the data connecter Next steps tab." border="false":::
-![Next steps for connectors](./media/collect-data/data-insights.png)
+Learn about your specific data connector in the [data connectors reference](data-connectors-reference.md).
-For more information, see the relevant section for your data connector in the [data connectors reference](data-connectors-reference.md).
-
-## REST API integration
+## REST API integration for data connectors
Many security technologies provide a set of APIs for retrieving log files, and some data sources can use those APIs to connect to Microsoft Sentinel. Data connectors that use APIs either integrate from the provider side or integrate using Azure Functions, as described in the following sections.
-For a complete listing and information about these connectors, see the [data connectors reference](data-connectors-reference.md).
+Learn more about data connectors in the [data connectors reference](data-connectors-reference.md).
### REST API integration on the provider side
-An API integration that is built by the provider connects with the provider data sources and pushes data into Microsoft Sentinel custom log tables using the [Azure Monitor Data Collector API](../azure-monitor/logs/data-collector-api.md).
+An API integration built by the provider connects with the provider data sources and pushes data into Microsoft Sentinel custom log tables using the [Azure Monitor Data Collector API](../azure-monitor/logs/data-collector-api.md).
-For more information, see your provider documentation and [Connect your data source to Microsoft Sentinel's REST-API to ingest data](connect-rest-api-template.md).
+To learn about REST API integration, read your provider documentation and [Connect your data source to Microsoft Sentinel's REST-API to ingest data](connect-rest-api-template.md).
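To give a feel for what such an integration pushes, here's an invented example of a Data Collector API request body; the field names and values are hypothetical. The API accepts a JSON array of records, and each record lands in a custom log table named by the request's `Log-Type` header (for example, `MyVendor_CL`):

```json
[
  {
    "SourceSystem": "ExampleAppliance",
    "EventTime": "2022-07-14T10:00:00Z",
    "Severity": "High",
    "Message": "Suspicious sign-in attempt blocked"
  },
  {
    "SourceSystem": "ExampleAppliance",
    "EventTime": "2022-07-14T10:05:00Z",
    "Severity": "Low",
    "Message": "Configuration updated"
  }
]
```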
### REST API integration using Azure Functions
-Integrations that use [Azure Functions](../azure-functions/index.yml) to connect with a provider API first format the data, and then send it to Microsoft Sentinel custom log tables using the [Azure Monitor Data Collector API](../azure-monitor/logs/data-collector-api.md).
-
-To configure these data connectors to connect with the provider API and collect logs in Microsoft Sentinel, follow the steps shown for each data connector in Microsoft Sentinel.
-
-For more information, see [Use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md).
+Integrations that use [Azure Functions](../azure-functions/index.yml) to connect with a provider API first format the data, and then send it to Microsoft Sentinel custom log tables using the [Azure Monitor Data Collector API](../azure-monitor/logs/data-collector-api.md). Learn how to [use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md).
> [!IMPORTANT]
-> Integrations that use Azure Functions may incur additional data ingestion costs, because you host Azure Functions on your Azure tenant. For more information, see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
+> Integrations that use Azure Functions may have extra data ingestion costs, because you host Azure Functions on your Azure tenant. Learn more about [Azure Functions pricing](https://azure.microsoft.com/pricing/details/functions/).
-## Agent-based integration
+## Agent-based integration for data connectors
-Microsoft Sentinel can use the Syslog protocol to connect via an agent to any data source that can perform real-time log streaming. For example, most on-premises data sources connect via agent-based integration.
+Microsoft Sentinel can use the Syslog protocol to connect an agent to any data source that can perform real-time log streaming. For example, most on-premises data sources connect using agent-based integration.
The following sections describe the different types of Microsoft Sentinel agent-based data connectors. Follow the steps in each Microsoft Sentinel data connector page to configure connections using agent-based mechanisms.
-For a complete listing of firewalls, proxies, and endpoints that connect to Microsoft Sentinel through CEF or Syslog, see the [data connectors reference](data-connectors-reference.md).
+Learn which firewalls, proxies, and endpoints connect to Microsoft Sentinel through CEF or Syslog in the [data connectors reference](data-connectors-reference.md).
### Syslog
-You can stream events from Linux-based, Syslog-supporting devices into Microsoft Sentinel by using the Log Analytics agent for Linux, formerly called the OMS agent. The Log Analytics agent is supported for any device that allows you to install the Log Analytics agent directly on the device.
+You can stream events from Linux-based, Syslog-supporting devices into Microsoft Sentinel using the Log Analytics agent for Linux, formerly named the OMS agent. Depending on the device type, the agent is installed either directly on the device, or on a dedicated Linux-based log forwarder. The Log Analytics agent receives events from the Syslog daemon over UDP. If a Linux machine is expected to collect a high volume of Syslog events, it sends events over TCP from the Syslog daemon to the agent, and from there to Log Analytics. Learn how to [connect Syslog-based appliances to Microsoft Sentinel](connect-syslog.md).
-The device's built-in Syslog daemon collects local events of the specified types, and forwards them locally to the agent, which then streams them to your Log Analytics workspace. After successful configuration, the data appears in the Log Analytics Syslog table.
+Here is a simple flow that shows how Microsoft Sentinel streams Syslog data.
-Depending on the device type, the agent is installed either directly on the device, or on a dedicated Linux-based log forwarder. The Log Analytics agent receives events from the Syslog daemon over UDP. If a Linux machine is expected to collect a high volume of Syslog events, it sends events over TCP from the Syslog daemon to the agent, and from there to Log Analytics.
-
-For more information, see [Connect Syslog-based appliances to Microsoft Sentinel](connect-syslog.md).
+1. The device's built-in Syslog daemon collects local events of the specified types, and forwards the events locally to the agent.
+1. The agent streams the events to your Log Analytics workspace.
+1. After successful configuration, the data appears in the Log Analytics Syslog table.
### Common Event Format (CEF)
Log formats vary, but many sources support CEF-based formatting. The Microsoft S
For data sources that emit data in CEF, set up the Syslog agent and then configure the CEF data flow. After successful configuration, the data appears in the **CommonSecurityLog** table.
-For more information, see [Connect CEF-based appliances to Microsoft Sentinel](connect-common-event-format.md).
+Learn how to [connect CEF-based appliances to Microsoft Sentinel](connect-common-event-format.md).
### Custom logs
-Some data sources have logs available for collection as files on Windows or Linux. You can collect these logs by using the Log Analytics custom log collection agent.
+For some data sources, you can collect logs as files on Windows or Linux computers using the Log Analytics custom log collection agent.
Follow the steps in each Microsoft Sentinel data connector page to connect using the Log Analytics custom log collection agent. After successful configuration, the data appears in custom tables.
-For more information, see [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md).
+Learn how to [collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md).
-## Service-to-service integration
+## Service-to-service integration for data connectors
-Microsoft Sentinel uses the Azure foundation to provide out-of-the-box, service-to-service support for Microsoft services and Amazon Web Services.
+Microsoft Sentinel uses the Azure foundation to provide out-of-the-box, service-to-service support for Microsoft services and Amazon Web Services.
-For more information, see [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md) and the [data connectors reference](data-connectors-reference.md).
+Learn how to [connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md) or learn about data connector types in the [data connectors reference](data-connectors-reference.md).
-## Deploy as part of a solution
+## Deploy data connectors as part of a solution
-[Microsoft Sentinel solutions](sentinel-solutions.md) provide packages of security content, including data connectors, workbooks, analytics rules, playbooks, and more. When you deploy a solution with a data connector, you'll get the data connector together with related content in the same deployment.
+[Microsoft Sentinel solutions](sentinel-solutions.md) provide packages of security content, including data connectors, workbooks, analytics rules, playbooks, and more. When you deploy a solution with a data connector, you get the data connector together with related content in the same deployment.
-For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md) and the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
+Learn how to [centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md) or learn about the [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md).
## Data connector support
-Both Microsoft and other organizations author Microsoft Sentinel data connectors. Each data connector has one of the following support types:
+Both Microsoft and other organizations author Microsoft Sentinel data connectors. Each data connector has one of these support types:
| Support type| Description| |-||
-|**Microsoft-supported**|Applies to:<ul><li>Data connectors for data sources where Microsoft is the data provider and author.</li><li>Some Microsoft-authored data connectors for non-Microsoft data sources.</li></ul>Microsoft supports and maintains data connectors in this category in accordance with [Microsoft Azure Support Plans](https://azure.microsoft.com/support/options/#overview).<br><br>Partners or the Community support data connectors that are authored by any party other than Microsoft.|
+|**Microsoft-supported**|Applies to:<ul><li>Data connectors for data sources where Microsoft is the data provider and author.</li><li>Some Microsoft-authored data connectors for non-Microsoft data sources.</li></ul>Microsoft supports and maintains data connectors in this category according to the [Microsoft Azure Support Plans](https://azure.microsoft.com/support/options/#overview).<br><br>Partners or the Community support data connectors that are authored by any party other than Microsoft.|
|**Partner-supported**|Applies to data connectors authored by parties other than Microsoft.<br><br>The partner company provides support or maintenance for these data connectors. The partner company can be an Independent Software Vendor, a Managed Service Provider (MSP/MSSP), a Systems Integrator (SI), or any organization whose contact information is provided on the Microsoft Sentinel page for that data connector.<br><br>For any issues with a partner-supported data connector, contact the specified data connector support contact.| |**Community-supported**|Applies to data connectors authored by Microsoft or partner developers that don't have listed contacts for data connector support and maintenance on the specified data connector page in Microsoft Sentinel.<br><br>For questions or issues with these data connectors, you can [file an issue](https://github.com/Azure/Azure-Sentinel/issues/new/choose) in the [Microsoft Sentinel GitHub community](https://aka.ms/threathunters).| ### Find the support contact for a data connector
-To find the support contact information for a data connector:
-
-1. In the Microsoft Sentinel left menu, select **Data connectors**.
-
-1. Select the connector you want to find support information for.
-
-1. View the **Supported by** field on the side panel for the data connector.
-
- ![Screenshot showing the Supported by field for a data connector in Microsoft Sentinel.](./media/collect-data/connectors.png)
+1. In the Microsoft Sentinel **Data connectors** page, select the relevant connector.
+1. To access support and maintenance for the connector, use the support contact link in the **Supported by** field on the side panel for the connector.
- The **Supported by** field has a support contact link you can use to access support and maintenance for the selected data connector.
+ :::image type="content" source="media/collect-data/connectors.png" alt-text="Screenshot showing the Supported by field for a data connector in Microsoft Sentinel." lightbox="media/collect-data/connectors.png":::
## Next steps

- To get started with Microsoft Sentinel, you need a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).
-- Learn how to [onboard your data to Microsoft Sentinel](quickstart-onboard.md) and [get visibility into your data and potential threats](get-visibility.md).
+- Learn how to [onboard your data to Microsoft Sentinel](quickstart-onboard.md) and [get visibility into your data and potential threats](get-visibility.md).
service-fabric Service Fabric Application Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-scenarios.md
Consider using the Service Fabric platform for the following types of applicatio
[Kohler](https://customers.microsoft.com/story/kohler-konnect-azure-iot), and [Dover Fueling Systems](https://customers.microsoft.com/story/775087-microsoft-country-corner-dover-fueling-solutions-oil-and-gas-azure).
-* **Gaming and session-based interactive applications**: Service Fabric is useful if your application requires low-latency reads and writes, such as in online gaming or instant messaging. Service Fabric enables you to build these interactive, stateful applications without having to create a separate store or cache. Visit [Azure gaming solutions](https://azure.microsoft.com/solutions/gaming/) for design guidance on [using Service Fabric in gaming services](/gaming/azure/reference-architectures/multiplayer-synchronous-sf).
+* **Gaming and session-based interactive applications**: Service Fabric is useful if your application requires low-latency reads and writes, such as in online gaming or instant messaging. Service Fabric enables you to build these interactive, stateful applications without having to create a separate store or cache. Visit [Azure gaming solutions](https://azure.microsoft.com/solutions/gaming/) for design guidance on [using Service Fabric in gaming services](/gaming/azure/reference-architectures/multiplayer-synchronous).
Customers who have built gaming services include [Next Games](https://customers.microsoft.com/story/next-games-media-telecommunications-azure). Customers who have built interactive sessions include [Honeywell with Hololens](https://customers.microsoft.com/story/honeywell-manufacturing-hololens).
service-fabric Service Fabric Best Practices Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-applications.md
Be thorough about adding [application logging](./service-fabric-diagnostics-even
## Design guidance on Azure * Visit the [Azure architecture center](/azure/architecture/microservices/) for design guidance on [building microservices on Azure](/azure/architecture/microservices/).
-* Visit [Get Started with Azure for Gaming](/gaming/azure/) for design guidance on [using Service Fabric in gaming services](/gaming/azure/reference-architectures/multiplayer-synchronous-sf).
+* Visit [Get Started with Azure for Gaming](/gaming/azure/) for design guidance on [using Service Fabric in gaming services](/gaming/azure/reference-architectures/multiplayer-synchronous).
service-fabric Service Fabric Concepts Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concepts-partitioning.md
As we literally want to have one partition per letter, we can use 0 as the low k
2. In the **New Project** dialog box, choose the Service Fabric application. 3. Call the project "AlphabetPartitions". 4. In the **Create a Service** dialog box, choose **Stateful** service and call it "Alphabet.Processing".
-5. Set the number of partitions. Open the Applicationmanifest.xml file located in the ApplicationPackageRoot folder of the AlphabetPartitions project and update the parameter Processing_PartitionCount to 26 as shown below.
+5. Set the number of partitions. Open the ApplicationManifest.xml file located in the ApplicationPackageRoot folder of the AlphabetPartitions project and update the parameter Processing_PartitionCount to 26 as shown below.
   ```xml
   <Parameter Name="Processing_PartitionCount" DefaultValue="26" />
   ```
service-fabric Service Fabric Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-containers-overview.md
Compared to virtual machines, containers have the following advantages:
* **Portability**: A containerized application image can be ported to run in the cloud, on premises, inside virtual machines, or directly on physical machines. * **Resource governance**: A container can limit the physical resources that it can consume on its host.
-### Container types and supported environments
+## Service Fabric support for containers
+
+Service Fabric supports the deployment of Docker containers on Linux, and Windows Server containers on Windows Server 2016 and later, along with support for Hyper-V isolation mode.
-Service Fabric supports containers on both Linux and Windows, and supports Hyper-V isolation mode on Windows.
+Container runtimes compatible with Service Fabric:
+- Mirantis Container Runtime
+- Moby
#### Docker containers on Linux
Here are typical examples where a container is a good choice:
* **Reduce impact of "noisy neighbors" services**: You can use the resource governance ability of containers to restrict the resources that a service uses on a host. If services might consume many resources and affect the performance of others (such as a long-running, query-like operation), consider putting these services into containers that have resource governance.
-## Service Fabric support for containers
-
-Service Fabric supports the deployment of Docker containers on Linux, and Windows Server containers on Windows Server 2016 and later, along with support for Hyper-V isolation mode.
- > [!NOTE] > A Service Fabric cluster is single tenant by design and hosted applications are considered **trusted**. If you are considering hosting **untrusted applications**, please see [Hosting untrusted applications in a Service Fabric cluster](service-fabric-best-practices-security.md#hosting-untrusted-applications-in-a-service-fabric-cluster).
service-fabric Service Fabric Manage Application In Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-manage-application-in-visual-studio.md
Title: Manage applications in Visual Studio description: Use Visual Studio to create, develop, package, deploy, and debug your Azure Service Fabric applications and services.- Previously updated : 03/26/2018+++++ Last updated : 07/14/2022 + # Use Visual Studio to simplify writing and managing your Service Fabric applications You can manage your Azure Service Fabric applications and services through Visual Studio. Once you've [set up your development environment](service-fabric-get-started.md), you can use Visual Studio to create Service Fabric applications, add services, or package, register, and deploy applications in your local development cluster.
By default, deploying an application combines the following steps into one simpl
In Visual Studio, pressing **F5** deploys your application and attaches the debugger to all application instances. You can use **Ctrl+F5** to deploy an application without debugging, or you can publish to a local or remote cluster by using the publish profile. ### Application Debug Mode
-Visual Studio provide a property called **Application Debug Mode**, which controls how you want Visual Studios to handle Application deployment as part of debugging.
+Visual Studio provides a property called **Application Debug Mode**, which controls how you want Visual Studio to handle application deployment as part of debugging.
#### To set the Application Debug Mode property 1. On the Service Fabric application project's (*.sfproj) shortcut menu, choose **Properties** (or press the **F4** key).
service-fabric Service Fabric Manage Multiple Environment App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-manage-multiple-environment-app-configuration.md
Title: Manage apps for multiple environments description: Azure Service Fabric applications can be run on clusters that range in size from one machine to thousands of machines. In some cases, you will want to configure your application differently for those varied environments. This article covers how to define different application parameters per environment. Previously updated : 02/23/2018++++ Last updated : 07/11/2022 + # Manage applications for multiple environments Azure Service Fabric clusters enable you to create clusters using anywhere from one to many thousands of machines. In most cases, you find yourself having to deploy your application across multiple cluster configurations: your local development cluster, a shared development cluster, and your production cluster. All of these clusters are considered different environments your code has to run in. Application binaries can run without modification across this wide spectrum, but you often want to configure the application differently.
service-fabric Service Fabric Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-managed-disk.md
Title: Deploy Service Fabric node types with managed data disks description: Learn how to create and deploy Service Fabric node types with attached managed data disks.--- Previously updated : 10/19/2021--+++++ Last updated : 07/11/2022 # Deploy an Azure Service Fabric cluster node type with managed data disks
service-fabric Service Fabric Manifest Example Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-manifest-example-container-app.md
Title: Azure Service Fabric container application manifest examples description: Learn how to configure application and service manifest settings for a multi-container Service Fabric application.-- Previously updated : 06/08/2018-++++ Last updated : 07/11/2022 # Multi-container application and service manifest examples
See [Application manifest elements](#application-manifest-elements), [FrontEndSe
## Application manifest elements ### ApplicationManifest Element
-Declaratively describes the application type and version. One or more service manifests of the constituent services are referenced to compose an application type. Configuration settings of the constituent services can be overridden using parameterized application settings. Default services, service templates, principals, policies, diagnostics set-up, and certificates can also declared at the application level. For more information, see [ApplicationManifest Element](service-fabric-service-model-schema-elements.md#ApplicationManifestElementApplicationManifestTypeComplexType)
+Declaratively describes the application type and version. One or more service manifests of the constituent services are referenced to compose an application type. Configuration settings of the constituent services can be overridden using parameterized application settings. Default services, service templates, principals, policies, diagnostics set-up, and certificates can also be declared at the application level. For more information, see [ApplicationManifest Element](service-fabric-service-model-schema-elements.md#ApplicationManifestElementApplicationManifestTypeComplexType)
### Parameters Element Declares the parameters that are used in this application manifest. The value of these parameters can be supplied when the application is instantiated and can be used to override application or service configuration settings. For more information, see [Parameters Element](service-fabric-service-model-schema-elements.md#ParametersElementanonymouscomplexTypeComplexTypeDefinedInApplicationManifestTypecomplexType)
Windows Server containers may not be compatible across different versions of the
is assumed to work across all versions of the OS and overrides the image specified in the service manifest. For more information, see [ImageOverrides Element](service-fabric-service-model-schema-elements.md#ImageOverridesElementImageOverridesTypeComplexTypeDefinedInContainerHostPoliciesTypecomplexType) ### Image Element
-Container image corresponding to OS build version number to be launched. If the Os attribute is not specified, the container image
+Container image corresponding to OS build version number to be launched. If the OS attribute is not specified, the container image
is assumed to work across all versions of the OS and overrides the image specified in the service manifest. For more information, see [Image Element](service-fabric-service-model-schema-elements.md#ImageElementImageTypeComplexTypeDefinedInImageOverridesTypecomplexType) ### EnvironmentOverrides Element
service-fabric Service Fabric Manifest Example Reliable Services App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-manifest-example-reliable-services-app.md
Title: Reliable services app manifest examples description: Learn how to configure application and service manifest settings for a reliable services Service Fabric application.-- Previously updated : 06/11/2018-++++ Last updated : 07/11/2022 # Reliable services application and service manifest examples
See [Application manifest elements](#application-manifest-elements), [VotingWeb
## Application manifest elements ### ApplicationManifest Element
-Declaratively describes the application type and version. One or more service manifests of the constituent services are referenced to compose an application type. Configuration settings of the constituent services can be overridden using parameterized application settings. Default services, service templates, principals, policies, diagnostics set-up, and certificates can also declared at the application level. For more information, see [ApplicationManifest Element](service-fabric-service-model-schema-elements.md#ApplicationManifestElementApplicationManifestTypeComplexType)
+Declaratively describes the application type and version. One or more service manifests of the constituent services are referenced to compose an application type. Configuration settings of the constituent services can be overridden using parameterized application settings. Default services, service templates, principals, policies, diagnostics set-up, and certificates can also be declared at the application level. For more information, see [ApplicationManifest Element](service-fabric-service-model-schema-elements.md#ApplicationManifestElementApplicationManifestTypeComplexType)
### Parameters Element
Declares the parameters that are used in this application manifest. The value of these parameters can be supplied when the application is instantiated and can be used to override application or service configuration settings. For more information, see [Parameters Element](service-fabric-service-model-schema-elements.md#ParametersElementanonymouscomplexTypeComplexTypeDefinedInApplicationManifestTypecomplexType)
service-fabric Service Fabric Manifest Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-manifest-examples.md
Title: Azure Service Fabric application manifest examples
description: Learn how to configure application and service manifest settings for a Service Fabric application.
- Previously updated : 06/11/2018
+ Last updated : 07/11/2022
# Service Fabric application and service manifest examples
service-fabric Service Fabric Migrate Old Javaapp To Use Maven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-migrate-old-javaapp-to-use-maven.md
Title: Migrate from Java SDK to Maven
-description: Update the older Java applications which used to use the Service Fabric Java SDK, to fetch Service Fabric Java dependencies from Maven. After completing this setup, your older Java applications would be able to build .
- Previously updated : 08/23/2017
+description: Update the older Java applications which used to use the Service Fabric Java SDK, to fetch Service Fabric Java dependencies from Maven. After completing this setup, your older Java applications would be able to build.
+ Last updated : 07/11/2022
# Update your previous Java Service Fabric application to fetch Java libraries from Maven
Service Fabric Java binaries have moved from the Service Fabric Java SDK to Maven hosting. You can use **mavencentral** to fetch the latest Service Fabric Java dependencies. This guide will help you update existing Java applications created for the Service Fabric Java SDK using either the Yeoman template or Eclipse to be compatible with the Maven-based build.
service-fabric Service Fabric Networking Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-networking-modes.md
Title: Configure networking modes for container services
description: Learn how to set up the different networking modes that are supported by Azure Service Fabric.
- Previously updated : 2/23/2018
+ Last updated : 07/11/2022
# Service Fabric container networking modes
An Azure Service Fabric cluster for container services uses **nat** networking mode by default. When more than one container service is listening on the same port and nat mode is being used, deployment errors can occur. To support multiple container services listening on the same port, Service Fabric offers **Open** networking mode (versions 5.7 and later). In Open mode, each container service has an internal, dynamically assigned IP address that supports multiple services listening on the same port.
service-fabric Service Fabric Node Transition Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-node-transition-apis.md
Title: Start and stop cluster nodes
description: Learn how to use fault injection to test a Service Fabric application by starting and stopping cluster nodes.
- Previously updated : 6/12/2017
+ Last updated : 07/11/2022
# Replacing the Start Node and Stop Node APIs with the Node Transition API
As described earlier, a *stopped* Service Fabric node is a node intentionally ta
In addition, some errors returned by these APIs are not as descriptive as they could be. For example, invoking the Stop Node API on an already *stopped* node will return the error *InvalidAddress*. This experience could be improved.
-Also, the duration a node is stopped for is "infinite" until the Start Node API is invoked. We've found this can cause problems and may be error-prone. For example, we've seen problems where a user invoked the Stop Node API on a node and then forgot about it. Later, it was unclear if the node was *down* or *stopped*.
+Also, the duration a node remains stopped is "infinite" until the Start Node API is invoked. We've found this can cause problems and may be error-prone. For example, we've seen problems where a user invoked the Stop Node API on a node and then forgot about it. Later, it was unclear if the node was *down* or *stopped*.
## Introducing the Node Transition APIs
We've addressed these issues above in a new set of APIs. The new Node Transitio
**Usage**
-If the Node Transition API does not throw an exception when invoked, then the system has accepted the asynchronous operation, and will execute it. A successful call does not imply the operation is finished yet. To get information about the current state of the operation, call the Node Transition Progress API (managed: [GetNodeTransitionProgressAsync()][gntp]) with the guid used when invoking Node Transition API for this operation. The Node Transition Progress API returns an NodeTransitionProgress object. This object's State property specifies the current state of the operation. If the state is "Running", then the operation is executing. If it is Completed, the operation finished without error. If it is Faulted, there was a problem executing the operation. The Result property's Exception property will indicate what the issue was. See [TestCommandProgressState Enum](/dotnet/api/system.fabric.testcommandprogressstate) for more information about the State property, and the "Sample Usage" section below for code examples.
+If the Node Transition API does not throw an exception when invoked, then the system has accepted the asynchronous operation, and will execute it. A successful call does not imply the operation is finished yet. To get information about the current state of the operation, call the Node Transition Progress API (managed: [GetNodeTransitionProgressAsync()][gntp]) with the guid used when invoking Node Transition API for this operation. The Node Transition Progress API returns a NodeTransitionProgress object. This object's State property specifies the current state of the operation. If the state is "Running", then the operation is executing. If it is Completed, the operation finished without error. If it is Faulted, there was a problem executing the operation. The Result property's Exception property will indicate what the issue was. See [TestCommandProgressState Enum](/dotnet/api/system.fabric.testcommandprogressstate) for more information about the State property, and the "Sample Usage" section below for code examples.
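As a rough C# sketch of that flow (not the article's own "Sample Usage" section): the code below stops a node and polls the progress API until the operation leaves the Running state. The node name, instance ID, durations, and timeout are placeholders, and exact namespaces may differ slightly from this sketch.

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Numerics;
using System.Threading;
using System.Threading.Tasks;

class NodeTransitionSample
{
    static async Task StopNodeAsync(FabricClient fabricClient, string nodeName, BigInteger nodeInstanceId)
    {
        // The guid identifies this operation; keep it to query progress later.
        Guid operationId = Guid.NewGuid();
        TimeSpan timeout = TimeSpan.FromMinutes(1); // placeholder timeout
        int stopDurationInSeconds = 600;            // illustrative stop duration

        // Ask the system to transition the node to stopped.
        var description = new NodeStopDescription(operationId, nodeName, nodeInstanceId, stopDurationInSeconds);
        await fabricClient.TestManager.StartNodeTransitionAsync(description, timeout, CancellationToken.None);

        // Poll until the asynchronous operation finishes.
        NodeTransitionProgress progress;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(5));
            progress = await fabricClient.TestManager.GetNodeTransitionProgressAsync(operationId, timeout, CancellationToken.None);
        }
        while (progress.State == TestCommandProgressState.Running);

        if (progress.State == TestCommandProgressState.Faulted)
        {
            // The Result property's Exception property explains the failure.
            Console.WriteLine($"Node transition failed: {progress.Result.Exception}");
        }
    }
}
```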
**Differentiating between a stopped node and a down node**
If a node is *stopped* using the Node Transition API, the output of a node query (managed: [GetNodeListAsync()][nodequery], PowerShell: [Get-ServiceFabricNode][nodequeryps]) will show that this node has an *IsStopped* property value of true. Note that this is different from the value of the *NodeStatus* property, which will say *Down*. If the *NodeStatus* property has a value of *Down*, but *IsStopped* is false, then the node was not stopped using the Node Transition API, and is *Down* due to some other reason. If the *IsStopped* property is true, and the *NodeStatus* property is *Down*, then it was stopped using the Node Transition API.
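A short sketch of that check, assuming a FabricClient is already available; the property names follow the query API described above:

```csharp
using System;
using System.Fabric;
using System.Fabric.Query;
using System.Threading.Tasks;

class NodeStatusSample
{
    static async Task ListStoppedNodesAsync(FabricClient fabricClient)
    {
        NodeList nodes = await fabricClient.QueryManager.GetNodeListAsync();
        foreach (Node node in nodes)
        {
            if (node.NodeStatus == NodeStatus.Down)
            {
                // IsStopped distinguishes an intentionally stopped node from a failed one.
                string reason = node.IsStopped
                    ? "stopped via the Node Transition API"
                    : "down for some other reason";
                Console.WriteLine($"{node.NodeName} is {reason}");
            }
        }
    }
}
```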
service-fabric Service Fabric Overview Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-overview-microservices.md
Title: Introduction to microservices on Azure
description: An overview of why building cloud applications with a microservices approach is important for modern application development and how Azure Service Fabric provides a platform to achieve this.
- Previously updated : 01/07/2020
+ Last updated : 07/11/2022
# Why use a microservices approach to building applications
For software developers, factoring an application into component parts is nothing new. Typically, a tiered approach is used, with a back-end store, middle-tier business logic, and a front-end user interface (UI). What *has* changed over the last few years is that developers are building distributed applications for the cloud.
service-fabric Service Fabric Package Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-package-apps.md
Title: Package an Azure Service Fabric app
description: Learn about packaging an Azure Service Fabric application and how to prepare for deployment to a cluster.
- Previously updated : 2/23/2018
+ Last updated : 07/11/2022
# Package an application
This article describes how to package a Service Fabric application and make it ready for deployment.
service-fabric Service Fabric Patch Orchestration Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-patch-orchestration-application.md
Title: Use Patch Orchestration Application
description: Automate operating system patching on non-Azure hosted Service Fabric clusters by using Patch Orchestration Application.
- Previously updated : 2/01/2019
+ Last updated : 07/11/2022
# Use Patch Orchestration Application
service-fabric Service Fabric Patterns Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-patterns-networking.md
Title: Networking patterns for Azure Service Fabric
description: Describes common networking patterns for Service Fabric and how to create a cluster by using Azure networking features.
- Previously updated : 01/19/2018
+ Last updated : 07/11/2022
# Service Fabric networking patterns
You can integrate your Azure Service Fabric cluster with other Azure networking features. In this article, we show you how to create clusters that use the following features:
service-fabric Service Fabric Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-powershell-samples.md
Title: Azure PowerShell Samples - Service Fabric
description: Learn about the creation and management of Azure Service Fabric clusters, apps, and services using PowerShell.
- Previously updated : 11/29/2018
+ Last updated : 07/11/2022
# Azure Service Fabric PowerShell samples
The following table includes links to PowerShell script samples that create and manage Service Fabric clusters, applications, and services.
service-fabric Service Fabric Production Readiness Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-production-readiness-checklist.md
Title: Azure Service Fabric Production Readiness Checklist
description: Get your Service Fabric application and cluster production ready by following best practices.
- Previously updated : 6/05/2019
+ Last updated : 07/11/2022
# Production readiness checklist
service-fabric Service Fabric Project Creation Next Step Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-project-creation-next-step-tasks.md
Last updated 12/21/2020
# Your Service Fabric application and next steps
Your Azure Service Fabric application has been created. This article includes a number of resources, some more information you might be interested in, and potential [next steps](#next-steps).
service-fabric Service Fabric Quickstart Containers Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-containers-linux.md
Title: Create a Linux container app on Service Fabric in Azure
description: In this quickstart, you will build a Docker image with your application, push the image to a container registry, and then deploy your container to a Service Fabric cluster.
- Previously updated : 05/12/2022
+ Last updated : 07/11/2022
# Quickstart: Deploy Linux containers to Service Fabric
Azure Service Fabric is a distributed systems platform for deploying and managing scalable and reliable microservices and containers.
service-fabric Service Fabric Quickstart Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-containers.md
Title: Create a Windows container application on Service Fabric in Azure
description: In this quickstart, you create your first Windows container application on Azure Service Fabric.
- Previously updated : 07/10/2019
+ Last updated : 07/11/2022
# Quickstart: Deploy Windows containers to Service Fabric
service-fabric Service Fabric Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-dotnet.md
Title: Quickly create a .NET app on Service Fabric in Azure
description: In this quickstart, you create a .NET application for Azure using the Service Fabric reliable services sample application.
- Previously updated : 06/26/2019
+ Last updated : 07/11/2022
# Quickstart: Deploy a .NET reliable services application to Service Fabric
Using this application you learn how to:
* Use ASP.NET Core as a web front end
* Store application data in a stateful service
* Debug your application locally
-* Scale-out the application across multiple nodes
+* Scale out the application across multiple nodes
* Perform a rolling application upgrade
## Prerequisites
In this quickstart, you learned how to:
* Use ASP.NET Core as a web front end
* Store application data in a stateful service
* Debug your application locally
-* Scale-out the application across multiple nodes
+* Scale out the application across multiple nodes
* Perform a rolling application upgrade
To learn more about Service Fabric and .NET, take a look at this tutorial:
service-fabric Service Fabric Quickstart Java Reliable Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-java-reliable-services.md
Title: 'Quickstart: Create a Java app on Azure Service Fabric'
description: In this quickstart, you create a Java application for Azure using a Service Fabric reliable services sample application.
- Previously updated : 01/29/2019
+ Last updated : 07/11/2022
# Quickstart: Deploy a Java app to Azure Service Fabric on Linux
In this quickstart, you learned how to:
* Use Eclipse as a tool for your Service Fabric Java applications
* Deploy Java applications to your local cluster
-* Scale-out the application across multiple nodes
+* Scale out the application across multiple nodes
To learn more about working with Java apps in Service Fabric, continue to the tutorial for Java apps.
service-fabric Service Fabric Quickstart Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-java-spring-boot.md
Title: 'Quickstart: Create a Spring Boot app on Azure Service Fabric'
description: In this quickstart, you deploy a Spring Boot application for Azure Service Fabric using a Spring Boot sample application.
- Previously updated : 01/29/2019
+ Last updated : 07/11/2022
# Quickstart: Deploy a Java Spring Boot app on Azure Service Fabric
-In this quickstart, you deploy a Java Spring Boot application to Azure Service Fabric by using familiar command-line tools on Linux or MacOS. Azure Service Fabric is a distributed systems platform for deploying and managing microservices and containers.
+In this quickstart, you deploy a Java Spring Boot application to Azure Service Fabric by using familiar command-line tools on Linux or macOS. Azure Service Fabric is a distributed systems platform for deploying and managing microservices and containers.
## Prerequisites
In this quickstart, you deploy a Java Spring Boot application to Azure Service F
- [Service Fabric SDK & Service Fabric Command Line Interface (CLI)](./service-fabric-get-started-linux.md#installation-methods)
- [Git](https://git-scm.com/downloads)
-#### [MacOS](#tab/macos)
+#### [macOS](#tab/macos)
- [Java environment and Yeoman](./service-fabric-get-started-mac.md#create-your-application-on-your-mac-by-using-yeoman)
- [Service Fabric SDK & Service Fabric Command Line Interface (CLI)](./service-fabric-cli.md#cli-mac)
In this quickstart, you learned how to:
* Deploy a Spring Boot application to Service Fabric
* Deploy the application to your local cluster
-* Scale-out the application across multiple nodes
+* Scale out the application across multiple nodes
* Perform failover of your service with no loss of availability
To learn more about working with Java apps in Service Fabric, continue to the tutorial for Java apps.
service-fabric Service Fabric Reliable Actors Access Save Remove State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-access-save-remove-state.md
Title: Manage Azure Service Fabric state
description: Learn about accessing, saving, and removing state for an Azure Service Fabric Reliable Actor, and considerations when designing an application.
- Previously updated : 03/19/2018
+ Last updated : 07/11/2022
# Access, save, and remove Reliable Actors state
[Reliable Actors](service-fabric-reliable-actors-introduction.md) are single-threaded objects that can encapsulate both logic and state and maintain state reliably. Every actor instance has its own [state manager](service-fabric-reliable-actors-state-management.md): a dictionary-like data structure that reliably stores key/value pairs. The state manager is a wrapper around a state provider. You can use it to store data regardless of which [persistence setting](service-fabric-reliable-actors-state-management.md#state-persistence-and-replication) is used.
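For orientation, here is a minimal C# sketch of the state manager calls described above, written inside an actor method; the state name "count" is illustrative:

```csharp
// Inside a class deriving from Actor; "count" is an illustrative state name.
public async Task IncrementAsync()
{
    // Create the entry if missing, otherwise read the stored value.
    int current = await this.StateManager.GetOrAddStateAsync("count", 0);

    // Save the updated value; it's persisted/replicated per the actor's persistence setting.
    await this.StateManager.SetStateAsync("count", current + 1);

    // Conditional read: HasValue is false when the key doesn't exist.
    var maybe = await this.StateManager.TryGetStateAsync<int>("count");

    // Remove the entry when it's no longer needed.
    await this.StateManager.TryRemoveStateAsync("count");
}
```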
class MyActorImpl extends FabricActor implements MyActor
## Next steps
-State that's stored in Reliable Actors must be serialized before its written to disk and replicated for high availability. Learn more about [Actor type serialization](service-fabric-reliable-actors-notes-on-actor-type-serialization.md).
+State that's stored in Reliable Actors must be serialized before it's written to disk and replicated for high availability. Learn more about [Actor type serialization](service-fabric-reliable-actors-notes-on-actor-type-serialization.md).
Next, learn more about [Actor diagnostics and performance monitoring](service-fabric-reliable-actors-diagnostics.md).
service-fabric Service Fabric Reliable Actors Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-backup-and-restore.md
Title: Backup and restore Azure Service Fabric actors
-description: Learn how to implement backup and restore in your Azure Service Fabric actors.
- Previously updated : 10/29/2018
+ Title: Back up and restore Azure Service Fabric actors
+description: Learn how to implement back up and restore in your Azure Service Fabric actors.
+ Last updated : 07/11/2022
-# Implement Reliable Actors backup and restore
+# Implement Reliable Actors back up and restore
> [!NOTE]
-> Microsoft recommends to use [Periodic backup and restore](service-fabric-backuprestoreservice-quickstart-azurecluster.md) for configuring data backup of Reliable Stateful services and Reliable Actors.
+> Microsoft recommends using [Periodic back up and restore](service-fabric-backuprestoreservice-quickstart-azurecluster.md) for configuring data backup of Reliable Stateful services and Reliable Actors.
> In the following example, a custom actor service exposes a method to back up actor data by taking advantage of the remoting listener already present in `ActorService`:
service-fabric Service Fabric Reliable Actors Delete Actors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-delete-actors.md
Title: Delete Azure Service Fabric actors
description: Learn how to manually and fully delete Reliable Actors and their state in an Azure Service Fabric application.
- Previously updated : 03/19/2018
+ Last updated : 07/11/2022
# Delete Reliable Actors and their state
Garbage collection of deactivated actors only cleans up the actor object, but it does not remove data that is stored in an actor's State Manager. When an actor is reactivated, its data is again made available to it through the State Manager. In cases where actors store data in State Manager and are deactivated but never reactivated, it may be necessary to clean up their data.
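As a minimal sketch of that cleanup, the following C# deletes one actor and its state through the actor service proxy; the service URI and actor ID are placeholders:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;

class DeleteActorSample
{
    static async Task DeleteAsync()
    {
        var serviceUri = new Uri("fabric:/MyApp/MyActorService"); // placeholder URI
        var actorId = new ActorId("user-42");                     // placeholder actor ID

        // Address the actor service (not the actor) for the partition owning this ID.
        IActorService actorServiceProxy = ActorServiceProxy.Create(serviceUri, actorId);

        // Removes the actor's data from the state provider.
        await actorServiceProxy.DeleteActorAsync(actorId, CancellationToken.None);
    }
}
```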
service-fabric Service Fabric Reliable Actors Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-diagnostics.md
Title: Actors diagnostics and monitoring
description: This article describes the diagnostics and performance monitoring features in the Service Fabric Reliable Actors runtime, including the events and performance counters emitted by it.
- Previously updated : 10/26/2017
+ Last updated : 07/11/2022
# Diagnostics and performance monitoring for Reliable Actors
The Reliable Actors runtime emits [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) events and [performance counters](/dotnet/api/system.diagnostics.performancecounter). These provide insights into how the runtime is operating and help with troubleshooting and performance monitoring.
service-fabric Service Fabric Reliable Actors Enumerate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-enumerate.md
Title: Enumerate actors on Azure Service Fabric
description: Learn about enumeration of Reliable Actors and their metadata in an Azure Service Fabric application using examples.
- Previously updated : 03/19/2018
+ Last updated : 07/11/2022
# Enumerate Service Fabric Reliable Actors
The Reliable Actors service allows a client to enumerate metadata about the actors that the service is hosting. Because the actor service is a partitioned stateful service, enumeration is performed per partition. Because each partition might contain many actors, the enumeration is returned as a set of paged results. The pages are looped over until all pages are read. The following example shows how to create a list of all active actors in one partition of an actor service:
do
while (continuationToken != null);
```
+While the code above retrieves all the actors in a given partition, occasionally the need will arise to query the IDs of all actors (active or inactive) across each partition. Because it's quite a heavy operation, this should be done only in exceptional cases.
+
+The following example demonstrates how to query the partitions of the service and iterate through each one, in combination with the example above, to produce a list of all the active and inactive actors across the whole service:
+
+```csharp
+
+var serviceName = new Uri("fabric:/MyApp/MyService");
+
+//As the FabricClient is expensive to create, it should be shared as much as possible
+FabricClient fabricClient = new();
+
+//List each of the service's partitions
+ServicePartitionList partitions = await fabricClient.QueryManager.GetPartitionListAsync(serviceName);
+
+List<Guid> actorIds = new();
+
+//GetActorsAsync requires a CancellationToken; none of these calls are cancelled here
+CancellationToken cancellationToken = CancellationToken.None;
+
+foreach(var partition in partitions)
+{
+ //Retrieve the partition information
+ Int64RangePartitionInformation partitionInformation = (Int64RangePartitionInformation)partition.PartitionInformation; //Actors are restricted to the uniform Int64 scheme per https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-actors-introduction#distribution-and-failover
+ IActorService actorServiceProxy = ActorServiceProxy.Create(serviceName, partitionInformation.LowKey);
+
+ ContinuationToken? continuationToken = null;
+
+ do
+ {
+ var page = await actorServiceProxy.GetActorsAsync(continuationToken, cancellationToken);
+        actorIds.AddRange(page.Items.Select(actor => actor.ActorId.GetGuidId()));
+ continuationToken = page.ContinuationToken;
+ } while (continuationToken != null);
+}
+
+return actorIds;
+```
## Next steps
service-fabric Service Fabric Reliable Actors Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-events.md
Title: Events in actor-based Azure Service Fabric actors
description: Learn about events for Service Fabric Reliable Actors, an effective way to communicate between actor and client.
- Previously updated : 10/06/2017
+ Last updated : 07/11/2022
# Actor events
Actor events provide a way to send best-effort notifications from the actor to the clients. Actor events are designed for actor-to-client communication and shouldn't be used for actor-to-actor communication.
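A compact C# sketch of the pattern, with illustrative names (IGameEvents, GameScoreUpdated, the service URI): the event interface methods return void, and the client supplies a handler object when subscribing.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;

// Event interface: methods must return void.
public interface IGameEvents : IActorEvents
{
    void GameScoreUpdated(Guid gameId, string score);
}

// The actor interface declares that it publishes these events.
public interface IGameActor : IActor, IActorEventPublisher<IGameEvents>
{
    Task UpdateGameStatusAsync(string score);
}

// Client-side handler that receives the notifications.
public class GameEventsHandler : IGameEvents
{
    public void GameScoreUpdated(Guid gameId, string score) =>
        Console.WriteLine($"Game {gameId} score: {score}");
}

public static class GameClient
{
    public static async Task SubscribeAsync(Guid gameId)
    {
        // Service URI is a placeholder; delivery of events is best-effort.
        var proxy = ActorProxy.Create<IGameActor>(
            new ActorId(gameId), new Uri("fabric:/MyApp/GameActorService"));
        await proxy.SubscribeAsync<IGameEvents>(new GameEventsHandler());
    }
}
```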
service-fabric Service Fabric Reliable Actors Fabrictransportsettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-fabrictransportsettings.md
Title: Change FabricTransport settings
description: Learn about configuring Azure Service Fabric actor communication settings for different actor configurations.
- Previously updated : 04/20/2017
+ Last updated : 07/11/2022
# Configure FabricTransport settings for Reliable Actors
Here are the settings that you can configure:
If the client is not running as part of a service, you can create a "&lt;Client
</Section>
```
* Configuring FabricTransport Settings for Securing Actor Service/Client Using Subject Name.
- User needs to provide findType as FindBySubjectName,add CertificateIssuerThumbprints and CertificateRemoteCommonNames values.
+ User needs to provide findType as FindBySubjectName, add CertificateIssuerThumbprints and CertificateRemoteCommonNames values.
Below is an example of the Listener TransportSettings.
```xml
service-fabric Service Fabric Reliable Actors Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-get-started.md
Title: Create an actor-based service on Azure Service Fabric
description: Learn how to create, debug, and deploy your first actor-based service in C# using Service Fabric Reliable Actors.
- Previously updated : 07/10/2019
+ Last updated : 07/11/2022
# Getting started with Reliable Actors
> [!div class="op_single_selector"]
> * [C# on Windows](service-fabric-reliable-actors-get-started.md)
service-fabric Service Fabric Reliable Actors Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-introduction.md
Title: Service Fabric Reliable Actors Overview
description: Introduction to the Service Fabric Reliable Actors programming model, based on the Virtual Actor pattern.
- Previously updated : 11/01/2017
+ Last updated : 07/11/2022
# Introduction to Service Fabric Reliable Actors
Reliable Actors is a Service Fabric application framework based on the [Virtual Actor](https://research.microsoft.com/en-us/projects/orleans/) pattern. The Reliable Actors API provides a single-threaded programming model built on the scalability and reliability guarantees provided by Service Fabric.
Service Fabric actors are virtual, meaning that their lifetime is not tied to th
This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Reliable Actors implementation deviates at times from this model. * An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again, causes a new actor object to be constructed. An actor's state outlives the object's lifetime when stored in the state manager.
-* Calling any actor method for an actor ID activates that actor. For this reason, actor types have their constructor called implicitly by the runtime. Therefore, client code cannot pass parameters to the actor type's constructor, although parameters may be passed to the actor's constructor by the service itself. The result is that actors may be constructed in a partially-initialized state by the time other methods are called on it, if the actor requires initialization parameters from the client. There is no single entry point for the activation of an actor from the client.
+* Calling any actor method for an actor ID activates that actor. For this reason, actor types have their constructor called implicitly by the runtime. Therefore, client code cannot pass parameters to the actor type's constructor, although parameters may be passed to the actor's constructor by the service itself. The result is that actors may be constructed in a partially initialized state by the time other methods are called on it, if the actor requires initialization parameters from the client. There is no single entry point for the activation of an actor from the client.
* Although Reliable Actors implicitly create actor objects, you do have the ability to explicitly delete an actor and its state.
## Distribution and failover
The `ActorProxy`(C#) / `ActorProxyBase`(Java) class on the client side performs
The Reliable Actors runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
* A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.
-* Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The actor runtime will automatically time out on actor calls and throw an exception to the caller to interrupt possible deadlock situations.
+* Actors can deadlock each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The actor runtime will automatically time out on actor calls and throw an exception to the caller to interrupt possible deadlock situations.
![Reliable Actors communication][3]
service-fabric Service Fabric Reliable Actors Kvsactorstateprovider Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-kvsactorstateprovider-configuration.md
Title: Change KVSActorStateProvider settings
description: Learn about configuring Azure Service Fabric stateful actors of type KVSActorStateProvider.
- Previously updated : 10/2/2017
+ Last updated : 07/11/2022
# Configuring Reliable Actors--KVSActorStateProvider
You can modify the default configuration of KVSActorStateProvider by changing the settings.xml file that is generated in the Microsoft Visual Studio package root under the Config folder for the specified actor.
service-fabric Service Fabric Reliable Actors Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-lifecycle.md
Title: Overview of the Azure Service Fabric actor lifecycle
description: Explains Service Fabric Reliable Actor lifecycle, garbage collection, and manually deleting actors and their state
- Previously updated : 10/06/2017
+ Last updated : 07/11/2022
# Actor lifecycle, automatic garbage collection, and manual delete
An actor is activated the first time a call is made to any of its methods. An actor is deactivated (garbage collected by the Actors runtime) if it is not used for a configurable period of time. An actor and its state can also be deleted manually at any time.
service-fabric Service Fabric Reliable Actors Notes On Actor Type Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-notes-on-actor-type-serialization.md
Title: Reliable Actors notes on actor type serialization
description: Discusses basic requirements for defining serializable classes that can be used to define Service Fabric Reliable Actors states and interfaces
- Previously updated : 11/02/2017
+ Last updated : 07/11/2022
# Notes on Service Fabric Reliable Actors type serialization
The arguments of all methods, result types of the tasks returned by each method in an actor interface, and objects stored in an actor's state manager must be [data contract serializable](/dotnet/framework/wcf/feature-details/types-supported-by-the-data-contract-serializer). This also applies to the arguments of the methods defined in [actor event interfaces](service-fabric-reliable-actors-events.md). (Actor event interface methods always return void.)
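For instance, a type passed across an actor interface or stored in the state manager would be annotated like this minimal, illustrative sketch:

```csharp
using System.Runtime.Serialization;

// Illustrative type; anything crossing an actor interface or stored in the
// state manager needs to be data contract serializable like this.
[DataContract]
public class Voter
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public int Age { get; set; }
}
```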
service-fabric Service Fabric Reliable Actors Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-platform.md
Title: Reliable Actors on Service Fabric
description: Describes how Reliable Actors are layered on Reliable Services and use the features of the Service Fabric platform.
- Previously updated : 3/9/2018
+ Last updated : 07/11/2022
# How Reliable Actors use the Service Fabric platform
This article explains how Reliable Actors work on the Azure Service Fabric platform. Reliable Actors run in a framework that is hosted in an implementation of a stateful reliable service called the *actor service*. The actor service contains all the components necessary to manage the lifecycle and message dispatching for your actors:
service-fabric Service Fabric Reliable Actors Polymorphism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-polymorphism.md
Title: Polymorphism in the Reliable Actors framework
description: Build hierarchies of .NET interfaces and types in the Reliable Actors framework to reuse functionality and API definitions.
- Previously updated : 11/02/2017
+ Last updated : 07/11/2022
# Polymorphism in the Reliable Actors framework
The Reliable Actors framework allows you to build actors using many of the same techniques that you would use in object-oriented design. One of those techniques is polymorphism, which allows types and interfaces to inherit from more generalized parents. Inheritance in the Reliable Actors framework generally follows the .NET model with a few additional constraints. In case of Java/Linux, it follows the Java model.
service-fabric Service Fabric Reliable Actors Reentrancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-reentrancy.md
Title: Reentrancy in Azure Service Fabric actors
description: Introduction to reentrancy for Service Fabric Reliable Actors, a way to logically avoid blocking based on call context.
- Previously updated : 11/02/2017
+ Last updated : 07/11/2022
# Reliable Actors reentrancy
The Reliable Actors runtime, by default, allows logical call context-based reentrancy. This allows for actors to be reentrant if they are in the same call context chain. For example, Actor A sends a message to Actor B, who sends a message to Actor C. As part of the message processing, if Actor C calls Actor A, the message is reentrant, so it will be allowed. Any other messages that are part of a different call context will be blocked on Actor A until it finishes processing.
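Reentrancy can also be turned off when registering the actor service. A hedged sketch of that registration, assuming the ActorConcurrencySettings described in the Reliable Actors documentation (MyActor is an illustrative actor implementation):

```csharp
using Microsoft.ServiceFabric.Actors.Runtime;

internal static class Program
{
    private static void Main()
    {
        // MyActor is an illustrative actor implementation registered by this host.
        ActorRuntime.RegisterActorAsync<MyActor>((context, actorType) =>
            new ActorService(
                context,
                actorType,
                settings: new ActorServiceSettings
                {
                    ActorConcurrencySettings = new ActorConcurrencySettings
                    {
                        // Reject reentrant calls instead of allowing them through
                        // the same logical call context.
                        ReentrancyMode = ActorReentrancyMode.Disallowed
                    }
                })).GetAwaiter().GetResult();

        System.Threading.Thread.Sleep(System.Threading.Timeout.Infinite);
    }
}
```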
service-fabric Service Fabric Reliable Actors Reliabledictionarystateprovider Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-reliabledictionarystateprovider-configuration.md
Title: Change ReliableDictionaryActorStateProvider settings
description: Learn about configuring Azure Service Fabric stateful actors of type ReliableDictionaryActorStateProvider.
- Previously updated : 10/2/2017
+ Last updated : 07/11/2022
# Configuring Reliable Actors--ReliableDictionaryActorStateProvider
You can modify the default configuration of ReliableDictionaryActorStateProvider by changing the settings.xml file generated in the Visual Studio package root under the Config folder for the specified actor.
service-fabric Service Fabric Reliable Actors State Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-state-management.md
Title: Reliable Actors state management
description: Describes how Reliable Actors state is managed, persisted, and replicated for high availability.
- Previously updated : 11/02/2017
+ Last updated : 07/11/2022
# Reliable Actors state management
Reliable Actors are single-threaded objects that can encapsulate both logic and state. Because actors run on Reliable Services, they can maintain state reliably by using the same persistence and replication mechanisms. This way, actors don't lose their state after failures, upon reactivation after garbage collection, or when they are moved around between nodes in a cluster due to resource balancing or upgrades.
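The persistence behavior is chosen per actor type with the StatePersistence attribute; a minimal sketch (MyActor and IMyActor are illustrative names):

```csharp
// Persisted: state is written to disk and replicated to secondaries.
// Other options are StatePersistence.Volatile (in-memory, replicated)
// and StatePersistence.None.
[StatePersistence(StatePersistence.Persisted)]
class MyActor : Actor, IMyActor
{
    public MyActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId)
    {
    }
}
```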
service-fabric Service Fabric Reliable Actors Timers Reminders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-timers-reminders.md
Title: Reliable Actors timers and reminders
description: Introduction to timers and reminders for Service Fabric Reliable Actors, including guidance on when to use each.
- Previously updated : 11/02/2017
+ Last updated : 07/11/2022
# Actor timers and reminders
Actors can schedule periodic work on themselves by registering either timers or reminders. This article shows how to use timers and reminders and explains the differences between them.
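For orientation, a condensed sketch of both mechanisms inside one actor; the names VisualObjectActor, MoveObjectAsync, and "refresh" are illustrative, and the actor's own service interface is omitted. Timers die with deactivation, while reminders survive it, which is why the actor implements IRemindable:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

[StatePersistence(StatePersistence.Persisted)]
class VisualObjectActor : Actor, IRemindable
{
    private IActorTimer updateTimer;

    public VisualObjectActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    protected override async Task OnActivateAsync()
    {
        // Timer: fires every 15 seconds while the actor stays active; not persisted.
        this.updateTimer = this.RegisterTimer(
            this.MoveObjectAsync, null,
            TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(15));

        // Reminder: persisted by the runtime and delivered even after deactivation.
        await this.RegisterReminderAsync(
            "refresh", null, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(5));
    }

    public Task ReceiveReminderAsync(string reminderName, byte[] state, TimeSpan dueTime, TimeSpan period)
    {
        // Invoked when the "refresh" reminder fires.
        return Task.CompletedTask;
    }

    private Task MoveObjectAsync(object state)
    {
        // Periodic work driven by the timer.
        return Task.CompletedTask;
    }
}
```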
service-fabric Service Fabric Reliable Actors Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-using.md
Title: Implement features in Azure Service Fabric actors
description: Describes how to write your own actor service that implements service-level features in the same way as when you inherit StatefulService.
- Previously updated : 03/19/2018
+ Last updated : 07/11/2022
# Implement service-level features in your actor service
As described in [service layering](service-fabric-reliable-actors-platform.md#service-layering), the actor service itself is a reliable service. You can write your own service that derives from `ActorService`. You also can implement service-level features in the same way as when you inherit a stateful service, such as:
service-fabric Service Fabric Reliable Serviceremoting Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-serviceremoting-diagnostics.md
Title: Azure ServiceFabric diagnostics and monitoring
description: This article describes the performance monitoring features in the Service Fabric Reliable ServiceRemoting runtime, like performance counters emitted by it.
- Previously updated : 06/29/2017
+ Last updated : 07/11/2022
# Diagnostics and performance monitoring for Reliable Service Remoting
The Reliable ServiceRemoting runtime emits [performance counters](/dotnet/api/system.diagnostics.performancecounter). These provide insights into how the ServiceRemoting is operating and help with troubleshooting and performance monitoring.
## Performance counters
The Reliable ServiceRemoting runtime defines the following performance counter categories:
service-fabric Service Fabric Reliable Services Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-backup-restore.md
Title: Service Fabric Backup and Restore
description: Conceptual documentation for Service Fabric Backup and Restore, a service for configuring backup of Reliable Stateful services and Reliable Actors.
- Previously updated : 10/29/2018
+ Last updated : 07/11/2022
# Backup and restore Reliable Services and Reliable Actors
Azure Service Fabric is a high-availability platform that replicates the state across multiple nodes to maintain this high availability. Thus, even if one node in the cluster fails, the services continue to be available. While this in-built redundancy provided by the platform may be sufficient for some, in certain cases it is desirable for the service to back up data (to an external store).
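As a rough sketch of triggering a backup from inside a stateful service (the method and callback names are illustrative, and the upload logic is elided):

```csharp
// Inside a class deriving from StatefulService.
// Types used here (BackupDescription, BackupOption, BackupInfo) live in
// the Microsoft.ServiceFabric.Data namespace.
public async Task BeginBackupAsync()
{
    // Full backup; the callback runs once the local backup folder is ready.
    var description = new BackupDescription(BackupOption.Full, this.BackupCallbackAsync);
    await this.BackupAsync(description);
}

private async Task<bool> BackupCallbackAsync(BackupInfo backupInfo, CancellationToken cancellationToken)
{
    // backupInfo.Directory points at the local backup; copy it to external
    // storage here (elided). Return true to acknowledge a successful backup.
    await Task.CompletedTask;
    return true;
}
```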
service-fabric Service Fabric Reliable Services Communication Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-aspnetcore.md
Title: Service communication with ASP.NET Core
description: Learn how to use ASP.NET Core in stateless and stateful Azure Service Fabric Reliable Services applications.
- Previously updated : 10/12/2018
+ Last updated : 07/11/2022
# ASP.NET Core in Azure Service Fabric Reliable Services
service-fabric Service Fabric Reliable Services Communication Remoting Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-remoting-java.md
Title: Service remoting using Java in Azure Service Fabric
description: Service Fabric remoting allows clients and services to communicate with Java services by using a remote procedure call.
- Previously updated : 06/30/2017
+ Last updated : 07/11/2022
# Service remoting in Java with Reliable Services
> [!div class="op_single_selector"]
> * [C# on Windows](service-fabric-reliable-services-communication-remoting.md)
service-fabric Service Fabric Reliable Services Communication Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-remoting.md
Title: Service remoting by using C# in Service Fabric
description: Service Fabric remoting allows clients and services to communicate with C# services by using a remote procedure call.
- Previously updated : 06/03/2022
+ Last updated : 07/11/2022
# Service remoting in C# with Reliable Services
> [!div class="op_single_selector"]
service-fabric Service Fabric Reliable Services Communication Wcf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication-wcf.md
Title: Reliable Services WCF communication stack
description: The built-in WCF communication stack in Service Fabric provides client-service WCF communication for Reliable Services.
- Previously updated : 06/07/2017
+ Last updated : 07/11/2022
# WCF-based communication stack for Reliable Services
The Reliable Services framework allows service authors to choose the communication stack that they want to use for their service. They can plug in the communication stack of their choice via the **ICommunicationListener** returned from the [CreateServiceReplicaListeners or CreateServiceInstanceListeners](service-fabric-reliable-services-communication.md) methods. The framework provides an implementation of the communication stack based on the Windows Communication Foundation (WCF) for service authors who want to use WCF-based communication.
service-fabric Service Fabric Reliable Services Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-communication.md
Title: Reliable Services communication overview
description: Overview of the Reliable Services communication model, including opening listeners on services, resolving endpoints, and communicating between services.
- Previously updated : 11/01/2017
+ Last updated : 07/11/2022
# How to use the Reliable Services communication APIs
Azure Service Fabric as a platform is completely agnostic about communication between services. All protocols and stacks are acceptable, from UDP to HTTP. It's up to the service developer to choose how services should communicate. The Reliable Services application framework provides built-in communication stacks as well as APIs that you can use to build your custom communication components.
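To make that concrete, here is a hedged C# sketch of one built-in stack, service remoting: the service exposes an interface deriving from IService and the client calls it through ServiceProxy. The interface, method, and URI are illustrative names:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Shared interface; deriving from IService marks it as remotable.
public interface ICalculatorService : IService
{
    Task<int> AddAsync(int a, int b);
}

// On the service side, the service would return the remoting listener from
// CreateServiceInstanceListeners()/CreateServiceReplicaListeners(), for example
// via this.CreateServiceRemotingInstanceListeners().

public static class CalculatorClient
{
    public static async Task<int> AddAsync(int a, int b)
    {
        // Resolve the service by its URI (placeholder) and call it like a local object.
        ICalculatorService calculator = ServiceProxy.Create<ICalculatorService>(
            new Uri("fabric:/MyApp/CalculatorService"));
        return await calculator.AddAsync(a, b);
    }
}
```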
service-fabric Service Fabric Reliable Services Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-configuration.md
Title: Configure Azure Service Fabric Reliable Services
description: Learn about configuring stateful Reliable Services in an Azure Service Fabric application globally and for a single service.
- Previously updated : 10/02/2017
+ Last updated : 07/11/2022
# Configure stateful reliable services
There are two sets of configuration settings for reliable services. One set is global for all reliable services in the cluster while the other set is specific to a particular reliable service.
service-fabric Service Fabric Reliable Services Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-diagnostics.md
Title: Azure Service Fabric Stateful Reliable Services diagnostics
description: Diagnostic functionality for Stateful Reliable Services in Azure Service Fabric
- Previously updated : 8/24/2018
+ Last updated : 07/11/2022
# Diagnostic functionality for Stateful Reliable Services
The Azure Service Fabric Stateful Reliable Services StatefulServiceBase class emits [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) events that can be used to debug the service, provide insights into how the runtime is operating, and help with troubleshooting.
service-fabric Service Fabric Reliable Services Exception Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-exception-serialization.md
Title: Enable data contract serialization for remoting exceptions in Service Fabric
description: Enable data contract serialization for remoting exceptions in Azure Service Fabric.
- Previously updated : 03/30/2022
+ Last updated : 07/11/2022
# Remoting exception serialization overview
BinaryFormatter-based serialization isn't secure, so don't use BinaryFormatter for data processing. For more information on the security implications, see [Deserialization risks in the use of BinaryFormatter and related types](/dotnet/standard/serialization/binaryformatter-security-guide).
-Azure Service Fabric used BinaryFormatter for serializing exceptions. Starting with ServiceFabric v9.0, [data contract-based serialization](/dotnet/api/system.runtime.serialization.datacontractserializer?view=net-6.0) for remoting exceptions is available as an opt-in feature. We recommend that you opt for DataContract remoting exception serialization by following the steps in this article.
+Azure Service Fabric used BinaryFormatter for serializing exceptions. Starting with ServiceFabric v9.0, [data contract-based serialization](/dotnet/api/system.runtime.serialization.datacontractserializer) for remoting exceptions is available as an opt-in feature. We recommend that you opt for DataContract remoting exception serialization by following the steps in this article.
Support for BinaryFormatter-based remoting exception serialization will be deprecated in the future.
service-fabric Service Fabric Reliable Services Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-introduction.md
Title: Overview of the Reliable Service programming model
description: Learn about Service Fabric's Reliable Service programming model, and get started writing your own services.
- Previously updated : 3/9/2018
+ Last updated : 07/11/2022
# Reliable Services overview
Azure Service Fabric simplifies writing and managing stateless and stateful services. This topic covers:
service-fabric Service Fabric Reliable Services Lifecycle Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-lifecycle-java.md
Title: Azure Service Fabric Reliable Services lifecycle
description: Learn about the lifecycle events in an Azure Service Fabric Reliable Services application using Java for stateful and stateless services.
- Previously updated : 06/30/2017
+ Last updated : 07/11/2022
# Reliable Services lifecycle
service-fabric Service Fabric Reliable Services Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-lifecycle.md
Title: Overview of the lifecycle of Reliable Services
description: Learn about the lifecycle events in an Azure Service Fabric Reliable Services application for stateful and stateless services.
- Previously updated : 08/18/2017
+ Last updated : 07/11/2022
# Reliable Services lifecycle overview
service-fabric Service Fabric Reliable Services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-notifications.md
Title: Reliable Services notifications
description: Conceptual documentation for Service Fabric Reliable Services notifications for Reliable State Manager and Reliable Dictionary
- Previously updated : 6/29/2017
+ Last updated : 07/11/2022
# Reliable Services notifications
Notifications allow clients to track the changes that are being made to an object that they're interested in. Two types of objects support notifications: *Reliable State Manager* and *Reliable Dictionary*.
Common reasons for using notifications are:
* Building materialized views, such as secondary indexes or aggregated filtered views of the replica's state. An example is a sorted index of all keys in Reliable Dictionary.
* Sending monitoring data, such as the number of users added in the last hour.
-Notifications are fired as part of applying operations.
-Because of that, notifications should be handled as fast as possible, and synchronous events shouldn't include any expensive operations.
+Notifications are fired as a part of applying operations. On a primary replica, operations are applied after quorum acknowledgment as a part of `transaction.CommitAsync()` or `this.StateManager.GetOrAddAsync()`. On secondary replicas, operations are applied at replication queue data processing. Because of that, notifications should be handled as fast as possible, and synchronous events shouldn't include any expensive operations. Otherwise, it could negatively impact transaction processing time as well as replica build-ups.
## Reliable State Manager notifications
Reliable State Manager provides notifications for the following events:
Here are some things to keep in mind:
* Because notifications are fired as part of applying operations, clients see only notifications for locally committed operations. And because operations are guaranteed only to be locally committed (in other words, logged), they might or might not be undone in the future.
* On the redo path, a single notification is fired for each applied operation. This means that if transaction T1 includes Create(X), Delete(X), and Create(X), you'll get one notification for the creation of X, one for the deletion, and one for the creation again, in that order.
* For transactions that contain multiple operations, operations are applied in the order in which they were received on the primary replica from the user.
-* As part of processing false progress, some operations might be undone. Notifications are raised for such undo operations, rolling the state of the replica back to a stable point. One important difference of undo notifications is that events that have duplicate keys are aggregated. For example, if transaction T1 is being undone, you'll see a single notification to Delete(X).
+* As part of processing false progress, some operations might be undone on secondary replicas. Notifications are raised for such undo operations, rolling the state of the replica back to a stable point. One important difference of undo notifications is that events that have duplicate keys are aggregated. For example, if transaction T1 is being undone, you'll see a single notification to Delete(X).
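To receive these events, handlers are wired up in the service, typically in the constructor so no notification is missed. A hedged sketch using the Reliable State Manager events (MyService is an illustrative name; the event types live in Microsoft.ServiceFabric.Data.Notifications):

```csharp
// Inside a class deriving from StatefulService.
public MyService(StatefulServiceContext context) : base(context)
{
    // Register early (in the constructor) so no notifications are missed.
    this.StateManager.TransactionChanged += this.OnTransactionChanged;
    this.StateManager.StateManagerChanged += this.OnStateManagerChanged;
}

private void OnTransactionChanged(object sender, NotifyTransactionChangedEventArgs e)
{
    if (e.Action == NotifyTransactionChangedAction.Commit)
    {
        // Keep this fast: it runs on the apply path (e.g., update a materialized view).
    }
}

private void OnStateManagerChanged(object sender, NotifyStateManagerChangedEventArgs e)
{
    // Fired when reliable state (e.g., a dictionary) is added, removed, or rebuilt.
}
```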
## Next steps
* [Reliable Collections](service-fabric-work-with-reliable-collections.md)
service-fabric Service Fabric Reliable Services Quick Start Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-quick-start-java.md
Title: Create your first reliable service in Java
description: Introduction to creating a Microsoft Azure Service Fabric application with stateless and stateful services in Java.
- Previously updated : 11/02/2017
+ Last updated : 07/11/2022
# Get started with Reliable Services in Java
> [!div class="op_single_selector"]
> * [C# on Windows](service-fabric-reliable-services-quick-start.md)
service-fabric Service Fabric Reliable Services Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-quick-start.md
Title: Create your first Service Fabric application in C#
description: Introduction to creating a Microsoft Azure Service Fabric application with stateless and stateful services.
- Previously updated : 07/10/2019
+ Last updated : 07/11/2022
# Get started with Reliable Services
> [!div class="op_single_selector"]
service-fabric Service Fabric Reliable Services Reliable Collections Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines.md
Title: Guidelines for Reliable Collections
description: Guidelines and Recommendations for using Service Fabric Reliable Collections in an Azure Service Fabric application.
- Previously updated : 03/10/2020
+ Last updated : 07/11/2022
# Guidelines and recommendations for Reliable Collections in Azure Service Fabric
This section provides guidelines for using Reliable State Manager and Reliable Collections. The goal is to help users avoid common pitfalls.
service-fabric Service Fabric Reliable Services Reliable Collections Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-serialization.md
Title: Reliable Collection object serialization
description: Learn about Azure Service Fabric Reliable Collections object serialization, including the default strategy and how to define custom serialization.
- Previously updated : 5/8/2017
+ Last updated : 07/11/2022
# Reliable Collection object serialization in Azure Service Fabric
Reliable Collections replicate and persist their items to make sure they are durable across machine failures and power outages. Both to replicate and to persist items, Reliable Collections need to serialize them.
service-fabric Service Fabric Reliable Services Reliable Collections Transactions Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-transactions-locks.md
Title: Transactions And Lock Modes in Reliable Collections
description: Azure Service Fabric Reliable State Manager and Reliable Collections Transactions and Locking.
- Previously updated : 5/1/2017
+ Last updated : 07/11/2022
# Transactions and lock modes in Azure Service Fabric Reliable Collections
## Transaction
service-fabric Service Fabric Reliable Services Reliable Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections.md
Title: Introduction to Reliable Collections
description: Service Fabric stateful services provide reliable collections that enable you to write highly available, scalable, and low-latency cloud applications.
- Previously updated : 3/10/2020
+ Last updated : 07/11/2022
# Introduction to Reliable Collections in Azure Service Fabric stateful services
Reliable Collections enable you to write highly available, scalable, and low-latency cloud applications as though you were writing single computer applications. The classes in the **Microsoft.ServiceFabric.Data.Collections** namespace provide a set of collections that automatically make your state highly available. Developers need to program only to the Reliable Collection APIs and let Reliable Collections manage the replicated and local state.
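A minimal sketch of that programming model from inside a stateful service; the collection name "counts" and key "total" are illustrative:

```csharp
// Inside a class deriving from StatefulService.
// Uses Microsoft.ServiceFabric.Data and Microsoft.ServiceFabric.Data.Collections.
public async Task IncrementTotalAsync()
{
    var counts = await this.StateManager
        .GetOrAddAsync<IReliableDictionary<string, long>>("counts");

    using (ITransaction tx = this.StateManager.CreateTransaction())
    {
        // Atomically create or increment the entry within the transaction.
        await counts.AddOrUpdateAsync(tx, "total", 1, (key, oldValue) => oldValue + 1);

        // Nothing is durable or visible to other transactions until the commit succeeds.
        await tx.CommitAsync();
    }
}
```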
service-fabric Service Fabric Reliable Services Reliable Concurrent Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-concurrent-queue.md
Title: ReliableConcurrentQueue in Azure Service Fabric
description: ReliableConcurrentQueue is a high-throughput queue that allows parallel enqueues and dequeues.
- Previously updated : 5/1/2017
+ Last updated : 07/11/2022
# Introduction to ReliableConcurrentQueue in Azure Service Fabric
Reliable Concurrent Queue is an asynchronous, transactional, and replicated queue which features high concurrency for enqueue and dequeue operations. It is designed to deliver high throughput and low latency by relaxing the strict FIFO ordering provided by [Reliable Queue](/dotnet/api/microsoft.servicefabric.data.collections.ireliablequeue-1#microsoft_servicefabric_data_collections_ireliablequeue_1) and instead provides a best-effort ordering.
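In rough outline, usage mirrors the other reliable collections; a sketch with an illustrative queue name:

```csharp
// Inside a class deriving from StatefulService.
public async Task ProcessOneItemAsync()
{
    var queue = await this.StateManager
        .GetOrAddAsync<IReliableConcurrentQueue<string>>("workItems");

    // Enqueue under one transaction.
    using (ITransaction tx = this.StateManager.CreateTransaction())
    {
        await queue.EnqueueAsync(tx, "job-1");
        await tx.CommitAsync();
    }

    // Dequeue under another; TryDequeueAsync returns a ConditionalValue.
    using (ITransaction tx = this.StateManager.CreateTransaction())
    {
        ConditionalValue<string> item = await queue.TryDequeueAsync(tx);
        if (item.HasValue)
        {
            // Process item.Value; aborting the transaction would make it visible again.
        }
        await tx.CommitAsync();
    }
}
```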
service-fabric Service Fabric Reliable Services Secure Communication Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-secure-communication-java.md
Title: Secure service remoting communications with Java
description: Learn how to secure service remoting based communication for Java reliable services that are running in an Azure Service Fabric cluster.
Previously updated: 06/30/2017
Last updated: 07/11/2022

# Secure service remoting communications in a Java service

> [!div class="op_single_selector"]
> * [C# on Windows](service-fabric-reliable-services-secure-communication.md)
service-fabric Service Fabric Reliable Services Secure Communication Wcf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-secure-communication-wcf.md
Title: Secure WCF-based service communication
description: Learn how to secure WCF-based communications for reliable services that are running in an Azure Service Fabric cluster.
Previously updated: 04/20/2017
Last updated: 07/11/2022

# Secure WCF-based communications for a service

Security is one of the most important aspects of communication. The Reliable Services application framework provides a few prebuilt communication stacks and tools that you can use to improve security. This article talks about how to improve security when you're using a WCF-based communication stack.
service-fabric Service Fabric Reliable Services Secure Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-secure-communication.md
Title: Secure service remoting communications with C#
description: Learn how to secure service remoting based communication for C# reliable services that are running in an Azure Service Fabric cluster.
Previously updated: 04/20/2017
Last updated: 07/11/2022

# Secure service remoting communications in a C# service

> [!div class="op_single_selector"]
> * [C# on Windows](service-fabric-reliable-services-secure-communication.md)
service-fabric Service Fabric Report Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-report-health.md
Title: Add custom Service Fabric health reports
description: Describes how to send custom health reports to Azure Service Fabric health entities. Gives recommendations for designing and implementing quality health reports.
Previously updated: 2/28/2018
Last updated: 07/11/2022

# Add custom Service Fabric health reports

Azure Service Fabric introduces a [health model](service-fabric-health-introduction.md) designed to flag unhealthy cluster and application conditions on specific entities. The health model uses **health reporters** (system components and watchdogs). The goal is easy and fast diagnosis and repair. Service writers need to think upfront about health. Any condition that can impact health should be reported on, especially if it can help flag problems close to the root. The health information can save time and effort on debugging and investigation. The usefulness is especially clear once the service is up and running at scale in the cloud (private or Azure).
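A watchdog report can be as small as the sketch below, which sends a warning against a node entity; the source ID, property, node name, and description are illustrative assumptions.

```csharp
using System;
using System.Fabric;
using System.Fabric.Health;

// Sketch of a custom watchdog report; names are illustrative assumptions.
var fabricClient = new FabricClient();

var healthInformation = new HealthInformation(
    "MyWatchdog",          // sourceId: who is reporting
    "DiskCapacity",        // property: which condition is reported
    HealthState.Warning)   // severity of the condition
{
    Description = "Disk usage is above 80%.",
    TimeToLive = TimeSpan.FromMinutes(5),
    RemoveWhenExpired = true  // expired reports disappear instead of turning into errors
};

// Report against a node entity; other report types target applications,
// services, partitions, and replicas.
fabricClient.HealthManager.ReportHealth(
    new NodeHealthReport("_Node_0", healthInformation));
```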
service-fabric Service Fabric Resource Governance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-resource-governance.md
Title: Resource governance for containers and services
description: Azure Service Fabric allows you to specify resource requests and limits for services running as processes or containers.
Previously updated: 8/9/2017
Last updated: 07/11/2022

# Resource governance
service-fabric Service Fabric Reverse Proxy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reverse-proxy-diagnostics.md
Title: Azure Service Fabric reverse proxy diagnostics
description: Learn how to monitor and diagnose request processing at the reverse proxy for an Azure Service Fabric application.
Previously updated: 08/08/2017
Last updated: 07/11/2022

# Monitor and diagnose request processing at the reverse proxy

Starting with the 5.7 release of Service Fabric, reverse proxy events are available for collection.
service-fabric Service Fabric Reverseproxy Configure Secure Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reverseproxy-configure-secure-communication.md
Title: Azure Service Fabric reverse proxy secure communication
description: Configure reverse proxy to enable secure end-to-end communication in an Azure Service Fabric application.
Previously updated: 08/10/2017
Last updated: 07/11/2022

# Connect to a secure service with the reverse proxy

This article explains how to establish a secure connection between the reverse proxy and services, thus enabling an end-to-end secure channel. To learn more about reverse proxy, see [Reverse proxy in Azure Service Fabric](service-fabric-reverseproxy.md).
service-fabric Service Fabric Reverseproxy Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reverseproxy-setup.md
Title: Azure Service Fabric set up reverse proxy
description: Understand how to set up and configure the reverse proxy service for an Azure Service Fabric application.
Previously updated: 11/13/2018
Last updated: 07/11/2022

# Set up and configure reverse proxy in Azure Service Fabric

Reverse proxy is an optional Azure Service Fabric service that helps microservices running in a Service Fabric cluster discover and communicate with other services that have HTTP endpoints. To learn more, see [Reverse proxy in Azure Service Fabric](service-fabric-reverseproxy.md). This article shows you how to set up and configure reverse proxy in your cluster.
service-fabric Service Fabric Reverseproxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reverseproxy.md
Title: Azure Service Fabric reverse proxy
description: Use Service Fabric's reverse proxy for communication to microservices from inside and outside the cluster.
Previously updated: 11/03/2017
Last updated: 07/11/2022

# Reverse proxy in Azure Service Fabric

The reverse proxy built into Azure Service Fabric helps microservices running in a Service Fabric cluster discover and communicate with other services that have HTTP endpoints.
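Calls through the reverse proxy are plain HTTP requests against a well-known URI format; the sketch below assumes the default reverse proxy port 19081 and hypothetical application and service names.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Sketch: calling another service through the reverse proxy, which listens
// on port 19081 by default. The application name "MyApp", service name
// "MyService", and suffix path "api/values" are illustrative assumptions.
public static async Task<string> CallThroughReverseProxyAsync()
{
    using (var client = new HttpClient())
    {
        // Partitioned services additionally take query parameters such as
        // ?PartitionKey=...&PartitionKind=Int64Range on the same URI.
        return await client.GetStringAsync(
            "http://localhost:19081/MyApp/MyService/api/values");
    }
}
```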
service-fabric Service Fabric Run Script At Service Startup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-run-script-at-service-startup.md
Title: Run a script when an Azure Service Fabric service starts
description: Learn how to configure a policy for a Service Fabric service setup entry point and run a script at service startup time.
Previously updated: 05/19/2022
Last updated: 07/11/2022

# Run a service startup script as a local user or system account

Before a Service Fabric service executable starts up, it may be necessary to run some configuration or setup work, such as configuring environment variables. You can specify a script to run before the service executable starts up in the service manifest for the service. By configuring a RunAs policy for the service setup entry point, you can change which account the setup executable runs under. A separate setup entry point allows you to run high-privileged configuration for a short period of time, so the service host executable doesn't need to run with high privileges for extended periods of time.
service-fabric Service Fabric Run Service As Ad User Or Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-run-service-as-ad-user-or-group.md
Title: Run an Azure Service Fabric service as an AD user or group
description: Learn how to run a service as an Active Directory user or group on a Service Fabric Windows standalone cluster.
Previously updated: 03/29/2018
Last updated: 07/11/2022

# Run a service as an Active Directory user or group

On a Windows Server standalone cluster, you can run a service as an Active Directory user or group using a RunAs policy. By default, Service Fabric applications run under the account that the Fabric.exe process runs under. Running applications under different accounts, even in a shared hosted environment, makes them more secure from one another. Note that this uses Active Directory on-premises within your domain and not Azure Active Directory (Azure AD). You can also run a service as a [group Managed Service Account (gMSA)](service-fabric-run-service-as-gmsa.md).
service-fabric Service Fabric Run Service As Gmsa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-run-service-as-gmsa.md
Title: Run an Azure Service Fabric service under a gMSA account
description: Learn how to run a service as a group-Managed Service Account (gMSA) on a Service Fabric Windows standalone cluster.
Previously updated: 03/29/2018
Last updated: 07/11/2022

# Run a service as a group Managed Service Account

On a Windows Server standalone cluster, you can run a service as a *group managed service account* (gMSA) using a *RunAs* policy. By default, Service Fabric applications run under the account that the `Fabric.exe` process runs under. Running applications under different accounts, even in a shared hosted environment, makes them more secure from one another. By using a gMSA, there is no password or encrypted password stored in the application manifest. You can also run a service as an [Active Directory user or group](service-fabric-run-service-as-ad-user-or-group.md).
service-fabric Service Fabric Scale Up Primary Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-scale-up-primary-node-type.md
Title: Scale up an Azure Service Fabric primary node type
description: Vertically scale your Service Fabric cluster by adding a new node type and removing the previous one.
Previously updated: 12/11/2020
Last updated: 07/11/2022

# Scale up a Service Fabric cluster primary node type

This article describes how to scale up a Service Fabric cluster primary node type with minimal downtime. In-place SKU upgrades are not supported on Service Fabric cluster nodes, as such operations potentially involve data and availability loss. The safest, most reliable, and recommended method for scaling up a Service Fabric node type is to:
service-fabric Service Fabric Securing Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-securing-containers.md
Title: Import certificates into a container
description: Learn how to import certificate files into a Service Fabric container service.
Previously updated: 2/23/2018
Last updated: 07/11/2022

# Import a certificate file into a container running on Service Fabric
service-fabric Service Fabric Service Manifest Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-manifest-resources.md
Title: Specifying Service Fabric service endpoints
description: How to describe endpoint resources in a service manifest, including how to set up HTTPS endpoints.
Previously updated: 09/16/2020
Last updated: 07/11/2022

# Specify resources in a service manifest

## Overview

Service Fabric applications and services are defined and versioned using manifest files. For a higher-level overview of ServiceManifest.xml and ApplicationManifest.xml, see [Service Fabric application and service manifests](service-fabric-application-and-service-manifests.md).
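At run time, a service typically resolves the endpoint declared in its manifest instead of hard-coding a port. A minimal sketch, assuming an endpoint resource named `ServiceEndpoint`:

```csharp
using System.Fabric;
using System.Fabric.Description;

// Sketch: reading an endpoint resource declared in ServiceManifest.xml at
// run time. The endpoint name "ServiceEndpoint" is an illustrative assumption.
CodePackageActivationContext activationContext = FabricRuntime.GetActivationContext();
EndpointResourceDescription endpoint = activationContext.GetEndpoint("ServiceEndpoint");

int port = endpoint.Port;                      // port assigned from the manifest
EndpointProtocol protocol = endpoint.Protocol; // Http, Https, or Tcp
```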
service-fabric Service Fabric Service Model Schema Attribute Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-model-schema-attribute-groups.md
Title: Service model XML schema attribute groups
description: Describes the attribute groups in the XML schema of the Service Fabric service model.
Previously updated: 12/10/2018
Last updated: 07/11/2022

<!-- This article was generated by the Python script found in the service-fabric-service-model-schema.md file -->
service-fabric Service Fabric Service Model Schema Complex Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-model-schema-complex-types.md
Title: Azure Service Fabric service model XML schema complex types
description: Describes the complex types in the XML schema of the Service Fabric service model.
Previously updated: 12/10/2018
Last updated: 07/11/2022

<!-- This article was generated by the Python script found in the service-fabric-service-model-schema.md file -->
service-fabric Service Fabric Service Model Schema Element Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-model-schema-element-groups.md
Title: Azure Service Fabric service model XML schema element groups
description: Describes the element groups in the XML schema of the Service Fabric service model.
Previously updated: 12/10/2018
Last updated: 07/11/2022

<!-- This article was generated by the Python script found in the service-fabric-service-model-schema.md file -->
service-fabric Service Fabric Service Model Schema Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-model-schema-elements.md
Title: Azure Service Fabric service model XML schema elements
description: Describes the elements in the XML schema of the Service Fabric service model.
Previously updated: 12/10/2018
Last updated: 07/11/2022

<!-- This article was generated by the Python script found in the service-fabric-service-model-schema.md file -->
service-fabric Service Fabric Service Model Schema Simple Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-model-schema-simple-types.md
Title: Azure Service Fabric service model XML schema simple types
description: Describes the simple types in the XML schema of the Service Fabric service model.
Previously updated: 12/10/2018
Last updated: 07/11/2022

<!-- This article was generated by the Python script found in the service-fabric-service-model-schema.md file -->
service-fabric Service Fabric Service Model Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-service-model-schema.md
Title: Azure Service Fabric service model XML schema descriptions
description: A description of the XML schema of the Service Fabric service model, including location and a summary of components.
Previously updated: 12/10/2018
Last updated: 07/11/2022

<!-- The schema reference articles were generated by the Python script found at the end of this file -->
service-fabric Service Fabric Services Inside Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-services-inside-containers.md
Title: Containerize your Azure Service Fabric services on Windows
description: Learn how to containerize your Service Fabric Reliable Services and Reliable Actors services on Windows.
Previously updated: 5/23/2018
Last updated: 07/11/2022

# Containerize your Service Fabric Reliable Services and Reliable Actors on Windows

Service Fabric supports containerizing Service Fabric microservices (Reliable Services and Reliable Actor-based services). For more information, see [Service Fabric containers](service-fabric-containers-overview.md).
service-fabric Service Fabric Setup Gmsa For Windows Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-setup-gmsa-for-windows-containers.md
Title: Set up gMSA for Azure Service Fabric container services
description: Learn how to set up group Managed Service Accounts (gMSA) for a container running in Azure Service Fabric.
Previously updated: 03/20/2019
Last updated: 07/11/2022

# Set up gMSA for Windows containers running on Service Fabric
service-fabric Service Fabric Sfctl Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-application.md
Title: Azure Service Fabric CLI - sfctl application
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing applications.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl application

Create, delete, and manage applications and application types.
service-fabric Service Fabric Sfctl Chaos Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-chaos-schedule.md
Title: Azure Service Fabric CLI - sfctl chaos schedule
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for chaos scheduling.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl chaos schedule
service-fabric Service Fabric Sfctl Chaos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-chaos.md
Title: Azure Service Fabric CLI - sfctl chaos
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing chaos.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl chaos

Start, stop, and report on the chaos test service.
service-fabric Service Fabric Sfctl Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-cluster.md
Title: Azure Service Fabric CLI - sfctl cluster
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing clusters.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl cluster
service-fabric Service Fabric Sfctl Compose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-compose.md
Title: Azure Service Fabric CLI - sfctl compose
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for Docker Compose applications.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl compose
service-fabric Service Fabric Sfctl Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-container.md
Title: Azure Service Fabric CLI - sfctl container
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for containers.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl container
service-fabric Service Fabric Sfctl Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-events.md
Title: Azure Service Fabric CLI - sfctl events
description: Describes the Service Fabric CLI sfctl events commands.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl events
service-fabric Service Fabric Sfctl Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-is.md
Title: Azure Service Fabric CLI - sfctl is
description: Learn about sfctl, the Azure Service Fabric command-line interface. Includes a list of commands for managing infrastructure.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl is
service-fabric Service Fabric Sfctl Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-node.md
Title: Azure Service Fabric CLI - sfctl node
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing cluster nodes.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl node
service-fabric Service Fabric Sfctl Partition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-partition.md
Title: Azure Service Fabric CLI - sfctl partition
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing partitions for a service.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl partition
service-fabric Service Fabric Sfctl Property https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-property.md
Title: Azure Service Fabric CLI - sfctl property
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for storing and querying properties.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl property
service-fabric Service Fabric Sfctl Replica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-replica.md
Title: Azure Service Fabric CLI - sfctl replica
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing replicas.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl replica
service-fabric Service Fabric Sfctl Rpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-rpm.md
Title: Azure Service Fabric CLI - sfctl rpm
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for the repair manager service.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl rpm
service-fabric Service Fabric Sfctl Sa Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-sa-cluster.md
Title: Azure Service Fabric CLI - sfctl sa-cluster
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing standalone clusters.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl sa-cluster
service-fabric Service Fabric Sfctl Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-service.md
Title: Azure Service Fabric CLI - sfctl service
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for managing services, service types, and service packages.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl service
service-fabric Service Fabric Sfctl Settings Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-settings-telemetry.md
Title: Azure Service Fabric CLI - sfctl settings telemetry
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for configuring sfctl telemetry.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl settings telemetry
service-fabric Service Fabric Sfctl Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-settings.md
Title: Azure Service Fabric CLI - sfctl settings
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for configuring local sfctl settings.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl settings
service-fabric Service Fabric Sfctl Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-store.md
Title: Azure Service Fabric CLI - sfctl store
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands for performing file level operations on the cluster image store.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl store
service-fabric Service Fabric Sfctl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl.md
Title: Azure Service Fabric CLI - sfctl
description: Learn about sfctl, the Azure Service Fabric command line interface. Includes a list of commands and subgroups.
Previously updated: 1/16/2020
Last updated: 07/11/2022

# sfctl
service-fabric Service Fabric Standalone Clusters Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-standalone-clusters-overview.md
Title: Standalone Service Fabric clusters overview
description: Service Fabric clusters run on Windows Server and Linux, which means you'll be able to deploy and host Service Fabric applications anywhere you can run Windows Server or Linux.
Previously updated: 02/01/2019
Last updated: 07/11/2022

# Overview of Service Fabric Standalone clusters
service-fabric Service Fabric Startupservices Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-startupservices-model.md
Title: Define Service Configuration in StartupServices.xml for a Service Fabric Application
description: Learn how to use StartupServices.xml to separate service level configuration from ApplicationManifest.xml.
Previously updated: 05/05/2021
Last updated: 07/11/2022

# Introducing StartupServices.xml in Service Fabric Application

This feature introduces the StartupServices.xml file into the design of a Service Fabric application. This file hosts the DefaultServices section of ApplicationManifest.xml. With this implementation, DefaultServices and service definition-related parameters are moved from the existing ApplicationManifest.xml to this new file called StartupServices.xml. This file is used by each function (Build/Rebuild/F5/Ctrl+F5/Publish) in Visual Studio.
service-fabric Service Fabric Stateless Node Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-stateless-node-types.md
Title: Deploy Stateless-only node types in a Service Fabric cluster
description: Learn how to create and deploy stateless node types in an Azure Service Fabric cluster.
Previously updated: 03/16/2022
Last updated: 07/11/2022

# Deploy an Azure Service Fabric cluster with stateless-only node types

Service Fabric node types come with the inherent assumption that, at some point, stateful services might be placed on the nodes. Stateless node types change this assumption for a node type, allowing the node type to use other features such as faster scale-out operations, support for automatic OS upgrades on Bronze durability, and scaling out to more than 100 nodes in a single virtual machine scale set.
service-fabric Service Fabric Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-support.md
Title: Learn about Azure Service Fabric Support options
description: Azure Service Fabric cluster versions supported and links to file support tickets.
Previously updated: 5/17/2021
Last updated: 07/11/2022

# Azure Service Fabric support options

We have created a number of support request options to serve the needs of managing your Service Fabric clusters and application workloads, depending on the urgency of support needed and the severity of the issue.
service-fabric Service Fabric Technical Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-technical-overview.md
Title: Learn Azure Service Fabric terminology
description: Learn key Service Fabric terminology and concepts used in the rest of the documentation.
Previously updated: 09/17/2018
Last updated: 07/11/2022

# Service Fabric terminology overview
service-fabric Service Fabric Testability Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-testability-actions.md
Title: Simulate failures in Azure microservices
description: This article talks about the testability actions found in Microsoft Azure Service Fabric.
Previously updated: 03/26/2021
Last updated: 07/11/2022

# Testability actions

In order to simulate an unreliable infrastructure, Azure Service Fabric provides you, the developer, with ways to simulate various real-world failures and state transitions. These are exposed as testability actions. The actions are the low-level APIs that cause a specific fault injection, state transition, or validation. By combining these actions, you can write comprehensive test scenarios for your services.
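For example, one such action restarts a replica of a stateful service. The sketch below is an assumption-laden illustration (local cluster endpoint, hypothetical service URI) rather than a complete test.

```csharp
using System;
using System.Fabric;
using System.Threading.Tasks;

// Sketch of one low-level testability action: restarting the primary
// replica of a stateful service via the fault management client.
public static async Task RestartPrimaryAsync()
{
    var fabricClient = new FabricClient("localhost:19000");

    // Pick a random partition of the (hypothetical) target service,
    // then select its primary replica.
    PartitionSelector partition = PartitionSelector.RandomOf(
        new Uri("fabric:/MyApp/MyStatefulService"));
    ReplicaSelector primary = ReplicaSelector.PrimaryOf(partition);

    // CompletionMode.Verify waits until the fault is confirmed to have taken effect.
    await fabricClient.FaultManager.RestartReplicaAsync(primary, CompletionMode.Verify);
}
```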
service-fabric Service Fabric Testability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-testability-overview.md
Title: Fault Analysis Service overview
description: This article describes the Fault Analysis Service in Service Fabric for inducing faults and running test scenarios against your services.
Previously updated: 06/15/2017
Last updated: 07/11/2022

# Introduction to the Fault Analysis Service

The Fault Analysis Service is designed for testing services that are built on Microsoft Azure Service Fabric. With the Fault Analysis Service you can induce meaningful faults and run complete test scenarios against your applications. These faults and scenarios exercise and validate the numerous states and transitions that a service will experience throughout its lifetime, all in a controlled, safe, and consistent manner.
service-fabric Service Fabric Testability Scenarios Service Communication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-testability-scenarios-service-communication.md
Title: 'Testability: Service communication'
description: Service-to-service communication is a critical integration point of a Service Fabric application. This article discusses design considerations and testing techniques.
Previously updated: 11/02/2017
Last updated: 07/11/2022

# Service Fabric testability scenarios: Service communication

Microservices and service-oriented architectural styles surface naturally in Azure Service Fabric. In these types of distributed architectures, componentized microservice applications are typically composed of multiple services that need to talk to each other. In even the simplest cases, you generally have at least a stateless web service and a stateful data storage service that need to communicate.
service-fabric Service Fabric Testability Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-testability-scenarios.md
Title: Create chaos and failover tests for Azure Service Fabric
description: Using the Service Fabric chaos test and failover test scenarios to induce faults and verify the reliability of your services.
Previously updated: 10/1/2019
Last updated: 07/11/2022

# Testability scenarios

Large distributed systems like cloud infrastructures are inherently unreliable. Azure Service Fabric gives developers the ability to write services to run on top of unreliable infrastructures. In order to write high-quality services, developers need to be able to induce such unreliable infrastructure to test the stability of their services.
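The chaos test scenario can be driven from C# through `System.Fabric.Testability.Scenario`; the sketch below uses illustrative time budgets and a local cluster connection.

```csharp
using System;
using System.Fabric;
using System.Fabric.Testability.Scenario;
using System.Threading;
using System.Threading.Tasks;

// Sketch of running the chaos test scenario; the connection string and
// the time budgets are illustrative assumptions.
public static async Task RunChaosTestAsync()
{
    var fabricClient = new FabricClient("localhost:19000");

    var parameters = new ChaosTestScenarioParameters(
        TimeSpan.FromSeconds(180),  // max time to wait for the cluster to stabilize
        3,                          // max concurrent faults per iteration
        true,                       // also induce move-replica faults
        TimeSpan.FromMinutes(10));  // total time to run the scenario

    // Induces random faults while repeatedly validating cluster health.
    var chaosScenario = new ChaosTestScenario(fabricClient, parameters);
    await chaosScenario.ExecuteAsync(CancellationToken.None);
}
```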
service-fabric Service Fabric Testability Workload Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-testability-workload-tests.md
Title: Simulate faults in Azure Service Fabric apps
description: Learn about how to harden your Azure Service Fabric services against graceful and ungraceful failures.
Previously updated: 06/15/2017
Last updated: 07/11/2022

# Simulate failures during service workloads

The testability scenarios in Azure Service Fabric enable developers to avoid worrying about individual faults. There are scenarios, however, where an explicit interleaving of client workload and failures might be needed. The interleaving of client workload and faults ensures that the service is actually performing some action when the failure happens. Given the level of control that testability provides, these faults could be induced at precise points of the workload execution. This induction of faults at different states in the application can find bugs and improve quality.
service-fabric Service Fabric Troubleshoot Local Cluster Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-troubleshoot-local-cluster-setup.md
Title: Troubleshoot your local Azure Service Fabric cluster setup
description: This article covers a set of suggestions for troubleshooting your local development cluster.
Previously updated: 02/23/2018
Last updated: 07/14/2022

# Troubleshoot your local development cluster setup

If you run into an issue while interacting with your local Azure Service Fabric development cluster, review the following suggestions for potential solutions.
service-fabric Service Fabric Tutorial Containers Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-containers-failover.md
Title: Failover and scale a containers app
description: In this tutorial, you learn how failover is handled in an Azure Service Fabric containers application. Also learn how to scale the containers and services running in a cluster.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Demonstrate failover and scaling of container services with Service Fabric

This tutorial is part three of a series. In this tutorial, you learn how failover is handled in Service Fabric container applications. Additionally, you learn how to scale containers. In this tutorial, you:
service-fabric Service Fabric Tutorial Create Container Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-container-images.md
Title: Create container images on Service Fabric in Azure
description: In this tutorial, you learn how to create container images for a multi-container Service Fabric application.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Create container images on a Linux Service Fabric cluster

This tutorial is part one of a tutorial series that demonstrates how to use containers in a Linux Service Fabric cluster. In this tutorial, a multi-container application is prepared for use with Service Fabric. In subsequent tutorials, these images are used as part of a Service Fabric application. In this tutorial you learn how to:
service-fabric Service Fabric Tutorial Create Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-dotnet-app.md
Title: Create a .NET app on Service Fabric in Azure
description: In this tutorial, you learn how to create an application with an ASP.NET Core front-end and a reliable service stateful back-end and deploy the application to a cluster.
Previously updated: 07/10/2019
Last updated: 07/14/2022

# Tutorial: Create and deploy an application with an ASP.NET Core Web API front-end service and a stateful back-end service

This tutorial is part one of a series. You will learn how to create an Azure Service Fabric application with an ASP.NET Core Web API front end and a stateful back-end service to store your data. When you're finished, you have a voting application with an ASP.NET Core web front-end that saves voting results in a stateful back-end service in the cluster. This tutorial series requires a Windows developer machine. If you don't want to manually create the voting application, you can [download the source code](https://github.com/Azure-Samples/service-fabric-dotnet-quickstart/) for the completed application and skip ahead to [Walk through the voting sample application](#walkthrough_anchor). If you prefer, you can also watch a [video walk-through](/Events/Connect/2017/E100) of this tutorial.
service-fabric Service Fabric Tutorial Create Java App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-java-app.md
Title: 'Tutorial: Create a Java app on Azure Service Fabric'
description: In this tutorial, learn how to create a reliable service Java application with a front-end, create a reliable services stateful back-end, and deploy the application to a cluster.
Previously updated: 09/01/2018
Last updated: 07/14/2022

# Tutorial: Create an application with a Java API front-end service and a stateful back-end service on Azure Service Fabric

This tutorial is part one of a series. When you are finished, you have a Voting application with a Java web front end that saves voting results in a stateful back-end service on Azure Service Fabric. This tutorial series requires that you have a working Mac OSX or Linux developer machine. If you don't want to manually create the voting application, you can [download the source code for the completed application](https://github.com/Azure-Samples/service-fabric-java-quickstart) and skip ahead to [Walk through the voting sample application](service-fabric-tutorial-create-java-app.md#walk-through-the-voting-sample-application). Also, consider following the [Quickstart for Java reliable services](service-fabric-quickstart-java-reliable-services.md).
service-fabric Service Fabric Tutorial Create Vnet And Linux Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-linux-cluster.md
Title: Create a Linux Service Fabric cluster in Azure
description: Learn how to deploy a Linux Service Fabric cluster into an existing Azure virtual network using Azure CLI.
Previously updated: 02/14/2019
Last updated: 07/14/2022

# Deploy a Linux Service Fabric cluster into an Azure virtual network

In this article, you learn how to deploy a Linux Service Fabric cluster into an [Azure virtual network (VNET)](../virtual-network/virtual-networks-overview.md) using Azure CLI and a template. When you're finished, you have a cluster running in the cloud that you can deploy applications to. To create a Windows cluster using PowerShell, see [Create a secure Windows cluster on Azure](service-fabric-tutorial-create-vnet-and-windows-cluster.md).
service-fabric Service Fabric Tutorial Create Vnet And Windows Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster.md
Title: Create a Service Fabric cluster running Windows in Azure
description: In this tutorial, you learn how to deploy a Windows Service Fabric cluster into an Azure virtual network and network security group by using PowerShell.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Deploy a Service Fabric cluster running Windows into an Azure virtual network

This tutorial is part one of a series. You learn how to deploy an Azure Service Fabric cluster running Windows into an [Azure virtual network](../virtual-network/virtual-networks-overview.md) and [network security group](../virtual-network/virtual-network-vnet-plan-design-arm.md) by using PowerShell and a template. When you're finished, you have a cluster running in the cloud to which you can deploy applications. To create a Linux cluster that uses the Azure CLI, see [Create a secure Linux cluster on Azure](service-fabric-tutorial-create-vnet-and-linux-cluster.md).
service-fabric Service Fabric Tutorial Debug Log Local Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-debug-log-local-cluster.md
Title: Debug a Java app on a local Service Fabric cluster
description: In this tutorial, learn how to debug and get logs from a Service Fabric Java application running on a local cluster.
Previously updated: 02/26/2018
Last updated: 07/14/2022

# Tutorial: Debug a Java application deployed on a local Service Fabric cluster
service-fabric Service Fabric Tutorial Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-delete-cluster.md
Title: Delete a Service Fabric cluster in Azure
description: In this tutorial, you learn how to delete an Azure-hosted Service Fabric cluster and all its resources. You can delete the resource group containing the cluster or selectively delete resources.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Remove a Service Fabric cluster running in Azure

This tutorial is part five of a series, and shows you how to delete a Service Fabric cluster running in Azure. To completely delete a Service Fabric cluster you also need to delete the resources used by the cluster. You have two options: either delete the resource group that the cluster is in (which deletes the cluster resource and all other resources in the resource group) or specifically delete the cluster resource and its associated resources (but not other resources in the resource group).
service-fabric Service Fabric Tutorial Deploy Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md
Title: Integrate API Management with Service Fabric in Azure
description: Learn how to quickly get started with Azure API Management and route traffic to a back-end service in Service Fabric.
Previously updated: 07/10/2019
Last updated: 07/14/2022

# Integrate API Management with Service Fabric in Azure

Deploying Azure API Management with Service Fabric is an advanced scenario. API Management is useful when you need to publish APIs with a rich set of routing rules for your back-end Service Fabric services. Cloud applications typically need a front-end gateway to provide a single point of ingress for users, devices, or other applications. In Service Fabric, a gateway can be any stateless service designed for traffic ingress such as an ASP.NET Core application, Event Hubs, IoT Hub, or Azure API Management.
service-fabric Service Fabric Tutorial Deploy App With Cicd Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-app-with-cicd-vsts.md
Title: Deploy an app with CI and Azure Pipelines
description: In this tutorial, you learn how to set up continuous integration and deployment for a Service Fabric application using Azure Pipelines.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Deploy an application with CI/CD to a Service Fabric cluster

This tutorial is part four of a series and describes how to set up continuous integration and deployment for an Azure Service Fabric application using Azure Pipelines. An existing Service Fabric application is needed; the application created in [Build a .NET application](service-fabric-tutorial-create-dotnet-app.md) is used as an example.
service-fabric Service Fabric Tutorial Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-app.md
Title: Deploy a Service Fabric app to a cluster in Azure
description: Learn how to deploy an existing application to a newly created Azure Service Fabric cluster from Visual Studio.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Deploy a Service Fabric application to a cluster in Azure

This tutorial is part two of a series. It shows you how to deploy an Azure Service Fabric application to a new cluster in Azure.
service-fabric Service Fabric Tutorial Deploy Container App With Cicd Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-container-app-with-cicd-vsts.md
Title: Deploy a container application with CI/CD
description: In this tutorial, you learn how to set up continuous integration and deployment for an Azure Service Fabric container application using Visual Studio and Azure DevOps.
Previously updated: 08/29/2018
Last updated: 07/14/2022

# Tutorial: Deploy a container application with CI/CD to a Service Fabric cluster

This tutorial is part two of a series and describes how to set up continuous integration and deployment for an Azure Service Fabric container application using Visual Studio and Azure DevOps. An existing Service Fabric application is needed; the application created in [Deploy a .NET application in a Windows container to Azure Service Fabric](service-fabric-host-app-in-a-container.md) is used as an example.
service-fabric Service Fabric Tutorial Dotnet App Enable Https Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md
Title: Add an HTTPS endpoint using Kestrel
description: In this tutorial, you learn how to add an HTTPS endpoint to an ASP.NET Core front-end web service using Kestrel and deploy the application to a cluster.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Add an HTTPS endpoint to an ASP.NET Core Web API front-end service using Kestrel

This tutorial is part three of a series. You will learn how to enable HTTPS in an ASP.NET Core service running on Service Fabric. When you're finished, you have a voting application with an HTTPS-enabled ASP.NET Core web front-end listening on port 443. If you don't want to manually create the voting application in [Build a .NET Service Fabric application](service-fabric-tutorial-deploy-app-to-party-cluster.md), you can [download the source code](https://github.com/Azure-Samples/service-fabric-dotnet-quickstart/) for the completed application.
service-fabric Service Fabric Tutorial Java Deploy Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-java-deploy-azure.md
Title: Deploy a Java app to a Service Fabric cluster in Azure
description: In this tutorial, learn how to deploy a Java Service Fabric application to an Azure Service Fabric cluster.
Previously updated: 02/26/2018
Last updated: 07/14/2022

# Tutorial: Deploy a Java application to a Service Fabric cluster in Azure

This tutorial is part three of a series and shows you how to deploy a Service Fabric application to a cluster in Azure.
service-fabric Service Fabric Tutorial Java Elk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-java-elk.md
Title: Monitor your apps in Service Fabric using ELK in Azure
description: In this tutorial, learn how to set up ELK and monitor your Service Fabric applications.
Previously updated: 02/26/2018
Last updated: 07/14/2022

# Tutorial: Monitor your Service Fabric applications using ELK

This tutorial is part four of a series. It shows how to use ELK (Elasticsearch, Logstash, and Kibana) to monitor Service Fabric applications running in Azure.
service-fabric Service Fabric Tutorial Java Jenkins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-java-jenkins.md
Title: Configure Jenkins for a Java app on Service Fabric in Azure
description: In this tutorial, learn how to set up continuous integration using Jenkins to deploy a Java Service Fabric application.
Previously updated: 08/27/2018
Last updated: 07/14/2022

# Tutorial: Configure a Jenkins environment to enable CI/CD for a Java application on Service Fabric

This tutorial is part five of a series. It shows you how to use Jenkins to deploy upgrades to your application. In this tutorial, the Service Fabric Jenkins plugin is used in combination with a GitHub repository hosting the Voting application to deploy the application to a cluster.
service-fabric Service Fabric Tutorial Monitor Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitor-cluster.md
Title: Monitor a Service Fabric cluster in Azure
description: In this tutorial, you learn how to monitor a cluster by viewing Service Fabric events, querying the EventStore APIs, monitoring perf counters, and viewing health reports.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Monitor a Service Fabric cluster in Azure

Monitoring and diagnostics are critical to developing, testing, and deploying workloads in any cloud environment. This tutorial is part two of a series, and shows you how to monitor and diagnose a Service Fabric cluster using events, performance counters, and health reports. For more information, read the overview about [cluster monitoring](service-fabric-diagnostics-overview.md#platform-cluster-monitoring) and [infrastructure monitoring](service-fabric-diagnostics-overview.md#infrastructure-performance-monitoring).
service-fabric Service Fabric Tutorial Monitoring Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitoring-aspnet.md
Title: Monitor and diagnose ASP.NET Core services
description: In this tutorial, learn to configure monitoring and diagnostics for an Azure Service Fabric ASP.NET Core application.
Previously updated: 07/10/2019
Last updated: 07/14/2022

# Tutorial: Monitor and diagnose an ASP.NET Core application on Service Fabric using Application Insights

This tutorial is part five of a series. It walks through the steps to configure monitoring and diagnostics for an ASP.NET Core application running on a Service Fabric cluster using Application Insights. We will collect telemetry from the application developed in the first part of the tutorial, [Build a .NET Service Fabric application](service-fabric-tutorial-create-dotnet-app.md).
service-fabric Service Fabric Tutorial Monitoring Wincontainers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitoring-wincontainers.md
Title: Monitor and diagnose Windows containers
description: In this tutorial, you configure Azure Monitor logs for monitoring and diagnostics of Windows containers on Azure Service Fabric.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Monitor Windows containers on Service Fabric using Azure Monitor logs

This is part three of a tutorial, and walks you through configuring Azure Monitor logs to monitor your Windows containers orchestrated on Service Fabric.
service-fabric Service Fabric Tutorial Package Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-package-containers.md
Title: Package and deploy containers
description: In this tutorial, you learn how to generate an Azure Service Fabric application definition using Yeoman and package the application.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Package and deploy containers as a Service Fabric application using Yeoman

This tutorial is part two in a series. In this tutorial, a template generator tool (Yeoman) is used to generate a Service Fabric application definition. This application can then be used to deploy containers to Service Fabric. In this tutorial you learn how to:
service-fabric Service Fabric Tutorial Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-scale-cluster.md
Title: Scale a Service Fabric cluster in Azure
description: In this tutorial, you learn how to scale a Service Fabric cluster in Azure out and in, and how to clean up leftover resources.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Scale a Service Fabric cluster in Azure

This tutorial is part three of a series, and shows you how to scale your existing cluster out and in. When you've finished, you will know how to scale your cluster and how to clean up any leftover resources. For more information on scaling a cluster running in Azure, read [Scaling Service Fabric clusters](service-fabric-cluster-scaling.md).
service-fabric Service Fabric Tutorial Standalone Azure Create Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-standalone-azure-create-infrastructure.md
Title: Create infrastructure for a cluster on Azure VMs
description: In this tutorial, you learn how to set up the Azure VM infrastructure to run a Service Fabric cluster.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Create Azure VM infrastructure to host a Service Fabric cluster

Service Fabric standalone clusters offer you the option to choose your own environment and create a cluster as part of the "any OS, any cloud" approach that Service Fabric is taking. In this tutorial series, you create a standalone cluster hosted on Azure VMs and install an application onto it.
service-fabric Service Fabric Tutorial Standalone Clean Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-standalone-clean-up.md
Title: Clean up a standalone cluster
description: In this tutorial, learn how to delete AWS or Azure resources for your standalone Service Fabric cluster.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Clean up your standalone cluster

Service Fabric standalone clusters offer you the option to choose your own environment to host Service Fabric. In this tutorial series, you'll create a standalone cluster hosted on AWS or Azure and deploy an application to it.
service-fabric Service Fabric Tutorial Standalone Create Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-standalone-create-infrastructure.md
Title: Create infrastructure for a cluster on AWS
description: In this tutorial, you learn how to set up the AWS infrastructure to run a Service Fabric cluster.
Previously updated: 05/11/2018
Last updated: 07/14/2022

# Tutorial: Create AWS infrastructure to host a Service Fabric cluster

Service Fabric standalone clusters offer you the option to choose your own environment and create a cluster as part of the "any OS, any cloud" approach that Service Fabric is taking. In this tutorial series, you create a standalone cluster hosted on AWS and install an application into it.
service-fabric Service Fabric Tutorial Standalone Create Service Fabric Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-standalone-create-service-fabric-cluster.md
Title: Install Service Fabric standalone client
description: In this tutorial, learn how to install the Service Fabric standalone client on the cluster.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Install and create Service Fabric cluster

Service Fabric standalone clusters offer you the option to choose your own environment and create a cluster as part of the "any OS, any cloud" approach that Service Fabric is taking. In this tutorial series, you create a standalone cluster hosted on AWS or Azure and install an application into it.
service-fabric Service Fabric Tutorial Standalone Install An Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-standalone-install-an-application.md
Title: Install an app on a standalone cluster
description: In this tutorial, you learn how to install an application into your standalone Service Fabric cluster.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Deploy an application on your Service Fabric standalone cluster

Service Fabric standalone clusters offer you the option to choose your own environment and create a cluster as part of the "any OS, any cloud" approach that Service Fabric is taking. In this tutorial series, you create a standalone cluster hosted on AWS and deploy an application into it.
service-fabric Service Fabric Tutorial Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-upgrade-cluster.md
Title: Upgrade the Service Fabric runtime in Azure
description: In this tutorial, you learn how to use PowerShell to upgrade the runtime of an Azure-hosted Service Fabric cluster.
Previously updated: 07/22/2019
Last updated: 07/14/2022

# Tutorial: Upgrade the runtime of a Service Fabric cluster in Azure

This tutorial is part four of a series, and shows you how to upgrade the Service Fabric runtime on an Azure Service Fabric cluster. This tutorial part is written for Service Fabric clusters running on Azure and doesn't apply to standalone Service Fabric clusters.
service-fabric Service Fabric Understand And Troubleshoot With System Health Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-understand-and-troubleshoot-with-system-health-reports.md
Title: Troubleshoot with system health reports description: Describes the health reports sent by Azure Service Fabric components and their usage for troubleshooting cluster or application problems. Previously updated: 2/28/2018. Last updated: 07/14/2022. # Use system health reports to troubleshoot
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
Title: Azure Service Fabric versions description: Learn about cluster versions in Azure Service Fabric and platform versions actively supported. Previously updated: 04/12/2021. Last updated: 07/14/2022. # Service Fabric supported versions
service-fabric Service Fabric View Entities Aggregated Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-view-entities-aggregated-health.md
Title: How to view Azure Service Fabric entities' aggregated health description: Describes how to query, view, and evaluate Azure Service Fabric entities' aggregated health, through health queries and general queries. Previously updated: 2/28/2018. Last updated: 07/14/2022. # View Service Fabric health reports Azure Service Fabric introduces a [health model](service-fabric-health-introduction.md) with health entities on which system components and watchdogs can report local conditions that they are monitoring. The [health store](service-fabric-health-introduction.md#health-store) aggregates all health data to determine whether entities are healthy.
service-fabric Service Fabric Visualizing Your Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-visualizing-your-cluster.md
Title: Visualizing your cluster using Azure Service Fabric Explorer description: Service Fabric Explorer is an application for inspecting and managing cloud applications and nodes in a Microsoft Azure Service Fabric cluster. Previously updated: 01/24/2019. Last updated: 07/14/2022. # Visualize your cluster with Service Fabric Explorer Service Fabric Explorer (SFX) is an open-source tool for inspecting and managing Azure Service Fabric clusters. Service Fabric Explorer is a desktop application for Windows, macOS and Linux.
service-fabric Service Fabric Visualstudio Configure Secure Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-visualstudio-configure-secure-connections.md
Title: Configure secure Azure Service Fabric cluster connections description: Learn how to use Visual Studio to configure secure connections that are supported by the Azure Service Fabric cluster. Previously updated: 8/04/2017. Last updated: 07/14/2022. # Configure secure connections to a Service Fabric cluster from Visual Studio Learn how to use Visual Studio to securely access an Azure Service Fabric cluster with access control policies configured.
service-fabric Service Fabric Visualstudio Configure Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-visualstudio-configure-upgrade.md
Title: Configure the upgrade of a Service Fabric application description: Learn how to configure the settings for upgrading a Service Fabric application by using Microsoft Visual Studio. Previously updated: 06/29/2017. Last updated: 07/14/2022. # Configure the upgrade of a Service Fabric application in Visual Studio Visual Studio tools for Azure Service Fabric provide upgrade support for publishing to local or remote clusters. There are three scenarios in which you want to upgrade your application to a newer version instead of replacing the application during testing and debugging:
service-fabric Service Fabric Windows Cluster Windows Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-windows-cluster-windows-security.md
Title: Secure a cluster running on Windows by using Windows security description: Learn how to configure node-to-node and client-to-node security on a standalone cluster running on Windows by using Windows security. Previously updated: 08/24/2017. Last updated: 07/14/2022. # Secure a standalone cluster on Windows by using Windows security To prevent unauthorized access to a Service Fabric cluster, you must secure the cluster. Security is especially important when the cluster runs production workloads. This article describes how to configure node-to-node and client-to-node security by using Windows security in the *ClusterConfig.JSON* file. The process corresponds to the configure security step of [Create a standalone cluster running on Windows](service-fabric-cluster-creation-for-windows-server.md). For more information about how Service Fabric uses Windows security, see [Cluster security scenarios](service-fabric-cluster-security.md).
service-fabric Service Fabric Windows Cluster X509 Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-windows-cluster-x509-security.md
Title: Secure a cluster on Windows by using certificates description: Secure communication within an Azure Service Fabric standalone or on-premises cluster, as well as between clients and the cluster. Previously updated: 10/15/2017. Last updated: 07/14/2022. # Secure a standalone cluster on Windows by using X.509 certificates This article describes how to secure communication between the various nodes of your standalone Windows cluster. It also describes how to authenticate clients that connect to this cluster by using X.509 certificates. Authentication ensures that only authorized users can access the cluster and the deployed applications and perform management tasks. Certificate security should be enabled on the cluster when the cluster is created.
service-fabric Service Fabric Work With Reliable Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-work-with-reliable-collections.md
Title: Working with Reliable Collections description: Learn the best practices for working with Reliable Collections within an Azure Service Fabric application. Previously updated: 03/10/2020. Last updated: 07/14/2022. # Working with Reliable Collections
service-fabric Tutorial Managed Cluster Add Remove Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/tutorial-managed-cluster-add-remove-node-type.md
Title: Add and remove node types of a Service Fabric managed cluster description: In this tutorial, learn how to add and remove node types of a Service Fabric managed cluster. Previously updated: 05/10/2021. Last updated: 07/14/2022. # Tutorial: Add and remove node types from a Service Fabric managed cluster
service-fabric Tutorial Managed Cluster Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/tutorial-managed-cluster-deploy-app.md
Title: Deploy an application to a Service Fabric managed cluster via PowerShell description: In this tutorial, you will connect to a Service Fabric managed cluster and deploy an application via PowerShell. Previously updated: 5/10/2021. Last updated: 07/14/2022. # Tutorial: Deploy an app to a Service Fabric managed cluster
service-fabric Tutorial Managed Cluster Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/tutorial-managed-cluster-deploy.md
Title: Deploy a Service Fabric managed cluster description: In this tutorial, you will deploy a Service Fabric managed cluster for testing. Previously updated: 8/23/2021. Last updated: 07/14/2022. # Tutorial: Deploy a Service Fabric managed cluster
service-fabric Tutorial Managed Cluster Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/tutorial-managed-cluster-scale.md
Title: Scale out a Service Fabric managed cluster description: In this tutorial, learn how to scale out a node type of a Service Fabric managed cluster. Previously updated: 8/23/2021. Last updated: 07/14/2022. # Tutorial: Scale out a Service Fabric managed cluster
service-fabric Virtual Machine Scale Set Scale Node Type Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/virtual-machine-scale-set-scale-node-type-scale-out.md
Title: Add a node type to an Azure Service Fabric cluster description: Learn how to scale out a Service Fabric cluster by adding a Virtual Machine Scale Set. Previously updated: 02/13/2019. Last updated: 07/14/2022. # Scale a Service Fabric cluster out by adding a virtual machine scale set This article describes how to scale an Azure Service Fabric cluster by adding a new node type to an existing cluster. A Service Fabric cluster is a network-connected set of virtual or physical machines into which your microservices are deployed and managed. A machine or VM that's part of a cluster is called a node. Virtual machine scale sets are an Azure compute resource that you use to deploy and manage a collection of virtual machines as a set. Every node type that is defined in an Azure cluster is [set up as a separate scale set](service-fabric-cluster-nodetypes.md). Each node type can then be managed separately. After creating a Service Fabric cluster, you can scale a cluster horizontally by adding a new node type (virtual machine scale set) to an existing cluster. You can scale the cluster at any time, even when workloads are running on the cluster. As the cluster scales, your applications automatically scale as well.
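If you prefer scripting over the portal, the Az.ServiceFabric module can add a node type, backed by its own scale set, in a single call. A minimal sketch, where the resource group, cluster, node type, and credentials are placeholder assumptions:

```azurepowershell-interactive
# Assumed names throughout; replace with your own values.
$password = ConvertTo-SecureString -String "<admin-password>" -AsPlainText -Force

# Adds a new node type, which Service Fabric backs with a new virtual machine scale set.
Add-AzServiceFabricNodeType -ResourceGroupName "myResourceGroup" -Name "mysfcluster" `
    -NodeType "nt2" -Capacity 5 -VmUserName "azureadmin" -VmPassword $password
```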
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
Title: Deploy Azure File Sync | Microsoft Docs
-description: Learn how to deploy Azure File Sync, from start to finish, using the Azure portal, PowerShell, or the Azure CLI.
+description: Learn how to deploy Azure File Sync from start to finish using the Azure portal, PowerShell, or the Azure CLI.
The recommended steps to onboard on Azure File Sync for the first time with zero
1. Redirect users and applications to this new share. 1. You can optionally delete any duplicate shares on the servers.
-If you don't have extra storage for initial onboarding and would like to attach to the existing shares, you can pre-seed the data in the Azure files shares using another data transfer tool instead of using the Storage Sync Service to upload the data. The pre-seeding approach is only suggested if you can accept downtime and absolutely guarantee no data changes on the server shares during the initial onboarding process.
+If you don't have extra storage for initial onboarding and would like to attach to the existing shares, you can pre-seed the data in the Azure file shares using another data transfer tool instead of using the Storage Sync Service to upload the data. The pre-seeding approach is only suggested if you can accept downtime and absolutely guarantee no data changes on the server shares during the initial onboarding process.
1. Ensure that data on any of the servers can't change during the onboarding process.
-1. Pre-seed Azure file shares with the server data using any data transfer tool over SMB, such as Robocopy, or AzCopy over REST. If using Robocopy, make sure you mount the Azure file share using the storage account access key; don't use a domain identity. If using AzCopy, be sure to set the appropriate switches to preserve ACL timestamps and attributes.
+1. Pre-seed Azure file shares with the server data using any data transfer tool over SMB, such as Robocopy, or AzCopy over REST. If using Robocopy, make sure you mount the Azure file share(s) using the storage account access key; don't use a domain identity. If using AzCopy, be sure to set the appropriate switches to preserve ACL timestamps and attributes.
1. Create the Azure File Sync topology with the desired server endpoints pointing to the existing shares. 1. Let sync finish the reconciliation process on all endpoints. 1. Once reconciliation is complete, you can open the shares for changes.
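A minimal sketch of the pre-seeding step above using Robocopy over SMB, with placeholder account, share, and path names; the mount uses the storage account access key rather than a domain identity, as the guidance requires:

```powershell
# Assumed names; replace with your storage account, key, and share.
$account = "mystorageacct"
$key = ConvertTo-SecureString -String "<storage-account-key>" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential("AZURE\$account", $key)

# Map the Azure file share with the account key; -Persist makes the drive visible to robocopy.
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$account.file.core.windows.net\myshare" -Credential $cred -Persist

# Mirror the server share, preserving data, attributes, timestamps, and ACLs.
robocopy "D:\shares\myshare" "Z:\" /MIR /COPYALL /DCOPY:DAT
```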
storage Files Manage Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-manage-namespaces.md
New-DfsnFolder -Path $sharePath -TargetPath $targetUNC
Now that you have created a namespace, a folder, and a folder target, you should be able to mount your file share through DFS Namespaces. If you are using a domain-based namespace, the full path for your share should be `\\<domain-name>\<namespace>\<share>`. If you are using a standalone namespace, the full path for your share should be `\\<DFS-server>\<namespace>\<share>`. If you are using a standalone namespace with root consolidation, you can access directly through your old server name, such as `\\<old-server>\<share>`.
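As a quick check of the path conventions above, the DFSN module can list the folder targets behind a namespace path before you mount it. A sketch with hypothetical domain, namespace, and share names:

```powershell
# Hypothetical domain-based namespace path: \\<domain-name>\<namespace>\<share>
$namespacePath = "\\contoso.com\public\finance"

# Show the folder target(s) the namespace refers clients to.
Get-DfsnFolderTarget -Path $namespacePath

# Mount through the namespace path rather than the storage account UNC.
net use Z: $namespacePath
```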
+## Access-Based Enumeration (ABE)
+
+Using ABE to control the visibility of the files and folders in SMB Azure file shares isn't currently a supported scenario. ABE is a feature of DFS-N, so it's possible to configure identity-based authentication and enable the ABE feature. However, this only applies to the DFS-N folder targets; it doesn't retroactively apply to the targeted file shares themselves. This is because DFS-N works by referral, rather than as a proxy in front of the folder target.
+
+For example, if the user types in the path \\mydfsnserver\share, the SMB client gets the referral of \\mydfsnserver\share => \\server123\share and makes the mount against the latter.
+
+Because of this, ABE will only work in cases where the DFS-N server is hosting the list of usernames before the redirection:
+
+ \\DFSServer\users\contosouser1 => \\SA.file.core.windows.net\contosouser1
+ \\DFSServer\users\contosouser1 => \\SA.file.core.windows.net\users\contosouser1
+
+(Where **contosouser1** is a subfolder of the **users** share)
+
+If each user is a subfolder *after* the redirection, ABE won't work:
+
+ \\DFSServer\SomePath\users --> \\SA.file.core.windows.net\users
## See also - Deploying an Azure file share: [Planning for an Azure Files deployment](storage-files-planning.md) and [How to create a file share](storage-how-to-create-file-share.md). - Configuring file share access: [Identity-based authentication](storage-files-active-directory-overview.md) and [Networking considerations for direct access](storage-files-networking-overview.md).
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
- If users are accessing the Azure file share via a Windows Server that has the Azure File Sync agent installed, use an [audit policy](/windows/security/threat-protection/auditing/apply-a-basic-audit-policy-on-a-file-or-folder) or third-party product to track file changes and user access on the Windows Server. * <a id="access-based-enumeration"></a>
-**Does Azure Files support using Access-Based Enumeration (ABE) to control the visibility of the files and folders in SMB Azure file shares? Can I use DFS Namespaces (DFS-N) as a workaround to use ABE with Azure Files?**
+**Does Azure Files support using Access-Based Enumeration (ABE) to control the visibility of the files and folders in SMB Azure file shares?**
- Using ABE with Azure Files isn't currently a supported scenario, but you can [use DFS-N with SMB Azure file shares](files-manage-namespaces.md). ABE is a feature of DFS-N, so it's possible to configure identity-based authentication and enable the ABE feature. However, this only applies to the DFS-N folder targets; it doesn't retroactively apply to the targeted file shares themselves. This is because DFS-N works by referral, rather than as a proxy in front of the folder target.
-
- For example, if the user types in the path \\mydfsnserver\share, the SMB client gets the referral of \\mydfsnserver\share => \\server123\share and makes the mount against the latter.
-
- Because of this, ABE will only work in cases where the DFS-N server is hosting the list of usernames before the redirection:
-
- \\DFSServer\users\contosouser1 => \\SA.file.core.windows.net\contosouser1
- \\DFSServer\users\contosouser1 => \\SA.file.core.windows.net\users\contosouser1
-
- (Where **contosouser1** is a subfolder of the **users** share)
-
- If each user is a subfolder *after* the redirection, ABE won't work:
-
- \\DFSServer\SomePath\users --> \\SA.file.core.windows.net\users
+ Using ABE with Azure Files isn't currently supported, but you can [use DFS-N with SMB Azure file shares](files-manage-namespaces.md#access-based-enumeration-abe).
### AD DS & Azure AD DS Authentication * <a id="ad-support-devices"></a>
stream-analytics Job States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/job-states.md
A Stream Analytics job could be in one of four states at any given time: running
| State | Description | Recommended actions | | | | |
-| **Running** | Your job is running on Azure reading events coming from the defined input sources, processing them and writing the results to the configured output sinks. | It is a best practice to track your job's performance by monitoring [key metrics](./stream-analytics-set-up-alerts.md#scenarios-to-monitor). |
+| **Running** | Your job is running on Azure reading events coming from the defined input sources, processing them and writing the results to the configured output sinks. | It is a best practice to track your job's performance by monitoring [key metrics](./stream-analytics-job-metrics.md#scenarios-to-monitor). |
| **Stopped** | Your job is stopped and does not process events. | NA |
-| **Degraded** | There might be intermittent issues with your input and output connections. These errors are called transient errors, which might make your job enter a Degraded state. Stream Analytics will immediately try to recover from such errors and return to a Running state (within a few minutes). These errors could happen due to network issues, availability of other Azure resources, deserialization errors, etc. Your job's performance may be impacted when the job is in a degraded state.| You can look at the [diagnostic or activity logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to learn more about the cause of these transient errors. In cases such as deserialization errors, it is recommended to take corrective action to ensure events aren't malformed. If the job keeps reaching the resource utilization limit, try to increase the SU number or [parallelize your job](./stream-analytics-parallelization.md). In other cases where you cannot take any action, Stream Analytics will try to recover to a *Running* state. <br> You can use the [watermark delay](./stream-analytics-set-up-alerts.md#scenarios-to-monitor) metric to understand if these transient errors are impacting your job's performance.|
+| **Degraded** | There might be intermittent issues with your input and output connections. These errors are called transient errors, which might make your job enter a Degraded state. Stream Analytics will immediately try to recover from such errors and return to a Running state (within a few minutes). These errors could happen due to network issues, availability of other Azure resources, deserialization errors, etc. Your job's performance may be impacted when the job is in a degraded state.| You can look at the [diagnostic or activity logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to learn more about the cause of these transient errors. In cases such as deserialization errors, it is recommended to take corrective action to ensure events aren't malformed. If the job keeps reaching the resource utilization limit, try to increase the SU number or [parallelize your job](./stream-analytics-parallelization.md). In other cases where you cannot take any action, Stream Analytics will try to recover to a *Running* state. <br> You can use the [watermark delay](./stream-analytics-job-metrics.md#scenarios-to-monitor) metric to understand if these transient errors are impacting your job's performance.|
| **Failed** | Your job encountered a critical error resulting in a failed state. Events aren't read or processed. Runtime errors are a common cause for jobs ending up in a failed state. | You can [configure alerts](./stream-analytics-set-up-alerts.md#set-up-alerts-in-the-azure-portal) so that you get notified when the job goes to the Failed state. <br> <br>You can debug using [activity and resource logs](./stream-analytics-job-diagnostic-logs.md#debugging-using-activity-logs) to identify the root cause and address the issue.| ## Next steps
-* [Setup alerts for Azure Stream Analytics jobs](stream-analytics-set-up-alerts.md)
* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md) * [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
-* [Troubleshoot using activity and resource logs](./stream-analytics-job-diagnostic-logs.md)
+* [Troubleshoot using activity and resource logs](./stream-analytics-job-diagnostic-logs.md)
stream-analytics Power Bi Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/power-bi-output.md
String | String | String | String | String
Datetime | String | String | Datetime | String ## Limitations and best practices
-Currently, Power BI can be called roughly once per second. Streaming visuals support packets of 15 KB. Beyond that, streaming visuals fail (but push continues to work). Because of these limitations, Power BI lends itself most naturally to cases where Azure Stream Analytics does a significant data load reduction. We recommend using a Tumbling window or Hopping window to ensure that data push is at most one push per second, and that your query lands within the throughput requirements.
+Currently, Power BI can be called roughly once per second. Streaming visuals support packets of 15 KB. Beyond that, streaming visuals fail (but push continues to work). Because of these limitations, Power BI lends itself most naturally to cases where Azure Stream Analytics does a significant data load reduction. We recommend using a Tumbling window or Hopping window to ensure that data push is at most one push per second, and that your query lands within the throughput requirements. For more info on output batch size, see [Power BI REST API limits](/power-bi/developer/automation/api-rest-api-limitations).
-For more info on output batch size, see [Power BI REST API limits](/power-bi/developer/automation/api-rest-api-limitations).
+You can use the following equation to compute the value to give your window in seconds:
+
+![Screenshot of equation to compute value to give window in seconds.](./media/stream-analytics-power-bi-dashboard/compute-window-seconds-equation.png)
+
+For example:
+
+* You have 1,000 devices sending data at one-second intervals.
+* You are using the Power BI Pro SKU that supports 1,000,000 rows per hour.
+* You want to publish the amount of average data per device to Power BI.
+
+As a result, the equation becomes:
+
+![Screenshot of equation based on example criteria.](./media/stream-analytics-power-bi-dashboard/power-bi-example-equation.png)
+
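+In plain text, assuming the images above show the usual rate-limit arithmetic, the equation is:
+
+```latex
+\text{window (seconds)} \geq \frac{\text{devices} \times 3600}{\text{max rows per hour}}
+  = \frac{1000 \times 3600}{1{,}000{,}000} = 3.6 \;\Rightarrow\; \text{round up to } 4
+```
+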
+Given this configuration, you can change the original query to the following:
+
+```SQL
+ SELECT
+ MAX(hmdt) AS hmdt,
+ MAX(temp) AS temp,
+ System.TimeStamp AS time,
+ dspl
+ INTO "CallStream-PowerBI"
+ FROM
+ Input TIMESTAMP BY time
+ GROUP BY
+ TUMBLINGWINDOW(ss,4),
+ dspl
+```
+
+### Renew authorization
+If the password has changed since your job was created or last authenticated, you need to reauthenticate your Power BI account. If Azure AD Multi-Factor Authentication is configured on your Azure Active Directory (Azure AD) tenant, you also need to renew Power BI authorization every two weeks. If you don't renew, you could see symptoms such as a lack of job output or an `Authenticate user error` in the operation logs.
+
+Similarly, if a job starts after the token has expired, an error occurs and the job fails. To resolve this issue, stop the job that's running and go to your Power BI output. To avoid data loss, select the **Renew authorization** link, and then restart your job from the **Last Stopped Time**.
+
+After the authorization has been refreshed with Power BI, a green alert appears in the authorization area to reflect that the issue has been resolved. To avoid this reauthentication requirement altogether, it is recommended to [use Managed Identity to authenticate your Azure Stream Analytics job to Power BI](powerbi-output-managed-identity.md).
## Next steps
stream-analytics Stream Analytics Job Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics.md
Azure Stream Analytics provides the following metrics for you to monitor your jo
| SU (Memory) % Utilization | The percentage of memory utilized by your job. If SU % utilization is consistently over 80%, the watermark delay is rising, and the number of backlogged events is rising, consider increasing streaming units. High utilization indicates that the job is using close to the maximum allocated resources. | | Watermark Delay | The maximum watermark delay across all partitions of all outputs in the job. |
-You can use these metrics to [monitor the performance of your Stream Analytics job](./stream-analytics-set-up-alerts.md#scenarios-to-monitor).
+## Scenarios to monitor
+|Metric|Condition|Time Aggregation|Threshold|Corrective Actions|
+|-|-|-|-|-|
+|SU% Utilization|Greater than|Average|80|There are multiple factors that increase SU% Utilization. You can scale with query parallelization or increase the number of streaming units. For more information, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
+|CPU % Utilization|Greater than|Average|90|This likely means that some operations, such as UDFs, UDAs, or complex input deserialization, are requiring a lot of CPU cycles. This is usually overcome by increasing the number of Streaming Units of the job.|
+|Runtime errors|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the inputs, query, or outputs.|
+|Watermark delay|Greater than|Average|When the average value of this metric over the last 15 minutes is greater than the late arrival tolerance (in seconds). If you have not modified the late arrival tolerance, the default is set to 5 seconds.|Try increasing the number of SUs or parallelizing your query. For more information on SUs, see [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md#how-many-sus-are-required-for-a-job). For more information on parallelizing your query, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
+|Input deserialization errors|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the input. For more information on resource logs, see [Troubleshoot Azure Stream Analytics using resource logs](stream-analytics-job-diagnostic-logs.md).|
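+
+These recommendations can also be codified as metric alert rules. A minimal sketch with Az.Monitor for the watermark delay scenario, assuming placeholder resource names, an existing action group, and the 5-second default late arrival tolerance; verify the exact metric ID on your job's Metrics blade:
+
+```azurepowershell-interactive
+# Assumed resource group, job, and action group names.
+$jobId = (Get-AzStreamAnalyticsJob -ResourceGroupName "myRG" -Name "myJob").Id
+
+# Alert when the average watermark delay exceeds the late arrival tolerance (5 s by default).
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "OutputWatermarkDelaySeconds" `
+    -TimeAggregation Average -Operator GreaterThan -Threshold 5
+
+Add-AzMetricAlertRuleV2 -Name "watermark-delay-alert" -ResourceGroupName "myRG" `
+    -TargetResourceId $jobId -Condition $criteria -Severity 2 `
+    -WindowSize (New-TimeSpan -Minutes 15) -Frequency (New-TimeSpan -Minutes 5) `
+    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/myRG/providers/microsoft.insights/actionGroups/myActionGroup"
+```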
## Get help For further assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html)
stream-analytics Stream Analytics Job Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-reliability.md
Title: Avoid service interruptions in Azure Stream Analytics jobs description: This article describes guidance on making your Stream Analytics jobs upgrade resilient.
_With the exception of Central India_ (whose paired region, South India, does no
The article on **[availability and paired regions](../availability-zones/cross-region-replication-azure.md)** has the most up-to-date information on which regions are paired.
-It is recommended to deploy identical jobs to both paired regions. You should then [monitor these jobs](./stream-analytics-set-up-alerts.md#scenarios-to-monitor) to get notified when something unexpected happens. If one of these jobs ends up in a [Failed state](./job-states.md) after a Stream Analytics service update, you can contact customer support to help identify the root cause. You should also fail over any downstream consumers to the healthy job output.
+It is recommended to deploy identical jobs to both paired regions. You should then [monitor these jobs](./stream-analytics-job-metrics.md#scenarios-to-monitor) to get notified when something unexpected happens. If one of these jobs ends up in a [Failed state](./job-states.md) after a Stream Analytics service update, you can contact customer support to help identify the root cause. You should also fail over any downstream consumers to the healthy job output.
## Next steps
It is recommended to deploy identical jobs to both paired regions. You should th
* [Get started with Stream Analytics](stream-analytics-real-time-fraud-detection.md) * [Scale Stream Analytics jobs](stream-analytics-scale-jobs.md) * [Stream Analytics query language reference](/stream-analytics-query/stream-analytics-query-language-reference)
-* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
+* [Stream Analytics management REST API reference](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-monitoring.md
Alternatively, browse to the **Monitoring** blade in the left panel and click th
There are 17 types of metrics provided by Azure Stream Analytics service. To learn about the details of them, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
-You can also use these metrics to [monitor the performance of your Stream Analytics job](./stream-analytics-set-up-alerts.md#scenarios-to-monitor).
+You can also use these metrics to [monitor the performance of your Stream Analytics job](./stream-analytics-job-metrics.md#scenarios-to-monitor).
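To see the full set of metric names programmatically, you can query the metric definitions for a job. A small sketch, assuming placeholder resource names:

```azurepowershell-interactive
# List the metric definitions exposed by a Stream Analytics job.
$jobId = (Get-AzStreamAnalyticsJob -ResourceGroupName "myRG" -Name "myJob").Id
Get-AzMetricDefinition -ResourceId $jobId | Select-Object -ExpandProperty Name
```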
## Operate and aggregate metrics in portal monitor
-There are several options available for your to operate and aggregate the metrics in portal monitor page.
+There are several options available for you to operate on and aggregate the metrics in the portal monitor page.
To check the metrics data for a specific dimension, you can use **Add filter**. There are 3 important metrics dimensions available. To learn more about the metric dimensions, see [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md).
stream-analytics Stream Analytics Real Time Fraud Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
Previously updated: 07/08/2022. Last updated: 07/15/2022. #Customer intent: As an IT admin/developer, I want to run a Stream Analytics job to analyze phone call data and visualize results in a Power BI dashboard.
In this tutorial, you learn how to:
Before you start, make sure you have completed the following steps: * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Download the phone call event generator app [TelcoGenerator.zip](https://download.microsoft.com/download/8/B/D/8BD50991-8D54-4F59-AB83-3354B69C8A7E/TelcoGenerator.zip) from the Microsoft Download Center or get the source code from [GitHub](https://aka.ms/azure-stream-analytics-telcogenerator).
+* Download the phone call event generator app [TelcoGenerator.zip](https://aka.ms/asatelcodatagen) from the Microsoft Download Center or get the source code from [GitHub](https://github.com/Azure/azure-stream-analytics/tree/master/DataGenerators/TelcoGeneratorCore).
* You will need a Power BI account. ## Sign in to Azure
Use the following steps to create an event hub and send call data to that event
![Event hub configuration in the Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-portal.png)
-7. After the deployment is complete, select **Configuration** under **Settings** in your Event Hubs namespace and change the Minimum TLS version to **Version 1.0**.
- ![Screenshot of Event hub TLS configuration version 1.0 in the Azure portal.](media/stream-analytics-real-time-fraud-detection/event-hubs-tls-version.png)
- ### Grant access to the event hub and get a connection string Before an application can send data to Azure Event Hubs, the event hub must have a policy that allows access. The access policy produces a connection string that includes authorization information.
Before an application can send data to Azure Event Hubs, the event hub must have
Before you start the TelcoGenerator app, you should configure it to send data to the Azure Event Hubs you created earlier.
-1. Extract the contents of [TelcoGenerator.zip](https://download.microsoft.com/download/8/B/D/8BD50991-8D54-4F59-AB83-3354B69C8A7E/TelcoGenerator.zip) file.
+1. Extract the contents of the [TelcoGenerator.zip](https://aka.ms/asatelcodatagen) file.
2. Open the `TelcoGenerator\TelcoGenerator\telcodatagen.exe.config` file in a text editor of your choice. There is more than one `.config` file, so be sure that you open the correct one. 3. Update the `<appSettings>` element in the config file with the following details:
stream-analytics Stream Analytics Set Up Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-set-up-alerts.md
Previously updated: 06/21/2019. Last updated: 07/12/2022. # Set up alerts for Azure Stream Analytics jobs
The following example demonstrates how to set up alerts for when your job enters
Add an **Alert rule name**, **Description**, and your **Resource Group** to the **ALERT DETAILS** and click **Create alert rule** to create the rule for your Stream Analytics job. ![Screenshot shows the Create rule dialog box with ALERT DETAILS.](./media/stream-analytics-set-up-alerts/stream-analytics-create-alert-rule.png)
-
-## Scenarios to monitor
-
-The following alerts are recommended for monitoring the performance of your Stream Analytics job. These metrics should be evaluated every minute over the last 5-minute period.
-
-|Metric|Condition|Time Aggregation|Threshold|Corrective Actions|
-|-|-|-|-|-|
-|SU% Utilization|Greater than|Maximum|80|There are multiple factors that increase SU% Utilization. You can scale with query parallelization or increase the number of streaming units. For more information, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
-|Runtime errors|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the inputs, query, or outputs.|
-|Watermark delay|Greater than|Maximum|When average value of this metric over the last 15 minutes is greater than late arrival tolerance (in seconds). If you have not modified the late arrival tolerance, the default is set to 5 seconds.|Try increasing the number of SUs or parallelizing your query. For more information on SUs, see [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md#how-many-sus-are-required-for-a-job). For more information on parallelizing your query, see [Leverage query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).|
-|Input deserialization errors|Greater than|Total|0|Examine the activity or resource logs and make appropriate changes to the input. For more information on resource logs, see [Troubleshoot Azure Stream Analytics using resource logs](stream-analytics-job-diagnostic-logs.md)|
## Next steps * [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md)
-* [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
+* [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.2
+ Title: Azure Synapse Runtime for Apache Spark 3.2
description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.2.
Last updated: 06/13/2022
-# Azure Synapse Runtime for Apache Spark 3.2 (Preview)
-
-> [!IMPORTANT]
-> Azure Synapse Runtime for Apache Spark 3.2 is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Azure Synapse Runtime for Apache Spark 3.2
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
synapse-analytics Apache Spark Intelligent Cache Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-intelligent-cache-concept.md
Previously updated: 2/17/2022. Last updated: 7/7/2022.
The Synapse Intelligent Cache simplifies this process by automatically caching e
The Synapse cache is a single cache per node. If you're using a medium-size node and run two small executors on it, those two executors share the same cache.
-> [!Note]
-> Intelligent Cache is currently in Public Preview.
- ## Enable or Disable the cache The cache size can be adjusted based on the percent of total disk size available for each Apache Spark pool. By default, the cache is set to disabled but it's as easy as moving the **slider** bar from 0 (disabled) to the desired percentage for your cache size to enable it. We reserve a minimum of 20% of available disk space for data shuffles. For shuffle intensive workloads, you can minimize the cache size or disable the cache. We recommend starting with a 50% cache size and adjust as necessary. It's important to note that if your workload requires a lot of disk space on the local SSD for shuffle or RDD caching, then consider reducing the cache size to reduce the chance of failure due to insufficient storage. The actual size of the available storage and the cache size on each node will depend on the node family and node size.
When creating a new Spark pool, browse under the **additional settings** tab to
### Enabling/Disabling cache for existing Spark pools
-For existing Spark pools, browse to the **Scale settings** of your Apache Spark pool of choice to enable, by moving the **slider** to a value more then 0, or disable it, by moving **slider** to 0.
+For existing Spark pools, browse to the **Scale settings** of your Apache Spark pool of choice to enable it by moving the **slider** to a value greater than 0, or disable it by moving the **slider** to 0.
![How to enable or disable Intelligent Cache for existing Spark pools](./media/apache-spark-intelligent-cache-concept/inteligent-cache-setting-config.png)
This feature will benefit you if:
* Your workload uses Delta tables, parquet file formats and CSV files.
-* You're using Apache Spark v3.1 or higher on Azure Synapse.
+* You're using Apache Spark 3 or higher on Azure Synapse.
You won't see the benefit of this feature if:
-* You're reading a file that exceed the cache size because the beginning of the files could be evicted and subsequent queries will have to refetch the data from the remote storage. In this case, you won't see any benefits from the Intelligent Cache and you may want to increase your cache size and/or node size.
+* You're reading a file that exceeds the cache size because the beginning of the files could be evicted and subsequent queries will have to refetch the data from the remote storage. In this case, you won't see any benefits from the Intelligent Cache and you may want to increase your cache size and/or node size.
* Your workload requires large amounts of shuffle, then disabling the Intelligent Cache will free up available space to prevent your job from failing due to insufficient storage space.
You won't see the benefit of this feature if:
To learn more on Apache Spark, see the following articles: - [What is Apache Spark](./spark/../apache-spark-concepts.md) - [Apache Spark core concepts](./spark/../apache-spark-concepts.md)
- - [Azure Synapse Runtime for Apache Spark 3.1](./spark/../apache-spark-3-runtime.md)
+ - [Azure Synapse Runtime for Apache Spark 3.2](./spark/../apache-spark-32-runtime.md)
- [Apache Spark pool sizes and configurations](./spark/../apache-spark-pool-configurations.md) To learn about configuring Spark session settings
synapse-analytics Apache Spark Pool Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-pool-configurations.md
Title: Apache Spark pool concepts description: Introduction to Apache Spark pool sizes and configurations in Azure Synapse Analytics. Previously updated: 08/19/2021. Last updated: 07/7/2022. # Apache Spark pool configurations in Azure Synapse Analytics
When the autoscale feature is disabled, the number of nodes set will remain fixe
## Automatic pause
-The automatic pause feature releases resources after a set idle period reducing the overall cost of an Apache Spark pool. The number of minutes of idle time can be set once this feature is enabled. The automatic pause feature is independent of the autoscale feature. Resources can be paused whether the autoscale is enabled or disabled. This setting can be altered after pool creation although the instance may need to be restarted.
+The automatic pause feature releases resources after a set idle period, reducing the overall cost of an Apache Spark pool. The number of minutes of idle time can be set once this feature is enabled. The automatic pause feature is independent of the autoscale feature. Resources can be paused whether the autoscale is enabled or disabled. This setting can be altered after pool creation although active sessions will need to be restarted.
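For existing pools, the same setting can also be applied with Az.Synapse. A sketch, assuming placeholder workspace and pool names (parameter names may vary slightly across module versions):

```azurepowershell-interactive
# Pause the pool after 15 idle minutes; workspace and pool names are assumptions.
Update-AzSynapseSparkPool -WorkspaceName "myworkspace" -Name "mysparkpool" `
    -EnableAutoPause -AutoPauseDelayInMinute 15
```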
## Next steps * [Azure Synapse Analytics](../index.yml)
-* [Apache Spark Documentation](https://spark.apache.org/docs/2.4.5/)
+* [Apache Spark Documentation](https://spark.apache.org/docs/3.2.1/)
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Previously updated: 04/20/2022. Last updated: 07/12/2022. # Azure Synapse runtimes
When you create a serverless Apache Spark pool, you will have the option to sele
- Tested compatibility with specific Apache Spark versions - Access to popular, compatible connectors and open-source packages
-> [!IMPORTANT]
-> Azure Synapse Runtime for Apache Spark 3.2 is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- > [!NOTE] > - Maintenance updates will be automatically applied to new sessions for a given serverless Apache Spark pool. > - You should test and validate that your applications run properly when using new runtime versions.
When you create a serverless Apache Spark pool, you will have the option to sele
## Supported Azure Synapse runtime releases The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases.
-| Runtime name | Release date | Release stage |
-| -- | -- | -- |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | GA|
-| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | GA |
-| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | April 20, 2022 | Preview |
+| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date |
+| -- | -- | -- | -- | -- |
+| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | GA | July 8, 2023 | July 8, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | LTS | January 26, 2023 | January 26, 2024 |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __LTS<br/>End of Life to be announced__ | __July 22, 2022__ | July 21, 2023 |
## Runtime release stages
-## Preview runtimes
-Azure Synapse Analytics provides previews to give you a chance to evaluate and share feedback on features before they become generally available (GA). While a runtime is available in preview, new dependencies and component versions may be introduced. Support SLAs are not applicable for preview runtimes.
-
-## Generally available runtimes
-Generally available (GA) runtimes are open to all customers and are ready for production use. Once a runtime is generally available, security fixes and stability improvements may be backported. In addition, new components will only be introduced if they do not change underlying dependencies or component versions.
+For the complete runtime for Apache Spark lifecycle and support policies, refer to [Synapse runtime for Apache Spark lifecycle and supportability](./runtime-for-apache-spark-lifecycle-and-supportability.md).
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
+
+ Title: Synapse runtime for Apache Spark lifecycle and supportability
+description: Lifecycle and support policies for Synapse runtime for Apache Spark
+ Last updated: 07/12/2022
+# Synapse runtime for Apache Spark lifecycle and supportability
+
+Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime will be upgraded periodically to include new improvements, features, and patches.
+
+## Release cadence
+
+The Apache Spark project releases minor versions about __every 6 months__. Once released, the Azure Synapse team aims to provide a __preview Runtime within 90 days__.
+
+## Runtime lifecycle
+
+The following diagram captures expected lifecycle paths for a Synapse runtime for Apache Spark.
+
+![Diagram of the expected lifecycle paths for a Synapse runtime for Apache Spark.](./media/runtime-for-apache-spark-lifecycle/runtime-for-apache-spark-lifecycle.png)
+
+| Runtime release stage | Expected Lifecycle* | Notes |
+| -- | -- | -- |
+| Preview | 3 months* | Should be used to evaluate new features and validate workload migration to newer versions. <br/> Must not be used for production workloads. <br/> At Microsoft's discretion, a Preview runtime may not be elected to move into the GA stage, moving instead directly to the EOLA stage. |
+| Generally Available (GA) | 12 months* | Generally available (GA) runtimes are open to all customers and are ready for production use. <br/> At Microsoft's discretion, a GA runtime may not be elected to move into the LTS stage. |
+| Long Term Support (LTS) | 12 months* | Long term support (LTS) runtimes are open to all customers and are ready for production use, yet customers are encouraged to expedite validation and workload migration to the latest GA runtimes. |
+| End of Life announced (EOLA) | 12 months* for GA or LTS runtimes.<br/>1 month* for Preview runtimes. | At the end of a lifecycle period, if a runtime is chosen for retirement, an end-of-life announcement will be made to all customers per the [Azure retirement policy](https://docs.microsoft.com/lifecycle/faq/azure). This additional period serves as the exit ramp for customers to migrate workloads to a GA runtime. |
+| End of Life (EOL) | - | At this stage, the Runtime is retired and no longer supported. |
++
+\* *Expected duration of a runtime in the referred stage. Provided as an example for a given Runtime. Stage durations are subject to change at Microsoft discretion, as noted throughout this document.*
++
+> [!IMPORTANT]
+>
+> * The expected timelines are provided as best effort based on current Apache Spark releases. If the Apache Spark project changes the lifecycle of a specific version in a way that affects a Synapse runtime, changes to the stage dates will be noted on the [release notes](./apache-spark-version-support.md).
+> * Both GA and LTS runtimes may be moved into the EOL stage faster based on outstanding security risks and usage rate criteria, at Microsoft's discretion. Proper notification will be provided based on current Azure service policies; refer to [Lifecycle FAQ - Microsoft Azure](https://docs.microsoft.com/lifecycle/faq/azure) for information.
+>
+
+## Release stages and support
+
+### Preview runtimes
+Azure Synapse Analytics provides previews to give you a chance to evaluate and share feedback on features before they become generally available (GA). While a runtime is available in preview, new dependencies and component versions may be introduced. __Support SLAs are not applicable for preview runtimes, so no production-grade workloads should run on a Preview runtime.__
+
+At the end of the Preview lifecycle for the runtime, Microsoft will assess if the runtime will move into General Availability (GA) based on customer usage, security, and stability criteria, as described in the next section.
+
+If not eligible for the GA stage, the Preview runtime will move into the retirement cycle: an end-of-life announcement (EOLA) followed by a 1-month period, then the EOL stage.
+
+### Generally available runtimes
+Generally available (GA) runtimes are open to all customers and __are ready for production use__. Once a runtime is generally available, only security fixes will be backported. In addition, new components or features will be introduced if they do not change underlying dependencies or component versions.
+
+At the end of the GA lifecycle for the runtime, Microsoft will assess if the runtime will have an extended lifecycle (LTS) based on customer usage, security, and stability criteria, as described in the next section.
+
+If not eligible for the LTS stage, the GA runtime will move into the retirement cycle: an end-of-life announcement (EOLA) followed by a 12-month period, then the EOL stage.
+
+### Long term support runtimes
+Long term support (LTS) runtimes are open to all customers and are ready for production use, yet customers are encouraged to expedite validation and migration of code bases and workloads to the latest GA runtimes. Customers should preferably not onboard new workloads using an LTS runtime. Security fixes and stability improvements may be backported, but no new components or features will be introduced into the runtime at this stage.
+
+### End of life announcement
+At the end of the runtime lifecycle at any stage, an end-of-life announcement (EOLA) is made. Proper notification will be provided based on current Azure service policies; refer to [Lifecycle FAQ - Microsoft Azure](https://docs.microsoft.com/lifecycle/faq/azure) for information.
+
+Support SLAs are applicable for EOLA runtimes, yet all customers must migrate to a GA-stage runtime. During the retirement period, no security fixes or stability improvements will be backported.
+
+Existing pools will work as expected, yet __no new Synapse Spark pools of such a version can be created, as the runtime version will not be listed in Azure Synapse Studio, the Synapse API, or the Azure portal.__
+
+Based on outstanding security issues and runtime usage, Microsoft reserves the right to expedite moving a runtime into the final EOL stage.
+
+### End of life and retirement
+After the end-of-life announcement (EOLA) period, runtimes are considered retired.
+* Existing Spark pool definitions and metadata will still exist in the workspace for a defined period, yet __no pipelines, jobs, or notebooks will be able to execute__.
+* For runtimes coming from GA or LTS stages, Spark pool definitions will be deleted after 60 days.
+* For runtimes coming from a Preview stage, Spark pool definitions will be deleted after 15 days.
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/language-packs.md
You need the following things to customize your Windows 10 Enterprise multi-sess
- [Windows 10, version 2004 or later 01C 2022 LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2201C.iso) - [Windows 10, version 2004 or later 02C 2022 LXP ISO](https://software-static.download.prss.microsoft.com/sg/download/888969d5-f34g-4e03-ac9d-1f9786c66749/LanguageExperiencePack.2202C.iso) - [Windows 10, version 2004 or later 04C 2022 LXP ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66750/LanguageExperiencePack.2204C.iso)
-
+ - [Windows 10, version 2004 or later 06C 2022 LXP ISO](https://software-static.download.prss.microsoft.com/dbazure/888969d5-f34g-4e03-ac9d-1f9786c66750/LanguageExperiencePack.2206C.iso)
+ - An Azure Files Share or a file share on a Windows File Server Virtual Machine >[!NOTE]
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Below is the list of URLs your session host VMs need to access for Azure Virtual
| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic | AzureMonitor | | `catalogartifact.azureedge.net` | 443 | Azure Marketplace | AzureFrontDoor.Frontend | | `kms.core.windows.net` | 1688 | Windows activation | Internet |
+| `azkms.core.windows.net` | 1688 | Windows activation | Internet |
| `mrsglobalsteus2prod.blob.core.windows.net` | 443 | Agent and SXS stack updates | AzureCloud | | `wvdportalstorageblob.blob.core.windows.net` | 443 | Azure portal support | AzureCloud | | `169.254.169.254` | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A |
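A quick way to confirm a session host can reach the activation endpoints listed above (a sketch; run from the session host VM):

```powershell
# Verify outbound TCP 1688 to the Windows activation endpoints.
Test-NetConnection -ComputerName "azkms.core.windows.net" -Port 1688
Test-NetConnection -ComputerName "kms.core.windows.net" -Port 1688
```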
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
Grant Azure image builder permissions to create images in the specified resource
1. Create a VM Image Builder distributor object.
+ ```azurepowershell-interactive
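+    # Distribution settings: publish the built image to the gallery image definition,
+    # replicate it to $location, and keep it eligible for 'latest' (ExcludeFromLatest = $false).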
+ $disObjParams = @{
+ SharedImageDistributor = $true
+ ArtifactTag = @{tag='dis-share'}
+ GalleryImageId = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup/providers/Microsoft.Compute/galleries/$myGalleryName/images/$imageDefName"
+ ReplicationRegion = $location
+ RunOutputName = $runOutputName
+ ExcludeFromLatest = $false
+ }
+ $disSharedImg = New-AzImageBuilderDistributorObject @disObjParams
+ ```
+ 1. Create a VM Image Builder customization object. ```azurepowershell-interactive
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
# Configure point-to-site VPN clients - certificate authentication - Windows
-When you connect to an Azure virtual network (VNet) using point-to-site (P2S) and certificate authentication, you use the VPN client that is natively installed on the operating system from which you're connecting. If you use the tunnel type OpenVPN, you also have the option of using the Azure VPN Client or the OpenVPN client software. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients Windows.
+When you connect to an Azure virtual network (VNet) using point-to-site (P2S) and certificate authentication, you can use the VPN client that is natively installed on the operating system from which you're connecting. If you use the tunnel type OpenVPN, you also have the option of using the Azure VPN Client or the OpenVPN client software. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients.
The VPN client configuration files that you generate are specific to the P2S VPN gateway configuration for the VNet. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client configuration files and apply the new configuration to all of the VPN clients that you want to connect. For more information about P2S connections, see [About point-to-site VPN](point-to-site-about.md).
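Generating that zip file can be scripted as well. A minimal sketch with Az.Network, assuming placeholder gateway and resource group names:

```azurepowershell-interactive
# Generate the VPN client configuration package and get its download URL.
$clientConfig = New-AzVpnClientConfiguration -ResourceGroupName "myRG" -Name "myGateway" -AuthenticationMethod "EapTls"
$clientConfig.VpnProfileSASUrl
```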