Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md | Watch this video to learn how to configure monitoring for Azure AD B2C using Azu ## Deployment overview -Azure AD B2C uses [Azure Active Directory monitoring](../active-directory/reports-monitoring/overview-monitoring.md). Unlike Azure AD tenants, an Azure AD B2C tenant can't have a subscription associated with it. So, we need to take extra steps to enable the integration between Azure AD B2C and Log Analytics, which is where we'll send the logs. +Azure AD B2C uses [Azure Active Directory monitoring](../active-directory/reports-monitoring/overview-monitoring.md). Unlike Azure AD tenants, an Azure AD B2C tenant can't have a subscription associated with it. So, we need to take extra steps to enable the integration between Azure AD B2C and Log Analytics, which is where we send the logs. To enable _Diagnostic settings_ in Azure Active Directory within your Azure AD B2C tenant, you use [Azure Lighthouse](../lighthouse/overview.md) to [delegate a resource](../lighthouse/concepts/architecture.md), which allows your Azure AD B2C (the **Service Provider**) to manage an Azure AD (the **Customer**) resource. > [!TIP] During this deployment, you'll configure your Azure AD B2C tenant where logs are In summary, you'll use Azure Lighthouse to allow a user or group in your Azure AD B2C tenant to manage a resource group in a subscription associated with a different tenant (the Azure AD tenant). After this authorization is completed, the subscription and log analytics workspace can be selected as a target in the Diagnostic settings in Azure AD B2C. -## Pre-requisites +## Prerequisites - An Azure AD B2C account with [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) role on the Azure AD B2C tenant. The workbook will display reports in the form of a dashboard. ## Create alerts -Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can create alerts based on specific performance metrics or when certain events occur. You can also create alerts on absence of an event, or a number of events occur within a particular time window. For example, alerts can be used to notify you when average number of sign in exceeds a certain threshold. For more information, see [Create alerts](../azure-monitor/alerts/alerts-log.md). +Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can create alerts based on specific performance metrics or when certain events occur. You can also create alerts on absence of an event, or when a number of events occur within a particular time window. For example, alerts can be used to notify you when average number of sign-ins exceeds a certain threshold. For more information, see [Create alerts](../azure-monitor/alerts/alerts-log.md). -Use the following instructions to create a new Azure Alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md#configure-notifications) whenever there's a 25% drop in the **Total Requests** compared to previous period. Alert will run every 5 minutes and look for the drop in the last hour compared to the hour before it. The alerts are created using Kusto query language. 
+Use the following instructions to create a new Azure Alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md) whenever there's a 25% drop in the **Total Requests** compared to previous period. Alert will run every 5 minutes and look for the drop in the last hour compared to the hour before it. The alerts are created using Kusto query language. 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure you're using the directory that contains your *Azure AD* tenant. Select the **Directories + subscriptions** icon in the portal toolbar. 1. From **Log Analytics workspace**, select **Logs**.-1. Create a new **Kusto query** by using the query below. +1. Create a new **Kusto query** by using this query. ```kusto let start = ago(2h); Use the following instructions to create a new Azure Alert, which will send an [ | where PercentageChange <= threshold //Trigger's alert rule if matched. ``` -1. Select **Run**, to test the query. You should see the results if there is a drop of 25% or more in the total requests within the past hour. -1. To create an alert rule based on the query above, use the **+ New alert rule** option available in the toolbar. +1. Select **Run**, to test the query. You should see the results if there's a drop of 25% or more in the total requests within the past hour. +1. To create an alert rule based on this query, use the **+ New alert rule** option available in the toolbar. 1. On the **Create an alert rule** page, select **Condition name** 1. On the **Configure signal logic** page, set following values and then use **Done** button to save the changes. |
active-directory-b2c | Identity Provider Microsoft Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md | To enable sign-in for users with a Microsoft account in Azure Active Directory B 1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **New registration**. 1. Enter a **Name** for your application. For example, *MSAapp1*.-1. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**. +1. Under **Supported account types**, select **personal Microsoft accounts (e.g. Skype, Xbox)**. For more information on the different account type selections, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). 1. Under **Redirect URI (optional)**, select **Web** and enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your Azure AD B2C tenant, and `your-domain-name` with your custom domain. |
active-directory-b2c | Manage Users Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-users-portal.md | To reset a user's password: For details about restoring a user within the first 30 days after deletion, or for permanently deleting a user, see [Restore or remove a recently deleted user using Azure Active Directory](../active-directory/fundamentals/active-directory-users-restore.md). ++## Export consumer users ++1. In your Azure AD B2C directory, search for **Azure Active Directory**. +2. Select **Users**, and then select **Bulk Operations** and **Download Users**. +3. Select **Start**, and then select **File is ready! Click here to download**. + ++When downloading users via Bulk Operations option, the CSV file will bring users with their UPN attribute with the format *objectID@B2CDomain*. This is by design since that's the way the UPN information is stored in the B2C tenant. ++ ## Next steps For automated user management scenarios, for example migrating users from another identity provider to your Azure AD B2C directory, see [Azure AD B2C: User migration](user-migration.md). |
active-directory | On Premises Ldap Connector Prepare Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-prepare-directory.md | Now that we have configured the certificate and granted the network service acco - Place a check in the SSL box [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-2.png#lightbox)</br> 5. You should see a response similar to the screenshot below.- [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-3.png#lightbox)</br> + [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-3.png#lightbox)</br> 6. At the top, under **Connection** select **Bind**. 7. Leave the defaults and click **OK**. [](../../../includes/media/active-directory-app-provisioning-ldap/ldp-4.png#lightbox)</br> New-SelfSignedCertificate -DnsName $DNSName -CertStoreLocation $CertLocation #Create directory New-Item -Path $logpath -Name $dirname -ItemType $dirtype -#Export the certifcate from the local machine personal store +#Export the certificate from the local machine personal store Get-ChildItem -Path cert:\LocalMachine\my | Export-Certificate -FilePath c:\test\allcerts.sst -Type SST #Import the certificate in to the trusted root |
active-directory | On Premises Scim Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md | Once the agent is installed, no further configuration is necessary on-premises, 6. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. An example for a scenario where the agent is installed on the same host as the application is https://localhost:8585/scim  7. Select **Test Connection**, and save the credentials. The application SCIM endpoint must be actively listening for inbound provisioning requests, otherwise the test will fail. Use the steps [here](on-premises-ecma-troubleshoot.md#troubleshoot-test-connection-issues) if you run into connectivity issues. >[!NOTE]-> If the test connection fails, you will see the request made. Please note that while the URL in the test connection error message is truncated, the actual request sent to the aplication contains the entire URL provided above. +> If the test connection fails, you will see the request made. Please note that while the URL in the test connection error message is truncated, the actual request sent to the application contains the entire URL provided above. 8. Configure any [attribute mappings](customize-application-attributes.md) or [scoping](define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application. 9. Add users to scope by [assigning users and groups](../../active-directory/manage-apps/add-application-portal-assign-users.md) to the application. |
active-directory | Application Proxy Deployment Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-deployment-plan.md | These logs provide detailed information about logins to applications configured #### Application Proxy Connector monitoring -The connectors and the service take care of all the high availability tasks. You can monitor the status of your connectors from the Application Proxy page in the Azure portal. For more information about connector maintainence see [Understand Azure AD Application Proxy Connectors](./application-proxy-connectors.md#maintenance). +The connectors and the service take care of all the high availability tasks. You can monitor the status of your connectors from the Application Proxy page in the Azure portal. For more information about connector maintenance see [Understand Azure AD Application Proxy Connectors](./application-proxy-connectors.md#maintenance).  |
active-directory | Concept Authentication Web Browser Cookies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-web-browser-cookies.md | Persistent session tokens are stored as persistent cookies on the web browser's | ESTSAUTHLIGHT | Common | Contains Session GUID Information. Lite session state cookie used exclusively by client-side JavaScript in order to facilitate OIDC sign-out. Security feature. | | SignInStateCookie | Common | Contains list of services accessed to facilitate sign-out. No user information. Security feature. | | CCState | Common | Contains session information state to be used between Azure AD and the [Azure AD Backup Authentication Service](../conditional-access/resilience-defaults.md). |-| buid | Common | Tracks browser related information. Used for service telemetry and protection mechanisms. | +| build | Common | Tracks browser related information. Used for service telemetry and protection mechanisms. | | fpc | Common | Tracks browser related information. Used for tracking requests and throttling. | | esctx | Common | Session context cookie information. For CSRF protection. Binds a request to a specific browser instance so the request can't be replayed outside the browser. No user information. | | ch | Common | ProofOfPossessionCookie. Stores the Proof of Possession cookie hash to the user agent. | |
active-directory | How To Migrate Mfa Server To Azure Mfa With Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md | For step-by-step directions on this process, see [Configure the AD FS servers](/ Once you've configured the servers, you can add Azure AD MFA as an additional authentication method. - + ## Prepare Azure AD and implement migration |
active-directory | Howto Authentication Use Email Signin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md | To support this hybrid authentication approach, you synchronize your on-premises In both configuration options, the user submits their username and password to Azure AD, which validates the credentials and issues a ticket. When users sign in to Azure AD, it removes the need for your organization to host and manage an AD FS infrastructure. -One of the user attributes that's automatically synchronized by Azure AD Connect is *ProxyAddresses*. If users have an email address defined in the on-premesis AD DS environment as part of the *ProxyAddresses* attribute, it's automatically synchronized to Azure AD. This email address can then be used directly in the Azure AD sign-in process as an alternate login ID. +One of the user attributes that's automatically synchronized by Azure AD Connect is *ProxyAddresses*. If users have an email address defined in the on-premises AD DS environment as part of the *ProxyAddresses* attribute, it's automatically synchronized to Azure AD. This email address can then be used directly in the Azure AD sign-in process as an alternate login ID. > [!IMPORTANT] > Only emails in verified domains for the tenant are synchronized to Azure AD. Each Azure AD tenant has one or more verified domains, for which you have proven ownership, and are uniquely bound to your tenant. |
active-directory | Onboard Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md | This option allows subscriptions to be automatically detected and monitored with 1. For onboarding mode, select ‘Automatically Manage’ > [!NOTE]- > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be performed manually in the Entra console, or programatically with PowerShell or the Azure CLI. + > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. This can be performed manually in the Entra console, or programmatically with PowerShell or the Azure CLI. - Once complete, Click ‘Verify Now & Save’ This option detects all subscriptions that are accessible by the Cloud Infrastru 1. For onboarding mode, select ‘Automatically Manage’ > [!NOTE]- > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. You can do this manually in the Entra console, or programatically with PowerShell or the Azure CLI. + > The steps listed on the screen outline how to create the role assignment for the Cloud Infrastructure Entitlements Management application. You can do this manually in the Entra console, or programmatically with PowerShell or the Azure CLI. - Once complete, Click ‘Verify Now & Save’ |
active-directory | Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md | The automatically manage option allows projects to be automatically detected and Firstly, grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope. -Once done, the steps are listed in the screen, which shows how to further configure in the GCP console, or programatically with the gcloud CLI. +Once done, the steps are listed in the screen, which shows how to further configure in the GCP console, or programmatically with the gcloud CLI. Once everything has been configured, click next, then 'Verify Now & Save'. To view status of onboarding after saving the configuration: This option detects all projects that are accessible by the Cloud Infrastructure Entitlement Management application. - Firstly, grant Viewer and Security Reviewer role to service account created in previous step at organization, folder or project scope-- Once done, the steps are listed in the screen to do configure manually in the GCP console, or programatically with the gcloud CLI+- Once done, the steps are listed in the screen to do configure manually in the GCP console, or programmatically with the gcloud CLI - Click Next - Click 'Verify Now & Save' - Navigate to newly created Data Collector row under GCP data collectors |
active-directory | Accounts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/accounts-overview.md | While an account may be a member or guest in multiple organizations, MSAL doesn' The claims exposed on the account object are always the claims from the 'home tenant'/{authority} for an account. If that account hasn't been used to request a token for their home tenant, MSAL can't provide claims via the account object. For example: ```java-// Psuedo Code +// Pseudo Code IAccount account = getAccount("accountid"); String username = account.getClaims().get("preferred_username"); String issuer = account.getClaims().get("iss"); // The tenant specific authority To access claims about an account as they appear in other tenants, you first need to cast your account object to `IMultiTenantAccount`. All accounts may be multi-tenant, but the number of tenant profiles available via MSAL is based on which tenants you have requested tokens from using the current account. For example: ```java-// Psuedo Code +// Pseudo Code IAccount account = getAccount("accountid"); IMultiTenantAccount multiTenantAccount = (IMultiTenantAccount)account; |
active-directory | Active Directory Optional Claims | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md | These claims are always included in v1.0 Azure AD tokens, but not included in v2 | `in_corp` | Inside Corporate Network | Signals if the client is logging in from the corporate network. If they're not, the claim isn't included. | Based off of the [trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) settings in MFA. | | `family_name` | Last Name | Provides the last name, surname, or family name of the user as defined in the user object. <br>"family_name":"Miller" | Supported in MSA and Azure AD. Requires the `profile` scope. | | `given_name` | First name | Provides the first or "given" name of the user, as set on the user object.<br>"given_name": "Frank" | Supported in MSA and Azure AD. Requires the `profile` scope. |-| `upn` | User Principal Name | An identifer for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) for configuration of the claim. Requires the `profile` scope.| +| `upn` | User Principal Name | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) for configuration of the claim. Requires the `profile` scope.| ## v1.0-specific optional claims set |
active-directory | Developer Guide Conditional Access Authentication Context | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md | These steps are the changes that you need to carry in your code base. The steps try {- // read the header and checks if it conatins error with insufficient_claims value. + // read the header and checks if it contains error with insufficient_claims value. if (null != errorValue && "insufficient_claims" == errorValue) { var claimChallengeParameter = GetParameterValue(parameters, "claims"); |
active-directory | Developer Support Help Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md | If you can't find an answer to your problem by searching Microsoft Q&A, submit a | -| | | Azure AD B2B / External Identities | [Azure Active Directory External Identities](/answers/tags/231/azure-active-directory-b2c) | | Azure AD B2C | [Azure Active Directory External Identities](/answers/tags/231/azure-active-directory-b2c) |-| All other Azure Active Directory areas | [Azure Active Diretory](/answers/tags/49/azure-active-directory) | +| All other Azure Active Directory areas | [Azure Active Directory](/answers/tags/49/azure-active-directory) | | Azure RBAC | [Azure Role-Based access control](/answers/tags/189/azure-rbac) | | Azure Key Vault | [Azure Key Vault](/answers/tags/5/azure-key-vault) | | Microsoft Security | [Microsoft Defender for Cloud](/answers/tags/392/defender-for-cloud) | |
active-directory | Howto Add App Roles In Azure Ad Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md | The **Status** column should reflect that consent has been **Granted for \<tenan If you're implementing app role business logic that signs in the users in your application scenario, first define the app roles in **App registrations**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application, either access tokens when your app is the API being called by an app or ID tokens when your app is signing in a user. -If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an ID token to call the API, a roles claim is included in the ID token. Your next step is to add code to your web API to check for those roles when the API is called. +If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an access token to call the API, a roles claim is included in the token. Your next step is to add code to your web API to check for those roles when the API is called. To learn how to add authorization to your web API, see [Protected web API: Verify scopes and app roles](scenario-protected-web-api-verification-scope-app-roles.md). |
active-directory | Msal Net Token Cache Serialization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md | var app = ConfidentialClientApplicationBuilder ### Samples - The following sample showcases using the token cache serializers in .NET Framework and .NET Core applications: [ConfidentialClientTokenCache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache). -- The following sample is an ASP.NET web app that uses the same technics: [Use OpenID Connect to sign in users to Microsoft identity platform](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect).+- The following sample is an ASP.NET web app that uses the same techniques: [Use OpenID Connect to sign in users to Microsoft identity platform](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect). ## [Desktop apps](#tab/desktop) |
active-directory | Quickstart V2 Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-javascript.md | After the browser loads the application, select **Sign In**. The first time that ### How the sample works - + ### msal.js |
active-directory | Quickstart V2 Netcore Daemon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-netcore-daemon.md | -> Microsoft Identity Web (in the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) package) is the library that's used to request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). Given the daemon app in this quickstart calls Microsoft Graph, you install tje [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package, which handles automatically authenticated requests to Microsoft Graph (and references itself Microsoft.Identity.Web.TokenAcquisition) +> Microsoft Identity Web (in the [Microsoft.Identity.Web.TokenAcquisition](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenAcquisition) package) is the library that's used to request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials). Given the daemon app in this quickstart calls Microsoft Graph, you install the [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) package, which handles automatically authenticated requests to Microsoft Graph (and references itself Microsoft.Identity.Web.TokenAcquisition) > > Microsoft.Identity.Web.MicrosoftGraph can be installed by running the following command in the Visual Studio Package Manager Console: > |
active-directory | Sample V2 Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md | The following samples show public client mobile applications that access the Mic > [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow | > | -- | -- |-- |-- |-> | .NET Core | • [Call Microsoft Graph using MAUI](https://github.com/Azure-Samples/ms-identity-dotnetcore-maui/tree/main/MauiAppBasic) <br/> • [Call Microsoft Graph using MAUI wih broker](https://github.com/Azure-Samples/ms-identity-dotnetcore-maui/tree/main/MauiAppWithBroker) <br/> • [Call Active Directory B2C tenant using MAUI](https://github.com/Azure-Samples/ms-identity-dotnetcore-maui/tree/main/MauiAppB2C) | MSAL MAUI | Authorization code with PKCE | +> | .NET Core | • [Call Microsoft Graph using MAUI](https://github.com/Azure-Samples/ms-identity-dotnetcore-maui/tree/main/MauiAppBasic) <br/> • [Call Microsoft Graph using MAUI with broker](https://github.com/Azure-Samples/ms-identity-dotnetcore-maui/tree/main/MauiAppWithBroker) <br/> • [Call Active Directory B2C tenant using MAUI](https://github.com/Azure-Samples/ms-identity-dotnetcore-maui/tree/main/MauiAppB2C) | MSAL MAUI | Authorization code with PKCE | > | iOS | • [Call Microsoft Graph native](https://github.com/Azure-Samples/ms-identity-mobile-apple-swift-objc) <br/> • [Call Microsoft Graph with Azure AD nxoauth](https://github.com/azure-samples/active-directory-ios-native-nxoauth2-v2) | MSAL iOS | Authorization code with PKCE | > | Java | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-android-java) | MSAL Android | Authorization code with PKCE | > | Kotlin | [Sign in users and call Microsoft Graph](https://github.com/Azure-Samples/ms-identity-android-kotlin) | MSAL Android | Authorization code with PKCE | |
active-directory | Spa Quickstart Portal Angular Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-angular-ciam.md | Last updated 05/05/2023 > ```console > npm install && npm start > ```-> 1. Open your browser, visit `http://locahost:4200`, select **Sign-in**, then follow the prompts. +> 1. Open your browser, visit `http://localhost:4200`, select **Sign-in**, then follow the prompts. > |
active-directory | Spa Quickstart Portal React Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-react-ciam.md | Last updated 05/05/2023 > ```console > npm install && npm start > ```-> 1. Open your browser, visit `http://locahost:3000`, select **Sign-in**, then follow the prompts. +> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts. > |
active-directory | Spa Quickstart Portal Vanilla Js Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-vanilla-js-ciam.md | Last updated 05/05/2023 > ```console > npm install && npm start > ```-> 1. Open your browser, visit `http://locahost:3000`, select **Sign-in**, then follow the prompts. +> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts. > |
active-directory | Tutorial V2 Shared Device Mode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-shared-device-mode.md | private void loadAccount() { mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback()) {- @Overide + @Override public void onAccountLoaded(@Nullable IAccount activeAccount) { if (activeAccount != null) |
active-directory | Web App Quickstart Portal Dotnet Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-dotnet-ciam.md | Last updated 05/05/2023 > ```console > dotnet run > ```-> 1. Open your browser, visit `https://locahost:7274`, select **Sign-in**, then follow the prompts. +> 1. Open your browser, visit `https://localhost:7274`, select **Sign-in**, then follow the prompts. > |
active-directory | Web App Quickstart Portal Node Js Ciam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md | Last updated 05/05/2023 > ```console > npm install && npm start > ```-> 1. Open your browser, visit `http://locahost:3000`, select **Sign-in**, then follow the prompts. +> 1. Open your browser, visit `http://localhost:3000`, select **Sign-in**, then follow the prompts. > |
active-directory | Device Registration How It Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-registration-how-it-works.md | -Device Registration is a prerequisite to cloud-based authentication. Commonly, devices are Azure AD or hybrid Azure AD joined to complete device registration. This article provides details of how Azure AD join and hybrid Azure AD join work in managed and federated environments.For more information about how Azure AD authentication works on these devices, see the article [Primary refresh tokens](concept-primary-refresh-token.md#detailed-flows) +Device Registration is a prerequisite to cloud-based authentication. Commonly, devices are Azure AD or hybrid Azure AD joined to complete device registration. This article provides details of how Azure AD join and hybrid Azure AD join work in managed and federated environments. For more information about how Azure AD authentication works on these devices, see the article [Primary refresh tokens](concept-primary-refresh-token.md#detailed-flows). ## Azure AD joined in Managed environments Device Registration is a prerequisite to cloud-based authentication. Commonly, d ## Hybrid Azure AD joined in Managed environments | Phase | Description | | :-: | -- | |
active-directory | Troubleshoot Mac Sso Extension Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-mac-sso-extension-plugin.md | The following actions should take place for a successful interactive sign-on: 1. Return an access token to the client application to access the Microsoft Graph with a scope of User.Read. > [!IMPORTANT]-> The sample log snippets that follows, have been annoted with comment headers // that are not seen in the logs. They are used to help illustrate a specific action being undertaken. We have documented the log snippets this way to assist with copy and paste operations. In addition, the log examples have been trimmed to only show lines of significance for troubleshooting. +> The sample log snippets that follows, have been annotated with comment headers // that are not seen in the logs. They are used to help illustrate a specific action being undertaken. We have documented the log snippets this way to assist with copy and paste operations. In addition, the log examples have been trimmed to only show lines of significance for troubleshooting. The User clicks on the **Call Microsoft Graph API** button to invoke the sign-in process. Resolved authority, validated: YES, error: 0 [MSAL] Resolving authority: Masked(not-null), upn: Masked(null) [MSAL] Resolved authority, validated: YES, error: 0 [MSAL] Start webview authorization session with webview controller class MSIDAADOAuthEmbeddedWebviewController: -[MSAL] Presenting web view contoller. +[MSAL] Presenting web view controller. ``` The logging sample can be broken down into three segments: SSOExtensionLogs //Acquire PRT// /////////////// [MSAL] -completeWebAuthWithURL: msauth://microsoft.aad.brokerplugin/?code=(not-null)&client_info=(not-null)&state=(not-null)&session_state=(not-null)-[MSAL] Dismissed web view contoller. +[MSAL] Dismissed web view controller. [MSAL] Result from authorization session callbackURL host: microsoft.aad.brokerplugin , has error: NO [MSAL] (Default accessor) Looking for token with aliases ( "login.windows.net", |
active-directory | B2b Tutorial Require Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md | When collaborating with external B2B guest users, it’s a good idea to protect Example: 1. An admin or employee at Company A invites a guest user to use a cloud or on-premises application that is configured to require MFA for access. |
active-directory | 8 Secure Access Sensitivity Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md | You can apply sensitivity labels to containers such as Microsoft 365 Groups, Mic * **External user access** - determine if group owners can add guests to a group * **Access from unmanaged devices** - decide if and how unmanaged devices access content -  +  Sensitivity labels applied to a container, such as a SharePoint site, aren't applied to content in the container; they control access to content in the container. Labels can be applied automatically to the content in the container. For users to manually apply labels to content, enable sensitivity labels for Office files in SharePoint and OneDrive. |
active-directory | Active Directory Ops Guide Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md | Below are the user and group settings that can be locked down if there isn't an  > [!NOTE]-> Non-adminstrators can still access the Azure AD management interfaces via command-line and other programmatic interfaces. +> Non-administrators can still access the Azure AD management interfaces via command-line and other programmatic interfaces. #### Group settings |
active-directory | Multi Tenant Common Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/multi-tenant-common-solutions.md | Along with the current functionality, they want to offer the following. Company A provides SSO to on-premises apps for its own internal users using Azure Application Proxy as illustrated in the following diagram. :::image type="complex" source="media/multi-tenant-common-solutions/app-access-scenario.png" alt-text="Diagram illustrates example of application access.":::- Diagram Title: Azure Application Proxy architecture solution. On the top left, a box labeled https: //sales.constoso.com contains a globe icon to represent a webiste. Below it, a group of icons represent the User and are connected by an arrow from the User to the website. On the top right, a cloud shape labeled Azure Active Directory contains an icon labeled Application Proxy Service. An arrow connects the website to the cloud shape. On the bottom right, a box labeled DMZ has the subtitle On-premises. An arrow connects the cloud shape to the DMZ box, splitting in two to point to icons labeled Connector. Below the Connector icon on the left, an arrow points down and splits in two to point to icons labeled App 1 and App 2. Below the Connector icon on the right, an arrow points down to an icon labeled App 3. + Diagram Title: Azure Application Proxy architecture solution. On the top left, a box labeled https: //sales.constoso.com contains a globe icon to represent a website. Below it, a group of icons represent the User and are connected by an arrow from the User to the website. On the top right, a cloud shape labeled Azure Active Directory contains an icon labeled Application Proxy Service. An arrow connects the website to the cloud shape. On the bottom right, a box labeled DMZ has the subtitle On-premises. An arrow connects the cloud shape to the DMZ box, splitting in two to point to icons labeled Connector. Below the Connector icon on the left, an arrow points down and splits in two to point to icons labeled App 1 and App 2. Below the Connector icon on the right, an arrow points down to an icon labeled App 3. :::image-end::: Admins in tenant A perform the following steps to enable their external users to access the same on-premises applications. |
active-directory | Road To The Cloud Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md | To enable self-service capabilities, choose the appropriate [authentication meth Additional considerations include: -* Deploy [Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md) in a subset of domain contollers with **Audit** mode to gather information about the impact of modern policies. +* Deploy [Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md) in a subset of domain controllers with **Audit** mode to gather information about the impact of modern policies. * Gradually enable [combined registration for SSPR and Azure AD Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md). For example, roll out by region, subsidiary, or department for all users. * Go through a cycle of password change for all users to flush out weak passwords. After the cycle is complete, implement the policy expiration time. * Switch the Password Protection configuration in the domain controllers that have the mode set to **Enforced**. For more information, see [Enable on-premises Azure AD Password Protection](../authentication/howto-password-ban-bad-on-premises-operations.md). |
active-directory | Configure Logic App Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md | To configure those you follow these steps: "title": "Workflow.Id", "type": "string" },- "workflowVerson": { + "workflowVersion": { "description": "WorkflowVersion for Workflow Object", "title": "Workflow.WorkflowVersion", "type": "integer" |
active-directory | Customize Workflow Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md | Title: Customize emails sent out by workflow tasks -description: A step by step guide for customizing emails sent out using tasks within Lifecycle Workflows + Title: Customize emails sent from workflow tasks +description: Get a step-by-step guide for customizing emails that you send by using tasks within lifecycle workflows. Last updated 02/06/2023 -# Customize emails sent out by workflow tasks (Preview) +# Customize emails sent from workflow tasks (preview) -Lifecycle Workflows provide several tasks that send out email notifications. Email notifications can be customized to suit the needs of a specific workflow. For a list of these tasks, see: [Lifecycle Workflows tasks and definitions (Preview)](lifecycle-workflow-tasks.md). +Lifecycle workflows provide several tasks that send email notifications. You can customize email notifications to suit the needs of a specific workflow. For a list of these tasks, see [Lifecycle workflow built-in tasks (preview)](lifecycle-workflow-tasks.md). -Emails tasks allow for the customization of the following aspects: +Email tasks allow for the customization of: -- Additional CC recipients+- Additional recipients - Sender domain-- Organizational branding of the email+- Organizational branding - Subject - Message body - Email language -> [!NOTE] -> When customizing the subject or message body, we recommend that you also enable the custom sender domain and organizational branding, otherwise an additional security disclaimer will be added to your email. +When you're customizing the subject or message body, we recommend that you also enable the custom sender domain and organizational branding. Otherwise, your email will contain an additional security disclaimer. -For more information on these customizable parameters, see: [Common email task parameters](lifecycle-workflow-tasks.md#common-email-task-parameters). +For more information on these customizable parameters, see [Common email task parameters](lifecycle-workflow-tasks.md#common-email-task-parameters). ## Prerequisites -- Azure AD Premium P2--For more information, see: [License requirements](what-are-lifecycle-workflows.md#license-requirements) +- Azure Active Directory (Azure AD) Premium P2. For more information, see [License requirements](what-are-lifecycle-workflows.md#license-requirements). -## Customize email using the Azure portal +## Customize email by using the Azure portal -When customizing an email sent via Lifecycle workflows, you can choose to customize either a new or existing task. These customizations are done the same way no matter if the task is new or existing, but the following steps walk you through updating an existing task. To customize emails sent out from tasks within workflows using the Azure portal, you'd follow these steps: +When you're customizing an email sent via lifecycle workflows, you can choose to customize either a new task or an existing task. You do these customizations the same way whether the task is new or existing, but the following steps walk you through updating an existing task. To customize emails sent from tasks within workflows by using the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Type in **Identity Governance** in the search bar near the top of the page, and select it. +1. 
On the search bar near the top of the page, enter **Identity Governance** and select the result. -1. In the left menu, select **Lifecycle workflows (Preview)**. +1. On the left menu, select **Lifecycle workflows (Preview)**. -1. In the left menu, select **workflows (Preview)**. - -1. On the left side of the screen, select **Tasks (Preview)**. +1. On the left menu, select **Workflows (Preview)**. -1. On the tasks screen, select the task for which you want to customize the email. +1. Select **Tasks (Preview)**. -1. On the specific task screen, you're able to set CC to include others in the email outside of the default audience. +1. On the pane that lists tasks, select the task for which you want to customize the email. ++1. On the pane for the specific task, you can include email recipients outside the default audience. 1. Select the **Email Customization** tab. -1. On the email customization screen, enter a custom subject, message body, and the email language translation option that will be used to translate the message body of the email. The custom subject and message body will not be translated. - :::image type="content" source="media/customize-workflow-email/customize-workflow-email-example.png" alt-text="Screenshot of an example of a customized email from a workflow."::: -1. After making changes, select **save** to capture changes to the customized email. +1. Enter a custom subject, a message body, and the email language translation option that will be used to translate the message body of the email. ++ If you stay with the default templates and don't customize the subject and body of the email, the text will be automatically translated into the recipient's preferred language. If you select an email language, the determination based on the recipient's preferred language will be overridden. If you specify a custom subject or body, it won't be translated. + :::image type="content" source="media/customize-workflow-email/customize-workflow-email-example.png" alt-text="Screenshot of an example of a customized email from a workflow."::: ++1. Select **Save** to capture your changes in the customized email. ## Format attributes within customized emails -To further personalize customized emails, you can take advantage of dynamic attributes. With dynamic attributes by placing in specific attributes, you're able to specifically call out values such as a user's name, their generated Temporary Access Pass, or even their manager's email. +To further personalize customized emails, you can take advantage of dynamic attributes. By placing dynamic attributes in specific attributes, you can specifically call out values such as a user's name, their generated Temporary Access Pass, or even their manager's email. -To use dynamic attributes within your customized emails, you must follow the formatting rules within the customized email. The proper format is: +To use dynamic attributes within your customized emails, you must follow formatting rules. 
The proper format is: -{{**dynamic attribute**}} +`{{dynamic attribute}}` The following screenshot is an example of the proper format for dynamic attributes within a customized email: :::image type="content" source="media/customize-workflow-email/workflow-dynamic-attribute-example.png" alt-text="Screenshot of an example of dynamic attributes within a customized email."::: -When typing this it's written the following way: +When you're typing a dynamic attribute, the email is written the following way: ```html Welcome to the team, {{userGivenName}} For more information and next steps, please contact your manager, {{managerDispl ``` -For a full list of dynamic attributes that can be used with customized emails, see:[Dynamic attributes within email](lifecycle-workflow-tasks.md#dynamic-attributes-within-email). +For a full list of dynamic attributes that you can use with customized emails, see [Dynamic attributes within email](lifecycle-workflow-tasks.md#dynamic-attributes-within-email). ++## Use custom branding and domain in emails sent via workflows -## Use custom branding and domain in emails sent out using workflows +You can customize emails that you send via lifecycle workflows to have your own company branding and to use your company domain. When you opt in to using custom branding and a custom domain, every email that you send by using lifecycle workflows reflects these settings. -Emails sent out using Lifecycle workflows can be customized to have your own company branding, and be sent out using your company domain. When you opt in to using custom branding and domain, every email sent out using Lifecycle Workflows reflect these settings. To enable these features the following prerequisites are required: +To enable these features, you need the following prerequisites: -- A verified domain. To add a custom domain, see: [Managing custom domain names in your Azure Active Directory](../enterprise-users/domains-manage.md)-- Custom Branding set within Azure AD if you want to have your custom branding used in emails. To set organizational branding within your Azure tenant, see: [Configure your company branding (preview)](../fundamentals/how-to-customize-branding.md).+- A verified domain. To add a custom domain, see [Managing custom domain names in Azure Active Directory](../enterprise-users/domains-manage.md). +- Custom branding set within Azure AD if you want to use your custom branding in emails. To set organizational branding within your Azure tenant, see [Configure your company branding (preview)](../fundamentals/how-to-customize-branding.md). > [!NOTE]-> The recommendation is to use a domain that has the appropriate DNS records to facilitate email validation, like SPF, DKIM, DMARC, and MX as this then complies with the [RFC compliance](https://www.ietf.org/rfc/rfc2142.txt) for sending and receiving email. Please see [Learn more about Exchange Online Email Routing](/exchange/mail-flow-best-practices/mail-flow-best-practices) for more information. +> For compliance with the [RFC for sending and receiving email](https://www.ietf.org/rfc/rfc2142.txt), we recommend using a domain that has the appropriate DNS records to facilitate email validation, like SPF, DKIM, DMARC, and MX. [Learn more about Exchange Online email routing](/exchange/mail-flow-best-practices/mail-flow-best-practices). -After these prerequisites are satisfied, you'd follow these steps: +After you meet the prerequisites, follow these steps: -1. On the Lifecycle workflows page, select **Workflow settings (Preview)**. +1. 
On the page for lifecycle workflows, select **Workflow settings (Preview)**. -1. On the settings page, with **email domain** you're able to select your domain from a drop-down list of your verified domains. - :::image type="content" source="media/customize-workflow-email/workflow-email-settings.png" alt-text="Screenshot of workflow domain settings."::: -1. With the Use company branding banner logo setting, you're able to turn on whether or not company branding is used in emails. - :::image type="content" source="media/customize-workflow-email/customize-email-logo-setting.png" alt-text="Screenshot of email logo setting."::: +1. On the **Workflow settings (Preview)** pane, for **Email domain**, select your domain from the drop-down list of verified domains. + + :::image type="content" source="media/customize-workflow-email/workflow-email-settings.png" alt-text="Screenshot of workflow domain settings."::: +1. Turn on the **Use company branding banner logo** toggle if you want to use company branding in emails. + :::image type="content" source="media/customize-workflow-email/customize-email-logo-setting.png" alt-text="Screenshot of the email logo setting."::: -## Customize email using Microsoft Graph +## Customize email by using Microsoft Graph -To customize email using Microsoft Graph API see: [workflow: createNewVersion](/graph/api/identitygovernance-workflow-createnewversion). +To customize email by using the Microsoft Graph API, see [workflow: createNewVersion](/graph/api/identitygovernance-workflow-createnewversion). -## Set custom branding and domain workflow settings in Lifecycle Workflows using Microsoft Graph +## Set custom branding and domain workflow settings by using Microsoft Graph -To turn on custom branding and domain feature settings in Lifecycle Workflows using Microsoft Graph API, see: [lifecycleManagementSettings resource type](/graph/api/resources/identitygovernance-lifecyclemanagementsettings) +To turn on custom branding and domain feature settings in lifecycle workflows by using the Microsoft Graph API, see [lifecycleManagementSettings resource type](/graph/api/resources/identitygovernance-lifecyclemanagementsettings). ## Next steps -- [Lifecycle Workflow tasks](lifecycle-workflow-tasks.md)+- [Lifecycle workflow tasks](lifecycle-workflow-tasks.md) - [Manage workflow versions](manage-workflow-tasks.md)-- |
active-directory | Customize Workflow Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md | Title: 'Customize workflow schedule' -description: Describes how to customize the schedule of a Lifecycle Workflow. + Title: Customize a workflow schedule +description: Learn how to customize the schedule of a lifecycle workflow. -# Customize the schedule of workflows (Preview) +# Customize the schedule of workflows (preview) -Workflows created using Lifecycle Workflows can be fully customized to match the schedule that fits your organization's needs. By default, workflows are scheduled to run every 3 hours, but the interval can be set as frequent as 1 hour, or as infrequent as 24 hours. +When you create workflows by using lifecycle workflows, you can fully customize them to match the schedule that fits your organization's needs. By default, workflows are scheduled to run every 3 hours. But you can set the interval to be as frequent as 1 hour or as infrequent as 24 hours. +## Customize the schedule of workflows by using the Azure portal -## Customize the schedule of workflows using the Azure portal --Workflows created within Lifecycle Workflows follow the same schedule that you define within the **Workflow Settings** page. To adjust the schedule, you'd follow these steps: +Workflows that you create within lifecycle workflows follow the same schedule that you define on the **Workflow settings (Preview)** pane. To adjust the schedule, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Type in **Identity Governance** on the search bar near the top of the page and select it. +1. On the search bar near the top of the page, enter **Identity Governance** and select the result. ++1. On the left menu, select **Lifecycle workflows (Preview)**. -1. In the left menu, select **Lifecycle workflows (Preview)**. +1. On the **Lifecycle workflows** overview page, select **Workflow settings (Preview)**. -1. Select **Workflow settings (Preview)** from the Lifecycle workflows overview page. +1. On the **Workflow settings (Preview)** pane, set the schedule of workflows as an interval of 1 to 24. -1. On the workflow settings page you can set the schedule of workflows from an interval between 1-24. - :::image type="content" source="media/customize-workflow-schedule/workflow-schedule-settings.png" alt-text="Screenshot of the settings for workflow schedule."::: -1. After setting the workflow schedule, select save. + :::image type="content" source="media/customize-workflow-schedule/workflow-schedule-settings.png" alt-text="Screenshot of the settings for a workflow schedule."::: +1. Select **Save**. -## Customize the schedule of workflows using Microsoft Graph +## Customize the schedule of workflows by using Microsoft Graph -To schedule workflow settings using API via Microsoft Graph, see: Update lifecycleManagementSettings [tenant settings for Lifecycle Workflows](/graph/api/resources/identitygovernance-lifecyclemanagementsettings). +To schedule workflow settings by using the Microsoft Graph API, see [lifecycleManagementSettings resource type](/graph/api/resources/identitygovernance-lifecyclemanagementsettings). ## Next steps - [Manage workflow properties](manage-workflow-properties.md)-- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md)+- [Delete lifecycle workflows](delete-lifecycle-workflow.md) |
active-directory | Concept Azure Ad Connect Sync Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/concept-azure-ad-connect-sync-architecture.md | During the export process, sync engine pushes out changes that are staged on sta The following illustration shows where each of the processes occurs as identity information flows from one connected data source to another. - + ### Import process During the import process, sync engine evaluates updates to identity information. Sync engine compares the identity information received from the connected data source with the identity information about a staging object and determines whether the staging object requires updates. If it is necessary to update the staging object with new data, the staging object is flagged as pending import. |
active-directory | How To Connect Fed O365 Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-o365-certs.md | -> This article provides information on manging your federation cerficates. For infromation on emergency rotation see [Emergency Rotation of the AD FS certificates](how-to-connect-emergency-ad-fs-certificate-rotation.md) +> This article provides information on managing your federation certificates. For information on emergency rotation, see [Emergency Rotation of the AD FS certificates](how-to-connect-emergency-ad-fs-certificate-rotation.md) This article provides additional information to help you manage your token signing certificates and keep them in sync with Azure AD, in the following cases: |
active-directory | How To Connect Group Writeback V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-group-writeback-v2.md | -> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](#understand-limitations-of-public-preview) before you enable this functionality. You should not deploy the functionality to write back security groups in your production environment. We are planning to replace the AADConnect security group writeback functionality with the new Cloud Sync group writeback feature, and when this releases we will remove the AADConnect Group Writeback functionality. This does not impact M365 group writeback funcitonality, which will remain unchanged. +> The group writeback functionality is currently in Public Preview as we are collecting customer feedback and telemetry. Please refer to [the limitations](#understand-limitations-of-public-preview) before you enable this functionality. You should not deploy the functionality to write back security groups in your production environment. We are planning to replace the AADConnect security group writeback functionality with the new Cloud Sync group writeback feature, and when this releases we will remove the AADConnect Group Writeback functionality. This does not impact M365 group writeback functionality, which will remain unchanged. There are two versions of group writeback. The original version is in general availability and is limited to writing back Microsoft 365 groups to your on-premises Active Directory instance as distribution groups. The new, expanded version of group writeback is in public preview and enables the following capabilities: These limitations and known issues are specific to group writeback: - Group Writeback does not support writeback of nested group members that have a scope of ‘Domain local’ in AD, since Azure AD security groups are written back with scope ‘Universal’. If you have a nested group like this, you'll see an export error in Azure AD Connect with the message “A universal group cannot have a local group as a member.” The resolution is to remove the member with scope ‘Domain local’ from the Azure AD group or update the nested group member scope in AD to ‘Global’ or ‘Universal’ group. - Nested cloud groups that are members of writeback-enabled groups must also be enabled for writeback to remain nested in AD. - The Group Writeback setting to manage new security group writeback at scale is not yet available. You will need to configure writeback for each group.  +- Group Writeback only supports writing back groups to a single Organizational Unit (OU). ## Next steps |
active-directory | How To Connect Health Adfs Risky Ip Workbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-health-adfs-risky-ip-workbook.md | badPasswordErrorCount + extranetLockoutErrorCount > [TODO: put error count thres ``` > [!NOTE]-> The alert logic means that the alert would be triggered if at least one IP from the extranet lockout error counts, or combined bad password and extranet lockout error counts exceeds the designated thresholds. You can select teh frequency for evaluating the query to detect Risky IPs. +> The alert logic means that the alert is triggered if, for at least one IP address, the extranet lockout error count or the combined bad password and extranet lockout error count exceeds the designated threshold. You can select the frequency for evaluating the query to detect Risky IPs. ## FAQ |
active-directory | How To Connect Modify Group Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-modify-group-writeback.md | To configure directory settings to disable automatic writeback of newly created ### Disable writeback for all existing Microsoft 365 groups -To disable writeback of all Microsoft 365 groups that were created before these modifications, use one of the folowing methods: +To disable writeback of all Microsoft 365 groups that were created before these modifications, use one of the following methods: - Portal: Use the [Microsoft Entra admin portal](../../enterprise-users/groups-write-back-portal.md). - PowerShell: Use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-1.0&preserve-view=true). For example: |
active-directory | How To Connect Staged Rollout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-staged-rollout.md | The following scenarios are not supported for Staged Rollout: - When you first add a security group for Staged Rollout, you're limited to 200 users to avoid a UX time-out. After you've added the group, you can add more users directly to it, as required. -- While users are in Staged Rollout with Password Hash Synchronization (PHS), by default no password expiration is applied. Password expiration can be applied by enabling "EnforceCloudPasswordPolicyForPasswordSyncedUsers". When "EnforceCloudPasswordPolicyForPasswordSyncedUsers" is enabled, password expiration policy is set to 90 days from the time password was set on-prem with no option to customize it. Programatically updating PasswordPolicies attribute is not supported while users are in Staged Rollout. To learn how to set 'EnforceCloudPasswordPolicyForPasswordSyncedUsers' see [Password expiration policy](./how-to-connect-password-hash-synchronization.md#enforcecloudpasswordpolicyforpasswordsyncedusers).+- While users are in Staged Rollout with Password Hash Synchronization (PHS), by default no password expiration is applied. Password expiration can be applied by enabling "EnforceCloudPasswordPolicyForPasswordSyncedUsers". When "EnforceCloudPasswordPolicyForPasswordSyncedUsers" is enabled, password expiration policy is set to 90 days from the time the password was set on-premises, with no option to customize it. Programmatically updating the PasswordPolicies attribute is not supported while users are in Staged Rollout. To learn how to set 'EnforceCloudPasswordPolicyForPasswordSyncedUsers', see [Password expiration policy](./how-to-connect-password-hash-synchronization.md#enforcecloudpasswordpolicyforpasswordsyncedusers). - Windows 10 Hybrid Join or Azure AD Join primary refresh token acquisition for Windows 10 versions older than 1903. This scenario will fall back to the WS-Trust endpoint of the federation server, even if the user signing in is in scope of Staged Rollout. |
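The `EnforceCloudPasswordPolicyForPasswordSyncedUsers` feature called out in this entry is enabled tenant-wide with the MSOnline PowerShell module. A minimal sketch, assuming the MSOnline module is installed:

```powershell
# A sketch: enable cloud password expiration for password-hash-synced users.
Install-Module MSOnline -Scope CurrentUser   # if not already installed
Connect-MsolService

Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers -Enable $true

# Verify the feature state.
Get-MsolDirSyncFeatures -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers
```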
active-directory | How To Connect Sync Configure Filtering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-configure-filtering.md | If you have more than one forest, then you must apply the filtering configuratio ### Disable the synchronization scheduler To disable the built-in scheduler that triggers a synchronization cycle every 30 minutes, follow these steps: -1. Open Windows Powershell, import the ADSync module and disable the scheduler using the follwoing commands +1. Open Windows PowerShell, import the ADSync module, and disable the scheduler by using the following commands: ```Powershell import-module ADSync |
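The complete scheduler sequence implied by the step in this entry looks roughly like the following, assuming the ADSync module that ships with Azure AD Connect is available on the sync server:

```powershell
Import-Module ADSync

# Stop the built-in scheduler from triggering a sync cycle every 30 minutes.
Set-ADSyncScheduler -SyncCycleEnabled $false

# ...apply your filtering configuration changes here...

# Re-enable the scheduler when you're done, and confirm its state.
Set-ADSyncScheduler -SyncCycleEnabled $true
Get-ADSyncScheduler
```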
active-directory | How To Connect Sync Feature Directory Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-feature-directory-extensions.md | An object in Azure AD can have up to 100 attributes for directory extensions. Th > [!NOTE] > It is not supported to sync constructed attributes, such as msDS-UserPasswordExpiryTimeComputed. If you upgrade from an old version of AADConnect, you may still see these attributes show up in the installation wizard; however, you should not enable them. Their value will not sync to Azure AD if you do. -> You can read more about constructed attributes in [this artice](/openspecs/windows_protocols/ms-adts/a3aff238-5f0e-4eec-8598-0a59c30ecd56). +> You can read more about constructed attributes in [this article](/openspecs/windows_protocols/ms-adts/a3aff238-5f0e-4eec-8598-0a59c30ecd56). > You should also not attempt to sync [Non-replicated attributes](/windows/win32/ad/attributes), such as badPwdCount, Last-Logon, and Last-Logoff, as their values will not be synced to Azure AD. ## Configuration changes in Azure AD made by the wizard |
active-directory | How To Upgrade Previous Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-upgrade-previous-version.md | To copy custom synchronization rules to another server, do the following: 1. Open **Synchronization Rules Editor** on your active server. 2. Select a custom rule. Click **Export**. This brings up a Notepad window. Save the temporary file with a PS1 extension. This makes it a PowerShell script. Copy the PS1 file to the staging server. -  +  3. The Connector GUID (globally-unique identifier) is different on the staging server, and you must change it. To get the GUID, start **Synchronization Rules Editor**, select one of the out-of-box rules that represent the same connected system, and click **Export**. Replace the GUID in your PS1 file with the GUID from the staging server. 4. In a PowerShell prompt, run the PS1 file. This creates the custom synchronization rule on the staging server. |
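The GUID swap in step 3 of this entry can be scripted instead of edited by hand. A minimal sketch, where the file name and GUID values are hypothetical placeholders:

```powershell
# GUIDs exported from the active and staging servers (hypothetical values).
$activeGuid  = 'b891884f-051e-4a83-95af-2544101c9083'
$stagingGuid = 'c5ad96f4-1e33-41f0-9ef0-0a44f21c0b22'

# Replace the connector GUID in the exported rule script, then run it
# on the staging server to recreate the custom synchronization rule.
(Get-Content .\CustomRule.ps1) -replace $activeGuid, $stagingGuid |
    Set-Content .\CustomRule.ps1
.\CustomRule.ps1
```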
active-directory | Plan Hybrid Identity Design Considerations Directory Sync Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/plan-hybrid-identity-design-considerations-directory-sync-requirements.md | You also need to determine the security requirements and constraints directory s * Do you have a disaster recovery plan for the synchronization server? * Do you have an account with the correct permissions for all forests you want to sync with? * If your company doesn't know the answer to this question, review the section "Permissions for password synchronization" in the article [Install the Azure Active Directory Sync Service](/previous-versions/azure/azure-services/dn757602(v=azure.100)#BKMK_CreateAnADAccountForTheSyncService) and determine if you already have an account with these permissions or if you need to create one.-* If you have mutli-forest sync is the sync server able to get to each forest? +* If you have multi-forest sync, is the sync server able to reach each forest? > [!NOTE] > Make sure to take notes of each answer and understand the rationale behind it. [Determine incident response requirements](plan-hybrid-identity-design-considerations-incident-response-requirements.md) will go over the options available. Having answered those questions, you will select the option that best suits your business needs. |
active-directory | Reference Connect Adsync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsync.md | The following documentation provides reference information for the ADSync.psm1 P #### System.UInt32 - #### Sytem.Management.Automation.PSCredential + #### System.Management.Automation.PSCredential ### OUTPUTS |
active-directory | Reference Connect Version History Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history-archive.md | We fixed a bug in the sync errors compression utility that wasn't handling surro >We are investigating an incident where some customers are experiencing an issue with existing Hybrid Azure AD joined devices after upgrading to this version of Azure AD Connect. We advise customers who have deployed Hybrid Azure AD join to postpone upgrading to this version until the root cause of these issues is fully understood and mitigated. More information will be provided as soon as possible. >[!IMPORTANT]->With this version of Azure AD Connect some customers may see some or all of their Windows devices disappear from Azure AD. These device identities aren't used by Azure AD during Conditional Access authorization. For more information, see [Understanding Azure AD Connect 1.4.xx.x device disappearnce](/troubleshoot/azure/active-directory/reference-connect-device-disappearance) +>With this version of Azure AD Connect, some customers may see some or all of their Windows devices disappear from Azure AD. These device identities aren't used by Azure AD during Conditional Access authorization. For more information, see [Understanding Azure AD Connect 1.4.xx.x device disappearance](/troubleshoot/azure/active-directory/reference-connect-device-disappearance) ### Release status |
active-directory | Reference Connect Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md | To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-t ### Functional changes - We have removed the public preview functionality for the Admin Agent from Azure AD Connect. We will not provide this functionality going forward. - We added support for two new attributes: employeeOrgDataCostCenter and employeeOrgDataDivision.+ - We added the CertificateUserIds attribute to the AAD Connector static schema. - The AAD Connect wizard will now abort if the write event logs permission is missing. - We updated the AADConnect health endpoints to support the US government clouds. - We added the new cmdlets "Get-ADSyncToolsDuplicateUsersSourceAnchor" and "Set-ADSyncToolsDuplicateUsersSourceAnchor" to fix bulk "source anchor has changed" errors. When a new forest is added to AADConnect with duplicate user objects, the objects run into bulk "source anchor has changed" errors. This happens due to a mismatch between msDsConsistencyGuid and ImmutableId. More information about this module and the new cmdlets can be found in [this article](./reference-connect-adsynctools.md). |
active-directory | Tshoot Connect Largeobjecterror Usercertificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-largeobjecterror-usercertificate.md | There should be an existing sync rule that is enabled and configured to export u | Connector Object Type |**user** | | MV attribute |**userCertificate** | -3. If you are using OOB (out-of-box) sync rules to Azure AD connector to export userCertficiate attribute for User objects, you should get back the *"Out to AAD – User ExchangeOnline"* rule. +3. If you are using OOB (out-of-box) sync rules to Azure AD connector to export userCertificate attribute for User objects, you should get back the *"Out to AAD – User ExchangeOnline"* rule. 4. Note down the **precedence** value of this sync rule. 5. Select the sync rule and click **Edit**. 6. In the *"Edit Reserved Rule Confirmation"* pop-up dialog, click **No**. (Don't worry, we are not going to make any change to this sync rule). |
active-directory | Howto Export Risk Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-export-risk-data.md | Azure Event Hubs can look at incoming data from sources like Azure AD Identity P Organizations can choose to [connect Azure AD data to Microsoft Sentinel](../../sentinel/data-connectors/azure-active-directory-identity-protection.md) as well for further processing. -Organizations can use the [Microsoft Graph API to programatically interact with risk events](howto-identity-protection-graph-api.md). +Organizations can use the [Microsoft Graph API to programmatically interact with risk events](howto-identity-protection-graph-api.md). ## Next steps |
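As an illustration of the programmatic access mentioned in this entry, the following sketch reads recent risk detections. It assumes the Microsoft.Graph PowerShell module and the IdentityRiskEvent.Read.All permission:

```powershell
Connect-MgGraph -Scopes "IdentityRiskEvent.Read.All"

# Fetch the ten most recent risk detections from Identity Protection.
$response = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/identityProtection/riskDetections?`$top=10"

$response.value | ForEach-Object {
    "{0}  {1}  {2}" -f $_.detectedDateTime, $_.riskEventType, $_.userPrincipalName
}
```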
active-directory | Datawiza Azure Ad Sso Oracle Peoplesoft | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-peoplesoft.md | Learn more: [Tutorial: Secure user sign-in events with Azure AD MFA](../authenti To enable SSO in the Oracle PeopleSoft environment: -1. Sign in to the PeopleSoft Consol `http://{your-peoplesoft-fqdn}:8000/psp/ps/?cmd=start` using Admin credentials, for example, PS/PS. +1. Sign in to the PeopleSoft Console `http://{your-peoplesoft-fqdn}:8000/psp/ps/?cmd=start` using Admin credentials, for example, PS/PS.  |
active-directory | F5 Big Ip Header Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-header-advanced.md | Create the BIG-IP SAML service provider and corresponding SAML IdP objects to fe 5. Scroll down to select the new SAML SP object. 6. Select **Bind/UnBind IdP Connectors**. -  +  7. Select **Create New IdP Connector**. 8. From the drop-down, select **From Metadata**. |
active-directory | F5 Big Ip Headers Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md | With SSO, users access BIG-IP published services without entering credentials. T 5. For **Header Name**, use **employeeid**. 6. For **Header Value**, use **%{session.saml.last.attr.name.employeeid}**. -  +  >[!NOTE] >APM session variables in curly brackets are case-sensitive. Inconsistencies cause attribute mapping failures. |
active-directory | F5 Big Ip Kerberos Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md | BIG-IP logs are a reliable source of information. To increase the log verbosity 4. Select **Debug** from the SSO list. 5. Select **OK**. -Reproduce your problem before you look at the logs. Then revert this feature, when finished. Otherwise the verbosity is signficant. +Reproduce your problem before you look at the logs, then revert this feature when finished. Otherwise, the log verbosity is significant. **BIG-IP error** |
active-directory | F5 Big Ip Kerberos Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md | Learn to secure Kerberos-based applications with Azure Active Directory (Azure A Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including: * Improved governance: See, [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) and learn more about Azure AD pre-authentication. -* Enforce organizatinal policies. See [What is Conditional Access?](../conditional-access/overview.md). +* Enforce organizational policies. See [What is Conditional Access?](../conditional-access/overview.md). * Full SSO between Azure AD and BIG-IP published services * Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/) |
active-directory | F5 Big Ip Ldap Header Easybutton | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md | A virtual server is a BIG-IP data plane object represented by a virtual IP addre 3. Check **Enable Redirect Port** and then enter **Redirect Port** to redirect incoming HTTP client traffic to HTTPS. 4. The Client SSL Profile enables the virtual server for HTTPS, so client connections are encrypted over TLS. Select the **Client SSL Profile** you created or leave the default while testing. -  +  ### Pool Properties The **Application Pool** tab has the services behind a BIG-IP represented as a p 2. Choose the **Load Balancing Method**, such as Round Robin. 3. For **Pool Servers**, select a node or specify an IP and port for the server hosting the header-based application. -  +  >[!NOTE] >Our back-end application sits on HTTP port 80. Switch to 443 if yours is HTTPS. |
active-directory | F5 Big Ip Sap Erp Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md | Learn more: [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos au The **Additional User Attributes** tab supports distributed systems requiring attributes stored in other directories, for session augmentation. Thus, attributes from an LDAP source are injected as more SSO headers to control role-based access, Partner IDs, etc. -  +  >[!NOTE] >This feature has no correlation to Azure AD but is another attribute source. |
active-directory | Howto Enforce Signed Saml Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-enforce-signed-saml-authentication.md | If enabled Azure Active Directory will validate the requests against the public > [!NOTE] > A `Signature` element in `AuthnRequest` elements is optional. If `Require Verification certificates` is not checked, Azure AD does not validate signed authentication requests if a signature is present. Requestor verification is provided by responding only to registered Assertion Consumer Service URLs. -> If `Require Verification certificates` is checked, SAML Request Signature Verification will work for SP-initiated(service provider/relying party initiated) authentication requests only. Only the application configured by the service provider will have the access to to the private and public keys for signing the incoming SAML Authentication Reqeusts from the applicaiton. The public key should be uploaded to allow the verification of the request, in which case AAD will have access to only the public key. +> If `Require Verification certificates` is checked, SAML Request Signature Verification will work for SP-initiated (service provider/relying party initiated) authentication requests only. Only the application configured by the service provider will have access to the private and public keys for signing the incoming SAML authentication requests from the application. The public key should be uploaded to allow the verification of the request, in which case AAD will have access to only the public key. -> Enabling `Require Verification certificates` will not allow IDP-initiated authentication requests (like SSO testing feature, MyApps or M365 app launcher) to be validated as the IDP would not possess the same private keys as the registered applicaiton. +> Enabling `Require Verification certificates` will not allow IDP-initiated authentication requests (like SSO testing feature, MyApps or M365 app launcher) to be validated as the IDP would not possess the same private keys as the registered application. ## To configure SAML Request Signature Verification in the Azure portal |
active-directory | Howto Saml Token Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md | To configure token encryption, follow these steps: 1. Set the value for the `tokenEncryptionKeyId` attribute. - The following example shows an application manifest configured with two encryption certificates, and with the second selected as the active one using the tokenEnryptionKeyId. + The following example shows an application manifest configured with two encryption certificates, and with the second selected as the active one using the tokenEncryptionKeyId. ```json { |
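Rather than editing the manifest by hand, the `tokenEncryptionKeyId` property can also be patched on the application object through Microsoft Graph. A minimal sketch, assuming the Microsoft.Graph PowerShell module; the object ID and key ID placeholders are hypothetical:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All"

# Point tokenEncryptionKeyId at the keyId of the encryption certificate
# already present in the app's keyCredentials collection.
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/applications/<application-object-id>" `
    -Body @{ tokenEncryptionKeyId = "<keyId-of-encryption-certificate>" }
```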
active-directory | Migrate Applications From Okta To Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-applications-from-okta-to-azure-active-directory.md | To create an application inventory: 1. With the Postman app, from the Okta admin console, generate an API token. 2. On the API dashboard, under **Security**, select **Tokens** > **Create Token**. -  +  3. Enter a token name and then select **Create Token**. |
active-directory | Migrate Okta Sign On Policies To Azure Active Directory Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md | Learn more: [Enable combined security information registration in Azure Active D 1. To test, change the created policies to **Enabled test user login**. -  +  2. On the Office 365 **Sign-In** pane, the test user John Smith is prompted to sign in with Okta MFA and Azure AD MFA. |
active-directory | Pim How To Add Role To User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md | Privileged Identity Management supports both built-in and custom Azure AD roles. >[!Note] >When a role is assigned, the assignment:->- Can't be asigned for a duration of less than five minutes +>- Can't be assigned for a duration of less than five minutes >- Can't be removed within five minutes of it being assigned ## Assign a role |
active-directory | Pim Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-troubleshoot.md | -Are you having a problem with Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsft Entra? The information that follows can help you to get things working again. +Are you having a problem with Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra? The information that follows can help you to get things working again. ## Access to Azure resources denied |
active-directory | Workbook Cross Tenant Access Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-cross-tenant-access-activity.md | Under **Step 1**, the external tenant list shows all the tenants that have had i [  ](./media/workbook-cross-tenant-access-activity/cross-tenant-workbook-step-1.png#lightbox) -The table under **Step 2** summarizes all outbound and inbound sign-in activity for the selected tenant, including the number of successful sign-ins and the resons for failed sign-ins. You can select **Outbound activity** or **Inbound activity** to update the remaining sections of the workbook with the type of activity you want to view. +The table under **Step 2** summarizes all outbound and inbound sign-in activity for the selected tenant, including the number of successful sign-ins and the reasons for failed sign-ins. You can select **Outbound activity** or **Inbound activity** to update the remaining sections of the workbook with the type of activity you want to view.  |
active-directory | Absorblms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/absorblms-tutorial.md | For Azure AD users to sign in to Absorb LMS, they must be set up in Absorb LMS.  > [!NOTE]- > By Default, User Provisioning is not enabled in SSO. If the customer wants to enable this feature, they have to set it up as mentioned in [this](https://support.absorblms.com/hc/en-us/articles/360014083294-Incoming-SAML-2-0-SSO-Account-Provisioning) documentation. Also please note that User Provisioing is only available on **Absorb 5 - New Learner Experience** with ACS URL-`https://company.myabsorb.com/api/rest/v2/authentication/saml` + > By default, User Provisioning is not enabled in SSO. If the customer wants to enable this feature, they have to set it up as mentioned in [this](https://support.absorblms.com/hc/en-us/articles/360014083294-Incoming-SAML-2-0-SSO-Account-Provisioning) documentation. Also note that User Provisioning is only available on **Absorb 5 - New Learner Experience** with the ACS URL `https://company.myabsorb.com/api/rest/v2/authentication/saml` ## Test SSO |
active-directory | Air Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/air-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a 1. Go to the **Settings** -> **SECURITY & IDENTITY** tab and perform the following steps: -  +  a. In the **Manage approved email domains** text box, add your organization's email domains to the approved domains list to allow users with these domains to authenticate using SAML SSO. |
active-directory | Airstack Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airstack-tutorial.md | To configure the integration of Airstack into Azure AD, you need to add Airstack Configure and test Azure AD SSO with Airstack using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Airstack. -To configure and test Azure AD SSO with Airstack, perfrom the following steps: +To configure and test Azure AD SSO with Airstack, perform the following steps: 1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. |
active-directory | Amazon Business Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-business-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. In this section, you'll create a test user in the Azure portal called B.Simon. > [!NOTE]-> Adminstrators need to create the test users in their tenant if needed. Following steps show how to create a test user. +> Administrators need to create the test users in their tenant if needed. The following steps show how to create a test user. 1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen. |
active-directory | Ardoq Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ardoq-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * If you don't intend to use the provisioning features of Azure AD along with SSO, please reach out to Ardoq customer support and they'll manually enable support for provisioning. -Before we proceed we need to obtain a *Tenant Url* and a *Secret Token*, to configure secure communcation between Azure AD and Ardoq. +Before we proceed, we need to obtain a *Tenant Url* and a *Secret Token* to configure secure communication between Azure AD and Ardoq. |
active-directory | Bizagi Studio For Digital Process Automation Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bizagi-studio-for-digital-process-automation-provisioning-tutorial.md | To configure Bizagi Studio for Digital Process Automation to support provisionin 5. Copy and save the **Client Secret**. In the Azure portal, for your Bizagi Studio for Digital Process Automation application, on the **Provisioning** tab, the client secret value is entered in the **Secret Token** field. -  +  ## Add the application from the Azure AD gallery |
active-directory | Citrix Netscaler Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrix-netscaler-tutorial.md | To enable Azure AD SSO by using the Azure portal, complete these steps: > [!NOTE] > * The URLs that are used in this section aren't real values. Update these values with the actual values for Identifier, Reply URL, and Sign-on URL. Contact the [Citrix ADC SAML Connector for Azure AD client support team](https://www.citrix.com/contact/technical-support.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.- > * To set up SSO, the URLs must be accessible from public websites. You must enable the firewall or other security settings on the Citrix ADC SAML Connector for Azure AD side to enble Azure AD to post the token at the configured URL. + > * To set up SSO, the URLs must be accessible from public websites. You must enable the firewall or other security settings on the Citrix ADC SAML Connector for Azure AD side to enable Azure AD to post the token at the configured URL. 1. On the **Set up Single Sign-On with SAML** pane, in the **SAML Signing Certificate** section, for **App Federation Metadata Url**, copy the URL and save it in Notepad. |
active-directory | Claromentis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/claromentis-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. | `https://<CUSTOMER_SITE_URL>/login?no_auto=0` | | > [!NOTE]- > These values are not real. Update these values with the actual Reply URL and Sign-on URL which is explained later in the turorial. + > These values are not real. Update these values with the actual Reply URL and Sign-on URL which is explained later in the tutorial. 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. |
active-directory | Cofense Provision Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cofense-provision-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Login to Cofense PhishMe. Navigate to **Recipients > Recipient Sync**. 2. Accept the terms and conditions and then click **Get Started**. -  +  3. Copy the values from the **URL** and **Token** fields. -  +  ## Step 3. Add Cofense Recipient Sync from the Azure AD application gallery |
active-directory | Concur Travel And Expense Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/concur-travel-and-expense-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a In this section, you create a user called B.Simon in SAP Concur Travel and Expense. Work with the Concur support team to add the users in the SAP Concur Travel and Expense platform. Users must be created and activated before you use single sign-on. > [!NOTE]-> B.Simon's Concur login id needs to match B.Simon's unique identifier at Azure AD. For example, if B.Simon's Azure AD unique identifer is `B.Simon@contoso.com`. B.Simon's Concur login id needs to be `B.Simon@contoso.com` as well. +> B.Simon's Concur login id needs to match B.Simon's unique identifier at Azure AD. For example, if B.Simon's Azure AD unique identifier is `B.Simon@contoso.com`, B.Simon's Concur login id needs to be `B.Simon@contoso.com` as well. ## Configure Concur Mobile SSO To enable Concur mobile SSO, you need to give the Concur support team the **User access URL**. Follow the steps below to get the **User access URL** from Azure AD: |
active-directory | Github Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-provisioning-tutorial.md | This section guides you through connecting your Azure AD to GitHub's SCIM provis 10. Under the Mappings section, select **Synchronize Azure Active Directory Users to GitHub**. -11. In the **Attribute Mappings** section, review the user attributes that are synchronized from Azure AD to GitHub. The attributes selected as **Matching** properties are used to match the user accounts in GitHub for update operations. Do not enable the **Matching precendence** setting for the other default attributes in the **Provisioning** section because errors might occur. Select **Save** to commit any changes. +11. In the **Attribute Mappings** section, review the user attributes that are synchronized from Azure AD to GitHub. The attributes selected as **Matching** properties are used to match the user accounts in GitHub for update operations. Do not enable the **Matching precedence** setting for the other default attributes in the **Provisioning** section because errors might occur. Select **Save** to commit any changes. 12. To enable the Azure AD provisioning service for GitHub, change the **Provisioning Status** to **On** in the **Settings** section. |
active-directory | Harness Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/harness-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a 6. On the **SAML Provider** pop-up, perform the following steps: -  +  a. Copy the **In your SSO Provider, please enable SAML-based login, then enter the following URL** instance and paste it in the **Reply URL** textbox in the **Basic SAML Configuration** section in the Azure portal. |
active-directory | Header Citrix Netscaler Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/header-citrix-netscaler-tutorial.md | To enable Azure AD SSO by using the Azure portal, complete these steps: > [!NOTE] > * The URLs that are used in this section aren't real values. Update these values with the actual values for Identifier, Reply URL, and Sign-on URL. Contact the [Citrix ADC client support team](https://www.citrix.com/contact/technical-support.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.- > * To set up SSO, the URLs must be accessible from public websites. You must enable the firewall or other security settings on the Citrix ADC side to enble Azure AD to post the token at the configured URL. + > * To set up SSO, the URLs must be accessible from public websites. You must enable the firewall or other security settings on the Citrix ADC side to enable Azure AD to post the token at the configured URL. 1. On the **Set up Single Sign-On with SAML** pane, in the **SAML Signing Certificate** section, for **App Federation Metadata Url**, copy the URL and save it in Notepad. |
active-directory | Insight4grc Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insight4grc-provisioning-tutorial.md | This section guides you through the steps to configure the Azure AD provisioning  -5. Under the **Admin Credentials** section, input the SCIM endpoint URL in **Tenant URL**. The enpoint URL should be in the format `https://<Insight4GRC Domain Name>.insight4grc.com/public/api/scim/v2 ` where **Insight4GRC Domain Name** is the value retrieved in previous steps. Input the bearer token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Insight4GRC. If the connection fails, ensure your Insight4GRC account has Admin permissions and try again. +5. Under the **Admin Credentials** section, input the SCIM endpoint URL in **Tenant URL**. The endpoint URL should be in the format `https://<Insight4GRC Domain Name>.insight4grc.com/public/api/scim/v2`, where **Insight4GRC Domain Name** is the value retrieved in previous steps. Input the bearer token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Insight4GRC. If the connection fails, ensure your Insight4GRC account has Admin permissions and try again.  |
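Before saving the admin credentials from this entry, you can sanity-check the SCIM endpoint yourself. A minimal sketch, with hypothetical domain and token values:

```powershell
$tenantUrl = "https://<Insight4GRC Domain Name>.insight4grc.com/public/api/scim/v2"
$headers   = @{ Authorization = "Bearer <secret-token>" }

# A SCIM-compliant service should answer a paged Users query with a ListResponse.
Invoke-RestMethod -Uri "$tenantUrl/Users?startIndex=1&count=1" -Headers $headers -Method Get
```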
active-directory | Intacct Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intacct-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. i. Click **Save**. - > Repeat steps c-i to add both custom attibutes. + > Repeat steps c-i to add both custom attributes. 1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Edit** to open the dialog. Click **...** next to the Active certificate and select **PEM certificate download** to download the certificate and save it to your local drive. |
active-directory | Kerbf5 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kerbf5-tutorial.md | This adds the new Active Directory server to the Active Directory Servers list. * Profile Scope: Profile * Languages: English -  +  1. Click on the name, KerbApp200, complete the following information and click **Update**. |
active-directory | Lr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lr-tutorial.md | In this section, you enable Azure AD single sign-on in the LoginRadius Admin Con 1. In **ID Provider Logout URL**, enter the SIGN-OUT ENDPOINT, which you get from your Azure AD account. - 1. In **ID Provider Certificate**, enter the Azure AD certificate, which you get from your Azure AD account. Enter the certificate value with the header and footer. Example: `--BEGIN CERTIFICATE--<certifciate value>--END CERTIFICATE--` + 1. In **ID Provider Certificate**, enter the Azure AD certificate, which you get from your Azure AD account. Enter the certificate value with the header and footer. Example: `--BEGIN CERTIFICATE--<certificate value>--END CERTIFICATE--` 1. In **Service Provider Certificate** and **Server Provider Certificate Key**, enter your certificate and key. In this section, you enable Azure AD single sign-on in the LoginRadius Admin Con > [!NOTE] > Be sure to enter the certificate and certificate key values with the header and footer:- > - Certificate value example format: `--BEGIN CERTIFICATE--<certifciate value>--END CERTIFICATE--` - > - Certificate key value example format: `--BEGIN RSA PRIVATE KEY--<certifciate key value>--END RSA PRIVATE KEY--` + > - Certificate value example format: `--BEGIN CERTIFICATE--<certificate value>--END CERTIFICATE--` + > - Certificate key value example format: `--BEGIN RSA PRIVATE KEY--<certificate key value>--END RSA PRIVATE KEY--` 5. In the **Data Mapping** section, select the fields (SP fields) and enter the corresponding Azure AD fields(IdP fields). |
active-directory | Open Text Directory Services Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/open-text-directory-services-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 6. Select **client_credentials** for the grant_type and click **Execute**. -  +  7. The access token in the response should be used in the **Secret Token** field in Azure AD. This section guides you through the steps to configure the Azure AD provisioning  5. Under the **Admin Credentials** section, input your OpenText Directory Services Tenant URL- * Non-specifc tenant URL : {OTDS URL}/scim/{partitionName} + * Non-specific tenant URL: {OTDS URL}/scim/{partitionName} * Specific tenant URL: {OTDS URL}/otdstenant/{tenantID}/scim/{partitionName} 6. Enter the Secret Token retrieved from Step 2. Click **Test Connection** to ensure Azure AD can connect to OpenText Directory Services. If the connection fails, ensure your OpenText Directory Services account has Admin permissions and try again. |
active-directory | Riskware Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/riskware-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a 1. On the top right, click **Maintenance** to open the maintenance page. -  +  1. In the maintenance page, click **Authentication**. |
active-directory | Sap Cloud Platform Identity Authentication Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md | Before configuring and enabling automatic user provisioning, you should decide w 2. Press the **+Add** button on the left hand panel in order to add a new administrator to the list. Choose **Add System** and enter the name of the system. > [!NOTE]-> The admininistrator user in SAP Cloud Platform Identity Authentication must be of type **System**. Creating a normal administrator user can lead to *unauthorized* errors while provisioning. +> The administrator user in SAP Cloud Platform Identity Authentication must be of type **System**. Creating a normal administrator user can lead to *unauthorized* errors while provisioning. 3. Under Configure Authorizations, switch on the toggle button against **Manage Users**. |
active-directory | Sap Netweaver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. 1. SAP NetWeaver application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. Click **Edit** icon to open User Attributes dialog. -  +  1. In the **User Claims** section on the **User Attributes** dialog, configure SAML token attribute as shown in the image above and perform the following steps: |
active-directory | Segment Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/segment-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a 1. In the **SAML 2.0 Endpoint URL** box, paste the **Login URL** value that you copied from the Azure portal. -1. Open the downloaded **Cerificate(Base64)** from the Azure portal into Notepad and paste the content into the **Public Certificate** textbox. +1. Open the downloaded **Certificate(Base64)** from the Azure portal into Notepad and paste the content into the **Public Certificate** textbox. 1. Click on **Configure Connection**. |
active-directory | Servicessosafe Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicessosafe-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a a. Paste the value of Tenant ID value in **Azure Tenant ID** textbox from Azure portal. - b. Open the downloaded **Cerificate(Base64)** from the Azure portal into Notepad and paste the content into the **Certificate** textbox. + b. Open the downloaded **Certificate(Base64)** from the Azure portal into Notepad and paste the content into the **Certificate** textbox. c. In the **Login URL** box, paste the **Login URL** value that you copied from the Azure portal. |
active-directory | Showpad Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/showpad-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. `https://<company-name>.showpad.biz` b. In the **Sign on URL** text box, type a URL using the following pattern:- `https://<comapany-name>.showpad.biz/login` + `https://<company-name>.showpad.biz/login` > [!NOTE] > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Showpad Client support team](https://help.showpad.com/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. |
active-directory | Snowflake Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-tutorial.md | To enable Azure AD users to log in to Snowflake, they must be provisioned into S CREATE USER britta_simon PASSWORD = '' LOGIN_NAME = 'BrittaSimon@contoso.com' DISPLAY_NAME = 'Britta Simon'; ``` > [!NOTE]-> Manually provisioning is uneccesary, if users and groups are provisioned with a SCIM integration. See how to enable auto provisioning for [Snowflake](snowflake-provisioning-tutorial.md).+> Manual provisioning is unnecessary if users and groups are provisioned with a SCIM integration. See how to enable auto provisioning for [Snowflake](snowflake-provisioning-tutorial.md). ## Test SSO |
active-directory | Standard For Success Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/standard-for-success-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a 1. In a different web browser window, log into your Standard for Success K-12 website as an administrator with superuser privileges. -1. From the menu, navigate to **Utilites** -> **Accounts Manager**, then click **Create New User** and perform the following steps: +1. From the menu, navigate to **Utilities** -> **Accounts Manager**, then click **Create New User** and perform the following steps:  |
active-directory | Tableauonline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableauonline-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a | Name | Source Attribute| | | |- | DispalyName | user.displayname | + | DisplayName | user.displayname | c. Copy the namespace value for these attributes: givenname, email and surname by using the following steps: |
active-directory | Tulip Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tulip-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a * give the **Name Attribute** value as **displayName**. - * give the **Email Attribute** value as **emailAdress**. + * give the **Email Attribute** value as **emailAddress**. * give the **Badge Attribute** value as **badgeID**. |
active-directory | Unite Us Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/unite-us-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port | `https://app.auth.uniteustraining.com/` | > [!Note]- > These values are not the real. Update these values with the actual Identifer and Reply URL. Contact [Unite Us Client support team](mailto:isd.support@uniteus.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Unite Us Client support team](mailto:isd.support@uniteus.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. |
active-directory | Webce Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webce-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port `https://www.webce.com/<RootPortalFolder>/login` > [!Note]- > These values are not the real. Update these values with the actual Identifer, Reply URL and Sign on URL. Contact [WebCE Client support team](mailto:CustomerService@WebCE.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL. Contact [WebCE Client support team](mailto:CustomerService@WebCE.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 1. In the **SAML Signing Certificate** section, click **Edit** button to open **SAML Signing Certificate** dialog. |
active-directory | Nist Authentication Basics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authentication-basics.md | Use the following table to understand NIST terminology. |Authentication factor|Something you are, know, or have. Every *authenticator* has one or more authentication factors| |Authenticator|Something the *claimant* possesses and controls to authenticate the *claimant* identity| |Claimant|A *subject* identity to be verified with one or more *authentication* protocols|-|Credential|An object or data structure that authoritatively binds an identity to at least one *subscriber authenticator* that a *subscriber* posseses and controls| +|Credential|An object or data structure that authoritatively binds an identity to at least one *subscriber authenticator* that a *subscriber* possesses and controls| |Credential service provider (CSP)|A trusted entity that issues or registers *subscriber authenticators* and issues electronic *credentials* to *subscribers*| |Relying party|An entity that relies on a *verifier assertion* or a *claimant authenticators* and *credentials*, usually to grant access to a system| |Subject|A person, organization, device, hardware, network, software, or service| |
active-directory | Pci Requirement 10 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-10.md | Title: Azure Active Directory and PCI-DSS Requirement 10 -description: Learn PCI-DSS defined approach requirements about logging and monitoring all acess to system components and CHD +description: Learn PCI-DSS defined approach requirements about logging and monitoring all access to system components and CHD |
active-directory | Admin Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md | We support two different didModels. One is `ion` and the other supported method | -- | -- | -- | | `subscriptionId` | string | The Azure subscription in which this Key Vault resides | | `resourceGroup` | string | Name of the resource group of this Key Vault |-| `resouceName` | string | Key Vault name | +| `resourceName` | string | Key Vault name | | `resourceUrl` | string | URL to this Key Vault | |
active-directory | Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/error-codes.md | The inner error object contains error specific details useful to the developer t |Property |Type |Description | |||| | `code` | string| The internal error code. Contains a standardized code, based on the type of the error |-| `message`| string| The internal error message. Contains a detailed message of the error. In this example, the `inlcudeQRCode` field is of the wrong type.| +| `message`| string| The internal error message. Contains a detailed message of the error. In this example, the `includeQRCode` field is of the wrong type.| | `target` | string| Optional. Target contains the field in the request that is causing this error. This field is optional and may not be present, depending on the error type. | |
aks | Cluster Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md | Title: Cluster extensions for Azure Kubernetes Service (AKS) description: Learn how to deploy and manage the lifecycle of extensions on Azure Kubernetes Service (AKS) Previously updated : 05/12/2023 Last updated : 05/15/2023 For supported Kubernetes versions, refer to the corresponding documentation for | [Dapr][dapr-overview] | Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on cloud and edge. | | [Azure Machine Learning][azure-ml-overview] | Use Azure Kubernetes Service clusters to train, inference, and manage machine learning models in Azure Machine Learning. | | [Flux (GitOps)][gitops-overview] | Use GitOps with Flux to manage cluster configuration and application deployment. See also [supported versions of Flux (GitOps)][gitops-support] and [Tutorial: Deploy applications using GitOps with Flux v2][gitops-tutorial].|+| [Azure Container Storage](../storage/container-storage/container-storage-introduction.md) | Use Azure Container Storage to manage block storage on AKS clusters to store data in persistent volumes. | You can also [select and deploy Kubernetes applications available through Marketplace](deploy-marketplace.md). |
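For context on how one of the extensions in this entry's table gets installed, here is a minimal sketch using the Az.KubernetesConfiguration PowerShell module; the resource group and cluster names are hypothetical, and Flux is used only as an example extension type:

```powershell
Install-Module Az.KubernetesConfiguration -Scope CurrentUser   # if not already installed

# Install the Flux (GitOps) cluster extension on an AKS cluster.
New-AzKubernetesExtension -ResourceGroupName "myResourceGroup" `
    -ClusterName "myAKSCluster" -ClusterType ManagedClusters `
    -Name "flux" -ExtensionType "microsoft.flux"

# List the extensions installed on the cluster.
Get-AzKubernetesExtension -ResourceGroupName "myResourceGroup" `
    -ClusterName "myAKSCluster" -ClusterType ManagedClusters
```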
aks | Csi Migrate In Tree Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md | The following are important considerations to evaluate: if [[ $RECLAIM_POLICY == "Retain" ]]; then if [[ $STORAGECLASS == $EXISTING_STORAGE_CLASS ]]; then STORAGE_SIZE="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.capacity.storage}')"- SKU_NAME="$(kubectl get storageClass $STORAGECLASS -o jsonpath='{.reclaimPolicy}')" + SKU_NAME="$(kubectl get storageClass $STORAGE_CLASS_NEW -o jsonpath='{.parameters.skuname}')" DISK_URI="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.azureDisk.diskURI}')" PERSISTENT_VOLUME_RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')" Before proceeding, verify the following: RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')" if [[ $STORAGE_CLASS == $EXISTING_STORAGE_CLASS ]]; then STORAGE_SIZE="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.capacity.storage}')"- SKU_NAME="$(kubectl get storageClass $STORAGE_CLASS -o jsonpath='{.reclaimPolicy}')" + SKU_NAME="$(kubectl get storageClass $STORAGE_CLASS_NEW -o jsonpath='{.parameters.skuname}')" DISK_URI="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.azureDisk.diskURI}')" TARGET_RESOURCE_GROUP="$(cut -d'/' -f5 <<<"$DISK_URI")" echo $DISK_URI |
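The corrected lines above change where the SKU is read from. A standalone sketch of those jsonpath lookups (with hypothetical object names `mypv` and `managed-csi`) shows the distinction: the disk SKU lives in the StorageClass `parameters.skuname` field, while `reclaimPolicy` is an unrelated field:

```bash
# Capacity, disk URI, and reclaim policy come from the PersistentVolume.
kubectl get pv mypv -o jsonpath='{.spec.capacity.storage}'
kubectl get pv mypv -o jsonpath='{.spec.azureDisk.diskURI}'
kubectl get pv mypv -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

# The disk SKU comes from the StorageClass parameters, not its reclaim policy.
kubectl get storageclass managed-csi -o jsonpath='{.parameters.skuname}'
```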
applied-ai-services | Form Recognizer Container Install Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md | In this article you learn how to download, install, and run Form Recognizer cont * **Read**, **Layout**, **General Document**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by Form Recognizer v3.0 containers. -* **Business Card**,**ID Document**, **Receipt**, **Invoice**, and **Custom** models are currently only supported in the [v2.1 containers](form-recognizer-container-install-run.md?view=form-recog-2.1.0&preserve-view=true). +* **Business Card** model is currently only supported in the [v2.1 containers](form-recognizer-container-install-run.md?view=form-recog-2.1.0&preserve-view=true). ::: moniker-end |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md | Change Tracking and Inventory now support Python 2 and Python 3. If your machine - Ubuntu, Debian: ```bash- sudo apt-get udpate + sudo apt-get update sudo apt-get install -y python2 ``` - SUSE: |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | If using a proxy, Arc resource bridge must be configured for proxy so that it ca There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the management machine and on-premises appliance VM trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted. -In order to deploy Arc resouce bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, this may impact your ability to download the required images (~3 GB) within the alotted time (90 min). +In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, this may impact your ability to download the required images (~3 GB) within the allotted time (90 min). ## Exclusion list for no proxy |
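To judge whether a throttling proxy can deliver ~3 GB within that 90-minute window, a quick, hedged check using standard proxy environment variables and curl's speed report (the proxy address and download URL below are placeholders):

```bash
# Route this session's traffic through the proxy under test.
export HTTPS_PROXY="http://proxy.contoso.com:3128"

# Print the average download speed (bytes/sec) for a large test file.
# ~3 GB in 90 minutes requires roughly 600 KB/s sustained.
curl -L -o /dev/null -w '%{speed_download}\n' "https://example.com/large-test-file.bin"
```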
azure-functions | Functions Develop Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md | When you develop your functions locally, any local settings required by your app ## Triggers and bindings -When you develop your functions locally, you need to take trigger and binding behaviors into consideration. The easiest way to test bindings during local development is to use connection strings that target live Azure services. You can target live services by adding the appropriate connection string settings in the `Values` array in the local.settings.json file. When you do this, local executions during testing impact live service data. Because of this, consider setting-up separate services to use during development and testing, and then switch to difference services during production. You can also use a local storage emulator. +When you develop your functions locally, you need to take trigger and binding behaviors into consideration. The easiest way to test bindings during local development is to use connection strings that target live Azure services. You can target live services by adding the appropriate connection string settings in the `Values` array in the local.settings.json file. When you do this, local executions during testing impact live service data. Because of this, consider setting up separate services to use during development and testing, and then switch to different services during production. You can also use a local storage emulator. ## Local storage emulator |
azure-monitor | Azure Monitor Agent Custom Text Log Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-custom-text-log-migration.md | + + Title: Migrate from custom text log version 1 to DCR agent custom text logs. +description: Steps that you must perform when migrating from custom text log v1 to DCR based AMA custom text logs. + Last updated : 05/09/2023+++# Migrate from MMA custom text log to AMA DCR based custom text logs +This article describes the steps to migrate an [MMA custom text log](data-sources-custom-logs.md) table so you can use it as a destination for a new [AMA custom text logs](data-collection-text-log.md) DCR. When you follow the steps, you won't lose any data. If you're creating a new AMA custom text log table, then this article doesn't pertain to you. + +## Background +MMA custom text log tables must be configured to support new features in order for AMA custom text log DCRs to write to them. The following actions are taken: +- The table is reconfigured to enable all DCR-based custom logs features. +- All MMA custom fields stop updating in the table. AMA can write data to any column in the table. +- The MMA custom text log can write to noncustom fields, but it won't be able to create new columns. The portal table management UI can be used to change the schema after migration. ++## Migration procedure +You should follow the steps only if the following criteria are true: +- You created the original table using the Custom Log Wizard. +- You're going to preserve the existing data in the table. +- You're going to write new data using an [AMA custom text log DCR](data-collection-text-log.md) and possibly configure an [ingestion time transformation](azure-monitor-agent-transformation.md). ++1. Configure your data collection rule (DCR) following the procedures at [Collect text logs with Azure Monitor Agent](data-collection-text-log.md). +2. Issue the following API call against your existing custom logs table to enable ingestion from a data collection rule and manage your table from the portal UI. This call is idempotent and future calls have no effect. Migration is one-way; you can't migrate the table back to MMA. ++```rest ++POST +https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{tableName}/migrate?api-version=2021-12-01-preview +``` +3. Discontinue MMA custom text log collection and start using the AMA custom text log. MMA and AMA can both write to the table as you migrate your agents from MMA to AMA. ++## Next steps +- [Walk through a tutorial sending custom logs using the Azure portal.](data-collection-text-log.md) +- [Create an ingestion time transform for your custom text data](azure-monitor-agent-transformation.md) |
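One hedged way to issue the migrate call documented above is the Azure CLI's generic `az rest` command; the subscription ID, resource group, workspace, and table names below are placeholders you'd substitute:

```bash
# POST to the table's migrate endpoint; the call takes no request body.
# az rest signs the request with your current Azure CLI credentials.
az rest --method post \
    --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace/tables/MyTable_CL/migrate?api-version=2021-12-01-preview"
```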
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | Title: Manage action groups in the Azure portal -description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure Functions. + Title: Azure Monitor action groups +description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure functions. Previously updated : 09/07/2022 Last updated : 05/02/2023 -# Create and manage action groups in the Azure portal +# Action groups -When Azure Monitor data indicates that there might be a problem with your infrastructure or application, an alert is triggered. Azure Monitor, Azure Service Health, and Azure Advisor then use *action groups* to notify users about the alert and take an action. An action group is a collection of notification preferences that are defined by the owner of an Azure subscription. +When Azure Monitor data indicates that there might be a problem with your infrastructure or application, an alert is triggered. Alerts can contain action groups, which are a collection of notification preferences. Azure Monitor, Azure Service Health, and Azure Advisor use action groups to notify users about the alert and take an action. -This article shows you how to create and manage action groups in the Azure portal. Depending on your requirements, you can configure various alerts to use the same action group or different action groups. +This article shows you how to create and manage action groups. Depending on your requirements, you can configure various alerts to use the same action group or different action groups. Each action is made up of the following properties: -- **Type**: The notification that's sent or action that's performed. Examples include sending a voice call, SMS, or email. You can also trigger various types of automated actions. For detailed information about notification and action types, see [Action-specific information](#action-specific-information), later in this article.+- **Type**: The notification that's sent or action that's performed. Examples include sending a voice call, SMS, or email. You can also trigger various types of automated actions. - **Name**: A unique identifier within the action group. - **Details**: The corresponding details that vary by type. -For information about how to use Azure Resource Manager templates to configure action groups, see [Action group Resource Manager templates](./action-groups-create-resource-manager-template.md). +In general, an action group is a global service. Efforts to make them more available regionally are in development. +Global requests from clients can be processed by action group services in any region. If one region of the action group service is down, the traffic is automatically routed and processed in other regions. As a global service, an action group helps provide a disaster recovery solution. Regional requests rely on availability zone redundancy to meet privacy requirements and offer a similar disaster recovery solution. -An action group is a *global* service, so there's no dependency on a specific Azure region. Requests from clients can be processed by action group services in any region. For instance, if one region of the action group service is down, the traffic is automatically routed and processed by other regions. 
As a global service, an action group helps provide a disaster recovery solution. --## Create an action group by using the Azure portal --1. Go to the [Azure portal](https://portal.azure.com). +## Create an action group in the Azure portal +1. Go to the [Azure portal](https://portal.azure.com/). 1. Search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view.- 1. Select **Alerts**, and then select **Action groups**. - :::image type="content" source="./media/action-groups/manage-action-groups.png" alt-text="Screenshot that shows the Alerts page in the Azure portal. The Action groups button is called out."::: + :::image type="content" source="./media/action-groups/manage-action-groups.png" alt-text="Screenshot of the Alerts page in the Azure portal with the action groups button highlighted."::: 1. Select **Create**. - :::image type="content" source="./media/action-groups/create-action-group.png" alt-text="Screenshot that shows the Action groups page in the Azure portal. The Create button is called out."::: + :::image type="content" source="./media/action-groups/create-action-group.png" alt-text="Screenshot that shows the Action groups page in the Azure portal. The Create button is called out."::: -1. Enter information as explained in the following sections. --### Configure basic action group settings --1. Under **Project details**, select: - - Values for **Subscription** and **Resource group**. - - The region. +1. Configure basic action group settings. In the **Project details** section: + - Select values for **Subscription** and **Resource group**. + - Select the region. | Option | Behavior | | | -- | | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview).<br></br>Voice, SMS, and email actions performed as the result of [service health alerts](../../service-health/alerts-activity-log-service-notifications-portal.md) are resilient to Azure live-site incidents. | | Regional | The action group is stored within the selected region. The action group is [zone-redundant](../../availability-zones/az-region.md#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview). |- - The action group is saved in the subscription, region, and resource group that you select. --1. Under **Instance details**, enter values for **Action group name** and **Display name**. The display name is used in place of a full action group name when the group is used to send notifications. -- :::image type="content" source="./media/action-groups/action-group-1-basics.png" alt-text="Screenshot that shows the Create action group dialog. Values are visible in the Subscription, Resource group, Action group name, and Display name boxes."::: --### Configure notifications --1. To open the **Notifications** tab, select **Next: Notifications**. Alternately, at the top of the page, select the **Notifications** tab. --1. Define a list of notifications to send when an alert is triggered. 
Provide the following information for each notification: -- - **Notification type**: Select the type of notification that you want to send. The available options are: -- - **Email Azure Resource Manager Role**: Send an email to users who are assigned to certain subscription-level Azure Resource Manager roles. - - **Email/SMS message/Push/Voice**: Send various notification types to specific recipients. -- - **Name**: Enter a unique name for the notification. - - **Details**: Based on the selected notification type, enter an email address, phone number, or other information. - - **Common alert schema**: You can choose to turn on the common alert schema, which provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. For more information about this schema, see [Common alert schema](./alerts-common-schema.md). - :::image type="content" source="./media/action-groups/action-group-2-notifications.png" alt-text="Screenshot that shows the Notifications tab of the Create action group dialog. Configuration information for an email notification is visible."::: + The action group is saved in the subscription, region, and resource group that you select. -1. Select **OK**. +1. In the **Instance details** section, enter values for **Action group name** and **Display name**. The display name is used in place of a full action group name when the group is used to send notifications. -### Configure actions + :::image type="content" source="./media/action-groups/action-group-1-basics.png" alt-text="Screenshot that shows the Create action group dialog. Values are visible in the Subscription, Resource group, Action group name, and Display name boxes."::: -1. To open the **Actions** tab, select **Next: Actions**. Alternately, at the top of the page, select the **Actions** tab. +1. Configure notifications. Select **Next: Notifications**, or select the **Notifications** tab at the top of the page. +1. Define a list of notifications to send when an alert is triggered. +1. For each notification: + 1. Select the **Notification type**, and then fill in the appropriate fields for that notification. The available options are: -1. Define a list of actions to trigger when an alert is triggered. Provide the following information for each action: + |Notification type|Description |Fields| + |||| + |Email Azure Resource Manager role|Send an email to the subscription members, based on their role.<br>A notification email is sent only to the primary email address configured for the Azure AD user.<br>The email is only sent to Azure Active Directory **user** members of the selected role, not to Azure AD groups or service principals.<br> See [Configure the email address for the Email Azure Resource Manager role](#email).|Enter the primary email address configured for the Azure AD user. See [Configure the email address for the Email Azure Resource Manager role](#email).| + |Email| Ensure that your email filtering and any malware/spam prevention services are configured appropriately. Emails are sent from the following email addresses:<br> * azure-noreply@microsoft.com<br> * azureemail-noreply@microsoft.com<br> * alerts-noreply@mail.windowsazure.com|Enter the email where the notification should be sent.| + |SMS|SMS notifications support bi-directional communication. 
The SMS contains the following information:<br> * Shortname of the action group this alert was sent to<br> * The title of the alert.<br> A user can respond to an SMS to:<br> * Unsubscribe from all SMS alerts for all action groups or a single action group.<br> * Resubscribe to alerts<br> * Request help.<br> For more information about supported SMS replies, see [SMS replies](#sms-replies).|Enter the **Country code** and the **Phone number** for the SMS recipient. If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). As a workaround until your country is supported, configure the action group to call a webhook to a third-party SMS provider that supports your country/region.| + |Azure app Push notifications|Send notifications to the Azure mobile app. For more information about the Azure mobile app, see [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).|In the **Azure account email** field, enter the email address that you use as your account ID when you configure the Azure mobile app. | + |Voice | Voice notification.|Enter the **Country code** and the **Phone number** for the recipient of the notification. If you can't select your country/region code in the Azure portal, voice notifications aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). As a workaround until your country is supported, configure the action group to call a webhook to a third-party voice call provider that supports your country/region. | + 1. Select if you want to enable the **Common alert schema**. The common alert schema is a single extensible and unified alert payload that can be used across all the alert services in Azure Monitor. For more information about the common schema, see [Common alert schema](./alerts-common-schema.md). + :::image type="content" source="./media/action-groups/action-group-2-notifications.png" alt-text="Screenshot that shows the Notifications tab of the Create action group dialog. Configuration information for an email notification is visible."::: + 1. Select **OK**. +1. Configure actions. 
Select **Next: Actions**, or select the **Actions** tab at the top of the page. +1. Define a list of actions to trigger when an alert is triggered. Select an action type and enter a name for each action. + |Action type |Details | + ||| + |Automation Runbook|For information about limits on Automation runbook payloads, see [Automation limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits). | + |Event hubs |An Event Hubs action publishes notifications to Event Hubs. For more information about Event Hubs, see [Azure Event Hubs - A big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md). You can subscribe to the alert notification stream from your event receiver. | + |Functions |Calls an existing HTTP trigger endpoint in functions. For more information, see [Azure Functions](../../azure-functions/functions-get-started.md).<br>When you define the function action, the function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>`. If you change the access key for the function, you must remove and re-create the function action in the action group.<br>Your endpoint must support the HTTP POST method.<br>The function must have access to the storage account. If it doesn't have access, keys aren't available and the function URI isn't accessible.<br>[Learn about restoring access to the storage account](../../azure-functions/functions-recover-storage-account.md).| + |ITSM |An ITSM action requires an ITSM connection. To learn how to create an ITSM connection, see [ITSM integration](./itsmc-overview.md). | + |Logic apps |You can use [Azure Logic Apps](../../logic-apps/logic-apps-overview.md) to build and customize workflows for integration and to customize your alert notifications.| + |Secure webhook|When you use a secure webhook action, you must use Azure AD to secure the connection between your action group and your endpoint, which is a protected web API. See [Configure authentication for Secure webhook](#configure-authentication-for-secure-webhook). Secure webhook doesn't support basic authentication. If you're using basic authentication, use the Webhook action.| + |Webhook| If you use the webhook action, your target webhook endpoint must be able to process the various JSON payloads that different alert sources emit.<br>You can't pass security certificates through a webhook action. To use basic authentication, you must pass your credentials through the URI.<br>If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the **Logic Apps** action type to manipulate the alert schema to meet the target webhook's expectations.<br> For information about the rules used for retrying webhook actions, see [Webhook](#webhook).| -### Create the action group + :::image type="content" source="./media/action-groups/action-group-3-actions.png" alt-text="Screenshot that shows the Actions tab of the Create action group dialog. Several options are visible in the Action type list."::: +1. (Optional.) If you'd like to assign a key-value pair to the action group to categorize your Azure resources, select **Next: Tags** or the **Tags** tab. Otherwise, skip this step. -1. 
To assign a key-value pair to the action group, select **Next: Tags**. Alternately, at the top of the page, select the **Tags** tab. Otherwise, skip this step. By using tags, you can categorize your Azure resources. Tags are available for all Azure resources, resource groups, and subscriptions. + :::image type="content" source="./media/action-groups/action-group-4-tags.png" alt-text="Screenshot that shows the Tags tab of the Create action group dialog. Values are visible in the Name and Value boxes."::: - :::image type="content" source="./media/action-groups/action-group-4-tags.png" alt-text="Screenshot that shows the Tags tab of the Create action group dialog. Values are visible in the Name and Value boxes."::: +1. Select **Review + create** to review your settings. This step quickly checks your inputs to make sure you've entered all required information. If there are issues, they're reported here. After you've reviewed the settings, select **Create** to create the action group. -1. To review your settings, select **Review + create**. This step quickly checks your inputs to make sure you've entered all required information. If there are issues, they're reported here. After you've reviewed the settings, select **Create** to create the action group. -- :::image type="content" source="./media/action-groups/action-group-5-review.png" alt-text="Screenshot that shows the Review + create tab of the Create action group dialog. All configured values are visible."::: + :::image type="content" source="./media/action-groups/action-group-5-review.png" alt-text="Screenshot that shows the Review + create tab of the Create action group dialog. All configured values are visible."::: > [!NOTE] > An action group is a *global* service, so there's no dependency on a specific Az When you create or update an action group in the Azure portal, you can test the action group. -1. Define an action, as described in the previous few sections. Then select **Review + create**. +1. [Create an action group in the Azure portal](#create-an-action-group-in-the-azure-portal). > [!NOTE]- > - > If you're editing an already existing action group, you must save changes to the action group before you begin testing. + > If you're editing an existing action group, save the changes to the action group before testing. -1. On the page that lists the information you entered, select **Test action group**. +1. On the action group page, select **Test action group**. - :::image type="content" source="./media/action-groups/test-action-group.png" alt-text="Screenshot that shows the test action group page with the Test option."::: + :::image type="content" source="./media/action-groups/test-action-group.png" alt-text="Screenshot that shows the test action group page with the Test option."::: 1. Select a sample type and the notification and action types that you want to test. Then select **Test**. - :::image type="content" source="./media/action-groups/test-sample-action-group.png" alt-text="Screenshot that shows the Test sample action group page with an email notification type and a webhook action type."::: + :::image type="content" source="./media/action-groups/test-sample-action-group.png" alt-text="Screenshot that shows the Test sample action group page with an email notification type and a webhook action type."::: 1. If you close the window or select **Back to test setup** while the test is running, the test is stopped, and you don't get test results. 
- :::image type="content" source="./media/action-groups/stop-running-test.png" alt-text="Screenshot that shows the Test Sample action group page. A dialog contains a Stop button and asks the user about stopping the test."::: + :::image type="content" source="./media/action-groups/stop-running-test.png" alt-text="Screenshot that shows the Test Sample action group page. A dialog contains a Stop button and asks the user about stopping the test."::: 1. When the test is finished, a test status of either **Success** or **Failed** appears. If the test failed and you want to get more information, select **View details**. - :::image type="content" source="./media/action-groups/test-sample-failed.png" alt-text="Screenshot that shows the Test sample action group page showing a test that failed."::: + :::image type="content" source="./media/action-groups/test-sample-failed.png" alt-text="Screenshot that shows the Test sample action group page showing a test that failed."::: You can use the information in the **Error details** section to understand the issue. Then you can edit, save changes, and test the action group again. When you run a test and select a notification type, you get a message with "Test" in the subject. The tests provide a way to check that your action group works as expected before you enable it in a production environment. All the details and links in test email notifications are from a sample reference set. -#### Azure Resource Manager role membership requirements +### Role requirements for test action groups The following table describes the role membership requirements that are needed for the *test actions* functionality: -| User's role membership | Existing action group | Existing resource group and new action group | New resource group and new action group | +| Role membership | Existing action group | Existing resource group and new action group | New resource group and new action group | | - | - | -- | - | | Subscription contributor | Supported | Supported | Supported | | Resource group contributor | Supported | Supported | Not applicable | The following table describes the role membership requirements that are needed f | Azure Monitor contributor | Supported | Supported | Not applicable | | Custom role | Supported | Supported | Not applicable | -> [!NOTE] -> -> You can run a limited number of tests per time period. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). -> -> When you configure an action group in the portal, you can opt in or out of the common alert schema: -> -> - To find common schema samples for all sample types, see [Alert payload samples](./alerts-payload-samples.md). -> - To find non-common schema alert definitions, see [Non-common alert schema definitions for Test Action Group](./alerts-non-common-schema-definitions.md). + > [!NOTE] + > - You can run a limited number of tests per time period. To check which limits apply to your situation, see [Azure Monitor service limits](../service-limits.md). + > - When you configure an action group in the portal, you can opt in or out of the common alert schema. + > - To find common schema samples for all sample types, see [Common alert schema definitions for Test Action Group](./alerts-common-schema-test-action-definitions.md). + > - To find non-common schema alert definitions, see [Non-common alert schema definitions for Test Action Group](./alerts-non-common-schema-definitions.md). 
## Create an action group with a Resource Manager template You can use an [Azure Resource Manager template](../../azure-resource-manager/templates/syntax.md) to configure action groups. Using templates, you can automatically set up action groups that can be reused in certain types of alerts. These action groups ensure that all the correct parties are notified when an alert is triggered. You can use an [Azure Resource Manager template](../../azure-resource-manager/te The basic steps are: 1. Create a template as a JSON file that describes how to create the action group.- 2. Deploy the template by using [any deployment method](../../azure-resource-manager/templates/deploy-powershell.md). -### Resource Manager templates for an action group +### Action group Resource Manager templates To create an action group by using a Resource Manager template, you create a resource of the type `Microsoft.Insights/actionGroups`. Then you fill in all related properties. Here are two sample templates that create an action group. The first template describes how to create a Resource Manager template for an ac } } ```--## Manage your action groups +## Manage action groups After you create an action group, you can view it in the portal: -1. On the **Monitor** page, select **Alerts**. -1. Select **Manage actions**. +1. Go to the [Azure portal](https://portal.azure.com). +1. From the **Monitor** page, select **Alerts**. +1. Select **Action groups**. 1. Select the action group that you want to manage. You can: - Add, edit, or remove actions. - Delete the action group. -## Action-specific information --The following sections provide information about the various actions and notifications that you can configure in an action group. --> [!NOTE] -> -> To check numeric limits on each type of action or notification, see [Subscription service limits for monitoring](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits). --### Automation runbook --To check limits on Automation runbook payloads, see [Automation limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#automation-limits). +## Service limits for notifications -You are limited to 10 runbook actions per action group. +A phone number or email can be included in action groups in many subscriptions. Azure Monitor uses rate limiting to suspend notifications when too many notifications are sent to a particular phone number, email address or device. Rate limiting ensures that alerts are manageable and actionable. -### Azure App Service push notifications +Rate limiting applies to SMS, voice, and email notifications. All other notification actions aren't rate limited. For information about rate limits, see [Azure Monitor service limits](../service-limits.md). -To enable push notifications to the Azure mobile app, provide the email address that you use as your account ID when you configure the Azure mobile app. For more information about the Azure mobile app, see [Get the Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/). +Rate limiting applies across all subscriptions. Rate limiting is applied as soon as the threshold is reached, even if messages are sent from multiple subscriptions. -You are limited to 10 Azure app actions per action group. +When an email address is rate limited, a notification is sent to communicate that rate limiting was applied and when the rate limiting expires. 
-### Email +## Email -Ensure that your email filtering and any malware/spam prevention services are configured appropriately. Emails are sent from the following email addresses: +When you use email notifications, you can send email to the members of a subscription's role. Email is only sent to Azure Active Directory (Azure AD) **user** members of the role. Email isn't sent to Azure AD groups or service principals. -- azure-noreply@microsoft.com-- azureemail-noreply@microsoft.com-- alerts-noreply@mail.windowsazure.com+A notification email is sent only to the primary email address. -You might have a limited number of email actions per action group. For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). --### Email Azure Resource Manager role --When you use this type of notification, you can send email to the members of a subscription's role. Email is only sent to Azure Active Directory (Azure AD) **user** members of the role. Email isn't sent to Azure AD groups or service principals. --A notification email is sent only to the *primary email* address. --If your primary email doesn't receive notifications: +If your primary email doesn't receive notifications, configure the email address for the Email Azure Resource Manager role: 1. In the Azure portal, go to **Active Directory**. 1. On the left, select **All users**. On the right, a list of users appears. If your primary email doesn't receive notifications: :::image type="content" source="media/action-groups/active-directory-add-primary-email.png" alt-text="Screenshot that shows a user profile page in the Azure portal. The Edit button and the Email box are called out." border="true"::: -You might have a limited number of email actions per action group. To check which limits apply to your situation, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). +You may have a limited number of email actions per action group. To check which limits apply to your situation, see [Azure Monitor service limits](../service-limits.md). When you set up the Resource Manager role: When you set up the Resource Manager role: > [!NOTE] > > It can take up to 24 hours for a customer to start receiving notifications after they add a new Azure Resource Manager role to their subscription.+## SMS ++For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). ++For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md). -### Event Hubs +You might have a limited number of SMS actions per action group. -An Event Hubs action publishes notifications to Event Hubs. For more information about Event Hubs, see [Azure Event Hubs - A big data streaming platform and event ingestion service](../../event-hubs/event-hubs-about.md). You can subscribe to the alert notification stream from your event receiver. +> [!NOTE] +> +> If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). 
In the meantime, as a workaround, configure your action group to call a webhook to a third-party SMS provider that offers support in your country/region. -### Functions +### SMS replies -An action that uses Functions calls an existing HTTP trigger endpoint in Functions. For more information about Functions, see [Azure Functions](../../azure-functions/functions-get-started.md). To handle a request, your endpoint must handle the HTTP POST verb. +These replies are supported for SMS notifications. The recipient of the SMS can reply to the SMS with these values: -When you define the function action, the function's HTTP trigger endpoint and access key are saved in the action definition, for example, `https://azfunctionurl.azurewebsites.net/api/httptrigger?code=<access_key>`. If you change the access key for the function, you must remove and re-create the function action in the action group. +| REPLY | Description | +| -- | -- | +| DISABLE `<Action Group Short name>` | Disables further SMS from the Action Group | +| ENABLE `<Action Group Short name>` | Re-enables SMS from the Action Group | +| STOP | Disables further SMS from all Action Groups | +| START | Re-enables SMS from ALL Action Groups | +| HELP | A response is sent to the user with a link to this article. | -You are limited to 10 function actions per action group. +>[!NOTE] +>If a user has unsubscribed from SMS alerts, but is then added to a new action group, they WILL receive SMS alerts for that new action group, but remain unsubscribed from all previous action groups. - > [!NOTE] - > - > The function must have access to the storage account. If not, no keys will be available and the function URI won't be accessible. - > [Learn about restoring access to the storage account](../../azure-functions/functions-recover-storage-account.md) +You might have a limited number of Azure app actions per action group. +### Countries with SMS notification support ++| Country code | Country | +|:|:| +| 61 | Australia | +| 43 | Austria | +| 32 | Belgium | +| 55 | Brazil | +| 1 |Canada | +| 56 | Chile | +| 86 | China | +| 420 | Czech Republic | +| 45 | Denmark | +| 372 | Estonia | +| 358 | Finland | +| 33 | France | +| 49 | Germany | +| 852 | Hong Kong | +| 91 | India | +| 353 | Ireland | +| 972 | Israel | +| 39 | Italy | +| 81 | Japan | +| 352 | Luxembourg | +| 60 | Malaysia | +| 52 | Mexico | +| 31 | Netherlands | +| 64 | New Zealand | +| 47 | Norway | +| 351 | Portugal | +| 1 | Puerto Rico | +| 40 | Romania | +| 7 | Russia | +| 65 | Singapore | +| 27 | South Africa | +| 82 | South Korea | +| 34 | Spain | +| 41 | Switzerland | +| 886 | Taiwan | +| 971 | UAE | +| 44 | United Kingdom | +| 1 | United States | -### ITSM +## Voice -An ITSM action requires an ITSM connection. To learn how to create an ITSM connection, see [ITSM integration](./itsmc-overview.md). +For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). -You are limited to 10 ITSM actions per action group. +You might have a limited number of voice actions per action group. -### Logic Apps > [!NOTE] > > If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). 
In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region. ++### Countries with Voice notification support +| Country code | Country | +|:|:| +| 61 | Australia | +| 43 | Austria | +| 32 | Belgium | +| 55 | Brazil | +| 1 |Canada | +| 56 | Chile | +| 420 | Czech Republic | +| 45 | Denmark | +| 358 | Finland | +| 353 | Ireland | +| 972 | Israel | +| 352 | Luxembourg | +| 60 | Malaysia | +| 52 | Mexico | +| 31 | Netherlands | +| 64 | New Zealand | +| 47 | Norway | +| 351 | Portugal | +| 65 | Singapore | +| 27 | South Africa | +| 46 | Sweden | +| 44 | United Kingdom | +| 1 | United States | -You are limited to 10 Logic Apps actions per action group. +For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). +## Webhook -### Secure webhook +> [!NOTE] +> +> If you use the webhook action, your target webhook endpoint must be able to process the various JSON payloads that different alert sources emit. You can't pass security certificates through a webhook action. To use basic authentication, you must pass your credentials through the URI. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations. -When you use a secure webhook action, you must use Azure AD to secure the connection between your action group and your protected web API, which is your webhook endpoint. +Webhook action groups use the following rules: +- When a webhook is invoked, if the first call fails, it is retried at least 1 more time, and up to 5 times (5 retries) at various delay intervals (5, 20, 40 seconds). + - The delay between 1st and 2nd attempt is 5 seconds + - The delay between 2nd and 3rd attempt is 20 seconds + - The delay between 3rd and 4th attempt is 5 seconds + - The delay between 4th and 5th attempt is 40 seconds + - The delay between 5th and 6th attempt is 5 seconds +- After retries attempted to call the webhook fail, no action group calls the endpoint for 15 minutes. +- The retry logic assumes that the call can be retried. The status codes 408, 429, 503, and 504, and the exceptions HttpRequestException, WebException, and `TaskCancellationException`, allow for the call to be retried. + +### Configure authentication for Secure webhook The secure webhook action authenticates to the protected API by using a Service Principal instance in the Azure AD tenant of the "AZNS AAD Webhook" Azure AD application. To make the action group work, this Azure AD Webhook Service Principal must be added as a member of a role on the target Azure AD application that grants access to the target endpoint. If you use the webhook action, your target webhook endpoint must be able to proc :::image type="content" source="./media/action-groups/action-groups-secure-webhook.png" alt-text="Screenshot that shows the Secured Webhook dialog in the Azure portal with the Object ID box." 
border="true"::: -#### Secure webhook PowerShell script +### Secure webhook PowerShell script ```PowerShell Connect-AzureAD -TenantId "<provide your Azure AD tenant ID here>" Write-Host "My Azure AD Application (ObjectId): " + $myApp.ObjectId Write-Host "My Azure AD Application's Roles" Write-Host $myApp.AppRoles ```--### SMS --For information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). --For important information about using SMS notifications in action groups, see [SMS alert behavior in action groups](./alerts-sms-behavior.md). --You are limited to 10 SMS actions per action group. --> [!NOTE] -> -> If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party SMS provider that offers support in your country/region. --#### Countries with SMS notification support --| Country code | Country | -|:|:| -| 61 | Australia | -| 43 | Austria | -| 32 | Belgium | -| 55 | Brazil | -| 1 |Canada | -| 56 | Chile | -| 86 | China | -| 420 | Czech Republic | -| 45 | Denmark | -| 372 | Estonia | -| 358 | Finland | -| 33 | France | -| 49 | Germany | -| 852 | Hong Kong | -| 91 | India | -| 353 | Ireland | -| 972 | Israel | -| 39 | Italy | -| 81 | Japan | -| 352 | Luxembourg | -| 60 | Malaysia | -| 52 | Mexico | -| 31 | Netherlands | -| 64 | New Zealand | -| 47 | Norway | -| 351 | Portugal | -| 1 | Puerto Rico | -| 40 | Romania | -| 7 | Russia | -| 65 | Singapore | -| 27 | South Africa | -| 82 | South Korea | -| 34 | Spain | -| 41 | Switzerland | -| 886 | Taiwan | -| 971 | UAE | -| 44 | United Kingdom | -| 1 | United States | --### Voice --For important information about rate limits, see [Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts](./alerts-rate-limiting.md). --You are limited to 10 voice actions per action group. --> [!NOTE] -> -> If you can't select your country/region code in the Azure portal, voice calls aren't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). In the meantime, as a workaround, configure your action group to call a webhook to a third-party voice call provider that offers support in your country/region. --#### Countries with Voice notification support -| Country code | Country | -|:|:| -| 61 | Australia | -| 43 | Austria | -| 32 | Belgium | -| 55 | Brazil | -| 1 |Canada | -| 56 | Chile | -| 420 | Czech Republic | -| 45 | Denmark | -| 358 | Finland | -| 353 | Ireland | -| 972 | Israel | -| 352 | Luxembourg | -| 60 | Malaysia | -| 52 | Mexico | -| 31 | Netherlands | -| 64 | New Zealand | -| 47 | Norway | -| 351 | Portugal | -| 65 | Singapore | -| 27 | South Africa | -| 46 | Sweeden | -| 44 | United Kingdom | -| 1 | United States | --For information about pricing for supported countries/regions, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). 
--### Webhook --> [!NOTE] -> -> If you use the webhook action, your target webhook endpoint must be able to process the various JSON payloads that different alert sources emit. You can't pass security certificates through a webhook action. To use basic authentication, you must pass your credentials through the URI. If the webhook endpoint expects a specific schema, for example, the Microsoft Teams schema, use the Logic Apps action to transform the alert schema to meet the target webhook's expectations. --Webhook action groups use the following rules: --The retry logic below assumes that the failure is retriable. The status codes: 408, 429, 503, 504, or HttpRequestException, WebException, `TaskCancellationException` are considered "retriable". --When a webhook is invoked, if the first call fails, it will be retried at least 1 more time (retry), and up to 5 times (5 retries) at various delay intervals (5, 20, 40 seconds). --- The delay between 1st and 2nd attempt is 5 seconds-- The delay between 2nd and 3rd attempt is 20 seconds-- The delay between 3rd and 4th attempt is 5 seconds-- The delay between 4th and 5th attempt is 40 seconds-- The delay between 5th and 6th attempt is 5 seconds--- After retries attempted to call the webhook fail, no action group calls the endpoint for 15 minutes.--For source IP address ranges, see [Action group IP addresses](../app/ip-addresses.md). - ## Next steps -- Learn more about [SMS alert behavior](./alerts-sms-behavior.md).-- Gain an [understanding of the activity log alert webhook schema](./activity-log-alerts-webhook.md).+- Get an [overview of alerts](./alerts-overview.md) and learn how to receive alerts. - Learn more about the [ITSM Connector](./itsmc-overview.md).-- Learn more about [rate limiting](./alerts-rate-limiting.md) on alerts.-- Get an [overview of activity log alerts](./alerts-overview.md) and learn how to receive alerts.-- Learn how to [configure alerts whenever a Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).+- Learn about the [activity log alert webhook schema](./activity-log-alerts-webhook.md). |
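As a complement to the portal and Resource Manager template flows in the rewritten article above, a minimal hedged sketch of creating an action group with one email receiver from the Azure CLI (resource group, names, and address are placeholders):

```bash
# Create an action group with a single email receiver.
# The short name (12 characters max) appears in SMS notifications.
az monitor action-group create \
    --resource-group myResourceGroup \
    --name myActionGroup \
    --short-name myag \
    --action email admin-email admin@contoso.com
```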
azure-monitor | Alerts Log Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md | Title: Webhook actions for log alerts in Azure alerts description: This article describes how to configure log alert pushes with webhook action and available customizations. Previously updated : 2/23/2022 Last updated : 05/02/2023 # Webhook actions for log alert rules -[Log alerts](alerts-log.md) support [configuring webhook action groups](./action-groups.md#webhook). In this article, we describe the properties that are available. You can use webhook actions to invoke a single HTTP POST request. The service that's called should support webhooks and know how to use the payload it receives. +[Log alerts](alerts-log.md) support [configuring action groups to use webhooks](./action-groups.md). In this article, we describe the properties that are available. You can use webhook actions to invoke a single HTTP POST request. The service that's called should support webhooks and know how to use the payload it receives. We recommend that you use [common alert schema](../alerts/alerts-common-schema.md) for your webhook integrations. The common alert schema provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. |
azure-monitor | Alerts Manage Alerts Previous Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md | This article describes the process of managing alert rules created in the previo 1. To make alerts stateful, select **Automatically resolve alerts (preview)**. -1. Specify if the alert rule should trigger one or more [action groups](./action-groups.md#webhook) when the alert condition is met. +1. Specify if the alert rule should trigger one or more [action groups](./action-groups.md) when the alert condition is met. > [!NOTE] > For limits on the actions that can be performed, see [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md). 1. (Optional) Customize actions in log alert rules: |
azure-monitor | Alerts Rate Limiting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-rate-limiting.md | - Title: Rate limiting for SMS, emails, push notifications -description: Understand how Azure limits the number of possible SMS, email, Azure App Service push, or webhook notifications from an action group. - Previously updated : 2/23/2022----# Rate limiting for voice, SMS, emails, Azure App Service push notifications, and webhook posts -Rate limiting is a suspension of notifications that occurs when too many notifications are sent to a particular phone number, email address, or device. Rate limiting ensures that alerts are manageable and actionable. --The rate limit thresholds in **production** are: --- **SMS**: No more than one SMS every 5 minutes.-- **Voice**: No more than one voice call every 5 minutes.-- **Email**: No more than 100 emails in an hour.-- Other actions aren't rate limited. --The rate limit thresholds for **test action group** are: --- **SMS**: No more than one SMS every 1 minute.-- **Voice**: No more than one voice call every 1 minute.-- **Email**: No more than two emails in every 1 minute.-- Other actions aren't rate limited. --## Rate limit rules -- A particular phone number or email is rate limited when it receives more messages than the threshold allows.-- A phone number or email can be part of action groups across many subscriptions. Rate limiting applies across all subscriptions. It applies as soon as the threshold is reached, even if messages are sent from multiple subscriptions.-- When an email address is rate limited, another notification is sent to communicate the rate limiting. The email states when the rate limiting expires.--## Next steps ## -* Learn more about [SMS alert behavior](alerts-sms-behavior.md). -* Get an [overview of activity log alerts](./alerts-overview.md) and learn how to receive alerts. -* Learn how to [configure alerts whenever a service health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md). |
azure-monitor | Alerts Sms Behavior | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-sms-behavior.md | - Title: SMS alert behavior in action groups -description: SMS message format and responding to SMS messages to unsubscribe, resubscribe, or request help. -- Previously updated : 2/23/2022----# SMS alert behavior in action groups --Action groups enable you to configure a list of actions. These groups are used when you define alerts. They ensure that a particular action group is notified when the alert is triggered. One of the actions supported is SMS. SMS notifications support bidirectional communication. A user can respond to an SMS to: --- **Unsubscribe from alerts:** A user can unsubscribe from all SMS alerts for all action groups or a single action group.-- **Resubscribe to alerts:** A user can resubscribe to all SMS alerts for all action groups or a single action group.-- **Request help:** A user can ask for more information on the SMS. Users are redirected to this article.--This article covers the behavior of SMS alerts and the response actions the user can take based on the locale of the user. --## Receive an SMS alert -An SMS receiver configured as part of an action group receives an SMS when an alert is triggered. The SMS contains the following information: --* Short name of the action group where this alert was sent -* Title of the alert --| REPLY | Description | -| -- | -- | -| DISABLE `<Action Group Short name>` | Disables further SMS from the action group. | -| ENABLE `<Action Group Short name>` | Re-enables SMS from the action group. | -| STOP | Disables further SMS from all action groups. | -| START | Re-enables SMS from all action groups. | -| HELP | A response is sent to the user with a link to this article. | -->[!NOTE] ->If a user has unsubscribed from SMS alerts but is then added to a new action group, they *will* receive SMS alerts for that new action group but remain unsubscribed from all previous action groups. --## Next steps -* Get an [overview of activity log alerts](./alerts-overview.md) and learn how to get alerted. -* Learn more about [SMS rate limiting](alerts-rate-limiting.md). -* Learn more about [action groups](./action-groups.md). |
azure-monitor | Alerts Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md | If you can see a fired alert in the portal, but its configured action did not tr 1. **Have the source IP addresses been blocked?** - Add the [IP addresses](./action-groups.md#action-specific-information) that the webhook is called from to your allowlist. + Add the [IP addresses](../app/ip-addresses.md) that the webhook is called from to your allowlist. 1. **Does your webhook endpoint work correctly?** |
azure-monitor | Itsm Connector Secure Webhook Connections Azure Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md | To add a webhook to an action, follow these instructions for Secure Webhook: 1. In the [Azure portal](https://portal.azure.com/), search for and select **Monitor**. The **Monitor** pane consolidates all your monitoring settings and data in one view. 1. Select **Alerts** > **Manage actions**.-1. Select [Add action group](../alerts/action-groups.md#create-an-action-group-by-using-the-azure-portal) and fill in the fields. +1. Select [Add action group](../alerts/action-groups.md#create-an-action-group-in-the-azure-portal) and fill in the fields. 1. Enter a name in the **Action group name** box and enter a name in the **Short name** box. The short name is used in place of a full action group name when notifications are sent by using this group. 1. Select **Secure Webhook**. 1. Select these details: |
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | These properties are client specific, so you can configure `appInsights.defaultC | correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation. (Default is `30000`.) | | correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection. (Default. See [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts).)| +## How do I customize logs collection? ++By default, the Application Insights Node.js SDK logs to the console at the warning level. ++To spot and diagnose issues with Application Insights, you can enable "self-diagnostics", which collects internal logging from the Application Insights Node.js SDK. ++The following code demonstrates how to enable debug logging and generate telemetry for internal logs. ++```javascript
+let appInsights = require("applicationinsights");
+appInsights.setup("<YOUR_CONNECTION_STRING>")
+    .setInternalLogging(true, true)    // Enable both debug and warning logging
+    .setAutoCollectConsole(true, true) // Generate Trace telemetry for winston/bunyan and console logs
+    .start();
+```
++Logs can be written to a local file by using the `APPLICATIONINSIGHTS_LOG_DESTINATION` environment variable. Supported values are `file` and `file+console`. By default, a file named `applicationinsights.log` that contains all logs is generated in the temporary folder: `/tmp` for *nix and `USERDIR\\AppData\\Local\\Temp` for Windows. The log directory can be configured by using the `APPLICATIONINSIGHTS_LOGDIR` environment variable. ++```javascript
+process.env.APPLICATIONINSIGHTS_LOG_DESTINATION = "file+console";
+process.env.APPLICATIONINSIGHTS_LOGDIR = "C:\\applicationinsights\\logs";
+
+// Application Insights SDK setup....
+```
+ ## Troubleshooting [!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)] |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## May 2023 +* [Azure Application Consistent Snapshot tool (AzAcSnap) 8 (GA)](azacsnap-introduction.md) ++ Version 8 of the AzAcSnap tool is now generally available. [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases in Linux environments. AzAcSnap 8 introduces the following new capabilities and improvements: ++ * Restore change - ability to revert volume for Azure NetApp Files + * New global settings file (.azacsnaprc) to control behavior of azacsnap + * Logging enhancements for failure cases and new "mainlog" for summarized monitoring + * Backup (-c backup) and Details (-c details) fixes ++ Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller). + * [Single-file snapshot restore](snapshots-restore-file-single.md) is now generally available (GA) * [Troubleshooting enhancement: break file locks](troubleshoot-file-locks.md) |
azure-portal | Azure Portal Safelist Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md | asazure.windows.net (Analysis Services) bastion.azure.com (Azure Bastion Service) batch.azure.com (Azure Batch Service) catalogapi.azure.com (Azure Marketplace)+catalogartifact.azureedge.net (Azure Marketplace) changeanalysis.azure.com (Change Analysis) cognitiveservices.azure.com (Cognitive Services) config.office.com (Microsoft Office) |
azure-resource-manager | File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/file.md | For more information, see [Parameters in Bicep](./parameters.md). ## Parameter decorators -You can add one or more decorators for each parameter. These decorators describe the parameter and define constraints for the values that are passed in. The following example shows one decorator but there are many others that are available. +You can add one or more decorators for each parameter. These decorators describe the parameter and define constraints for the values that are passed in. The following example shows one decorator but many others are available. ```bicep @allowed([ |
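To make the truncated `@allowed` snippet above concrete, here's a minimal sketch — not taken from the article, with a hypothetical parameter name — showing how several decorators can be stacked on one Bicep parameter:

```bicep
// Hypothetical example: constrain the allowed values and document the parameter.
@allowed([
  'dev'
  'prod'
])
@description('Deployment environment used when naming resources.')
param environment string = 'dev'
```

Passing any value other than `dev` or `prod` then fails validation before the deployment starts.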
azure-resource-manager | Delete Resource Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md | resource_client.resources.begin_delete_by_id( To delete a resource group, you need access to the delete action for the **Microsoft.Resources/subscriptions/resourceGroups** resource. > [!IMPORTANT]-> The only permission required to delete a resource group is permission to the delete action for deleting resource groups. You do **not** need permission to delete individual resources within that resource group. Additionally, delete actions that are specified in **notActions** for a roleAssignment are superseded by the resource group delete action. This is consistent with the scope heirarchy in the Azure role-based access control model. +> The only permission required to delete a resource group is permission to the delete action for deleting resource groups. You do **not** need permission to delete individual resources within that resource group. Additionally, delete actions that are specified in **notActions** for a roleAssignment are superseded by the resource group delete action. This is consistent with the scope hierarchy in the Azure role-based access control model. For a list of operations, see [Azure resource provider operations](../../role-based-access-control/resource-provider-operations.md). For a list of built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). |
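As an illustration of that permission model — a minimal sketch, not from the article, assuming the `azure-identity` and `azure-mgmt-resource` Python packages, with a placeholder subscription ID and resource group name — a caller who holds only the resource group delete action can remove the group and everything in it:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID; the caller only needs the
# Microsoft.Resources/subscriptions/resourceGroups delete action.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deletes the group and all resources in it, without per-resource delete permissions.
poller = client.resource_groups.begin_delete("my-resource-group")
poller.wait()  # block until the long-running delete finishes
```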
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | To stay up-to-date with the most recent Azure Video Indexer developments, this a ## May 2023 +### Support for HTTP/2 ++Added support for HTTP/2 for our [Data Plane API](https://api-portal.videoindexer.ai/). [HTTP/2](https://en.wikipedia.org/wiki/HTTP/2) offers several benefits over HTTP/1.1, which continues to be supported for backwards compatibility: increased performance, better reliability, and reduced system resource requirements. With this change, we now support HTTP/2 for both the Video Indexer [Portal](https://videoindexer.ai/) and our Data Plane API. We advise you to update your code to take advantage of this change. + ### Topics insight improvements We now support all five levels of IPTC ontology. |
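As a quick way to see the new protocol support in action — a minimal sketch, not from the release notes, assuming the Python `httpx` package with its optional HTTP/2 extra (`pip install httpx[http2]`); the request path is illustrative — you can check which protocol a Data Plane API call negotiates:

```python
import httpx

# http2=True lets the client offer HTTP/2 during TLS negotiation (ALPN);
# HTTP/1.1 remains the fallback for backwards compatibility.
with httpx.Client(http2=True) as client:
    response = client.get("https://api.videoindexer.ai/")
    print(response.http_version)  # "HTTP/2" when the server negotiates it
```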
backup | Backup Azure Enhanced Soft Delete About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-about.md | Title: Overview of enhanced soft delete for Azure Backup (preview) description: This article gives an overview of enhanced soft delete for Azure Backup. Previously updated : 03/21/2023 Last updated : 05/15/2023 The key benefits of enhanced soft delete are: - **Re-registration of soft deleted items**: You can now register the items in soft deleted state with another vault. However, you can't register the same item with two vaults for active backups. - **Soft delete and reregistration of backup containers**: You can now unregister the backup containers (which you can soft delete) if you've deleted all backup items in the container. You can now register such soft deleted containers to other vaults. This applies to supported workloads only, including SQL in Azure VM backup, SAP HANA in Azure VM backup, and backup of on-premises servers. - **Soft delete across workloads**: Enhanced soft delete applies to all vaulted workloads alike and is supported for Recovery Services vaults and Backup vaults. However, it currently doesn't support operational tier workloads, such as Azure Files backup, Operational backup for Blobs, and Disk and VM snapshot backups.+- **Soft delete of recovery points**: This feature allows you to recover data from recovery points that might have been deleted because a backup policy was modified or because the backup policy associated with a backup item was changed. Soft delete of recovery points isn't supported for log recovery points in SQL and SAP HANA workloads. [Learn more](manage-recovery-points.md#impact-of-expired-recovery-points-for-items-in-soft-deleted-state). ## Supported regions -Enhanced soft delete is available in all Azure public regions. -+- Enhanced soft delete is available in all Azure public regions. +- Soft delete of recovery points is currently in preview in West Central US, North Europe, and Australia East. Support in other regions will be added shortly. ## Supported scenarios - Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. Also, it's supported for new and existing vaults. If a backup item/container is in soft deleted state, you can register it to a va >[!Note] >You can't actively protect one item to two vaults simultaneously. So, if you start protecting a backup container using another vault, you can no longer re-protect the same backup container to the previous vault. +## Soft delete of recovery points ++Soft delete of recovery points helps you recover any recovery points that are accidentally or maliciously deleted by operations that could lead to the deletion of one or more recovery points. For example, modifying the backup policy associated with a backup item to reduce the backup retention, or assigning a new policy with a lower retention to a backed-up item, can cause a loss of certain recovery points. ++This feature helps to retain these recovery points for an additional duration, as per the soft delete retention specified for the vault (the impacted recovery points show up as soft deleted during this period). You can undelete the recovery points by increasing the retention in the backup policy. You can also restore your data from the soft-deleted state if you choose not to undelete the recovery points. ++>[!Note] +>- Soft delete of recovery points is not supported for log recovery points in SQL and SAP HANA workloads. 
+>- This feature is currently available in selected Azure regions only. [Learn more](#supported-regions). + ## Pricing There is no retention cost for the default duration of *14* days, after which it incurs regular backup charges. For soft delete retention *>14* days, the default period applies to the *last 14 days* of the continuous retention configured in soft delete, and then backups are permanently deleted. |
backup | Backup Azure Enhanced Soft Delete Configure Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-configure-manage.md | Title: Configure and manage enhanced soft delete for Azure Backup (preview) description: This article describes how to configure and manage enhanced soft delete for Azure Backup. Previously updated : 12/13/2022 Last updated : 05/15/2023 Here are some points to note: - Unregistering a container while its backup items are soft deleted (not permanently deleted) will change the state of the container to Soft deleted. -- You can re-register containers that are in soft deleted state to another vault. However, in such scenarios, the existing backups (that are soft deleted) will continue to be in the original vault and will be permanently deleted when the soft delete retention period expires. +- You can re-register containers that are in soft deleted state to another vault. However, in such scenarios, the existing backups (that are soft deleted) will continue to be in the original vault and will be permanently deleted when the soft delete retention period expires. - You can also *undelete* the container. Once undeleted, it's re-registered to the original vault. You can undelete a container only if it's not registered to another vault. If it's registered, then you need to unregister it from the vault before performing the *undelete* operation. +## Delete recovery points ++Soft delete of recovery points helps you recover any recovery points that are accidentally or maliciously deleted by operations that could lead to the deletion of one or more recovery points. Recovery points don't move to the soft-deleted state immediately and have a *24 hour SLA* (same as before). The example here shows recovery points that were deleted as part of backup policy modifications. ++[Soft delete of recovery points](backup-azure-enhanced-soft-delete-about.md#soft-delete-of-recovery-points), a part of enhanced soft delete, is currently available in selected Azure regions. [Learn more](backup-azure-enhanced-soft-delete-about.md#supported-regions) about region availability. ++Follow these steps: ++1. Go to your *vault* > **Backup policies**. ++2. Select the *backup policy* you want to modify. ++3. Reduce the retention duration in the backup policy, and then select **Update**. ++4. Go to *vault* > **Backup items**. ++5. Select a *backup item* that is backed up using the modified policy, and view its details. ++6. To view all recovery points for this item, select **Restore**, and then filter for the impacted recovery points. ++ The impacted recovery points are labeled as *being soft deleted* in the **Recovery type** column and will be retained as per the soft delete retention of the vault. + + :::image type="content" source="./media/backup-azure-enhanced-soft-delete/select-restore-point-for-soft-delete.png" alt-text="Screenshot that shows how to filter recovery points for soft delete."::: ++## Undelete recovery points ++You can *undelete* recovery points that are in the soft-deleted state so that they're retained until their original expiry. To do this, modify the policy again to increase the retention of backups. ++Follow these steps: ++1. Go to your *vault* > **Backup policies**. ++2. Select the *backup policy* you want to modify. ++3. Increase the retention duration in the backup policy, and then select **Update**. ++4. Go to *vault* > **Backup items**, select a *backup item* that is backed up using the modified policy, and then view its details. ++5. 
To view all recovery points for this item, select **Restore**, and then filter for the impacted recovery points. ++ The impacted recovery points no longer have the *soft deleted* label and aren't in the soft-deleted state. If there are recovery points that are still beyond the increased retention duration, those continue to be in the soft-deleted state unless the retention is further increased. + ## Disable soft delete Follow these steps: Follow these steps: ## Next steps -[About Enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-about.md). +[About Enhanced soft delete for Azure Backup (preview)](backup-azure-enhanced-soft-delete-about.md). |
backup | Backup Azure Vms Enhanced Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md | Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 05/02/2023 Last updated : 05/15/2023 This article explains how to use _Enhanced policy_ to configure _Multiple Backup Azure Backup now supports _Enhanced policy_ that's needed to support new Azure offerings. For example, [Trusted Launch VM](../virtual-machines/trusted-launch.md) is supported with _Enhanced policy_ only. >[!Important]->- [Default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) will not support protecting newer Azure offerings, such as [Trusted Launch VM](backup-support-matrix-iaas.md#tvm-backup), [Ultra SSD](backup-support-matrix-iaas.md#vm-storage-support), [Shared disk](backup-support-matrix-iaas.md#vm-storage-support), and Confidential Azure VMs. ->- Enhanced policy now supports protecting Ultra SSD (preview). To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). +>- [Default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) will not support protecting newer Azure offerings, such as [Trusted Launch VM](backup-support-matrix-iaas.md#tvm-backup), [Ultra SSD](backup-support-matrix-iaas.md#vm-storage-support), [Premium SSD v2](backup-support-matrix-iaas.md#vm-storage-support), [Shared disk](backup-support-matrix-iaas.md#vm-storage-support), and Confidential Azure VMs. +>- Enhanced policy now supports protecting both Ultra SSD (preview) and Premium SSD v2 (preview). To enroll your subscription for these features, fill these forms - [Ultra SSD protection](https://forms.office.com/r/1GLRnNCntU) and [Premium SSD v2 protection](https://forms.office.com/r/h56TpTc773). >- Backups for VMs having [data access authentication enabled disks](../virtual-machines/windows/download-vhd.md?tabs=azure-portal#secure-downloads-and-uploads-with-azure-ad) will fail. You must enable backup of Trusted Launch VM through enhanced policy only. Enhanced policy provides the following features: |
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 04/06/2023 Last updated : 05/15/2023 Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. [Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported. <a name="ultra-disk-backup">Ultra SSD disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central and South Central US <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. +<a name="premium-ssd-v2-backup">Premium SSD v2 disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central and South Central US <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks. [Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS. |
backup | Backup Support Matrix Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mars-agent.md | The operating systems must be 64 bit and should be running the latest services p **Operating system** | **Files/folders** | **System state** | **Software/Module requirements** | | | -Windows 11 (Enterprise, Pro, Home) | Yes | No | Check the corresponding server version for software/module requirements -Windows 10 (Enterprise, Pro, Home) | Yes | No | Check the corresponding server version for software/module requirements -Windows Server 2022 (Standard, Datacenter, Essentials) | Yes | Yes | Check the corresponding server version for software/module requirements +Windows 11 (Enterprise, Pro, Home, IoT) | Yes | No | Check the corresponding server version for software/module requirements +Windows 10 (Enterprise, Pro, Home, IoT) | Yes | No | Check the corresponding server version for software/module requirements +Windows Server 2022 (Standard, Datacenter, Essentials, IoT) | Yes | Yes | Check the corresponding server version for software/module requirements Windows 8.1 (Enterprise, Pro)| Yes |No | Check the corresponding server version for software/module requirements Windows 8 (Enterprise, Pro) | Yes | No | Check the corresponding server version for software/module requirements Windows Server 2016 (Standard, Datacenter, Essentials) | Yes | Yes | - .NET 4.5 <br> - Windows PowerShell <br> - Latest Compatible Microsoft VC++ Redistributable <br> - Microsoft Management Console (MMC) 3.0 Windows Server 2012 R2 (Standard, Datacenter, Foundation, Essentials) | Yes | Yes | - .NET 4.5 <br> - Windows PowerShell <br> - Latest Compatible Microsoft VC++ Redistributable <br> - Microsoft Management Console (MMC) 3.0 Windows Server 2012 (Standard, Datacenter, Foundation) | Yes | Yes |- .NET 4.5 <br> -Windows PowerShell <br> - Latest Compatible Microsoft VC++ Redistributable <br> - Microsoft Management Console (MMC) 3.0 <br> - Deployment Image Servicing and Management (DISM.exe) Windows Storage Server 2016/2012 R2/2012 (Standard, Workgroup) | Yes | No | - .NET 4.5 <br> - Windows PowerShell <br> - Latest Compatible Microsoft VC++ Redistributable <br> - Microsoft Management Console (MMC) 3.0-Windows Server 2019 (Standard, Datacenter, Essentials) | Yes | Yes | - .NET 4.5 <br> - Windows PowerShell <br> - Latest Compatible Microsoft VC++ Redistributable <br> - Microsoft Management Console (MMC) 3.0 +Windows Server 2019 (Standard, Datacenter, Essentials, IoT) | Yes | Yes | - .NET 4.5 <br> - Windows PowerShell <br> - Latest Compatible Microsoft VC++ Redistributable <br> - Microsoft Management Console (MMC) 3.0 For more information, see [Supported MABS and DPM operating systems](backup-support-matrix-mabs-dpm.md#supported-mabs-and-dpm-operating-systems). |
cognitive-services | Call Analyze Image 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md | To analyze a local image, you'd put the binary image data in the HTTP request bo -## Select analysis options (using standard model) +## Select analysis options -### Select visual features +### Select visual features when using the standard model The Analysis 4.0 API gives you access to all of the service's image analysis features. Choose which operations to do based on your own use case. See the [overview](../overview.md) for a description of each feature. The example in this section adds all of the available visual features, but for practical usage you likely need fewer. A populated URL might look like this: +### Set model name when using a custom model ++You can also do image analysis with a custom trained model. To create and train a model, see [Create a custom Image Analysis model](./model-customization.md). Once your model is trained, all you need is the model's name. You do not need to specify visual features if you use a custom model. ++### [C#](#tab/csharp) ++To use a custom model, create the [ImageAnalysisOptions](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions) object and set the [ModelName](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.modelname#azure-ai-vision-imageanalysis-imageanalysisoptions-modelname) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts. ++[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=model_name)] ++### [Python](#tab/python) ++To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the [model_name](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-model-name) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts. ++[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=model_name)] ++### [C++](#tab/cpp) ++To use a custom model, create the [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetModelName](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setmodelname) method. You don't need to call any other methods on **ImageAnalysisOptions**. There's no need to call [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) as you do with standard model, since your custom model already implies the visual features the service extracts. ++[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=model_name)] ++### [REST API](#tab/rest) ++To use a custom model, don't use the features query parameter. 
Instead, set the `model-name` parameter to the name of your model as shown here. Replace `MyCustomModelName` with your custom model name. ++`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&model-name=MyCustomModelName` +++ ### Specify languages You can specify the language of the returned data. The language is optional, with the default being English. See [Language support](https://aka.ms/cv-languages) for a list of supported language codes and which visual features are supported for each language. +The language option only applies when you're using the standard model. + #### [C#](#tab/csharp) Use the [Language](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.language) property of your **ImageAnalysisOptions** object to specify a language. A populated URL might look like this: If you're extracting captions or dense captions, you can ask for gender neutral captions. Gender neutral captions are optional, with the default being gendered captions. For example, in English, when you select gender neutral captions, terms like **woman** or **man** are replaced with **person**, and **boy** or **girl** are replaced with **child**. +The gender neutral caption option only applies when you're using the standard model. + #### [C#](#tab/csharp) Set the [GenderNeutralCaption](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.genderneutralcaption) property of your **ImageAnalysisOptions** object to true to enable gender neutral captions. A populated URL might look like this: An aspect ratio is calculated by dividing the target crop width by the height. Supported values are from 0.75 to 1.8 (inclusive). Setting this property is only relevant when the **smartCrop** option (REST API) or **CropSuggestions** (SDK) was selected as part of the visual feature list. If you select smartCrop/CropSuggestions but don't specify aspect ratios, the service returns one crop suggestion with an aspect ratio it sees fit. In this case, the aspect ratio is between 0.5 and 2.0 (inclusive). +Smart cropping aspect ratios only apply when you're using the standard model. + #### [C#](#tab/csharp) Set the [CroppingAspectRatios](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.croppingaspectratios) property of your **ImageAnalysisOptions** to a list of aspect ratios. For example, to set aspect ratios of 0.9 and 1.33: A populated URL might look like this: -## Get results from the service (standard model) +## Get results from the service ++### Get results using the standard model This section shows you how to make an analysis call to the service using the standard model, and get the results.
There's no need to set the [Features](/dotnet/api/azure.ai.vision.imageanalysis.imageanalysisoptions.features#azure-ai-vision-imageanalysis-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts. --[!code-csharp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/csharp/image-analysis/3/Program.cs?name=model_name)] --### [Python](#tab/python) --To use a custom model, create the [ImageAnalysisOptions](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions) object and set the [model_name](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-model-name) property. You don't need to set any other properties on **ImageAnalysisOptions**. There's no need to set the [features](/python/api/azure-ai-vision/azure.ai.vision.imageanalysisoptions#azure-ai-vision-imageanalysisoptions-features) property, as you do with the standard model, since your custom model already implies the visual features the service extracts. --[!code-python[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/python/image-analysis/3/main.py?name=model_name)] --### [C++](#tab/cpp) --To use a custom model, create the [ImageAnalysisOptions](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions) object and call the [SetModelName](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setmodelname) method. You don't need to call any other methods on **ImageAnalysisOptions**. There's no need to call [SetFeatures](/cpp/cognitive-services/vision/imageanalysis-imageanalysisoptions#setfeatures) as you do with standard model, since your custom model already implies the visual features the service extracts. --[!code-cpp[](~/azure-ai-vision-sdk/docs/learn.microsoft.com/cpp/image-analysis/3/3.cpp?name=model_name)] --### [REST API](#tab/rest) --To use a custom model, don't use the features query parameter. Instead, set the `model-name` parameter to the name of your model as shown here. Replace `MyCustomModelName` with your custom model name. --`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&model-name=MyCustomModelName` ----## Get results from the service (using custom model) +### Get results using custom model This section shows you how to make an analysis call to the service, when using a custom model. |
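For the REST path above, a minimal sketch — not part of the article, assuming the Python `requests` package; the key header value, image file name, and model name are placeholders — of an analyze call that routes to a custom model:

```python
import requests

# Placeholder endpoint, key, and model name; binary image data goes in the body.
endpoint = "https://<endpoint>/computervision/imageanalysis:analyze"
params = {"api-version": "2023-02-01-preview", "model-name": "MyCustomModelName"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/octet-stream",
}

with open("sample.jpg", "rb") as image_file:
    response = requests.post(endpoint, params=params, headers=headers, data=image_file)

response.raise_for_status()
print(response.json())  # findings produced by the custom model
```

Note that no `features` query parameter is sent, matching the guidance that a custom model already implies the visual features the service extracts.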
cognitive-services | Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/model-customization.md | The API call returns an **ImageAnalysisResult** JSON object, which contains all In this guide, you created and trained a custom image classification model using Image Analysis. Next, learn more about the Analyze Image 4.0 API, so you can call your custom model from an application using REST or library SDKs. -* [Call the Analyze Image API](./call-analyze-image-40.md#select-analysis-options-using-custom-model) -* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions. +* See the [Model customization concepts](../concept-model-customization.md) guide for a broad overview of this feature and a list of frequently asked questions. +* [Call the Analyze Image API](./call-analyze-image-40.md). Note the sections [Set model name when using a custom model](./call-analyze-image-40.md#set-model-name-when-using-a-custom-model) and [Get results using custom model](./call-analyze-image-40.md#get-results-using-custom-model). |
cognitive-services | Assertion Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/assertion-detection.md | -The meaning of medical content is highly affected by modifiers, such as negative or conditional assertions, which can have critical implications if misrepresented. Text Analytics for health supports three categories of assertion detection for entities in the text: +The meaning of medical content is highly affected by modifiers, such as negative or conditional assertions, which can have critical implications if misrepresented. Text Analytics for health supports four categories of assertion detection for entities in the text: * Certainty * Conditional * Association+* Temporal ## Assertion output -Text Analytics for health returns assertion modifiers, which are informative attributes assigned to medical concepts that provide a deeper understanding of the concepts’ context within the text. These modifiers are divided into three categories, each focusing on a different aspect and containing a set of mutually exclusive values. Only one value per category is assigned to each entity. The most common value for each category is the Default value. The service’s output response contains only assertion modifiers that are different from the default value. In other words, if no assertion is returned, the implied assertion is the default value. +Text Analytics for health returns assertion modifiers, which are informative attributes assigned to medical concepts that provide a deeper understanding of the concepts’ context within the text. These modifiers are divided into four categories, each focusing on a different aspect and containing a set of mutually exclusive values. Only one value per category is assigned to each entity. The most common value for each category is the Default value. The service’s output response contains only assertion modifiers that are different from the default value. In other words, if no assertion is returned, the implied assertion is the default value. **CERTAINTY** – provides information regarding the presence (present vs. absent) of the concept and how certain the text is regarding its presence (definite vs. possible). * **Positive** [Default]: the concept exists or has happened. Text Analytics for health returns assertion modifiers, which are informative att * **Negative_Possible**: the concept’s existence is unlikely but there is some uncertainty. * **Neutral_Possible**: the concept may or may not exist without a tendency to either side. -**CONDITIONALITY** – provides information regarding whether the existence of a concept depends on certain conditions. -* **None** [Default]: the concept is a fact and not hypothetical and does not depend on certain conditions. -* **Hypothetical**: the concept may develop or occur in the future. -* **Conditional**: the concept exists or occurs only under certain conditions. --**ASSOCIATION** – describes whether the concept is associated with the subject of the text or someone else. -* **Subject** [Default]: the concept is associated with the subject of the text, usually the patient. -* **Other**: the concept is associated with someone who is not the subject of the text. --**TEMPORAL** - provides additional temporal information for a concept detailing whether it is an occurrence related to the past, present, or future. -* **Current** [Default]: the concept is related to conditions/events that belong to the current encounter. 
For example, medical symptoms that have brought the patient to seek medical attention (e.g., “started having headaches 5 days prior to their arrival to the ER”). This includes newly made diagnoses, symptoms experienced during or leading to this encounter, treatments and examinations done within the encounter. -* **Past**: the concept is related to conditions, examinations, treatments, medication events that are mentioned as something that existed or happened prior to the current encounter, as might be indicated by hints like s/p, recently, ago, previously, in childhood, at age X. For example, diagnoses that were given in the past, treatments that were done, past examinations and their results, past admissions, etc. Medical background is considered as PAST. -* **Future**: the concept is related to conditions/events that are planned/scheduled/suspected to happen in the future, e.g., will be obtained, will undergo, is scheduled in two weeks from now. --Assertion detection represents negated entities as a negative value for the certainty category, for example: +An example of assertion detection is shown below where a negated entity is returned with a negative value for the certainty category: ```json { Assertion detection represents negated entities as a negative value for the cert } ``` +**CONDITIONALITY** – provides information regarding whether the existence of a concept depends on certain conditions. +* **None** [Default]: the concept is a fact and not hypothetical and does not depend on certain conditions. +* **Hypothetical**: the concept may develop or occur in the future. +* **Conditional**: the concept exists or occurs only under certain conditions. ++**ASSOCIATION** – describes whether the concept is associated with the subject of the text or someone else. +* **Subject** [Default]: the concept is associated with the subject of the text, usually the patient. +* **Other**: the concept is associated with someone who is not the subject of the text. ++**TEMPORAL** - provides additional temporal information for a concept detailing whether it is an occurrence related to the past, present, or future. +* **Current** [Default]: the concept is related to conditions/events that belong to the current encounter. For example, medical symptoms that have brought the patient to seek medical attention (e.g., “started having headaches 5 days prior to their arrival to the ER”). This includes newly made diagnoses, symptoms experienced during or leading to this encounter, treatments and examinations done within the encounter. +* **Past**: the concept is related to conditions, examinations, treatments, medication events that are mentioned as something that existed or happened prior to the current encounter, as might be indicated by hints like s/p, recently, ago, previously, in childhood, at age X. For example, diagnoses that were given in the past, treatments that were done, past examinations and their results, past admissions, etc. Medical background is considered as PAST. +* **Future**: the concept is related to conditions/events that are planned/scheduled/suspected to happen in the future, e.g., will be obtained, will undergo, is scheduled in two weeks from now. +++ ## Next steps [How to call the Text Analytics for health](../how-to/call-api.md) |
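To inspect these assertion categories from code — a minimal sketch, not from the article, assuming the `azure-ai-textanalytics` Python package and its healthcare entities operation; the endpoint, key, and input sentence are placeholders:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

poller = client.begin_analyze_healthcare_entities(["The patient denies chest pain."])
for document in poller.result():
    for entity in document.entities:
        # The service only returns assertion values that differ from the defaults.
        if entity.assertion:
            print(entity.text, entity.assertion.certainty,
                  entity.assertion.conditionality, entity.assertion.association)
```

Here "denies chest pain" should surface a negative certainty on the "chest pain" entity, mirroring the JSON example above.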
cognitive-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md | Last updated 02/21/2023 -+ # Azure Policy built-in policy definitions for Azure Cognitive Services |
cost-management-billing | Tutorial Acm Create Budgets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md | -You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spending limit. Notifications are triggered when the budget thresholds you've created are exceeded. None of your resources is affected, and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. +You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spending limit. Notifications are triggered when the budget thresholds you've created are exceeded. No resources are affected, and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. Cost and usage data is typically available within 8-24 hours and budgets are evaluated against these costs every 24 hours. Be sure to get familiar with [Cost and usage data updates](./understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention) specifics. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation. Budget cost evaluations are based on actual cost. They don't include amortizatio When you create or edit a budget for a subscription or resource group scope, you can configure it to call an action group. The action group can perform various actions when your budget threshold is met. -Action Groups are currently only supported for subscription and resource group scopes. For more information about creating action groups, see [Configure basic action group settings](../../azure-monitor/alerts/action-groups.md#configure-basic-action-group-settings). +Action Groups are currently only supported for subscription and resource group scopes. For more information about creating action groups, see [action groups](../../azure-monitor/alerts/action-groups.md). For more information about using budget-based automation with action groups, see [Manage costs with Azure budgets](../manage/cost-management-budget-scenario.md). |
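To show how a budget and its action-group notification fit together — a minimal sketch, not from the tutorial, assuming the `azure-mgmt-consumption` package's scope-based budgets operation; the subscription ID, email address, action group ID, and amounts are placeholders:

```python
from datetime import datetime
from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient
from azure.mgmt.consumption.models import Budget, BudgetTimePeriod, Notification

subscription_id = "<subscription-id>"  # placeholder
client = ConsumptionManagementClient(DefaultAzureCredential(), subscription_id)

budget = Budget(
    category="Cost",
    amount=1000,  # placeholder monthly limit in the billing currency
    time_grain="Monthly",
    time_period=BudgetTimePeriod(
        start_date=datetime(2023, 6, 1),
        end_date=datetime(2024, 6, 1),
    ),
    notifications={
        # Notify at 80% of actual cost and invoke the action group below.
        "Actual_GreaterThan_80_Percent": Notification(
            enabled=True,
            operator="GreaterThan",
            threshold=80,
            contact_emails=["ops@contoso.com"],             # placeholder
            contact_groups=["<action-group-resource-id>"],  # placeholder
        )
    },
)

client.budgets.create_or_update(f"/subscriptions/{subscription_id}", "MyBudget", budget)
```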
ddos-protection | Ddos Protection Sku Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md | DDoS Network Protection and DDoS IP Protection have the following limitations: - PaaS services (multi-tenant), which includes Azure App Service Environment for Power Apps, Azure API Management in deployment modes other than those supported above, or Azure Virtual WAN aren't currently supported. - Protecting a public IP resource attached to a NAT Gateway isn't supported. - Virtual machines in Classic/RDFE deployments aren't supported.-- Scenarios in which a single VM is running behind a public IP isn't supported. +- Scenarios in which a single VM is running behind a public IP aren't recommended. For more information, see [Fundamental best practices](./fundamental-best-practices.md#design-for-scalability). - Protected resources that include public IP address prefix, or public IP created from public IP address prefix aren't supported. Azure Load Balancer with a public IP created from a public IP prefix is supported. DDoS IP Protection is similar to Network Protection, but has the following additional limitation: |
defender-for-cloud | Alerts Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md | Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 05/08/2023 Last updated : 05/15/2023 # Security alerts - a reference guide Microsoft Defender for Containers provides security alerts on the cluster level | **DDoS Attack mitigated for Public IP**<br>(NETWORK_DDOS_MITIGATED) | DDoS Attack mitigated for Public IP (IP address). | Probing | Low | -## <a name="alerts-fusion"></a>Security incident alerts +## <a name="alerts-fusion"></a>Security incident [Further details and notes](alerts-overview.md#what-are-security-incidents) |
defender-for-cloud | Concept Agentless Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md | You can maximize the coverage of your container posture issues and extend your p Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md). > [!IMPORTANT]-> The Agentless Container Posture preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available" and are excluded from the service-level agreements and limited warranty. Agentless Container Posture previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. +> The Agentless Container Posture preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available" and are excluded from the service-level agreements and limited warranty. ## Capabilities |
defender-for-cloud | Concept Cloud Security Posture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md | Defender for Cloud continually assesses your resources, subscriptions and organi ## Prerequisites -- **Foundational CSPM capabilities** - None +- **Foundational CSPM** - None - **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't be populated with vulnerabilities because the agentless scanner is disabled. For commercial and national cloud coverage, review [features supported in different Azure cloud environments](support-matrix-cloud-environment.md). Learn more about [Defender CSPM pricing](https://azure.microsoft.com/pricing/det The following table summarizes each plan and their cloud availability. -| Feature | Foundational CSPM capabilities | Defender CSPM | Cloud availability | +| Feature | Foundational CSPM | Defender CSPM | Cloud availability | |--|--|--|--| | [Security recommendations to fix misconfigurations and weaknesses](review-security-recommendations.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png":::| Azure, AWS, GCP, on-premises | | [Asset inventory](asset-inventory.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | |
defender-for-cloud | Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/permissions.md | To allow the Security Admin role to automatically provision agents and extension | Service Principal | Roles | |:-|:-| | Defender for Containers provisioning AKS Security Profile | ΓÇó Kubernetes Extension Contributor<br>ΓÇó Contributor<br>ΓÇó Azure Kubernetes Service Contributor<br>ΓÇó Log Analytics Contributor |-| Defender for Containers provisioning ARC K8s Enabled | ΓÇó Azure Kubernetes Service Contributor<br>ΓÇó Kubernetes Extension Contributor<br>ΓÇó Contributor<br>ΓÇó Log Analytics Contributor | +| Defender for Containers provisioning Arc-enabled Kubernetes | ΓÇó Azure Kubernetes Service Contributor<br>ΓÇó Kubernetes Extension Contributor<br>ΓÇó Contributor<br>ΓÇó Log Analytics Contributor | | Defender for Containers provisioning Azure Policy Addon for Kubernetes | ΓÇó Kubernetes Extension Contributor<br>ΓÇó Contributor<br>ΓÇó Azure Kubernetes Service Contributor | | Defender for Containers provisioning Policy extension for Arc-enabled Kubernetes | ΓÇó Azure Kubernetes Service Contributor<br>ΓÇó Kubernetes Extension Contributor<br>ΓÇó Contributor | |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you can find them in the [What's | [Additional scopes added to existing Azure DevOps Connectors](#additional-scopes-added-to-existing-azure-devops-connectors) | May 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | June 2023 | | [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) | June 2023+| [Release of containers vulnerability assessment runtime recommendation powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM](#release-of-containers-vulnerability-assessment-runtime-recommendation-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-cspm) | June 2023 ### Release of containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM Learn more about [Microsoft Defender Vulnerability Management (MDVM)](/microsoft **Estimated date for change: May 2023** -Defender for DevOps will be adding an additional scope to the already existing Azure DevOps (ADO) application. +Defender for DevOps will be adding more scopes to the already existing Azure DevOps (ADO) application. -The scopes that will be added include: +The scopes that are set to be added include: - Advance Security management: `vso.advsec_manage`; Needed to enable, disable and manage, GitHub Advanced Security for ADO. Customers will have until June 30, 2023 to resolve this issue. After this date, **Estimated date for change: June 2023** -With Agentless Container Posture capabilities available in Defender CSPM, the agent-based discovery capabilities are set to be retired in June 2023. If you currently use container capabilities within Defender CSPM, please make sure that the [relevant extensions](concept-agentless-containers.md#enable-extension-for-agentless-container-posture-for-cspm) are enabled before this date to continue receiving container-related value of the new agentless capabilities such as container-related attack paths, insights, and inventory. +With Agentless Container Posture capabilities available in Defender CSPM, the agent-based discovery capabilities are set to be retired in June 2023. If you currently use container capabilities within Defender CSPM, make sure that the [relevant extensions](concept-agentless-containers.md#enable-extension-for-agentless-container-posture-for-cspm) are enabled before this date to continue receiving container-related value of the new agentless capabilities such as container-related attack paths, insights, and inventory. ++### Release of containers vulnerability assessment runtime recommendation powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM ++**Estimated date for change: June 2023** ++ A new container recommendation in Defender CSPM powered by MDVM is set to be released: ++|Recommendation | Description | Assessment Key| +|--|--|--| +| Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. 
Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 ++This new recommendation is set to replace the current recommendation of the same name, which is powered by Qualys, in Defender CSPM only (replacing assessment key 41503391-efa5-47ee-9282-4eff6131462c). ## Next steps |
defender-for-iot | Concept Sentinel Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-sentinel-integration.md | SecurityIncident For more information, see: +- [Integrations with Microsoft and partner services](integrate-overview.md) - [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../../sentinel/iot-solution.md) - [Detect threats out-of-the-box with Defender for IoT data](../../sentinel/iot-advanced-threat-monitoring.md#detect-threats-out-of-the-box-with-defender-for-iot-data) - [Create custom analytics rules to detect threats](../../sentinel/detect-threats-custom.md) |
defender-for-iot | Integrate Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md | Integrate Microsoft Defender for Iot with partner services to view partner data ## Axonius - |Name |Description |Support scope |Supported by |Learn more | |||||| |**Axonius Cybersecurity Asset Management** | Import and manage device inventory discovered by Defender for IoT in your Axonius instance. | - OT networks<br>- Locally managed sensors and on-premises management consoles | Axonius | [Axonius documentation](https://docs.axonius.com/docs/azure-defender-for-iot) | Integrate Microsoft Defender for Iot with partner services to view partner data | **Splunk** | Send Defender for IoT alerts to Splunk | - OT networks <br>- Cloud connected sensors | Microsoft | [Stream Defender for IoT cloud alerts to a partner SIEM](integrations/send-cloud-data-to-partners.md) | |**Splunk** | Send Defender for IoT alerts to Splunk | - OT networks<br>- Locally managed sensors and on-premises management consoles | Microsoft | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) | - ## Next steps > [!div class="nextstepaction"] |
defender-for-iot | Arcsight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/arcsight.md | This article describes how to send Microsoft Defender for IoT alerts to ArcSight Before you begin, make sure that you have the following prerequisites: -- Access to a Defender for IoT OT sensor as an Admin user.+- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md). ## Configure the ArcSight receiver type For more information, see the [ArcSight SmartConnectors Documentation](https://w This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to ArcSight. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md). -1. Sign in to your OT sensor console and select **Forwarding** on the left. +1. Sign in to your OT sensor console and select **Forwarding**. -1. Enter a meaningful name for your rule, and then define your rule details, including: +1. Select **+ Create new rule**. - - The minimal alert level. For example, if you select Minor, you are notified about all minor, major and critical incidents. - - The protocols you want to include in the rule. - - The traffic you want to include in the rule. +1. In the **Add forwarding rule** pane, define the rule parameters: ++ :::image type="content" source="../media/integrate-arcsight/create-new-forwarding-rule.png" alt-text="Screenshot of creating a new forwarding rule." lightbox="../media/integrate-arcsight/create-new-forwarding-rule.png"::: ++ | Parameter | Description | + ||| + | **Rule name** | Enter a meaningful name for your rule. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if you select Minor, you're notified about all minor, major and critical incidents. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | 1. In the **Actions** area, define the following values: - - **Server**: Select **ArcSight** - - **Host**: The ArcSight server address - - **Port**: The ArcSight server port - - **Timezone**: The timezone of the ArcSight server + | Parameter | Description | + ||| + | **Server** | Select **ArcSight**. | + | **Host** | The ArcSight server address. | + | **Port** | The ArcSight server port. | + | **Timezone** | Enter the timezone of the ArcSight server. | 1. Select **Save** to save your forwarding rule. ## Next steps -For more information, see: --- [Integrations with partner services](../integrate-overview.md)+- [Integrations with Microsoft and partner services](../integrate-overview.md) - [Forward alert information](../how-to-forward-alert-information-to-partners.md) - [Manage individual sensors](../how-to-manage-individual-sensors.md)- |
defender-for-iot | Logrhythm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/logrhythm.md | This article describes how to send Microsoft Defender for IoT alerts to LogRhyth Before you begin, make sure that you have the following prerequisites: -- Access to a Defender for IoT OT sensor as an Admin user.+- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md). ## Create a Defender for IoT forwarding rule This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to LogRhythm. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md). -1. Sign in to your OT sensor console and select **Forwarding** on the left. +1. Sign in to your OT sensor console and select **Forwarding**. -1. Enter a meaningful name for your rule, and then define your rule details, including: +1. Select **+ Create new rule**. - - The minimal alert level. For example, if you select Minor, you are notified about all minor, major and critical incidents. - - The protocols you want to include in the rule. - - The traffic you want to include in the rule. +1. In the **Add forwarding rule** pane, define the rule parameters: ++ :::image type="content" source="../media/integrate-logrhythm/create-new-forwarding-rule.png" alt-text="Screenshot of creating a new forwarding rule." lightbox="../media/integrate-logrhythm/create-new-forwarding-rule.png"::: ++ | Parameter | Description | + ||| + | **Rule name** | Enter a meaningful name for your rule. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if you select Minor, you're notified about all minor, major and critical incidents. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | 1. In the **Actions** area, define the following values: - - **Server**: Select a SYSLOG server option, such as **SYSLOG Server (LEEF format) - - **Host**: The IP or hostname of your LogRhythm collector - - **Port**: Enter **514** - - **Timezone**: Enter your timezone + | Parameter | Description | + ||| + | **Server** | Select a SYSLOG server option, such as **SYSLOG Server (LEEF format)**. | + | **Host** | The IP or hostname of your LogRhythm collector | + | **Port** | Enter 514. | + | **Timezone** | Enter your timezone. | -1. Select **Save** to save your forwarding rule. +1. Select **Save**. ## Configure LogRhythm to collect logs For more information, see the [LogRhythm documentation](https://docs.logrhythm.c ## Next steps -For more information, see: --- [Integrations with partner services](../integrate-overview.md)+- [Integrations with Microsoft and partner services](../integrate-overview.md) - [Forward alert information](../how-to-forward-alert-information-to-partners.md) |
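Because the LogRhythm rule above selects a **SYSLOG Server (LEEF format)** action, the collector receives a pipe-delimited LEEF header followed by tab-separated key=value attributes. As a rough sketch of how such a payload can be split on the receiving side — the sample message and its field names are hypothetical, not the literal attributes Defender for IoT emits:

```python
def parse_leef(message: str) -> dict:
    # Locate the LEEF header; the syslog transport may prepend a
    # priority value and timestamp before it.
    start = message.index("LEEF:")
    parts = message[start:].split("|", 5)
    record = {
        "leef_version": parts[0].removeprefix("LEEF:"),
        "vendor": parts[1],
        "product": parts[2],
        "product_version": parts[3],
        "event_id": parts[4],
    }
    # LEEF 1.0 attributes are tab-delimited key=value pairs.
    for pair in parts[5].split("\t"):
        if "=" in pair:
            key, value = pair.split("=", 1)
            record[key] = value
    return record

# Hypothetical sample for illustration only.
sample = "LEEF:1.0|CyberX|OT Sensor|1.0|Alert|sev=Minor\tmsg=PLC stop detected"
print(parse_leef(sample))
```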
defender-for-iot | Netwitness | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/netwitness.md | This article describes how to send Microsoft Defender for IoT alerts to RSA NetW Before you begin, make sure that you have the following prerequisites: -- Access to a Defender for IoT OT sensor as an Admin user.+- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md). - NetWitness configuration to collect events from sources that support Common Event Format (CEF). For more information, see the [CyberX Platform - RSA NetWitness CEF Parser Implementation Guide](https://community.netwitness.com//t5/netwitness-platform-integrations/cyberx-platform-rsa-netwitness-cef-parser-implementation-guide/ta-p/554364). Before you begin, make sure that you have the following prerequisites: This procedure describes how to create a forwarding rule from your OT sensor to send Defender for IoT alerts from that sensor to NetWitness. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. For more information, see [Forward alert information](../how-to-forward-alert-information-to-partners.md). -1. Sign in to your OT sensor console and select **Forwarding** on the left. +1. Sign in to your OT sensor console and select **Forwarding**. -1. Enter a meaningful name for your rule, and then define your rule details, including: +1. Select **+ Create new rule**. - - The minimal alert level. For example, if you select Minor, you are notified about all minor, major and critical incidents. - - The protocols you want to include in the rule. - - The traffic you want to include in the rule. +1. In the **Add forwarding rule** pane, define the rule parameters: ++ :::image type="content" source="../media/integrate-netwitness/create-new-forwarding-rule.png" alt-text="Screenshot of creating a new forwarding rule." lightbox="../media/integrate-netwitness/create-new-forwarding-rule.png"::: ++ | Parameter | Description | + ||| + | **Rule name** | Enter a meaningful name for your rule. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if you select Minor, you're notified about all minor, major and critical incidents. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | 1. In the **Actions** area, define the following values: - - **Server**: Select **NetWitness** - - **Host**: The NetWitness hostname - - **Port**: The NetWitness port - - **Timezone**: Enter your NetWitness timezone + | Parameter | Description | + ||| + | **Server** | Select **NetWitness**. | + | **Host** | The NetWitness hostname. | + | **Port** | The NetWitness port. | + | **Timezone** | Enter your NetWitness timezone. | 1. Select **Save** to save your forwarding rule. 
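The NetWitness side of this integration relies on the CEF parser referenced in the next steps below. As a rough orientation to the format, this sketch splits a generic CEF payload into its seven header fields plus extension pairs; the sample alert line is illustrative only, and the authoritative field mapping is in the CyberX Platform - RSA NetWitness CEF Parser Implementation Guide:

```python
import re

def parse_cef(message: str) -> dict:
    # CEF:Version|Vendor|Product|Version|EventClassID|Name|Severity|Extension
    start = message.index("CEF:")
    parts = message[start:].split("|", 7)
    keys = ["cef_version", "vendor", "product", "version",
            "event_class_id", "name", "severity"]
    record = dict(zip(keys, parts))
    record["cef_version"] = record["cef_version"].removeprefix("CEF:")
    # Extension pairs are space-separated key=value; values may contain
    # spaces, so only split at spaces that start a new key.
    extension = parts[7] if len(parts) > 7 else ""
    for match in re.finditer(r"(\w+)=((?:[^=\s]|\s(?!\w+=))+)", extension):
        record[match.group(1)] = match.group(2)
    return record

# Hypothetical sample for illustration only.
sample = "CEF:0|CyberX|OT Sensor|1.0|101|Unauthorized PLC stop|7|src=10.0.0.5 dst=10.0.0.9"
print(parse_cef(sample))
```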
## Next steps -For more information, see: - - [CyberX Platform - RSA NetWitness CEF Parser Implementation Guide](https://community.netwitness.com//t5/netwitness-platform-integrations/cyberx-platform-rsa-netwitness-cef-parser-implementation-guide/ta-p/554364)-- [Integrations with partner services](../integrate-overview.md)+- [Integrations with Microsoft and partner services](../integrate-overview.md) - [Forward alert information](../how-to-forward-alert-information-to-partners.md) |
defender-for-iot | On Premises Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/on-premises-sentinel.md | Before you start, make sure that you have the following prerequisites as needed: - If you want to encrypt the data you send to Microsoft Sentinel using TLS, make sure to generate a valid TLS certificate from the proxy server to use in your forwarding alert rule. - ## Set up forwarding alert rules 1. Sign in to your OT network sensor or on-premises management console and create a forwarding rule. For more information, see [Forward on-premises OT alert information](../how-to-forward-alert-information-to-partners.md). Select **Save** when you're done. Make sure to test the rule to make sure that i > [!IMPORTANT] > To forward alert details to multiple Microsoft Sentinel instances, make sure to create a separate forwarding rule for each instance. Don't use the **Add server** option in the same forwarding rule to send data to multiple Microsoft Sentinel instances. - ## Next steps > [!div class="nextstepaction"] > [Stream data from cloud-connected sensors](../iot-solution.md) > [!div class="nextstepaction"]-> [Investigate in Microsoft Sentinel](../../../sentinel/investigate-cases.md) +> [Investigate in Microsoft Sentinel](../../../sentinel/investigate-cases.md) ++For more information, see: +> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](../integrate-overview.md) |
defender-for-iot | Send Cloud Data To Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/send-cloud-data-to-partners.md | You'll need Azure Active Directory (Azure AD) defined as a service principal for - **Application (client) ID** - **Directory (tenant) ID** - 1. From the **Certificates & secrets** page, note the values of your client secret **Value** and **Secret ID**. ## Create an Azure event hub Once data starts getting ingested into Splunk from your event hub, query the dat ## Next steps -This article describes how to forward alerts generated by cloud-connected sensors only. If you're working on-premises, such as in air-gapped environments, you may be able to create a forwarding alert rule to forward alert data directly from an OT sensor or on-premises management console. +This article describes how to forward alerts generated by cloud-connected sensors only. If you're working on-premises, such as in air-gapped environments, you may be able to create a forwarding alert rule to forward alert data directly from an OT sensor or on-premises management console. -For more information, see [Integrations with Microsoft and partner services](../integrate-overview.md). +> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](../integrate-overview.md) |
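To verify that alerts are flowing before wiring up Splunk, you can read directly from the event hub with the service principal noted above. A minimal sketch using the `azure-eventhub` and `azure-identity` packages; every angle-bracketed value is a placeholder for the IDs and names collected earlier in this article:

```python
# pip install azure-eventhub azure-identity
from azure.eventhub import EventHubConsumerClient
from azure.identity import ClientSecretCredential

# Placeholders for the service principal and event hub created above.
credential = ClientSecretCredential(
    tenant_id="<directory-tenant-id>",
    client_id="<application-client-id>",
    client_secret="<client-secret-value>",
)

client = EventHubConsumerClient(
    fully_qualified_namespace="<namespace>.servicebus.windows.net",
    eventhub_name="<event-hub-name>",
    consumer_group="$Default",
    credential=credential,
)

def on_event(partition_context, event):
    # Each event body should be the JSON payload of one forwarded alert.
    print(partition_context.partition_id, event.body_as_str())

with client:
    # starting_position="-1" reads from the beginning of each partition.
    client.receive(on_event=on_event, starting_position="-1")
```

If events appear here but not in Splunk, the problem is on the Splunk add-on side rather than in the Defender for IoT forwarding.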
defender-for-iot | Service Now Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/service-now-legacy.md | Last updated 08/11/2022 >- [Service Graph Connector (SGC)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ddd4bf1b53f130104b5cddeeff7b1229) >- [Vulnerability Response (VR)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/463a7907c3313010985a1b2d3640dd7e). -This tutorial will help you learn how to integrate, and use ServiceNow with Microsoft Defender for IoT. +This article helps you learn how to integrate and use ServiceNow with Microsoft Defender for IoT. -The Defender for IoT integration with ServiceNow provides a new level of centralized visibility, monitoring, and control for the IoT and OT landscape. These bridged platforms enable automated device visibility and threat management to previously unreachable ICS & IoT devices. +The Defender for IoT integration with ServiceNow provides centralized visibility, monitoring, and control for the IoT and OT landscape. These bridged platforms enable automated device visibility and threat management to previously unreachable ICS and IoT devices. The ServiceNow Configuration Management Database (CMDB) is enriched and supplemented with a rich set of device attributes pushed by the Defender for IoT platform. This ensures comprehensive, continuous visibility into the device landscape, and lets you monitor and respond from a single pane of glass. -In this tutorial, you learn how to: +> [!NOTE] +> Microsoft Defender for IoT was formerly known as [CyberX](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments/). References to CyberX refer to Defender for IoT. ++In this article, you learn how to: > [!div class="checklist"] > In this tutorial, you learn how to: ## Prerequisites +Before you begin, make sure that you have the following prerequisites: + ### Software requirements -Access to ServiceNow and Defender for IoT -- ServiceNow Service Management version 3.0.2.+ - ServiceNow Service Management version 3.0.2. + - Defender for IoT patch 2.8.11.1 or above. -- Defender for IoT patch 2.8.11.1 or above.+- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](../roles-on-premises.md). > [!NOTE] >If you're already working with a Defender for IoT and ServiceNow integration and upgrade using the on-premises management console, the previous data from Defender for IoT sensors should be cleared from ServiceNow. Access to ServiceNow and Defender for IoT - **On-premises management console architecture**: Set up an on-premises management console to communicate with one instance of ServiceNow. The on-premises management console pushes sensor data to the Defender for IoT application using REST API. - To set up your system to work with an on-premises management console, you will need to disable the ServiceNow Sync, Forwarding Rules, and Proxy configurations on any sensors where they were set up. + To set up your system to work with an on-premises management console, you need to disable the ServiceNow Sync, Forwarding Rules, and Proxy configurations on any sensors where they were set up.
- **Sensor architecture**: If you want to set up your environment to include direct communication between sensors and ServiceNow, for each sensor define the ServiceNow Sync, Forwarding rules, and proxy configuration (if a proxy is needed). ## Download the Defender for IoT application in ServiceNow -To access the Defender for IoT application within ServiceNow, you will need to download the application from the ServiceNow application store. +To access the Defender for IoT application within ServiceNow, you need to download the application from the ServiceNow application store. **To access the Defender for IoT application in ServiceNow**: To access the Defender for IoT application within ServiceNow, you will need to d 1. Search for `Defender for IoT` or `CyberX IoT/ICS Management`. - :::image type="content" source="../media/tutorial-servicenow/search-results.png" alt-text="Screenshot of the search screen in ServiceNow."::: - 1. Select the application. - :::image type="content" source="../media/tutorial-servicenow/cyberx-app.png" alt-text="Screenshot of the search screen results."::: - 1. Select **Request App**. - :::image type="content" source="../media/tutorial-servicenow/sign-in.png" alt-text="Sign in to the application with your credentials."::: - 1. Sign in, and download the application. ## Set up Defender for IoT to communicate with ServiceNow -Configure Defender for IoT to push alert information to the ServiceNow tables. Defender for IoT alerts will appear in ServiceNow as security incidents. This can be done by defining a Defender for IoT forwarding rule to send alert information to ServiceNow. +Configure Defender for IoT to push alert information to the ServiceNow tables. Defender for IoT alerts appear in ServiceNow as security incidents. This can be done by defining a Defender for IoT forwarding rule to send alert information to ServiceNow. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. **To push alert information to the ServiceNow tables**: -1. Sign in to the on-premises management console. --1. Select **Forwarding**, in the left side pane. --1. Select the :::image type="icon" source="../media/tutorial-servicenow/plus-icon.png" border="false"::: button. +1. Sign in to the on-premises management console, and select **Forwarding**. - :::image type="content" source="../media/tutorial-servicenow/forwarding-rule.png" alt-text="Screenshot of the Create Forwarding Rule window."::: +1. Select **+** to create a new rule. -1. Add a rule name. +1. In the **Create Forwarding Rule** pane, define the following values: -1. Define criteria under which Defender for IoT will trigger the forwarding rule. Working with Forwarding rule criteria helps pinpoint and manage the volume of information sent from Defender for IoT to ServiceNow. The following options are available: -- - **Severity levels:** This is the minimum-security level incident to forward. For example, if **Minor** is selected, minor alerts, and any alert above this severity level will be forwarded. Levels are pre-defined by Defender for IoT. + | Parameter | Description | + |--|--| + | **Name** | Enter a meaningful name for the forwarding rule. 
| + | **Warning** | From the drop-down menu, select the minimal security level incident to forward. <br> For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded.| + | **Protocols** | To select a specific protocol, select **Specific**, and select the protocol for which this rule is applied. <br> By default, all the protocols are selected. | + | **Engines** | To select a specific security engine for which this rule is applied, select **Specific**, and select the engine. <br> By default, all the security engines are involved. | + | **System Notifications** | Forward the sensor's *online* and *offline* status. | + | **Alert Notifications** | Forward the sensor's alerts. | - - **Protocols:** Only trigger the forwarding rule if the traffic detected was running over specific protocols. Select the required protocols from the drop-down list or choose them all. +1. In the **Actions** area, select **Add**, and then select **ServiceNow**. For example: - - **Engines:** Select the required engines or choose them all. Alerts from selected engines will be sent. + :::image type="content" source="../media/tutorial-servicenow/forwarding-rule.png" alt-text="Screenshot of the Create Forwarding Rule window." lightbox="../media/tutorial-servicenow/forwarding-rule.png"::: 1. Verify that **Report Alert Notifications** is selected. -1. In the Actions section, select **Add** and then select **ServiceNow**. -- :::image type="content" source="../media/tutorial-servicenow/select-servicenow.png" alt-text="Select ServiceNow from the dropdown options."::: --1. Enter the ServiceNow action parameters: -- :::image type="content" source="../media/tutorial-servicenow/parameters.png" alt-text="Fill in the ServiceNow action parameters."::: - 1. In the **Actions** pane, set the following parameters: | Parameter | Description | |--|--|- | Domain | Enter the ServiceNow server IP address. | - | Username | Enter the ServiceNow server username. | - | Password | Enter the ServiceNow server password. | - | Client ID | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. | - | Client Secret | Enter the client secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. | - | Report Type | **Incidents**: Forward a list of alerts that are presented in ServiceNow with an incident ID and short description of each alert.<br /><br />**Defender for IoT Application**: Forward full alert information, including the sensor details, the engine, the source, and destination addresses. The information is forwarded to the Defender for IoT on the ServiceNow application. | + | **Domain** | Enter the ServiceNow server IP address. | + | **Username** | Enter the ServiceNow server username. | + | **Password** | Enter the ServiceNow server password. | + | **Client ID** | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. | + | **Client Secret** | Enter the client secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. | + | **Select Report Type** | **Incidents**: Forward a list of alerts that are presented in ServiceNow with an incident ID and short description of each alert.<br /><br />**Defender for IoT Application**: Forward full alert information, including the sensor details, the engine, and the source and destination addresses. The information is forwarded to the Defender for IoT application in ServiceNow.
| 1. Select **SAVE**. Defender for IoT alerts will now appear as incidents in ServiceNow. A token is needed to allow ServiceNow to communicate with Defender for IoT. -You'll need the `Client ID` and `Client Secret` that you entered when creating the Defender for IoT Forwarding rules. The Forwarding rules forward alert information to ServiceNow, and when configuring Defender for IoT to push device attributes to ServiceNow tables. +You need the `Client ID` and `Client Secret` that you entered when creating the Defender for IoT forwarding rules, which forward alert information to ServiceNow, and again when configuring Defender for IoT to push device attributes to ServiceNow tables. ## Send Defender for IoT device attributes to ServiceNow Configure Defender for IoT to push an extensive range of device attributes to th 1. Sign in to your Defender for IoT on-premises management console. -1. Select **System Settings**, and then **ServiceNow** from the on-premises management console Integration section. -- :::image type="content" source="../media/tutorial-servicenow/servicenow.png" alt-text="Screenshot of the select the ServiceNow button."::: +1. Select **System Settings**, and then **ServiceNow** from the **Management console integrations** section. 1. Enter the following sync parameters in the ServiceNow Sync dialog box. - :::image type="content" source="../media/tutorial-servicenow/sync.png" alt-text="Screenshot of the ServiceNow sync dialog box."::: + :::image type="content" source="../media/tutorial-servicenow/sync.png" alt-text="Screenshot of the ServiceNow sync dialog box." lightbox="../media/tutorial-servicenow/sync.png"::: Parameter | Description | |--|--|- | Enable Sync | Enable and disable the sync after defining parameters. | - | Sync Frequency (minutes) | By default, information is pushed to ServiceNow every 60 minutes. The minimum is 5 minutes. | - | ServiceNow Instance | Enter the ServiceNow instance URL. | - | Client ID | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. | - | Client Secret | Enter the Client Secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. | - | Username | Enter the username for this instance. | - | Password | Enter the password for this instance. | + | **Enable Sync** | Enable and disable the sync after defining parameters. | + | **Sync Frequency (minutes)** | By default, information is pushed to ServiceNow every 60 minutes. The minimum is 5 minutes. | + | **ServiceNow Instance** | Enter the ServiceNow instance URL. | + | **Client ID** | Enter the Client ID you received for Defender for IoT in the **Application Registries** page in ServiceNow. | + | **Client Secret** | Enter the Client Secret string you created for Defender for IoT in the **Application Registries** page in ServiceNow. | + | **Username** | Enter the username for this instance. | + | **Password** | Enter the password for this instance. | 1. Select **SAVE**. Verify that the on-premises management console is connected to the ServiceNow instance by reviewing the Last Sync date. ## Set up the integrations using an HTTPS proxy This article describes the device attributes and alert information presented in 3. Navigate to **Inventory** or **Alert**.
- [:::image type="content" source="../media/tutorial-servicenow/alert-list.png" alt-text="Screenshot of the Inventory or Alert.":::](../media/tutorial-servicenow/alert-list.png#lightbox) - ## View connected devices To view connected devices: 1. Select a device, and then select the **Appliance** listed for that device. - :::image type="content" source="../media/tutorial-servicenow/appliance.png" alt-text="Screenshot of the desired appliance from the list."::: - 1. In the **Device Details** dialog box, select **Connected Devices**. -## Clean up resources --There are no resources to clean up. - ## Next steps -In this article, you learned how to get started with the ServiceNow integration. Continue on to learn about our [Cisco integration](../tutorial-forescout.md). +> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](../integrate-overview.md) |
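The Client ID and Client Secret from the ServiceNow **Application Registries** page can be sanity-checked outside the integration by requesting an OAuth token directly. A hedged sketch against ServiceNow's standard `/oauth_token.do` endpoint; the instance name and credentials are placeholders:

```python
# pip install requests
import requests

INSTANCE = "https://<your-instance>.service-now.com"

response = requests.post(
    f"{INSTANCE}/oauth_token.do",
    data={
        "grant_type": "password",
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
        "username": "<servicenow-username>",
        "password": "<servicenow-password>",
    },
    timeout=30,
)
response.raise_for_status()
token = response.json()["access_token"]
print("Received access token of length", len(token))
```

If this call fails, fix the Application Registry entry before troubleshooting the forwarding rule or the sync settings.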
defender-for-iot | Configure Mirror Rspan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-rspan.md | This article describes a sample procedure for configuring [RSPAN](../best-practi - The remote VLAN increases the bandwidth on the trunked port by the amount of traffic being mirrored from the source session. Make sure that your switch's trunk port can support the increased bandwidth. +> [!CAUTION] +> Increased bandwidth, whether due to large amounts of throughput or a large number of switches, can cause a switch to fail and bring down the entire network. +> When configuring traffic mirroring with RSPAN, make sure to consider the following: +> - The number of access/distribution switches that you configure with RSPAN. +> - The corresponding throughput for the remote VLAN on each switch. + ## Configure the source switch On your source switch: |
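For orientation, the source-switch side of this scenario typically looks something like the following Cisco IOS-style sketch. The VLAN ID and interface name are assumptions for illustration only; always follow your switch vendor's documentation.

```
! Illustrative sketch only -- VLAN 100 and the interface are assumptions.
! Define the RSPAN VLAN that carries mirrored traffic to the remote switch.
vlan 100
 remote-span
!
! Mirror received traffic from the monitored port into the RSPAN VLAN.
monitor session 1 source interface GigabitEthernet1/0/1 rx
monitor session 1 destination remote vlan 100
```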
defender-for-iot | Tutorial Clearpass | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md | Title: Integrate ClearPass with Microsoft Defender for IoT -description: In this tutorial, you will learn how to integrate Microsoft Defender for IoT with ClearPass. +description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with ClearPass. Last updated 02/07/2022 -This tutorial will help you learn how to integrate ClearPass Policy Manager (CPPM) with Microsoft Defender for IoT. +This article helps you learn how to integrate ClearPass Policy Manager (CPPM) with Microsoft Defender for IoT. The Defender for IoT platform delivers continuous ICS threat monitoring and device discovery, combining a deep embedded understanding of industrial protocols, devices, and applications with ICS-specific behavioral anomaly detection, threat intelligence, risk analytics, and automated threat modeling. -Defender for IoT detects, discovers, and classifies OT and ICS endpoints, and share information directly with ClearPass using the ClearPass Security Exchange framework, and the open API. +Defender for IoT detects, discovers, and classifies OT and ICS endpoints, and shares information directly with ClearPass using the ClearPass Security Exchange framework and the OpenAPI. Defender for IoT automatically updates the ClearPass Policy Manager Endpoint Database with endpoint classification data and several custom security attributes. The integration allows for the following: -- Viewing ICS, and SCADA security threats identified by Defender for IoT security engines.+- Viewing ICS and SCADA security threats identified by Defender for IoT security engines. -- Viewing device inventory information discovered by the Defender for IoT sensor. The sensor delivers centralized visibility of all network devices, and endpoints across the IT, and OT infrastructure. From here a centralized endpoint and edge security policy can be defined and administered in the ClearPass system.+- Viewing device inventory information discovered by the Defender for IoT sensor. The sensor delivers centralized visibility of all network devices and endpoints across the IT and OT infrastructure. From here, a centralized endpoint and edge security policy can be defined and administered in the ClearPass system. -In this tutorial, you learn how to: +In this article, you learn how to: > [!div class="checklist"] > In this tutorial, you learn how to: ## Prerequisites +Before you begin, make sure that you have the following prerequisites: + ### Aruba ClearPass requirements -CPPM runs on hardware appliances with pre-installed software or as a Virtual Machine under the following hypervisors. Hypervisors that run on a client computer such as VMware Player are not supported. +CPPM runs on hardware appliances with pre-installed software or as a Virtual Machine under the following hypervisors. Hypervisors that run on a client computer such as VMware Player aren't supported. - VMware ESXi 5.5, 6.0, 6.5, 6.6 or higher. CPPM runs on hardware appliances with pre-installed software or as a Virtual Mac - Defender for IoT version 2.5.1 or higher. -- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).+- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
## Create a ClearPass API user -As part of the communications channel between the two products, Defender for IoT uses many APIs (both TIPS, and REST). Access to the TIPS APIs is validated via username, and password combination credentials. This user ID needs to have minimum levels of access. Do not use a Super Administrator profile, use API Administrator as shown below. +As part of the communications channel between the two products, Defender for IoT uses many APIs (both TIPS and REST). Access to the TIPS APIs is validated via username and password combination credentials. This user ID needs to have minimum levels of access. Don't use a Super Administrator profile, but instead use API Administrator as shown below. **To create a ClearPass API user**: -1. In the left pane, select **Administration** > **Users and Privileges**, and select **ADD**. +1. Select **Administration** > **Users and Privileges**, and then select **ADD**. 1. In the **Add Admin User** dialog box, set the following parameters: - :::image type="content" source="media/tutorial-clearpass/policy-manager.png" alt-text="Screenshot of the administrator user's dialog box view."::: - | Parameter | Description | |--|--| | **UserID** | Enter the user ID. | In order to secure access to the REST API for the API Client, create a restricte 1. Set all of the options to **No Access** except for the following: -| Parameter | Description | |--|--| | **API Services** | Set to **Allow Access** | | **Policy Manager** | Set the following: <br />- **Dictionaries**: **Attributes** set to **Read, Write, Delete**<br />- **Dictionaries**: **Fingerprintsset** to **Read, Write, Delete**<br />- **Identity**: **Endpoints** set to **Read, Write, Delete** | --+ | Parameter | Description | + |--|--| + | **API Services** | Set to **Allow Access** | + | **Policy Manager** | Set the following: <br />- **Dictionaries**: **Attributes** set to **Read, Write, Delete**<br />- **Dictionaries**: **Fingerprints** set to **Read, Write, Delete**<br />- **Identity**: **Endpoints** set to **Read, Write, Delete** | ## Create a ClearPass OAuth API client 1. In the main window, select **Administrator** > **API Services** > **API Clients**. -1. In the Create API Client tab, set the following parameters: +1. In the **Create API Client** tab, set the following parameters: - **Operating Mode**: This parameter is used for API calls to ClearPass. Select **ClearPass REST API – Client**. In order to secure access to the REST API for the API Client, create a restricte - **Grant Type**: Set **Client credentials (grant_type = client_credentials)**. -1. Ensure you record the **Client Secret** and the client ID. For example, `defender-rest`. -- :::image type="content" source="media/tutorial-clearpass/aruba.png" alt-text="Screenshot of the Create API Client."::: +1. Ensure you record the **Client Secret** and the **Client ID**. For example, `defender-rest`. 1. In the Policy Manager, ensure you've collected the following list of information before proceeding to the next step. To enable viewing the device inventory in ClearPass, you need to set up Defender 1. Set the following parameters: - - **Enable Sync:** Enable the sync between Defender for IoT and ClearPass -- - **Sync Frequency:** Define the sync frequency in minutes. The default is 60 minutes. The minimum is 5 minutes. -- - **ClearPass IP Address:** The IP address of the ClearPass system with which Defender for IoT is in sync.
-- - **Client ID:** The client ID that was created on ClearPass for syncing the data with Defender for IoT. -- - **Client Secret:** The client secret that was created on ClearPass for syncing the data with Defender for IoT. -- - **Username:** The ClearPass administrator user. -- - **Password:** The ClearPass administrator password. + | Parameter | Description | + |--|--| + | **Enable Sync** | Toggle on to enable the sync between Defender for IoT and ClearPass. | + | **Sync Frequency (minutes)** | Define the sync frequency in minutes. The default is 60 minutes. The minimum is 5 minutes. | + | **ClearPass Host** | The IP address of the ClearPass system with which Defender for IoT is in sync. | + | **Client ID** | The client ID that was created on ClearPass for syncing the data with Defender for IoT. | + | **Client Secret** | The client secret that was created on ClearPass for syncing the data with Defender for IoT. | + | **Username** | The ClearPass administrator user. | + | **Password** | The ClearPass administrator password. | 1. Select **Save**. To enable viewing the device inventory in ClearPass, you need to set up Defender To enable viewing the alerts discovered by Defender for IoT in Aruba, you need to set the forwarding rule. This rule defines which information about the ICS and SCADA security threats identified by Defender for IoT security engines is sent to ClearPass. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. **To define a ClearPass forwarding rule on the Defender for IoT sensor**: -1. In the Defender for IoT sensor, select **Forwarding** and then select **Create new rule**. +1. Sign in to the sensor, and select **Forwarding**. -1. Define a rule name. +1. Select **+ Create new rule**. -1. Define the rule conditions. +1. In the **Add forwarding rule** pane, define the rule parameters: -1. In the Actions section, select **ClearPass**. + :::image type="content" source="media/tutorial-clearpass/create-rule.png" alt-text="Screenshot of how to create a Forwarding Rule." lightbox="media/tutorial-clearpass/create-rule.png"::: - :::image type="content" source="media/tutorial-clearpass/create-rule.png" alt-text="Screenshot of, create a Forwarding Rule window."::: + | Parameter | Description | + |--|--| + | **Rule name** | The forwarding rule name. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | ++1. In the **Actions** area, define the following values: ++ | Parameter | Description | + |--|--| + | **Server** | Select **ClearPass**. | + | **Host** | Define the ClearPass server IP to send alert information. | + | **Port** | Define the ClearPass port to send alert information. | ++1. Configure which alert information you want to forward: ++ | Parameter | Description | + |--|--| + | **Report illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit).
| + | **Report unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. | + | **Report unauthorized PLC stop** | PLC stop (downtime). | + | **Report malware related alerts** | Industrial malware attempts, such as TRITON, NotPetya. | + | **Report unauthorized scanning** | Unauthorized scanning (potential reconnaissance). | -1. In the **Host** field, define the ClearPass server IP and port to send alert information. -1. Define which alert information you want to forward. - - **Report illegal function codes:** Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). - - **Report unauthorized PLC programming and firmware updates:** Unauthorized PLC changes. - - **Report unauthorized PLC stop:** PLC stop (downtime). - - **Report malware related alerts:** Industrial malware attempts, such as TRITON, NotPetya. - - **Report unauthorized scanning:** Unauthorized scanning (potential reconnaissance) 1. Select **Save**. ## Monitor ClearPass and Defender for IoT communication Once the sync has started, endpoint data is populated directly into the Policy M 1. Select **System settings** > **Integrations** > **ClearPass**. - :::image type="content" source="media/tutorial-clearpass/last-sync.png" alt-text="Screenshot of the view the time and date of your last sync."::: --If Sync is not working, or shows an error, then it's likely you've missed capturing some of the information. Recheck the data recorded, additionally you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**. + :::image type="content" source="media/tutorial-clearpass/last-sync.png" alt-text="Screenshot of the view the time and date of your last sync." lightbox="media/tutorial-clearpass/last-sync.png"::: -Below is an example of API logs between Defender for IoT and ClearPass. +If Sync isn't working or shows an error, then it's likely you've missed capturing some of the information. Recheck the data recorded. +Additionally, you can view the API calls between Defender for IoT and ClearPass from **Guest** > **Administration** > **Support** > **Application Log**. -## Clean up resources +For example, API logs between Defender for IoT and ClearPass: -There are no resources to clean up. ## Next steps -In this article, you learned how to get started with the ClearPass integration. Continue on to learn about our [CyberArk integration](./tutorial-cyberark.md). +> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](integrate-overview.md) |
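You can confirm that the OAuth API client created above works before enabling the sync by requesting a token directly from ClearPass. A minimal sketch against the ClearPass REST API's `/api/oauth` endpoint, reusing the example `defender-rest` client ID from this article; the host and secret are placeholders:

```python
# pip install requests
import requests

CPPM_HOST = "https://<clearpass-host>"

response = requests.post(
    f"{CPPM_HOST}/api/oauth",
    json={
        "grant_type": "client_credentials",
        "client_id": "defender-rest",        # example client ID from above
        "client_secret": "<client-secret>",
    },
    timeout=30,
)
response.raise_for_status()
# The response includes access_token, token_type, and expires_in.
print(response.json())
```

A failed call here usually means the API client's operating mode or grant type doesn't match the values described earlier, which is also a common cause of sync errors.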
defender-for-iot | Tutorial Cyberark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-cyberark.md | Title: Integrate CyberArk with Microsoft Defender for IoT -description: In this tutorial, you will learn how to integrate Microsoft Defender for IoT with CyberArk. +description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with CyberArk. Last updated 02/08/2022 -This tutorial will help you learn how to integrate, and use CyberArk with Microsoft Defender for IoT. +This article helps you learn how to integrate and use CyberArk with Microsoft Defender for IoT. -Defender for IoT delivers ICS, and IIoT cybersecurity platform with ICS-aware threat analytics, and machine learning. +Defender for IoT delivers an ICS and IIoT cybersecurity platform with ICS-aware threat analytics and machine learning. Threat actors are using compromised remote access credentials to access critical infrastructure networks via remote desktop and VPN connections. By using trusted connections, this approach easily bypasses any OT perimeter security. Credentials are typically stolen from privileged users, such as control engineers and partner maintenance personnel, who require remote access to perform daily tasks. The Defender for IoT integration with CyberArk allows you to: - Reduce OT risks from unauthorized remote access -- Provide continuous monitoring, and privileged access security for OT+- Provide continuous monitoring and privileged access security for OT - Enhance incident response, threat hunting, and threat modeling -The Defender for IoT appliance is connected to the OT network via a SPAN port (mirror port) on network devices such as switches, and routers via a one-way (inbound) connection to the dedicated network interfaces on the Defender for IoT appliance. +The Defender for IoT appliance is connected to the OT network via a SPAN port (mirror port) on network devices, such as switches and routers, through a one-way (inbound) connection to the dedicated network interfaces on the Defender for IoT appliance. A dedicated network interface is also provided in the Defender for IoT appliance for centralized management and API access. This interface is also used for communicating with the CyberArk PSM solution that is deployed in the data center of the organization to manage privileged users and secure remote access connections. -In this tutorial, you learn how to: +In this article, you learn how to: > [!div class="checklist"] > - Configure PSM in CyberArk > - Enable the integration in Defender for IoT > - View and manage detections-> - Stop the Integration +> - Stop the integration ## Prerequisites +Before you begin, make sure that you have the following prerequisites: + - CyberArk version 2.0. -- Verify that you have CLI access to all Defender for IoT appliances in your enterprise.+- Verify that you have [CLI](references-work-with-defender-for-iot-cli-commands.md) access to all Defender for IoT appliances in your enterprise. - An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/). +- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). + ## Configure PSM CyberArk CyberArk must be configured to allow communication with Defender for IoT. This communication is accomplished by configuring PSM. **To configure PSM**: -1.
Locate, open the `c:\Program Files\PrivateArk\Server\dbparam.xml` file. +1. Locate and open the `c:\Program Files\PrivateArk\Server\dbparam.xml` file. 1. Add the following parameters: - `[SYSLOG]` <br> - `UseLegacySyslogFormat=Yes` <br> - `SyslogTranslatorFile=Syslog\CyberX.xsl` <br> - `SyslogServerIP=<CyberX Server IP>` <br> - `SyslogServerProtocol=UDP` <br> - `SyslogMessageCodeFilter=319,320,295,378,380` <br> + `[SYSLOG]` + `UseLegacySyslogFormat=Yes` + `SyslogTranslatorFile=Syslog\CyberX.xsl` + `SyslogServerIP=<CyberX Server IP>` + `SyslogServerProtocol=UDP` + `SyslogMessageCodeFilter=319,320,295,378,380` -1. Save the file, and close it. +1. Save the file, then close it. 1. Place the Defender for IoT syslog configuration file `CyberX.xsl` in `c:\Program Files\PrivateArk\Server\Syslog\CyberX.xsl`. 1. Open the **Server Central Administration**. -1. Select the **Stop Traffic Light**, to stop the server. -- :::image type="content" source="media/tutorial-cyberark/server.png" alt-text="Screenshot of the server central administration stop traffic light."::: +1. Select the :::image type="icon" source="media/tutorial-cyberark/stoplight.png" border="false"::: **Stop Traffic Light** to stop the server. 1. Select the **Start Traffic Light** to start the server. ## Enable the integration in Defender for IoT -In order to enable the integration, Syslog Server will need to be enabled in the Defender for IoT management console. By default, the Syslog Server listens to the IP address of the system using port 514 UDP. +To enable the integration, Syslog Server needs to be enabled in the Defender for IoT on-premises management console. By default, the Syslog Server listens to the IP address of the system using port 514 UDP. -**To configure the Defender for IoT**: +**To configure Defender for IoT**: -1. In Defender for IoT management console, navigate to **System Settings**. +1. Sign in to your Defender for IoT on-premises management console, then navigate to **System Settings**. 1. Toggle the Syslog Server to **On**. :::image type="content" source="media/tutorial-cyberark/toggle.png" alt-text="Screenshot of the syslog server toggled to on."::: -1. (Optional) Change the port by signing in to the system via the CLI, and navigate to `/var/cyberx/properties/syslog.properties`, and change `listener: 514/udp`. +1. (Optional) Change the port by signing in to the system via the CLI, navigating to `/var/cyberx/properties/syslog.properties`, and then changing the `listener: 514/udp` setting. ## View and manage detections -The integration between Microsoft Defender for IoT, and CyberArk PSM is performed via syslog messages. These messages are sent by the PSM solution to Defender for IoT, notifying Defender for IoT of any remote sessions, or verification failures. +The integration between Microsoft Defender for IoT and CyberArk PSM is performed via syslog messages. These messages are sent by the PSM solution to Defender for IoT, notifying Defender for IoT of any remote sessions or verification failures. -Once the Defender for IoT platform receives these messages from PSM, it correlates them with the data it sees in the network. Thus validating that any remote access connections to the network were generated by the PSM solution and not by an unauthorized user. +Once the Defender for IoT platform receives these messages from PSM, it correlates them with the data it sees in the network. This validates that any remote access connections to the network were generated by the PSM solution and not by an unauthorized user.
### View alerts -Whenever the Defender for IoT platform identifies remote sessions that haven't been authorized by PSM, it will issue an `Unauthorized Remote Session`. To facilitate immediate investigation, the alert also shows the IP addresses and names of the source and destination devices. +Whenever the Defender for IoT platform identifies remote sessions that haven't been authorized by PSM, it issues an `Unauthorized Remote Session` alert. To facilitate immediate investigation, the alert also shows the IP addresses and names of the source and destination devices. **To view alerts**: -1. Sign in to the management console. --1. Select **Alerts** from the left side panel. +1. Sign in to your on-premises management console, then select **Alerts**. 1. From the list of alerts, select the alert titled **Unauthorized Remote Session**. - :::image type="content" source="media/tutorial-cyberark/unauthorized.png" alt-text="The Unauthorized Remote Session alert."::: + :::image type="content" source="media/tutorial-cyberark/unauthorized.png" alt-text="The Unauthorized Remote Session alert." lightbox="media/tutorial-cyberark/unauthorized.png"::: ### Event timeline -Whenever PSM authorizes a remote connection, it is visible in the Defender for IoT Event Timeline page. The Event Timeline page shows a timeline of all alerts and notifications. +Whenever PSM authorizes a remote connection, it's visible in the Defender for IoT Event Timeline page. The Event Timeline page shows a timeline of all alerts and notifications. **To view the event timeline**: -1. Sign in to the Defender for IoT sensor. --1. Select **Event timeline** from the left side panel. +1. Sign in to your network sensor, then select **Event timeline**. 1. Locate any event titled PSM Remote Session. - ### Auditing & forensics -Administrators can audit, and investigate remote access sessions by querying the Defender for IoT platform via its built-in data mining interface. This information can be used to identify all remote access connections that have occurred including forensic details such as from, or to devices, protocols (RDP, or SSH), source, and destination users, time-stamps, and whether the sessions were authorized using PSM. +Administrators can audit and investigate remote access sessions by querying the Defender for IoT platform via its built-in data mining interface. This information can be used to identify all remote access connections that have occurred, including forensic details such as the source and destination devices, protocols (RDP or SSH), source and destination users, timestamps, and whether the sessions were authorized using PSM. **To audit and investigate**: -1. Sign in to the Defender for IoT sensor. --1. Select **Data mining** from the left side panel. +1. Sign in to your network sensor, then select **Data mining**. 1. Select **Remote Access**. At any point in time, you can stop the integration from communicating. **To stop the integration**: -1. In the Defender for IoT management console, navigate to the **System Settings** screen. +1. In the Defender for IoT on-premises management console, navigate to **System Settings**. 1. Toggle the Syslog Server option to **Off**. :::image type="content" source="media/tutorial-cyberark/toggle.png" alt-text="A view of the server status."::: -## Clean up resources --There are no resources to clean up. - ## Next steps -In this article, you learned how to get started with the CyberArk integration. Continue on to learn about our [Forescout integration](./tutorial-forescout.md).
--+> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](integrate-overview.md) |
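After toggling the Syslog Server on, you can confirm UDP reachability from the PSM host before relying on PSM's own messages. A minimal standard-library sketch that sends one generic RFC 3164-style test line to the console's listener; the console address is a placeholder, and real PSM events use the `CyberX.xsl` translator format rather than this test payload:

```python
import socket
from datetime import datetime, timezone

# Placeholder: the on-premises management console address and the port
# from /var/cyberx/properties/syslog.properties (514/udp by default).
CONSOLE = ("<management-console-ip>", 514)

timestamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
message = f"<13>{timestamp} psm-test: connectivity check"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message.encode("utf-8"), CONSOLE)
sock.close()
print("Sent test syslog datagram to", CONSOLE)
```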
defender-for-iot | Tutorial Forescout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-forescout.md | Title: Integrate Forescout with Microsoft Defender for IoT -description: In this tutorial, you'll learn how to integrate Microsoft Defender for IoT with Forescout. +description: In this tutorial, you learn how to integrate Microsoft Defender for IoT with Forescout. Last updated 02/08/2022 -> [!Note] -> References to CyberX refer to Microsoft Defender for IoT. +> [!NOTE] +> Microsoft Defender for IoT was formerly known as [CyberX](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments/). References to CyberX refer to Defender for IoT. -This tutorial will help you learn how to integrate Forescout with Microsoft Defender for IoT. +This article helps you learn how to integrate Forescout with Microsoft Defender for IoT. -Microsoft Defender for IoT delivers an ICS, and IoT cybersecurity platform. Defender for IoT is the only platform with ICS aware threat analytics, and machine learning. Defender for IoT provides: +Microsoft Defender for IoT delivers an ICS and IoT cybersecurity platform. Defender for IoT is the only platform with ICS-aware threat analytics and machine learning. Defender for IoT provides: -- Immediate insights about ICS the device landscape with an extensive range of details about attributes.+- Immediate insights about ICS and the device landscape with an extensive range of details about attributes. - ICS-aware deep embedded knowledge of OT protocols, devices, applications, and their behaviors. -- Immediate insights into vulnerabilities, and known zero-day threats.+- Immediate insights into vulnerabilities and known zero-day threats. - An automated ICS threat modeling technology to predict the most likely paths of targeted ICS attacks via proprietary analytics. The Forescout integration helps reduce the time required for industrial and critical infrastructure organizations to detect, investigate, and act on cyber threats. -- Use Microsoft Defender for IoT OT device intelligence to close the security cycle by triggering Forescout policy actions. For example, you can automatically send alert email to SOC administrators when specific protocols are detected, or when firmware details change.+- Use Microsoft Defender for IoT OT device intelligence to close the security cycle by triggering Forescout policy actions. For example, you can automatically send an alert email to SOC administrators when specific protocols are detected, or when firmware details change. - Correlate Defender for IoT information with other *Forescout eyeExtended* modules that oversee monitoring, incident management, and device control. -The Defender for IoT integration with the Forescout platform provides centralized visibility, monitoring, and control for the IoT, and OT landscape. These bridged platforms enable automated device visibility, management to ICS devices and, siloed workflows. The integration provides SOC analysts with multilevel visibility into OT protocols deployed in industrial environments. Information becomes available such as firmware, device types, operating systems, and risk analysis scores based on proprietary Microsoft Defender for IoT technologies. +The Defender for IoT integration with the Forescout platform provides centralized visibility, monitoring, and control for the IoT and OT landscape.
These bridged platforms enable automated device visibility, management to ICS devices, and siloed workflows. The integration provides SOC analysts with multilevel visibility into OT protocols deployed in industrial environments. Information becomes available, such as firmware, device types, operating systems, and risk analysis scores, based on proprietary Microsoft Defender for IoT technologies. -In this tutorial, you learn how to: +In this article, you learn how to: > [!div class="checklist"] > - Generate an access token In this tutorial, you learn how to: > - View device attributes in Forescout > - Create Microsoft Defender for IoT policies in Forescout -If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/). - ## Prerequisites -- Microsoft Defender for IoT version 2.4 or above +Before you begin, make sure that you have the following prerequisites: ++- Microsoft Defender for IoT version 2.4 or above - Forescout version 8.0 or above - A license for the Forescout eyeExtend module for the Microsoft Defender for IoT Platform. +- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). + ## Generate an access token Access tokens allow external systems to access data discovered by Defender for IoT. Access tokens allow that data to be used for external REST APIs, and over SSL connections. You can generate access tokens in order to access the Microsoft Defender for IoT REST API. To ensure communication from Defender for IoT to Forescout, you must generate an 1. Select **Generate token**. -1. Enter a token description in the **Description** field. -- :::image type="content" source="media/tutorial-forescout/new-forescout-token.png" alt-text="New access token"::: +1. In the **Description** field, add a short description regarding the purpose of the access token. For example: "integration with python script". 1. Select **Generate**. The token is then displayed in the dialog box. To ensure communication from Defender for IoT to Forescout, you must generate an 1. Select **Finish**. - :::image type="content" source="media/tutorial-forescout/forescout-access-token-added-successfully.png" alt-text="Finish adding token"::: - ## Configure the Forescout platform You can now configure the Forescout platform to communicate with a Defender for IoT sensor. **To configure the Forescout platform**: -1. On the Forescout platform, search for,and install **the Forescout eyeExtend module for CyberX**. +1. On the Forescout platform, search for and install **the Forescout eyeExtend module for CyberX**. 1. Sign in to the CounterACT console. You can now configure the Forescout platform to communicate with a Defender for 1. Navigate to **Modules** > **CyberX Platform**. - :::image type="content" source="media/tutorial-forescout/settings-for-module.png" alt-text="Microsoft Defender for IoT module settings"::: - 1. In the Server Address field, enter the IP address of the Defender for IoT sensor that will be queried by the Forescout appliance. 1. In the Access Token field, enter the access token that was generated earlier. You can now configure the Forescout platform to communicate with a Defender for ### Change sensors in Forescout -To make the Forescout platform, communicate with a different sensor, the configuration within Forescout has to be changed. 
+To make the Forescout platform communicate with a different sensor, the configuration within Forescout has to be changed. **To change sensors in Forescout**: To make the Forescout platform, communicate with a different sensor, the configu ## Verify communication -Once the connection has been configured, you'll need to confirm that the two platforms are communicating. +Once the connection has been configured, you need to confirm that the two platforms are communicating. **To confirm the two platforms are communicating**: Once the connection has been configured, you'll need to confirm that the two pla 1. Navigate to **System Settings** > **Access Tokens**. -The Used field will alert you if the connection between the sensor and the Forescout appliance is not working. If **N/A** is displayed, the connection is not working. If **Used** is displayed, it will indicate the last time an external call with this token was received. +The **Used** field alerts you if the connection between the sensor and the Forescout appliance isn't working. If **N/A** is displayed, the connection isn't working. If **Used** is displayed, it indicates the last time an external call with this token was received. ## View device attributes in Forescout -By integrating Defender for IoT with Forescout, you will be able to view different device's attributes that were detected by Defender for IoT, in the Forescout application. +By integrating Defender for IoT with Forescout, you can view device attributes detected by Defender for IoT in the Forescout application. ++**To view a device's attributes**: ++1. Sign in to the Forescout platform and then navigate to the **Asset Inventory**. ++1. Select the **CyberX Platform**. ++ To view additional details, from the **Device Inventory Hosts** section, right-click on a device. The host details dialog box opens with more information. The following table lists all of the attributes that are visible through the Forescout application: | Attribute | Description | |--|--| | **Authorized by Microsoft Defender for IoT** | A device detected on your network by Defender for IoT during the network learning period. |-| **Firmware** | The firmware details of the device. For example, model, and version details. | +| **Firmware** | The firmware details of the device. For example, model and version details. | | **Name** | The name of the device. | | **Operating System** | The operating system of the device. |-| **Type** | The type of device. For example, a PLC, Historian or Engineering Station. | +| **Type** | The type of device. For example, a PLC, Historian, or Engineering Station. | | **Vendor** | The vendor of the device. For example, Rockwell Automation. | | **Risk level** | The risk level calculated by Defender for IoT. | | **Protocols** | The protocols detected in the traffic generated by the device. | -**To view a device's attributes**: --1. Sign in to the Forescout platform and then navigate to the **Asset Inventory**. -- :::image type="content" source="media/tutorial-forescout/device-firmware-attributes-in-forescout.png" alt-text="View the firmware attributes."::: --1. Select the **CyberX Platform**. -- :::image type="content" source="media/tutorial-forescout/vendor-attributes-in-forescout.png" alt-text="View the vendors attributes."::: --### View more details --After viewing a device's attributes, you can see more details for each device such as Forescout compliance and policy information. --**To view additional details**: --1.
Sign in to the Forescout platform and then navigate to the **Asset Inventory**. --1. Select the **CyberX Platform**. --1. From the Device Inventory Hosts section, right-click on a device. The host details dialog box opens with additional information. - ## Create Microsoft Defender for IoT policies in Forescout -Forescout policies can be used to automate control and management of devices detected by Defender for IoT. For example, +Forescout policies can be used to automate control and management of devices detected by Defender for IoT. For example: - Automatically email the SOC administrators when specific firmware versions are detected. You can create custom policies in Forescout using Defender for IoT conditional p 1. Navigate to **Policy Conditions** > **Properties Tree**. -1. In the Properties Tree, expand the CyberX Platform folder. The Defender for IoT following properties are available. +1. In the Properties Tree, expand the **CyberX Platform** folder. The following Defender for IoT properties are available: - :::image type="content" source="media/tutorial-forescout/forescout-property-tree.png" alt-text="Properties"::: --## Clean up resources --There are no resources to clean up. + - Protocols + - Risk Level + - Authorized by CyberX + - Type + - Firmware + - Name + - Operating System + - Vendor ## Next steps -In this article, you learned how to get started with the Forescout integration. Continue on to learn about our [Palo Alto integration](./tutorial-palo-alto.md). --+> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](integrate-overview.md) |
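The following is a minimal sketch of how an external system, such as the Forescout module, can use the access token generated above against the sensor's REST API. The `/api/v1/devices` path and the plain `Authorization` header follow the sensor API's conventions, but the sensor address, token, and response field names here are placeholders — verify them against your sensor's API reference before relying on this.

```python
# Hedged sketch: query a Defender for IoT sensor's REST API with an access token.
import requests

SENSOR = "https://192.0.2.10"          # hypothetical sensor address
TOKEN = "<access-token-from-sensor>"   # generated under System Settings > Access Tokens

response = requests.get(
    f"{SENSOR}/api/v1/devices",
    headers={"Authorization": TOKEN},  # the sensor expects the raw token, not a Bearer scheme
    verify=False,                      # many sensors ship with self-signed certificates
    timeout=30,
)
response.raise_for_status()

# Each device record carries the attributes that Forescout surfaces,
# such as name, type, vendor, operating system, and risk level.
for device in response.json():
    print(device.get("name"), device.get("riskLevel"))
```

A call like this exercises the same token that the eyeExtend module uses, so it doubles as a connectivity check before configuring Forescout.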
defender-for-iot | Tutorial Fortinet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-fortinet.md | Title: Integrate Fortinet with Microsoft Defender for IoT -description: In this article, you'll learn how to integrate Microsoft Defender for IoT with Fortinet. +description: In this article, you learn how to integrate Microsoft Defender for IoT with Fortinet. Last updated 01/01/2023 -This tutorial will help you learn how to integrate, and use Fortinet with Microsoft Defender for IoT. +This article helps you learn how to integrate and use Fortinet with Microsoft Defender for IoT. -Microsoft Defender for IoT mitigates IIoT and ICS and SCADA risk with ICS-aware self-learning engines that deliver immediate insights about ICS devices, vulnerabilities, and threats. Defender for IoT accomplishes this without relying on agents, rules, signatures, specialized skills, or prior knowledge of the environment. +Microsoft Defender for IoT mitigates IIoT, ICS, and SCADA risk with ICS-aware self-learning engines that deliver immediate insights about ICS devices, vulnerabilities, and threats. Defender for IoT accomplishes this without relying on agents, rules, signatures, specialized skills, or prior knowledge of the environment. -Defender for IoT, and Fortinet have established a technological partnership that detects, and stop attacks on IoT, and ICS networks. +Defender for IoT and Fortinet have established a technological partnership that detects and stops attacks on IoT and ICS networks. -Fortinet, and Microsoft Defender for IoT prevent: +Fortinet and Microsoft Defender for IoT prevent: - Unauthorized changes to programmable logic controllers (PLC). -- Malware that manipulates ICS, and IoT devices via their native protocols.+- Malware that manipulates ICS and IoT devices via their native protocols. - Reconnaissance tools from collecting data. -- Protocol violations caused by misconfigurations, or malicious attackers.+- Protocol violations caused by misconfigurations or malicious attackers. -Defender for IoT detects anomalous behavior in IoT, and ICS networks and delivers that information to FortiGate, and FortiSIEM, as follows: +Defender for IoT detects anomalous behavior in IoT and ICS networks and delivers that information to FortiGate and FortiSIEM, as follows: - **Visibility:** The information provided by Defender for IoT gives FortiSIEM administrators visibility into previously invisible IoT and ICS networks. -- **Blocking malicious attacks:** FortiGate administrators can use the information discovered by Defender for IoT to create rules to stop anomalous behavior, regardless of whether that behavior is caused by chaotic actors, or misconfigured devices, before it causes damage to production, profits, or people.+- **Blocking malicious attacks:** FortiGate administrators can use the information discovered by Defender for IoT to create rules to stop anomalous behavior, regardless of whether that behavior is caused by chaotic actors or misconfigured devices, before it causes damage to production, profits, or people. -FortiSIEM, and Fortinet's multivendor security incident, and events management solution brings visibility, correlation, automated response, and remediation to a single scalable solution.
-Using a Business Services view, the complexity of managing network and security operations is reduced, freeing resources, improving breach detection. FortiSIEM provides cross correlation while applying machine learning, and UEBA to improve response, in order to stop breaches before they occur. +A Business Services view reduces the complexity of managing network and security operations, freeing resources and improving breach detection. FortiSIEM provides cross-correlation and applies machine learning and UEBA to improve response and stop breaches before they occur. -In this tutorial, you learn how to: +In this article, you learn how to: > [!div class="checklist"] > In this tutorial, you learn how to: > - Send Defender for IoT alerts to FortiSIEM > - Block a malicious source using the Fortigate firewall -If you do not already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/). - ## Prerequisites -There are no prerequisites for this tutorial. +Before you begin, make sure that you have the following prerequisites: ++- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). ++- Ability to create API keys in Fortinet. ## Create an API key in Fortinet An application programming interface (API) key is a uniquely generated code that | | -- | | **Username** | Enter the forwarding rule name. | | **Comments** | Enter the minimal security level incident to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded. |- | **Administrator Profile** | From the dropdown list, select the profile name that you have defined in the previous step. | + | **Administrator Profile** | From the dropdown list, select the profile name that you've defined in the previous step. | | **PKI Group** | Toggle the switch to **Disable**. | | **CORS Allow Origin** | Toggle the switch to **Enable**. |- | **Restrict login to trusted hosts** | Add the IP addresses of the sensors, and management consoles that will connect to FortiGate. | --When the API key is generated, save it as it will not be provided again. + | **Restrict login to trusted hosts** | Add the IP addresses of the sensors and on-premises management consoles that will connect to FortiGate. | +Save the API key when it's generated, as it will not be provided again. The bearer of the generated API key will be granted all access privileges assigned to the account. (A usage sketch for the key appears at the end of this entry.) ## Set a forwarding rule to block malware-related alerts The FortiGate firewall can be used to block suspicious traffic. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. **To set a forwarding rule to block malware-related alerts**: -1. Sign in to the Microsoft Defender for IoT Management Console. +1. Sign in to the Microsoft Defender for IoT sensor, and select **Forwarding**. -1. In the left pane, select **Forwarding**. +1. Select **+ Create new rule**.
- [:::image type="content" source="media/tutorial-fortinet/forwarding-view.png" alt-text="Screenshot of the Forwarding window option in a sensor.":::](media/tutorial-fortinet/forwarding-view.png#lightbox) +1. In the **Add forwarding rule** pane, define the rule parameters: -1. Select **Create Forwarding Rules** and define the following rule parameters. + :::image type="content" source="media/tutorial-fortinet/forward-rule.png" alt-text="Screenshot of the Forwarding window option in a sensor." lightbox="media/tutorial-fortinet/forward-rule.png"::: | Parameter | Description |- | | -- | - | **Name** | Enter a meaningful name for the forwarding rule. | - | **Select Severity** | From the drop-down menu, select the minimal security level incident to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded. | - | **Protocols** | To select a specific protocol, select **Specific**, and select the protocol for which this rule is applied. By default, all the protocols are selected. | - | **Engines** | To select a specific security engine for which this rule is applied, select **Specific**, and select the engine. By default, all the security engines are involved. | - | **System Notifications** | Forward the sensor's *online* and *offline* status. This option is only available if you have logged into the on-premises management console. | --1. In the Actions section, select **Add**, and then select **Send to FortiGate** from the drop-down menu. -- :::image type="content" source="media/tutorial-fortinet/fortigate.png" alt-text="Screenshot of the Add an action section of the Create Forwarding Rule window."::: --1. To configure the FortiGate forwarding rule, set the following parameters: + |--|--| + | **Rule name** | The forwarding rule name. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | - :::image type="content" source="media/tutorial-fortinet/configure.png" alt-text="Screenshot of the configure the Create Forwarding Rule window."::: +1. In the **Actions** area, define the following values: | Parameter | Description | |--|--|- | **Host** | Enter the FortiGate server IP address. | - | **API Key** | Enter the [API key](#create-an-api-key-in-fortinet) that you created in FortiGate. | + | **Server** | Select FortiGate. | + | **Host** | Enter the FortiGate server IP address. | + | **API key** | Enter the [API key](#create-an-api-key-in-fortinet) that you created in FortiGate. | | **Incoming Interface** | Enter the incoming firewall interface port. | | **Outgoing Interface** | Enter the outgoing firewall interface port. |- | **Configure**| Ensure a **√** is showing in the following options to enable blocking of suspicious sources via the FortiGate firewall: <br> - **Block illegal function codes**: Protocol violations - Illegal field value violating ICS protocol specification (potential exploit) <br /> - **Block unauthorized PLC programming / firmware updates**: Unauthorized PLC changes <br /> - **Block unauthorized PLC stop**: PLC stop (downtime) <br> - **Block malware-related alerts**: Blocking of the industrial malware attempts (TRITON, NotPetya, etc.).
<br> - **(Optional)** You can select the option for **Automatic blocking**. If Automatic Blocking is selected, blocking is executed automatically, and immediately. <br /> - **Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance) | -1. Select **Submit**. +1. Select the following options to enable blocking of suspicious sources via the FortiGate firewall: ++ | Parameter | Description | + |--|--| + | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). | + | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. | + | **Block unauthorized PLC stop** | PLC stop (downtime). | + | **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). | + | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). | ++1. Select **Save**. ## Block the source of suspicious alerts The source of suspicious alerts can be blocked in order to prevent further occur **To block the source of suspicious alerts**: -1. Sign in to the management console and select **Alerts** from the left side menu. +1. Sign in to the on-premises management console, then select **Alerts**. 1. Select the alert related to Fortinet integration. 1. To automatically block the suspicious source, select **Block Source**. - :::image type="content" source="media/tutorial-fortinet/block-source.png" alt-text="Screenshot of the Alert window."::: - 1. In the Please Confirm dialog box, select **OK**. ## Send Defender for IoT alerts to FortiSIEM Defender for IoT alerts provide information about an extensive range of security - Protocol deviations from protocol specifications -You can configure Defender for IoT to send alerts to the FortiSIEM server, where alert information is displayed in the Analytics window: -+You can configure Defender for IoT to send alerts to the FortiSIEM server, where alert information is displayed in the **ANALYTICS** window: -Each Defender for IoT alert is then parsed without any other configuration on the FortiSIEM, side and they are presented in the FortiSIEM as security events. The following event details appear by default: +Each Defender for IoT alert is then parsed without any other configuration on the FortiSIEM side, and presented in FortiSIEM as a security event. The following event details appear by default: +- Application Protocol +- Application Version +- Category Type +- Collector ID +- Count +- Device Time +- Event ID +- Event Name +- Event Parse Status You can then use Defender for IoT's Forwarding Rules to send alert information to FortiSIEM. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. **To use Defender for IoT's Forwarding Rules to send alert information to FortiSIEM**: -1. From the sensor, or management console left pane, select **Forwarding**. ++1. From the sensor console, select **Forwarding**. ++1. Select **+ Create new rule**. - [:::image type="content" source="media/tutorial-fortinet/forwarding-view.png" alt-text="Screenshot of the view of your forwarding rules in the Forwarding window.":::](media/tutorial-fortinet/forwarding-view.png#lightbox) +1.
In the **Add forwarding rule** pane, define the rule parameters: -2. Select **Create Forwarding Rules**, and define the rule's parameters. + :::image type="content" source="media/tutorial-fortinet/forwarding-view.png" alt-text="Screenshot of the view of your forwarding rules in the Forwarding window." lightbox="media/tutorial-fortinet/forwarding-view.png"::: | Parameter | Description | |--|--|- | **Name** | Enter a meaningful name for the forwarding rule. | - | **Select Severity** | Select the minimum security level incident to forward. For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded. | - | **Protocols** | To select a specific protocol, select **Specific**, and select the protocol for which this rule is applied. By default, all the protocols are selected. | - | **Engines** | To select a specific security engine for which this rule is applied, select **Specific** and select the engine. By default, all the security engines are involved. | - | **System Notifications** | Forward a sensor's *online*, or *offline* status. This option is only available if you have logged into the on-premises management console. | --3. In the actions section, select **Send to FortiSIEM**. -- :::image type="content" source="media/tutorial-fortinet/forward-rule.png" alt-text="Screenshot of the create a Forwarding Rule and select send to Fortinet."::: --4. Enter the FortiSIEM server details. + | **Rule name** | The forwarding rule name. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | - :::image type="content" source="media/tutorial-fortinet/details.png" alt-text="Screenshot of the add the FortiSIEm details to the forwarding rule."::: +1. In the **Actions** area, define the following values: | Parameter | Description |- | | -- | - | **Host** | Enter the FortiSIEM server IP address. | - | **Port** | Enter the FortiSIEM server port. | + |--|--| + | **Server** | Select FortiSIEM. | + | **Host** | Enter the FortiSIEM server IP address. | + | **Port** | Enter the FortiSIEM server port. | | **Timezone** | The time stamp for the alert detection. | -5. Select **Submit**. +1. Select **Save**. ## Block a malicious source using the Fortigate firewall -You can set policies to automatically block malicious sources in the FortiGate firewall using alerts in Defender for IoT. -+You can set policies to automatically block malicious sources in the FortiGate firewall, using alerts in Defender for IoT. For example, the following alert can block the malicious source: **To set a FortiGate firewall rule that blocks a malicious source**: 1. In FortiGate, [create an API key](#create-an-api-key-in-fortinet). -1. Sign in to the Defender for IoT sensor, or the management console, and select **Forwarding**, [set a forwarding rule that blocks malware-related alerts](#set-a-forwarding-rule-to-block-malware-related-alerts). +1. Sign in to the Defender for IoT sensor, or the on-premises management console, select **Forwarding**, and [set a forwarding rule that blocks malware-related alerts](#set-a-forwarding-rule-to-block-malware-related-alerts). -1.
In the Defender for IoT sensor, or the management console, and select **Alerts**, and [block a malicious source](#block-a-malicious-source-using-the-fortigate-firewall). +1. In the Defender for IoT sensor, or the on-premises management console, select **Alerts**, and [block a malicious source](#block-a-malicious-source-using-the-fortigate-firewall). 1. Navigate to the FortiGate **Administrator** window, and locate the malicious source address you blocked. - :::image type="content" source="media/tutorial-fortinet/administrator.png" alt-text="Screenshot of the FortiGate Administrator window view."::: + The blocking policy is automatically created and appears in the FortiGate IPv4 Policy window. - The blocking policy will be automatically created, and appears in the FortiGate IPv4 Policy window. + :::image type="content" source="media/tutorial-fortinet/policy.png" alt-text="Screenshot of the FortiGate IPv4 Policy window view." lightbox="media/tutorial-fortinet/policy.png"::: - :::image type="content" source="media/tutorial-fortinet/policy.png" alt-text="Screenshot of the FortiGate IPv4 Policy window view."::: +1. Select the policy and ensure that **Enable this policy** is toggled on. -1. Select the policy and ensure that Enable this policy is toggled to on position. -- :::image type="content" source="media/tutorial-fortinet/edit.png" alt-text="Screenshot of the FortiGate IPv4 Policy Edit view."::: + :::image type="content" source="media/tutorial-fortinet/edit.png" alt-text="Screenshot of the FortiGate IPv4 Policy Edit view." lightbox="media/tutorial-fortinet/edit.png"::: | Parameter | Description| |--|--| For example, the following alert can block the malicious source: | **Service** | The protocol or specific ports for the traffic. | | **Action** | The action that the firewall will perform. | -## Clean up resources --There are no resources to clean up. - ## Next steps -In this article, you learned how to get started with the Fortinet integration. Continue on to learn about our [Palo Alto integration](./tutorial-palo-alto.md) +> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](integrate-overview.md) |
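As a quick check that the API key created above works, the sketch below lists firewall policies over the FortiOS REST API with a bearer token. The `/api/v2/cmdb/firewall/policy` path follows FortiOS conventions but can vary by version, and the address and key here are placeholders — treat this as a hedged illustration, not an official sample.

```python
# Hedged sketch: list FortiGate firewall policies using the generated API key.
import requests

FORTIGATE = "https://203.0.113.1"     # hypothetical FortiGate address
API_KEY = "<api-key-from-fortigate>"  # created under the API key steps above

response = requests.get(
    f"{FORTIGATE}/api/v2/cmdb/firewall/policy",
    headers={"Authorization": f"Bearer {API_KEY}"},
    verify=False,  # adjust for your certificate setup
    timeout=30,
)
response.raise_for_status()

# Blocking policies created through the integration should appear here.
for policy in response.json().get("results", []):
    print(policy.get("policyid"), policy.get("name"), policy.get("action"))
```

Running this before and after triggering **Block Source** is a simple way to confirm that the integration created the expected policy.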
defender-for-iot | Tutorial Palo Alto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-palo-alto.md | -This tutorial will help you learn how to integrate, and use Palo Alto with Microsoft Defender for IoT. +This article helps you learn how to integrate and use Palo Alto with Microsoft Defender for IoT. Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo Alto's next-generation firewalls to enable faster, more efficient blocking of critical threats. The following integration types are available: - Send recommendations for blocking to the central management system: Defender for IoT to Panorama integration. -In this tutorial, you learn how to: +In this article, you learn how to: > [!div class="checklist"] > If you don't have an Azure subscription, create a [free account](https://azure.m ## Prerequisites -### Panorama permissions +Before you begin, make sure that you have the following prerequisites: - Confirmation by the Panorama Administrator to allow automatic blocking.+- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). ## Configure immediate blocking by a specified Palo Alto firewall In cases such as malware-related alerts, you can enable automatic blocking. Defender for IoT forwarding rules are utilized to send a blocking command directly to a specific Palo Alto firewall. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. When Defender for IoT identifies a critical threat, it sends an alert that includes an option of blocking the infected source. Selecting **Block Source** in the alert's details activates the forwarding rule, which sends the blocking command to the specified Palo Alto firewall. **To configure immediate blocking**: -1. In the left pane, select **Forwarding**. +1. Sign in to the sensor, and select **Forwarding**. -1. Select **Create rule**. +1. Select **Create new rule**. -1. From the Actions drop down menu, select **Send to Palo Alto NGFW**. +1. In the **Add forwarding rule** pane, define the rule parameters: -1. In the Actions pane, set the following parameters: + :::image type="content" source="media/tutorial-palo-alto/forwarding-rule.png" alt-text="Screenshot of creating the rules for your forwarding rule." lightbox="media/tutorial-palo-alto/forwarding-rule.png"::: - - **Host**: Enter the NGFW server IP address. - - **Port**: Enter the NGFW server port. - - **Username**: Enter the NGFW server username. - - **Password**: Enter the NGFW server password. - - **Configure**: Set up the following options to allow blocking of the suspicious sources by the Palo Alto firewall: - - **Block illegal function codes**: Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). - - **Block unauthorized PLC programming/firmware updates**: Unauthorized PLC changes. - - **Block unauthorized PLC stop**: PLC stop (downtime). - - **Block malware-related alerts**: Blocking of industrial malware attempts (TRITON, NotPetya, etc.). You can select the option of **Automatic blocking**.
In that case, the blocking is executed automatically and immediately. - - **Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance). + | Parameter | Description | + |--|--| + | **Rule name** | The forwarding rule name. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | - :::image type="content" source="media/tutorial-palo-alto/edit.png" alt-text="Screenshot of the Edit your Forwarding Rule screen."::: +1. In the **Actions** area, set the following parameters: -1. Select **Submit**. + | Parameter | Description | + |--|--| + | **Server** | Select Palo Alto NGFW. | + | **Host** | Enter the NGFW server IP address. | + | **Port** | Enter the NGFW server port. | + | **Username** | Enter the NGFW server username. | + | **Password** | Enter the NGFW server password. | ++1. Configure the following options to allow blocking of the suspicious sources by the Palo Alto firewall: ++ | Parameter | Description | + |--|--| + | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). | + | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. | + | **Block unauthorized PLC stop** | PLC stop (downtime). | + | **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. | + | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). | ++1. Select **Save**. You'll then need to block any suspicious source. **To block a suspicious source**: -1. Navigate to the **Alerts** pane, and select the alert related to the Palo Alto integration. --1. To automatically block the suspicious source, select **Block Source**. The **Please Confirm** dialog box appears. +1. Navigate to the **Alerts** page, and select the alert related to the Palo Alto integration. - :::image type="content" source="media/tutorial-palo-alto/unauthorized.png" alt-text="Screenshot of the Block Source button, to block the unauthorized source."::: +1. To automatically block the suspicious source, select **Block Source**. -1. Select **OK**. +1. In the **Please Confirm** dialog box, select **OK**. The suspicious source is now blocked by the Palo Alto firewall. ## Create Panorama blocking policies in Defender for IoT -Defender for IoT, and Palo Alto Network's integration automatically creates new policies in the Palo Alto Network's NMS, and Panorama. +The Defender for IoT and Palo Alto Networks integration automatically creates new policies in Panorama, the Palo Alto Networks network management system (NMS). This table shows which incidents this integration is intended for: |**Industrial malware found in the ICS network** | Malware that manipulates ICS devices using their native protocols, such as TRITON and Industroyer. Defender for IoT also detects IT malware that has moved laterally into the ICS and SCADA environment. For example, Conficker, WannaCry, and NotPetya.
| |**Scanning malware** | Reconnaissance tools that collect data about system configuration in a pre-attack phase. For example, the Havex Trojan scans industrial networks for devices using OPC, which is a standard protocol used by Windows-based SCADA systems to communicate with ICS devices. | -When Defender for IoT detects a pre-configured use case, the **Block Source** button is added to the alert. Then, when the CyberX user selects the **Block Source** button, Defender for IoT creates policies on Panorama by sending the predefined forwarding rule. +When Defender for IoT detects a pre-configured use case, the **Block Source** button is added to the alert. Then, when the Defender for IoT user selects the **Block Source** button, Defender for IoT creates policies on Panorama by sending the predefined forwarding rule. The policy is applied only when the Panorama administrator pushes it to the relevant NGFW in the network. In IT networks, there may be dynamic IP addresses. Therefore, for those subnets, the policy must be based on FQDN (DNS name) and not the IP address. Defender for IoT performs reverse lookup and matches devices that have dynamic IP addresses to their FQDNs (DNS names) every configured number of hours (a standalone reverse-lookup sketch appears at the end of this entry). -In addition, Defender for IoT sends an email to the relevant Panorama user to notify that a new policy created by Defender for IoT is waiting for the approval. The figure below presents the Defender for IoT-Panorama Integration Architecture. +In addition, Defender for IoT sends an email to the relevant Panorama user to notify that a new policy created by Defender for IoT is waiting for approval. The figure below presents the Defender for IoT and Panorama integration architecture. The first step in creating Panorama blocking policies in Defender for IoT is to configure DNS lookup. **To configure DNS lookup**: -1. In the console left pane, select **System settings** > **Network monitoring** > **DNS Reverse Lookup**. -1. Select **Add DNS server**. +1. Sign in to your OT sensor and select **System settings** > **Network monitoring** > **DNS Reverse Lookup**. ++1. Turn on the **Enabled** toggle to activate the lookup. + 1. In the **Schedule Reverse Lookup** field, define the scheduling options: - By specific times: Specify when to perform the reverse lookup daily. - By fixed intervals (in hours): Set the frequency for performing the reverse lookup.-1. In the **Number of Labels** field instruct Defender for IoT to automatically resolve network IP addresses to device FQDNs. <br /> To configure DNS FQDN resolution, add the number of domain labels to display. Up to 30 characters are displayed from left to right. -1. Add the following server details: - - **DNS Server Address**: Enter the IP address, or the FQDN of the network DNS Server. - - **DNS Server Port**: Enter the port used to query the DNS server. - - **Subnets**: Set the Dynamic IP address subnet range. The range that Defender for IoT reverses lookup their IP address in the DNS server to match their current FQDN name. +1. Select **+ Add DNS Server**, and then add the following details: | Parameter | Description | |--|--| | **DNS Server Address** | Enter the IP address or the FQDN of the network DNS Server. | | **DNS Server Port** | Enter the port used to query the DNS server. | | **Number of Labels** | To configure DNS FQDN resolution, add the number of domain labels to display. <br> Up to 30 characters are displayed from left to right.
| + | **Subnets** | Set the dynamic IP address subnet range. <br> This is the range whose IP addresses Defender for IoT looks up in reverse in the DNS server, to match them to their current FQDN names. | 1. To ensure your DNS settings are correct, select **Test**. The test ensures that the DNS server IP address and DNS server port are set correctly. -## Block suspicious traffic with the Palo Alto firewall --Suspicious traffic will need to be blocked with the Palo Alto firewall. You can block suspicious traffic through the use forwarding rules in Defender for IoT. --Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. --**To block suspicious traffic with the Palo Alto firewall using a Defender for IoT forwarding rule**: --1. In the left pane, select **Forwarding**. --1. Select **Create Forwarding Rule**. +1. Select **Save**. -1. From the **Actions** drop down menu, select **Send to Palo Alto Panorama**. +## Block suspicious traffic with the Palo Alto firewall -1. In the Actions pane, set the following parameters: +Suspicious traffic needs to be blocked with the Palo Alto firewall. You can block suspicious traffic by using forwarding rules in Defender for IoT. - - **Host**: Enter the Panorama server IP address. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. - - **Port**: Enter the Panorama server port. +1. Sign in to the sensor, and select **Forwarding**. - - **Username**: Enter the Panorama server username. +1. Select **Create new rule**. - - **Password**: Enter the Panorama server password. +1. In the **Add forwarding rule** pane, define the rule parameters: - - **Report Address**: Define how the blocking is executed, as follows: + :::image type="content" source="media/tutorial-palo-alto/edit.png" alt-text="Screenshot of creating the rules for your Palo Alto Panorama forwarding rule." lightbox="media/tutorial-palo-alto/forwarding-rule.png"::: - - **By IP Address**: Always creates blocking policies on Panorama based on the IP address. + | Parameter | Description | + |--|--| + | **Rule name** | The forwarding rule name. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | - - **By FQDN or IP Address**: Creates blocking policies on Panorama based on FQDN if it exists, otherwise by the IP Address. +1. In the **Actions** area, set the following parameters: - - **Email**: Set the email address for the policy notification email + | Parameter | Description | + |--|--| + | **Server** | Select Palo Alto Panorama. | + | **Host** | Enter the Panorama server IP address. | + | **Port** | Enter the Panorama server port. | + | **Username** | Enter the Panorama server username. | + | **Password** | Enter the Panorama server password. | + | **Report Addresses** | Define how the blocking is executed, as follows: <br><br> - **By IP Address**: Always creates blocking policies on Panorama based on the IP address.
<br> - **By FQDN or IP Address**: Creates blocking policies on Panorama based on FQDN if it exists, otherwise by the IP Address. | + | **Email** | Set the email address for the policy notification email. | > [!NOTE] > Make sure you have configured a Mail Server in Defender for IoT. If no email address is entered, Defender for IoT does not send a notification email. - - **Execute a DNS lookup upon alert detection (Checkbox)**: When the FQDN, or IP Address option is set in the Report Address. This checkbox is selected by default. If only the IP address is set, this option is disabled. -- - **Configure**: Set up the following options to allow blocking of the suspicious sources by the Palo Alto Panorama: -- - **Block illegal function codes**: Protocol violations - Illegal field value violating ICS, protocol specification (potential exploit). -- - **Block unauthorized PLC programming/firmware updates**: Unauthorized PLC changes. -- - **Block unauthorized PLC stop**: PLC stop (downtime). -- - **Block malware-related alerts**: Blocking of industrial malware attempts (TRITON, NotPetya, etc.). You can select the option of **Automatic blocking**. In that case, the blocking is executed automatically and immediately. +1. Configure the following options to allow blocking of the suspicious sources by the Palo Alto Panorama: + | Parameter | Description | + |--|--| + | **Block illegal function codes** | Protocol violations - Illegal field value violating ICS protocol specification (potential exploit). | + | **Block unauthorized PLC programming / firmware updates** | Unauthorized PLC changes. | + | **Block unauthorized PLC stop** | PLC stop (downtime). | + | **Block malware related alerts** | Blocking of industrial malware attempts (TRITON, NotPetya, etc.). <br><br> You can select the option of **Automatic blocking**. <br> In that case, the blocking is executed automatically and immediately. | + | **Block unauthorized scanning** | Unauthorized scanning (potential reconnaissance). | - - **Block unauthorized scanning**: Unauthorized scanning (potential reconnaissance). - :::image type="content" source="media/tutorial-palo-alto/details.png" alt-text="Screenshot of the Select action screen."::: --1. Select **Submit**. --You'll then need to block the suspicious source. +1. Select **Save**. -**To block the suspicious source**: +You'll then need to block any suspicious source. -1. In the **Alerts** pane, select the alert related to Palo Alto integration. The **Alert's Details** dialog box appears. +**To block a suspicious source**: - :::image type="content" source="media/tutorial-palo-alto/unauthorized.png" alt-text="Screenshot of the alert screen, select the one related to Palo Alto, and then select block source."::: +1. Navigate to the **Alerts** page, and select the alert related to the Palo Alto integration. 1. To automatically block the suspicious source, select **Block Source**. -1. Select **OK.** --## Clean up resources --There are no resources to clean up. +1. Select **OK**. ## Next step -In this article, you learned how to get started with the [Palo Alto integration](./tutorial-splunk.md). +> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](integrate-overview.md) |
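The DNS reverse lookup that the sensor schedules above is conceptually simple, and can be illustrated in isolation: map dynamic IP addresses back to FQDNs so that Panorama policies can target DNS names instead of changing addresses. The sketch below uses only the Python standard library; the subnet is hypothetical.

```python
# Minimal sketch of the reverse-lookup step performed for dynamic-IP subnets.
import ipaddress
import socket

SUBNET = ipaddress.ip_network("192.0.2.0/28")  # hypothetical dynamic-IP subnet

for ip in SUBNET.hosts():
    try:
        # gethostbyaddr returns (hostname, aliaslist, ipaddrlist); the
        # hostname is the FQDN a Panorama policy could be written against.
        fqdn, _, _ = socket.gethostbyaddr(str(ip))
        print(f"{ip} -> {fqdn}")
    except (socket.herror, socket.gaierror):
        # No PTR record: a blocking policy for this host falls back to the IP.
        print(f"{ip} -> no FQDN, block by IP address")
```

This mirrors the **By FQDN or IP Address** report option: FQDN when the reverse lookup succeeds, otherwise IP address.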
defender-for-iot | Tutorial Qradar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-qradar.md | Integrating with QRadar supports: - Forwarding Defender for IoT alerts to IBM QRadar for unified IT and OT security monitoring and governance. -- An overview of both IT and OT environments, allowing you to detect, and respond to multi-stage attacks that often cross IT, and OT boundaries.+- An overview of both IT and OT environments, allowing you to detect and respond to multi-stage attacks that often cross IT and OT boundaries. - Integrating with existing SOC workflows. -If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +## Prerequisites ++- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). ++- Access to a Defender for IoT OT on-premises management console as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). ++- Access to the QRadar Admin area. ## Configure Syslog listener for QRadar If you don't have an Azure subscription, create a [free account](https://azure.m 1. Sign in to QRadar and select **Admin** > **Data Sources**. -1. In the Data Sources window, select **Log Sources**. For example: +1. In the Data Sources window, select **Log Sources**. - [:::image type="content" source="media/tutorial-qradar/log.png" alt-text="Screenshot of selecting a log sources from the available options.":::](media/tutorial-qradar/log.png#lightbox) --1. In the **Modal** window, select **Add**. For example: -- [:::image type="content" source="media/tutorial-qradar/modal.png" alt-text="Screenshot of after selecting Syslog the modal window opens.":::](media/tutorial-qradar/modal.png#lightbox) +1. In the **Modal** window, select **Add**. 1. In the **Add a log source** dialog box, define the following parameters: - - **Log Source Name**: `<Sensor name>` -- - **Log Source Description**: `<Sensor name>` -- - **Log Source Type**: `Universal LEEF` -- - **Protocol Configuration**: `Syslog` -- - **Log Source Identifier**: `<Sensor name>` + | Parameter | Description | + |--|--| + | **Log Source Name** | `<Sensor name>` | + | **Log Source Description** | `<Sensor name>` | + | **Log Source Type** | `Universal LEEF` | + | **Protocol Configuration** | `Syslog` | + | **Log Source Identifier** | `<Sensor name>` | > [!NOTE] > The Log Source Identifier name must not include white spaces. We recommend replacing any white spaces with an underscore. -1. Select **Save** > **Deploy Changes**. For example. -- :::image type="content" source="media/tutorial-qradar/deploy.png" alt-text="Screenshot of the Deploy Changes view"::: +1. Select **Save**, and then **Deploy Changes**. ## Deploy a Defender for IoT QID A **QID** is a QRadar event identifier. Since all Defender for IoT reports are t Create a forwarding rule from your on-premises management console to forward alerts to QRadar. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. 
**To create a QRadar forwarding rule**: -1. Sign in to the on-premises management console and select **Forwarding** on the left. +1. Sign in to the on-premises management console and select **Forwarding**. 1. Select the **+** to create a new rule. -1. Enter values for the rule name and conditions. In the **Actions** area, select **Add**, and then select **Qradar**. For example: +1. In the **Create Forwarding Rule** pane, define the following values: ++ | Parameter | Description | + |--|--| + | **Name** | Enter a meaningful name for the forwarding rule. | + | **Warning** | From the drop-down menu, select the minimal security level incident to forward. <br> For example, if **Minor** is selected, minor alerts and any alert above this severity level will be forwarded.| + | **Protocols** | To select a specific protocol, select **Specific**, and select the protocol for which this rule is applied. <br> By default, all the protocols are selected. | + | **Engines** | To select a specific security engine for which this rule is applied, select **Specific**, and select the engine. <br> By default, all the security engines are involved. | + | **System Notifications** | Forward the sensor's *online* and *offline* status. | + | **Alert Notifications** | Forward the sensor's alerts. | ++1. In the **Actions** area, select **Add**, and then select **Qradar**. For example: - :::image type="content" source="media/tutorial-qradar/create.png" alt-text="Screenshot of the Create a Forwarding Rule window."::: + :::image type="content" source="media/tutorial-qradar/create.png" alt-text="Screenshot of the Create a Forwarding Rule window." lightbox="media/tutorial-qradar/create.png"::: -1. Define the QRadar IP address and timezone, and then select **Save**. +1. Define the QRadar **Host**, **Port**, and **Timezone**. You can also choose to **Enable Encryption** and then **CONFIGURE ENCRYPTION**, and you can choose to **Manage alerts externally**. ++1. Select **SAVE**. The following is an example of a payload sent to QRadar: The following is an example of a payload sent to QRadar: ## Map notifications to QRadar -1. Sign into your QRadar console, select **QRadar**> **Log Activity** . +1. Sign in to your QRadar console, and select **QRadar** > **Log Activity**. -1. Select **Add Filter** and define the following parameters: +1. Select **Add Filter**, and define the following parameters: | Parameter | Description | |--|--| | **Parameter** | `Log Sources [Indexed]` | | **Operator** | `Equals` | | **Log Source Group** | `Other` | | **Log Source** | `<Xsense Name>` | 1. Locate an unknown report detected from your Defender for IoT sensor and double-click it. For example: 1.
Configure the following fields: - - New Property: _choose from the list below_ -- - Sensor Alert Description - - Sensor Alert ID - - Sensor Alert Score - - Sensor Alert Title - - Sensor Destination Name - - Sensor Direct Redirect - - Sensor Sender IP - - Sensor Sender Name - - Sensor Alert Engine - - Sensor Source Device Name -- - Check **Optimize Parsing** -- - Field Type: `AlphaNumeric` -- - Check **Enabled** -- - Log Source Type: `Universal LEAF` -- - Log Source: `<Sensor Name>` -- - Event Name (should be already set as Sensor Alert) -- - Capture Group: 1 -- - Regex: -- - Sensor Alert Description RegEx: `msg=(.*)(?=\t)` - - Sensor Alert ID RegEx: `alertId=(.*)(?=\t)` - - Sensor Alert Score RegEx: `Detected score=(.*)(?=\t)` - - Sensor Alert Title RegEx: `title=(.*)(?=\t)` - - Sensor Destination Name RegEx: `dstName=(.*)(?=\t)` - - Sensor Direct Redirect RegEx: `rta=(.*)(?=\t)` - - Sensor Sender IP: RegEx: `reporter=(.*)(?=\t)` - - Sensor Sender Name RegEx: `senderName=(.*)(?=\t)` - - Sensor Alert Engine RegEx: `engine =(.*)(?=\t)` - - Sensor Source Device Name RegEx: `src` + | Parameter | Description | + |--|--| + | **New Property** | Choose from the list below: <br><br> - Sensor Alert Description <br> - Sensor Alert ID <br> - Sensor Alert Score <br> - Sensor Alert Title <br> - Sensor Destination Name <br> - Sensor Direct Redirect <br> - Sensor Sender IP <br> - Sensor Sender Name <br> - Sensor Alert Engine <br> - Sensor Source Device Name | + | **Optimize Parsing** | Select the checkbox. | + | **Field Type** | `AlphaNumeric` | + | **Enabled** | Select the checkbox. | + | **Log Source Type** | `Universal LEEF` | + | **Log Source** | `<Sensor Name>` | + | **Event Name** | Should already be set to *Sensor Alert*. | + | **Capture Group** | 1 | + | **Regex** | Define the following: <br><br> - Sensor Alert Description RegEx: `msg=(.*)(?=\t)` <br> - Sensor Alert ID RegEx: `alertId=(.*)(?=\t)` <br> - Sensor Alert Score RegEx: `Detected score=(.*)(?=\t)` <br> - Sensor Alert Title RegEx: `title=(.*)(?=\t)` <br> - Sensor Destination Name RegEx: `dstName=(.*)(?=\t)` <br> - Sensor Direct Redirect RegEx: `rta=(.*)(?=\t)` <br> - Sensor Sender IP RegEx: `reporter=(.*)(?=\t)` <br> - Sensor Sender Name RegEx: `senderName=(.*)(?=\t)` <br> - Sensor Alert Engine RegEx: `engine =(.*)(?=\t)` <br> - Sensor Source Device Name RegEx: `src` | ## Next steps -In this tutorial, you learned how to get started with the QRadar integration. Continue on to learn how to [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md). - > [!div class="nextstepaction"]-> [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) +> [Integrations with Microsoft and partner services](integrate-overview.md) |
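To see how the custom-property patterns above carve fields out of a forwarded alert, the sketch below applies them to a hypothetical tab-separated payload (real payloads come from your forwarding rule). The documented patterns use a greedy `(.*)` before the `(?=\t)` lookahead; in this standalone Python sketch the captures are written non-greedy (`(.*?)`) so each one stops at the first tab.

```python
# Standalone sketch of field extraction from a LEEF-style, tab-separated alert.
import re

# Hypothetical payload shaped like the field=value pairs the patterns target.
payload = (
    "alertId=1234\ttitle=Unauthorized PLC stop\t"
    "msg=PLC stop command detected\tsenderName=sensor-01\t"
    "reporter=192.0.2.10\tdstName=plc-7\t"
)

properties = {
    "Sensor Alert ID": r"alertId=(.*?)(?=\t)",
    "Sensor Alert Title": r"title=(.*?)(?=\t)",
    "Sensor Alert Description": r"msg=(.*?)(?=\t)",
    "Sensor Sender Name": r"senderName=(.*?)(?=\t)",
    "Sensor Sender IP": r"reporter=(.*?)(?=\t)",
    "Sensor Destination Name": r"dstName=(.*?)(?=\t)",
}

for prop, pattern in properties.items():
    match = re.search(pattern, payload)
    print(prop, "=", match.group(1) if match else "<no match>")
```

Testing the patterns this way before saving them as QRadar custom properties makes it easier to spot payloads whose field order or delimiters differ from what you expect.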
defender-for-iot | Tutorial Servicenow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-servicenow.md | For more information, read the ServiceNow supporting links and documentation for Access the ServiceNow integrations from the ServiceNow store: -- [Service Graph Connector (SGC)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ddd4bf1b53f130104b5cddeeff7b1229) -- [Vulnerability Response (VR)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/463a7907c3313010985a1b2d3640dd7e) +- [Service Graph Connector (SGC)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ddd4bf1b53f130104b5cddeeff7b1229) +- [Vulnerability Response (VR)](https://store.servicenow.com/sn_appstore_store.do#!/store/application/463a7907c3313010985a1b2d3640dd7e) ++> [!div class="nextstepaction"] +> [Integrations with Microsoft and partner services](integrate-overview.md) |
defender-for-iot | Tutorial Splunk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-splunk.md | -This tutorial will help you learn how to integrate, and use Splunk with Microsoft Defender for IoT. +This article helps you learn how to integrate and use Splunk with Microsoft Defender for IoT. Defender for IoT mitigates IIoT, ICS, and SCADA risk with patented, ICS-aware self-learning engines that deliver immediate insights about ICS devices, vulnerabilities, and threats in less than an hour and without relying on agents, rules or signatures, specialized skills, or prior knowledge of the environment. The application provides SOC analysts with multidimensional visibility into the The Splunk application can be installed locally ('Splunk Enterprise') or run on a cloud ('Splunk Cloud'). The Splunk integration along with Defender for IoT supports 'Splunk Enterprise' only. -> [!Note] -> References to CyberX refer to Microsoft Defender for IoT. +> [!NOTE] +> Microsoft Defender for IoT was formerly known as [CyberX](https://blogs.microsoft.com/blog/2020/06/22/microsoft-acquires-cyberx-to-accelerate-and-secure-customers-iot-deployments/). References to CyberX refer to Defender for IoT. -In this tutorial, you learn how to: +In this article, you learn how to: > [!div class="checklist"] >-> * Download the Defender for IoT application in Splunk -> * Send Defender for IoT alerts to Splunk +> - Download the Defender for IoT application in Splunk +> - Send Defender for IoT alerts to Splunk If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites +Before you begin, make sure that you have the following prerequisites: + ### Version requirements The following versions are required for the application to run. -* Defender for IoT version 2.4 and above. -* Splunkbase version 11 and above. -* Splunk Enterprise version 7.2 and above. +- Defender for IoT version 2.4 and above. +- Splunkbase version 11 and above. +- Splunk Enterprise version 7.2 and above. -### Splunk permission requirements +### Permission requirements -The following Splunk permission is required: +Make sure you have: -* Any user with an *Admin* level user role. +- Access to a Defender for IoT OT sensor as an Admin user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). +- A Splunk user with an *Admin* level user role. ## Download the Defender for IoT application in Splunk -To access the Defender for IoT application within Splunk, you will need to download the application form the Splunkbase application store. +To access the Defender for IoT application within Splunk, you need to download the application from the Splunkbase application store. **To access the Defender for IoT application in Splunk**: To access the Defender for IoT application within Splunk, you will need to downl ## Send Defender for IoT alerts to Splunk -The Defender for IoT alerts provides information about an extensive range of security events. These events include: --* Deviations from the learned baseline network activity. +The Defender for IoT alerts provide information about an extensive range of security events. These events include: -* Malware detections. +- Deviations from the learned baseline network activity. -* Detections based on suspicious operational changes. +- Malware detections. -* Network anomalies.
+- Detections based on suspicious operational changes. -* Protocol deviations from protocol specifications. +- Network anomalies. - :::image type="content" source="media/tutorial-splunk/address-scan.png" alt-text="A screen capture if an Address Scan Detected alert."::: +- Protocol deviations from protocol specifications. You can also configure Defender for IoT to send alerts to the Splunk server, where alert information is displayed in the Splunk Enterprise dashboard. :::image type="content" source="media/tutorial-splunk/alerts-and-details.png" alt-text="View all of the alerts and their details." lightbox="media/tutorial-splunk/alerts-and-details-expanded.png"::: -To send alert information to the Splunk servers from Defender for IoT, you will need to create a Forwarding Rule. +To send alert information to the Splunk servers from Defender for IoT, you need to create a Forwarding Rule. -Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created are not affected by the rule. +Forwarding alert rules run only on alerts triggered after the forwarding rule is created. Alerts already in the system from before the forwarding rule was created aren't affected by the rule. **To create the forwarding rule**: -1. Sign in to the sensor, and select **Forwarding** from the left side pane. +1. Sign in to the sensor, and select **Forwarding**. -1. Select **Create nre rule**. +1. Select **Create new rule**. -1. In the **Add forwarding rule** dialog box, define the rule parameters. +1. In the **Add forwarding rule** pane, define the rule parameters: - :::image type="content" source="media/tutorial-splunk/forwarding-rule.png" alt-text="Create the rules for your forwarding rule." lightbox="media/tutorial-splunk/forwarding-rule-expanded.png"::: + :::image type="content" source="media/tutorial-splunk/forwarding-rule.png" alt-text="Screenshot of creating the rules for your forwarding rule." lightbox="media/tutorial-splunk/forwarding-rule.png"::: | Parameter | Description | |--|--|- | **Name** | The forwarding rule name. | - | **Select Severity** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | - | **Protocols** | By default, all the protocols are selected. To select a specific protocol, select **Specific** and select the protocol for which this rule is applied. | - | **Engines** | By default, all the security engines are involved. To select a specific security engine for which this rule is applied, select **Specific** and select the engine. | - | **System Notifications** | Forward sensor system notifications to the Splunk server. For example, send the online/offline status of connected sensor. This option is only available if you have logged into the Central Manager. | --1. Select **Action**, and then select **Send to Splunk Server**. + | **Rule name** | The forwarding rule name. | + | **Minimal alert level** | The minimal security level incident to forward. For example, if Minor is selected, minor alerts and any alert above this severity level will be forwarded. | + | **Any protocol detected** | Toggle off to select the protocols you want to include in the rule. | + | **Traffic detected by any engine** | Toggle off to select the traffic you want to include in the rule. | -1. Enter the following Splunk parameters. +1. 
In the **Actions** area, define the following values: | Parameter | Description | |--|--|- | **Host** | Splunk server address | - | **Port** | 8089 | - | **Username** | Splunk server username | - | **Password** | Splunk server password | + | **Server** | Select Splunk Server. | + | **Host** | Enter the Splunk server address. | + | **Port** | Enter 8089. | + | **Username** | Enter the Splunk server username. | + | **Password** | Enter the Splunk server password. | -1. Select **Submit**. --## Clean up resources --There are no resources to clean up. +1. Select **Save**. ## Next steps -In this tutorial, you learned how to get started with the Splunk integration. Continue on to learn how to [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md). - > [!div class="nextstepaction"]-> [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) +> [Integrations with Microsoft and partner services](integrate-overview.md) |
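Once the forwarding rule above is saved, a quick way to confirm alerts are arriving is to search Splunk over the same management port (8089) through its REST API. The `/services/search/jobs/export` endpoint and basic authentication are standard Splunk; the sourcetype in the query is hypothetical — use whatever the Defender for IoT app writes in your environment.

```python
# Hedged sketch: confirm forwarded alerts reached Splunk via the REST API.
import requests

SPLUNK = "https://splunk.example.com:8089"  # placeholder host, management port
AUTH = ("admin", "<splunk-password>")

response = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=AUTH,
    data={
        "search": "search sourcetype=cyberx* earliest=-1h",  # hypothetical sourcetype
        "output_mode": "json",
    },
    verify=False,  # many Splunk instances use self-signed certificates; adjust as needed
    timeout=60,
)
response.raise_for_status()
print(response.text[:1000])  # export streams one JSON object per result line
```

If the search returns nothing, recheck the rule's host, port, and credentials, and remember that the rule only applies to alerts triggered after it was created.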
dev-box | How To Manage Dev Box Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md | The following steps show you how to create a dev box definition by using an exis 1. Select **Create**. +> [!NOTE] +> Dev box definitions with 4-core SKUs are no longer supported. You need to update to an 8-core SKU or delete the dev box definition. + ## Update a dev box definition Over time, your needs for dev boxes will change. You might want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions might no longer be appropriate for your needs. You can update a dev box definition so that new dev boxes will use the new configuration. You can delete a dev box definition when you no longer want to use it. Deleting ## Next steps - [Provide access to projects for project admins](./how-to-project-admin.md)-- [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)+- [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md) |
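For many definitions, updating the SKU one at a time in the portal is tedious, so a scripted update of the ARM resource is a plausible alternative. The sketch below is a hedged illustration, not an official sample: the `api-version` and the 8-core SKU name are assumptions — confirm both against the Microsoft.DevCenter REST reference and the SKUs available in your region before running anything like this.

```python
# Hedged sketch: move a dev box definition off a retired 4-core SKU via ARM.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"
RG = "<resource-group>"
DEV_CENTER = "<dev-center-name>"
DEFINITION = "<dev-box-definition-name>"

url = (
    "https://management.azure.com"
    f"/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.DevCenter/devcenters/{DEV_CENTER}"
    f"/devboxdefinitions/{DEFINITION}"
    "?api-version=2023-04-01"  # assumed api-version
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.patch(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"sku": {"name": "general_i_8c32gb256ssd_v2"}}},  # assumed 8-core SKU name
    timeout=60,
)
response.raise_for_status()
print(response.status_code)  # 200/202 indicates the update was accepted
```

New dev boxes created from the definition pick up the new SKU; existing dev boxes keep the configuration they were provisioned with.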
digital-twins | Concepts Apis Sdks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md | The available helper classes are: ## Bulk import with the Jobs API -The [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) is a data plane API that allows you to import a set of models, twins, and/or relationships in a single API call. Jobs API operations are also included with the [CLI commands](/cli/azure/dt/job) and [data plane SDKs](#data-plane-apis). Using the Jobs API requires use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md). +The [Jobs API](/rest/api/digital-twins/dataplane/import-jobs) (currently in preview) is a data plane API that allows you to import a set of models, twins, and/or relationships in a single API call. Jobs API operations are also included with the [CLI commands](/cli/azure/dt/job) and [data plane SDKs](#data-plane-apis). Using the Jobs API requires use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md). ### Check permissions |
digital-twins | How To Manage Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-graph.md | You can even create multiple instances of the same type of relationship between ### Create relationships in bulk with the Jobs API -You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to create many relationships at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for relationships and bulk jobs. +You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) (currently in preview) to create many relationships at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for relationships and bulk jobs. >[!TIP] >The Jobs API also allows models and twins to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api). This section describes strategies for creating a graph with multiple elements at ### Upload models, twins, and relationships in bulk with the Jobs API -You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to upload multiple models, twins, and relationships to your instance in a single API call, effectively creating the graph all at once. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for graph elements (models, twins, and relationships) and bulk jobs. +You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) (currently in preview) to upload multiple models, twins, and relationships to your instance in a single API call, effectively creating the graph all at once. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for graph elements (models, twins, and relationships) and bulk jobs. To import resources in bulk, start by creating an *NDJSON* file containing the details of your resources. The file starts with a `Header` section, followed by the optional sections `Models`, `Twins`, and `Relationships`. You don't have to include all three types of graph data in the file, but any sections that are present must follow that order. Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties. Relationships defined in the file can reference twins that are either defined in this file or already present in the instance, and they can optionally include initialization of relationship properties. |
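To make the section ordering concrete, here's a sketch of generating a small *NDJSON* import file in Python. The `Header` fields and the exact twin and relationship shapes shown are illustrative assumptions (the model and twin IDs are hypothetical); consult the Jobs API reference for the authoritative file schema.

```python
# Sketch: write an NDJSON import file with sections in the required order:
# Header, then Models, then Twins, then Relationships.
import json

records = [
    {"Section": "Header"},
    # Assumed header fields; verify against the Jobs API reference.
    {"fileVersion": "1.0.0", "author": "contoso", "organization": "contoso"},
    {"Section": "Models"},
    # A minimal DTDL model defined in this file, referenced by the twins below.
    {"@id": "dtmi:example:Room;1", "@type": "Interface",
     "@context": "dtmi:dtdl:context;2",
     "contents": [{"@type": "Relationship", "name": "connectedTo"}]},
    {"Section": "Twins"},
    {"$dtId": "room1", "$metadata": {"$model": "dtmi:example:Room;1"}},
    {"$dtId": "room2", "$metadata": {"$model": "dtmi:example:Room;1"}},
    {"Section": "Relationships"},
    # References twins defined earlier in this same file.
    {"$dtId": "room1", "$relationshipId": "rel1", "$targetId": "room2",
     "$relationshipName": "connectedTo"},
]

with open("graph-import.ndjson", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```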
digital-twins | How To Manage Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md | If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/c ### Upload large model sets with the Jobs API -For large model sets, you can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to upload many models at once in a single API call. The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for models and bulk jobs. +For large model sets, you can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) (currently in preview) to upload many models at once in a single API call. The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for models and bulk jobs. >[!TIP] >The Jobs API also allows twins and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api). |
digital-twins | How To Manage Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md | The helper class of `BasicDigitalTwin` allows you to store property fields in a ### Create twins in bulk with the Jobs API -You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) to create many twins at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for twins and bulk jobs. +You can use the [Jobs API](concepts-apis-sdks.md#bulk-import-with-the-jobs-api) (currently in preview) to create many twins at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for twins and bulk jobs. >[!TIP] >The Jobs API also allows models and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-jobs-api). |
expressroute | Expressroute Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md | Supported bandwidth offers: 50 Mbps, 100 Mbps, 200 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, 10 Gbps +### What's the maximum MTU supported? ++ExpressRoute and other hybrid networking services, such as VPN and vWAN, support a maximum MTU of 1,400 bytes. +See [TCP/IP performance tuning for Azure VMs](../virtual-network/virtual-network-tcpip-performance-tuning.md) for tuning the MTU of your VMs. + ### Which service providers are available? See [ExpressRoute partners and locations](expressroute-locations.md) for the list of service providers and locations. |
governance | How To Create Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to-create-package.md | Title: How to create custom machine configuration package artifacts description: Learn how to create a machine configuration package file. Previously updated : 04/18/2023 Last updated : 05/15/2023 # How to create custom machine configuration package artifacts Before you create a configuration package, author and compile a DSC configuratio configurations are available for Windows and Linux. > [!IMPORTANT]-> When compiling configurations for Windows, use **PSDesiredStateConfiguration** version 2.0.5 (the +> When compiling configurations for Windows, use **PSDesiredStateConfiguration** version 2.0.7 (the > stable release). When compiling configurations for Linux install the prerelease version 3.0.0. An example is provided in the DSC [Getting started document][04] for Windows. |
governance | Definition Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md | The following Resource Provider modes are fully supported: The following Resource Provider modes are currently supported as a **[preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)**: -- `Microsoft.ManagedHSM.Data` for managing [Managed HSM](../../../key-vault/managed-hsm/overview.md) keys using Azure Policy.+- `Microsoft.ManagedHSM.Data` for managing [Managed HSM](../../../key-vault/managed-hsm/azure-policy.md) keys using Azure Policy. +- `Microsoft.DataFactory.Data` for using Azure Policy to deny [Azure Data Factory](../../../data-factory/introduction.md) outbound traffic to domain names that aren't specified in an allow list. > [!NOTE] >Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component level. |
governance | Policy Applicability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md | The applicability of `AuditIfNotExists` and `DeployIfNotExists` policies is base The applicability of `Microsoft.Kubernetes.Data` policies is based off the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy isn't applicable. -### Microsoft.KeyVault.Data +### Microsoft.KeyVault.Data, Microsoft.ManagedHSM.Data, Microsoft.DataFactory.Data -Policies with mode `Microsoft.KeyVault.Data` are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to component type, such as: +Policies with modes `Microsoft.KeyVault.Data`, `Microsoft.ManagedHSM.Data`, and `Microsoft.DataFactory.Data` are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to a component type. ++Key Vault component types: - Microsoft.KeyVault.Data/vaults/certificates - Microsoft.KeyVault.Data/vaults/keys - Microsoft.KeyVault.Data/vaults/secrets -### Microsoft.ManagedHSM.Data --Policies with mode `Microsoft.ManagedHSM.Data` are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to component type: +Managed HSM component type: - Microsoft.ManagedHSM.Data/managedHsms/keys +Azure Data Factory component type: +- Microsoft.DataFactory.Data/factories/outboundTraffic + ### Microsoft.Network.Data Policies with mode `Microsoft.Network.Data` are applicable if the `type` and `name` conditions of the policy rule evaluate to true. The `type` refers to component type: |
hdinsight | Cluster Availability Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-availability-monitor-logs.md | If you don't already have an existing action group, click **Create New** under t This will open **Add action group**. Choose an **Action group name**, **Short name**, **Subscription**, and **Resource group.** Under the **Actions** section, choose an **Action Name** and select **Email/SMS/Push/Voice** as the **Action Type.** > [!NOTE]-> There are several other actions an alert can trigger besides an Email/SMS/Push/Voice, such as an Azure Function, LogicApp, Webhook, ITSM, and Automation Runbook. [Learn More.](../azure-monitor/alerts/action-groups.md#action-specific-information) +> There are several other actions an alert can trigger besides an Email/SMS/Push/Voice, such as an Azure Function, LogicApp, Webhook, ITSM, and Automation Runbook. [Learn More.](../azure-monitor/alerts/action-groups.md) This will open **Email/SMS/Push/Voice**. Choose a **Name** for the recipient, **check** the **Email** box, and type an email address to which you want the alert sent. Select **OK** in **Email/SMS/Push/Voice**, then in **Add action group** to finish configuring your action group. |
hdinsight | Hdinsight 5X Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-5x-component-versioning.md | Title: Open-source components and versions - Azure HDInsight 5.x -description: Learn about the open-source components and versions in Azure HDInsight 5.x +description: Learn about the open-source components and versions in Azure HDInsight 5.x. Last updated 05/11/2023 Last updated 05/11/2023 In this article, you learn about the open-source components and their versions in Azure HDInsight 5.x. -## Public preview +## Preview -From February 27, 2023 we have started rolling out a new version of HDInsight 5.1, this version is backward compatible with HDInsight 4.0. and 5.0. All new open-source releases added as incremental releases on HDInsight 5.1. +On February 27, 2023, we started rolling out a new version of HDInsight: version 5.1. This version is backward compatible with HDInsight 4.0 and 5.0. All new open-source releases will be added as incremental releases on HDInsight 5.1. -**All upgraded cluster shapes are supported as part of HDI 5.1.** +All upgraded cluster shapes are supported as part of HDInsight 5.1. -## Open-source components available with HDInsight version 5.x +## Open-source components available with HDInsight 5.x -The Open-source component versions associated with HDInsight 5.1 listed in the following table. +The following table lists the versions of open-source components that are associated with HDInsight 5.x. | Component | HDInsight 5.1 | HDInsight 5.0 | |||| The Open-source component versions associated with HDInsight 5.1 listed in the f | Apache Zeppelin | 0.10.1 ** | 0.8.0 | | Apache Phoenix | 5.1.2 ** | - | -\* Under development/Planned +\* Under development or planned -** Public Preview +** Preview > [!NOTE]-> ESP isn't supported for HDI 5.1 clusters. +> Enterprise Security Package (ESP) isn't supported for HDInsight 5.1 clusters. ### Spark versions supported in Azure HDInsight -Apache Spark versions supported in Azure HDIinsight +Azure HDInsight supports the following Apache Spark versions. |Apache Spark version on HDInsight|Release date|Release stage|End-of-life announcement date|End of standard support|End of basic support| |--|--|--|--|--|--|-|2.4|July 8, 2019|End of Life Announced (EOLA)| Feb10,2023| Aug 10,2023|Feb 10,2024| -|3.1|March 11,2022|GA |-|-|-| -|3.3|To be announced for Public Preview|-|-|-|-| +|2.4|July 8, 2019|End of life announced (EOLA)| February 10, 2023| August 10, 2023|February 10, 2024| +|3.1|March 11, 2022|General availability |-|-|-| +|3.3|To be announced for preview|-|-|-|-| -### Apache Spark 2.4 to Spark 3.x Migration Guides +### Guide for migrating from Apache Spark 2.4 to Spark 3.x -Spark 2.4 to Spark 3.x Migration Guides see [here](https://spark.apache.org/docs/latest/migration-guide.html). +To learn how to migrate from Spark 2.4 to Spark 3.x, see the [migration guide on the Spark website](https://spark.apache.org/docs/latest/migration-guide.html). -## HDInsight version 5.0 --Starting from June 1, 2022, we have started rolling out a new version of HDInsight 5.0, this version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0. +## HDInsight 5.0 +On June 1, 2022, we started rolling out a new version of HDInsight: version 5.0. 
This version is backward compatible with HDInsight 4.0. All new open-source releases will be added as incremental releases on HDInsight 5.0. ### Spark -> [!NOTE] -> * If you are using Azure User Interface to create a Spark Cluster for HDInsight, you will see from the dropdown list an additional version Spark 3.1.(HDI 5.0) along with the older versions. This version is a renamed version of Spark 3.1.(HDI 4.0) and it is backward compatible. -> * This is only a UI level change, which doesn't impact anything for the existing users and users who are already using the ARM template to build their clusters. -> * For backward compatibility, ARM supports creating Spark 3.1 with HDI 4.0 and 5.0 versions which maps to same versions Spark 3.1 (HDI 5.0) -> * Spark 3.1 (HDI 5.0) cluster comes with HWC 2.0 which works well together with Interactive Query (HDI 5.0) cluster. +If you're using the Azure user interface to create a Spark cluster for HDInsight, the dropdown list contains an additional version along with the older version: Spark 3.1 (HDI 5.0). This version is a renamed version of Spark 3.1 (HDI 4.0), and it's backward compatible. ++This is only a UI-level change. It doesn't affect anything for existing users and for users who are already using the Azure Resource Manager template (ARM template) to build their clusters. ++For backward compatibility, Resource Manager supports creating Spark 3.1 with the HDInsight 4.0 and 5.0 versions, which map to the same version, Spark 3.1 (HDI 5.0). ++The Spark 3.1 (HDI 5.0) cluster comes with Hive Warehouse Connector (HWC) 2.0, which works well together with the Interactive Query (HDI 5.0) cluster. ### Interactive Query -> [!NOTE] -> * If you are creating an Interactive Query Cluster, you will see from the dropdown list another version as Interactive Query 3.1 (HDI 5.0). -> * If you are going to use Spark 3.1 version along with Hive which require ACID support via Hive Warehouse Connector (HWC). You need to select this version Interactive Query 3.1 (HDI 5.0). +If you're creating an Interactive Query cluster, the dropdown list contains another version: Interactive Query 3.1 (HDI 5.0). If you're going to use the Spark 3.1 version along with Hive (which requires ACID support via HWC), you need to select this version. ++### Kafka -### Kafka +The current ARM template supports HDInsight 5.0 for Kafka 2.4.1. -Current ARM template supports HDI 5.0 for Kafka 2.4.1 +HDInsight 5.0 is supported for the Kafka cluster type and component version 2.4. -`HDI Version '5.0' is supported for clusterType "Kafka" and component Version '2.4'.` +We fixed the ARM template issue. -We have fixed the arm templated issue. +### Upcoming version upgrades -### Upcoming version upgrades. -HDInsight team is working on upgrading other open-source components. +The HDInsight team is working on upgrading other open-source components: * ESP cluster support for all cluster shapes * Oozie 5.2.1-* HWC 2.1  +* HWC 2.1 ## Next steps -- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)-- [Enterprise Security Package](./enterprise-security-package.md)-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)+* [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md) +* [Enterprise Security Package](./enterprise-security-package.md) +* [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md) |
healthcare-apis | Dicom Services Conformance Statement V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md | This Retrieve Transaction offers support for retrieving stored studies, series, | GET | ../studies/{study}/series/{series}/metadata | Retrieves the metadata for all instances within a series. | | GET | ../studies/{study}/series/{series}/instances/{instance} | Retrieves a single instance. | | GET | ../studies/{study}/series/{series}/instances/{instance}/metadata | Retrieves the metadata for a single instance. |-| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance. To specify more than one frame, a comma separate each frame to return. For example, /studies/1/series/2/instance/3/frames/4,5,6 | +| GET | ../studies/{study}/series/{series}/instances/{instance}/rendered | Retrieves an instance rendered into an image format. | +| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance. To specify more than one frame, use a comma to separate each frame to return. For example, `/studies/1/series/2/instance/3/frames/4,5,6`. | +| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frame}/rendered | Retrieves a single frame rendered into an image format. | #### Retrieve instances within study or series Cache validation is supported using the `ETag` mechanism. In the response to a m * Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response is sent with no response body. * Data has changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is returned as part of the body. +### Retrieve rendered image (for instance or frame) +The following `Accept` headers are supported for retrieving a rendered image of an instance or a frame: ++- `image/jpeg` +- `image/png` ++If no `Accept` header is specified, the service renders `image/jpeg` by default. ++The service only supports rendering of a single frame. If rendering is requested for an instance with multiple frames, then only the first frame is rendered as an image by default. ++When specifying a particular frame to return, frame indexing starts at 1. ++The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) may be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified, the parameter defaults to `100`. + ### Retrieve response status codes | Code | Description | Cache validation is supported using the `ETag` mechanism. In the response to a m | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |-| `404 (Not Found)` | The specified DICOM resource couldn't be found. | -| `406 (Not Acceptable)` | The specified `Accept` header isn't supported. | +| `404 (Not Found)` | The specified DICOM resource couldn't be found, or, for a rendered request, the instance didn't contain pixel data. 
| +| `406 (Not Acceptable)` | The specified `Accept` header isn't supported, or, for rendered and transcode requests, the requested file was too large. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. | ### Search (QIDO-RS) |
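As a concrete illustration of the rendered endpoint and `Accept` headers described in this statement, here's a minimal sketch using Python's `requests`; the service URL, study/series/instance identifiers, and bearer token are placeholders:

```python
# Sketch: retrieve a single instance rendered as PNG from the DICOM service.
import requests

base_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2"
headers = {
    "Authorization": "Bearer <token>",
    "Accept": "image/png",  # omit to get the default, image/jpeg
}

response = requests.get(
    f"{base_url}/studies/1/series/2/instances/3/rendered",
    headers=headers,
    timeout=30,
)
response.raise_for_status()  # 404 if the instance has no pixel data

with open("instance.png", "wb") as f:
    f.write(response.content)
```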
healthcare-apis | Dicom Services Conformance Statement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md | This Retrieve Transaction offers support for retrieving stored studies, series, | GET | ../studies/{study}/series/{series}/metadata | Retrieves the metadata for all instances within a series. | | GET | ../studies/{study}/series/{series}/instances/{instance} | Retrieves a single instance. | | GET | ../studies/{study}/series/{series}/instances/{instance}/metadata | Retrieves the metadata for a single instance. |-| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance. To specify more than one frame, use a comma to separate each frame to return. For example, /studies/1/series/2/instance/3/frames/4,5,6 | +| GET | ../studies/{study}/series/{series}/instances/{instance}/rendered | Retrieves an instance rendered into an image format. | +| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frames} | Retrieves one or many frames from a single instance. To specify more than one frame, use a comma to separate each frame to return. For example, `/studies/1/series/2/instance/3/frames/4,5,6`. | +| GET | ../studies/{study}/series/{series}/instances/{instance}/frames/{frame}/rendered | Retrieves a single frame rendered into an image format. | #### Retrieve instances within study or series Cache validation is supported using the `ETag` mechanism. In the response to a m * Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response is sent with no response body. * Data has changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is also returned as part of the body. +### Retrieve rendered image (for instance or frame) +The following `Accept` headers are supported for retrieving a rendered image of an instance or a frame: ++- `image/jpeg` +- `image/png` ++If no `Accept` header is specified, the service renders `image/jpeg` by default. ++The service only supports rendering of a single frame. If rendering is requested for an instance with multiple frames, then only the first frame is rendered as an image by default. ++When specifying a particular frame to return, frame indexing starts at 1. ++The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) may be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified, the parameter defaults to `100`. + ### Retrieve response status codes | Code | Description | Cache validation is supported using the `ETag` mechanism. In the response to a m | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |-| `404 (Not Found)` | The specified DICOM resource couldn't be found. | -| `406 (Not Acceptable)` | The specified `Accept` header isn't supported. | +| `404 (Not Found)` | The specified DICOM resource couldn't be found, or, for a rendered request, the instance didn't contain pixel data. 
| +| `406 (Not Acceptable)` | The specified `Accept` header isn't supported, or, for rendered and transcode requests, the requested file was too large. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. | ### Search (QIDO-RS) |
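As an illustration of the frame-level rendered endpoint and the `quality` query parameter described above, here's a minimal sketch with Python's `requests`; the URL, identifiers, and token are placeholders (recall that frame indexing starts at 1):

```python
# Sketch: retrieve the first frame rendered as a reduced-quality JPEG.
import requests

base_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"

response = requests.get(
    f"{base_url}/studies/1/series/2/instances/3/frames/1/rendered",
    params={"quality": 50},  # 1-100; applies to jpeg, ignored for png
    headers={"Authorization": "Bearer <token>", "Accept": "image/jpeg"},
    timeout=30,
)
response.raise_for_status()

with open("frame1.jpg", "wb") as f:
    f.write(response.content)
```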
healthcare-apis | Use Postman | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md | Create a new `POST` request: - **client_secret**: `{{clientsecret}}` - **resource**: `{{fhirurl}}` - Note : In the scenarios where FHIR service audience parameter is not mapped to FHIR service endpoint url, the resource parameter value should be mapped to Audience value under FHIR Service Authentication blade. +> [!NOTE] +> In scenarios where the FHIR service audience parameter isn't mapped to the FHIR service endpoint URL, the resource parameter value should be mapped to the Audience value under the FHIR service Authentication blade. 3. Select the **Test** tab and enter in the text section: `pm.environment.set("bearerToken", pm.response.json().access_token);` To make the value available to the collection, use the pm.collectionVariables.set method. For more information on the set method and its scope level, see [Using variables in scripts](https://learning.postman.com/docs/sending-requests/variables/#defining-variables-in-scripts). 4. Select **Save** to save the settings. |
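For reference, the same token request can be issued outside Postman. Below is a minimal sketch in Python, with the `resource` parameter set to the Audience value for the case the note describes; all tenant, client, and audience values are placeholders:

```python
# Sketch: client-credentials token request mirroring the Postman setup above.
import requests

tenant_id = "<tenant-id>"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"

response = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": "<client-id>",
        "client_secret": "<client-secret>",
        # When the audience isn't the FHIR endpoint URL, use the Audience
        # value from the FHIR service's Authentication blade instead.
        "resource": "<audience-value>",
    },
    timeout=30,
)
response.raise_for_status()
bearer_token = response.json()["access_token"]
```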
healthcare-apis | Using Curl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-curl.md | token=$(az account get-access-token --resource=$dicomtokenurl --query accessToke ``` +> [!NOTE] +> In scenarios where the FHIR service audience parameter isn't mapped to the FHIR service endpoint URL, the resource parameter value should be mapped to the Audience value under the FHIR service Authentication blade. ## Access data in the FHIR service To learn about how to access Azure Health Data Services data using REST Client e >[!div class="nextstepaction"] >[Access Azure Health Data Services using REST Client](using-rest-client.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
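As an alternative to the Azure CLI call shown above, a token can also be acquired programmatically. Here's a minimal sketch using the `azure-identity` Python package; the audience value is a placeholder and, per the note, should be the FHIR service's Audience value when it differs from the endpoint URL:

```python
# Sketch: acquire a token for the FHIR service with azure-identity.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
# Scopes are formed by appending /.default to the resource (audience) value.
token = credential.get_token("<audience-value>/.default")
print(token.token[:40], "...")
```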
healthcare-apis | Using Rest Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/using-rest-client.md | grant_type=client_credentials [  ](media/rest-config.png#lightbox) +> [!NOTE] +> In scenarios where the FHIR service audience parameter isn't mapped to the FHIR service endpoint URL, the resource parameter value should be mapped to the Audience value under the FHIR service Authentication blade. + ## `GET` FHIR Patient data You can now get a list of patients or a specific patient with the `GET` request. The line with `Authorization` is the header info for the `GET` request. You can also send `PUT` or `POST` requests to create/update FHIR resources. To learn about how to validate FHIR resources against profiles in Azure Health D >[!div class="nextstepaction"] >[Validate FHIR resources against profiles in Azure Health Data Services](validation-against-profiles.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
machine-learning | Reference Migrate Sdk V1 Mlflow Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-migrate-sdk-v1-mlflow-tracking.md | If you're migrating from SDK v1 to SDK v2, use the information in this section t ## Why MLflow? -MLflow, with over 13 million monthly downloads, has become the standard platform for end-to-end MLOps, enabling teams of all sizes to track, share, package and deploy any model for batch or real-time inference. By integrating with MLflow, your training code will not need to hold any specific code related to Azure Machine Learning, achieving true portability and seamless integration with other open-source platforms. +MLflow, with over 13 million monthly downloads, has become the standard platform for end-to-end MLOps, enabling teams of all sizes to track, share, package and deploy any model for batch or real-time inference. Azure Machine Learning integrates with MLflow, which enables your training code to achieve true portability and seamless integration with other platforms since it doesn't hold any Azure Machine Learning-specific instructions. ## Prepare for migrating to MLflow -To use MLflow tracking, you will need to install `mlflow` and `azureml-mlflow` Python packages. All Azure Machine Learning environments have these packages already available for you but you will need to include them if creating your own environment. +To use MLflow tracking, you need to install the MLflow SDK package `mlflow` and the Azure Machine Learning plug-in for MLflow, `azureml-mlflow`. All Azure Machine Learning environments already have these packages available for you, but you need to include them if you're creating your own environment. ```bash pip install mlflow azureml-mlflow ``` -> [!TIP] -> You can use the [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst) which is a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. This is recommended for users who primarily need the tracking and logging capabilities without importing the full suite of MLflow features including deployments. - ## Connect to your workspace -Azure Machine Learning allows users to perform tracking in training jobs running on your workspace or running remotely (tracking experiments running outside Azure Machine Learning). If performing remote tracking, you will need to indicate the workspace you want to connect MLflow to. +Azure Machine Learning allows users to perform tracking in training jobs running on your workspace or running remotely (tracking experiments running outside Azure Machine Learning). If performing remote tracking, you need to indicate the workspace you want to connect MLflow to. # [Azure Machine Learning compute](#tab/aml) You are already connected to your workspace when running on Azure Machine Learni **Configure authentication** -Once the tracking is configured, you'll also need to configure how the authentication needs to happen to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for more ways to configure authentication for MLflow in Azure Machine Learning workspaces. +Once the tracking is configured, you also need to configure how authentication to the associated workspace happens. 
By default, the Azure Machine Learning plugin for MLflow performs interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) for more ways to configure authentication for MLflow in Azure Machine Learning workspaces. [!INCLUDE [configure-mlflow-auth](../../includes/machine-learning-mlflow-configure-auth.md)] __SDK v2 with MLflow__ mlflow.log_text("sample_string_text", "string.txt") ``` -* The string will be logged as an _artifact_, not as a metric. In Azure Machine Learning studio, the value will be displayed in the __Outputs + logs__ tab. +* The string is logged as an _artifact_, not as a metric. In Azure Machine Learning studio, the value is displayed in the __Outputs + logs__ tab. ### Log an image to a PNG or JPEG file __SDK v2 with MLflow__ mlflow.log_artifact("Azure.png") ``` -The image is logged as an artifact and will appear in the __Images__ tab in Azure Machine Learning Studio. +The image is logged as an artifact and appears in the __Images__ tab in Azure Machine Learning Studio. ### Log a matplotlib.pyplot ax.plot([0, 1], [2, 3]) mlflow.log_figure(fig, "sample_pyplot.png") ``` -* The image is logged as an artifact and will appear in the __Images__ tab in Azure Machine Learning Studio. -* The `mlflow.log_figure` method is __experimental__. -+* The image is logged as an artifact and appears in the __Images__ tab in Azure Machine Learning Studio. ### Log a list of metrics mlflow.log_dict(RESIDUALS, 'mlflow_residuals.json') ## View run info and data -You can access run information using the MLflow run object's `data` and `info` properties. For more information, see [mlflow.entities.Run](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) reference. +You can access run information using the properties `data` and `info` of the MLflow [run (mlflow.entities.Run)](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) object. ++> [!TIP] +> Experiment and run tracking information in Azure Machine Learning can be queried by using MLflow, which provides a comprehensive search API to easily query and compare experiments and runs. 
For more information about all the capabilities in MLflow in this area, see [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md) The following example shows how to retrieve a finished run: from mlflow.tracking import MlflowClient # Use MlFlow to retrieve the run that was just completed client = MlflowClient()-finished_mlflow_run = MlflowClient().get_run(mlflow_run.info.run_id) +finished_mlflow_run = client.get_run("<RUN_ID>") ``` The following example shows how to view the `metrics`, `tags`, and `params`: To view the artifacts of a run, use [MlFlowClient.list_artifacts](https://mlflow client.list_artifacts(finished_mlflow_run.info.run_id) ``` -To download an artifact, use [MlFlowClient.download_artifacts](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.download_artifacts): +To download an artifact, use [mlflow.artifacts.download_artifacts](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.download_artifacts): ```python-client.download_artifacts(finished_mlflow_run.info.run_id, "Azure.png") +mlflow.artifacts.download_artifacts(run_id=finished_mlflow_run.info.run_id, artifact_path="Azure.png") ```+ ## Next steps -* [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md) -* [Log and view metrics](how-to-log-view-metrics.md) +* [Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md). +* [Log metrics, parameters and files with MLflow](how-to-log-view-metrics.md). +* [Logging MLflow models](how-to-log-mlflow-models.md). +* [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md). +* [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md). |
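Putting the pieces of this article together, here's a minimal end-to-end sketch of remote tracking: resolve the workspace's tracking URI with the SDK v2 `MLClient`, point MLflow at it, and log a metric. The subscription, resource group, and workspace names are placeholders, and the `azure-ai-ml` and `azure-identity` packages are assumed to be installed:

```python
# Sketch: configure MLflow remote tracking against an Azure ML workspace.
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Point MLflow at the workspace's tracking server.
workspace = ml_client.workspaces.get(ml_client.workspace_name)
mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)

with mlflow.start_run() as run:
    mlflow.log_metric("sample_metric", 1.0)
    print("Logged to run:", run.info.run_id)
```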
machine-learning | Reference Yaml Job Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | | - | -- | - | | `default_datastore` | string | Name of the datastore to use as the default datastore for the pipeline job. This value must be a reference to an existing datastore in the workspace using the `azureml:<datastore-name>` syntax. Any outputs defined in the `outputs` property of the parent pipeline job or child step jobs will be stored in this datastore. If omitted, outputs will be stored in the workspace blob datastore. | | | `default_compute` | string | Name of the compute target to use as the default compute for all steps in the pipeline. If compute is defined at the step level, it will override this default compute for that specific step. This value must be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | |-| `continue_on_step_failure` | boolean | Whether the execution of steps in the pipeline should continue if one step fails. The default value is `False`, which means that if one step fails, the pipeline execution will be stopped, canceling any running steps. | `False` | +| `continue_on_step_failure` | boolean | This setting determines what happens if a step in the pipeline fails. By default, the pipeline continues to run even if one step fails; any steps that don't depend on the failed step are still executed. If you change this setting to `False`, the entire pipeline stops running and any currently running steps are canceled when one step fails. | `True` | | `force_rerun` | boolean | Whether to force rerun the whole pipeline. The default value is `False`, which means by default the pipeline will try to reuse the previous job's output if it meets reuse criteria. If set as `True`, all steps in the pipeline will rerun.| `False` | ### Job inputs |
migrate | Create Manage Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md | PUT /subscriptions/<subid>/resourceGroups/<rg>/providers/Microsoft.Migrate/Migra If you already have a project and you want to create an additional project, do the following: 1. In the [Azure public portal](https://portal.azure.com) or [Azure Government](https://portal.azure.us), search for **Azure Migrate**.-2. On the Azure Migrate dashboard, select **Servers, databases and web apps** > **Create project** on the top left. ++ +3. On the Azure Migrate dashboard, select **Servers, databases and web apps** > **Create project** on the top left. :::image type="content" source="./media/create-manage-projects/switch-project.png" alt-text="Screenshot containing Create Project button."::: |
migrate | Migrate Support Matrix Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md | After the appliance is connected, it gathers configuration and performance data Support | Details | -**Supported servers** | supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less. +**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. We recommend that you scope an appliance to discover fewer than 600 servers running SQL to avoid scaling issues. **Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. |
migrate | Migrate Support Matrix Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md | After the appliance is connected, it gathers configuration and performance data Support | Details | -**Supported servers** | supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less. +**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. We recommend that you scope an appliance to discover fewer than 600 servers running SQL to avoid scaling issues. **Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. |
migrate | Migrate Support Matrix Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md | After the appliance is connected, it gathers configuration and performance data Support | Details | -**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. You can discover up to 300 SQL Server instances or 6,000 SQL databases, whichever is less. +**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. We recommend that you scope an appliance to discover fewer than 600 servers running SQL to avoid scaling issues. **Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager. |
migrate | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md | +## Update (May 2023) +- SQL Server discovery and assessment in Azure Migrate is now Generally Available (GA). [Learn more](concepts-azure-sql-assessment-calculation.md). + ## Update (April 2023) - Build a quick business case for servers imported via a .csv file. [Learn more](tutorial-discover-import.md) - Build business case using Azure Migrate for: |
mysql | Concepts Slow Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-slow-query-logs.md | In Azure Database for MySQL - Flexible Server, the slow query log is available t For more information about the MySQL slow query log, see the [slow query log section](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html) in the MySQL engine documentation. ## Configure slow query logging-By default, the slow query log is disabled. To enable logs, set the `slow_query_log` server parameter to *ON*. This can be configured using the Azure portal or Azure CLI <!-- add link to server parameter-->. +By default, the slow query log is disabled. To enable logs, set the `slow_query_log` server parameter to *ON*. This can be configured using the Azure portal or Azure CLI. Other parameters you can adjust to control slow query logging behavior include: -- **long_query_time**: log a query if it takes longer than `long_query_time` (in seconds) to complete. The default is 10 seconds.+- **long_query_time**: log a query if it takes longer than `long_query_time` (in seconds) to complete. The default is 10 seconds. The `long_query_time` server parameter applies globally to all newly established connections in MySQL. However, it doesn't affect threads that are already connected. We recommend that you reconnect to Azure Database for MySQL - Flexible Server from the application, or restart the server, to clear out threads with older values of `long_query_time` and apply the updated parameter value. - **log_slow_admin_statements**: determines if administrative statements (ex. `ALTER_TABLE`, `ANALYZE_TABLE`) are logged. - **log_queries_not_using_indexes**: determines if queries that don't use indexes are logged. - **log_throttle_queries_not_using_indexes**: limits the number of non-indexed queries that can be written to the slow query log. This parameter takes effect when `log_queries_not_using_indexes` is set to *ON* > [!IMPORTANT] |
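Because `long_query_time` affects only newly established connections, one way to verify that an updated value is in effect is to open a fresh connection and read the session value. A minimal sketch, assuming the `mysql-connector-python` package; the connection details are placeholders:

```python
# Sketch: confirm a new session picks up the updated long_query_time value.
import mysql.connector

# Threads connected before the parameter change keep the old value;
# a new connection initializes its session value from the server parameter.
connection = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",
    user="<username>",
    password="<password>",
)
cursor = connection.cursor()
cursor.execute("SELECT @@long_query_time")
print("long_query_time for this new connection:", cursor.fetchone()[0])
cursor.close()
connection.close()
```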
mysql | Tutorial Power Automate With Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-power-automate-with-mysql.md | Power Automate is a service that helps you create automated workflows between yo - Connect to more than 500 data sources or any publicly available API - Perform CRUD (create, read, update, delete) operations on data -In this quickstart shows how to create an automated workflow usingPower automate flow with [Azure database for MySQL connector(Preview)](/connectors/azuremysql/). +This quickstart shows how to create an automated workflow by using Power Automate flow with the [Azure Database for MySQL connector (Preview)](/connectors/azuremysql/). ## Prerequisites In this quickstart shows how to create an automated workflow usingPower automate ## Overview of cloud flows Create a cloud flow when you want your automation to be triggered either automatically, instantly, or via a schedule. Here are types of flows you can create and then use with Azure Database for MySQL connector. | **Flow type** | **Use case** | **Automation target** | |-|--|-| Follow the steps to create an instant cloud flow with a manual trigger. An operation is an action. Power Automate flow allows you to add one or more advanced options and multiple actions for the same trigger. For example, add an advanced option that sends an email message as high priority. In addition to sending mail when an item is added to a list created in Microsoft Lists, create a file in Dropbox that contains the same information. 1. Once the flow app is created, select **Next Step** to create an operation. -2. In the box that shows Search connectors and actions, enter **Azure database for MySQL**. -3. Select **Azure database for MySQL** connector and then select **Get Rows** operation. Get rows operation allows you to get all the rows from a table or query. +2. In the box that shows Search connectors and actions, enter **Azure Database for MySQL**. +3. Select **Azure Database for MySQL** connector and then select **Get Rows** operation. Get rows operation allows you to get all the rows from a table or query. - :::image type="content" source="./media/tutorial-power-automate-with-mysql/azure-mysql-connector-add-action.png" alt-text="Screenshot that shows how to view all the actions for Azure database for MySQL connector."::: + :::image type="content" source="./media/tutorial-power-automate-with-mysql/azure-mysql-connector-add-action.png" alt-text="Screenshot that shows how to view all the actions for Azure Database for MySQL connector."::: 5. Add a new MySQL connection and enter the **authentication type**, **server name**, **database name**, **username**, **password**. Select **encrypt connection** if SSL is enabled on your MySQL server. After saving the flow, we need to test it and run the flow app. :::image type="content" source="./media/tutorial-power-automate-with-mysql/run-flow-to-get-rows-from-table.png" alt-text="Screenshot that shows output of the run."::: ## Next steps-[Azure database for MySQL connector](/connectors/azuremysql/) reference +[Azure Database for MySQL connector](/connectors/azuremysql/) reference |
postgresql | Concepts Compare Single Server Flexible Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md | The following table provides a list of high-level features and capabilities comp | **General** | | | | General availability | GA since 2018 | GA since 2021| | PostgreSQL | Community | Community |-| Supported versions | 10, 11 | 11, 12, 13, 14 | +| Supported versions | 10, 11 | 11, 12, 13, 14, 15 (preview) | | Underlying O/S | Windows | Linux | | AZ selection for application colocation | No | Yes | | Built-in connection pooler | No | Yes (PgBouncer)| |
postgresql | Concepts Major Version Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md | -Azure Database for PostgreSQL Flexible server supports PostgreSQL versions 11, 12,13, and 14. Postgres community releases a new major version containing new features about once a year. Additionally, major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL Flexible service periodically updates the minor versions during customer's maintenance window. Major version upgrades are more complicated than minor version upgrades as they can include internal changes and new features that may not be backward-compatible with existing applications. +Azure Database for PostgreSQL Flexible server supports PostgreSQL versions 11, 12, 13, 14, and 15 (preview). Postgres community releases a new major version containing new features about once a year. Additionally, major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL Flexible service periodically updates the minor versions during the customer's maintenance window. Major version upgrades are more complicated than minor version upgrades as they can include internal changes and new features that may not be backward-compatible with existing applications. Azure Database for PostgreSQL Flexible Server Postgres has now introduced in-place major version upgrade feature that performs an in-place upgrade of the server with just a click. In-place major version upgrade simplifies the upgrade process minimizing the disruption to users and applications accessing the server. In-place upgrades are a simpler way to upgrade the major version of the instance, as they retain the server name and other settings of the current server after the upgrade, and don't require data migration or changes to the application connection strings. In-place upgrades are faster and involve shorter downtime than data migration. |
postgresql | Concepts Supported Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md | Last updated 08/25/2022 Azure Database for PostgreSQL - Flexible Server currently supports the following major versions: +## PostgreSQL version 15 (Preview) ++PostgreSQL version 15 is now available in public preview in limited regions (West Europe, East US, West US 2, Southeast Asia, UK South, North Europe, Japan East). Refer to the [PostgreSQL documentation](https://www.postgresql.org/about/news/postgresql-15-released-2526/) to learn more about improvements and fixes in this release. New servers will be created with this version. + ## PostgreSQL version 14 The current minor release is **14.7**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/14.7/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. |
postgresql | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md | This page provides latest news and updates regarding feature additions, engine v ## Release: May 2023 * Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL - Flexible Server.+* Postgres 15 is now available in public preview for Azure Database for PostgreSQL - Flexible Server in limited regions. ## Release: April 2023 * Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL - Flexible Server. |
postgresql | Concepts Single To Flexible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md | Search for the **azure.extensions** parameter on the Server Parameters blade on :::image type="content" source="./media/concepts-single-to-flexible/allowlist-extensions.png" alt-text="Diagram that shows allow listing of extensions on Flexible Server." lightbox="./media/concepts-single-to-flexible/allowlist-extensions.png"::: > [!NOTE] -> If TIMESCALEDB, PG_PARTMAN or POSTGIS_TIGER_DECODER extensions are used in your single server database, please raise a support request since the Single to Flex migration tool will not handle these extensions. +> If TIMESCALEDB, POSTGIS, POSTGIS_TOPOLOGY, POSTGIS_SFCGAL, or POSTGIS_TIGER_DECODER extensions are used in your single server database, please raise a support request since the Single to Flex migration tool will not handle these extensions. Check if the list contains any of the following extensions: - PG_CRON Use the **Save and Restart** option and wait for the postgresql server to restar Once the pre-migration steps are complete, you're ready to carry out the migration of the production databases of your single server. At this point, you've finalized the day and time of production migration along with a planned downtime for your applications. - Create a flexible server with a **General-Purpose** or **Memory Optimized** compute tier. Pick a minimum 4VCore or higher SKU to complete the migration quickly. Burstable SKUs are blocked for use as migration target servers.-- Don't include HA or geo redundancy option while creating flexible server. You can always enable it with zero downtime once the migration from single server is complete. Don't create any read-replicas yet on the flexible server.+- Don't include the HA option while creating the flexible server. You can always enable it with zero downtime once the migration from single server is complete. Don't create any read-replicas yet on the flexible server. - Before initiating the migration, stop all the applications that connect to your production server. - Checkpoint the source server by running **checkpoint** command and restart the source server. This command ensures any remaining applications or connections are disconnected. Additionally, you can run **select * from pg_stat_activity;** after the restart to ensure no applications are connected to the source server. If the above conditions are met, the table will be migrated in multiple partitio ### Post migration - Once the migration is complete, verify the data on your flexible server and make sure it's an exact copy of the single server.-- Post verification, enable the HA option as needed on your flexible server.+- Post verification, enable the HA option as needed on your flexible server. - Change the SKU of the flexible server to match the application needs. This change needs a database server restart. - Migrate users and roles from single to flexible servers. This step can be done by creating users on flexible servers and providing them with suitable privileges or by using the steps that are listed in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md). - If you've changed any server parameters from their default values in single server, copy those server parameter values in flexible server. |
private-5g-core | Azure Stack Edge Packet Core Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md | Title: Packet core and Azure Stack Edge compatibility -description: Discover which Azure Stack Edge versions are compatible with each packet core version +description: Discover which Azure Stack Edge models and versions are compatible with each packet core version Last updated 03/30/2023 # Packet core and Azure Stack Edge (ASE) compatibility -Each site in your deployment contains an Azure Stack Edge (ASE) Pro device that hosts a packet core instance. This article provides information on version compatibility between ASE and packet core that you can refer to when installing a packet core instance or performing an upgrade. +Each site in your deployment contains an Azure Stack Edge (ASE) Pro device that hosts a single packet core instance. This article provides information on version compatibility between ASE and packet core that you can refer to when installing a packet core instance or performing an upgrade. -## Packet core and ASE compatibility table +## Supported Azure Stack Edge Pro models ++The following Azure Stack Edge Pro models are supported: ++- Azure Stack Edge Pro with GPU +- Azure Stack Edge Pro 2 + - Model 64G2T + - Model 128G4T1GPU + - Model 256G6T2GPU ++## Packet core and Azure Stack Edge version compatibility table The following table provides information on which versions of the ASE device are compatible with each packet core version. |
private-5g-core | Azure Stack Edge Virtual Machine Sizing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-virtual-machine-sizing.md | Title: Azure Stack Edge virtual machine sizing description: Learn about the VMs that Azure Private 5G Core uses when running on an Azure Stack Edge device.--++ Last updated 01/27/2023 The following table contains information about the VMs that Azure Private 5G Cor | AP5GC Cluster Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 128 GB | Control Plane of the Kubernetes cluster used for AP5GC | | AP5GC Cluster Node VM | Standard_F16s_HPN | 16 | 32 | Ephemeral - 128 GB </br> Persistent - 102 GB | AP5GC workload node | -## Remaining usable resource on ASE Pro GPU +## Remaining usable resource on Azure Stack Edge Pro The following resources are available within ASE after deploying AP5GC. You can use these resources, for example, to deploy additional virtual machines or storage accounts. -| Resource | Value | -|--|--| -| vCPUs | 16 | -| Memory | 56 GB | -| Storage | ~3.75 TB | +| Resource | Pro with GPU | Pro 2 - 64G2T | Pro 2 - 128G4T1GPU | Pro 2 - 256G6T2GPU | +|-|--||--|--| +| vCPUs | 16 | 4 | 4 | 4 | +| Memory | 56 GB | 3 GB | 51 GB | 163 GB | +| Storage | ~3.75 TB | ~280 GB | ~1.1 TB | ~2.0 TB | |
sap | About Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/about-azure-monitor-sap-solutions.md | Title: What is Azure Monitor for SAP solutions? (preview) + Title: What is Azure Monitor for SAP solutions? description: Learn about how to monitor your SAP resources on Azure for availability, performance, and operation. -# What is Azure Monitor for SAP solutions? (preview) +# What is Azure Monitor for SAP solutions? -When you have critical SAP applications and business processes that rely on Azure resources, you might want to monitor those resources for availability, performance, and operation. *Azure Monitor for SAP solutions* is an Azure-native monitoring product for SAP landscapes that run on Azure. Azure Monitor for SAP solutions uses specific parts of the [Azure Monitor](../../azure-monitor/overview.md) infrastructure. You can use Azure Monitor for SAP solutions with both [SAP on Azure Virtual Machines (Azure VMs)](../../virtual-machines/workloads/sap/hana-get-started.md) and [SAP on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md). --There are currently two versions of this product, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*. +When you have critical SAP applications and business processes that rely on Azure resources, you might want to monitor those resources for availability, performance, and operation. *Azure Monitor for SAP solutions* is an Azure-native monitoring product for SAP landscapes that run on Azure. Azure Monitor for SAP solutions uses specific parts of the [Azure Monitor](../../azure-monitor/overview.md) infrastructure. You can use Azure Monitor for SAP solutions with both [SAP on Azure Virtual Machines (Azure VMs)](../../virtual-machines/workloads/sap/hana-get-started.md) and [SAP on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md). ## What can you monitor? You can use Azure Monitor for SAP solutions to collect data from Azure infrastru To monitor different components of an SAP landscape (such as Azure VMs, high-availability clusters, SAP HANA databases, SAP NetWeaver, etc.), add the corresponding *[provider](providers.md)*. For more information, see [how to deploy Azure Monitor for SAP solutions through the Azure portal](quickstart-portal.md). -The following table provides a quick comparison of the Azure Monitor for SAP solutions (classic) and Azure Monitor for SAP solutions. --| Azure Monitor for SAP solutions | Azure Monitor for SAP solutions (classic) | -| - | -- | -| Azure Functions-based collector architecture | VM-based collector architecture | -| Support for Microsoft SQL Server, SAP HANA, and IBM Db2 databases | Support for Microsoft SQL Server, and SAP HANA databases | Azure Monitor for SAP solutions uses the [Azure Monitor](../../azure-monitor/overview.md) capabilities of [Log Analytics](../../azure-monitor/logs/log-analytics-overview.md) and [Workbooks](../../azure-monitor/visualize/workbooks-overview.md). With it, you can: -- Create [custom visualizations](../../azure-monitor/visualize/workbooks-overview.md) by editing the default workbooks provided by Azure Monitor for SAP solutions. +- Create [custom visualizations](../../azure-monitor/visualize/workbooks-overview.md) by editing the default workbooks provided by Azure Monitor for SAP solutions. 
- Write [custom queries](../../azure-monitor/logs/log-analytics-tutorial.md).-- Create [custom alerts](../../azure-monitor/alerts/alerts-log.md) by using Azure Log Analytics workspace. -- Take advantage of the [flexible retention period](../../azure-monitor/logs/data-retention-archive.md) in Azure Monitor Logs/Log Analytics. +- Create [custom alerts](../../azure-monitor/alerts/alerts-log.md) by using Azure Log Analytics workspace. +- Take advantage of the [flexible retention period](../../azure-monitor/logs/data-retention-archive.md) in Azure Monitor Logs/Log Analytics. - Connect monitoring data with your ticketing system. ## What data is collected? Azure Monitor for SAP solutions doesn't collect Azure Monitor metrics or resource log data, like some other Azure resources do. Instead, Azure Monitor for SAP solutions sends custom logs directly to the Azure Monitor Logs system. There, you can then use the built-in features of Log Analytics. -Data collection in Azure Monitor for SAP solutions depends on the providers that you configure. During public preview, the following data is collected. +Data collection in Azure Monitor for SAP solutions depends on the providers that you configure. The following data is collected for each provider. ### Pacemaker cluster data SAP HANA data includes: ### Microsoft SQL Server data -Microsoft SQL server data includes: +Microsoft SQL Server data includes: - CPU, memory, disk use - Hostname, SQL instance name, SAP system ID Microsoft SQL server data includes: - Problems recorded in the SQL Server error log - Blocking processes and SQL wait statistics over time -### OS (Linux) data +### OS (Linux) data -OS (Linux) data includes: +OS (Linux) data includes: -- CPU use, fork's count, running and blocked processes -- Memory use and distribution among used, cached, buffered -- Swap use, paging, and swap rate -- File systems usage, number of bytes read and written per block device -- Read/write latency per block device -- Ongoing I/O count, persistent memory read/write bytes -- Network packets in/out, network bytes in/out +- CPU use, fork count, running and blocked processes +- Memory use and distribution among used, cached, buffered +- Swap use, paging, and swap rate +- File systems usage, number of bytes read and written per block device +- Read/write latency per block device +- Ongoing I/O count, persistent memory read/write bytes +- Network packets in/out, network bytes in/out ### SAP NetWeaver data IBM Db2 data includes: ## What is the architecture? -There are separate explanations for the [Azure Monitor for SAP solutions architecture](#azure-monitor-for-sap-solutions-architecture) and the [Azure Monitor for SAP solutions (classic) architecture](#azure-monitor-for-sap-solutions-classic-architecture). --Some important points about the architecture include: +Some important points about the architecture include: -- The architecture is **multi-instance**. You can monitor multiple instances of a given component type across multiple SAP systems (SID) within a virtual network with a single resource of Azure Monitor for SAP solutions. For example, you can monitor HANA databases, high availability (HA) clusters, Microsoft SQL server, SAP NetWeaver, etc.-- The architecture is **multi-provider**. The architecture diagram shows the SAP HANA provider as an example. Similarly, you can configure more providers for corresponding components to collect data from those components. For example, HANA DB, HA cluster, Microsoft SQL server, and SAP NetWeaver. 
-- The architecture has an **extensible query framework**. Write [SQL queries to collect data in JSON](https://github.com/Azure/AzureMonitorForSAPSolutions/blob/master/sapmon/content/SapHana.json). Easily add more SQL queries to collect other data. +- The architecture is **multi-instance**. You can monitor multiple instances of a given component type across multiple SAP systems (SID) within a virtual network with a single resource of Azure Monitor for SAP solutions. For example, you can monitor multiple HANA databases, high availability (HA) clusters, Microsoft SQL servers, and SAP NetWeaver systems of multiple SIDs, etc., as part of one AMS monitor. +- The architecture is **multi-provider**. The architecture diagram shows the SAP HANA provider as an example. Similarly, you can configure more providers for corresponding components to collect data from those components. For example, multiple providers of different types like HANA DB, HA cluster, Microsoft SQL server, and SAP NetWeaver as part of one AMS monitor. --### Azure Monitor for SAP solutions architecture +### Azure Monitor for SAP solutions architecture The following diagram shows, at a high level, how Azure Monitor for SAP solutions collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances. The key components of the architecture are: - The **Azure portal**, where you access the Azure Monitor for SAP solutions service. - The **Azure Monitor for SAP solutions resource**, where you view monitoring data. - The **managed resource group**, which is deployed automatically as part of the Azure Monitor for SAP solutions resource's deployment. The resources inside the managed resource group help to collect data. Key resources include:- - An **Azure Functions resource** that hosts the monitoring code. This logic collects data from the source systems and transfers the data to the monitoring framework. + - An **[Azure Functions resource](../../azure-functions/functions-overview.md)** that hosts the monitoring code. This logic collects data from the source systems and transfers the data to the monitoring framework. - An **[Azure Key Vault resource](../../key-vault/general/basic-concepts.md)**, which securely holds the SAP HANA database credentials and stores information about providers.- - The **Log Analytics workspace**, which is the destination for storing data. Optionally, you can choose to use an existing workspace in the same subscription as your Azure Monitor for SAP solutions resource at deployment. - -[Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md) provides customizable visualization of the data in Log Analytics. To automatically refresh your workbooks or visualizations, pin the items to the Azure dashboard. The maximum refresh frequency is every 30 minutes. - -You can also use Kusto Query Language (KQL) to [run log queries](../../azure-monitor/logs/log-query-overview.md) against the raw tables inside the Log Analytics workspace. + - The **[Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md)**, which is the destination for storing data. Optionally, you can choose to use an existing workspace in the same subscription as your Azure Monitor for SAP solutions resource at deployment. + - The **[Storage account](../../storage/common/storage-account-overview.md)**, which is associated with the Azure Functions resource and is used to manage triggers and log function executions. 
-### Azure Monitor for SAP solutions (classic) architecture --The following diagram shows, at a high level, how Azure Monitor for SAP solutions (classic) collects data from the SAP HANA database. The architecture is the same if SAP HANA is deployed on Azure VMs or Azure Large Instances. -- Diagram of the Azure Monitor for SAP solutions (classic) architecture. The customer connects to the Azure Monitor for SAP solutions resource through the Azure portal. There's a managed resource group containing Log Analytics, Azure Functions, Key Vault, and Storage queue. The Azure function connects to the providers. Providers include SAP NetWeaver (ABAP and JAVA), SAP HANA, Microsoft SQL Server, Pacemaker clusters, and Linux OS. --The key components of the architecture are: --- The **Azure portal**, which is your starting point. You can navigate to marketplace within the Azure portal and discover Azure Monitor for SAP solutions.-- The **Azure Monitor for SAP solutions resource**, which is the landing place for you to view monitoring data.-- **Managed resource group**, which is deployed automatically as part of the Azure Monitor for SAP solutions resource's deployment. The resources deployed within the managed resource group help with the collection of data. Key resources deployed and their purposes are:- - **Azure VM**, also known as the *collector VM*, which is a **Standard_B2ms** VM. The main purpose of this VM is to host the *monitoring payload*. The monitoring payload is the logic of collecting data from the source systems and transferring the data to the monitoring framework. In the architecture diagram, the monitoring payload contains the logic to connect to the SAP HANA database over the SQL port. You're responsible for patching and maintaining the VM. - - **[Azure Key Vault](../../key-vault/general/basic-concepts.md)**: which is deployed to securely hold SAP HANA database credentials and to store information about providers. - - **Log Analytics Workspace**, which is the destination where the data is stored. - - Visualization is built on top of data in Log Analytics using [Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md). You can customize visualization. You can also pin your Workbooks or specific visualization within Workbooks to Azure dashboard for auto-refresh. The maximum frequency for refresh is every 30 minutes. - - You can use your existing workspace within the same subscription as SAP monitor resource by choosing this option at deployment. - - You can use KQL to run [queries](../../azure-monitor/logs/log-query-overview.md) against the raw tables inside the Log Analytics workspace. Look at **Custom Logs**. - - You can use an existing Log Analytics workspace for data collection if it's deployed within the same Azure subscription as the resource for Azure Monitor for SAP solutions. --## Can you analyze metrics? +[Azure Workbooks](../../azure-monitor/visualize/workbooks-overview.md) provides customizable visualization of the data in Log Analytics. To automatically refresh your workbooks or visualizations, pin the items to the Azure dashboard. The maximum refresh frequency is every 30 minutes. -Azure Monitor for SAP solutions doesn't support metrics. +You can also use Kusto Query Language (KQL) to [run log queries](../../azure-monitor/logs/log-query-overview.md) against the raw tables inside the Log Analytics workspace. -### Analyze logs +## Analyze logs -Azure Monitor for SAP solutions doesn't support resource logs or activity logs. 
For a list of the tables used by Azure Monitor Logs that can be queried by Log Analytics, see [the data reference for monitoring SAP on Azure](data-reference.md#azure-monitor-logs-tables). +Azure Monitor for SAP solutions doesn't support resource logs or activity logs. For a list of the tables used by Azure Monitor Logs that can be queried in Log Analytics, see [the data reference for monitoring SAP on Azure](data-reference.md#azure-monitor-logs-tables). -### Make Kusto queries +## Make Kusto queries When you select **Logs** from the Azure Monitor for SAP solutions menu, Log Analytics is opened with the query scope set to the current Azure Monitor for SAP solutions. Log queries only include data from that resource. To run a query that includes data from other accounts or data from other Azure services, select **Logs** from the **Azure Monitor** menu. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md). -You can use Kusto queries to help you monitor your Azure Monitor for SAP solutions resources. The following sample query gives you data from a custom log for a specified time range. You can specify the time range and the number of rows. In this example, you'll get five rows of data for your selected time range. +You can use Kusto queries to help you monitor your Azure Monitor for SAP solutions resources. The following sample query gives you data from a custom log for a specified time range. You can view the list of custom tables by expanding the Custom Logs section. You can specify the time range and the number of rows. In this example, you get five rows of data for your selected time range. ```kusto-custom log name +Custom_log_table_name | take 5 ``` You have several options to deploy Azure Monitor for SAP solutions and configure - [Deploy Azure Monitor for SAP solutions directly from the Azure portal](quickstart-portal.md) - [Deploy Azure Monitor for SAP solutions with Azure PowerShell](quickstart-powershell.md)-- [Deploy Azure Monitor for SAP solutions (classic) using the Azure Command-Line Interface (Azure CLI)](https://github.com/Azure/azure-hanaonazure-cli-extension#sapmonitor).- ## What is the pricing? Azure Monitor for SAP solutions is a free product (no license fee). You're responsible for paying the cost of the underlying components in the managed resource group. You're also responsible for consumption costs associated with data use and retention. For more information, see standard Azure pricing documents: - [Azure Functions Pricing](https://azure.microsoft.com/pricing/details/functions/#pricing)-- [Azure VM pricing (applicable to Azure Monitor for SAP solutions (classic))](https://azure.microsoft.com/pricing/details/virtual-machines/linux/)+ - [Azure Key vault pricing](https://azure.microsoft.com/pricing/details/key-vault/) - [Azure storage account pricing](https://azure.microsoft.com/pricing/details/storage/queues/) - [Azure Log Analytics and alerts pricing](https://azure.microsoft.com/pricing/details/monitor/) -## How do you enable data sharing with Microsoft? --> [!NOTE] -> The following content only applies to the Azure Monitor for SAP solutions (classic) version. --Azure Monitor for SAP solutions collects system metadata to provide improved support for SAP on Azure. No PII/EUII is collected. --You can enable data sharing with Microsoft when you create Azure Monitor for SAP solutions resource by choosing *Share* from the drop-down. We recommend that you enable data sharing. 
Data sharing gives Microsoft support and engineering teams information about your environment, which helps us provide better support for your mission-critical SAP on Azure solution. - ## Next steps - For a list of custom logs relevant to Azure Monitor for SAP solutions and information on related data types, see [Monitor SAP on Azure data reference](data-reference.md). |
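Building on the sample query in the article above, a time-bounded variant looks like the following sketch. `Custom_log_table_name` remains a placeholder; expand the Custom Logs section in Log Analytics to find the actual table names (custom tables end in the `_CL` suffix):

```kusto
// Sketch: bound the custom-log query to a time window before sampling rows.
// Custom_log_table_name is a placeholder for an actual _CL table.
Custom_log_table_name
| where TimeGenerated > ago(1h)
| take 5
```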
sap | Data Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/data-reference.md | Title: Data reference for Azure Monitor for SAP solutions (preview) + Title: Data reference for Azure Monitor for SAP solutions description: Important reference material needed when you monitor SAP on Azure. -# Data reference for Azure Monitor for SAP solutions (preview) +# Data reference for Azure Monitor for SAP solutions --This article provides a reference of log data collected to analyze the performance and availability of Azure Monitor for SAP solutions. See [Monitor SAP on Azure (preview)](about-azure-monitor-sap-solutions.md) for details on collecting and analyzing monitoring data for SAP on Azure. +This article provides a reference of log data collected to analyze the performance and availability of Azure Monitor for SAP solutions. See [Monitor SAP on Azure](about-azure-monitor-sap-solutions.md) for details on collecting and analyzing monitoring data for SAP on Azure. ## Metrics |
sap | Enable Sap Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/enable-sap-insights.md | + + Title: Enable Insights to troubleshoot SAP workload issues +description: Learn to enable SAP Insights on your AMS instance to troubleshoot SAP workload issues. ++++ Last updated : 05/10/2023++#Customer intent: I am an SAP BASIS or cloud infrastructure team member, I want to enable SAP Insights on my Azure monitor for SAP Instance. +++# Enable Insights to troubleshoot SAP workload issues (preview) +++The Insights capability in Azure Monitor for SAP Solutions helps you troubleshoot availability and performance issues on your SAP workloads. It helps you correlate key SAP component issues with SAP logs, Azure platform metrics, and health events. +In this how-to guide, learn to enable Insights in Azure Monitor for SAP solutions. You can use SAP Insights with only the latest version of the service, *Azure Monitor for SAP solutions*, and not *Azure Monitor for SAP solutions (classic)*. ++> [!NOTE] +> This section applies to only Azure Monitor for SAP solutions. ++## Prerequisites ++- An Azure subscription. +- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](quickstart-portal.md) or the [quickstart for PowerShell](quickstart-powershell.md). +- An existing NetWeaver and HANA (optional) provider. To configure a NetWeaver provider, see the how-to guide for [NetWeaver provider configuration](provider-netweaver.md). +- (Optional) Alerts set up for availability and/or performance issues on the NetWeaver/HANA provider. To configure alerts, see the how-to guide for [setting up alerts on Azure Monitor for SAP](get-alerts-portal.md). ++## Steps to Enable Insights in Azure Monitor for SAP solutions ++To enable Insights for Azure Monitor for SAP solutions, you need to: ++1. [Run a PowerShell script for access](#run-a-powershell-script-for-access) +1. [Unprotect the GetEnvironment method](#unprotect-the-getenvironment-method) ++### Run a PowerShell script for access ++> [!Note] +> The intent of this step is to give the Azure Monitor for SAP solutions (AMS) instance access to the virtual machines that host the SAP systems you want to monitor. This helps your AMS instance correlate issues you face with Azure infrastructure telemetry, giving you an end-to-end troubleshooting experience. +This script gives your AMS instance Reader role permission over the subscriptions that hold the SAP systems. Feel free to modify the script to scope it down to a resource group or a set of virtual machines. ++1. Download the onboarding script [from GitHub](https://github.com/Azure/Azure-Monitor-for-SAP-solutions-preview/blob/main/Scripts/AMS_AIOPS_SETUP.ps1) +1. Go to the Azure portal and select the Cloud Shell tab from the menu bar at the top. Refer to [this guide](/articles/cloud-shell/quickstart.md) to get started with Cloud Shell. +1. Switch from Bash to PowerShell. +1. Upload the script downloaded in the first step. +1. Navigate to the folder where the script is present using the command: +```PowerShell +cd <script_path> +``` +6. Set the AMS Resource/ARM ID with the command: +```PowerShell +$armId = "<AMS ARM ID>" +``` +7. 
If the VMs belong to a different subscription than AMS, set the list of subscriptions in which VMs of the SAP system are present (use subscription IDs): +```PowerShell +$subscriptions = "<Subscription ID 1>","<Subscription ID 2>" +``` +> [!Important] +> To run this script successfully, ensure you have Contributor + User Access Admin or Owner access on all subscriptions in the list. See [steps to assign Azure roles](../../role-based-access-control/role-assignments-steps.md). ++8. Run the script you uploaded earlier using the command: + * If ```$subscriptions``` was set: +```PowerShell +.\AMS_AIOPS_SETUP.ps1 -ArmId $armId -subscriptions $subscriptions +``` + * If ```$subscriptions``` wasn't set: +```PowerShell +.\AMS_AIOPS_SETUP.ps1 -ArmId $armId +``` ++### Unprotect the GetEnvironment method ++Follow the steps to unprotect methods from the [NetWeaver provider configuration page](provider-netweaver.md#prerequisite-unprotect-methods-for-metrics). +<br/>If you have already followed these steps during NetWeaver provider setup, you can skip this section. Ensure that you have unprotected the GetEnvironment method in particular for this capability to work. ++> [!Important] +> You might have to wait for up to 2 hours for your AMS to start receiving metadata of the infrastructure that it needs to monitor. ++## Using Insights on Azure Monitor for SAP Solutions (AMS) +Insights helps you troubleshoot two categories of issues: +* [Availability issues](#availability-insights) +* [Performance degradations](#performance-insights) ++> [!Important] +> As a user of the Insights capability, you need Reader access on all virtual machines that host the SAP systems you're monitoring with AMS. This access lets you view Azure Monitor metrics and resource health events of these virtual machines in the context of SAP issues. See [steps to assign Azure roles](../../role-based-access-control/role-assignments-steps.md). ++### Availability Insights +This capability gives you an overview of the availability of your SAP system in one place. You can also correlate SAP availability with Azure platform VM availability and its health events, which eases overall root-cause analysis. ++#### Steps to use availability insights +1. Open the AMS instance of your choice and select the Insights tab under Monitoring on the left navigation pane. +1. If you have followed all [the steps mentioned](#steps-to-enable-insights-in-azure-monitor-for-sap-solutions), you see a screen asking for context to be set up. You can set the time range, SID, and the provider (optional, All selected by default). +1. At the top, you can see all the fired alerts related to SAP system and instance availability on this screen. +1. Next, you see the SAP system availability trend, categorized by VM and SAP process. If you have selected a fired alert in the previous step, you're able to see these trends in context with the fired alert. If not, these trends respect the time range you set on the main Time range filter. +1. You can see the Azure virtual machine on which the process is hosted and the corresponding availability trends for the combination. To view detailed insights, select the Investigate link. +1. It opens a context pane that shows you availability insights on the corresponding virtual machine and the SAP application. +It has two categories of insights: + * Azure platform: VM health events filtered by the time range set, either by the workbook filter or the selected alert. 
This pane also shows the VM availability metric trend for the chosen VM. + :::image type="content" source="./media/enable-sap-insights/availability-vm-health.png" alt-text="Screenshot of the VM health events of availability insights."::: + * SAP Application: Process availability and contextual insights on the process, like error messages (SM21), lock entries (SM12), and canceled jobs (SM37), which can help you find issues that might exist in parallel in the system at that point in time. ++### Performance Insights +This capability gives you an overview of the performance of your SAP system in one place. You can also correlate key SAP performance issues with related SAP application logs alongside Azure platform utilization metrics and SAP workload configuration drifts, which eases overall root-cause analysis. ++#### Steps to use performance insights +1. Open the AMS instance of your choice and select the Insights tab under Monitoring on the left navigation pane. +1. At the top, you can see all the fired alerts related to SAP application performance degradations. + :::image type="content" source="./media/enable-sap-insights/performance-overview.png" alt-text="Screenshot of the overview page of performance insights."::: +1. Next, you see key metrics related to performance issues and their trends during the time range you've chosen. +1. To view detailed insights, you can either choose to investigate a fired alert or view insights for a key metric. +1. On investigating, you see a context pane, which shows you four categories of metrics in the context of the chosen issue or key metric. + * Issue/Key metric details - Detailed visualizations of the key metric that defines the problem. + :::image type="content" source="./media/enable-sap-insights/performance-detail-pane.png" alt-text="Screenshot of the context pane of performance insights."::: + + * SAP application - Visualizations of the key SAP logs that pertain to the issue type. + :::image type="content" source="./media/enable-sap-insights/performance-sap-platform-pane.png" alt-text="Screenshot of the SAP pane of performance insights."::: ++ * Azure platform - Key Azure platform metrics that present an overview of the virtual machine of the SAP system. + :::image type="content" source="./media/enable-sap-insights/performance-azure-pane.png" alt-text="Screenshot of the infrastructure pane of performance insights."::: ++ * Configuration drift - [Quality checks](../center-sap-solutions/get-quality-checks-insights.md) violations on the SAP system. +1. This capability, with the set of metrics in the context of the issue, helps you visually correlate trends of key metrics. This experience eases root-cause analysis of performance degradations observed in SAP workloads on Azure. ++#### Scope of the preview +Insights are available only for a limited set of issues as part of the preview. This capability will extend to most of the issues supported by AMS alerts before it's generally available (GA). +* Availability insights let you detect and troubleshoot unavailability of NetWeaver systems, instances, and HANA DB. +* Performance insights are provided for NetWeaver metrics - high response time (ST03) and long-running batch jobs. ++## Next steps ++- For information on providers available for Azure Monitor for SAP solutions, see [Azure Monitor for SAP solutions providers](providers.md). |
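Putting the Cloud Shell steps from the access-script section together, a full session looks roughly like the following sketch; the script path, ARM ID, and subscription IDs are placeholders you must replace:

```PowerShell
# Consolidated sketch of the onboarding steps; all values are placeholders.
cd <script_path>

# ARM ID of the Azure Monitor for SAP solutions (AMS) instance.
$armId = "<AMS ARM ID>"

# Only needed when the SAP VMs live in subscriptions other than the AMS subscription.
$subscriptions = "<Subscription ID 1>","<Subscription ID 2>"

# Grants the AMS instance Reader access over the listed subscriptions.
.\AMS_AIOPS_SETUP.ps1 -ArmId $armId -subscriptions $subscriptions
```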
sap | Enable Tls Azure Monitor Sap Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/enable-tls-azure-monitor-sap-solutions.md | -# Enable TLS 1.2 or higher in Azure Monitor for SAP solutions (preview) -+# Enable TLS 1.2 or higher in Azure Monitor for SAP solutions In this document, learn about secure communication with TLS 1.2 or higher in Azure Monitor for SAP solutions. |
sap | Get Alerts Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/get-alerts-portal.md | Title: Configure alerts in Azure Monitor for SAP solutions in Azure portal (preview) + Title: Configure alerts in Azure Monitor for SAP solutions in Azure portal description: Learn how to use a browser method for configuring alerts in Azure Monitor for SAP solutions. Last updated 10/19/2022 #Customer intent: As a developer, I want to configure alerts in Azure Monitor for SAP solutions so that I can receive alerts and notifications about my SAP systems. -# Configure alerts in Azure Monitor for SAP solutions in Azure portal (preview) -+# Configure alerts in Azure Monitor for SAP solutions in Azure portal In this how-to guide, you'll learn how to configure alerts in Azure Monitor for SAP solutions. You can configure alerts and notifications from the [Azure portal](https://azure.microsoft.com/features/azure-portal) using its browser-based interface. -This content applies to both versions of the service, Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic). - ## Prerequisites - An Azure subscription. |
sap | Provider Ha Pacemaker Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ha-pacemaker-cluster.md | Title: Create a High Availability Pacemaker cluster provider for Azure Monitor for SAP solutions (preview) + Title: Create a High Availability Pacemaker cluster provider for Azure Monitor for SAP solutions description: Learn how to configure High Availability (HA) Pacemaker cluster providers for Azure Monitor for SAP solutions. -# Create High Availability cluster provider for Azure Monitor for SAP solutions (preview) +# Create High Availability cluster provider for Azure Monitor for SAP solutions --In this how-to guide, you'll learn to create a High Availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions. You'll install the HA agent, then create the provider for Azure Monitor for SAP solutions. --This content applies to both Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic) versions. +In this how-to guide, you learn to create a High Availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions. You install the HA agent, then create the provider for Azure Monitor for SAP solutions. ## Prerequisites This content applies to both Azure Monitor for SAP solutions and Azure Monitor f Before adding providers for HA (Pacemaker) clusters, install the appropriate agent for your environment in each cluster node. -For SUSE-based clusters, install **ha_cluster_provider** in each node. For more information, see [the HA cluster exporter installation guide](https://github.com/ClusterLabs/ha_cluster_exporter#installation). Supported SUSE versions include SLES for SAP 12 SP3 and above. +For SUSE-based clusters, install **ha_cluster_provider** in each node. For more information, see [the HA cluster exporter installation guide](https://github.com/ClusterLabs/ha_cluster_exporter#installation). Supported SUSE versions include SLES for SAP 12 SP3 and later versions. -For RHEL-based clusters, install **performance co-pilot (PCP)** and the **pcp-pmda-hacluster** sub package in each node.For more information, see the [PCP HACLUSTER agent installation guide](https://access.redhat.com/articles/6139852). Supported RHEL versions include 8.2, 8.4 and above. +For RHEL-based clusters, install **performance co-pilot (PCP)** and the **pcp-pmda-hacluster** sub package in each node. For more information, see the [PCP HACLUSTER agent installation guide](https://access.redhat.com/articles/6139852). Supported RHEL versions include 8.2, 8.4, and later versions. For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.com/articles/6139852) in each node. For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat. systemctl start pmcd ``` -1. Install and enable the HA Cluster PMDA. Replace `$PCP_PMDAS_DIR` with the path where `hacluster` is installed. Use the `find` command in Linuxto find the path. +1. Install and enable the HA Cluster PMDA. Replace `$PCP_PMDAS_DIR` with the path where `hacluster` is installed. Use the `find` command in Linux to find the path. ```bash cd $PCP_PMDAS_DIR/hacluster For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat. 1. Enable and start the `pmproxy` service. ```bash- systemctl start pmproxy + systemctl enable pmproxy ``` ```bash- systemctl enable pmproxy + systemctl start pmproxy ``` -1. Data will then be collected by PCP on the system. 
You can export the data using `pmproxy` at `http://<SERVER-NAME-OR-IP-ADDRESS>:44322/metrics?names=ha_cluster`. Replace `<SERVER-NAME-OR-IP-ADDRESS>` with your server name or IP address. +1. Data will then be collected in the system by PCP. You can export the data using `pmproxy` at `http://<SERVER-NAME-OR-IP-ADDRESS>:44322/metrics?names=ha_cluster`. Replace `<SERVER-NAME-OR-IP-ADDRESS>` with your server name or IP address. -## Prerequisites to enable secure communication +## Prerequisites to enable secure communication To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow the steps [mentioned here](https://github.com/ClusterLabs/ha_cluster_exporter#tls-and-basic-authentication) To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow  1. For **Type**, select **High-availability cluster (Pacemaker)**.-1. *Optional* Select **Enable secure communciation**, choose a certificate type +1. *Optional* Select **Enable secure communication**, choose a certificate type 1. Configure providers for each node of the cluster by entering the endpoint URL for **HA Cluster Exporter Endpoint**. - 1. For SUSE-based clusters, enter `http://<IP-address> :9664/metrics`. + 1. For SUSE-based clusters, enter `http://<IP-address>:9664/metrics`. -  +  1. For RHEL-based clusters, enter `http://<'IP address'>:44322/metrics?names=ha_cluster`. -  -+  1. Enter the system identifiers, host names, and cluster names. For the system identifier, enter a unique SAP system identifier for each cluster. For the hostname, the value refers to an actual hostname in the VM. Use `hostname -s` for SUSE- and RHEL-based clusters. When the provider settings validation operation fails with the code 'Prometheu ``` 1. Reenable the HA cluster exporter agent.+ ```bash systemctl enable pmproxy ``` |
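Before adding the provider, it can help to confirm that each cluster node is actually exporting metrics. This is an optional check, not part of the official steps, and it assumes the default ports:

```bash
# Optional verification of the exporter endpoints (default ports assumed).
# Replace the placeholder with the node's hostname or IP address.

# SUSE-based clusters (ha_cluster_exporter):
curl "http://<SERVER-NAME-OR-IP-ADDRESS>:9664/metrics"

# RHEL-based clusters (pmproxy):
curl "http://<SERVER-NAME-OR-IP-ADDRESS>:44322/metrics?names=ha_cluster"
```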
sap | Provider Hana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-hana.md | Title: Configure SAP HANA provider for Azure Monitor for SAP solutions (preview) + Title: Configure SAP HANA provider for Azure Monitor for SAP solutions description: Learn how to configure the SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal. -# Configure SAP HANA provider for Azure Monitor for SAP solutions (preview) +# Configure SAP HANA provider for Azure Monitor for SAP solutions --In this how-to guide, you'll learn to configure an SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal. There are instructions to set up the [current version](#configure-azure-monitor-for-sap-solutions) and the [classic version](#configure-azure-monitor-for-sap-solutions-classic) of Azure Monitor for SAP solutions. +In this how-to guide, you'll learn to configure an SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal. ## Prerequisite to enable secure communication To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md) for SAP HA 1. For **Database password**, enter the password for the database username. You can either enter the password directly or use a secret inside Azure Key Vault. 1. Save your changes to the Azure Monitor for SAP solutions resource. -## Configure Azure Monitor for SAP solutions (classic) --To configure the SAP HANA provider for Azure Monitor for SAP solutions (classic): --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Search for and select the **Azure Monitors for SAP Solutions (classic)** service in the search bar. -1. On the Azure Monitor for SAP solutions (classic) service page, select **Create**. -1. On the creation page's **Basics** tab, enter the basic information for your Azure Monitor for SAP solutions instance. -1. On the **Providers** tab, add the providers that you want to configure. You can add multiple providers during creation. You can also add more providers after you deploy the Azure Monitor for SAP solutions resource. For each provider: - 1. Select **Add provider**. - 1. For **Type**, select **SAP HANA**. Make sure that you configure an SAP HANA provider for the main node. - 1. For **IP address**, enter the private IP address for the HANA server. - 1. For **Database tenant**, enter the name of the tenant that you want to use. You can choose any tenant. However, it's recommended to use **SYSTEMDB**, because this tenant has more monitoring areas. - 1. For **SQL port**, enter the port number for your HANA database. The format begins with 3, includes the instance number, and ends in 13. For example, 30013 is the SQL port for the instance 001. - 1. For **Database username**, enter the username that you want to use. Make sure the database user has **monitoring** and **catalog read** role assignments. - 1. Select **Add provider** to finish adding the provider. -1. Select **Review + create** to review and validate your configuration. -1. Select **Create** to finish creating the Azure Monitor for SAP solutions resource. - ## Next steps > [!div class="nextstepaction"] |
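The database user for the HANA provider needs the monitoring-related privileges called out in the article. A hedged HANA SQL sketch for creating such a user follows; the username and password are placeholders, and the `MONITORING` role and `CATALOG READ` privilege match the role assignments mentioned above:

```sql
-- Sketch only: create a dedicated monitoring user for the SAP HANA provider.
-- AMS_MONITOR and the password are placeholders; adjust to your standards.
CREATE USER AMS_MONITOR PASSWORD "Str0ngPassw0rd1" NO FORCE_FIRST_PASSWORD_CHANGE;
GRANT MONITORING TO AMS_MONITOR;
GRANT CATALOG READ TO AMS_MONITOR;
```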
sap | Provider Ibm Db2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ibm-db2.md | Title: Create IBM Db2 provider for Azure Monitor for SAP solutions (preview) + Title: Create IBM Db2 provider for Azure Monitor for SAP solutions description: This article provides details to configure an IBM DB2 provider for Azure Monitor for SAP solutions. -# Create IBM Db2 provider for Azure Monitor for SAP solutions (preview) ---In this how-to guide, you'll learn how to create an IBM Db2 provider for Azure Monitor for SAP solutions through the Azure portal. This content applies only to Azure Monitor for SAP solutions, not the Azure Monitor for SAP solutions (classic) version. +# Create IBM Db2 provider for Azure Monitor for SAP solutions +In this how-to guide, you'll learn how to create an IBM Db2 provider for Azure Monitor for SAP solutions through the Azure portal. ## Prerequisites - An Azure subscription. |
sap | Provider Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-linux.md | Title: Configure Linux provider for Azure Monitor for SAP solutions (preview) + Title: Configure Linux provider for Azure Monitor for SAP solutions description: This article explains how to configure a Linux OS provider for Azure Monitor for SAP solutions. Last updated 03/09/2023 #Customer intent: As a developer, I want to configure a Linux provider so that I can use Azure Monitor for SAP solutions for monitoring. -# Configure Linux provider for Azure Monitor for SAP solutions (preview) -+# Configure Linux provider for Azure Monitor for SAP solutions In this how-to guide, you learn to create a Linux OS provider for *Azure Monitor for SAP solutions* resources. -This content applies to both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*. - ## Prerequisites - An Azure subscription. To [enable TLS 1.2 or higher](enable-tls-azure-monitor-sap-solutions.md), follow ## Create Linux OS provider 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Go to the Azure Monitor for SAP solutions or Azure Monitor for SAP solutions (classic) service. +1. Go to the Azure Monitor for SAP solutions service. 1. Select **Create** to make a new Azure Monitor for SAP solutions resource. 1. Select **Add provider**. 1. Configure the following settings for the new provider: |
sap | Provider Netweaver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-netweaver.md | Title: Configure SAP NetWeaver for Azure Monitor for SAP solutions (preview) + Title: Configure SAP NetWeaver for Azure Monitor for SAP solutions description: Learn how to configure SAP NetWeaver for use with Azure Monitor for SAP solutions. -# Configure SAP NetWeaver for Azure Monitor for SAP solutions (preview) +# Configure SAP NetWeaver for Azure Monitor for SAP solutions --In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with *Azure Monitor for SAP solutions*. You can use SAP NetWeaver with both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*. +In this how-to guide, you'll learn to configure the SAP NetWeaver provider for use with *Azure Monitor for SAP solutions*. You can select between two connection types when configuring the SAP NetWeaver provider to collect information from the SAP system. Metrics are collected by using - **SAP Control** - The SAP start service provides multiple services, including monitoring the SAP system. Both versions of Azure Monitor for SAP solutions use **SAP Control**, which is a SOAP web service interface that exposes these capabilities. The **SAP Control** interface [differentiates between protected and unprotected web service methods](https://wiki.scn.sap.com/wiki/display/SI/Protected+web+methods+of+sapstartsrv). It's necessary to unprotect some methods to use Azure Monitor for SAP solutions with NetWeaver.-- **SAP RFC** - Azure Monitor for SAP solutions also provides ability to collect additional information from the SAP system using Standard SAP RFC. It's available only as part of Azure Monitor for SAP solution and not available in the classic version.+- **SAP RFC** - Azure Monitor for SAP solutions also provides the ability to collect additional information from the SAP system using Standard SAP RFC. It's available only as part of Azure Monitor for SAP solutions. You can collect the following metrics using the SAP NetWeaver provider VERSION = 3.0 EXTCOMPANY = TESTC EXTPRODUCT = TESTP -## Configure NetWeaver for Azure Monitor for SAP solutions (classic) --To configure the NetWeaver provider for the Azure Monitor for SAP solutions (classic) version: --1. [Unprotect some methods](#unprotect-methods) -1. [Restart the SAP start service](#restart-sap-start-service) -1. [Check that your settings have been updated properly](#validate-changes) -1. [Install the NetWeaver provider through the Azure portal](#install-netweaver-provider) - ### Unprotect methods To fetch specific metrics, you need to unprotect some methods for the current release. Follow these steps for each SAP system: |
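After unprotecting the methods and restarting the SAP start service, a quick way to confirm the change took effect is to call one of the affected web methods from the host. This is an illustrative check, assuming `sapcontrol` is available from the SAP kernel and `<NN>` is your two-digit instance number:

```bash
# Illustrative validation: if the method is still protected, this call
# fails with a permission error instead of returning the instance list.
sapcontrol -nr <NN> -function GetSystemInstanceList
```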
sap | Provider Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-sql-server.md | Title: Configure Microsoft SQL Server provider for Azure Monitor for SAP solutions (preview) + Title: Configure Microsoft SQL Server provider for Azure Monitor for SAP solutions description: Learn how to configure a Microsoft SQL Server provider for use with Azure Monitor for SAP solutions. -# Configure SQL Server for Azure Monitor for SAP solutions (preview) -+# Configure SQL Server for Azure Monitor for SAP solutions In this how-to guide, you'll learn to configure a Microsoft SQL Server provider for Azure Monitor for SAP solutions through the Azure portal. |
sap | Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/providers.md | Title: What are providers in Azure Monitor for SAP solutions? (preview) + Title: What are providers in Azure Monitor for SAP solutions? description: This article provides answers to frequently asked questions about Azure Monitor for SAP solutions providers. -# What are providers in Azure Monitor for SAP solutions? (preview) -+# What are providers in Azure Monitor for SAP solutions? In the context of *Azure Monitor for SAP solutions*, a *provider* contains the connection information for a corresponding component and helps to collect data from there. There are multiple provider types. For example, an SAP HANA provider is configured for a specific component within the SAP landscape, like an SAP HANA database. You can configure an Azure Monitor for SAP solutions resource (also known as SAP monitor resource) with multiple providers of the same type or multiple providers of multiple types. -This content applies to both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*. - -You can choose to configure different provider types for data collection from the corresponding component in their SAP landscape. For example, you can configure one provider for the SAP HANA provider type, another provider for high availability cluster provider type, and so on. +You can choose to configure different provider types for data collection from the corresponding component in your SAP landscape. For example, you can configure one provider for the SAP HANA provider type, another provider for the high availability cluster provider type, and so on. You can also configure multiple providers of a specific provider type to reuse the same SAP monitor resource and associated managed group. For more information, see [Manage Azure Resource Manager resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md).  -It's recommended to configure at least one provider when you deploy an Azure Monitor for SAP solutions resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured. +It's recommended to configure at least one provider when you deploy an Azure Monitor for SAP solutions resource. By configuring a provider, you start data collection from the corresponding component for which the provider is configured. If you don't configure any providers at the time of deployment, the Azure Monitor for SAP solutions resource is still deployed, but no data is collected. You can add providers after deployment through the SAP monitor resource within the Azure portal. You can add or delete providers from the SAP monitor resource at any time. ## Provider type: SAP NetWeaver -You can configure one or more providers of provider type SAP NetWeaver to enable data collection from SAP NetWeaver layer. Azure Monitor for SAP solutions NetWeaver provider uses the existing -- [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information (also available in Azure Monitor for SAP Solutions classic) -- SAP RFC - ability to collect additional information from the SAP system leveraging Standard SAP RFC. (available only in Azure Monitor for SAP solution)+You can configure one or more providers of provider type SAP NetWeaver to enable data collection from the SAP NetWeaver layer. 
Azure Monitor for SAP solutions NetWeaver provider uses the existing +- [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information. +- SAP RFC - the ability to collect additional information from the SAP system using Standard SAP RFC. -You can get the following data with the SAP NetWeaver provider: +You can get the following data with the SAP NetWeaver provider: - SAP system and application server availability (e.g., instance process availability of dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS Watchdog) (SAPOsControl) - Work process usage statistics and trends (SAPOsControl) You can get the following data with the SAP NetWeaver provider: - Transactional RFC (**Tcode - SM59**) (RFC) - STMS Change Transport System Metrics (**Tcode - STMS**) (RFC) +Configuring the SAP NetWeaver provider requires: ++For SOAP Web Methods: + - Fully Qualified Domain Name of SAP Web dispatcher OR SAP Application server. + - SAP System ID, Instance no. + - Host file entries of all SAP application servers that get listed via the SAPcontrol "GetSystemInstanceList" web method. ++For SOAP+RFC: + - Fully Qualified Domain Name of SAP Web dispatcher OR SAP Application server. + - SAP System ID, Instance no. + - SAP Client ID, HTTP port, SAP Username and Password for login. + - Host file entries of all SAP application servers that get listed via the SAPcontrol "GetSystemInstanceList" web method. ++Check [SAP NetWeaver provider](provider-netweaver.md) creation for more detailed steps. +  ## Provider type: SAP HANA -You can configure one or more providers of provider type *SAP HANA* to enable data collection from SAP HANA database. The SAP HANA provider connects to the SAP HANA database over SQL port, pulls data from the database, and pushes it to the Log Analytics workspace in your subscription. The SAP HANA provider collects data every 1 minute from the SAP HANA database. +You can configure one or more providers of provider type *SAP HANA* to enable data collection from the SAP HANA database. The SAP HANA provider connects to the SAP HANA database over the SQL port, pulls data from the database, and pushes it to the Log Analytics workspace in your subscription. The SAP HANA provider collects data every 1 minute from the SAP HANA database. You can see the following data with the SAP HANA provider: Configuring the SAP HANA provider requires: It's recommended to configure the SAP HANA provider against **SYSTEMDB**. However, more providers can be configured against other database tenants. +Check [SAP HANA provider](provider-hana.md) creation for more detailed steps. +  ## Provider type: Microsoft SQL server -You can configure one or more Microsoft SQL Server providers to enable data collection from [SQL Server on Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/). The SQL Server provider connects to Microsoft SQL Server over the SQL port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. Configure SQL Server for SQL authentication and for signing in with the SQL Server username and password. Set the SAP database as the default database for the provider. The SQL Server provider collects data from every 60 seconds up to every hour from the SQL server. +You can configure one or more Microsoft SQL Server providers to enable data collection from [SQL Server on Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/). 
The SQL Server provider connects to Microsoft SQL Server over the SQL port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. Configure SQL Server for SQL authentication and for signing in with the SQL Server username and password. Set the SAP database as the default database for the provider. The SQL Server provider collects data from every 60 seconds up to every hour from the SQL server. -In public preview, you can expect to see the following data with SQL Server provider: +You can get the following data with the SQL Server provider: - Underlying infrastructure usage - Top SQL statements - Top largest table - Problems recorded in the SQL Server error log-- Blocking processes and others +- Blocking processes and others Configuring the Microsoft SQL Server provider requires: - The SAP System ID Configuring the Microsoft SQL Server provider requires: - The SQL Server port number - The SQL Server username and password +Check [SQL Database provider](provider-sql-server.md) creation for more detailed steps. +  ## Provider type: High-availability cluster -You can configure one or more providers of provider type *High-availability cluster* to enable data collection from Pacemaker cluster within the SAP landscape. The High-availability cluster provider connects to Pacemaker using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE** based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL** based clusters. Azure Monitor for SAP solutions then pulls data from the database and pushes it to Log Analytics workspace in your subscription. The High-availability cluster provider collects data every 60 seconds from Pacemaker. +You can configure one or more providers of provider type *High-availability cluster* to enable data collection from a Pacemaker cluster within the SAP landscape. The High-availability cluster provider connects to Pacemaker using the [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) for **SUSE** based clusters and by using [Performance co-pilot](https://access.redhat.com/articles/6139852) for **RHEL** based clusters. Azure Monitor for SAP solutions then pulls data from the cluster and pushes it to the Log Analytics workspace in your subscription. The High-availability cluster provider collects data every 60 seconds from Pacemaker. -In public preview, you can expect to see the following data with the High-availability cluster provider: +You can get the following data with the High-availability cluster provider: + - Cluster status represented as a roll-up of node and resource status - Location constraints - Trends+ - [others](https://github.com/ClusterLabs/ha_cluster_exporter/blob/master/doc/metrics.md) +  To configure a High-availability cluster provider, two primary steps are involved: -1. Install [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) in *each* node within the Pacemaker cluster. +1. Install [ha_cluster_exporter](provider-ha-pacemaker-cluster.md) in *each* node within the Pacemaker cluster. You have two options for installing ha_cluster_exporter:- ++ - Use Azure Automation scripts to deploy a high-availability cluster. 
The scripts install [ha_cluster_exporter](https://github.com/ClusterLabs/ha_cluster_exporter) on each cluster node. + - Do a [manual installation](https://github.com/ClusterLabs/ha_cluster_exporter#manual-clone--build). 2. Configure a High-availability cluster provider for *each* node within the Pacemaker cluster. To configure the High-availability cluster provider, the following information is required:- + - **Name**. A name for this provider. It should be unique for this Azure Monitor for SAP solutions instance. - **Prometheus Endpoint**. `http://<servername or ip address>:9664/metrics`.- - **SID**. For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-character name for the cluster. The SID must be distinct from other clusters that are monitored. + - **SID**. For SAP systems, use the SAP SID. For other systems (for example, NFS clusters), use a three-character name for the cluster. The SID must be distinct from other clusters that are monitored. - **Cluster name**. The cluster name used when creating the cluster. The cluster name can be found in the cluster property `cluster-name`.- - **Hostname**. The Linux hostname of the virtual machine (VM). + - **Hostname**. The Linux hostname of the virtual machine (VM). ++ For detailed creation steps, see [High Availability Cluster provider](provider-ha-pacemaker-cluster.md). ## Provider type: OS (Linux) -You can configure one or more providers of provider type OS (Linux) to enable data collection from a BareMetal or VM node. The OS (Linux) provider connects to BareMetal or VM nodes using the [Node_Exporter](https://github.com/prometheus/node_exporter) endpoint. It then pulls data from the nodes and pushes it to Log Analytics workspace in your subscription. The OS (Linux) provider collects data every 60 seconds for most of the metrics from the nodes. +You can configure one or more providers of provider type OS (Linux) to enable data collection from a BareMetal or VM node. The OS (Linux) provider connects to BareMetal or VM nodes using the [Node_Exporter](https://github.com/prometheus/node_exporter) endpoint. It then pulls data from the nodes and pushes it to the Log Analytics workspace in your subscription. The OS (Linux) provider collects most metrics from the nodes every 60 seconds. ++You can get the following data with the OS (Linux) provider: -In public preview, you can expect to see the following data with the OS (Linux) provider: - - CPU usage, CPU usage by process - - Disk usage, I/O read & write - - Memory distribution, memory usage, swap memory usage - - Network usage, network inbound & outbound traffic details + - CPU usage, CPU usage by process + - Disk usage, I/O read & write + - Memory distribution, memory usage, swap memory usage + - Network usage, network inbound & outbound traffic details To configure an OS (Linux) provider, two primary steps are involved: 1. Install [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node.- You have two options for installing [Node_exporter](https://github.com/prometheus/node_exporter): - - For automated installation with Ansible, use [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node to install the OS (Linux) Provider.
+ You have two options for installing [Node_exporter](https://github.com/prometheus/node_exporter): + - For automated installation with Ansible, use [Node_Exporter](https://github.com/prometheus/node_exporter) on each BareMetal or VM node to install the OS (Linux) Provider. - Do a [manual installation](https://prometheus.io/docs/guides/node-exporter/). -1. Configure an OS (Linux) provider for each BareMetal or VM node instance in your environment. - To configure the OS (Linux) provider, the following information is required: - - **Name**: a name for this provider, unique to the Azure Monitor for SAP solutions instance. - - **Node Exporter endpoint**: usually `http://<servername or ip address>:9100/metrics`. +1. Configure an OS (Linux) provider for each BareMetal or VM node instance in your environment. + To configure the OS (Linux) provider, the following information is required: + - **Name**: a name for this provider, unique to the Azure Monitor for SAP solutions instance. + - **Node Exporter endpoint**: usually `http://<servername or ip address>:9100/metrics`. Port 9100 is exposed for the **Node_Exporter** endpoint. +For detailed creation steps, see [Operating System provider](provider-linux.md). + > [!Warning]-> Make sure **Node-Exporter** keeps running after the node reboot. +> Make sure **Node_Exporter** keeps running after a node reboot. ## Provider type: IBM Db2 -You can configure one or more IBM Db2 providers. The following data is available with this provider type: +You can configure one or more IBM Db2 providers to enable data collection from IBM Db2 servers. The Db2 server provider connects to the database over the given port. It then pulls data from the database and pushes it to the Log Analytics workspace in your subscription. The Db2 server provider collects data from the Db2 server at intervals ranging from every 60 seconds to every hour. ++You can get the following data with the IBM Db2 provider: - Database availability - Number of connections - Logical and physical reads - Waits and current locks-- Top 20 runtime and executions +- Top 20 runtime and executions ++Configuring the IBM Db2 provider requires: +- The SAP system ID +- The host IP address +- The database name +- The port number of the Db2 server to connect to +- The Db2 server username and password ++For detailed creation steps, see [IBM Db2 provider](provider-ibm-db2.md).  |
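Both the OS (Linux) and High-availability cluster providers above pull from Prometheus-style exporter endpoints. Before creating those providers, it can help to confirm that the endpoints are reachable; the following is a minimal sketch (host addresses are placeholders, and the ports are the exporter defaults mentioned above):

```azurepowershell-interactive
# Sanity check (hypothetical addresses): each Prometheus-based provider
# expects its exporter endpoint to return plain-text metrics over HTTP.
$endpoints = @(
    'http://10.0.0.5:9100/metrics',  # OS (Linux) provider: Node_Exporter
    'http://10.0.0.6:9664/metrics'   # High-availability cluster provider: ha_cluster_exporter
)

foreach ($url in $endpoints) {
    try {
        $response = Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 10
        Write-Output "$url responded with HTTP $($response.StatusCode)"
    }
    catch {
        Write-Warning "$url is unreachable: $($_.Exception.Message)"
    }
}
```

If a warning is returned here, check the NSG, routing, and firewall guidance in the network setup article before retrying provider creation.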
sap | Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-portal.md | Title: Deploy Azure Monitor for SAP solutions with the Azure portal (preview) + Title: Deploy Azure Monitor for SAP solutions with the Azure portal description: Learn how to use a browser method for deploying Azure Monitor for SAP solutions. Last updated 10/19/2022 # Customer intent: As a developer, I want to deploy Azure Monitor for SAP solutions in the Azure portal so that I can configure providers. -# Quickstart: deploy Azure Monitor for SAP solutions in Azure portal (preview) -+# Quickstart: deploy Azure Monitor for SAP solutions in Azure portal Get started with Azure Monitor for SAP solutions by using the [Azure portal](https://azure.microsoft.com/features/azure-portal) to deploy Azure Monitor for SAP solutions resources and configure providers. -This content applies to both versions of the service, Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic). - ## Prerequisites -If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. +- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. ++- [Set up the network](./set-up-network.md) before creating the Azure Monitor for SAP solutions resource. ++- Create or use an existing virtual network for Azure Monitor for SAP solutions (AMS) that has access to the source SAP systems' virtual network. +- Create a new subnet with an address range of IPv4/25 or larger in the AMS-associated virtual network, with subnet delegation assigned to "Microsoft.Web/serverFarms" as shown. ++ > [!div class="mx-imgBorder"] + >  ## Create Azure Monitor for SAP solutions monitoring resource If you don't have an Azure subscription, create a [free](https://azure.microsoft 2. In Azure **Search**, select **Azure Monitor for SAP solutions**. -  -- 3. On the **Basics** tab, provide the required values.- 1. **Workload Region** is the region where the monitoring resources are created, make sure to select a region that is same as your virtual network. - 2. **Service Region** is where proxy resource gets created which manages monitoring resources deployed in the workload region. Service region is automatically selected based on your Workload Region selection. - 3. For **Virtual Network** field select a virtual network, which has connectivity to your SAP systems. - 4. For the **Subnet** field, select a subnet that has connectivity to your SAP systems. You can use an existing subnet or create a new subnet. Make sure that you select a subnet, which is an **IPv4/25 block or larger**. - 5. For **Log Analytics Workspace**, you can use an existing Log Analytics workspace or create a new one. If you create a new workspace, it will be created inside the managed resource group along with other monitoring resources. - 6. When entering **managed resource group** name, make sure to use a unique name. This name is used to create a resource group, which will contain all the monitoring resources. Managed Resource Group name cannot be changed once the resource is created. ++ 1. **Subscription**: add the relevant Azure subscription details. + 2. **Resource Group**: create a new resource group or select an existing one under the given subscription. + 3. **Resource Name**: enter the name for the Azure Monitor for SAP solutions resource. + 4. **Workload Region**: the region where the monitoring resources are created. Make sure to select the same region as your virtual network. + 5. 
**Service Region**: where the proxy resource that manages the monitoring resources deployed in the workload region gets created. The service region is automatically selected based on your Workload Region selection. + 6. For the **Virtual Network** field, select a virtual network that has connectivity to your SAP systems for monitoring. + 7. For the **Subnet** field, select a subnet that has connectivity to your SAP systems. You can use an existing subnet or create a new subnet. Make sure that you select a subnet that is an **IPv4/25 block or larger**. + 8. For **Log Analytics Workspace**, you can use an existing Log Analytics workspace or create a new one. If you create a new workspace, it is created inside the managed resource group along with other monitoring resources. + 9. When entering the **Managed resource group** name, make sure to use a unique name. This name is used to create a resource group that will contain all the monitoring resources. The managed resource group name can't be changed once the resource is created. <br/> -  + > [!div class="mx-imgBorder"] + >  4. On the **Providers** tab, you can start creating providers along with the monitoring resource. You can also create providers later by navigating to the **Providers** tab in the Azure Monitor for SAP solutions resource.+ 5. On the **Tags** tab, you can add tags to the monitoring resource. Make sure to add all the mandatory tags if you have a tag policy in place. 6. On the **Review + create** tab, review the details and select **Create**. +## Create providers in Azure Monitor for SAP solutions -## Create Azure Monitor for SAP solutions (classic) monitoring resource --1. Sign in to the [Azure portal](https://portal.azure.com). --1. In Azure **Marketplace** or **Search**, select **Azure Monitor for SAP solutions (classic)**. --  ---1. On the **Basics** tab, provide the required values. If applicable, you can use an existing Log Analytics workspace. -- :::image type="content" source="./media/quickstart-portal/azure-monitor-quickstart-2.png" alt-text="Screenshot that shows configuration options on the Basics tab." lightbox="./media/quickstart-portal/azure-monitor-quickstart-2.png"::: -- When you're selecting a virtual network, ensure that the systems you want to monitor are reachable from within that virtual network. -- > [!IMPORTANT] - > Selecting **Share** for **Share data with Microsoft support** enables our support teams to help you with troubleshooting. This feature is available only for Azure Monitor for SAP solutions (classic) +Refer to the following articles to create each provider instance: +- [SAP NetWeaver Provider Creation](provider-netweaver.md) +- [SAP HANA Provider Creation](provider-hana.md) +- [SAP Microsoft SQL Provider Creation](provider-sql-server.md) +- [SAP IBM DB2 Provider Creation](provider-ibm-db2.md) +- [SAP Operating System Provider Creation](provider-linux.md) +- [SAP High Availability Provider Creation](provider-ha-pacemaker-cluster.md) ## Next steps |
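The delegated subnet listed in the portal prerequisites can also be prepared ahead of the deployment. The following is a rough sketch with Az PowerShell; the virtual network name, resource group, and address prefix are assumptions:

```azurepowershell-interactive
# Sketch (names and prefix are placeholders): create the AMS subnet with
# the required Microsoft.Web/serverFarms delegation before deploying.
$vnet = Get-AzVirtualNetwork -Name 'ams-vnet-eus' -ResourceGroupName 'ams-vnet-rg'
$delegation = New-AzDelegation -Name 'ams-delegation' -ServiceName 'Microsoft.Web/serverFarms'
Add-AzVirtualNetworkSubnetConfig -Name 'Contoso-AMS-Monitor' -VirtualNetwork $vnet `
    -AddressPrefix '10.1.0.0/25' -Delegation $delegation
$vnet | Set-AzVirtualNetwork
```

The delegation is what allows the Azure Functions-based data collector to be injected into the subnet, which is why the portal rejects subnets without it.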
sap | Quickstart Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-powershell.md | Title: Deploy Azure Monitor for SAP solutions with Azure PowerShell (preview) + Title: Deploy Azure Monitor for SAP solutions with Azure PowerShell description: Deploy Azure Monitor for SAP solutions with Azure PowerShell -# Quickstart: deploy Azure Monitor for SAP solutions with PowerShell (preview) +# Quickstart: deploy Azure Monitor for SAP solutions with PowerShell --Get started with Azure Monitor for SAP solutions by using the -[Az.HanaOnAzure](/powershell/module/az.hanaonazure/#sap-hana-on-azure) PowerShell module to create Azure Monitor for SAP solutions resources. You'll create a resource group, set up monitoring, and create a provider instance. --This content only applies to the Azure Monitor for SAP solutions (classic) version of the service. +Get started with Azure Monitor for SAP solutions by using the [Az.Workloads](/powershell/module/az.workloads) PowerShell module to create Azure Monitor for SAP solutions resources. You create a resource group, set up monitoring, and create a provider instance. ## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.+- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module. Connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-az-ps). Alternatively, you can use [Azure Cloud Shell](../../cloud-shell/overview.md). -- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module. You'll also need to connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). Alternately, you can use [Azure Cloud Shell](../../cloud-shell/overview.md).--- While the **Az.HanaOnAzure** PowerShell module is in preview, you must install it separately using the `Install-Module` cmdlet. Once this PowerShell module becomes generally available, it becomes part of future Az PowerShell module releases and available natively from within Azure Cloud Shell.+Install the **Az.Workloads** PowerShell module by running the following command. - ```azurepowershell-interactive - Install-Module -Name Az.HanaOnAzure - ``` +```azurepowershell-interactive +Install-Module -Name Az.Workloads +``` - If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet. - ```azurepowershell-interactive - Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 - ``` +```azurepowershell-interactive +Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 +``` ++- Create or use an existing virtual network for Azure Monitor for SAP solutions (AMS) that has access to the source SAP systems' virtual network. +- Create a new subnet with an address range of IPv4/25 or larger in the AMS-associated virtual network, with subnet delegation assigned to "Microsoft.Web/serverFarms".
++ > [!div class="mx-imgBorder"] + >  ## Create a resource group Create an [Azure resource group](../../azure-resource-manager/management/overvie The following example creates a resource group with the specified name and in the specified location. ```azurepowershell-interactive-New-AzResourceGroup -Name myResourceGroup -Location westus2 +New-AzResourceGroup -Name Contoso-AMS-RG -Location <myResourceLocation> +``` ++## Azure Monitor for SAP: Monitor Creation ++To create an SAP monitor, use the [New-AzWorkloadsMonitor](/powershell/module/az.workloads/new-azworkloadsmonitor) cmdlet. The following example creates an SAP monitor for the specified subscription, resource group, and resource name. ++```azurepowershell-interactive +$monitor_name = 'Contoso-AMS-Monitor' +$rg_name = 'Contoso-AMS-RG' +$subscription_id = '00000000-0000-0000-0000-000000000000' +$location = 'eastus' +$managed_rg_name = 'MRG_Contoso-AMS-Monitor' +$subnet_id = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ams-vnet-rg/providers/Microsoft.Network/virtualNetworks/ams-vnet-eus/subnets/Contoso-AMS-Monitor' +$route_all = 'RouteAll' ++New-AzWorkloadsMonitor -Name $monitor_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -Location $location -AppLocation $location -ManagedResourceGroupName $managed_rg_name -MonitorSubnet $subnet_id -RoutingPreference $route_all +``` ++To retrieve the properties of an SAP monitor, use the [Get-AzWorkloadsMonitor](/powershell/module/az.workloads/get-azworkloadsmonitor) cmdlet. The following example gets properties of an SAP monitor for the specified subscription, resource group, and resource name. ++```azurepowershell-interactive +Get-AzWorkloadsMonitor -ResourceGroupName Contoso-AMS-RG -Name Contoso-AMS-Monitor +``` ++## Azure Monitor for SAP - Provider Creation ++### SAP NetWeaver Provider Creation ++To create an SAP NetWeaver provider, use the [New-AzWorkloadsProviderInstance](/powershell/module/az.workloads/new-azworkloadsproviderinstance) cmdlet. The following example creates a NetWeaver provider for the specified subscription, resource group, and resource name. ++```azurepowershell-interactive +Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 ``` -## SAP monitor +> [!NOTE] +> +> - `hostname` is the SAP Web Dispatcher or application server hostname/IP address. +> - `SapHostFileEntry` is the IP, FQDN, and hostname of every instance that gets listed in [GetSystemInstanceList](./provider-netweaver.md#determine-all-hostname-associated-with-an-sap-system). -To create an SAP monitor, use the [New-AzSapMonitor](/powershell/module/az.hanaonazure/new-azsapmonitor) cmdlet. The following example creates an SAP monitor for the specified subscription, resource group, and resource name.
+```azurepowershell-interactive +$subscription_id = '00000000-0000-0000-0000-000000000000' +$rg_name = 'Contoso-AMS-RG' +$monitor_name = 'Contoso-AMS-Monitor' +$provider_name = 'Contoso-AMS-Monitor-NW' ++$SapClientId = '000' +$SapHostFileEntry = '["10.0.0.0 x01scscl1.ams.azure.com x01scscl1,10.0.0.0 x01erscl1.ams.azure.com x01erscl1,10.0.0.1 x01appvm1.ams.azure.com x01appvm1,10.0.0.2 x01appvm2.ams.azure.com x01appvm2"]' +$hostname = 'x01appvm0' +$instance_number = '00' +$password = 'Password@123' +$sapportNumber = '8000' +$sap_sid = 'X01' +$sap_username = 'AMS_NW' +$providerSetting = New-AzWorkloadsProviderSapNetWeaverInstanceObject -SapClientId $SapClientId -SapHostFileEntry $SapHostFileEntry -SapHostname $hostname -SapInstanceNr $instance_number -SapPassword $password -SapPortNumber $sapportNumber -SapSid $sap_sid -SapUsername $sap_username -SslPreference Disabled ++New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting ++``` ++### SAP HANA Provider Creation ++To create an SAP HANA provider, use the [New-AzWorkloadsProviderInstance](/powershell/module/az.workloads/new-azworkloadsproviderinstance) cmdlet. The following example creates a HANA provider for the specified subscription, resource group, and resource name. ++```azurepowershell-interactive +$subscription_id = '00000000-0000-0000-0000-000000000000' +$rg_name = 'Contoso-AMS-RG' +$monitor_name = 'Contoso-AMS-Monitor' +$provider_name = 'Contoso-AMS-Monitor-HANA' ++$hostname = '10.0.0.0' +$sap_sid = 'X01' +$username = 'SYSTEM' +$password = 'password@123' +$dbName = 'SYSTEMDB' +$instance_number = '00' ++$providerSetting = New-AzWorkloadsProviderHanaDbInstanceObject -Name $dbName -Password $password -Username $username -Hostname $hostname -InstanceNumber $instance_number -SapSid $sap_sid -SqlPort 1433 -SslPreference Disabled +New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting +``` ++### Operating System Provider Creation ++To create an Operating System provider, use the [New-AzWorkloadsProviderInstance](/powershell/module/az.workloads/new-azworkloadsproviderinstance) cmdlet. The following example creates an OS provider for the specified subscription, resource group, and resource name. ```azurepowershell-interactive-$Workspace = New-AzOperationalInsightsWorkspace -ResourceGroupName myResourceGroup -Name sapmonitor-test -Location westus2 -Sku Standard +$subscription_id = '00000000-0000-0000-0000-000000000000' +$rg_name = 'Contoso-AMS-RG' +$monitor_name = 'Contoso-AMS-Monitor' +$provider_name = 'Contoso-AMS-Monitor-OS' ++$hostname = 'http://10.0.0.0:9100/metrics' +$sap_sid = 'X01' ++$providerSetting = New-AzWorkloadsProviderPrometheusOSInstanceObject -PrometheusUrl $hostname -SapSid $sap_sid -SslPreference Disabled +New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting +``` ++### High Availability Cluster Provider Creation -$WorkspaceKey = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName myResourceGroup -Name sapmonitor-test +To create a High Availability Cluster provider, use the [New-AzWorkloadsProviderInstance](/powershell/module/az.workloads/new-azworkloadsproviderinstance) cmdlet.
The following example creates a High Availability Cluster provider for the specified subscription, resource group, and resource name. -$SapMonitorParams = @{ - Name = 'ps-sapmonitor-t01' - ResourceGroupName = 'myResourceGroup' - Location = 'westus2' - EnableCustomerAnalytic = $true - MonitorSubnet = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/vnet-sap/subnets/mysubnet' - LogAnalyticsWorkspaceSharedKey = $WorkspaceKey.PrimarySharedKey - LogAnalyticsWorkspaceId = $Workspace.CustomerId - LogAnalyticsWorkspaceResourceId = $Workspace.ResourceId -} -New-AzSapMonitor @SapMonitorParams +```azurepowershell-interactive +$subscription_id = '00000000-0000-0000-0000-000000000000' +$rg_name = 'Contoso-AMS-RG' +$monitor_name = 'Contoso-AMS-Monitor' +$provider_name = 'Contoso-AMS-Monitor-HA' ++$PrometheusHa_Url = 'http://10.0.0.0:44322/metrics' +$sap_sid = 'X01' +$cluster_name = 'haCluster' +$hostname = '10.0.0.0' +$providerSetting = New-AzWorkloadsProviderPrometheusHaClusterInstanceObject -ClusterName $cluster_name -Hostname $hostname -PrometheusUrl $PrometheusHa_Url -Sid $sap_sid -SslPreference Disabled ++New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting ``` -To retrieve the properties of an SAP monitor, use the [Get-AzSapMonitor](/powershell/module/az.hanaonazure/get-azsapmonitor) cmdlet. The following example gets properties of an SAP monitor for the specified subscription, resource group, and resource name. +### SQL Database Provider Creation ++To create a SQL Database provider, use the [New-AzWorkloadsProviderInstance](/powershell/module/az.workloads/new-azworkloadsproviderinstance) cmdlet. The following example creates a SQL Database provider for the specified subscription, resource group, and resource name. ```azurepowershell-interactive-Get-AzSapMonitor -ResourceGroupName myResourceGroup -Name ps-spamonitor-t01 +$subscription_id = '00000000-0000-0000-0000-000000000000' +$rg_name = 'Contoso-AMS-RG' +$monitor_name = 'Contoso-AMS-Monitor' +$provider_name = 'Contoso-AMS-Monitor-SQL' ++$hostname = '10.0.0.0' +$sap_sid = 'X01' +$username = 'AMS_SQL' +$password = 'Password@123' +$port = '1433' ++$providerSetting = New-AzWorkloadsProviderSqlServerInstanceObject -Password $password -Port $port -Username $username -Hostname $hostname -SapSid $sap_sid -SslPreference Disabled +New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting ``` -## Provider instance +### IBM Db2 Provider Creation -To create a provider instance, use the [New-AzSapMonitorProviderInstance](/powershell/module/az.hanaonazure/new-azsapmonitorproviderinstance) cmdlet. The following example creates a provider instance for the specified subscription, resource group, and resource name. +To create an IBM Db2 provider, use the [New-AzWorkloadsProviderInstance](/powershell/module/az.workloads/new-azworkloadsproviderinstance) cmdlet. The following example creates an IBM Db2 provider for the specified subscription, resource group, and resource name.
```azurepowershell-interactive-$SapProviderParams = @{ - ResourceGroupName = 'myResourceGroup' - Name = 'ps-sapmonitorins-t01' - SapMonitorName = 'yemingmonitor' - ProviderType = 'SapHana' - HanaHostname = 'hdb1-0' - HanaDatabaseName = 'SYSTEMDB' - HanaDatabaseSqlPort = '30015' - HanaDatabaseUsername = 'SYSTEM' - HanaDatabasePassword = (ConvertTo-SecureString 'Manager1' -AsPlainText -Force) -} -New-AzSapMonitorProviderInstance @SapProviderParams -``` --To retrieve properties of a provider instance, use the [Get-AzSapMonitorProviderInstance](/powershell/module/az.hanaonazure/get-azsapmonitorproviderinstance) cmdlet. The following example gets properties of: +$subscription_id = '00000000-0000-0000-0000-000000000000' +$rg_name = 'Contoso-AMS-RG' +$monitor_name = 'Contoso-AMS-Monitor' +$provider_name = 'Contoso-AMS-Monitor-DB2' ++$hostname = '10.0.0.0' +$sap_sid = 'X01' +$username = 'AMS_DB2' +$password = 'password@123' +$dbName = 'X01' +$port = '5912' ++$providerSetting = New-AzWorkloadsProviderDB2InstanceObject -Name $dbName -Password $password -Port $port -Username $username -Hostname $hostname -SapSid $sap_sid -SslPreference Disabled ++New-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id -ProviderSetting $providerSetting +``` ++To retrieve properties of a provider instance, use the [Get-AzWorkloadsProviderInstance](/powershell/module/az.workloads/get-azworkloadsproviderinstance) cmdlet. The following example gets properties of: + - A provider instance for the specified subscription - The resource group - The SapMonitor name - The resource name ```azurepowershell-interactive-Get-AzSapMonitorProviderInstance -ResourceGroupName myResourceGroup -SapMonitorName ps-spamonitor-t01 +Get-AzWorkloadsProviderInstance -ResourceGroupName Contoso-AMS-RG -SapMonitorName Contoso-AMS-Monitor ``` -## Clean up resources +## Clean up resources If the resources created in this article aren't needed, you can delete them by running the following examples. ### Delete the provider instance To remove a provider instance, use the-[Remove-AzSapMonitorProviderInstance](/powershell/module/az.hanaonazure/remove-azsapmonitorproviderinstance) cmdlet. The following example deletes a provider instance for the specified subscription, resource group, SapMonitor name, and resource name. +[Remove-AzWorkloadsProviderInstance](/powershell/module/az.workloads/remove-azworkloadsproviderinstance) cmdlet. The following example deletes an IBM Db2 provider instance for the specified subscription, resource group, SapMonitor name, and resource name. ```azurepowershell-interactive-Remove-AzSapMonitorProviderInstance -ResourceGroupName myResourceGroup -SapMonitorName ps-spamonitor-t01 -Name ps-sapmonitorins-t02 +$subscription_id = '00000000-0000-0000-0000-000000000000' +$rg_name = 'Contoso-AMS-RG' +$monitor_name = 'Contoso-AMS-Monitor' +$provider_name = 'Contoso-AMS-Monitor-DB2' ++Remove-AzWorkloadsProviderInstance -MonitorName $monitor_name -Name $provider_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id ``` ### Delete the SAP monitor -To remove an SAP monitor, use the [Remove-AzWorkloadsMonitor](/powershell/module/az.workloads/remove-azworkloadsmonitor) cmdlet.
The following example deletes an SAP monitor for the specified subscription, resource group, and monitor name. ```azurepowershell-Remove-AzSapMonitor -ResourceGroupName myResourceGroup -Name ps-sapmonitor-t02 +$monitor_name = 'Contoso-AMS-Monitor' +$rg_name = 'Contoso-AMS-RG' +$subscription_id = '00000000-0000-0000-0000-000000000000' ++Remove-AzWorkloadsMonitor -Name $monitor_name -ResourceGroupName $rg_name -SubscriptionId $subscription_id ``` -### Delete the resource group +### Delete the resource group > [!CAUTION] > The following example deletes the specified resource group and all resources contained within it. > If resources outside the scope of this article exist in the specified resource group, they will also be deleted. ```azurepowershell-interactive-Remove-AzResourceGroup -Name myResourceGroup +Remove-AzResourceGroup -Name Contoso-AMS-RG ``` ## Next steps |
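The provider examples above use inline plain-text passwords for readability. In practice, you might keep the monitoring credentials in Azure Key Vault and read them at deployment time. This is a minimal sketch, assuming a vault named `Contoso-AMS-KV` and a secret named `ams-nw-password` (both hypothetical):

```azurepowershell-interactive
# Sketch (vault and secret names are assumptions): read the monitoring
# user's password from Key Vault instead of hardcoding it in the script.
$password = Get-AzKeyVaultSecret -VaultName 'Contoso-AMS-KV' -Name 'ams-nw-password' -AsPlainText

# $password can then replace the inline literal in any of the provider
# examples above, for example in New-AzWorkloadsProviderSapNetWeaverInstanceObject.
```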
sap | Set Up Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/set-up-network.md | Title: Set up network for Azure Monitor for SAP solutions (preview) + Title: Set up network for Azure Monitor for SAP solutions description: Learn how to set up an Azure virtual network for use with Azure Monitor for SAP solutions. Last updated 10/27/2022 #Customer intent: As a developer, I want to set up an Azure virtual network so that I can use Azure Monitor for SAP solutions. -# Set up network for Azure Monitor for SAP solutions (preview) --In this how-to guide, you'll learn how to configure an Azure virtual network so that you can deploy *Azure Monitor for SAP solutions*. You'll learn to [create a new subnet](#create-new-subnet) for use with Azure Functions for both versions of the product, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*. Then, if you're using the current version of Azure Monitor for SAP solutions, you'll learn to [set up outbound internet access](#configure-outbound-internet-access) to the SAP environment that you want to monitor. +# Set up network for Azure Monitor for SAP solutions +In this how-to guide, you'll learn how to configure an Azure virtual network so that you can deploy *Azure Monitor for SAP solutions*. +- You'll learn to [create a new subnet](#create-new-subnet) for use with Azure Functions. +- You'll learn to [set up outbound internet access](#configure-outbound-internet-access) to the SAP environment that you want to monitor. ## Create new subnet -> [!NOTE] -> This section applies to both Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic). - Azure Functions is the data collection engine for Azure Monitor for SAP solutions. You'll need to create a new subnet to host Azure Functions. -[Create a new subnet](../../azure-functions/functions-networking-options.md#subnets) with an **IPv4/28** block or larger. +[Create a new subnet](../../azure-functions/functions-networking-options.md#subnets) with an **IPv4/25** block or larger, because at least 100 IP addresses are needed for the monitoring resources. +After the subnet is created, follow these steps to ensure connectivity between the Azure Monitor for SAP solutions subnet and your SAP environment subnet. ++- If the two subnets are in different virtual networks, set up virtual network peering between them. +- If the subnets are associated with user-defined routes, make sure the routes are configured to allow traffic between the subnets. +- If the SAP environment subnets have NSG rules, make sure the rules are configured to allow inbound traffic from the Azure Monitor for SAP solutions subnet. +- If you have a firewall in your SAP environment, make sure the firewall is configured to allow inbound traffic from the Azure Monitor for SAP solutions subnet. For more information, see how to [integrate your app with an Azure virtual network](../../app-service/overview-vnet-integration.md). For more information, see how to [integrate your app with an Azure virtual netwo This section only applies if you're using custom DNS for your virtual network. Add the IP address 168.63.129.16, which points to the Azure DNS server. This resolves the storage account and other resource URLs that are required for the proper functioning of Azure Monitor for SAP solutions. See the following reference image.
- +> [!div class="mx-imgBorder"] +>  ## Configure outbound internet access -> [!IMPORTANT] -> This section only applies to the current version of Azure Monitor for SAP solutions. If you're using Azure Monitor for SAP solutions (classic), skip this section. - In many use cases, you might choose to restrict or block outbound internet access to your SAP network environment. However, Azure Monitor for SAP solutions requires network connectivity between the [subnet that you configured](#create-new-subnet) and the systems that you want to monitor. Before you deploy an Azure Monitor for SAP solutions resource, you need to configure outbound internet access, or the deployment will fail. There are multiple methods to address restricted or blocked outbound internet access. Choose the method that works best for your use case: You can configure the **Route All** setting when you create an Azure Monitor for You can only use this option before you deploy an Azure Monitor for SAP solutions resource. It's not possible to change the **Route All** setting after you create the Azure Monitor for SAP solutions resource. +### Allow Inbound Traffic ++If you have NSG or user-defined route rules that block inbound traffic to your SAP environment, modify the rules to allow the inbound traffic. Depending on the types of providers you're trying to onboard, you also have to unblock a few ports, as shown in the following table. ++| **Provider Type** | **Port Number** | +||| +| Prometheus OS | 9100 | +| Prometheus HA Cluster on RHEL | 44322 | +| Prometheus HA Cluster on SUSE | 9100 | +| SQL Server | 1433 (can be different if you are not using the default port) | +| DB2 Server | 25000 (can be different if you are not using the default port) | +| SAP HANA DB | 3\<instance number\>13, 3\<instance number\>15 | +| SAP NetWeaver | 5\<instance number\>13, 5\<instance number\>15 | + ### Use service tags If you use NSGs, you can create Azure Monitor for SAP solutions-related [virtual network service tags](../../virtual-network/service-tags-overview.md) to allow appropriate traffic flow for your deployment. A service tag represents a group of IP address prefixes from a given Azure service. You can enable a private endpoint by creating a new subnet in the same virtual n To create a private endpoint for Azure Monitor for SAP solutions: -1. [Create a new subnet](../../virtual-network/virtual-network-manage-subnet.md#add-a-subnet) in the same virtual network as the SAP system that you're monitoring. -1. In the Azure portal, go to your Azure Monitor for SAP solutions resource. -1. On the **Overview** page for the Azure Monitor for SAP solutions resource, select the **Managed resource group**. -1. Create a private endpoint connection for the following resources inside the managed resource group. - 1. [Azure Key Vault resources](#create-key-vault-endpoint) - 2. [Azure Storage resources](#create-storage-endpoint) - 3. [Azure Log Analytics workspaces](#create-log-analytics-endpoint) +1. Create an Azure private DNS zone that will contain the private endpoint records. You can follow the steps in [Create a private DNS zone](../../dns/private-dns-getstarted-portal.md). Make sure to link the private DNS zone to the virtual networks that contain your SAP system and Azure Monitor for SAP solutions resources. -#### Create key vault endpoint + > [!div class="mx-imgBorder"] + >  -You only need one private endpoint for all the Azure Key Vault resources (secrets, certificates, and keys).
Once a private endpoint is created for key vault, the vault resources can't be accessed from systems outside the given vnet. +1. Create a subnet in the virtual network that will be used for the private endpoint. Note down the subnet ID and private IP address for these subnets. +2. To find the resources in the Azure portal, go to your Azure Monitor for SAP solutions resource. +3. On the **Overview** page for the Azure Monitor for SAP solutions resource, select the **Managed resource group**. -1. On the key vault resource's menu, under **Settings**, select **Networking**. -1. Select the **Private endpoint connections** tab. -1. Select **Create** to open the endpoint creation page. -1. On the **Basics** tab, enter or select all required information. -1. On the **Resource** tab, enter or select all required information. For the key vault resource, there's only one subresource available, the vault. -1. On the **Virtual Network** tab, select the virtual network and the subnet that you created specifically for the endpoint. It's not possible to use the same subnet as the Azure Functions app. -1. On the **DNS** tab, for **Integrate with private DNS zone**, select **Yes**. If necessary, add tags. -1. Select **Review + create** to create the private endpoint. -1. On the **Networking** page again, select the **Firewalls and virtual networks** tab. - 1. For **Allow access from**, select **Allow public access from all networks**. - 1. Select **Apply** to save the changes. +#### Create key vault endpoint ++You can follow the steps in [Create a private endpoint for Azure Key Vault](../../key-vault/general/private-link-service.md) to configure the endpoint and test the connectivity to key vault. #### Create storage endpoint Repeat the following process for each type of storage subresource (table, queue, 1. On the **Basics** tab, enter or select all required information. 1. On the **Resource** tab, enter or select all required information. For the **Target sub-resource**, select one of the subresource types (table, queue, blob, or file). 1. On the **Virtual Network** tab, select the virtual network and the subnet that you created specifically for the endpoint. It's not possible to use the same subnet as the Azure Functions app.-1. On the **DNS** tab, for **Integrate with private DNS zone**, select **Yes**. If necessary, add tags. ++ > [!div class="mx-imgBorder"] + >  ++1. On the **DNS** tab, for **Integrate with private DNS zone**, select **Yes**. +1. On the **Tags** tab, add tags if necessary. 1. Select **Review + create** to create the private endpoint.-1. On the **Networking** page again, select the **Firewalls and virtual networks** tab. - 1. For **Allow access from**, select **Allow public access from all networks**. +1. After the deployment is complete, navigate back to the storage account. On the **Networking** page, select the **Firewalls and virtual networks** tab. + 1. For **Public network access**, select **Enable from all networks**. 1. Select **Apply** to save the changes.+1. Make sure to create private endpoints for all storage sub-resources (table, queue, blob, and file). #### Create log analytics endpoint Add outbound security rules: | 700 | Allow the source IP to access storage-account resources using private endpoint IP. (Include IPs for each of storage account sub resources: table, queue, file, and blob) | | 800 | Allow the source IP to access log-analytics workspace resource using private endpoint IP.
| +### DNS Configuration for Private Endpoints ++After creating the private endpoints, you need to configure DNS to resolve the private endpoint IP addresses. You can use either Azure Private DNS or custom DNS servers. Refer to [Configure DNS for private endpoints](../../private-link/private-endpoint-dns.md) for more information. + ## Next steps - [Quickstart: set up Azure Monitor for SAP solutions through the Azure portal](quickstart-portal.md) |
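The port table above translates directly into NSG rules on the SAP side. As a sketch (the NSG name, rule priority, and address prefixes are assumptions), the following opens the Node Exporter port to the Azure Monitor for SAP solutions subnet:

```azurepowershell-interactive
# Sketch (names, priority, and prefixes are placeholders): allow the AMS
# subnet (10.1.0.0/25) to reach Node Exporter (TCP 9100) on the SAP subnet.
$nsg = Get-AzNetworkSecurityGroup -Name 'sap-subnet-nsg' -ResourceGroupName 'sap-vnet-rg'
$nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow-AMS-NodeExporter' `
    -Access Allow -Direction Inbound -Priority 300 -Protocol Tcp `
    -SourceAddressPrefix '10.1.0.0/25' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 9100 |
    Set-AzNetworkSecurityGroup
```

The same pattern applies to the other provider ports in the table; only the destination port and rule name change.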
search | Search Howto Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md | -> [!IMPORTANT] -> Role-based access control for data plane operations, such as creating or querying an index, is currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). This functionality is only available in public cloud regions and may impact the latency of your operations while the functionality is in preview. For more information on preview limitations, see [RBAC preview limitations](search-security-rbac.md#preview-limitations). - Search applications that are built on Azure Cognitive Search can now use the [Microsoft identity platform](../active-directory/develop/v2-overview.md) for authenticated and authorized access. On Azure, the identity provider is Azure Active Directory (Azure AD). A key [benefit of using Azure AD](../active-directory/develop/active-directory-how-to-integrate.md#benefits-of-integration) is that your credentials and API keys no longer need to be stored in your code. Azure AD authenticates the security principal (a user, group, or service) running the application. If authentication succeeds, Azure AD returns the access token to the application, and the application can then use the access token to authorize requests to Azure Cognitive Search. This article shows you how to configure your client for Azure AD: It's a best practice to grant minimum permissions. If your application only need + Contributor + Reader + Search Service Contributor- + Search Index Data Contributor (preview) - + Search Index Data Reader (preview) + + Search Index Data Contributor + + Search Index Data Reader For more information on the available roles, see [Built-in roles used in Search](search-security-rbac.md#built-in-roles-used-in-search). You can also [assign roles using PowerShell](search-security-rbac.md#assign-role Once you have a managed identity and a role assignment on the search service, you're ready to add code to your application to authenticate the security principal and acquire an OAuth 2.0 token. -Azure AD authentication is also supported in the preview SDKs for [Java](https://search.maven.org/artifact/com.azure/azure-search-documents/11.5.0-beta.3/jar), [Python](https://pypi.org/project/azure-search-documents/11.3.0b3/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/11.3.0-beta.3). +Use the following client libraries for role-based access control: +++ [azure.search.documents (Azure SDK for .NET) version 11.4](https://www.nuget.org/packages/Azure.Search.Documents/)++ [azure-search-documents (Azure SDK for Java) version 11.5.6](https://central.sonatype.com/artifact/com.azure/azure-search-documents/11.5.6)++ [azure/search-documents (Azure SDK for JavaScript) version 11.3.1](https://www.npmjs.com/package/@azure/search-documents/v/11.3.1)++ [azure.search.documents (Azure SDK for Python) version 11.3](https://pypi.org/project/azure-search-documents/) > [!NOTE] > To learn more about the OAuth 2.0 code grant flow used by Azure AD, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md). ### [**.NET SDK**](#tab/aad-dotnet) -Use [Azure.Search.Documents version 11.4.0](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0) for Azure AD authentication. 
- The following instructions reference an existing C# sample to demonstrate the code changes. 1. As a starting point, clone the [source code](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11) for the [C# quickstart](search-get-started-dotnet.md). |
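The role assignments described above can also be granted from the command line, as the article's PowerShell reference notes. A minimal sketch (the object ID and scope values are placeholders) that gives a client identity query-only access to index data:

```azurepowershell-interactive
# Sketch (IDs and names are placeholders): grant a security principal
# read-only access to index data on a search service.
New-AzRoleAssignment -ObjectId '<principal-object-id>' `
    -RoleDefinitionName 'Search Index Data Reader' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<service-name>'
```

Swap in `Search Index Data Contributor` when the application needs write access to indexes, following the least-privilege guidance above.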
search | Search Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md | Additionally, you can add [network security features](#service-access-and-authen Outbound requests from a search service to other applications are typically made by indexers for text-based indexing and some aspects of AI enrichment. Outbound requests include both read and write operations. -The following list is a full enumeration of the outbound requests that can be made by a search service. A search makes requests on its own behalf, and on the behalf of an indexer or custom skill: +The following list is a full enumeration of the outbound requests that can be made by a search service. A search service makes requests on its own behalf, and on behalf of an indexer or custom skill: + Indexers [read from external data sources](search-indexer-securing-resources.md). + Indexers write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions. The private endpoint uses an IP address from the virtual network address space f :::image type="content" source="media/search-security-overview/inbound-private-link-azure-cog-search.png" alt-text="sample architecture diagram for private endpoint access"::: -While this solution is the most secure, using additional services is an added cost so be sure you have a clear understanding of the benefits before diving in. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/). For more information about how these components work together, [watch this video](#watch-this-video). Coverage of the private endpoint option starts at 5:48 into the video. For instructions on how to set up the endpoint, see [Create a Private Endpoint for Azure Cognitive Search](service-create-private-endpoint.md). +While this solution is the most secure, using more services is an added cost, so be sure you have a clear understanding of the benefits before diving in. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/). For more information about how these components work together, [watch this video](#watch-this-video). Coverage of the private endpoint option starts at 5:48 into the video. For instructions on how to set up the endpoint, see [Create a Private Endpoint for Azure Cognitive Search](service-create-private-endpoint.md). ## Authentication Once a request is admitted to the search service, it must still undergo authentication and authorization that determines whether the request is permitted. Cognitive Search supports two approaches: -+ [Key-based authentication](search-security-api-keys.md) is performed on the request (not the calling app or user) through an API key, where the key is a string composed of randomly generated numbers and letters that prove the request is from a trustworthy source. Keys are required on every request. Submission of a valid key is considered proof the request originates from a trusted entity. ++ [Azure AD authentication](search-security-rbac.md) establishes the caller (and not the request) as the authenticated identity. An Azure role assignment determines the allowed operation. -+ [Azure AD authentication (preview)](search-security-rbac.md) establishes the caller (and not the request) as the authenticated identity. An Azure role assignment determines the allowed operation.
++ [Key-based authentication](search-security-api-keys.md) is performed on the request (not the calling app or user) through an API key, where the key is a string composed of randomly generated numbers and letters that prove the request is from a trustworthy source. Keys are required on every request. Submission of a valid key is considered proof the request originates from a trusted entity. -Outbound requests made by an indexer are subject to the authentication protocols supported by the external service. A search service can be made a trusted service on Azure, connecting to other services using a system or user-assigned managed identity. For more information, see [Set up an indexer connection to a data source using a managed identity](search-howto-managed-identities-data-sources.md). +You can use both authentication methods, or [disable an approach](search-security-rbac.md#disable-api-key-authentication) that you don't want to use. ## Authorization -Cognitive Search provides different authorization models for content management and service management. +Cognitive Search provides authorization models for service management and content management. ++### Authorize service management -### Authorization for content management +Resource management is authorized through [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Azure RBAC is the authorization system for [Azure Resource Manager](../azure-resource-manager/management/overview.md). -If you're using key-based authentication, authorization on content operations is conferred through the type of [API key](search-security-api-keys.md) on the request: +In Azure Cognitive Search, Resource Manager is used to create or delete the service, manage API keys, scale the service, and configure security. As such, Azure role assignments will determine who can perform those tasks, regardless of whether they're using the [portal](search-manage.md), [PowerShell](search-manage-powershell.md), or the [Management REST APIs](/rest/api/searchmanagement). -+ Admin key (allows read-write access for create-read-update-delete operations on the search service), created when the service is provisioned +[Three basic roles](search-security-rbac.md) (Owner, Contributor, Reader) apply to search service administration. Role assignments can be made using any supported methodology (portal, PowerShell, and so forth) and are honored service-wide. -+ Query key (allows read-only access to the documents collection of an index), created as-needed and are designed for client applications that issue queries +> [!NOTE] +> Using Azure-wide mechanisms, you can lock a subscription or resource to prevent accidental or unauthorized deletion of your search service by users with admin rights. For more information, see [Lock resources to prevent unexpected deletion](../azure-resource-manager/management/lock-resources.md). -In application code, you specify the endpoint and an API key to allow access to content and options. An endpoint might be the service itself, the indexes collection, a specific index, a documents collection, or a specific document. When chained together, the endpoint, the operation (for example, a create or update request) and the permission level (full or read-only rights based on the key) constitute the security formula that protects content and operations. 
+### Authorize access to content -If you're using Azure AD authentication, [use role assignments instead of API keys](search-security-rbac.md) to establish who and what can read and write to your search service. +Content management refers to the objects created and hosted on a search service. -### Controlling access to indexes ++ For Azure AD authorization, [use Azure role assignments](search-security-rbac.md) to establish read-write access to your search service. -In Azure Cognitive Search, an individual index is generally not a securable object. As noted previously for key-based authentication, access to an index will include read or write permissions based on which API key you provide on the request, along with the context of an operation. Queries are read-only operations. In a query request, there's no concept of joining indexes or accessing multiple indexes simultaneously so all requests target a single index by definition. As such, construction of the query request itself (a key plus a single target index) defines the security boundary. ++ For key-based authorization, [an API key](search-security-api-keys.md) and a qualified endpoint determine access. An endpoint might be the service itself, the indexes collection, a specific index, a documents collection, or a specific document. When chained together, the endpoint, the operation (for example, a create or update request) and the type of key (admin or query) authorize access to content and operations. -However, if you're using Azure roles, you can [set permissions on individual indexes](search-security-rbac.md#grant-access-to-a-single-index) as long as it's done programmatically. +### Restricting access to indexes -For key-based authentication scenarios, administrator and developer access to indexes is undifferentiated: both need write access to create, delete, and update the objects managed by the service. Anyone with an [admin key](search-security-api-keys.md) to your service can read, modify, or delete any index in the same service. For protection against accidental or malicious deletion of indexes, your in-house source control for code assets is the solution for reversing an unwanted index deletion or modification. Azure Cognitive Search has failover within the cluster to ensure availability, but it doesn't store or execute your proprietary code used to create or load indexes. +Using Azure roles, you can [set permissions on individual indexes](search-security-rbac.md#grant-access-to-a-single-index) as long as it's done programmatically. -For multitenancy solutions requiring security boundaries at the index level, such solutions typically include a middle tier, which customers use to handle index isolation. For more information about the multitenant use case, see [Design patterns for multitenant SaaS applications and Azure Cognitive Search](search-modeling-multitenant-saas-applications.md). +Using keys, anyone with an [admin key](search-security-api-keys.md) to your service can read, modify, or delete any index in the same service. For protection against accidental or malicious deletion of indexes, your in-house source control for code assets is the solution for reversing an unwanted index deletion or modification. Azure Cognitive Search has failover within the cluster to ensure availability, but it doesn't store or execute your proprietary code used to create or load indexes. 
-### Controlling access to documents +For multitenancy solutions requiring security boundaries at the index level, it's common to handle index isolation in the middle tier in your application code. For more information about the multitenant use case, see [Design patterns for multitenant SaaS applications and Azure Cognitive Search](search-modeling-multitenant-saas-applications.md). -If you require granular, per-user control over search results, you can build security filters on your queries, returning documents associated with a given security identity. +### Restricting access to documents -Conceptually equivalent to "row-level security", authorization to content within the index isn't natively supported using predefined roles or role assignments that map to entities in Azure Active Directory. Any user permissions on data in external systems, such as Azure Cosmos DB, don't transfer with that data as its being indexed by Cognitive Search. +User permissions at the document level, also known as "row-level security", isn't natively supported in Cognitive Search. If you import data from an external system that provides row-level security, such as Azure Cosmos DB, those permissions won't transfer with the data as its being indexed by Cognitive Search. -Workarounds for solutions that require "row-level security" include creating a field in the data source that represents a security group or user identity, and then using filters in Cognitive Search to selectively trims search results of documents and content based on identities. The following table describes two approaches for trimming search results of unauthorized content. +If you require permissioned access over content in search results, there's a technique for applying filters that include or exclude documents based on user identity. This workaround adds a string field in the data source that represents a group or user identity, which you can make filterable in your index. The following table describes two approaches for trimming search results of unauthorized content. | Approach | Description | |-|-| |[Security trimming based on identity filters](search-security-trimming-for-azure-search.md) | Documents the basic workflow for implementing user identity access control. It covers adding security identifiers to an index, and then explains filtering against that field to trim results of prohibited content. | |[Security trimming based on Azure Active Directory identities](search-security-trimming-for-azure-search-with-aad.md) | This article expands on the previous article, providing steps for retrieving identities from Azure Active Directory (Azure AD), one of the [free services](https://azure.microsoft.com/free/) in the Azure cloud platform. | -### Authorization for Service Management --Service Management operations are authorized through [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Azure RBAC is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) for provisioning of Azure resources. --In Azure Cognitive Search, Resource Manager is used to create or delete the service, manage API keys, and scale the service. As such, Azure role assignments will determine who can perform those tasks, regardless of whether they're using the [portal](search-manage.md), [PowerShell](search-manage-powershell.md), or the [Management REST APIs](/rest/api/searchmanagement). --[Three basic roles](search-security-rbac.md) are defined for search service administration. 
The role assignments can be made using any supported methodology (portal, PowerShell, and so forth) and are honored service-wide. The Owner and Contributor roles can perform various administration functions. You can assign the Reader role to users who only view essential information. --> [!NOTE] -> Using Azure-wide mechanisms, you can lock a subscription or resource to prevent accidental or unauthorized deletion of your search service by users with admin rights. For more information, see [Lock resources to prevent unexpected deletion](../azure-resource-manager/management/lock-resources.md). - ## Data residency When you set up a search service, you choose a location or region that determines where customer data is stored and processed. Azure Cognitive Search won't store customer data outside of your specified region unless you configure a feature that has a dependency on another Azure resource, and that resource is provisioned in a different region. Currently, the only external resource that a search service writes customer data ### Exceptions to data residency commitments -Object names will be stored and processed outside of your selected region or location. Customers shouldn't place any sensitive data in name fields or create applications designed to store sensitive data in these fields. This data will appear in the telemetry logs used by Microsoft to provide support for the service. Object names include names of indexes, indexers, data sources, skillsets, resources, containers, and key vault store. +Object names will be stored and processed outside of your selected region or location. Customers shouldn't place any sensitive data in name fields or create applications designed to store sensitive data in these fields. This data appears in the telemetry logs used by Microsoft to provide support for the service. Object names include names of indexes, indexers, data sources, skillsets, resources, containers, and key vault store. Telemetry logs are retained for one and a half years. During that period, Microsoft might access and reference object names under the following conditions: Azure Cognitive Search participates in regular audits, and has been certified ag For compliance, you can use [Azure Policy](../governance/policy/overview.md) to implement the high-security best practices of [Microsoft cloud security benchmark](/security/benchmark/azure/introduction). The Microsoft cloud security benchmark is a collection of security recommendations, codified into security controls that map to key actions you should take to mitigate threats to services and data. There are currently 12 security controls, including [Network Security](/security/benchmark/azure/mcsb-network-security), Logging and Monitoring, and [Data Protection](/security/benchmark/azure/mcsb-data-protection). -Azure Policy is a capability built into Azure that helps you manage compliance for multiple standards, including those of Microsoft cloud security benchmark. For well-known benchmarks, Azure Policy provides built-in definitions that provide both criteria and an actionable response that addresses non-compliance. +Azure Policy is a capability built into Azure that helps you manage compliance for multiple standards, including those of Microsoft cloud security benchmark. For well-known benchmarks, Azure Policy provides built-in definitions that provide both criteria and an actionable response that addresses noncompliance. -For Azure Cognitive Search, there's currently one built-in definition. It's for resource logging. 
With this built-in, you can assign a policy that identifies any search service that is missing resource logging, and then turns it on. For more information, see [Azure Policy Regulatory Compliance controls for Azure Cognitive Search](security-controls-policy.md).
+For Azure Cognitive Search, there's currently one built-in definition. It's for resource logging. You can assign a policy that identifies search services that are missing resource logging, and then turns it on. For more information, see [Azure Policy Regulatory Compliance controls for Azure Cognitive Search](security-controls-policy.md).

## Watch this video |
search | Search Security Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md | -Azure provides a global [role-based access control authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can: +Azure provides a global [role-based access control authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can use Azure roles for: -+ Use generally available roles for service administration. ++ Control plane operations (service administration tasks through Azure Resource Manager). -+ Use new preview roles for data requests, including creating, loading, and querying indexes. ++ Data plane operations, such as creating, loading, and querying indexes. Per-user access over search results (sometimes referred to as row-level security or document-level security) isn't supported. As a workaround, [create security filters](search-security-trimming-for-azure-search.md) that trim results by user identity, removing documents for which the requestor shouldn't have access. Built-in roles include generally available and preview roles. If these roles are | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only data plane access to search indexes on the search service. This role is for apps and users who run queries. | > [!NOTE]-> Azure resources have the concept of [control plane and data plane](../azure-resource-manager/management/control-plane-and-data-plane.md) categories of operations. In Cognitive Search, "control plane" refers to any operation supported in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. The "data plane" refers to operations against the search service endpoint, such as indexing or queries, or any other operation specified in the [Search REST API](/rest/api/searchservice/) or equivalent client libraries. +> In Cognitive Search, "control plane" refers to operations supported in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. The "data plane" refers to operations against the search service endpoint, such as indexing or queries, or any other operation specified in the [Search REST API](/rest/api/searchservice/) or equivalent client libraries. <a name="preview-limitations"></a> Built-in roles include generally available and preview roles. If these roles are + If you migrate your Azure subscription to a new tenant, the Azure RBAC preview will need to be re-enabled. -+ Adoption of role-based access control might increase the latency of some requests. Each unique combination of service resource (index, indexer, etc.) and service principal used on a request will trigger an authorization check. These authorization checks can add up to 200 milliseconds of latency to a request. ++ Adoption of role-based access control might increase the latency of some requests. Each unique combination of service resource (index, indexer, etc.) and service principal used on a request triggers an authorization check. These authorization checks can add up to 200 milliseconds of latency to a request. + In rare cases where requests originate from a high number of different service principals, all targeting different service resources (indexes, indexers, etc.), it's possible for the authorization checks to result in throttling. 
Throttling would only happen if hundreds of unique combinations of search service resource and service principal were used within a second. In this step, configure your search service to recognize an **authorization** he The change is effective immediately, but wait a few seconds before testing. -All network calls for search service operations and content will respect the option you select: API keys, bearer token, or either one if you select **Both**. +All network calls for search service operations and content respect the option you select: API keys, bearer token, or either one if you select **Both**. -When you enable role-based access control in the portal, the failure mode will be "http401WithBearerChallenge" if authorization fails. +When you enable role-based access control in the portal, the failure mode is "http401WithBearerChallenge" if authorization fails. ### [**REST API**](#tab/config-svc-rest) Use the Management REST API version 2022-09-01, [Create or Update Service](/rest/api/searchmanagement/2022-09-01/services/create-or-update), to configure your service. -All calls to the Management REST API are authenticated through Azure Active Directory, with Contributor or Owner permissions. For help setting up authenticated requests in Postman, see [Manage Azure Cognitive Search using REST](search-manage-rest.md). +All calls to the Management REST API are authenticated through Azure Active Directory, with Contributor or Owner permissions. For help with setting up authenticated requests in Postman, see [Manage Azure Cognitive Search using REST](search-manage-rest.md). 1. Get service settings so that you can review the current configuration. Role assignments in the portal are service-wide. If you want to [grant permissio When [using PowerShell to assign roles](../role-based-access-control/role-assignments-powershell.md), call [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name, and the scope of the assignment. -Before you start, make sure you load the **Az** and **AzureAD** modules and connect to Azure: +Before you start, make sure to load the **Az** and **AzureAD** modules and connect to Azure: ```powershell Import-Module -Name Az Import-Module -Name AzureAD Connect-AzAccount ``` -Scoped to the service, your syntax should look similar to the following example: +This example creates a role assignment scoped to a search service: ```powershell New-AzRoleAssignment -SignInName <email> ` New-AzRoleAssignment -SignInName <email> ` -Scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>" ``` -Scoped to an individual index: +This example creates a role assignment scoped to a specific index: ```powershell New-AzRoleAssignment -SignInName <email> ` Recall that you can only scope access to top-level resources, such as indexes, s ## Test role assignments -When testing roles, remember that roles are cumulative and inherited roles that are scoped to the subscription or resource group can't be deleted or denied at the resource (search service) level. +Use a client to test role assignments. Remember that roles are cumulative and inherited roles that are scoped to the subscription or resource group can't be deleted or denied at the resource (search service) level. ++Make sure that you [register your client application with Azure Active Directory](search-howto-aad.md) and have role assignments in place before testing access. 
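Before working through the client-specific tabs that follow, you can sanity-check an assignment with a quick probe. The following is a minimal sketch using the Python SDK; it assumes the app registration and a **Search Index Data Reader** (or higher) assignment are already in place, and the endpoint and index name are placeholders:

```python
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="<your-index>",
    credential=DefaultAzureCredential(),
)

try:
    # A data plane read operation; succeeds for Search Index Data Reader and higher.
    print(f"Read access confirmed: {client.get_document_count()} documents")
except HttpResponseError as error:
    # A 401/403 usually means the assignment is missing or hasn't propagated yet.
    print(f"Access check failed: {error.message}")
```

Role assignments can take a few minutes to propagate, so an initial 403 doesn't necessarily indicate a misconfiguration.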
### [**Azure portal**](#tab/test-portal) When testing roles, remember that roles are cumulative and inherited roles that 1. On the Overview page, select the **Indexes** tab: - + Members of the Contributor role can view and create any object, but can't query an index using Search Explorer. + + Contributors can view and create any object, but can't query an index using Search Explorer. - + Members of Search Index Data Reader can use Search Explorer to query the index. You can use any API version to check for access. You should be able to issue queries and view results, but you shouldn't be able to view the index definition. + + Search Index Data Readers can use Search Explorer to query the index. You can use any API version to check for access. You should be able to send queries and view results, but you shouldn't be able to view the index definition. - + Members of Search Index Data Contributor can select **New Index** to create a new index. Saving a new index will verify write access on the service. + + Search Index Data Contributors can select **New Index** to create a new index. Saving a new index verifies write access on the service. ### [**REST API**](#tab/test-rest) -This approach assumes Postman as the REST client and uses a Postman collection and variables to provide the bearer token. You'll need Azure CLI or another tool to create a security principal for the REST client. +This approach assumes Postman as the REST client and uses a Postman collection and variables to provide the bearer token. Use Azure CLI or another tool to create a security principal for the REST client. 1. Open a command shell for Azure CLI and sign in to your Azure subscription. This approach assumes Postman as the REST client and uses a Postman collection a az login ``` -1. Get your subscription ID. The ID is used as a variable in a future step. +1. Get your subscription ID. The ID is used as a variable in a future step. ```azurecli az account show --query id -o tsv ```` -1. Create a resource group for your security principal, specifying a location and name. This example uses the West US region. You'll provide this value as variable in a future step. The role you'll create will be scoped to the resource group. +1. Create a resource group for your security principal. This example uses the West US region. You provide this value as a variable in a future step. The role that you create is scoped to the resource group. ```azurecli az group create -l westus -n MyResourceGroup This approach assumes Postman as the REST client and uses a Postman collection a az ad sp create-for-rbac --name mySecurityPrincipalName --role "Search Index Data Reader" --scopes /subscriptions/mySubscriptionID/resourceGroups/myResourceGroupName ``` - A successful response includes "appId", "password", and "tenant". You'll use these values for the variables "clientId", "clientSecret", and "tenant". + A successful response includes "appId", "password", and "tenant". You use these values for the variables "clientId", "clientSecret", and "tenant". 1. Start a new Postman collection and edit its properties. In the Variables tab, create the following variables: For more information on how to acquire a token for a specific environment, see [ ### [**.NET**](#tab/test-csharp) -See [Authorize access to a search app using Azure Active Directory](search-howto-aad.md) for instructions that create an identity for your client app, assign a role, and call [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential). +1. 
Use the [Azure.Search.Documents 11.4.0](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0) package. -The Azure SDK for .NET supports an authorization header in the [NuGet Gallery | Azure.Search.Documents 11.4.0](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0) package. Configuration is required to register an application with Azure Active Directory, and to obtain and pass authorization tokens: +1. Use [Azure.Identity for .NET](/dotnet/api/overview/azure/identity-readme) for token authentication. Microsoft recommends [`DefaultAzureCredential()`](/dotnet/api/azure.identity.defaultazurecredential) for most scenarios. -+ When obtaining the OAuth token, the scope is "https://search.azure.com/.default". The SDK requires the audience to be "https://search.azure.com". The ".default" is an Azure AD convention. + + When obtaining the OAuth token, the scope is "https://search.azure.com/.default". The SDK requires the audience to be "https://search.azure.com". The ".default" is an Azure AD convention. -+ The SDK validates that the user has the "user_impersonation" scope, which must be granted by your app, but the SDK itself just asks for "https://search.azure.com/.default". + + The SDK validates that the user has the "user_impersonation" scope, which must be granted by your app, but the SDK itself just asks for "https://search.azure.com/.default". -Example of using [client secret credential](/dotnet/api/azure.core.tokencredential): +1. Here's an example of a client connection using `DefaultAzureCredential()`. -```csharp -var tokenCredential = new ClientSecretCredential(aadTenantId, aadClientId, aadSecret); -SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, tokenCredential); -``` + ```csharp + // Create a SearchIndexClient to send create/delete index commands + SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, new DefaultAzureCredential()); ++ // Create a SearchClient to load and query documents + SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, new DefaultAzureCredential()); + ``` -More details about using [Azure AD authentication with the Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity) are available in the SDK's GitHub repo. +1. Here's another example of using [client secret credential](/dotnet/api/azure.core.tokencredential): -> [!NOTE] -> If you get a 403 error, verify that your search service is enrolled in the preview program and that your service is configured for preview role assignments. + ```csharp + var tokenCredential = new ClientSecretCredential(aadTenantId, aadClientId, aadSecret); + SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, tokenCredential); + ``` ++### [**Python**](#tab/test-python) ++1. Use [azure.search.documents (Azure SDK for Python) version 11.3](https://pypi.org/project/azure-search-documents/). ++1. Use [Azure.Identity for Python](/python/api/overview/azure/identity-readme) for token authentication. ++1. Use [DefaultAzureCredential](/python/api/overview/azure/identity-readme?view=azure-python#authenticate-with-defaultazurecredential&preserve-view=true) if the Python client is an application that executes server-side. Enable [interactive authentication](/python/api/overview/azure/identity-readme?view=azure-python#enable-interactive-authentication-with-defaultazurecredential&preserve-view=true) if the app runs in a browser. ++1. 
Here's an example: ++ ```python + from azure.search.documents import SearchClient + from azure.identity import DefaultAzureCredential + + credential = DefaultAzureCredential() + endpoint = "https://<mysearch>.search.windows.net" + index_name = "myindex" + client = SearchClient(endpoint=endpoint, index_name=index_name, credential=credential) + ``` ++### [**JavaScript**](#tab/test-javascript) ++1. Use [@azure/search-documents (Azure SDK for JavaScript), version 11.3](https://www.npmjs.com/package/@azure/search-documents). ++1. Use [Azure.Identity for JavaScript](/javascript/api/overview/azure/identity-readme) for token authentication. ++1. If you're using React, use `InteractiveBrowserCredential` for Azure AD authentication to Search. See [When to use `@azure/identity`](/javascript/api/overview/azure/identity-readme?view=azure-node-latest#when-to-use&preserve-view=true) for details. ++### [**Java**](#tab/test-java) ++1. Use [azure-search-documents (Azure SDK for Java) version 11.5.6](https://central.sonatype.com/artifact/com.azure/azure-search-documents/11.5.6). ++1. Use [Azure.Identity for Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true) for token authentication. ++1. Microsoft recommends [DefaultAzureCredential](/java/api/overview/azure/identity-readme?view=azure-java-stable#defaultazurecredential&preserve-view=true) for apps that run on Azure. To disable key-based authentication, set "disableLocalAuth" to true. } ``` -Requests that include an API key only, with no bearer token, will fail with an HTTP 401. +Requests that include an API key only, with no bearer token, fail with an HTTP 401. -To re-enable key authentication, rerun the last request, setting "disableLocalAuth" to false. The search service will resume acceptance of API keys on the request automatically (assuming they're specified). +To re-enable key authentication, rerun the last request, setting "disableLocalAuth" to false. The search service resumes acceptance of API keys on the request automatically (assuming they're specified). |
search | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md | Learn about the latest updates to Azure Cognitive Search functionality, docs, an |--||--| | [**ChatGPT + Enterprise data with Azure OpenAI and Cognitive Search (GitHub)**](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/README.md) | Sample | Python code and a template for combining Cognitive Search with the large language models in OpenAI. For background, see this Tech Community blog post: [Revolutionize your Enterprise Data with ChatGPT](https://techcommunity.microsoft.com/t5/ai-applied-ai-blog/revolutionize-your-enterprise-data-with-chatgpt-next-gen-apps-w/ba-p/3762087). <br><br>Key points: <br><br>Use Cognitive Search to consolidate and index searchable content.</br> <br>Query the index for initial search results.</br> <br>Assemble prompts from those results and send to the gpt-35-turbo (preview) model in Azure OpenAI.</br> <br>Return a cross-document answer and provide citations and transparency in your customer-facing app so that users can assess the response.</br>| -## November 2022 --| Item | Type | Description | -|--||--| -| **Add search to websites** <ul><li>[C#](tutorial-csharp-overview.md)</li><li>[Python](tutorial-python-overview.md)</li><li>[JavaScript](tutorial-javascript-overview.md) </li></ul>| Sample | "Add search to websites" is a tutorial series with sample code available in three languages. This series was updated in November to run with current versions of React and the SDK client libraries. If you're integrating client code with a search index, these samples demonstrate an end-to-end approach to integration. | -| [Visual Studio Code extension for Azure Cognitive Search](https://github.com/microsoft/vscode-azurecognitivesearch/blob/master/README.md) | Feature | **Retired**. This preview feature isn't moving forward to general availability and has been removed from Visual Studio Code Marketplace. See the [documentation](search-get-started-vs-code.md) for details. | -| [Query performance dashboard](https://github.com/Azure-Samples/azure-samples-search-evaluation) | Sample | This Application Insights sample demonstrates an approach for deep monitoring of query usage and performance of an Azure Cognitive Search index. It includes a JSON template that creates a workbook and dashboard in Application Insights and a Jupyter Notebook that populates the dashboard with simulated data. | --## October 2022 --|Item | Type | Description | -|||-| -| [Compliance risk analysis using Azure Cognitive Search](/azure/architecture/guide/ai/compliance-risk-analysis) | Content | Published on Azure Architecture Center, this guide covers the implementation of a compliance risk analysis solution that uses Azure Cognitive Search. | -| [Beiersdorf customer story using Azure Cognitive Search](https://customers.microsoft.com/story/1552642769228088273-Beiersdorf-consumer-goods-azure-cognitive-search) | Content | This customer story showcases semantic search and document summarization to provide researchers with ready access to institutional knowledge. | --## September 2022 --|Item | Type | Description | -|||-| -| [Azure Cognitive Search Lab](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | Sample | This C# sample provides the source code for building a web front-end that accesses all of the REST API calls against an index. This tool is used by support engineers to investigate customer support issues. 
You can try this [demo site](https://azuresearchlab.azurewebsites.net/) before building your own copy. | -| [Event-driven indexing for Cognitive Search](https://github.com/aditmer/Event-Driven-Indexing-For-Cognitive-Search/blob/main/README.md) | Sample | This C# sample is an Azure Function app that demonstrates event-driven indexing in Azure Cognitive Search. If you've used indexers and skillsets before, you know that indexers can run on demand or on a schedule, but not in response to events. This demo shows you how to set up an indexing pipeline that responds to data update events. | --## August 2022 --|Item | Type | Description | -|||-| -| [Tutorial: Index large data from Apache Spark](search-synapseml-cognitive-services.md) | Content | This tutorial explains how to use the SynapseML open-source library to push data from Apache Spark into a search index. It also shows you how to make calls to Cognitive Services to get AI enrichment without skillsets and indexers. | --## June 2022 --|Item | Type | Description | -|||-| -| [Semantic search (preview)](semantic-search-overview.md) | Feature | New support for Storage Optimized tiers (L1, L2). | -| [Debug Sessions](cognitive-search-debug-session.md) | Feature | **General availability**. Debug sessions, a built-in editor that runs in Azure portal, is now generally available. | --## May 2022 --|Item | Type | Description | -|||-| -| [Power Query connector preview](/previous-versions/azure/search/search-how-to-index-power-query-data-sources) | Feature | **Retired**. This indexer data source was introduced in May 2021 but won't be moving forward. Migrate your data indexing code by November 2022. See the feature documentation for migration guidance. | --## February 2022 --|Item | Type | Description | -|||-| -| [Index aliases](search-how-to-alias.md) | Feature | An index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. When index names change, for example if you version the index, instead of updating the references to an index name in your application, you can just update the mapping for your alias. | --## 2021 announcements --| Month | Feature | Description | -|-||-| -| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#2create-a-semantic-configuration) | This configuration is a new addition to the 2021-04-30-Preview API, and is now required for semantic queries and Azure portal.| -| November | [Azure Files indexer (preview)](./search-file-storage-integration.md) | Public preview in the portal and preview REST APIs.| -| July | [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Public preview announcement. | -| July | [Role-based access control for data plane (preview)](search-security-rbac.md) | Public preview announcement. | -| July | [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/) | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). Public preview announcement. | -| May | [Power Query connector support (preview)](/previous-versions/azure/search/search-how-to-index-power-query-data-sources) | Public preview announcement. | -| May | [Azure Data Lake Storage Gen2 indexer](search-howto-index-azure-data-lake-storage.md) | Generally available, using REST api-version=2020-06-30 and Azure portal. 
| -| May | [Azure MySQL indexer (preview)](search-howto-index-mysql.md) | Public preview, REST api-version=2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. | -| May | [More queryLanguages for spell check and semantic results](/rest/api/searchservice/preview-api/search-documents#queryLanguage) | See [Announcement (techcommunity blog)](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multilingual-support-for-semantic-search-on-azure/ba-p/2385110). Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview, [Azure.Search.Documents 11.3.0-beta.2](https://www.nuget.org/packages/Azure.Search.Documents/11.3.0-beta.2), or [Search explorer](search-explorer.md) in Azure portal. | -| May| [Full support for customer-managed key (CMK) encryption](search-security-manage-encryption-keys.md#full-double-encryption) | Generally available in all regions, subject to service creation dates. | -| April | [Azure Cosmos DB for Apache Gremlin support (preview)](search-howto-index-cosmosdb-gremlin.md) | Public preview ([by request](https://aka.ms/azure-cognitive-search/indexer-preview)), using api-version=2020-06-30-Preview. | -| March | [Semantic search (preview)](semantic-search-overview.md) | Search results relevance scoring based on semantic models. Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview or [Search explorer](search-explorer.md) in Azure portal. Region and tier restrictions apply. | -| March | [Spell check query terms (preview)](speller-how-to-add.md) | The `speller` option works with any query type (simple, full, or semantic). Public preview, REST only, api-version=2020-06-30-Preview| -| March | [SharePoint indexer (preview)](search-howto-index-sharepoint-online.md) | Public preview, REST only, api-version=2020-06-30-Preview | -| March | [Normalizers (preview)](search-normalizers.md) | Public preview, REST only, api-version=2020-06-30-Preview | -| March | [Custom Entity Lookup skill](cognitive-search-skill-custom-entity-lookup.md ) | Scans for strings specified in a custom, user-defined list of words and phrases. Generally available. | -| February | [Reset Documents (preview)](search-howto-run-reset-indexers.md) | Available in the [Search REST API 2020-06-30-Preview](/rest/api/searchservice/index-preview). | -| February | [Availability Zones](search-reliability.md#availability-zones) | Search services with two or more replicas in certain regions gain resiliency by having replicas in two or more distinct physical locations. The region and date of search service creation determine availability. | -| February | [Azure CLI](/cli/azure/search) </br>[Azure PowerShell](/powershell/module/az.search/) | New revisions now provide the full range of operations in the Management REST API 2020-08-01, including support for IP firewall rules and private endpoint. Generally available. | -| January | [Solution accelerator for Azure Cognitive Search and QnA Maker](https://github.com/Azure-Samples/search-qna-maker-accelerator) | Pulls questions and answers out of the document and suggest the most relevant answers. A live demo app can be found at [https://aka.ms/qnaWithAzureSearchDemo](https://aka.ms/qnaWithAzureSearchDemo). 
This feature is an open-source project (no SLA). |
-
-## 2020 announcements
-
-See [2020 Archive for "What's New in Cognitive Search"](/previous-versions/azure/search/search-whats-new-2020) in the content archive.
-
-## 2019 announcements
-
-See [2019 Archive for "What's New in Cognitive Search"](/previous-versions/azure/search/search-whats-new-2019) in the content archive.
+## 2022 announcements
+
+| Month | Item |
+|-||
+| November | **Add search to websites** updated versions of React and Azure SDK client libraries: <ul><li>[C#](tutorial-csharp-overview.md)</li><li>[Python](tutorial-python-overview.md)</li><li>[JavaScript](tutorial-javascript-overview.md) </li></ul> "Add search to websites" is a tutorial series with sample code available in three languages. This series was updated to run with current versions of React and the Azure SDK client libraries. If you're integrating client code with a search index, these samples demonstrate an end-to-end approach to integration. |
+| November | **Retired** - [Visual Studio Code extension for Azure Cognitive Search](https://github.com/microsoft/vscode-azurecognitivesearch/blob/master/README.md). |
+| November | [Query performance dashboard](https://github.com/Azure-Samples/azure-samples-search-evaluation). This Application Insights sample demonstrates an approach for deep monitoring of query usage and performance of an Azure Cognitive Search index. It includes a JSON template that creates a workbook and dashboard in Application Insights and a Jupyter Notebook that populates the dashboard with simulated data. |
+| October | [Compliance risk analysis using Azure Cognitive Search](/azure/architecture/guide/ai/compliance-risk-analysis). Published on Azure Architecture Center, this guide covers the implementation of a compliance risk analysis solution that uses Azure Cognitive Search. |
+| October | [Beiersdorf customer story using Azure Cognitive Search](https://customers.microsoft.com/story/1552642769228088273-Beiersdorf-consumer-goods-azure-cognitive-search). This customer story showcases semantic search and document summarization to provide researchers with ready access to institutional knowledge. |
+| September | [Azure Cognitive Search Lab](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md). This C# sample provides the source code for building a web front-end that accesses all of the REST API calls against an index. This tool is used by support engineers to investigate customer support issues. You can try this [demo site](https://azuresearchlab.azurewebsites.net/) before building your own copy. |
+| September | [Event-driven indexing for Cognitive Search](https://github.com/aditmer/Event-Driven-Indexing-For-Cognitive-Search/blob/main/README.md). This C# sample is an Azure Function app that demonstrates event-driven indexing in Azure Cognitive Search. If you've used indexers and skillsets before, you know that indexers can run on demand or on a schedule, but not in response to events. This demo shows you how to set up an indexing pipeline that responds to data update events. |
+| August | [Tutorial: Index large data from Apache Spark](search-synapseml-cognitive-services.md). This tutorial explains how to use the SynapseML open-source library to push data from Apache Spark into a search index. It also shows you how to make calls to Cognitive Services to get AI enrichment without skillsets and indexers. |
+| June | [Semantic search (preview)](semantic-search-overview.md). New support for Storage Optimized tiers (L1, L2).
|
+| June | **General availability** - [Debug Sessions](cognitive-search-debug-session.md).|
+| May | **Retired** - [Power Query connector preview](/previous-versions/azure/search/search-how-to-index-power-query-data-sources). |
+| February | [Index aliases](search-how-to-alias.md). An index alias is a secondary name that can be used to refer to an index for querying, indexing, and other operations. When index names change, for example if you version the index, instead of updating the references to an index name in your application, you can just update the mapping for your alias. |
+
+## Previous years' announcements
+
++ [2021 announcements](/previous-versions/azure/search/search-whats-new-2021)
++ [2020 announcements](/previous-versions/azure/search/search-whats-new-2020)
++ [2019 announcements](/previous-versions/azure/search/search-whats-new-2019)

<a name="new-service-name"></a>

-## Service re-brand announcement
+## Service re-brand

-Azure Search was renamed to **Azure Cognitive Search** in October 2019 to reflect the expanded (yet optional) use of cognitive skills and AI processing in service operations. API versions, NuGet packages, namespaces, and endpoints are unchanged. New and existing search solutions are unaffected by the service name change.
+Azure Search was renamed to **Azure Cognitive Search** in October 2019 to reflect the expanded (yet optional) use of cognitive skills and AI processing in service operations.

## Service updates |
sentinel | Connect Google Cloud Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-google-cloud-platform.md | You can set up the GCP environment in one of two ways: 1. When asked if a workload Identity Pool has already been created for Azure, type *yes* or *no*. 1. When asked if you want to create the resources listed, type *yes*. 1. Save the resources parameters for later use. -1. In a new folder, copy the Terraform `GCPAuditLogsSetup` script into a new file, and save it as a .tf file: +1. In a new folder, copy the Terraform [GCPAuditLogsSetup script](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPAuditLogsSetup) into a new file, and save it as a .tf file: ``` cd {foldername} This section shows you how to set up the GCP environment manually. Alternatively In this article, you learned how to ingest GCP data into Microsoft Sentinel using the GCP Pub/Sub Audit Logs connector. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).-- [Use workbooks](monitor-your-data.md) to monitor your data.+- [Use workbooks](monitor-your-data.md) to monitor your data. |
service-bus-messaging | Service Bus Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-transactions.md | This article discusses the transaction capabilities of Microsoft Azure Service B
> [!NOTE]
> - The basic tier of Service Bus doesn't support transactions. The standard and premium tiers support transactions. For differences between these tiers, see [Service Bus pricing](https://azure.microsoft.com/pricing/details/service-bus/).
> - Mixing management and messaging operations in a transaction isn't supported.
+> - The JavaScript SDK doesn't support transactions.

## Transactions in Service Bus |
service-health | Stay Informed Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/stay-informed-security.md | You receive security-related notifications affecting your Azure **subscription**

Service health notifications are published by Azure and contain information about the resources under your subscription. You can review these security advisories in the Service Health experience in the Azure portal and get notified about security advisories via your preferred channel by setting up Service Health alerts for this type of notification. You can create [Activity Log alerts](../service-health/alerts-activity-log-service-notifications-portal.md) on Service notifications by using the Azure portal.

>[!Note]
->Depending on your requirements, you can configure various alerts to use the same [action group](../azure-monitor/alerts/action-groups.md) or different action groups. Action group types include sending a voice call, SMS, or email. You can also trigger various types of automated actions. For detailed information about notification and action types, see [Action-specific information](../azure-monitor/alerts/action-groups.md#action-specific-information).
+>Depending on your requirements, you can configure various alerts to use the same [action group](../azure-monitor/alerts/action-groups.md) or different action groups. Action group types include sending a voice call, SMS, or email. You can also trigger various types of automated actions.

**Email Notification**

Ensure that there is a **contactable email address** entered for your [Global Ad

Create **Azure Service Health** alerts for security events so that your organization can be alerted to any security event that Microsoft identifies. This is the same channel you would configure to be alerted about outages or maintenance information on the platform: [Create Activity Log Alerts on Service Notifications using the Azure portal](../service-health/alerts-activity-log-service-notifications-portal.md).

-Depending on your requirements, you can configure various alerts to use the same [action group](../azure-monitor/alerts/action-groups.md) or different action groups. Action group types include sending a voice call, SMS, or email. You can also trigger various types of automated actions. For detailed information about notification and action types, see [Action-specific information](../azure-monitor/alerts/action-groups.md#action-specific-information).
+Depending on your requirements, you can configure various alerts to use the same [action group](../azure-monitor/alerts/action-groups.md) or different action groups. Action group types include sending a voice call, SMS, or email. You can also trigger various types of automated actions.

There's an important difference between Service Health security advisories and [Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) security notifications. Security advisories in Service Health provide notifications dealing with platform vulnerabilities and security and privacy breaches at the subscription and tenant level, while security notifications in Microsoft Defender for Cloud communicate vulnerabilities that pertain to affected individual Azure resources. |
site-recovery | Deploy Vmware Azure Replication Appliance Modernized | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md | Number of disks | 2, including the OS disk - 80 GB and a data disk - 620 GB **Component** | **Requirement** | -Operating system | Windows Server 2016 +Operating system | Windows Server 2019, Windows Server 2016 Operating system locale | English (en-*) Windows Server roles | Don't enable these roles: <br> - Active Directory Domain Services <br>- Internet Information Services <br> - Hyper-V Group policies | Don't enable these group policies: <br> - Prevent access to the command prompt. <br> - Prevent access to registry editing tools. <br> - Trust logic for file attachments. <br> - Turn on Script Execution. <br> [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10)) |
site-recovery | Site Recovery Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md | You can follow and subscribe to Site Recovery update notifications in the [Azure For Site Recovery components, we support N-4 versions, where N is the latest released version. These are summarized in the following table. -**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** +**Update** | **Unified Setup** | **Replication appliance / Configuration server** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | | -[Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6615.1 | 5.1.8095.0 | 9.53.6615.1 | 5.1.8103.0 (Modernized VMware), 5.1.8095.0 (Hyper-V) & 5.23.0210.5 (Classic VMware) | 2.0.9260.0 -[Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 (VMware) & 5.1.7882.0 (Hyper-V) | 2.0.9259.0 -[Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0 -[Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0 -[Rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 9.49.6395.1 | 5.1.7418.0 | 9.49.6395.1 | 5.1.7418.0 | 2.0.9248.0 -+[Rollup 67](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | 9.54.6682.1 | 9.54.6682.1 / 5.1.8095.0 | 9.54.6682.1 | 5.23.0428.1 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9261.0 +[Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6615.1 | 9.53.6615.1 / 5.1.8095.0 | 9.53.6615.1 | 5.1.8103.0 (Modernized VMware), 5.1.8095.0 (Hyper-V) & 5.23.0210.5 (Classic VMware) | 2.0.9260.0 +[Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 9.52.6522.1 / 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 (VMware) & 5.1.7882.0 (Hyper-V) | 2.0.9259.0 +[Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 9.51.6477.1 / 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0 +[Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 9.50.6419.1 / 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0 [Learn more](service-updates-how-to.md) about update installation and support. +## Updates (May 2023) ++### Update Rollup 67 ++[Update rollup 67](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) provides the following updates: ++**Update** | **Details** + | +**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. 
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for Oracle Linux 8.7 with UEK7 kernel, RHEL 9, and CentOS 9 Linux distros.
+**VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.7 with UEK7 kernel, RHEL 9, CentOS 9, and Oracle Linux 9 Linux distros. <br> <br/> Added support for Windows Server 2019 as the ASR replication appliance. <br> <br/> Added support for Microsoft Edge to be the default browser in Appliance Configuration Manager. <br> <br/> Added support for selecting an availability set or a proximity placement group after enabling replication using the modernized VMware/physical machine replication scenario.
+
## Updates (February 2023) |
site-recovery | Vmware Physical Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md | Disaster recovery of physical servers | Replication of on-premises Windows/Linux
vCenter Server | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7, 6.5, 6.0, or 5.5 | We recommend that you use a vCenter server in your disaster recovery deployment.
vSphere hosts | Version 8.0 & subsequent updates in this version, Version 7.0, 6.7, 6.5, 6.0, or 5.5 | We recommend that vSphere hosts and vCenter servers are located in the same network as the process server. By default the process server runs on the configuration server. [Learn more](vmware-physical-azure-config-process-server-overview.md).

-## Site Recovery configuration server
+## Azure Site Recovery replication appliance

-The configuration server is an on-premises machine that runs Site Recovery components, including the configuration server, process server, and master target server.
+The replication appliance is an on-premises machine that runs Site Recovery components, including various Site Recovery services that help with discovery of the on-premises environment and orchestration of disaster recovery, and that act as a bridge between on-premises and Azure.

-- For VMware VMs, you set the configuration server by downloading an OVF template to create a VMware VM.
-- For physical servers, you set up the configuration server machine manually.
+- For VMware VMs, you can create the replication appliance by downloading an OVF template to create a VMware VM.
+- For physical servers, you can set up the replication appliance manually by running our PowerShell script.

**Component** | **Requirements**
 |
CPU cores | 8
RAM | 16 GB
-Number of disks | 3 disks<br/><br/> Disks include the OS disk, process server cache disk, and retention drive for failback.
-Disk free space | 600 GB of space for the process server cache.
-Disk free space | 600 GB of space for the retention drive.
-Operating system | Windows Server 2012 R2, or Windows Server 2016 with Desktop experience <br/><br> If you plan to use the in-built Master Target of this appliance for failback, ensure that the OS version is same or higher than the replicated items.|
+Number of disks | 2 disks<br/><br/> Disks include the OS disk and data disk.
+Operating system | Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019 with Desktop experience
Operating system locale | English (en-us)
-[PowerCLI](https://my.vmware.com/web/vmware/details?productId=491&downloadGroup=PCLI600R1) | Not needed for configuration server version [9.14](https://support.microsoft.com/help/4091311/update-rollup-23-for-azure-site-recovery) or later.
Windows Server roles | Don't enable Active Directory Domain Services; Internet Information Services (IIS) or Hyper-V.
-Group policies| - Prevent access to the command prompt. <br/> - Prevent access to registry editing tools. <br/> - Trust logic for file attachments. <br/> - Turn on Script Execution. <br/> - [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))|
+Group policies| Don't enable these group policies: <br/> - Prevent access to the command prompt. <br/> - Prevent access to registry editing tools. <br/> - Trust logic for file attachments. <br/> - Turn on Script Execution.
<br/> - [Learn more](/previous-versions/windows/it-pro/windows-7/gg176671(v=ws.10))| IIS | Make sure you:<br/><br/> - Don't have a pre-existing default website <br/> - Enable [anonymous authentication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731244(v=ws.10)) <br/> - Enable [FastCGI](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753077(v=ws.10)) setting <br/> - Don't have preexisting website/app listening on port 443<br/> NIC type | VMXNET3 (when deployed as a VMware VM)-IP address type | Static +Fully qualified domain name (FQDN) | Static Ports | 443 used for control channel orchestration<br/>9443 for data transport-IP address | Make sure that configuration server and process server have a static IPv4 address, and doesn't have NAT configured. +NAT | Supported > [!NOTE] > Operating system has to be installed with English locale. Conversion of locale post installation could result in potential issues. ## Replicated machines -In Modernized, replication is done by the Azure Site Recovery replication appliance. For detailed information about replication appliance, see [this article](deploy-vmware-azure-replication-appliance-modernized.md). - Site Recovery supports replication of any workload running on a supported machine. **Component** | **Details** Soft delete | Not supported. **Feature** | **Supported** | -Availability sets | Yes. Not supported for modernized experience. +Availability sets | Yes +Proximity Placement Groups | Yes Availability zones | No HUB | Yes Managed disks | Yes |
storage | Storage Blob Upload Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md | The following example uploads a block blob with index tags: You can have greater control over how to divide uploads into blocks by manually staging individual blocks of data. When all of the blocks that make up a blob are staged, you can commit them to Blob Storage. +Use the following method to create a new block to be committed as part of a blob: ++- [stage_block](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-stage-block) ++Use the following method to write a blob by specifying the list of block IDs that make up the blob: ++- [commit_block_list](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-commit-block-list) + The following example reads data from a file and stages blocks to be committed as part of a blob: :::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-upload.py" id="Snippet_upload_blob_blocks"::: |
storage | Container Storage Aks Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md | + + Title: Quickstart for installing and configuring Azure Container Storage Preview with Azure Kubernetes Service (AKS) +description: Learn how to install and configure Azure Container Storage Preview for use with Azure Kubernetes Service. You'll end up with new storage classes that you can use for your Kubernetes workloads. +++ Last updated : 05/15/2023+++++# Quickstart: Install Azure Container Storage Preview for use with Azure Kubernetes Service +[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This Quickstart shows you how to configure and use Azure Container Storage for use with [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). At the end, you'll have new storage classes that you can use for your Kubernetes workloads, and you can then create a storage pool using one of three block storage options. +++## Getting started ++- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++- Sign up for the public preview by completing the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp). ++- Make sure the identity you're using to create your AKS cluster has the appropriate minimum permissions. For more details, see [Access and identity options for Azure Kubernetes Service](../../aks/concepts-identity.md). ++- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands in this quickstart locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. ++- If you're using Azure Cloud Shell, you might be prompted to mount storage. Select the Azure subscription where you want to create the storage account and select **Create**. ++- You'll need the Kubernetes command-line client, `kubectl`. It's already installed if you're using Azure Cloud Shell, or you can install it locally by running the `az aks install-cli` command. ++## Create a resource group ++An Azure resource group is a logical group that holds your Azure resources that you want to manage as a group. When you create a resource group, you're prompted to specify a location. This location is: ++* The storage location of your resource group metadata. +* Where your resources will run in Azure if you don't specify another region during resource creation. ++> [!IMPORTANT] +> Azure Container Storage Preview is only available in *eastus*, *westus2*, *westus3*, and *westeurope* regions. ++1. Set your subscription context using the `az account set` command. You can view the subscription IDs for all the subscriptions you have access to by running the `az account list --output table` command. Remember to replace `<subscription-id>` with your subscription ID. ++ ```azurecli-interactive + az account set --subscription <subscription-id> + ``` ++2. Create a resource group using the `az group create` command. Replace `<resource-group-name>` with the name of the resource group you want to create, and replace `<location>` with *eastus*, *westus2*, *westus3*, or *westeurope*. 
+
+ ```azurecli-interactive
+ az group create --name <resource-group-name> --location <location>
+ ```
+
+ If the resource group was created successfully, you'll see output similar to this:
+ 
+ ```json
+ {
+ "id": "/subscriptions/<guid>/resourceGroups/myContainerStorageRG",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "myContainerStorageRG",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null
+ }
+ ```
+
+## Choose a data storage option and virtual machine type
+
+Before you create your cluster, you should understand which back-end storage option you'll ultimately choose to create your storage pool. This is because different storage services work best with different virtual machine (VM) types as cluster nodes, and you'll deploy your cluster before you create the storage pool.
+
+### Data storage options
+
+- **[Azure Elastic SAN Preview](../elastic-san/elastic-san-introduction.md)**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently; however, persistent volumes can only be attached by one consumer at a time.
+
+- **[Azure Disks](../../virtual-machines/managed-disks-overview.md)**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
+
+- **Ephemeral Disk**: This option uses local NVMe drives on the AKS nodes and offers extremely low latency (sub-ms), so it's best for applications with no data durability requirement or with built-in data replication support such as Cassandra. AKS discovers the available ephemeral storage on AKS nodes and acquires the drives for volume deployment.
+
+### VM types
+
+To use Azure Container Storage, you'll need a node pool of at least three Linux VMs. Each VM should have a minimum of four virtual CPUs (vCPUs). Azure Container Storage will consume one core for I/O processing on every VM the extension is deployed to.
+
+If you intend to use Azure Elastic SAN Preview or Azure Disks with Azure Container Storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes. The VMs must have standard hard disk drives (HDD), not SSD.
+
+If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives.
+
+## Create AKS cluster
+
+Run the following command to create a Linux-based AKS cluster and enable a system-assigned managed identity. Replace `<resource-group>` with the name of the resource group you created, `<cluster-name>` with the name of the cluster you want to create, and `<vm-type>` with the VM type you selected in the previous step. For this Quickstart, we'll create a cluster with three nodes. Increase the `--node-count` if you want a larger cluster.
+
+```azurecli-interactive
+az aks create -g <resource-group> -n <cluster-name> --node-count 3 -s <vm-type> --generate-ssh-keys
+```
+
+The deployment will take a few minutes to complete.
+
+> [!NOTE]
+> When you create an AKS cluster, AKS automatically creates a second resource group to store the AKS resources.
This second resource group follows the naming convention `MC_YourResourceGroup_YourAKSClusterName_Region`. For more information, see [Why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks). ++## Connect to the cluster ++To connect to the cluster, use the Kubernetes command-line client, `kubectl`. ++1. Configure `kubectl` to connect to your cluster using the `az aks get-credentials` command. The following command: ++ * Downloads credentials and configures the Kubernetes CLI to use them. + * Uses `~/.kube/config`, the default location for the Kubernetes configuration file. You can specify a different location for your Kubernetes configuration file using the *--file* argument. ++ ```azurecli-interactive + az aks get-credentials --resource-group <resource-group> --name <cluster-name> + ``` ++2. Verify the connection to your cluster using the `kubectl get` command. This command returns a list of the cluster nodes. ++ ```azurecli-interactive + kubectl get nodes + ``` ++3. The following output example shows the nodes in your cluster. Make sure the status for all nodes shows *Ready*: ++ ```output + NAME STATUS ROLES AGE VERSION + aks-nodepool1-34832848-vmss000000 Ready agent 80m v1.25.6 + aks-nodepool1-34832848-vmss000001 Ready agent 80m v1.25.6 + aks-nodepool1-34832848-vmss000002 Ready agent 80m v1.25.6 + ``` + + Take note of the name of your node pool. In this example, it would be **nodepool1**. ++## Label the node pool ++Next, you must update your node pool label to associate the node pool with the correct IO engine for Azure Container Storage. ++Run the following command to update the label. Remember to replace `<resource-group>` and `<cluster-name>` with your own values, and replace `<nodepool-name>` with the name of your node pool from the previous step. ++```azurecli-interactive +az aks nodepool update --resource-group <resource-group> --cluster-name <cluster-name> --name <nodepool-name> --labels acstor.azure.com/io-engine=acstor +``` ++> [!TIP] +> You can verify that the node pool is correctly labeled by signing into the [Azure portal](https://portal.azure.com?azure-portal=true) and navigating to your AKS cluster. Go to **Settings > Node pools**, select your node pool, and under **Taints and labels** you should see `Labels: acstor.azure.com/io-engine:acstor`. ++## Assign Contributor role to AKS managed identity ++Azure Container Storage is a separate service from AKS, so you'll need to grant permissions that allow it to provision storage for your cluster. Specifically, you must assign the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) Azure RBAC built-in role to the AKS managed identity. You'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role for your Azure subscription in order to do this. If you don't have sufficient permissions, ask your admin to perform these steps. ++1. Sign into the [Azure portal](https://portal.azure.com?azure-portal=true), and search for and select **Kubernetes services**. +1. Locate and select your AKS cluster. Select **Settings** > **Properties** from the left navigation. +1. Under **Infrastructure resource group**, you should see a link to the resource group that AKS created when you created the cluster. Select it. +1. Select **Access control (IAM)** from the left pane. +1. Select **Add > Add role assignment**. +1. 
Under **Assignment type**, select **Privileged administrator roles** and then **Contributor**. If you don't have an Owner role on the subscription, you won't be able to add the Contributor role. +1. Under **Assign access to**, select **Managed identity**. +1. Under **Members**, click **+ Select members**. The **Select managed identities** menu will appear. +1. Under **Managed identity**, select **User-assigned managed identity**. +1. Under **Select**, search for and select the managed identity with your cluster name and `-agentpool` appended. +1. Select **Review + assign**. ++## Install Azure Container Storage ++The initial install uses Azure Arc CLI commands to download a new extension. Replace `<cluster-name>` and `<resource-group>` with your own values. The `<name>` value can be whatever you want; it's just a label for the extension you're installing. ++During installation, you might be asked to install the `k8s-extension`. Select **Y**. ++```azurecli-interactive +az k8s-extension create --cluster-type managedClusters --cluster-name <cluster-name> --resource-group <resource-group> --name <name> --extension-type microsoft.azurecontainerstorage --scope cluster --release-train prod --release-namespace acstor +``` ++Installation takes 10-15 minutes to complete. You can check if the installation completed correctly by running the following command and ensuring that `provisioningState` says **Succeeded**: ++```azurecli-interactive +az k8s-extension list --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type managedClusters +``` ++Congratulations, you've successfully installed Azure Container Storage. You now have new storage classes that you can use for your Kubernetes workloads. ++## Next steps ++Now you can create a storage pool and persistent volume claim, and then deploy a pod and attach a persistent volume. Follow the steps in the appropriate how-to article. ++- [Use Azure Container Storage Preview with Azure Elastic SAN Preview](use-container-storage-with-elastic-san.md) +- [Use Azure Container Storage Preview with Azure Disks](use-container-storage-with-managed-disks.md) +- [Use Azure Container Storage with Azure Ephemeral disk (NVMe)](use-container-storage-with-local-disk.md) |
storage | Container Storage Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-faq.md | + + Title: Frequently asked questions (FAQ) for Azure Container Storage +description: Get answers to Azure Container Storage frequently asked questions. ++ Last updated : 05/12/2023++++++# Frequently asked questions (FAQ) about Azure Container Storage +[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. ++## General questions ++* <a id="azure-container-storage-vs-csi-drivers"></a> + **What's the difference between Azure Container Storage and Azure CSI drivers?** + Azure Container Storage is built natively for containers and provides a storage solution that's optimized for creating and managing volumes for running production-scale stateful container applications. Other Azure CSI drivers provide a standard storage solution that can be used with different container orchestrators; each driver supports a specific type of storage per its CSI driver definition. ++* <a id="azure-container-storage-regions"></a> + **In which Azure regions is Azure Container Storage available?** + Azure Container Storage Preview is only available in East US, West Europe, West US 2, and West US 3. See [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products). ++* <a id="azure-container-storage-preview-limitations"></a> + **Which other Azure services does Azure Container Storage support?** + During public preview, Azure Container Storage supports only Azure Kubernetes Service (AKS) with storage pools provided by Azure Disks, Ephemeral Disk, or Azure Elastic SAN Preview. ++* <a id="azure-container-storage-delete-aks-resource-group"></a> + **I've created an Elastic SAN storage pool, and now I can't delete the resource group where my AKS cluster is located. Why?** + Sign into the [Azure portal](https://portal.azure.com?azure-portal=true) and select **Resource groups**. Locate the resource group that AKS created (the resource group name starts with **MC_**). Select the SAN resource object within that resource group. Manually remove all volumes and volume groups. Then retry deleting the resource group that includes your AKS cluster. ++* <a id="azure-container-storage-autoupgrade"></a> + **Is there any performance impact when upgrading to a new version of Azure Container Storage?** + If you leave autoupgrade turned on (recommended), you might experience temporary I/O latency during the upgrade process. If you turn off autoupgrade and install the new version manually, there won't be any impact; however, you won't get the benefit of automatic upgrades and instant access to new features. ++* <a id="azure-container-storage-uninstall"></a> + **How do I uninstall Azure Container Storage?** + To uninstall Azure Container Storage, you can delete the `k8s-extension` by running the following Azure CLI command. Be sure to replace `<cluster-name>`, `<resource-group>`, and `<extension-name>` with your own values (`<extension-name>` should be the value you specified for the `--name` parameter when you installed Azure Container Storage). 
+ + ```azurecli-interactive + az k8s-extension delete --cluster-type managedClusters --cluster-name <cluster-name> --resource-group <resource-group> --name <extension-name> + ``` + + You can also use the [`az group delete`](/cli/azure/group) command to delete the resource group and all resources contained in the resource group: + + ```azurecli-interactive + az group delete --name <resource-group> + ``` ++## Billing and pricing ++* <a id="azure-container-storage-billing"></a> + **How much does Azure Container Storage cost to use?** + See the [Azure Container Storage pricing page](https://aka.ms/AzureContainerStoragePricingPage). ++## See also +- [What is Azure Container Storage?](container-storage-introduction.md) |
storage | Container Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md | + + Title: Introduction to Azure Container Storage Preview +description: An overview of Azure Container Storage Preview, a service built natively for containers that enables customers to create and manage volumes for running production-scale stateful container applications. +++ Last updated : 05/12/2023++++++# What is Azure Container Storage? Preview ++> [!IMPORTANT] +> Azure Container Storage is currently in public preview and isn't available in all Azure regions. See [regional availability](#regional-availability). +> This preview version is provided without a service level agreement, and isn't recommended for production workloads. Certain features might not be supported or might have constrained capabilities. +> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++Azure Container Storage is a cloud-based volume management, deployment, and orchestration service built natively for containers. It integrates with Kubernetes, allowing customers to dynamically and automatically provision persistent volumes to store data for stateful applications running on Kubernetes clusters. ++To sign up for Azure Container Storage Preview, complete the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp). To get started using Azure Container Storage, see [Install Azure Container Storage for use with AKS](container-storage-aks-quickstart.md). ++## Supported storage types ++Azure Container Storage uses existing Azure Storage offerings for actual data storage and offers a volume orchestration and management solution that's purpose-built for containers. You can choose any of the supported backing storage options to create a storage pool for your persistent volumes. ++Azure Container Storage offers persistent volume support with ReadWriteOnce access mode to Linux-based [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. Supported backing storage options include block storage offerings only: Azure Disks, Ephemeral Disks, and Azure Elastic SAN Preview. The following table summarizes the supported storage types, recommended workloads, and provisioning models. ++| **Storage type** | **Description** | **Workloads** | **Offerings** | **Provisioning model** | +||--|||| +| **[Azure Elastic SAN Preview](../elastic-san/elastic-san-introduction.md)** | Provision on demand, fully managed resource | General purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. | Azure Elastic SAN Preview | Provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently; however, persistent volumes can only be attached by one consumer at a time. | +| **[Azure Disks](../../virtual-machines/managed-disks-overview.md)** | Granular control of storage SKUs and configurations | Azure Disks are a good fit for tier 1 and general purpose databases such as MySQL, MongoDB, and PostgreSQL. | Premium SSD v2, Premium SSD | Provisioned per target container storage pool size and maximum volume size. 
| +| **Ephemeral Disk** | Utilizes local storage resources on AKS nodes | Ephemeral Disk offers extremely low latency (sub-millisecond), so it's best for applications that have no data durability requirement or that provide built-in data replication, such as Cassandra. | NVMe only (available on [storage optimized VM SKUs](../../virtual-machines/sizes-storage.md)) | Deployed as part of the VMs hosting an AKS cluster. AKS discovers the available ephemeral storage on AKS nodes and acquires the drives for volume deployment. | ++## Regional availability ++Azure Container Storage Preview is only available in the following Azure regions: ++- East US +- West Europe +- West US 2 +- West US 3 ++## Why Azure Container Storage is useful +Until now, providing cloud storage for containers required using individual container storage interface (CSI) drivers to take storage services intended for IaaS-centric workloads and make them work for containers. This approach creates operational overhead and increases the risk of issues with application availability, scalability, performance, usability, and cost. ++Azure Container Storage is derived from [OpenEBS](https://openebs.io/), an open-source solution that provides container storage capabilities for Kubernetes. By offering a managed volume orchestration solution via microservice-based storage controllers in a Kubernetes environment, Azure Container Storage enables true container-native storage. ++You can use Azure Container Storage to: ++* **Accelerate VM-to-container initiatives:** Azure Container Storage surfaces the full spectrum of Azure block storage offerings that were previously only available for VMs and makes them available for containers. This includes Ephemeral Disk, which provides extremely low latency for workloads like Cassandra, as well as Azure Elastic SAN Preview, which provides native iSCSI and shared provisioned targets. ++* **Simplify volume management with Kubernetes:** By providing volume orchestration via the Kubernetes control plane, Azure Container Storage makes it easy to deploy and manage volumes within Kubernetes - without the need to move back and forth between different control planes. ++* **Reduce total cost of ownership (TCO):** Improve cost efficiency by increasing the scale of persistent volumes supported per pod or node. Reduce the storage resources needed for provisioning by dynamically sharing storage resources. Note that scaling up the storage pool itself isn't supported. ++## Key benefits +* **Rapid scale out of stateful pods:** Azure Container Storage mounts persistent volumes over network block storage protocols (NVMe-oF or iSCSI), offering fast attach and detach of persistent volumes. You can start small and deploy resources as needed while making sure your applications aren't starved or disrupted, either during initialization or in production. Application resiliency is improved with pod respawns across the cluster, requiring rapid movement of persistent volumes. Leveraging remote network protocols, Azure Container Storage tightly couples with the pod lifecycle to support highly resilient, high-scale stateful applications on AKS. ++* **Improved performance for stateful workloads:** Azure Container Storage enables superior read performance and provides near-disk write performance by using NVMe-oF over RDMA. This allows customers to cost-effectively meet performance requirements for various container workloads including tier 1 I/O intensive, general purpose, throughput sensitive, and dev/test. 
Accelerate the attach/detach time of persistent volumes and minimize pod failover time. ++* **Kubernetes-native volume orchestration:** Create storage pools and persistent volumes, capture snapshots, and manage the entire lifecycle of volumes using `kubectl` commands without switching between toolsets for different control plane operations. ++## Glossary +It's helpful to understand some key terms relating to Azure Container Storage and Kubernetes: ++- **Containerization** ++ Packaging application code with only the operating system and required dependencies to create a single executable. ++- **Kubernetes** ++ Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. A Kubernetes cluster is a set of nodes that run containerized applications. ++- **Azure Kubernetes Service (AKS)** ++ [Azure Kubernetes Service](../../aks/intro-kubernetes.md) is a hosted Kubernetes service that simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. Azure handles critical tasks, like health monitoring and maintenance. ++- **Storage pool** ++ The Azure Container Storage stack attempts to unify the object model across cluster-owned resources and platform abstractions. To accomplish the unified representation, the available storage capacity is aggregated into a storage pool object. The storage capacity within a storage pool is considered homogeneous. An AKS cluster can have multiple storage pools. Storage pools also serve as the authentication and provisioning boundary. They provide a logical construct for operators to manage the storage infrastructure while simplifying volume creation and management for application developers. ++- **Storage class** ++ A Kubernetes storage class defines how a unit of storage is dynamically created with a persistent volume. For more information, see [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/). ++- **Persistent volume** ++ Persistent volumes are like disks in a VM. They represent a raw block device that you can use to mount any file system. Volumes are thinly provisioned within a storage pool and share the performance characteristics (IOPS, bandwidth, and capacity) of the storage pool. Application developers create persistent volumes alongside their application or pod definitions, and the volumes are often tied to the lifecycle of the stateful application. For more information, see [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). ++- **Persistent volume claim (PVC)** ++ A persistent volume claim is used to automatically provision storage based on a storage class. ++## Next steps +- [Install Azure Container Storage for use with AKS](container-storage-aks-quickstart.md) +- [Azure Container Storage pricing page](https://aka.ms/AzureContainerStoragePricingPage) |
storage | Use Container Storage With Elastic San | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-elastic-san.md | + + Title: Use Azure Container Storage Preview with Azure Elastic SAN Preview +description: Configure Azure Container Storage Preview for use with Azure Elastic SAN Preview. Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod. +++ Last updated : 05/15/2023+++++# Use Azure Container Storage Preview with Azure Elastic SAN Preview +[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Azure Elastic SAN Preview as back-end storage for your Kubernetes workloads. ++## Prerequisites ++- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. +- You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three [general purpose VMs](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). The VMs must have standard hard disk drives (HDD), not SSD. +- Follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md) to assign [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role to the AKS managed identity and install Azure Container Storage Preview. ++## Regional availability ++Azure Container Storage Preview is only available in the following Azure regions: ++- East US +- West Europe +- West US 2 +- West US 3 ++## Create a storage pool ++First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file. Follow these steps to create a storage pool with Azure Elastic SAN Preview. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`. ++1. Paste in the following code. The storage pool **name** value can be whatever you want. ++ ```yml + apiVersion: containerstorage.azure.com/v1alpha1 + kind: StoragePool + metadata: + name: managed + namespace: acstor + spec: + poolType: + elasticSan: {} + resources: + requests: {"storage": 1Ti} + ``` ++1. Apply the YAML manifest file to create the storage pool. + + ```azurecli-interactive + kubectl apply -f acstor-storagepool.yaml + ``` + + When storage pool creation is complete, you'll see a message like: + + ```output + storagepool.containerstorage.azure.com/managed created + ``` + + You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **managed**. + + ```azurecli-interactive + kubectl describe sp <storage-pool-name> -n acstor + ``` ++When the storage pool is created, Azure Container Storage will create a storage class on your behalf using the naming convention `acstor-<storage-pool-name>`. It will also create an Azure Elastic SAN Preview resource. 
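+
+If you want to confirm that the Elastic SAN resource exists before you assign permissions in the next section, one option is to list resources in the AKS-managed resource group. This is a minimal sketch, not part of the official steps: the resource group name shown is an assumption based on the `MC_YourResourceGroup_YourAKSClusterName_Region` naming convention described below, and the resource type string assumes the `Microsoft.ElasticSan` provider namespace.
+
+```azurecli-interactive
+# List Elastic SAN resources in the infrastructure resource group that AKS created.
+# Replace the resource group name with your own MC_* resource group.
+az resource list --resource-group MC_myResourceGroup_myCluster_eastus --resource-type Microsoft.ElasticSan/elasticSans --output table
+```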
++## Assign Contributor role to AKS managed identity on Azure Elastic SAN Preview subscription ++You'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role for your Azure subscription in order to do this. If you don't have sufficient permissions, ask your admin to perform these steps. ++1. Sign into the [Azure portal](https://portal.azure.com?azure-portal=true). +1. Select **Subscriptions**, and locate and select the subscription associated with the Azure Elastic SAN Preview resource that Azure Container Storage created on your behalf. This will likely be the same subscription as the AKS cluster that Azure Container Storage is installed on. You can verify this by locating the Elastic SAN resource in the resource group that AKS created (`MC_YourResourceGroup_YourAKSClusterName_Region`). +1. Select **Access control (IAM)** from the left pane. +1. Select **Add > Add role assignment**. +1. Under **Assignment type**, select **Privileged administrator roles** and then **Contributor**. If you don't have an Owner role on the subscription, you won't be able to add the Contributor role. +1. Under **Assign access to**, select **Managed identity**. +1. Under **Members**, click **+ Select members**. The **Select managed identities** menu will appear. +1. Under **Managed identity**, select **User-assigned managed identity**. +1. Under **Select**, search for and select the managed identity with your cluster name and `-agentpool` appended. +1. Select **Review + assign**. ++## Display the available storage classes ++When the storage pool is ready to use, you must select a storage class to define how storage is dynamically created when creating persistent volume claims and deploying persistent volumes. ++Run `kubectl get sc` to display the available storage classes. You should see a storage class called `acstor-<storage-pool-name>`. ++> [!IMPORTANT] +> Don't use the storage class that's marked **internal**. It's an internal storage class that's needed for Azure Container Storage to work. ++## Create a persistent volume claim ++A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`. ++1. Paste in the following code. The PVC `name` value can be whatever you want. ++ ```yml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: managedpvc + spec: + accessModes: + - ReadWriteOnce + storageClassName: acstor-managed # replace with the name of your storage class if different + resources: + requests: + storage: 100Gi + ``` ++1. Apply the YAML manifest file to create the PVC. + + ```azurecli-interactive + kubectl apply -f acstor-pvc.yaml + ``` + + You should see output similar to: + + ```output + persistentvolumeclaim/managedpvc created + ``` + + You can verify the status of the PVC by running the following command: + + ```azurecli-interactive + kubectl describe pvc managedpvc + ``` ++Once the PVC is created, it's ready for use by a pod. ++## Deploy a pod and attach a persistent volume ++Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`. ++1. 
Paste in the following code. ++ ```yml + kind: Pod + apiVersion: v1 + metadata: + name: fiopod + spec: + nodeSelector: + acstor.azure.com/io-engine: acstor + volumes: + - name: managedpv + persistentVolumeClaim: + claimName: managedpvc + containers: + - name: fio + image: nixery.dev/shell/fio + args: + - sleep + - "1000000" + volumeMounts: + - mountPath: "/volume" + name: managedpv + ``` ++1. Apply the YAML manifest file to deploy the pod. + + ```azurecli-interactive + kubectl apply -f acstor-pod.yaml + ``` + + You should see output similar to the following: + + ```output + pod/fiopod created + ``` ++1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod: ++ ```azurecli-interactive + kubectl describe pod fiopod + kubectl describe pvc managedpvc + ``` ++1. Run a fio benchmark test on the volume and view its output: ++ ```azurecli-interactive + kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60 + ``` ++You've now deployed a pod that's using an Elastic SAN as its storage, and you can use it for your Kubernetes workloads. ++## Detach and reattach a persistent volume ++To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**. ++```azurecli-interactive +kubectl delete pods <pod-name> +``` ++To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in [Deploy a pod and attach a persistent volume](#deploy-a-pod-and-attach-a-persistent-volume). ++To check which persistent volume a persistent volume claim is bound to, run `kubectl get pvc <persistent-volume-claim-name>`. ++## Delete the storage pool ++If you want to delete a storage pool, run the following command. Replace `<storage-pool-name>` with the storage pool name. ++```azurecli-interactive +kubectl delete sp -n acstor <storage-pool-name> +``` ++## See also ++- [What is Azure Container Storage?](container-storage-introduction.md) +- [What is Azure Elastic SAN? Preview](../elastic-san/elastic-san-introduction.md) |
storage | Use Container Storage With Local Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md | + + Title: Use Azure Container Storage Preview with Ephemeral Disk +description: Configure Azure Container Storage Preview for use with Ephemeral Disk (NVMe). Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod. +++ Last updated : 05/12/2023+++++# Use Azure Container Storage Preview with Ephemeral Disk +[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Ephemeral Disk as back-end storage for your Kubernetes workloads. ++> [!IMPORTANT] +> Azure Container Storage Preview only supports NVMe for local disk. Temp drives and local SSD aren't currently supported. Local NVMe disks are ephemeral, meaning that they're created on the local virtual machine (VM) storage and not saved to an Azure storage service. Data will be lost on these disks if you stop/deallocate your VM. ++## Prerequisites ++- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. +- You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three [storage optimized VMs](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. We recommend that each VM have a minimum of four virtual CPUs (vCPUs). +- Follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md) to assign [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role to the AKS managed identity and install Azure Container Storage Preview. ++## Regional availability ++Azure Container Storage Preview is only available in the following Azure regions: ++- East US +- West Europe +- West US 2 +- West US 3 ++## Create a storage pool ++First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file. Follow these steps to create a storage pool using local disk. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`. ++1. Paste in the following code. The storage pool **name** value can be whatever you want. ++ ```yml + apiVersion: containerstorage.azure.com/v1alpha1 + kind: StoragePool + metadata: + name: ephemeraldisk + namespace: acstor + spec: + poolType: + ephemeralDisk: {} + ``` ++1. Apply the YAML manifest file to create the storage pool. + + ```azurecli-interactive + kubectl apply -f acstor-storagepool.yaml + ``` + + When storage pool creation is complete, you'll see a message like: + + ```output + storagepool.containerstorage.azure.com/ephemeraldisk created + ``` + + You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **ephemeraldisk**. 
+ + ```azurecli-interactive + kubectl describe sp <storage-pool-name> -n acstor + ``` ++When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`. ++## Display the available storage classes ++When the storage pool is ready to use, you must select a storage class to define how storage is dynamically created when creating persistent volume claims and deploying persistent volumes. ++Run `kubectl get sc` to display the available storage classes. You should see a storage class called `acstor-<storage-pool-name>`. ++> [!IMPORTANT] +> Don't use the storage class that's marked **internal**. It's an internal storage class that's needed for Azure Container Storage to work. ++## Create a persistent volume claim ++A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`. ++1. Paste in the following code. The PVC `name` value can be whatever you want. ++ ```yml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: ephemeralpvc + spec: + accessModes: + - ReadWriteOnce + storageClassName: acstor-ephemeraldisk # replace with the name of your storage class if different + resources: + requests: + storage: 100Gi + ``` ++1. Apply the YAML manifest file to create the PVC. + + ```azurecli-interactive + kubectl apply -f acstor-pvc.yaml + ``` + + You should see output similar to: + + ```output + persistentvolumeclaim/ephemeralpvc created + ``` + + You can verify the status of the PVC by running the following command: + + ```azurecli-interactive + kubectl describe pvc ephemeralpvc + ``` ++Once the PVC is created, it's ready for use by a pod. ++## Deploy a pod and attach a persistent volume ++Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`. ++1. Paste in the following code. ++ ```yml + kind: Pod + apiVersion: v1 + metadata: + name: fiopod + spec: + nodeSelector: + acstor.azure.com/io-engine: acstor + volumes: + - name: ephemeralpv + persistentVolumeClaim: + claimName: ephemeralpvc + containers: + - name: fio + image: nixery.dev/shell/fio + args: + - sleep + - "1000000" + volumeMounts: + - mountPath: "/volume" + name: ephemeralpv + ``` ++1. Apply the YAML manifest file to deploy the pod. + + ```azurecli-interactive + kubectl apply -f acstor-pod.yaml + ``` + + You should see output similar to the following: + + ```output + pod/fiopod created + ``` ++1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod: ++ ```azurecli-interactive + kubectl describe pod fiopod + kubectl describe pvc ephemeralpvc + ``` ++1. Run a fio benchmark test on the volume and view its output: ++ ```azurecli-interactive + kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60 + ``` ++You've now deployed a pod that's using Ephemeral Disk as its storage, and you can use it for your Kubernetes workloads. 
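+
+If you also want a latency-focused measurement in addition to the mixed random read/write test above, you can rerun fio with different parameters. The following variation is an illustrative sketch, not part of the official steps; the parameter values are examples you can adjust:
+
+```azurecli-interactive
+# Illustrative: 4K random reads at queue depth 1 to observe per-I/O latency on the local NVMe-backed volume
+kubectl exec -it fiopod -- fio --name=latencytest --size=800m --filename=/volume/test --direct=1 --rw=randread --ioengine=libaio --bs=4k --iodepth=1 --numjobs=1 --time_based --runtime=30
+```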
++## Detach and reattach a persistent volume ++To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**. ++```azurecli-interactive +kubectl delete pods <pod-name> +``` ++To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in [Deploy a pod and attach a persistent volume](#deploy-a-pod-and-attach-a-persistent-volume). ++To check which persistent volume a persistent volume claim is bound to, run `kubectl get pvc <persistent-volume-claim-name>`. ++## Delete the storage pool ++If you want to delete a storage pool, run the following command. Replace `<storage-pool-name>` with the storage pool name. ++```azurecli-interactive +kubectl delete sp -n acstor <storage-pool-name> +``` ++## See also ++- [What is Azure Container Storage?](container-storage-introduction.md) |
storage | Use Container Storage With Managed Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md | + + Title: Use Azure Container Storage Preview with Azure managed disks +description: Configure Azure Container Storage Preview for use with Azure managed disks. Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod. +++ Last updated : 05/12/2023+++++# Use Azure Container Storage Preview with Azure managed disks +[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Azure managed disks as back-end storage for your Kubernetes workloads. ++## Prerequisites ++- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. +- You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three [general purpose VMs](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). The VMs must have standard hard disk drives (HDD), not SSD. +- Follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md) to assign [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role to the AKS managed identity and install Azure Container Storage Preview. ++## Regional availability ++Azure Container Storage Preview is only available in the following Azure regions: ++- East US +- West Europe +- West US 2 +- West US 3 ++## Create a storage pool ++First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file. Follow these steps to create a storage pool for Azure Disks. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`. ++1. Paste in the following code. The storage pool **name** value can be whatever you want. ++ ```yml + apiVersion: containerstorage.azure.com/v1alpha1 + kind: StoragePool + metadata: + name: azuredisk + namespace: acstor + spec: + poolType: + azureDisk: {} + resources: + requests: {"storage": 1Ti} + ``` ++1. Apply the YAML manifest file to create the storage pool. + + ```azurecli-interactive + kubectl apply -f acstor-storagepool.yaml + ``` + + When storage pool creation is complete, you'll see a message like: + + ```output + storagepool.containerstorage.azure.com/azuredisk created + ``` + + You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **azuredisk**. + + ```azurecli-interactive + kubectl describe sp <storage-pool-name> -n acstor + ``` ++When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`. 
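+
+If you want to see every storage pool in the cluster rather than describing a single one, you can also list them by namespace. This is a minimal sketch; `sp` is the same short name that the `kubectl describe sp` and `kubectl delete sp` commands in this article use:
+
+```azurecli-interactive
+# List all Azure Container Storage storage pools and their status
+kubectl get sp -n acstor
+```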
++## Display the available storage classes ++When the storage pool is ready to use, you must select a storage class to define how storage is dynamically created when creating persistent volume claims and deploying persistent volumes. ++Run `kubectl get sc` to display the available storage classes. You should see a storage class called `acstor-<storage-pool-name>`. ++> [!IMPORTANT] +> Don't use the storage class that's marked **internal**. It's an internal storage class that's needed for Azure Container Storage to work. ++## Create a persistent volume claim ++A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. Follow these steps to create a PVC using the new storage class. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pvc.yaml`. ++1. Paste in the following code. The PVC `name` value can be whatever you want. ++ ```yml + apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: azurediskpvc + spec: + accessModes: + - ReadWriteOnce + storageClassName: acstor-azuredisk # replace with the name of your storage class if different + resources: + requests: + storage: 100Gi + ``` ++1. Apply the YAML manifest file to create the PVC. + + ```azurecli-interactive + kubectl apply -f acstor-pvc.yaml + ``` + + You should see output similar to: + + ```output + persistentvolumeclaim/azurediskpvc created + ``` + + You can verify the status of the PVC by running the following command: + + ```azurecli-interactive + kubectl describe pvc azurediskpvc + ``` ++Once the PVC is created, it's ready for use by a pod. ++## Deploy a pod and attach a persistent volume ++Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for benchmarking and workload simulation, and specify a mount path for the persistent volume. For **claimName**, use the **name** value that you used when creating the persistent volume claim. ++1. Use your favorite text editor to create a YAML manifest file such as `code acstor-pod.yaml`. ++1. Paste in the following code. ++ ```yml + kind: Pod + apiVersion: v1 + metadata: + name: fiopod + spec: + nodeSelector: + acstor.azure.com/io-engine: acstor + volumes: + - name: azurediskpv + persistentVolumeClaim: + claimName: azurediskpvc + containers: + - name: fio + image: nixery.dev/shell/fio + args: + - sleep + - "1000000" + volumeMounts: + - mountPath: "/volume" + name: azurediskpv + ``` ++1. Apply the YAML manifest file to deploy the pod. + + ```azurecli-interactive + kubectl apply -f acstor-pod.yaml + ``` + + You should see output similar to the following: + + ```output + pod/fiopod created + ``` ++1. Check that the pod is running and that the persistent volume claim has been bound successfully to the pod: ++ ```azurecli-interactive + kubectl describe pod fiopod + kubectl describe pvc azurediskpvc + ``` ++1. Run a fio benchmark test on the volume and view its output: ++ ```azurecli-interactive + kubectl exec -it fiopod -- fio --name=benchtest --size=800m --filename=/volume/test --direct=1 --rw=randrw --ioengine=libaio --bs=4k --iodepth=16 --numjobs=8 --time_based --runtime=60 + ``` ++You've now deployed a pod that's using Azure Disks as its storage, and you can use it for your Kubernetes workloads. ++## Detach and reattach a persistent volume ++To detach a persistent volume, delete the pod that the persistent volume is attached to. Replace `<pod-name>` with the name of the pod, for example **fiopod**. 
++```azurecli-interactive +kubectl delete pods <pod-name> +``` ++To reattach a persistent volume, simply reference the persistent volume claim name in the YAML manifest file as described in [Deploy a pod and attach a persistent volume](#deploy-a-pod-and-attach-a-persistent-volume). ++To check which persistent volume a persistent volume claim is bound to, run `kubectl get pvc <persistent-volume-claim-name>`. ++## Delete the storage pool ++If you want to delete a storage pool, run the following command. Replace `<storage-pool-name>` with the storage pool name. ++```azurecli-interactive +kubectl delete sp -n acstor <storage-pool-name> +``` ++## See also ++- [What is Azure Container Storage?](container-storage-introduction.md) |
virtual-desktop | Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/agent-overview.md | The agent update normally lasts 2-3 minutes on a new VM and shouldn't cause your Now that you have a better understanding of the Azure Virtual Desktop agent, here are some resources that might help you: - If you're experiencing agent or connectivity-related issues, check out the [Azure Virtual Desktop Agent issues troubleshooting guide](troubleshoot-agent.md).-- To schedule agent updates, see the [Scheduled Agent Updates (preview) document](scheduled-agent-updates.md).+- To schedule agent updates, see the [Scheduled Agent Updates document](scheduled-agent-updates.md). - To set up diagnostics for this feature, see the [Scheduled Agent Updates Diagnostics guide](agent-updates-diagnostics.md).-- To find information about the latest and previous agent versions, see the [Agent Updates version notes](whats-new-agent.md).+- To find information about the latest and previous agent versions, see the [Agent Updates version notes](whats-new-agent.md). |
virtual-machines | Vm Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-usage.md | Last updated 07/28/2020 By analyzing your Azure usage data, powerful consumption insights can be gained – insights that can enable better cost management and allocation throughout your organization. This document provides a deep dive into your Azure Compute consumption details. For more information on general Azure usage, see [Understanding your bill](../cost-management-billing/understand/review-individual-bill.md). ## Download your usage details-To begin, [download your usage details](../cost-management-billing/understand/download-azure-daily-usage.md). The table below provides the definition and example values of usage for Virtual Machines deployed via the Azure Resource Manager. This document does not contain detailed information for VMs deployed via our classic model. +To begin, [download your usage details](../cost-management-billing/understand/download-azure-daily-usage.md). The table provides the definition and example values of usage for Virtual Machines deployed via Azure Resource Manager. This document doesn't contain detailed information for VMs deployed via our classic model. | Field | Meaning | Example Values | |||| | Usage Date | The date when the resource was used | `11/23/2017` | | Meter ID | Identifies the top-level service that this usage belongs to| `Virtual Machines`|-| Meter Sub-Category | The billed meter identifier. <br><br> For Compute Hour usage, there is a meter for each VM Size + OS (Windows, Non-Windows) + Region. <br><br> For Premium software usage, there is a meter for each software type. Most premium software images have different meters for each core size. For more information, visit the [Compute Pricing Page](https://azure.microsoft.com/pricing/details/virtual-machines/)| `2005544f-659d-49c9-9094-8e0aea1be3a5`| -| Meter Name| This is specific for each service in Azure. For compute, it is always “Compute Hours”.| `Compute Hours`| +| Meter Sub-Category | The billed meter identifier. <br><br> For Compute Hour usage, there's a meter for each VM Size + OS (Windows, Non-Windows) + Region. <br><br> For Premium software usage, there's a meter for each software type. Most premium software images have different meters for each core size. For more information, visit the [Compute Pricing Page](https://azure.microsoft.com/pricing/details/virtual-machines/)| `2005544f-659d-49c9-9094-8e0aea1be3a5`| +| Meter Name| This value is specific for each service in Azure. For compute, it is always “Compute Hours”.| `Compute Hours`| | Meter Region| Identifies the location of the datacenter for certain services that are priced based on datacenter location.| `JA East`| | Unit| Identifies the unit that the service is charged in. Compute resources are billed per hour.| `Hours`|-| Consumed| The amount of the resource that has been consumed for that day. For Compute, we bill for each minute the VM ran for a given hour (up to 6 decimals of accuracy).| `1, 0.5`| +| Consumed| The amount of the resource that has been consumed for that day. For Compute, we bill for each minute the VM ran for a given hour (up to six decimals of accuracy).| `1, 0.5`| | Resource Location | Identifies the datacenter where the resource is running.| `JA East`| | Consumed Service | The Azure platform service that you used.| `Microsoft.Compute`| | Resource Group | The resource group in which the deployed resource is running. 
For more information, see [Azure Resource Manager overview.](../azure-resource-manager/management/overview.md)|`MyRG`|-| Instance ID | The identifier for the resource. The identifier contains the name you specify for the resource when it was created. For VMs, the Instance ID will contain the SubscriptionId, ResourceGroupName, and VMName (or scale set name for scale set usage).| `/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/ resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM1`<br><br>or<br><br>`/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/ resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachineScaleSets/MyVMSS1`| +| Instance ID | The identifier for the resource. The identifier contains the name you specify for the resource when it was created. For VMs, the Instance ID contains the SubscriptionId, ResourceGroupName, and VM name (or scale set name for scale set usage).| `/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/ resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM1`<br><br>or<br><br>`/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/ resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachineScaleSets/MyVMSS1`| | Tags| Tags you assign to the resource. Use tags to group billing records. Learn how to tag your Virtual Machines using the [CLI](./tag-cli.md) or [PowerShell](./tag-portal.md). Tags are available for Resource Manager VMs only.| `{"myDepartment":"RD","myUser":"myName"}`|-| Additional Info | Service-specific metadata. For VMs, we populate the following data in the additional info field: <br><br> Image Type- specific image that you ran. Find the full list of supported strings below under Image Types.<br><br> Service Type: the size that you deployed.<br><br> VMName: name of your VM. This field is only populated for scale set VMs. If you need your VM Name for scale set VMs, you can find that in the Instance ID string above.<br><br> UsageType: This specifies the type of usage this represents.<br><br> ComputeHR is the Compute Hour usage for the underlying VM, like Standard_D1_v2.<br><br> ComputeHR_SW is the premium software charge if the VM is using premium software. | Virtual Machines<br>`{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR"}`<br><br>Virtual Machine Scale Sets<br> `{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"myVM1", "UsageType":"ComputeHR"}`<br><br>Premium Software<br> `{"ImageType":"","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR_SW"}` | +| Additional Info | Service-specific metadata. For VMs, we populate the following data in the Additional Info field: <br><br> Image Type: the specific image that you ran. Find the full list of supported strings under Image Types.<br><br> Service Type: the size that you deployed.<br><br> VMName: the name of your VM. This field is only populated for scale set VMs; if you need the VM name for VMs that aren't in a scale set, you can find it in the Instance ID string.<br><br> UsageType: Specifies the type of usage.<br><br> ComputeHR is the Compute Hour usage for the underlying VM, like Standard_D1_v2.<br><br> ComputeHR_SW is the premium software charge if the VM is using premium software. 
| Virtual Machines<br>`{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR"}`<br><br>Virtual Machine Scale Sets<br> `{"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName":"myVM1", "UsageType":"ComputeHR"}`<br><br>Premium Software<br> `{"ImageType":"","ServiceType":"Standard_DS1_v2","VMName":"", "UsageType":"ComputeHR_SW"}` | ## Image Type For some images in the Azure gallery, the image type is populated in the Additional Info field. This enables users to understand and track what they have deployed on their Virtual Machine. The following values are populated in this field based on the image you deployed: - Windows Server Preview ## Service Type-The service type field in the Additional Info field corresponds to the exact VM size you deployed. Premium storage VMs (SSD-based) and non-premium storage VMs (HDD-based) are priced the same. If you deploy an SSD-based size, like Standard\_DS2\_v2, you see the non-SSD size (`Standard\_D2\_v2 VM`) in the Meter Sub-Category column and the SSD-size (`Standard\_DS2\_v2`) +The service type field in the Additional Info field corresponds to the exact VM size you deployed. Premium storage VMs (SSD-based) and nonpremium storage VMs (HDD-based) are priced the same. If you deploy an SSD-based size, like Standard\_DS2\_v2, you see the non-SSD size (`Standard\_D2\_v2 VM`) in the Meter Sub-Category column and the SSD size (`Standard\_DS2\_v2`) in the Additional Info field. ## Region Names-The region name populated in the Resource Location field in the usage details varies from the region name used in the Azure Resource Manager. Here is a mapping between the region values: +The region name populated in the Resource Location field in the usage details varies from the region name used in the Azure Resource Manager. Here's a mapping between the region values: | **Resource Manager Region Name** | **Resource Location in Usage Details** | ||| ### What resources are charged when deploying a VM? VMs incur costs for the VM itself, any premium software running on the VM, the storage account/managed disk associated with the VM, and the networking bandwidth transfers from the VM. ### How can I tell if a VM is using Azure Hybrid Benefit in the Usage CSV?-If you deploy using the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/), you are charged the Non-Windows VM rate since you are bringing your own license to the cloud. In your bill, you can distinguish which Resource Manager VMs are running Azure Hybrid Benefit because they have either “Windows\_Server BYOL” or “Windows\_Client BYOL” in the ImageType column. +If you deploy using the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/), you're charged the Non-Windows VM rate since you're bringing your own license to the cloud. In your bill, you can distinguish which Resource Manager VMs are running Azure Hybrid Benefit because they have either “Windows\_Server BYOL” or “Windows\_Client BYOL” in the ImageType column. ### How are Basic vs. Standard VM Types differentiated in the Usage CSV? Both Basic and Standard A-Series VMs are offered. If you deploy a Basic VM, in the Meter Sub Category, it has the string “Basic.” If you deploy a Standard A-Series VM, then the VM size appears as “A1 VM” since Standard is the default. 
To learn more about the differences between Basic and Standard, see the [Pricing Page](https://azure.microsoft.com/pricing/details/virtual-machines/). ### What are ExtraSmall, Small, Medium, Large, and ExtraLarge sizes? ExtraSmall through ExtraLarge are the legacy names for Standard\_A0 through Standard\_A4. ### What is the difference between Meter Region and Resource Location? The Meter Region is associated with the meter. For some Azure services that use one price for all regions, the Meter Region field could be blank. However, since Virtual Machines have dedicated prices per region, this field is populated. Similarly, the Resource Location for Virtual Machines is the location where the VM is deployed. The Azure regions in both fields are the same, although they might have a different string convention for the region name. ### Why is the ImageType value blank in the Additional Info field?-The ImageType field is only populated for a subset of images. If you did not deploy one of the images above, the ImageType is blank. +The ImageType field is only populated for a subset of images. If you didn't deploy one of the listed images, the ImageType is blank. ### Why is the VMName blank in the Additional Info?-The VMName is only populated in the Additional Info field for VMs in a scale set. The InstanceID field contains the VM name for non-scale set VMs. +The VMName is only populated in the Additional Info field for VMs in a scale set. The InstanceID field contains the VM name for VMs that aren't in a scale set. ### What does ComputeHR mean in the UsageType field in the Additional Info?-ComputeHR stands for Compute Hour which represents the usage event for the underlying infrastructure cost. If the UsageType is ComputeHR\_SW, the usage event represents the premium software charge for the VM. -### How do I know if I am charged for premium software? -When exploring which VM Image best fits your needs, be sure to check out [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/compute). The image has the software plan rate. If you see “Free” for the rate, there is no additional cost for the software. -### What is the difference between Microsoft.ClassicCompute and Microsoft.Compute in the Consumed service? +ComputeHR stands for 'Compute Hour', which represents the usage event for the underlying infrastructure cost. If the UsageType is ComputeHR\_SW, the usage event represents the premium software charge for the VM. +### How do I know if I'm charged for premium software? +When exploring which VM Image best fits your needs, be sure to check out [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/compute). The image has the software plan rate. If you see “Free” for the rate, there's no extra cost for the software. +### What is the difference between 'Microsoft.ClassicCompute' and 'Microsoft.Compute' in the Consumed service? Microsoft.ClassicCompute represents classic resources deployed via the Azure Service Manager. If you deploy via the Resource Manager, then Microsoft.Compute is populated in the consumed service. Learn more about the [Azure Deployment models](../azure-resource-manager/management/deployment-models.md). ### Why is the InstanceID field blank for my Virtual Machine usage?-If you deploy via the classic deployment model, the InstanceID string is not available. +If you deploy via the classic deployment model, the InstanceID string isn't available. 
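### Can I retrieve my usage details programmatically instead of downloading the CSV?
You can query usage records with the Azure CLI's consumption commands. The following is a minimal sketch under the assumption that the `az consumption` command group is available in your CLI version; the field names in the output can differ slightly from the CSV column names.

```azurecli-interactive
# List usage records and include the Additional Info metadata for each record
az consumption usage list --include-additional-properties --output table
```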
### Why are the tags for my VMs not flowing to the usage details?-Tags flow to the Usage CSV for Resource Manager VMs only. Classic resource tags are not available in the usage details. +Tags flow to the Usage CSV for Resource Manager VMs only. Classic resource tags aren't available in the usage details. ### How can the consumed quantity be more than 24 hours in one day?-In the Classic model, billing for resources is aggregated at the Cloud Service level. If you have more than one VM in a Cloud Service that uses the same billing meter, your usage is aggregated together. VMs deployed via Resource Manager are billed at the VM level, so this aggregation will not apply. +In the Classic model, billing for resources is aggregated at the Cloud Service level. If you have more than one VM in a Cloud Service that uses the same billing meter, your usage is aggregated together. VMs deployed via Resource Manager are billed at the VM level, so this aggregation won't apply. ### Why is pricing not available for DS/FS/GS/LS sizes on the pricing page?-Premium storage capable VMs are billed at the same rate as non-premium storage capable VMs. Only your storage costs differ. Visit the [storage pricing page](https://azure.microsoft.com/pricing/details/storage/unmanaged-disks/) for more information. +Premium storage capable VMs are billed at the same rate as nonpremium storage capable VMs. Only your storage costs differ. Visit the [storage pricing page](https://azure.microsoft.com/pricing/details/storage/unmanaged-disks/) for more information. ## Next steps-To learn more about your usage details, see [Understand your bill for Microsoft Azure.](../cost-management-billing/understand/review-individual-bill.md) +To learn more about your usage details, see [Understand your bill for Microsoft Azure](../cost-management-billing/understand/review-individual-bill.md). |
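The Additional Info examples in the entry above lend themselves to a short illustration. Here's a minimal Python sketch (not part of the docs change) that parses the AdditionalInfo JSON from a usage CSV export to flag Azure Hybrid Benefit VMs and premium software line items; the exact CSV header names (`AdditionalInfo`, `InstanceId`) are assumptions to verify against your own export.

```python
import csv
import json

def classify_usage_rows(path):
    """Yield (instance, hybrid_benefit, premium_software) per usage row.

    Assumes an "AdditionalInfo" column holding JSON like the examples
    above; adjust the header names to match your actual export.
    """
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            info = json.loads(row.get("AdditionalInfo") or "{}")
            # "Windows_Server BYOL" / "Windows_Client BYOL" mark Azure Hybrid Benefit.
            hybrid = "BYOL" in info.get("ImageType", "")
            # ComputeHR is the infrastructure charge; ComputeHR_SW is premium software.
            premium_sw = info.get("UsageType", "") == "ComputeHR_SW"
            yield row.get("InstanceId", ""), hybrid, premium_sw

for instance, hybrid, premium_sw in classify_usage_rows("usage.csv"):
    tags = [t for t, on in (("hybrid-benefit", hybrid), ("premium-sw", premium_sw)) if on]
    print(instance, ",".join(tags))
```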
virtual-network | Virtual Network Tcpip Performance Tuning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tcpip-performance-tuning.md | We don't encourage customers to increase VM MTUs. This discussion is meant to ex > [!IMPORTANT] >Increasing MTU isn't known to improve performance and could have a negative effect on application performance.-> +>Hybrid networking services, such as VPN, ExpressRoute, and vWAN, support a maximum MTU of 1400 bytes. > #### Large send offload |
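As a back-of-the-envelope check on why the 1400-byte cap rarely matters, here's a small Python sketch of per-segment TCP efficiency. It's an illustration only, assuming IPv4 and TCP headers with no options; it isn't guidance to change MTU.

```python
# Per-segment TCP payload efficiency at a given MTU.
# Assumes a 20-byte IPv4 header and a 20-byte TCP header (no options).
IP_HEADER = 20
TCP_HEADER = 20

def mss(mtu: int) -> int:
    """TCP payload bytes carried per segment at this MTU."""
    return mtu - IP_HEADER - TCP_HEADER

for mtu in (1400, 1500):
    print(f"MTU {mtu}: MSS {mss(mtu)} bytes, {mss(mtu) / mtu:.1%} of each packet is payload")
```

At 1400 bytes the payload share is about 97.1%, versus 97.3% at 1500, a gap of well under one percent, which is consistent with the note that raising the MTU isn't known to improve performance.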
web-application-firewall | Waf Front Door Drs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md | The following rule groups and rules are available when using Web Application Fir ||| |Bot100100|Malicious bots detected by threat intelligence| |Bot100200|Malicious bots that have falsified their identity|+ + Bot100100 scans both client IP addresses and IPs in the X-Forwarded-For header. ### <a name="bot200"></a> Good bots |RuleId|Description| The following rule groups and rules are available when using Web Application Fir |Bot300300|General purpose HTTP clients and SDKs| |Bot300400|Service agents| |Bot300500|Site health monitoring services|-|Bot300600|Unknown bots detected by threat intelligence<br />(This rule also includes IP addresses matched to the Tor network.)| +|Bot300600|Unknown bots detected by threat intelligence| |Bot300700|Other bots| +Bot300600 scans both client IP addresses and IPs in the X-Forwarded-For header. + |
web-application-firewall | Application Gateway Crs Rulegroups Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md | The following rule groups and rules are available when using Web Application Fir |Bot100100|Malicious bots detected by threat intelligence| |Bot100200|Malicious bots that have falsified their identity| + Bot100100 scans both client IP addresses and the IPs in the X-Forwarded-For header. + ### <a name="bot200"></a> Good bots |RuleId|Description| ||| The following rule groups and rules are available when using Web Application Fir |Bot300300|General purpose HTTP clients and SDKs| |Bot300400|Service agents| |Bot300500|Site health monitoring services|-|Bot300600|Unknown bots detected by threat intelligence<br />(This rule also includes IP addresses matched to the Tor network.)| +|Bot300600|Unknown bots detected by threat intelligence| |Bot300700|Other bots| + Bot300600 scans both client IP addresses and the IPs in the X-Forwarded-For header. + ## Next steps |
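Both WAF entries above make the same point: rules Bot100100 and Bot300600 evaluate the client IP address and every IP carried in the X-Forwarded-For header. Here's a hedged Python sketch of that matching pattern only; the deny list is a stand-in, since the real rules match against Microsoft threat intelligence rather than a static set.

```python
import ipaddress

# Stand-in deny list using a documentation-range address; the actual WAF
# rules consult threat-intelligence feeds, not a static set like this.
DENY_LIST = {ipaddress.ip_address("203.0.113.7")}

def matches_deny_list(client_ip: str, xff_header: str) -> bool:
    """Check the client IP and each X-Forwarded-For entry against the list."""
    candidates = [client_ip] + [part.strip() for part in xff_header.split(",")]
    for raw in candidates:
        try:
            ip = ipaddress.ip_address(raw)
        except ValueError:
            continue  # skip malformed or port-suffixed entries
        if ip in DENY_LIST:
            return True
    return False

print(matches_deny_list("198.51.100.20", "203.0.113.7, 10.0.0.1"))  # True: XFF hit
```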