Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Enable Authentication React Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app.md | The sample code is made up of the following components. Add these components fro - [src/pages/Hello.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/6-AdvancedScenarios/1-call-api-obo/SPA/src/pages/Hello.jsx) - Demonstrate how to call a protected resource with OAuth2 bearer token. - It uses the [useMsal](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/hooks.md) hook that returns the PublicClientApplication instance. - With PublicClientApplication instance, it acquires an access token to call the REST API.- - Invokes the [callApiWithToken](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/fetch.js) function to fetch the data from the REST API and renders the result using the **DataDisplay** component. + - Invokes the [callApiWithToken](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/4-Deployment/2-deploy-static/App/src/fetch.js) function to fetch the data from the REST API and renders the result using the **DataDisplay** component. - [src/components/NavigationBar.jsx](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/components/NavigationBar.jsx) - The app top navigation bar with the sign-in, sign-out, edit profile and call REST API reset buttons. - It uses the [AuthenticatedTemplate](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md#authenticatedtemplate-and-unauthenticatedtemplate) and UnauthenticatedTemplate, which only render their children if a user is authenticated or unauthenticated, respectively. The sample code is made up of the following components. Add these components fro - [src/styles/App.css](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/styles/App.css) and [src/styles/index.css](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/styles/index.css) - CSS styling files for the app. -- [src/fetch.js](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/3-Authorization-II/2-call-api-b2c/SPA/src/fetch.js) - Fetches HTTP requests to the REST API. +- [src/fetch.js](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/blob/main/4-Deployment/2-deploy-static/App/src/fetch.js) - Fetches HTTP requests to the REST API. ## Step 4: Configure your React app |
active-directory-b2c | Identity Provider Microsoft Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md | To enable sign-in for users with a Microsoft account in Azure Active Directory B 1. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**. For more information on the different account type selections, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).-1. Under **Redirect URI (optional)**, select **Web** and enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant, and `your-domain-name` with your custom domain. +1. Under **Redirect URI (optional)**, select **Web** and enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your Azure AD B2C tenant, and `your-domain-name` with your custom domain. 1. Select **Register** 1. Record the **Application (client) ID** shown on the application Overview page. You need the client ID when you configure the identity provider in the next section. 1. Select **Certificates & secrets** |
active-directory-b2c | Partner Nevis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md | To get started, you'll need: - An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription >[!NOTE]->To integrate Nevis into your sign-up policy flow, configure the Azure AD B2C environment to use custom policies. </br>See, [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](/azure/active-directory-b2c/tutorial-create-user-flows). +>To integrate Nevis into your sign-up policy flow, configure the Azure AD B2C environment to use custom policies. </br>See, [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md). ## Scenario description The diagram shows the implementation. ## Next steps - [Custom policies in Azure AD B2C](./custom-policy-overview.md)-- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)+- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy) |
active-directory | How Provisioning Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md | -The **Azure AD Provisioning Service** provisions users to SaaS apps and other systems by connecting to a System for Cross-Domain Identity Management (SCIM) 2.0 user management API endpoint provided by the application vendor. This SCIM endpoint allows Azure AD to programmatically create, update, and remove users. For selected applications, the provisioning service can also create, update, and remove additional identity-related objects, such as groups and roles. The channel used for provisioning between Azure AD and the application is encrypted using HTTPS TLS 1.2 encryption. +The **Azure AD Provisioning Service** provisions users to SaaS apps and other systems by connecting to a System for Cross-Domain Identity Management (SCIM) 2.0 user management API endpoint provided by the application vendor. This SCIM endpoint allows Azure AD to programmatically create, update, and remove users. For selected applications, the provisioning service can also create, update, and remove extra identity-related objects, such as groups and roles. The channel used for provisioning between Azure AD and the application is encrypted using HTTPS TLS 1.2 encryption.  To request an automatic Azure AD provisioning connector for an app that doesn't ## Authorization -Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you'll need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you'll be able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials. +Credentials are required for Azure AD to connect to the application's user management API. While you're configuring automatic user provisioning for an application, you need to enter valid credentials. For gallery applications, you can find credential types and requirements for the application by referring to the app tutorial. For non-gallery applications, you can refer to the [SCIM](./use-scim-to-provision-users-and-groups.md#authorization-to-provisioning-connectors-in-the-application-gallery) documentation to understand the credential types and requirements. In the Azure portal, you are able to test the credentials by having Azure AD attempt to connect to the app's provisioning app using the supplied credentials. ## Mapping attributes When you configure provisioning to a SaaS application, one of the types of attri For outbound provisioning from Azure AD to a SaaS application, relying on [user or group assignments](../manage-apps/assign-user-or-group-access-portal.md) is the most common way to determine which users are in scope for provisioning. Because user assignments are also used for enabling single sign-on, the same method can be used for managing both access and provisioning. Assignment-based scoping doesn't apply to inbound provisioning scenarios such as Workday and Successfactors. 
-* **Groups.** With an Azure AD Premium license plan, you can use groups to assign access to a SaaS application. Then, when the provisioning scope is set to **Sync only assigned users and groups**, the Azure AD provisioning service will provision or de-provision users based on whether they're members of a group that's assigned to the application. The group object itself isn't provisioned unless the application supports group objects. Ensure that groups assigned to your application have the property "SecurityEnabled" set to "True". +* **Groups.** With an Azure AD Premium license plan, you can use groups to assign access to a SaaS application. Then, when the provisioning scope is set to **Sync only assigned users and groups**, the Azure AD provisioning service provisions or de-provisions users based on whether they're members of a group that's assigned to the application. The group object itself isn't provisioned unless the application supports group objects. Ensure that groups assigned to your application have the property "SecurityEnabled" set to "True". * **Dynamic groups.** The Azure AD user provisioning service can read and provision users in [dynamic groups](../enterprise-users/groups-create-rule.md). Keep these caveats and recommendations in mind: After the initial cycle, all other cycles will: 10. Persist a new watermark at the end of the incremental cycle, which provides the starting point for the later incremental cycles. > [!NOTE]-> You can optionally disable the **Create**, **Update**, or **Delete** operations by using the **Target object actions** check boxes in the [Mappings](customize-application-attributes.md) section. The logic to disable a user during an update is also controlled via an attribute mapping from a field such as "accountEnabled". +> You can optionally disable the **Create**, **Update**, or **Delete** operations by using the **Target object actions** check boxes in the [Mappings](customize-application-attributes.md) section. The logic to disable a user during an update is also controlled via an attribute mapping from a field such as *accountEnabled*. The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs: |
active-directory | Concept Authentication Oath Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md | Some OATH TOTP hardware tokens are programmable, meaning they don't come with a Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice. Hardware OATH tokens are available for users with an Azure AD Premium P1 or P2 license. +>[!IMPORTANT] +>The preview is only supported in Azure Global and Azure Government clouds. + OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed in the token. These keys must be input into Azure AD as described in the following steps. Secret keys are limited to 128 characters, which may not be compatible with all tokens. The secret key can only contain the characters *a-z* or *A-Z* and digits *2-7*, and must be encoded in *Base32*. Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow. Helga@contoso.com,1234567,2234567abcdef2234567abcdef,60,Contoso,HardwareKey > [!NOTE] > Make sure you include the header row in your CSV file. -Once properly formatted as a CSV file, a Global Administrator can then sign in to the Azure portal, navigate to **Azure Active Directory** > **Security** > **Multifactor authentication** > **OATH tokens**, and upload the resulting CSV file. +Once properly formatted as a CSV file, a global administrator can then sign in to the Azure portal, navigate to **Azure Active Directory** > **Security** > **Multifactor authentication** > **OATH tokens**, and upload the resulting CSV file. Depending on the size of the CSV file, it may take a few minutes to process. Select the **Refresh** button to get the current status. If there are any errors in the file, you can download a CSV file that lists any errors for you to resolve. The field names in the downloaded CSV file are different than the uploaded version. Once any errors have been addressed, the administrator then can activate each key by selecting **Activate** for the token and entering the OTP displayed on the token. You can activate a maximum of 200 OATH tokens every 5 minutes. -Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time. Hardware OATH tokens cannot be assigned to guest users in the resource tenant. +Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time. Hardware OATH tokens cannot be assigned to guest users in the resource tenant. ->[!IMPORTANT] ->The preview is only supported in Azure Global and Azure Government clouds. +>[!IMPORTANT] +>Make sure to only assign each token to a single user. +>In the future, support for the assignment of a single token to multiple users will stop to prevent a security risk. ## Determine OATH token registration type in mysecurityinfo |
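As a rough sketch of the CSV upload described in the entry above, the following PowerShell writes a file that reuses the article's example row. The header-row column names (`upn,serial number,secret key,time interval,manufacturer,model`) are assumed from the documented upload format, and the token values are placeholders, so verify both against the article before uploading.

```powershell
# Minimal sketch: assemble the hardware OATH token upload CSV, including the required header row.
# The data row below reuses the example from the article; replace it with your own token details.
$lines = @(
    'upn,serial number,secret key,time interval,manufacturer,model'
    'Helga@contoso.com,1234567,2234567abcdef2234567abcdef,60,Contoso,HardwareKey'
)
Set-Content -Path .\oath-tokens.csv -Value $lines -Encoding UTF8
```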
active-directory | Howto Authentication Temporary Access Pass | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md | Keep these limitations in mind: - Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they've signed in with a Temporary Access Pass. Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience doesn't currently support FIDO2 and Phone Sign-in registration. - A Temporary Access Pass can't be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter.-- After a Temporary Access Pass is added to an account or expires, it can take a few minutes for the changes to replicate. Users may still see a prompt for Temporary Access Pass during this time. +- It can take a few minutes for changes to replicate. Because of this, after a Temporary Access Pass is added to an account it can take a while for the prompt to appear. For the same reason, after a Temporary Access Pass expires, users may still see a prompt for Temporary Access Pass. ## Troubleshooting |
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | Depending on the size of the CSV file, it might take a few minutes to process. S After any errors are addressed, the administrator can activate each key by selecting **Activate** for the token and entering the OTP displayed in the token. -Users can have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time. +Users can have a combination of up to five OATH hardware tokens or authenticator applications, such as the Microsoft Authenticator app, configured for use at any time. ++>[!IMPORTANT] +>Make sure to only assign each token to a single user. +>In the future, support for the assignment of a single token to multiple users will stop to prevent a security risk. ## Phone call settings |
active-directory | Onboard Enable Controller After Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md | This article also describes how to enable the controller in Amazon Web Services ## Enable or disable the controller in Azure +You can enable or disable the controller in Azure at the Subscription level of your Management Group(s). -1. In Azure, open the **Access control (IAM)** page. +1. From the Azure **Home** page, select **Management groups**. +1. Locate the group for which you want to enable or disable the controller, then select the arrow to expand the group menu and view your subscriptions. Alternatively, you can select the **Total Subscriptions** number listed for your group. +1. Select the subscription for which you want to enable or disable the controller, then click **Access control (IAM)** in the navigation menu. 1. In the **Check access** section, in the **Find** box, enter **Cloud Infrastructure Entitlement Management**. The **Cloud Infrastructure Entitlement Management assignments** page appears, displaying the roles assigned to you. |
active-directory | Ui Remediation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-remediation.md | The **Remediation** dashboard in Permissions Management provides an overview of This article provides an overview of the components of the **Remediation** dashboard. > [!NOTE]-> To view the **Remediation** dashboard, your must have **Viewer**, **Controller**, or **Administrator** permissions. To make changes on this dashboard, you must have **Controller** or **Administrator** permissions. If you don't have these permissions, contact your system administrator. +> To view the **Remediation** dashboard, you must have **Viewer**, **Controller**, or **Approver** permissions. To make changes on this dashboard, you must have **Controller** or **Approver** permissions. If you don't have these permissions, contact your system administrator. > [!NOTE] > Microsoft Azure uses the term *role* for what other cloud providers call *policy*. Permissions Management automatically makes this terminology change when you select the authorization system type. In the user documentation, we use *role/policy* to refer to both. |
active-directory | Usage Analytics Active Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-tasks.md | When you select **Active Tasks**, the **Analytics** dashboard provides a high-le The dashboard only lists tasks that are active. The following components make up the **Active Tasks** dashboard: - **Authorization System Type**: Select the authorization you want to use: Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP).- - **Authorization System**: Select from a **List** of accounts and **Folders***. + - **Authorization System**: Select from a **List** of accounts and **Folders**. + > [!NOTE] + > Folders can be used to organize and group together your list of accounts, or subscriptions. To create a folder, go to **Settings (gear icon) > Folders > Create Folder**. - **Tasks Type**: Select **All** tasks, **High Risk tasks** or, for a list of tasks where users have deleted data, select **Delete Tasks**.- - **Search**: Enter criteria to find specific tasks. + - **Search**: Enter criteria to find specific tasks. 1. Select **Apply** to display the criteria you've selected. |
active-directory | Howto Conditional Access Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md | Sign-in frequency previously applied only to the first factor authentication ### User sign-in frequency and device identities -On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](/azure/active-directory/develop/scenario-desktop-acquire-token-wam) plugin can refresh a PRT during native application authentication using WAM. +On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](../develop/scenario-desktop-acquire-token-wam.md) plugin can refresh a PRT during native application authentication using WAM. Note: The timestamp captured from user log-in is not necessarily the same as the last recorded timestamp of PRT refresh because of the 4-hour refresh cycle. The case when it is the same is when a PRT has expired and a user log-in refreshes it for 4 hours. In the following examples, assume SIF policy is set to 1 hour and PRT is refreshed at 00:00. We factor for five minutes of clock skew, so that we don't prompt users more o ## Next steps -* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md). +* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md). |
active-directory | Msal Net Provide Httpclient | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-provide-httpclient.md | -`HttpClient` is intended to be instantiated once and then reused throughout the life of an application. See [Remarks](/dotnet/api/system.net.http.httpclient#remarks). +`HttpClient` is intended to be instantiated once and then reused throughout the life of an application. See [Guidelines for using HttpClient: Recommended use](/dotnet/fundamentals/networking/http/httpclient-guidelines#recommended-use). ## Initialize with HttpClientFactory The following example shows to create an `HttpClientFactory` and then initialize a public client application with it: |
active-directory | Msal Net Token Cache Serialization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md | The recommendation is: The [Microsoft.Identity.Web.TokenCache](https://www.nuget.org/packages/Microsoft.Identity.Web.TokenCache) NuGet package provides token cache serialization within the [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web) library. -If you're using the MSAL library directly in an ASP.NET Core app, consider moving to use [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web), which provides a simpler, higher-level API. Otherwise, see the [Non-ASP.NET Core web apps and web APIs](/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=aspnet#configuring-the-token-cache), which covers direct MSAL usage. +If you're using the MSAL library directly in an ASP.NET Core app, consider moving to use [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web), which provides a simpler, higher-level API. Otherwise, see the [Non-ASP.NET Core web apps and web APIs](?tabs=aspnet#configuring-the-token-cache), which covers direct MSAL usage. | Extension method | Description | The following samples illustrate token cache serialization. | | -- | -- | |[active-directory-dotnet-desktop-msgraph-v2](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | Desktop (WPF) | Windows Desktop .NET (WPF) application that calls the Microsoft Graph API. | |[active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2) | Desktop (console) | Set of Visual Studio solutions that illustrate the migration of Azure AD v1.0 applications (using ADAL.NET) to Microsoft identity platform applications (using MSAL.NET). In particular, see [Token cache migration](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/TokenCacheMigration/README.md) and [Confidential client token cache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache). |-[ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | ASP.NET (net472) | Example of token cache serialization in an ASP.NET MVC application (using MSAL.NET). In particular, see [MsalAppBuilder](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs). +[ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | ASP.NET (net472) | Example of token cache serialization in an ASP.NET MVC application (using MSAL.NET). In particular, see [MsalAppBuilder](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs). |
active-directory | B2b Quickstart Invite Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md | If you don't have an Azure subscription, create a [free account](https://azure ## Prerequisites ### PowerShell Module-Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Users). +Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta&preserve-view=true) (Microsoft.Graph.Users). You can use the `#Requires` statement to prevent running a script unless the required PowerShell modules are met. ++```powershell +#Requires -Modules Microsoft.Graph.Identity.SignIns, Microsoft.Graph.Users +``` ### Get a test email account You need a test email account that you can send the invitation to. The account m Run the following command to connect to the tenant domain: ```powershell-Connect-MgGraph -Scopes user.readwrite.all +Connect-MgGraph -Scopes 'User.ReadWrite.All' ``` When prompted, enter your credentials. When prompted, enter your credentials. When no longer needed, you can delete the test user account in the directory. Run the following command to delete a user account: ```powershell- Remove-AzureADUser -ObjectId "<UPN>" + Remove-MgUser -UserId '<String>' +``` +For example: +```powershell +Remove-MgUser -UserId 'john_contoso.com#EXT#@fabrikam.onmicrosoft.com' +``` +or +```powershell +Remove-MgUser -UserId '3f80a75e-750b-49aa-a6b0-d9bf6df7b4c6' ```-For example: `Remove-AzureADUser -UserId john_contoso.com#EXT#@fabrikam.onmicrosoft.com` ## Next steps |
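For context on the quickstart this entry updates, a minimal invitation sketch using the same Microsoft Graph PowerShell modules might look like the following. The email address and redirect URL are placeholders, and `New-MgInvitation` is assumed here as the cmdlet that sends the B2B invitation; adjust it to match the quickstart's actual steps.

```powershell
#Requires -Modules Microsoft.Graph.Identity.SignIns, Microsoft.Graph.Users

# Sign in with a scope that allows creating invitations and managing users.
Connect-MgGraph -Scopes 'User.ReadWrite.All'

# Placeholder values: replace with your test email account and an appropriate redirect URL.
$invitation = New-MgInvitation `
    -InvitedUserEmailAddress 'john@contoso.com' `
    -InviteRedirectUrl 'https://myapplications.microsoft.com' `
    -SendInvitationMessage

# The invited user's object ID can be passed to Remove-MgUser later for cleanup.
$invitation.InvitedUser.Id
```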
active-directory | Cross Tenant Access Settings B2b Collaboration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md | With inbound settings, you select which external users and groups will be able t ### Allow users to sync into this tenant -If you select **Inbound access** of the added organization, you'll see the **Cross-tenant sync (Preview)** tab and the **Allow users sync into this tenant** check box. Cross-tenant synchronization is a one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. For more information, see [Configure cross-tenant synchronization](../../active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md) and the [Multi-tenant organizations documentation](/azure/active-directory/multi-tenant-organizations). +If you select **Inbound access** of the added organization, you'll see the **Cross-tenant sync (Preview)** tab and the **Allow users sync into this tenant** check box. Cross-tenant synchronization is a one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. For more information, see [Configure cross-tenant synchronization](../../active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md) and the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml). :::image type="content" source="media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-sync-tab.png" alt-text="Screenshot that shows the Cross-tenant sync tab with the Allow users sync into this tenant check box." lightbox="media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-sync-tab.png"::: When you remove an organization from your Organizational settings, the default c ## Next steps - See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.-- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)+- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) |
active-directory | Cross Tenant Access Settings B2b Direct Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md | With inbound settings, you select which external users and groups will be able t 1. Select **Save**. > [!NOTE]-> When configuring settings for an organization, you'll notice a **Cross-tenant sync (Preview)** tab. This tab doesn't apply to your B2B direct connect configuration. Instead, this feature is used by multi-tenant organizations to enable B2B collaboration across their tenants. For more information, see the [multi-tenant organization documentation](/azure/active-directory/multi-tenant-organizations). +> When configuring settings for an organization, you'll notice a **Cross-tenant sync (Preview)** tab. This tab doesn't apply to your B2B direct connect configuration. Instead, this feature is used by multi-tenant organizations to enable B2B collaboration across their tenants. For more information, see the [multi-tenant organization documentation](../multi-tenant-organizations/index.yml). ## Modify outbound access settings When you remove an organization from your Organizational settings, the default c ## Next steps -[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) +[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md) |
active-directory | External Identities Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md | The following table gives a detailed comparison of the scenarios you can enable | **Branding** | Host/inviting organization's brand is used. | For sign-in screens, the user's home organization brand is used. In the shared channel, the resource organization's brand is used. | Fully customizable branding per application or organization. | | **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Documentation](b2b-direct-connect-overview.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) | -Based on your organization's requirements you might use cross-tenant synchronization (preview) in multi-tenant organizations. For more information about this new feature, see the [multi-tenant organization documentation](/azure/active-directory/multi-tenant-organizations) and the [feature comparison](../multi-tenant-organizations/overview.md#compare-multi-tenant-capabilities). +Based on your organization's requirements you might use cross-tenant synchronization (preview) in multi-tenant organizations. For more information about this new feature, see the [multi-tenant organization documentation](../multi-tenant-organizations/index.yml) and the [feature comparison](../multi-tenant-organizations/overview.md#compare-multi-tenant-capabilities). ## Managing External Identities features Cross-tenant access settings let you manage B2B collaboration and B2B direct con For more information, see [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md). -Azure AD has a new feature for multi-tenant organizations called cross-tenant synchronization (preview), which allows for a seamless collaboration experience across Azure AD tenants. Cross-tenant synchronization settings are configured under the **Organization-specific access settings**. To learn more about multi-tenant organizations and cross-tenant synchronization see the [Multi-tenant organizations documentation](/azure/active-directory/multi-tenant-organizations). +Azure AD has a new feature for multi-tenant organizations called cross-tenant synchronization (preview), which allows for a seamless collaboration experience across Azure AD tenants. Cross-tenant synchronization settings are configured under the **Organization-specific access settings**. To learn more about multi-tenant organizations and cross-tenant synchronization see the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml). ### Microsoft cloud settings for B2B collaboration (preview) A multi-tenant organization is an organization that has more than one instance o - [What is Azure AD B2B collaboration?](what-is-b2b.md) - [What is Azure AD B2B direct connect?](b2b-direct-connect-overview.md) - [About Azure AD B2C](../../active-directory-b2c/overview.md)-- [About Azure AD multi-tenant organizations](../../active-directory/multi-tenant-organizations/overview.md)+- [About Azure AD multi-tenant organizations](../../active-directory/multi-tenant-organizations/overview.md) |
active-directory | Users Default Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md | In Azure Active Directory (Azure AD), all users are granted a set of default per This article describes those default permissions and compares the member and guest user defaults. The default user permissions can be changed only in user settings in Azure AD. ## Member and guest users+ The set of default permissions depends on whether the user is a native member of the tenant (member user) or whether the user is brought over from another directory as a business-to-business (B2B) collaboration guest (guest user). For more information about adding guest users, see [What is Azure AD B2B collaboration?](../external-identities/what-is-b2b.md). Here are the capabilities of the default permissions: * *Member users* can register applications, manage their own profile photo and mobile phone number, change their own password, and invite B2B guests. These users can also read all directory information (with a few exceptions). You can restrict default permissions for member users in the following ways: | **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). | | **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |-| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** Restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It doesn't restrict access to Azure AD data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It doesn't restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this option to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Don't use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management will block non-administrators access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). 
</p><p></p><p> **How do I grant only a specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management will target access to all Azure management. | -| **Restrict non-admin users from creating tenants** | Users can create tenants in the Azure AD and Entra administration portal under Manage tenant. The creation of a tenant is recorded in the Audit log as category DirectoryManagement and activity Create Company. Anyone who creates a tenant will become the Global Administrator of that tenant. The newly created tenant does not inherit any settings or configurations. </p><p></p><p>**What does this switch do?** <br> Setting this option to **Yes** restricts creation of Azure AD tenants to the Global Administrator or tenant creator roles. Setting this option to **No** allows non-admin users to create Azure AD tenants. Tenant create will continue to be recorded in the Audit log. </p><p></p><p>**How do I grant only a specific non-administrator users the ability to create new tenants?** <br> Set this option to Yes, then assign them the tenant creator role.| +| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** Restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It doesn't restrict access to Azure AD data using PowerShell, Microsoft GraphAPI, or other clients such as Visual Studio. <br>It doesn't restrict access as long as a user is assigned a custom role (or any role). </p><p></p><p>**When should I use this switch?** <br>Use this option to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Don't use this switch as a security measure. Instead, create a Conditional Access policy that targets Microsoft Azure Management that blocks non-administrators access to [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management). </p><p></p><p> **How do I grant only a specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management targets access to all Azure management. | +| **Restrict non-admin users from creating tenants** | Users can create tenants in the Azure AD and Entra administration portal under Manage tenant. The creation of a tenant is recorded in the Audit log as category DirectoryManagement and activity Create Company. Anyone who creates a tenant becomes the Global Administrator of that tenant. The newly created tenant doesn't inherit any settings or configurations. </p><p></p><p>**What does this switch do?** <br> Setting this option to **Yes** restricts creation of Azure AD tenants to the Global Administrator or tenant creator roles. Setting this option to **No** allows non-admin users to create Azure AD tenants. Tenant create will continue to be recorded in the Audit log. 
</p><p></p><p>**How do I grant only a specific non-administrator users the ability to create new tenants?** <br> Set this option to Yes, then assign them the tenant creator role.| | **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag doesn't prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. | +The **Restrict non-admin users from creating tenants** option is shown [below](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/UserSettings) +++ ## Restrict guest users' default permissions You can restrict default permissions for guest users in the following ways. Permission | Setting explanation ## Object ownership ### Application registration owner permissions+ When a user registers an application, they're automatically added as an owner for the application. As an owner, they can manage the metadata of the application, such as the name and permissions that the app requests. They can also manage the tenant-specific configuration of the application, such as the single sign-on (SSO) configuration and user assignments. An owner can also add or remove other owners. Unlike global administrators, owners can manage only the applications that they own. ### Enterprise application owner permissions+ When a user adds a new enterprise application, they're automatically added as an owner. As an owner, they can manage the tenant-specific configuration of the application, such as the SSO configuration, provisioning, and user assignments. An owner can also add or remove other owners. Unlike global administrators, owners can manage only the applications that they own. ### Group owner permissions+ When a user creates a group, they're automatically added as an owner for that group. As an owner, they can manage properties of the group (such as the name) and manage group membership. An owner can also add or remove other owners. Unlike global administrators and user administrators, owners can manage only the groups that they own. An owner can also add or remove other owners. Unlike global administrators and u To assign a group owner, see [Managing owners for a group](active-directory-accessmanagement-managing-group-owners.md). ### Ownership permissions+ The following tables describe the specific permissions in Azure AD that member users have over owned objects. Users have these permissions only on objects that they own. #### Owned application registrations+ Users can perform the following actions on owned application registrations: | **Action** | **Description** | Users can perform the following actions on owned application registrations: | microsoft.directory/applications/restore | Restore applications in Azure AD. | #### Owned enterprise applications+ Users can perform the following actions on owned enterprise applications. An enterprise application consists of a service principal, one or more application policies, and sometimes an application object in the same tenant as the service principal. | **Action** | **Description** | Users can perform the following actions on owned enterprise applications. An ent | microsoft.directory/signInReports/allProperties/read | Read all properties (including privileged properties) on sign-in reports in Azure AD. 
| #### Owned devices+ Users can perform the following actions on owned devices: | **Action** | **Description** | Users can perform the following actions on owned devices: | microsoft.directory/devices/disable | Disable devices in Azure AD. | #### Owned groups+ Users can perform the following actions on owned groups. > [!NOTE] |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | For more information, see: - [Create a new configuration for Azure AD Connect cloud sync](../cloud-sync/how-to-configure.md) - [Attribute mapping in Azure AD Connect cloud sync](../cloud-sync/how-to-attribute-mapping.md)-- [Azure AD cloud sync insights workbook](/azure/active-directory/cloud-sync/how-to-cloud-sync-workbook)+- [Azure AD cloud sync insights workbook](../cloud-sync/how-to-cloud-sync-workbook.md) For more information, see: Hybrid IT Admins now can sync both Active Directory and Azure AD Directory Extensions using Azure AD Cloud Sync. This new capability adds the ability to dynamically discover the schema for both Active Directory and Azure AD, allowing customers to simply map the needed attributes using Cloud Sync's attribute mapping experience. -For more details on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](/azure/active-directory/cloud-sync/custom-attribute-mapping) +For more details on how to enable this feature, see: [Cloud Sync directory extensions and custom attribute mapping](../cloud-sync/custom-attribute-mapping.md) |
active-directory | How To Connect Emergency Ad Fs Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md | Title: Emergency Rotation of the AD FS certificates | Microsoft Docs + Title: Emergency rotation of the AD FS certificates | Microsoft Docs description: This article explains how to revoke and update AD FS certificates immediately. -# Emergency Rotation of the AD FS certificates -In the event that you need to rotate the AD FS certificates immediately, you can follow the steps outlined below in this section. +# Emergency rotation of the AD FS certificates ++If you need to rotate the Active Directory Federation Services (AD FS) certificates immediately, you can follow the steps in this section. > [!IMPORTANT]-> Conducting the steps below in the AD FS environment will revoke the old certificates immediately. Because this is done immediately, the normal time usually allowed for your federation partners to consume your new certificate is by-passed. It might result in a service outage as trusts update to use the new certificates. This should resolve once all of the federation partners have the new certificates. +> Rotating certificates in the AD FS environment revokes the old certificates immediately, and the time it usually takes for your federation partners to consume your new certificate is bypassed. The action might also result in a service outage as trusts update to use the new certificates. The outage should be resolved after all the federation partners have the new certificates. > [!NOTE]-> Microsoft highly recommends using a Hardware Security Module (HSM) to protect and secure certificates. -> For more information, see [Hardware Security Module](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#hardware-security-module-hsm) under best practices for securing AD FS. +> We highly recommend that you use a Hardware Security Module (HSM) to protect and secure certificates. +> For more information, see the [Hardware Security Module](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#hardware-security-module-hsm) section in the best practices for securing AD FS. ## Determine your Token Signing Certificate thumbprint-In order to revoke the old Token Signing Certificate which AD FS is currently using, you need to determine the thumbprint of the token-signing certificate. To do this, use the following steps below: - 1. Connect to the Microsoft Online Service -`PS C:\>Connect-MsolService` - 2. Document both your on-premises and cloud Token Signing Certificate thumbprint and expiration dates. -`PS C:\>Get-MsolFederationProperty -DomainName <domain>` - 3. Copy down the thumbprint. It will be used later to remove the existing certificates. +To revoke the old Token Signing Certificate that AD FS is currently using, you need to determine the thumbprint of the token-signing certificate. Do the following: ++1. Connect to the Microsoft Online Service by running `PS C:\>Connect-MsolService`. ++1. Document both your on-premises and cloud Token Signing Certificate thumbprint and expiration dates by running `PS C:\>Get-MsolFederationProperty -DomainName <domain>`. +1. Copy down the thumbprint. You'll use it later to remove the existing certificates. -You can also get the thumbprint by using AD FS Management, navigating to Service/Certificates, right-clicking on the certificate, select View certificate and then selecting Details. 
+You can also get the thumbprint by using AD FS Management. Go to **Service** > **Certificates**, right-click the certificate, select **View certificate**, and then select **Details**. ## Determine whether AD FS renews the certificates automatically-By default, AD FS is configured to generate token signing and token decryption certificates automatically, both at the initial configuration time and when the certificates are approaching their expiration date. +By default, AD FS is configured to generate token signing and token decryption certificates automatically. It does so both during the initial configuration and when the certificates are approaching their expiration date. You can run the following Windows PowerShell command: `PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*`. -The AutoCertificateRollover property describes whether AD FS is configured to renew token signing and token decrypting certificates automatically. If AutoCertificateRollover is set to TRUE, follow the instructions outlined below in [Generating new self-signed certificate if AutoCertificateRollover is set to TRUE](#generating-new-self-signed-certificate-if-autocertificaterollover-is-set-to-true). If AutoCertificateRollover is set to FALSE, follow the instructions outlined below in [Generating new certificates manually if AutoCertificateRollover is set to FALSE](#generating-new-certificates-manually-if-autocertificaterollover-is-set-to-false). +The `AutoCertificateRollover` property describes whether AD FS is configured to renew token signing and token decrypting certificates automatically. Do either of the following: +* If `AutoCertificateRollover` is set to `TRUE`, [generate a new self-signed certificate](#if-autocertificaterollover-is-set-to-true-generate-a-new-self-signed-certificate). +* If `AutoCertificateRollover` is set to `FALSE`, [generate new certificates manually](#if-autocertificaterollover-is-set-to-false-generate-new-certificates-manually). -## Generating new self-signed certificate if AutoCertificateRollover is set to TRUE -In this section, you will be creating **two** token-signing certificates. The first will use the **-urgent** flag, which will replace the current primary certificate immediately. The second will be used for the secondary certificate. ++## If AutoCertificateRollover is set to TRUE, generate a new self-signed certificate ++In this section, you create *two* token-signing certificates. The first uses the `-urgent` flag, which replaces the current primary certificate immediately. The second is used for the secondary certificate. >[!IMPORTANT]->The reason we are creating two certificates is because Azure holds on to information regarding the previous certificate. By creating a second one, we are forcing Azure to release information about the old certificate and replace it with information about the second certificate. +> You're creating two certificates because Azure holds on to information about the previous certificate. By creating a second one, you're forcing Azure to release information about the old certificate and replace it with information about the second one. >->If you do not create the second certificate and update Azure with it, it may be possible for the old token-signing certificate to authenticate users. +>If you don't create the second certificate and update Azure with it, it might be possible for the old token-signing certificate to authenticate users. ++To generate the new token-signing certificates, do the following: ++1. 
Ensure that you're logged in to the primary AD FS server. +1. Open Windows PowerShell as an administrator. +1. Make sure that `AutoCertificateRollover` is set to `True` by running: ++ `PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*` ++1. To generate a new token signing certificate, run: ++ `Update-ADFSCertificate -CertificateType token-signing -Urgent` ++1. Verify the update by running: ++ `Get-ADFSCertificate -CertificateType token-signing` -You can use the following steps to generate the new token-signing certificates. +1. Now generate the second token signing certificate by running: - 1. Ensure that you are logged on to the primary AD FS server. - 2. Open Windows PowerShell as an administrator. - 3. Check to make sure that your AutoCertificateRollover is set to True. -`PS C:\>Get-AdfsProperties | FL AutoCert*, Certificate*` - 4. To generate a new token signing certificate: `Update-ADFSCertificate -CertificateType token-signing -Urgent`. - 5. Verify the update by running the following command: `Get-ADFSCertificate -CertificateType token-signing` - 6. Now generate the second token signing certificate: `Update-ADFSCertificate -CertificateType token-signing`. - 7. You can verify the update by running the following command again: `Get-ADFSCertificate -CertificateType token-signing` + `Update-ADFSCertificate -CertificateType token-signing` +1. You can verify the update by running the following command again: + + `Get-ADFSCertificate -CertificateType token-signing` -## Generating new certificates manually if AutoCertificateRollover is set to FALSE -If you are not using the default automatically generated, self-signed token signing and token decryption certificates, you must renew and configure these certificates manually. This involves creating two new token-signing certificates and importing them. Then you promote one to primary, revoke the old certificate and configure the second certificate as the secondary certificate. -First, you must obtain a two new certificates from your certificate authority and import them into the local machine personal certificate store on each federation server. For instructions, see the [Import a Certificate](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754489(v=ws.11)) article. +## If AutoCertificateRollover is set to FALSE, generate new certificates manually ++If you're not using the default automatically generated, self-signed token signing and token decryption certificates, you must renew and configure these certificates manually. Doing so involves creating two new token-signing certificates and importing them. Then, you promote one to primary, revoke the old certificate, and configure the second certificate as the secondary certificate. ++First, you must obtain two new certificates from your certificate authority and import them into the local machine personal certificate store on each federation server. For instructions, see [Import a Certificate](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754489(v=ws.11)). >[!IMPORTANT]->The reason we are creating two certificates is because Azure holds on to information regarding the previous certificate. By creating a second one, we are forcing Azure to release information about the old certificate and replace it with information about the second certificate. +>You're creating two certificates because Azure holds on to information about the previous certificate. 
By creating a second one, you're forcing Azure to release information about the old certificate and replace it with information about the second one. >->If you do not create the second certificate and update Azure with it, it may be possible for the old token-signing certificate to authenticate users. +>If you don't create the second certificate and update Azure with it, it might be possible for the old token-signing certificate to authenticate users. ++### Configure a new certificate as a secondary certificate ++Next, configure one certificate as the secondary AD FS token signing or decryption certificate and then promote it to the primary. ++1. After you've imported the certificate, open the **AD FS Management** console. ++1. Expand **Service**, and then select **Certificates**. +1. On the **Actions** pane, select **Add Token-Signing Certificate**. +1. Select the new certificate from the list of displayed certificates, and then select **OK**. -### To configure a new certificate as a secondary certificate -Then you must configure one certificate as the secondary AD FS token signing or decryption certificate and then promote it to the primary +### Promote the new certificate from secondary to primary -1. Once you have imported the certificate. Open the **AD FS Management** console. -2. Expand **Service** and then select **Certificates**. -3. In the Actions pane, click **Add Token-Signing Certificate**. -4. Select the new certificate from the list of displayed certificates, and then click OK. +Now that you've imported the new certificate and configured it in AD FS, you need to set it as the primary certificate. -### To promote the new certificate from secondary to primary -Now that the new certificate has been imported and configured in AD FS, you need to set as the primary certificate. 1. Open the **AD FS Management** console.-2. Expand **Service** and then select **Certificates**. -3. Click the secondary token signing certificate. -4. In the **Actions** pane, click **Set As Primary**. Click Yes at the confirmation prompt. -5. Once you promoted the new certificate as the primary certificate, you should remove the old certificate because it can still be used. See the [Remove your old certificates](#remove-your-old-certificates) section below. ++1. Expand **Service**, and then select **Certificates**. +1. Select the secondary token signing certificate. +1. On the **Actions** pane, select **Set As Primary**. At the prompt, select **Yes**. +1. After you've promoted the new certificate as the primary certificate, you should remove the old certificate because it can still be used. For more information, see the [Remove your old certificates](#remove-your-old-certificates) section. ### To configure the second certificate as a secondary certificate-Now that you have added the first certificate and made it primary and removed the old one, import the second certificate. Then you must configure the certificate as the secondary AD FS token signing certificate +Now that you've added the first certificate, made it primary, and removed the old one, you can import the second certificate. Configure the certificate as the secondary AD FS token signing certificate by doing the following: ++1. After you've imported the certificate, open the **AD FS Management** console. -1. Once you have imported the certificate. Open the **AD FS Management** console. -2. Expand **Service** and then select **Certificates**. -3. In the Actions pane, click **Add Token-Signing Certificate**. -4. 
Select the new certificate from the list of displayed certificates, and then click OK. +1. Expand **Service**, and then select **Certificates**. +1. On the **Actions** pane, select **Add Token-Signing Certificate**. +1. Select the new certificate from the list of displayed certificates, and then select **OK**. ## Update Azure AD with the new token-signing certificate-Open the Microsoft Azure Active Directory Module for Windows PowerShell. Alternatively, open Windows PowerShell and then run the command `Import-Module msonline` -Connect to Azure AD by running the following command: `Connect-MsolService`, and then, enter your Hybrid Identity Administrator credentials. +1. Open the Microsoft Azure Active Directory Module for Windows PowerShell. Alternatively, open Windows PowerShell, and then run the `Import-Module msonline` command. ++1. Connect to Azure Active Directory (Azure AD) by running the following command: ->[!Note] -> If you are running these commands on a computer that is not the primary federation server, enter the following command first: `Set-MsolADFSContext -Computer <servername>`. Replace \<servername\> with the name of the AD FS server. Then enter the administrator credentials for the AD FS server when prompted. + `Connect-MsolService` + +1. Enter your Hybrid Identity Administrator credentials. -Optionally, verify whether an update is required by checking the current certificate information in Azure AD. To do so, run the following command: `Get-MsolFederationProperty`. Enter the name of the Federated domain when prompted. + > [!Note] + > If you're running these commands on a computer that isn't the primary federation server, enter the following command first: + > + > `Set-MsolADFSContext -Computer <servername>` + > + > Replace \<servername\> with the name of the AD FS server and then, at the prompt, enter the administrator credentials for the AD FS server. -To update the certificate information in Azure AD, run the following command: `Update-MsolFederatedDomain` and then enter the domain name when prompted. +1. Optionally, verify whether an update is required by checking the current certificate information in Azure AD. To do so, run the following command: `Get-MsolFederationProperty`. Enter the name of the Federated domain when prompted. ->[!Note] -> If you see an error when running this command, run the following command: `Update-MsolFederatedDomain -SupportMultipleDomain`, and then enter the domain name when prompted. +1. To update the certificate information in Azure AD, run the following command: `Update-MsolFederatedDomain` and then enter the domain name when prompted. ++ > [!Note] + > If you receive an error when you run this command, run `Update-MsolFederatedDomain -SupportMultipleDomain` and then, at the prompt, enter the domain name. ## Replace SSL certificates-In the event that you need to replace your token-signing certificate because of a compromise, you should also revoke and replace the SSL certificates for AD FS and your WAP servers. -Revoking your SSL certificates must be done at the certificate authority (CA) that issued the certificate. These certificates are often issued by 3rd party providers such as GoDaddy. For an example, see (Revoke a certificate | SSL Certificates - GoDaddy Help US). For more information, see [How Certificate Revocation Works](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619754(v=ws.10)). 
+If you need to replace your token-signing certificate because of a compromise, you should also revoke and replace the Secure Sockets Layer (SSL) certificates for AD FS and your Web Application Proxy (WAP) servers. ++Revoking your SSL certificates must be done at the certificate authority (CA) that issued the certificate. These certificates are often issued by third-party providers, such as GoDaddy. For an example, see [Revoke a certificate | SSL Certificates - GoDaddy Help US](https://www.godaddy.com/help/revoke-a-certificate-4747). For more information, see [How certificate revocation works](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619754(v=ws.10)). -Once the old SSL certificate has been revoked and a new one issued, you can replace the SSL certificates. For more information, see [Replacing the SSL certificate for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap#replacing-the-ssl-certificate-for-ad-fs). +After the old SSL certificate has been revoked and a new one issued, you can replace the SSL certificates. For more information, see [Replace the SSL certificate for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap#replacing-the-ssl-certificate-for-ad-fs). ## Remove your old certificates-Once you have replaced your old certificates, you should remove the old certificate because it can still be used. To do this, follow the steps below: +After you've replaced your old certificates, you should remove the old certificate because it can still be used. To do so: ++1. Ensure that you're logged in to the primary AD FS server. +1. Open Windows PowerShell as an administrator. +1. To remove the old token signing certificate, run: -1. Ensure that you are logged on to the primary AD FS server. -2. Open Windows PowerShell as an administrator. -4. To remove the old token signing certificate: `Remove-ADFSCertificate -CertificateType token-signing -thumbprint <thumbprint>`. + `Remove-ADFSCertificate -CertificateType token-signing -thumbprint <thumbprint>` -## Updating federation partners who can consume Federation Metadata -If you have renewed and configure a new token signing or token decryption certificate, you must make sure that the all your federation partners (resource organization or account organization partners that are represented in your AD FS by relying party trusts and claims provider trusts) have picked up the new certificates. +## Update federation partners who can consume federation metadata +If you've renewed and configured a new token signing or token decryption certificate, you must make sure that all your federation partners have picked up the new certificates. This list includes resource organization or account organization partners that are represented in AD FS by relying party trusts and claims provider trusts. -## Updating federation partners who can NOT consume Federation Metadata -If your federation partners cannot consume your federation metadata, you must manually send them the public key of your new token-signing / token-decrypting certificate. Send your new certificate public key (.cer file or .p7b if you wish to include the entire chain) to all of your resource organization or account organization partners (represented in your AD FS by relying party trusts and claims provider trusts). Have the partners implement changes on their side to trust the new certificates. 
+## Update federation partners who can't consume federation metadata +If your federation partners can't consume your federation metadata, you must manually send them the public key of your new token-signing / token-decrypting certificate. Send your new certificate public key (.cer file or .p7b if you want to include the entire chain) to all your resource organization or account organization partners (represented in your AD FS by relying party trusts and claims provider trusts). Have the partners implement changes on their side to trust the new certificates. -## Revoke refresh tokens via PowerShell -Now we want to revoke refresh tokens for users who may have them and force them to re-logon and get new tokens. This will log users out of their phone, current webmail sessions, along with other items that are using Tokens and Refresh Tokens. Information can be found [here](/powershell/module/azuread/revoke-azureaduserallrefreshtoken?preserve-view=true&view=azureadps-2.0) and you can also reference how to [Revoke user access in Azure Active Directory](../../active-directory/enterprise-users/users-revoke-access.md). +## Revoke the refresh tokens via PowerShell +Now you want to revoke the refresh tokens for users who might have them and force them to log in again and get new tokens. This logs users out of their phones, current webmail sessions, and other places that are using tokens and refresh tokens. For more information, see [Revoke-AzureADUserAllRefreshToken](/powershell/module/azuread/revoke-azureaduserallrefreshtoken?preserve-view=true&view=azureadps-2.0). Also see [Revoke user access in Azure Active Directory](../../active-directory/enterprise-users/users-revoke-access.md). ## Next steps -- [Managing SSL Certificates in AD FS and WAP in Windows Server 2016](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap#replacing-the-ssl-certificate-for-ad-fs)-- [Obtain and Configure Token Signing and Token Decryption Certificates for AD FS](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn781426(v=ws.11)#updating-federation-partners)+- [Manage SSL certificates in AD FS and WAP in Windows Server 2016](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap#replacing-the-ssl-certificate-for-ad-fs) +- [Obtain and configure token signing and token decryption certificates for AD FS](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn781426(v=ws.11)#updating-federation-partners) - [Renew federation certificates for Microsoft 365 and Azure Active Directory](how-to-connect-fed-o365-certs.md) |
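As a quick, hedged illustration of the refresh-token revocation step described above, the following sketch uses the AzureAD PowerShell module; the user principal name is a placeholder, and in practice you would repeat the call for every user who might still hold tokens issued while the compromised certificate was in place.

```azurepowershell-interactive
# Sign in with an account that has sufficient privileges to revoke tokens.
Connect-AzureAD

# Revoke all refresh tokens for a single user (placeholder UPN).
# Run this for each affected user, for example inside a foreach loop.
Revoke-AzureADUserAllRefreshToken -ObjectId "user@contoso.com"
```

Revoked users are signed out of their phones, current webmail sessions, and other clients that rely on refresh tokens, and are forced to sign in again.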
active-directory | How To Connect Fed Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-management.md | Title: Azure AD Connect - AD FS management and customization | Microsoft Docs -description: AD FS management with Azure AD Connect and customization of user AD FS sign-in experience with Azure AD Connect and PowerShell. +description: This article discusses how to manage AD FS with Azure AD Connect and customize the AD FS user sign-in experience with Azure AD Connect and PowerShell. keywords: AD FS, ADFS, AD FS management, AAD Connect, Connect, sign-in, AD FS customization, repair trust, M365, federation, relying party documentationcenter: ''-# Manage and customize Active Directory Federation Services by using Azure AD Connect -This article describes how to manage and customize Active Directory Federation Services (AD FS) by using Azure Active Directory (Azure AD) Connect. It also includes other common AD FS tasks that you might need to do for a complete configuration of an AD FS farm. +# Manage and customize AD FS by using Azure AD Connect -| Topic | What it covers | +This article describes how to manage and customize Active Directory Federation Services (AD FS) by using Azure Active Directory (Azure AD) Connect. ++You'll also learn about other common AD FS tasks that you might need to perform to completely configure an AD FS farm. These tasks are listed in the following table: ++| Task | Description | |: |: | | **Manage AD FS** | |-| [Repair the trust](#repairthetrust) |How to repair the federation trust with Microsoft 365. | -| [Federate with Azure AD using alternate login ID](#alternateid) | Configure federation using alternate login ID | -| [Add an AD FS server](#addadfsserver) |How to expand an AD FS farm with an additional AD FS server. | -| [Add an AD FS Web Application Proxy server](#addwapserver) |How to expand an AD FS farm with an additional Web Application Proxy (WAP) server. | -| [Add a federated domain](#addfeddomain) |How to add a federated domain. | -| [Update the TLS/SSL certificate](how-to-connect-fed-ssl-update.md)| How to update the TLS/SSL certificate for an AD FS farm. | +| [Repair the trust](#repairthetrust) |Learn how to repair the federation trust with Microsoft 365. | +| [Federate with Azure AD by using an alternative sign-in ID](#alternateid) | Learn how to configure federation by using an alternative sign-in ID. | +| [Add an AD FS server](#addadfsserver) |Learn how to expand an AD FS farm with an extra AD FS server. | +| [Add an AD FS Web Application Proxy (WAP) server](#addwapserver) |Learn how to expand an AD FS farm with an additional WAP server. | +| [Add a federated domain](#addfeddomain) |Learn how to add a federated domain. | +| [Update the TLS/SSL certificate](how-to-connect-fed-ssl-update.md)| Learn how to update the TLS/SSL certificate for an AD FS farm. | | **Customize AD FS** | |-| [Add a custom company logo or illustration](#customlogo) |How to customize an AD FS sign-in page with a company logo and illustration. | -| [Add a sign-in description](#addsignindescription) |How to add a sign-in page description. | -| [Modify AD FS claim rules](#modclaims) |How to modify AD FS claims for various federation scenarios. | +| [Add a custom company logo or illustration](#customlogo) |Learn how to customize an AD FS sign-in page with a company logo and illustration. | +| [Add a sign-in description](#addsignindescription) |Learn how to add a sign-in page description. 
| +| [Modify AD FS claim rules](#modclaims) |Learn how to modify AD FS claims for various federation scenarios. | ## Manage AD FS-You can perform various AD FS-related tasks in Azure AD Connect with minimal user intervention by using the Azure AD Connect wizard. After you've finished installing Azure AD Connect by running the wizard, you can run the wizard again to perform additional tasks. -## <a name="repairthetrust"></a>Repair the trust -You can use Azure AD Connect to check the current health of the AD FS and Azure AD trust and take appropriate actions to repair the trust. Follow these steps to repair your Azure AD and AD FS trust. +You can perform various AD FS-related tasks in Azure AD Connect with minimal user intervention by using the Azure AD Connect wizard. After you've finished installing Azure AD Connect by running the wizard, you can run it again to perform other tasks. ++<a name="repairthetrust"></a> ++## Repair the trust ++You can use Azure AD Connect to check the current health of the AD FS and Azure AD trust and then take appropriate actions to repair the trust. To repair your Azure AD and AD FS trust, do the following: -1. Select **Repair AAD and ADFS Trust** from the list of additional tasks. -  +1. Select **Repair AAD and ADFS Trust** from the list of tasks. ++  ++1. On the **Connect to Azure AD** page, provide your Hybrid Identity Administrator credentials for Azure AD, and then select **Next**. -2. On the **Connect to Azure AD** page, provide your Hybrid Identity Administrator credentials for Azure AD, and click **Next**.  -3. On the **Remote access credentials** page, enter the credentials for the domain administrator. +1. On the **Remote access credentials** page, enter the credentials for the domain administrator. -  +  - After you click **Next**, Azure AD Connect checks for certificate health and shows any issues. +1. Select **Next**. -  + Azure AD Connect checks for certificate health and shows any issues. - The **Ready to configure** page shows the list of actions that will be performed to repair the trust. +  -  + The **Ready to configure** page shows the list of actions that will be performed to repair the trust. -4. Click **Install** to repair the trust. +  ++1. Select **Install** to repair the trust. > [!NOTE]-> Azure AD Connect can only repair or act on certificates that are self-signed. Azure AD Connect can't repair third-party certificates. +> Azure AD Connect can repair or act on only certificates that are self-signed. Azure AD Connect can't repair third-party certificates. ++## <a name="alternateid"></a>Federate with Azure AD by using alternateID -## <a name="alternateid"></a>Federate with Azure AD using AlternateID -It is recommended that the on-premises User Principal Name(UPN) and the cloud User Principal Name are kept the same. If the on-premises UPN uses a non-routable domain (ex. Contoso.local) or cannot be changed due to local application dependencies, we recommend setting up alternate login ID. Alternate login ID allows you to configure a sign-in experience where users can sign in with an attribute other than their UPN, such as mail. The choice for User Principal Name in Azure AD Connect defaults to the userPrincipalName attribute in Active Directory. If you choose any other attribute for User Principal Name and are federating using AD FS, then Azure AD Connect will configure AD FS for alternate login ID. 
An example of choosing a different attribute for User Principal Name is shown below: +We recommend that you keep the *on-premises* User Principal Name (UPN) and the *cloud* User Principal Name the same. If the on-premises UPN uses a non-routable domain (for example, Contoso.local) or can't be changed because of local application dependencies, we recommend setting up an alternative sign-in ID. By using an alternative sign-in ID, you can configure a sign-in experience where users can sign in with an attribute other than their UPN, such as an email address. - +The choice of UPN in Azure AD Connect defaults to the userPrincipalName attribute in Active Directory. If you choose any other attribute for the UPN and are federating by using AD FS, Azure AD Connect configures AD FS for an alternative sign-in ID. -Configuring alternate login ID for AD FS consists of two main steps: -1. **Configure the right set of issuance claims**: The issuance claim rules in the Azure AD relying party trust are modified to use the selected UserPrincipalName attribute as the alternate ID of the user. -2. **Enable alternate login ID in the AD FS configuration**: The AD FS configuration is updated so that AD FS can look up users in the appropriate forests using the alternate ID. This configuration is supported for AD FS on Windows Server 2012 R2 (with KB2919355) or later. If the AD FS servers are 2012 R2, Azure AD Connect checks for the presence of the required KB. If the KB is not detected, a warning will be displayed after configuration completes, as shown below: +An example of choosing a different attribute for the UPN is shown in the following image: -  + - To rectify the configuration in case of missing KB, install the required [KB2919355](https://go.microsoft.com/fwlink/?LinkID=396590) and then repair the trust using [Repair AAD and AD FS Trust](#repairthetrust). +Configuring an alternative sign-in ID for AD FS consists of two main steps: ++1. **Configure the right set of issuance claims**: The issuance claim rules in the Azure AD relying party trust are modified to use the selected UserPrincipalName attribute as the alternative ID of the user. ++1. **Enable an alternative sign-in ID in the AD FS configuration**: The AD FS configuration is updated so that AD FS can look up users in the appropriate forests by using the alternative ID. This configuration is supported for AD FS on Windows Server 2012 R2 (with KB2919355) or later. If the AD FS servers are 2012 R2, Azure AD Connect checks for the presence of the required KB. If the KB isn't detected, a warning is displayed after the configuration is completed, as shown in the following image: ++  ++ If there's a missing KB, you can remedy the configuration by installing the required [KB2919355](https://go.microsoft.com/fwlink/?LinkID=396590). You can then follow the instructions in [repair the trust](#repair-the-trust). > [!NOTE]-> For more information on alternateID and steps to manually configure, read [Configuring Alternate Login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id) +> For more information about alternateID and steps to manually configure it, see [Configure an alternative sign-in ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id). ## <a name="addadfsserver"></a>Add an AD FS server > [!NOTE]-> To add an AD FS server, Azure AD Connect requires the PFX certificate. Therefore, you can perform this operation only if you configured the AD FS farm by using Azure AD Connect. 
+> To add an AD FS server, Azure AD Connect requires a PFX certificate. Therefore, you can perform this operation only if you configured the AD FS farm by using Azure AD Connect. -1. Select **Deploy an additional Federation Server**, and click **Next**. +1. Select **Deploy an additional Federation Server**, and then select **Next**. -  +  -2. On the **Connect to Azure AD** page, enter your Hybrid Identity Administratoristrator credentials for Azure AD, and click **Next**. +1. On the **Connect to Azure AD** page, enter your Hybrid Identity Administrator credentials for Azure AD, and then select **Next**.  -3. Provide the domain administrator credentials. +1. Provide the domain administrator credentials. ++  -  +1. Azure AD Connect asks for the password of the PFX file that you provided when you configured your new AD FS farm with Azure AD Connect. Select **Enter Password** to provide the password for the PFX file. -4. Azure AD Connect asks for the password of the PFX file that you provided while configuring your new AD FS farm with Azure AD Connect. Click **Enter Password** to provide the password for the PFX file. +  -  +  -  +1. On the **AD FS Servers** page, enter the server name or IP address to be added to the AD FS farm. -5. On the **AD FS Servers** page, enter the server name or IP address to be added to the AD FS farm. +  -  +1. Select **Next**, and then continue completing the final **Configure** page. -6. Click **Next**, and go through the final **Configure** page. After Azure AD Connect has finished adding the servers to the AD FS farm, you will be given the option to verify the connectivity. + After Azure AD Connect has finished adding the servers to the AD FS farm, you'll be given the option to verify the connectivity. -  +  -  +  ## <a name="addwapserver"></a>Add an AD FS WAP server > [!NOTE]-> To add a WAP server, Azure AD Connect requires the PFX certificate. Therefore, you can only perform this operation if you configured the AD FS farm by using Azure AD Connect. +> To add a Web Application Proxy server, Azure AD Connect requires the PFX certificate. Therefore, you can perform this operation only after you've configured the AD FS farm by using Azure AD Connect. 1. Select **Deploy Web Application Proxy** from the list of available tasks.  -2. Provide the Azure Hybrid Identity Administrator credentials. +1. Provide the Azure Hybrid Identity Administrator credentials.  -3. On the **Specify SSL certificate** page, provide the password for the PFX file that you provided when you configured the AD FS farm with Azure AD Connect. +1. On the **Specify SSL certificate** page, provide the password for the PFX file that you provided when you configured the AD FS farm with Azure AD Connect.   -4. Add the server to be added as a WAP server. Because the WAP server might not be joined to the domain, the wizard asks for administrative credentials to the server being added. +1. Add the server to be added as a WAP server. Because the WAP server might not be joined to the domain, the wizard asks for administrative credentials to the server being added.  -5. On the **Proxy trust credentials** page, provide administrative credentials to configure the proxy trust and access the primary server in the AD FS farm. +1. On the **Proxy trust credentials** page, provide administrative credentials to configure the proxy trust and access the primary server in the AD FS farm.  -6. On the **Ready to configure** page, the wizard shows the list of actions that will be performed. +1. 
On the **Ready to configure** page, the wizard shows the list of actions that will be performed. -  +  -7. Click **Install** to finish the configuration. After the configuration is complete, the wizard gives you the option to verify the connectivity to the servers. Click **Verify** to check connectivity. +1. Select **Install** to finish the configuration. After the configuration is complete, the wizard gives you the option to verify the connectivity to the servers. Select **Verify** to check connectivity.  Configuring alternate login ID for AD FS consists of two main steps: It's easy to add a domain to be federated with Azure AD by using Azure AD Connect. Azure AD Connect adds the domain for federation and modifies the claim rules to correctly reflect the issuer when you have multiple domains federated with Azure AD. -1. To add a federated domain, select the task **Add an additional Azure AD domain**. +1. To add a federated domain, select **Add an additional Azure AD domain**. -  +  -2. On the next page of the wizard, provide the global administrator credentials for Azure AD. +1. On the next page of the wizard, provide the global administrator credentials for Azure AD. -  +  -3. On the **Remote access credentials** page, provide the domain administrator credentials. +1. On the **Remote access credentials** page, provide the domain administrator credentials. -  +  -4. On the next page, the wizard provides a list of Azure AD domains that you can federate your on-premises directory with. Choose the domain from the list. +1. On the next page, the wizard provides a list of Azure AD domains that you can federate your on-premises directory with. Choose the domain from the list. -  +  - After you choose the domain, the wizard provides you with appropriate information about further actions that the wizard will take and the impact of the configuration. In some cases, if you select a domain that isn't yet verified in Azure AD, the wizard provides you with information to help you verify the domain. See [Add your custom domain name to Azure Active Directory](../fundamentals/add-custom-domain.md) for more details. + After you choose the domain, the wizard informs you about further actions that it will take and the impact of the configuration. In some cases, if you select a domain that isn't yet verified in Azure AD, the wizard helps you verify the domain. For more information, see [Add your custom domain name to Azure Active Directory](../fundamentals/add-custom-domain.md). -5. Click **Next**. The **Ready to configure** page shows the list of actions that Azure AD Connect will perform. Click **Install** to finish the configuration. +1. Select **Next**. -  + The **Ready to configure** page lists the actions that Azure AD Connect will perform. ++  ++1. Select **Install** to finish the configuration. > [!NOTE]-> Users from the added federated domain must be synchronized before they will be able to login to Azure AD. +> Users in the added federated domain must be synchronized before they can sign in to Azure AD. -## AD FS customization -The following sections provide details about some of the common tasks that you might have to perform when you customize your AD FS sign-in page. +## Customize AD FS ++The following sections provide details about some of the common tasks that you might have to perform to customize your AD FS sign-in page. 
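If you plan to experiment with these customizations, one low-risk first step is to export a copy of the built-in web theme so that you have a baseline to compare against or fall back to. A minimal sketch, assuming a local folder path of your choosing:

```azurepowershell-interactive
# Export the default AD FS web theme to a local folder (placeholder path).
# You can restore or diff against this copy after making changes.
Export-AdfsWebTheme -Name default -DirectoryPath "C:\ADFSTheme"
```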
## <a name="customlogo"></a>Add a custom company logo or illustration To change the logo of the company that's displayed on the **Sign-in** page, use the following Windows PowerShell cmdlet and syntax. Set-AdfsWebTheme -TargetName default -Logo @{path="c:\Contoso\logo.PNG"} To add a sign-in page description to the **Sign-in page**, use the following Windows PowerShell cmdlet and syntax. ```azurepowershell-interactive-Set-AdfsGlobalWebContent -SignInPageDescriptionText "<p>Sign-in to Contoso requires device registration. Click <A href='http://fs1.contoso.com/deviceregistration/'>here</A> for more information.</p>" +Set-AdfsGlobalWebContent -SignInPageDescriptionText "<p>Sign-in to Contoso requires device registration. Select <A href='http://fs1.contoso.com/deviceregistration/'>here</A> for more information.</p>" ``` ## <a name="modclaims"></a>Modify AD FS claim rules AD FS supports a rich claim language that you can use to create custom claim rul The following sections describe how you can write custom rules for some scenarios that relate to Azure AD and AD FS federation. ### Immutable ID conditional on a value being present in the attribute-Azure AD Connect lets you specify an attribute to be used as a source anchor when objects are synced to Azure AD. If the value in the custom attribute is not empty, you might want to issue an immutable ID claim. +Azure AD Connect lets you specify an attribute to be used as a source anchor when objects are synced to Azure AD. If the value in the custom attribute isn't empty, you might want to issue an immutable ID claim. -For example, you might select **ms-ds-consistencyguid** as the attribute for the source anchor and issue **ImmutableID** as **ms-ds-consistencyguid** in case the attribute has a value against it. If there's no value against the attribute, issue **objectGuid** as the immutable ID. You can construct the set of custom claim rules as described in the following section. +For example, you might select `ms-ds-consistencyguid` as the attribute for the source anchor and issue **ImmutableID** as `ms-ds-consistencyguid` in case the attribute has a value against it. If there's no value against the attribute, issue `objectGuid` as the immutable ID. You can construct the set of custom claim rules as described in the following section. **Rule 1: Query attributes** c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccou => add(store = "Active Directory", types = ("http://contoso.com/ws/2016/02/identity/claims/objectguid", "http://contoso.com/ws/2016/02/identity/claims/msdsconsistencyguid"), query = "; objectGuid,ms-ds-consistencyguid;{0}", param = c.Value); ``` -In this rule, you're querying the values of **ms-ds-consistencyguid** and **objectGuid** for the user from Active Directory. Change the store name to an appropriate store name in your AD FS deployment. Also change the claims type to a proper claims type for your federation, as defined for **objectGuid** and **ms-ds-consistencyguid**. +In this rule, you're querying the values of `ms-ds-consistencyguid` and `objectGuid` for the user from Active Directory. Change the store name to an appropriate store name in your AD FS deployment. Also change the claims type to a proper claims type for your federation, as defined for `objectGuid` and `ms-ds-consistencyguid`. -Also, by using **add** and not **issue**, you avoid adding an outgoing issue for the entity, and can use the values as intermediate values. 
You will issue the claim in a later rule after you establish which value to use as the immutable ID. +Also, by using `add` and not `issue`, you avoid adding an outgoing issue for the entity, and can use the values as intermediate values. you'll issue the claim in a later rule after you establish which value to use as the immutable ID. -**Rule 2: Check if ms-ds-consistencyguid exists for the user** +**Rule 2: Check to see whether ms-ds-consistencyguid exists for the user** ```claim-rule-language NOT EXISTS([Type == "http://contoso.com/ws/2016/02/identity/claims/msdsconsistencyguid"]) => add(Type = "urn:anandmsft:tmp/idflag", Value = "useguid"); ``` -This rule defines a temporary flag called **idflag** that is set to **useguid** if there's no **ms-ds-consistencyguid** populated for the user. The logic behind this is the fact that AD FS doesn't allow empty claims. So when you add claims `http://contoso.com/ws/2016/02/identity/claims/objectguid` and `http://contoso.com/ws/2016/02/identity/claims/msdsconsistencyguid` in Rule 1, you end up with an **msdsconsistencyguid** claim only if the value is populated for the user. If it isn't populated, AD FS sees that it will have an empty value and drops it immediately. All objects will have **objectGuid**, so that claim will always be there after Rule 1 is executed. +This rule defines a temporary flag called `idflag` that's set to `useguid` if there's no `ms-ds-consistencyguid` populated for the user. The logic behind this is that AD FS doesn't allow empty claims. When you add claims `http://contoso.com/ws/2016/02/identity/claims/objectguid` and `http://contoso.com/ws/2016/02/identity/claims/msdsconsistencyguid` in Rule 1, you end up with an *msdsconsistencyguid* claim only if the value is populated for the user. If it isn't populated, AD FS sees that it will have an empty value and drops it immediately. All objects will have `objectGuid`, so that claim will always be there after Rule 1 is executed. **Rule 3: Issue ms-ds-consistencyguid as immutable ID if it's present** c:[Type == "http://contoso.com/ws/2016/02/identity/claims/msdsconsistencyguid"] => issue(Type = "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID", Value = c.Value); ``` -This is an implicit **Exist** check. If the value for the claim exists, then issue that as the immutable ID. The previous example uses the **nameidentifier** claim. You'll have to change this to the appropriate claim type for the immutable ID in your environment. +This is an implicit `Exist` check. If the value for the claim exists, issue it as the immutable ID. The previous example uses the `nameidentifier` claim. You'll have to change this to the appropriate claim type for the immutable ID in your environment. -**Rule 4: Issue objectGuid as immutable ID if ms-ds-consistencyGuid is not present** +**Rule 4: Issue objectGuid as an immutable ID if ms-ds-consistencyGuid isn't present** ```claim-rule-language c1:[Type == "urn:anandmsft:tmp/idflag", Value =~ "useguid"] c1:[Type == "urn:anandmsft:tmp/idflag", Value =~ "useguid"] => issue(Type = "http://schemas.microsoft.com/LiveID/Federation/2008/05/ImmutableID", Value = c2.Value); ``` -In this rule, you're simply checking the temporary flag **idflag**. You decide whether to issue the claim based on its value. +With this rule, you're simply checking the temporary flag `idflag`. You decide whether to issue the claim based on its value. > [!NOTE] > The sequence of these rules is important. 
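These rules take effect only after you add them to the issuance transform rules of the Azure AD relying party trust. The following sketch shows one possible way to append them with PowerShell; the trust name (Azure AD Connect typically creates it as **Microsoft Office 365 Identity Platform**) and the rule file path are assumptions, so adjust them for your environment and validate the result in a test farm first.

```azurepowershell-interactive
# Read the current issuance transform rules for the Azure AD relying party trust.
$rpt = Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform"

# Load the custom rules (Rules 1-4 above, saved in claim-rule language in a text file).
$customRules = Get-Content -Raw "C:\ClaimRules\ImmutableIdRules.txt"

# Append the custom rules to the existing rule set and write the combined set back.
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" `
    -IssuanceTransformRules ($rpt.IssuanceTransformRules + $customRules)
```

If the trust already has a rule that issues **ImmutableID**, review it first so that the claim isn't issued twice.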
### SSO with a subdomain UPN -You can add more than one domain to be federated by using Azure AD Connect, as described in [Add a new federated domain](how-to-connect-fed-management.md#addfeddomain). Azure AD Connect version 1.1.553.0 and latest creates the correct claim rule for issuerID automatically. If you cannot use Azure AD Connect version 1.1.553.0 or latest, it is recommended that [Azure AD RPT Claim Rules](https://aka.ms/aadrptclaimrules) tool is used to generate and set correct claim rules for the Azure AD relying party trust. +You can add more than one domain to be federated by using Azure AD Connect, as described in [Add a new federated domain](#addadfsserver). Azure AD Connect versions 1.1.553.0 and later create the correct claim rule for `issuerID` automatically. If you can't use Azure AD Connect version 1.1.553.0 or later, we recommend that you use the [Azure AD RPT Claim Rules](https://aka.ms/aadrptclaimrules) tool to generate and set correct claim rules for the Azure AD relying party trust. ## Next steps+ Learn more about [user sign-in options](plan-connect-user-signin.md). |
active-directory | How To Connect Health Adfs Risky Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip.md | Title: Azure AD Connect Health with AD FS risky IP report | Microsoft Docs -description: Describes the Azure AD Connect Health AD FS risky IP report. + Title: Azure AD Connect Health with the AD FS Risky IP report | Microsoft Docs +description: This article describes the Azure AD Connect Health AD FS Risky IP report. documentationcenter: '' -# Risky IP report (public preview) -AD FS customers may expose password authentication endpoints to the internet to provide authentication services for end users to access SaaS applications such as Microsoft 365. It is possible for a bad actor to attempt logins against your AD FS system to guess an end user’s password and get access to application resources. AD FS provides the extranet account lockout functionality to prevent these types of attacks since AD FS in Windows Server 2012 R2. If you are on a lower version, we strongly recommend that you upgrade your AD FS system to Windows Server 2016. <br /> +# The Risky IP report (preview) -Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the “Risky IP report” that detects this condition and notifies administrators. The following are the key benefits for this report: -- Detection of IP addresses that exceed a threshold of failed password-based logins-- Supports failed logins due to bad password or due to extranet lockout state-- Email notification to alert administrators with customizable email settings-- Customizable threshold settings that match with the security policy of an organization-- Downloadable reports for offline analysis and integration with other systems via automation+Active Directory Federation Services (AD FS) customers may expose password authentication endpoints to the internet to provide authentication services for end users to access SaaS applications such as Microsoft 365. ++It's possible for a bad actor to attempt logins against your AD FS system to guess an end user’s password and get access to application resources. As of Windows Server 2012 R2, AD FS provides the extranet account lockout functionality to prevent these types of attacks. If you're on an earlier version, we strongly recommend that you upgrade your AD FS system to Windows Server 2016. ++Additionally, it's possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user might be under the threshold for account lockout protection in AD FS. ++Azure Active Directory (Azure AD) Connect Health now provides the *Risky IP report*, which detects this condition and notifies administrators. Here are the key benefits of using this report: ++- Detects IP addresses that exceed a threshold of failed password-based logins +- Supports failed logins resulting from bad password or extranet lockout state +- Provides email notifications to alert administrators, with customizable email settings +- Provides customizable threshold settings that match the security policy of an organization +- Provides downloadable reports for offline analysis and integration with other systems via automation > [!NOTE]-> To use this report, you must ensure that AD FS auditing is enabled. 
For more information, see [Enable Auditing for AD FS](how-to-connect-health-agent-install.md#enable-auditing-for-ad-fs). <br /> -> To access preview, Global Administrator or [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) permission is required.   +> To use this report, you must ensure that AD FS auditing is enabled. For more information, see [Enable auditing for AD FS](how-to-connect-health-agent-install.md#enable-auditing-for-ad-fs). >+> To access this preview release, you need Global Administrator or [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) permissions.   ++## What's in the report? -## What is in the report? -The failed sign in activity client IP addresses are aggregated through Web Application Proxy servers. Each item in the Risky IP report shows aggregated information about failed AD FS sign-in activities that have exceeded the designated threshold. It provides the following information: - +The failed sign-in activity client IP addresses are aggregated through Web Application Proxy servers. Each item in the Risky IP report shows aggregated information about failed AD FS sign-in activities that have exceeded the designated threshold. -| Report Item | Description | +The report provides the following information: ++ ++| Report item | Description | | - | -- |-| Time Stamp | Shows the time stamp based on Azure portal local time when the detection time window starts.<br /> All daily events are generated at mid-night UTC time. <br />Hourly events have the timestamp rounded to the beginning of the hour. You can find first activity start time from “firstAuditTimestamp” in the exported file. | -| Trigger Type | Shows the type of detection time window. The aggregation trigger types are per hour or per day. Helpful in determing between a high frequency brute force attack versus a slow attack where the number of attempts is distributed throughout the day. | -| IP Address | The single risky IP address that had either bad password or extranet lockout sign-in activities. It can be either IPv4 or an IPv6 address. | -| Bad Password Error Count | The count of Bad Password error occurred from the IP address during the detection time window. The Bad Password errors can happen multiple times to certain users. Notice this does not include failed attempts due to expired passwords. | -| Extranet Lock Out Error Count | The count of Extranet Lockout error occurred from the IP address during the detection time window. The Extranet Lockout errors can happen multiple times to certain users. This will only be seen if Extranet Lockout is configured in AD FS (versions 2012R2 or higher). <b>Note</b> We strongly recommend enabling this feature if you allow extranet logins using passwords. | -| Unique Users Attempted | The count of unique user accounts attempted from the IP address during the detection time window. Differentiates between a single user attack pattern versus multi-user attack pattern. | +| Time Stamp | The time stamp that's based on Azure portal local time when the detection time window starts.<br> All daily events are generated at midnight UTC time. <br>Hourly events have the time stamp rounded to the beginning of the hour. You can find the first activity start time from “firstAuditTimestamp” in the exported file. | +| Trigger Type | The type of detection time window. The aggregation trigger types are per hour or per day. 
They're helpful in differentiating between a high-frequency brute force attack and a slow attack, where the number of attempts is distributed throughout the day. | +| IP Address | The single risky IP address that had either bad password or extranet lockout sign-in activities. It can be either an IPv4 or an IPv6 address. | +| Bad Password Error Count | The count of bad password errors that occur from the IP address during the detection time window. Bad password errors can happen multiple times to certain users. **Note**: This count doesn't include failed attempts resulting from expired passwords. | +| Extranet Lockout Error Count | The count of extranet lockout errors that occur from the IP address during the detection time window. The extranet lockout errors can happen multiple times to certain users. This count is displayed only if Extranet Lockout is configured in AD FS (versions 2012R2 and later). **Note**: We strongly recommend enabling this feature if you allow extranet logins that use passwords. | +| Unique Users Attempted | The count of unique user accounts that are attempted from the IP address during the detection time window. Differentiates between a single user attack pattern and a multi-user attack pattern. | -For example, the below report item indicates from the 6pm to 7pm hour window on 02/28/2018, IP address <i>104.2XX.2XX.9</i> had no bad password errors and 284 extranet lockout errors. 14 unique users were impacted within the criteria. The activity event exceeded the designated report hourly threshold. +For example, the following report item indicates that during the 6 PM to 7 PM window on February 28, 2018, the IP address *104.2XX.2XX.9* had no bad password errors and 284 extranet lockout errors. Fourteen unique users were affected within the criteria. The activity event exceeded the designated report's hourly threshold.  > [!NOTE]-> - Only activities exceeding designated threshold will be showing in the report list. -> - This report can be trace back at most 30 days. -> - This alert report does not show Exchange IP addresses or private IP addresses. They are still included in the export list. -> +> - Only activities that exceed the designated threshold are displayed in the report list. +> - This report tracks the past 30 days at most. +> - This alert report doesn't show Exchange IP addresses or private IP addresses. They are still included in the export list. - + ## Load balancer IP addresses in the list-Load balancer aggregate failed sign-in activities and hit the alert threshold. If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Configure your load balancer correctly to pass forward client IP address. -## Download risky IP report +Your load balancer aggregate might have failed, causing it to hit the alert threshold. If you're seeing load balancer IP addresses, it's highly likely that your external load balancer isn't sending the client IP address when it passes the request to the Web Application Proxy server. Configure your load balancer correctly to pass forward the client IP address. ++## Download the Risky IP report + Using the **Download** functionality, the whole risky IP address list in the past 30 days can be exported from the Connect Health Portal The export result will include all the failed AD FS sign-in activities in each detection time window, so you can customize the filtering after the export. 
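Because the export is a flat file, you can slice it however you like for offline analysis. A minimal sketch in PowerShell, assuming the export is saved as a CSV and using illustrative column names (check the header row of your own export, and see the field descriptions that follow):

```azurepowershell-interactive
# Load the exported report (placeholder file name).
$report = Import-Csv "C:\Reports\RiskyIPReport.csv"

# Example filter: keep rows where more than 10 unique users were attempted from one IP address,
# then list the noisiest IP addresses first. The column names here are assumptions.
$report |
    Where-Object { [int]$_.UniqueUsersAttempted -gt 10 } |
    Sort-Object { [int]$_.UniqueUsersAttempted } -Descending |
    Format-Table IPAddress, UniqueUsersAttempted, BadPasswordErrorCount
```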
Besides the highlighted aggregations in the portal, the export result also shows more details about failed sign-in activities per IP address: | Report Item | Description | | - | -- | -| firstAuditTimestamp | Shows the first timestamp when the failed activities started during the detection time window. | -| lastAuditTimestamp | Shows the last timestamp when the failed activities ended during the detection time window. | +| firstAuditTimestamp | The first time stamp when the failed activities started during the detection time window. | +| lastAuditTimestamp | The last time stamp when the failed activities ended during the detection time window. | | attemptCountThresholdIsExceeded | The flag if the current activities are exceeding the alerting threshold. | -| isWhitelistedIpAddress | The flag if the IP address is filtered from alerting and reporting. Private IP addresses (<i>10.x.x.x, 172.x.x.x & 192.168.x.x</i>) and Exchange IP addresses are filtered and marked as True. If you are seeing private IP address ranges, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. | +| isWhitelistedIpAddress | The flag if the IP address is filtered from alerting and reporting. Private IP addresses (*10.x.x.x, 172.x.x.x* and *192.168.x.x*) and Exchange IP addresses are filtered and marked as *True*. If you're seeing private IP address ranges, it's highly likely that your external load balancer isn't sending the client IP address when it passes the request to the Web Application Proxy server. | ## Configure notification settings-Admin contacts of the report can be updated through the **Notification Settings**. By default, the risky IP alert email notification is in off state. You can enable the notification by toggle the button under “Get email notifications for IP addresses exceeding failed activity threshold report” -Like generic alert notification settings in Connect Health, it allows you to customize designated notification recipient list about risky IP report from here. You can also notify all Hybrid Identity Administrators while making the change. ++You can update the report's administrator contacts through the **Notification Settings**. By default, the risky IP alert email notification is in an *off* state. You can enable the notification by toggling the button under **Get email notifications for IP addresses exceeding failed activity threshold report**. ++Like generic alert notification settings in Connect Health, it allows you to customize the designated notification recipient list about the Risky IP report from here. You can also notify all hybrid identity administrators while you're making the change. ## Configure threshold settings-Alerting threshold can be updated through Threshold Settings. To start with, system has threshold set by default. The default values are given below. There are four categories in the risk IP report threshold settings: - +You can update the alerting threshold in **Threshold Settings**. The system threshold is set with default values, which are shown in the following screenshot and described in the table. ++The risk IP report threshold settings are separated into four categories. -| Threshold Item | Description | + ++| Threshold setting | Description | | | |-| (Bad U/P + Extranet Lockout) / Day | Threshold setting to report the activity and trigger alert notification when the count of Bad Password plus the count of Extranet Lockout exceeds it per **day**. 
Default value is 100.| -| (Bad U/P + Extranet Lockout) / Hour | Threshold setting to report the activity and trigger alert notification when the count of Bad Password plus the count of Extranet Lockout exceeds it per **hour**. Default Value is 50.| -| Extranet Lockout / Day | Threshold setting to report the activity and trigger alert notification when the count of Extranet Lockout exceeds it per **day**. Default value is 50.| -| Extranet Lockout / Hour| Threshold setting to report the activity and trigger alert notification when the count of Extranet Lockout exceeds it per **hour**. Default value is 25| +| (Bad U/P + Extranet Lockout) / Day | Reports the activity and triggers an alert notification when the count of Bad Password plus the count of Extranet Lockout exceeds the threshold, per *day*. The default value is 100.| +| (Bad U/P + Extranet Lockout) / Hour | Reports the activity and triggers an alert notification when the count of Bad Password plus the count of Extranet Lockout exceeds the threshold, per *hour*. The default value is 50.| +| Extranet Lockout / Day | Reports the activity and triggers an alert notification when the count of Extranet Lockout exceeds the threshold, per *day*. The default value is 50.| +| Extranet Lockout / Hour | Reports the activity and triggers an alert notification when the count of Extranet Lockout exceeds the threshold, per *hour*. The default value is 25.| > [!NOTE]-> - The change of report threshold will be applied after an hour of the setting change. +> - The change of the report threshold will be applied an hour after the setting change. > - Existing reported items will not be affected by the threshold change. -> - We recommend that you analyze the number of events seen within your environment and adjust the threshold appropriately. +> - We recommend that you analyze the number of events reported within your environment and adjust the threshold appropriately. > > ## FAQ-**Why am I seeing private IP address ranges in the report?** <br /> -Private IP addresses (<i>10.x.x.x, 172.x.x.x & 192.168.x.x</i>) and Exchange IP addresses are filtered and marked as True in the IP approved list. If you are seeing private IP address ranges, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. -**Why am I seeing load balancer IP addresses in the report?** <br /> -If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Configure your load balancer correctly to pass forward client IP address. +**Why am I seeing private IP address ranges in the report?** ++Private IP addresses (*10.x.x.x, 172.x.x.x* and *192.168.x.x*) and Exchange IP addresses are filtered and marked as *True* in the IP approved list. If you're seeing private IP address ranges, it's highly likely that your external load balancer isn't sending the client IP address when it passes the request to the Web Application Proxy server. ++**Why am I seeing load balancer IP addresses in the report?** ++If you're seeing load balancer IP addresses, it's highly likely that your external load balancer isn't sending the client IP address when it passes the request to the Web Application Proxy server. Configure your load balancer correctly to pass forward the client IP address. 
++**How can I block the IP address?** ++You should add the identified malicious IP address to the firewall or block it in Exchange. -**What do I do to block the IP address?** <br /> -You should add identified malicious IP address to the firewall or block in Exchange. <br /> +**Why can't I see any items in this report?** -**Why am I not seeing any items in this report?** <br /> -- Failed sign-in activities are not exceeding the threshold settings.-- Ensure no “Health service is not up to date” alert active in your AD FS server list. Read more about [how to troubleshoot this alert](how-to-connect-health-data-freshness.md).-- Audits is not enabled in AD FS farms.+- Failed sign-in activities aren't exceeding the threshold settings. +- Ensure that no “Health service isn't up to date” alert is active in your AD FS server list. Read more about [how to troubleshoot this alert](how-to-connect-health-data-freshness.md). +- Audits aren't enabled in AD FS farms. -**Why am I seeing no access to the report?** <br /> -Global Administrator or [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) permission is required. Contact your Global Administrator to get access. +**Why can't I access the report?** +You need to have Global Administrator or [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) permissions. Contact your Global Administrator for access. ## Next steps * [Azure AD Connect Health](./whatis-azure-ad-connect.md)-* [Azure AD Connect Health Agent Installation](how-to-connect-health-agent-install.md) +* [Azure AD Connect Health agent installation](how-to-connect-health-agent-install.md) |
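Following up on the FAQ answer about blocking an identified IP address, the sketch below shows one way to do it with a Windows firewall rule on the servers that receive the traffic; the IP address is a documentation placeholder, and many organizations will prefer to block at the network edge or in Exchange instead.

```azurepowershell-interactive
# Block inbound traffic from a single risky IP address (placeholder value).
New-NetFirewallRule -DisplayName "Block risky IP 203.0.113.45" `
    -Direction Inbound -RemoteAddress "203.0.113.45" -Action Block
```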
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | Does cross-tenant synchronization enhance any cross-tenant Microsoft 365 app acc - Cross-tenant synchronization utilizes a feature that improves the user experience by suppressing the first-time B2B consent prompt and redemption process in each tenant. - Synchronized users will have the same cross-tenant Microsoft 365 experiences available to any other B2B collaboration user. +Can cross-tenant synchronization enable people search scenarios where synchronized users appear in the global address list of the target tenant? ++- Yes, but you must set the value for the **showInAddressList** attribute of synchronized users to **True**, which is not set by default. If you want to create a unified address list, you'll need to set up a [mesh peer-to-peer topology](./cross-tenant-synchronization-topology.md#mesh-peer-to-peer). For more information, see [Step 9: Review attribute mappings](./cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings). +- Cross-tenant synchronization creates B2B collaboration users and doesn't create contacts. + #### Teams Does cross-tenant synchronization enhance any current Teams experiences? |
active-directory | Overview Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-recommendations.md | To get started, follow these instructions to work with recommendations using Mic GET https://graph.microsoft.com/beta/directory/recommendations?$filter=recommendationType eq 'adfsAppsMigration'&$expand=impactedResources ``` -For more information, see the [Microsoft Graph documentation for recommendations](/graph/api/resources/recommendation). +For more information, see the [Microsoft Graph documentation for recommendations](/graph/api/resources/recommendations-api-overview). ## Next steps |
active-directory | Agiloft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/agiloft-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. 5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: - In the **Sign-on URL** text box, type the URL: - `https://www.agiloft.com` + In the **Sign-on URL** text box, type a URL using the following pattern: + `https://<SUBDOMAIN>.agiloft.com:443/gui2/samlssologin.jsp?project=<KB_NAME>` > [!NOTE]- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Agiloft Contract Management Suite Client support team](https://www.agiloft.com/support-login.htm) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Agiloft Contract Management Suite Client support team](https://www.agiloft.com/support-login.htm) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer. |
active-directory | Oracle Idcs For Ebs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-idcs-for-ebs-tutorial.md | + + Title: Azure Active Directory SSO integration with Oracle IDCS for E-Business Suite +description: Learn how to configure single sign-on between Azure Active Directory and Oracle IDCS for E-Business Suite. ++++++++ Last updated : 02/07/2023+++++# Azure Active Directory SSO integration with Oracle IDCS for E-Business Suite ++In this article, you'll learn how to integrate Oracle IDCS for E-Business Suite with Azure Active Directory (Azure AD). When you integrate Oracle IDCS for E-Business Suite with Azure AD, you can: ++* Control in Azure AD who has access to Oracle IDCS for E-Business Suite. +* Enable your users to be automatically signed-in to Oracle IDCS for E-Business Suite with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Oracle IDCS for E-Business Suite in a test environment. Oracle IDCS for E-Business Suite supports only **SP** initiated single sign-on. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with Oracle IDCS for E-Business Suite, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Oracle IDCS for E-Business Suite single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Oracle IDCS for E-Business Suite application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Oracle IDCS for E-Business Suite from the Azure AD gallery ++Add Oracle IDCS for E-Business Suite from the Azure AD application gallery to configure single sign-on with Oracle IDCS for E-Business Suite. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Oracle IDCS for E-Business Suite** application integration page, find the **Manage** section and select **single sign-on**. +1. 
On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a URL using the following pattern: ` https://<SUBDOMAIN>.oraclecloud.com/` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: `https://<SUBDOMAIN>.oraclecloud.com/v1/saml/<UNIQUEID>` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + ` https://<SUBDOMAIN>.oraclecloud.com/` + + >[!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Oracle IDCS for E-Business Suite support team](https://www.oracle.com/support/advanced-customer-services/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. Your Oracle IDCS for E-Business Suite application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example for this. The default value of **Unique User Identifier** is **user.userprincipalname** but Oracle IDCS for E-Business Suite expects this to be mapped with the user's email address. For that you can use **user.mail** attribute from the list or use the appropriate attribute value based on your organization configuration. ++  ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++## Configure Oracle IDCS for E-Business Suite SSO ++To configure single sign-on on Oracle IDCS for E-Business Suite side, you need to send the downloaded Federation Metadata XML file from Azure portal to [Oracle IDCS for E-Business Suite support team](https://www.oracle.com/support/advanced-customer-services/). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Oracle IDCS for E-Business Suite test user ++In this section, you create a user called Britta Simon at Oracle IDCS for E-Business Suite. Work with [Oracle IDCS for E-Business Suite support team](https://www.oracle.com/support/advanced-customer-services/) to add the users in the Oracle IDCS for E-Business Suite platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to Oracle IDCS for E-Business Suite Sign-on URL where you can initiate the login flow. ++* Go to Oracle IDCS for E-Business Suite Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you select the Oracle IDCS for E-Business Suite tile in the My Apps, this will redirect to Oracle IDCS for E-Business Suite Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). 
++## Next steps ++Once you configure Oracle IDCS for E-Business Suite, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
aks | Azure Ad Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md | Title: Use Azure AD and Kubernetes RBAC for clusters description: Learn how to use Azure Active Directory group membership to restrict access to cluster resources using Kubernetes role-based access control (Kubernetes RBAC) in Azure Kubernetes Service (AKS)- Previously updated : 01/10/2023 Last updated : 02/13/2023 -# Control access to cluster resources using Kubernetes role-based access control and Azure Active Directory identities in Azure Kubernetes Service +# Use Kubernetes role-based access control with Azure Active Directory in Azure Kubernetes Service Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (Azure AD) for user authentication. In this configuration, you sign in to an AKS cluster using an Azure AD authentication token. Once authenticated, you can use the built-in Kubernetes role-based access control (Kubernetes RBAC) to manage access to namespaces and cluster resources based on a user's identity or group membership. This article shows you how to: ## Before you begin -* This article assumes that you have an existing AKS cluster enabled with Azure AD integration. If you need an AKS cluster, see [Integrate Azure AD with AKS][azure-ad-aks-cli]. -* Kubernetes RBAC is enabled by default during AKS cluster creation. If Kubernetes RBAC wasn't enabled when you originally deployed your cluster, you'll need to delete and recreate your cluster. +* You have an existing AKS cluster with Azure AD integration enabled. If you need an AKS cluster with this configuration, see [Integrate Azure AD with AKS][azure-ad-aks-cli]. +* Kubernetes RBAC is enabled by default during AKS cluster creation. To upgrade your cluster with Azure AD integration and Kubernetes RBAC, [Enable Azure AD integration on your existing AKS cluster][enable-azure-ad-integration-existing-cluster]. * Make sure that Azure CLI version 2.0.61 or later is installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. * If using Terraform, install [Terraform][terraform-on-azure] version 2.99.0 or later. -Use the Azure portal or Azure CLI to verify if Kubernetes RBAC is enabled. +Use the Azure portal or Azure CLI to verify Azure AD integration with Kubernetes RBAC is enabled. #### [Azure portal](#tab/portal) -Verify Kubernetes RBAC is enabled using the Azure portal: +To verify using the Azure portal: * From your browser, sign in to the [Azure portal](https://portal.azure.com).-* Navigate to Kubernetes services, and from the left-hand pane select **Cluster configuration**. -* Under the **Authentication and Authorization** section, check to see if the **Local accounts with Kubernetes RBAC** or the **Azure AD authentication with Kubernetes RBAC** option is shown. +* Navigate to **Kubernetes services**, and from the left-hand pane select **Cluster configuration**. +* Under the **Authentication and Authorization** section, verify the **Azure AD authentication with Kubernetes RBAC** option is selected. #### [Azure CLI](#tab/azure-cli) -Verify Kubernetes RBAC is enabled using Azure CLI, with the `az aks show` command: +You can verify using the Azure CLI `az aks show` command. Replace the value *myResourceGroup* with the resource group name hosting the AKS cluster resource, and replace *myAKSCluster* with the actual name of your AKS cluster. 
```azurecli az aks show --resource-group myResourceGroup --name myAKSCluster ``` -If it's enabled, the output will show the value for `enableRbac` is `true`. +If it's enabled, the output shows the value for `enableAzureRbac` is `false`. az ad group delete --group opssre [rbac-authorization]: concepts-identity.md#kubernetes-rbac [operator-best-practices-identity]: operator-best-practices-identity.md [terraform-on-azure]: /azure/developer/terraform/overview+[enable-azure-ad-integration-existing-cluster]: managed-aad.md#enable-aks-managed-azure-ad-integration-on-your-existing-cluster |
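For the `az aks show` verification above, a narrower query is often easier to read than the full JSON. This is a minimal sketch that assumes the same placeholder resource group and cluster names, and that the cluster's JSON output exposes the `enableRbac` and `aadProfile` properties:

```azurecli
az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query "{kubernetesRbac:enableRbac, managedAad:aadProfile.managed, azureRbac:aadProfile.enableAzureRbac}" \
    --output table
```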
aks | Csi Migrate In Tree Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md | For more about storage best practices, see [Best practices for storage and backu <!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli [aks-rbac-cluster-admin-role]: manage-azure-rbac.md#create-role-assignments-for-users-to-access-cluster-[azure-resource-locks]: /azure/azure-resource-manager/management/lock-resources +[azure-resource-locks]: ../azure-resource-manager/management/lock-resources.md [csi-driver-overview]: csi-storage-drivers.md [aks-storage-backups-best-practices]: operator-best-practices-storage.md |
aks | Egress Outboundtype | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md | For more information, see [using a standard load balancer in AKS](load-balancer- ### Outbound type of `managedNatGateway` or `userAssignedNatGateway` -If `managedNatGateway` or `userAssignedNatGateway` are selected for `outboundType`, AKS relies on [Azure Networking NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway) for cluster egress. +If `managedNatGateway` or `userAssignedNatGateway` are selected for `outboundType`, AKS relies on [Azure Networking NAT gateway](../virtual-network/nat-gateway/manage-nat-gateway.md) for cluster egress. - `managedNatGateway` is used when using managed virtual networks, and tells AKS to provision a NAT gateway and attach it to the cluster subnet. - `userAssignedNatGateway` is used when using bring-your-own virtual networking, and requires that a NAT gateway has been provisioned before cluster creation. az aks update -g <resourceGroup> -n <clusterName> --outbound-type <loadBalancer| - [Configure standard load balancing in an AKS cluster](load-balancer-standard.md) - [Configure NAT gateway in an AKS cluster](nat-gateway.md) - [Configure user-defined routing in an AKS cluster](egress-udr.md)-- [NAT gateway documentation](/azure/aks/nat-gateway)+- [NAT gateway documentation](./nat-gateway.md) - [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md). - [Manage route tables](../virtual-network/manage-route-table.md). <!-- LINKS - internal --> [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials-[byo-route-table]: configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet +[byo-route-table]: configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet |
aks | Internal Lb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md | Last updated 03/04/2019 # Use an internal load balancer with Azure Kubernetes Service (AKS) You can create and use an internal load balancer to restrict access to your applications in Azure Kubernetes Service (AKS).-An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster. This article shows you how to create and use an internal load balancer with AKS. +An internal load balancer does not have a public IP and makes a Kubernetes service accessible only to applications that can reach the private IP. These applications can be within the same VNET or in another VNET through VNET peering. This article shows you how to create and use an internal load balancer with AKS. > [!NOTE] > Azure Load Balancer is available in two SKUs: *Basic* and *Standard*. The *Standard* SKU is used by default when you create an AKS cluster. When you create a *LoadBalancer* service type, you'll get the same load balancer type as when you provisioned the cluster. For more information, see [Azure Load Balancer SKU comparison][azure-lb-comparison]. |
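As a companion to the internal load balancer description above, this is a minimal sketch of a Kubernetes Service manifest that asks AKS for a private frontend; the service name, port, and selector are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    # Request a private (internal) frontend IP instead of a public one.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
```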
aks | Use Node Public Ips | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-public-ips.md | az vmss list-instance-public-ips -g MC_MyResourceGroup2_MyManagedCluster_eastus ## Use public IP tags on node public IPs (PREVIEW) -Public IP tags can be utilized on node public IPs to utilize the [Azure Routing Preference](/azure/virtual-network/ip-services/routing-preference-overview.md) feature. +Public IP tags can be utilized on node public IPs to utilize the [Azure Routing Preference](../virtual-network/ip-services/routing-preference-overview.md) feature. [!INCLUDE [preview features callout](includes/preview/preview-callout.md)] az aks nodepool add --cluster-name <clusterName> -n <nodepoolName> -l <location> AKS nodes utilizing node public IPs that host services on their host address need to have an NSG rule added to allow the traffic. Adding the desired ports in the node pool configuration will create the appropriate allow rules in the cluster network security group. -If a network security group is in place on the subnet with a cluster using bring-your-own virtual network, an allow rule must be added to that network security group. This can be limited to the nodes in a given node pool by adding the node pool to an [application security group](/azure/virtual-network/network-security-groups-overview#application-security-groups) (ASG). A managed ASG will be created by default in the managed resource group if allowed host ports are specified. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the NSG(s) in the node pool parameters. +If a network security group is in place on the subnet with a cluster using bring-your-own virtual network, an allow rule must be added to that network security group. This can be limited to the nodes in a given node pool by adding the node pool to an [application security group](../virtual-network/network-security-groups-overview.md#application-security-groups) (ASG). A managed ASG will be created by default in the managed resource group if allowed host ports are specified. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the NSG(s) in the node pool parameters. ### Host port specification format Containers: [use-labels]: use-labels.md [cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes [internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet-[drain-nodes]: resize-node-pool.md#drain-the-existing-nodes +[drain-nodes]: resize-node-pool.md#drain-the-existing-nodes |
aks | Use Wasi Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md | spec: spec: runtimeClassName: wasmtime-slight-v1 containers:- - name: testwasm + - name: hello-slight image: ghcr.io/deislabs/containerd-wasm-shims/examples/slight-rust-hello:latest command: ["/"]+ resources: + requests: + cpu: 10m + memory: 10Mi + limits: + cpu: 500m + memory: 128Mi apiVersion: v1 kind: Service |
api-management | Api Management Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-kubernetes.md | To solve this problem, Kubernetes introduced the concept of [Services](https://k When we are ready to publish our microservices as APIs through API Management, we need to think about how to map our Services in Kubernetes to APIs in API Management. There are no set rules. It depends on how you designed and partitioned your business capabilities or domains into microservices at the beginning. For instance, if the pods behind a Service are responsible for all operations on a given resource (e.g., Customer), the Service may be mapped to one API. If operations on a resource are partitioned into multiple microservices (e.g., GetOrder, PlaceOrder), then multiple Services may be logically aggregated into one single API in API management (See Fig. 1). -The mappings can also evolve. Since API Management creates a façade in front of the microservices, it allows us to refactor and right-size our microservices over time. +The mappings can also evolve. Since API Management creates a facade in front of the microservices, it allows us to refactor and right-size our microservices over time.  |
app-service | Configure Vnet Integration Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-enable.md | keywords: vnet integration Previously updated : 10/20/2021 Last updated : 02/13/2023 ms.tool: azure-cli, azure-powershell # Enable virtual network integration in Azure App Service -Through integrating with an Azure virtual network (VNet) from your [Azure App Service app](./overview.md), you can reach private resources from your app within the virtual network. The VNet integration feature has two variations: --* **Regional virtual network integration**: Connect to Azure virtual networks in the same region. You must have a dedicated subnet in the virtual network you're integrating with. -* **Gateway-required virtual network integration**: When you connect directly to a virtual network in other regions or to a classic virtual network in the same region, you must use gateway-required virtual network integration. --This article describes how to set up regional virtual network integration. +Through integrating with an Azure virtual network from your [Azure App Service app](./overview.md), you can reach private resources from your app within the virtual network. ## Prerequisites -The VNet integration feature requires: +The virtual network integration feature requires: - An App Service pricing tier [that supports virtual network integration](./overview-vnet-integration.md). - A virtual network in the same region with an empty subnet. -The subnet must be delegated to Microsoft.Web/serverFarms. If the delegation isn't done before integration, the provisioning process will configure this delegation. The subnet must be allocated an IPv4 `/28` block (16 addresses). We recommend that you have a minimum of 64 addresses (IPv4 `/26` block) to allow for maximum horizontal scale. +The subnet must be delegated to Microsoft.Web/serverFarms. If the delegation isn't done before integration, the provisioning process configures this delegation. The subnet must be allocated an IPv4 `/28` block (16 addresses). We recommend that you have a minimum of 64 addresses (IPv4 `/26` block) to allow for maximum horizontal scale. ++If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it's automatically registered when creating the first web app in a subscription. ## Configure in the Azure portal The subnet must be delegated to Microsoft.Web/serverFarms. If the delegation isn :::image type="content" source="./media/configure-vnet-integration-enable/vnetint-add-vnet.png" alt-text="Screenshot that shows selecting the virtual network."::: -During the integration, your app is restarted. When integration is finished, you'll see details on the virtual network you're integrated with. +During the integration, your app is restarted. When integration is finished, you see details on the virtual network you're integrated with. ## Configure with the Azure CLI az webapp vnet-integration add --resource-group <group-name> --name <app-name> - ``` > [!NOTE]-> The command checks if the subnet is delegated to Microsoft.Web/serverFarms and applies the necessary delegation if it isn't configured. 
If the subnet was configured, and you don't have permissions to check it, or if the virtual network is in another subscription, you can use the *--skip-delegation-check* parameter to bypass the validation. +> The command checks if the subnet is delegated to Microsoft.Web/serverFarms and applies the necessary delegation if it isn't configured. If the subnet was configured, and you don't have permissions to check it, or if the virtual network is in another subscription, you can use the `--skip-delegation-check` parameter to bypass the validation. ## Configure with Azure PowerShell $subnet = Add-AzDelegation -Name "myDelegation" -ServiceName "Microsoft.Web/serv Set-AzVirtualNetwork -VirtualNetwork $vnet ``` -Configure VNet Integration. +Configure virtual network integration. > [!NOTE] > If the webapp is in another subscription than virtual network, you can use the *Set-AzContext -Subscription "xxxx-xxxx-xxxx-xxxx"* command to set the current subscription context. Set the current subscription context to the subscription where the web app was deployed. |
app-service | Creation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/creation.md | If you're creating an App Service Environment with an external VIP, you can sele  -5. From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can take up to two hours to create. +5. From the **Review + create** tab, check that your configuration is correct, and select **Create**. Your App Service Environment can take more than one hour to create. When your App Service Environment has been successfully created, you can select it as a location when you're creating your apps. |
app-service | How To Create From Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-create-from-template.md | parameterPath="PATH/azuredeploy.parameters.json" az deployment group create --resource-group "YOUR-RG-NAME-HERE" --template-file $templatePath --parameters $parameterPath ``` -It takes about two hours for the App Service Environment to be created. +It can take more than one hour for the App Service Environment to be created. ## Next steps |
app-service | Monitor App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-app-service.md | See [Azure Monitor queries for App Service](https://github.com/microsoft/AzureMo Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). -If you're running an application on App Service [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) offers more types of alerts. +If you're running an application on App Service [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) offers more types of alerts. The following table lists common and recommended alert rules for App Service. |
app-service | Overview Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md | Apps in App Service are hosted on worker roles. Virtual network integration work :::image type="content" source="./media/overview-vnet-integration/vnetint-how-regional-works.png" alt-text="Diagram that shows how virtual network integration works."::: -When virtual network integration is enabled, your app makes outbound calls through your virtual network. The outbound addresses that are listed in the app properties portal are the addresses still used by your app. However, if your outbound call is to a virtual machine or private endpoint in the integration virtual network or peered virtual network, the outbound address will be an address from the integration subnet. The private IP assigned to an instance is exposed via the environment variable, WEBSITE_PRIVATE_IP. +When virtual network integration is enabled, your app makes outbound calls through your virtual network. The outbound addresses that are listed in the app properties portal are the addresses still used by your app. However, if your outbound call is to a virtual machine or private endpoint in the integration virtual network or peered virtual network, the outbound address is an address from the integration subnet. The private IP assigned to an instance is exposed via the environment variable, WEBSITE_PRIVATE_IP. -When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network. Outbound traffic to the internet will be routed directly from the app. +When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet is sent into the virtual network. Outbound traffic to the internet is routed directly from the app. The feature supports two virtual interfaces per worker. Two virtual interfaces per worker mean two virtual network integrations per App Service plan. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet. If you need an app to connect to more virtual networks or more subnets in the same virtual network, you need to create another App Service plan. The virtual interfaces used aren't resources customers have direct access to. When you scale up or down in size, the required address space is doubled for a s | /27 | 27 | 13 | | /26 | 59 | 29 | -<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point. +<sup>*</sup>Assumes that you need to scale up or down in either size or SKU at some point. Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal, you can use a /28 subnet. 
You must have at least the following Role-based access control permissions on th | Microsoft.Network/virtualNetworks/subnets/read | Read a virtual network subnet definition | | Microsoft.Network/virtualNetworks/subnets/join/action | Joins a virtual network | -If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it will also automatically be registered when creating the first web app in a subscription. +If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it's automatically registered when creating the first web app in a subscription. ## Routes You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and [app settings with Key Vault reference](./app-service-key-vault-references.md). [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out. -Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration. +Through application routing or configuration routing options, you can configure what traffic is sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration. ### Application routing Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled. * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet.-* When **Route All** is enabled, the source address for your outbound public traffic from your app is still one of the IP addresses that are listed in your app properties. If you route your traffic through a firewall or a NAT gateway, the source IP address will then originate from this service. 
+* When **Route All** is enabled, the source address for your outbound public traffic from your app is still one of the IP addresses that are listed in your app properties. If you route your traffic through a firewall or a NAT gateway, the source IP address originates from this service. Learn [how to configure application routing](./configure-vnet-integration-routing.md#configure-application-routing). Learn [how to configure application routing](./configure-vnet-integration-routin ### Configuration routing -When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic will go directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration. +When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic goes directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration. #### Content share When using custom containers, you can pull the container over the virtual networ #### App settings using Key Vault references -App settings using Key Vault references will attempt to get secrets over the public route. If the Key Vault is blocking public traffic and the app is using virtual network integration, an attempt will then be made to get the secrets through the virtual network integration. +App settings using Key Vault references attempt to get secrets over the public route. If the Key Vault is blocking public traffic and the app is using virtual network integration, an attempt is made to get the secrets through the virtual network integration. > [!NOTE] > * Backup/restore to private storage accounts is currently not supported. App settings using Key Vault references will attempt to get secrets over the pub You can use route tables to route outbound traffic from your app without restriction. Common destinations can include firewall devices or gateways. You can also use a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) to block outbound traffic to resources in your virtual network or the internet. An NSG that's applied to your integration subnet is in effect regardless of any route tables applied to your integration subnet. -Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes won't apply to replies from inbound app requests and inbound rules in an NSG don't apply to your app. Virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the [access restrictions](./overview-access-restrictions.md) feature or [private endpoints](./networking/private-endpoint.md). +Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes don't apply to replies from inbound app requests and inbound rules in an NSG don't apply to your app. Virtual network integration affects only outbound traffic from your app. 
To control inbound traffic to your app, use the [access restrictions](./overview-access-restrictions.md) feature or [private endpoints](./networking/private-endpoint.md). -When configuring network security groups or route tables that applies to outbound traffic, you must make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, these endpoints could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`. +When configuring network security groups or route tables that applies to outbound traffic, you must make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, these endpoints could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you need to allow `oryx-cdn.microsoft.io:443`. When you want to route outbound traffic on-premises, you can use a route table to send outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back. Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. Similar to user-defined routes, BGP routes affect traffic according to your routing scope setting. Connecting and disconnecting with a virtual network is at an app level. Operatio In the app view of your virtual network integration instance, you can disconnect your app from the virtual network and you can configure application routing. To disconnect your app from a virtual network, select **Disconnect**. Your app is restarted when you disconnect from a virtual network. Disconnecting doesn't change your virtual network. The subnet isn't removed. If you then want to delete your virtual network, first disconnect your app from the virtual network. -The private IP assigned to the instance is exposed via the environment variable WEBSITE_PRIVATE_IP. Kudu console UI also shows the list of environment variables available to the web app. This IP is assigned from the address range of the integrated subnet. This IP will be used by the web app to connect to the resources through the Azure virtual network. +The private IP assigned to the instance is exposed via the environment variable WEBSITE_PRIVATE_IP. Kudu console UI also shows the list of environment variables available to the web app. This IP is assigned from the address range of the integrated subnet. This IP is used by the web app to connect to the resources through the Azure virtual network. 
> [!NOTE] > The value of WEBSITE_PRIVATE_IP is bound to change. However, it will be an IP within the address range of the integration subnet, so you'll need to allow access from the entire address range. The virtual network integration feature has no extra charge for use beyond the A ## Troubleshooting -The feature is easy to set up, but that doesn't mean your experience will be problem free. If you encounter problems accessing your desired endpoint, there are various steps you can take depending on what you are observing. For more information, see [virtual network integration troubleshooting guide](/troubleshoot/azure/app-service/troubleshoot-vnet-integration-apps). +The feature is easy to set up, but that doesn't mean your experience is problem free. If you encounter problems accessing your desired endpoint, there are various steps you can take depending on what you are observing. For more information, see [virtual network integration troubleshooting guide](/troubleshoot/azure/app-service/troubleshoot-vnet-integration-apps). > [!NOTE] > * Virtual network integration isn't supported for Docker Compose scenarios in App Service. The feature is easy to set up, but that doesn't mean your experience will be pro ### Deleting the App Service plan or app before disconnecting the network integration -If you deleted the app or the App Service plan without disconnecting the virtual network integration first, you won't be able to do any update/delete operations on the virtual network or subnet that was used for the integration with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' will remain assigned to your subnet and will prevent the update/delete operations. +If you deleted the app or the App Service plan without disconnecting the virtual network integration first, you aren't able to do any update/delete operations on the virtual network or subnet that was used for the integration with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' remains assigned to your subnet and prevents the update/delete operations. In order to do update/delete the subnet or virtual network again, you need to re-create the virtual network integration, and then disconnect it: 1. Re-create the App Service plan and app (it's mandatory to use the exact same web app name as before). |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | To run the application locally, make sure you have [Python 3.7 or higher](https: ### [Flask](#tab/flask) ```bash-git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app +git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app.git ``` ### [Django](#tab/django) git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.g Go to the application folder: +### [Flask](#tab/flask) + ```bash cd msdocs-python-flask-webapp-quickstart ``` +### [Django](#tab/django) ++```bash +cd msdocs-django-postgresql-sample-app +``` ++-- Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance. |
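For the *.env* step above, this is a minimal sketch of what the file can look like against a local PostgreSQL instance; every value is a placeholder, and the exact variable set should follow the sample's *.env.sample* file:

```
# Local PostgreSQL connection settings (placeholder values).
DBNAME=<your-local-database-name>
DBHOST=localhost
DBUSER=<your-local-postgres-user>
DBPASS=<your-local-postgres-password>
```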
application-gateway | Configuration Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md | Since application gateway resources are deployed within a virtual network resour You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that users or Service Principals who operate application gateways have at least **Microsoft.Network/virtualNetworks/subnets/join/action** or some higher permission such as the built-in [Network contributor](../role-based-access-control/built-in-roles.md) role on the virtual network. Visit [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md) to know more on subnet permissions. -If a [built-in](../role-based-access-control/built-in-roles.md) role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md) for this purpose. +If a [built-in](../role-based-access-control/built-in-roles.md) role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md) for this purpose. Also, [allow sufficient time](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after you make changes to role assignments. ++> [!NOTE] +> As a temporary extension, we have introduced a subscription-level Azure Feature Exposure Control (AFEC) flag to help you fix the permissions for all your users and/or service principals. Register for this interim feature on your own through a subscription owner, contributor, or custom role. </br> +> +> "**name**": "Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck", </br> +> "**description**": "Disable Application Gateway Subnet Permission Check", </br> +> "**providerNamespace**": "Microsoft.Network", </br> +> "**enrollmentType**": "AutoApprove" </br> +> +> The provision to circumvent the virtual network permission check by using this feature control is **available only for a limited period, until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. Read more about [Preview Feature registration](../azure-resource-manager/management/preview-features.md?tabs=azure-portal). ## Network security groups |
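For the temporary AFEC flag called out in the note above, this is a minimal sketch of registering it with the Azure CLI. It assumes the CLI context is already set to the subscription that contains the application gateways:

```azurecli
# Register the temporary feature flag on the current subscription.
az feature register --namespace Microsoft.Network --name DisableApplicationGatewaySubnetPermissionCheck

# Check the registration state, then re-register the resource provider so the flag takes effect.
az feature show --namespace Microsoft.Network --name DisableApplicationGatewaySubnetPermissionCheck --query properties.state
az provider register --namespace Microsoft.Network
```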
application-gateway | Monitor Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md | Azure Monitor alerts proactively notify you when important conditions are found <!-- only include next line if applications run on your service and work with App Insights. --> -If you're creating or running an application which use Application Gateway [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer additional types of alerts. +If you're creating or running an application which use Application Gateway [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) may offer additional types of alerts. <!-- end --> The following tables list common and recommended alert rules for Application Gateway. |
applied-ai-services | Managed Identities Secured Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md | To get started, you'll need: * An [**Azure virtual network**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) in the same region as your Form Recognizer resource. You'll create a virtual network to deploy your application resources to train models and analyze documents. -* An **Azure data science VM** for [**Windows**](/azure/machine-learning/data-science-virtual-machine/provision-vm) or [**Linux/Ubuntu**](/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro) to optionally deploy a data science VM in the virtual network to test the secure connections being established. +* An **Azure data science VM** for [**Windows**](../../machine-learning/data-science-virtual-machine/provision-vm.md) or [**Linux/Ubuntu**](../../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md) to optionally deploy a data science VM in the virtual network to test the secure connections being established. ## Configure resources That's it! You can now configure secure access for your Form Recognizer resource ## Next steps > [!div class="nextstepaction"]-> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json) -+> [Access Azure Storage from a web app using managed identities](../../app-service/scenario-secure-app-access-storage.md?bc=%2fazure%2fapplied-ai-services%2fform-recognizer%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fapplied-ai-services%2fform-recognizer%2ftoc.json) |
attestation | Custom Tcb Baseline Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/custom-tcb-baseline-enforcement.md | -Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](/azure/security/fundamentals/trusted-hardware-identity-management) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM might lag the latest baseline offered by Intel. This is to prevent any attestation failure scenarios for ACC customers who require more time for patching platform software (PSW) updates. +Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](../security/fundamentals/trusted-hardware-identity-management.md) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM might lag the latest baseline offered by Intel. This is to prevent any attestation failure scenarios for ACC customers who require more time for patching platform software (PSW) updates. -The custom TCB baseline enforcement feature in Azure Attestation will empower you to perform SGX attestation against a desired TCB baseline. It is always recommended for [Azure Confidential Computing](/azure/confidential-computing/overview) (ACC) SGX customers to install the latest PSW version supported by Intel and configure their SGX attestation policy with the latest TCB baseline supported by Azure. +The custom TCB baseline enforcement feature in Azure Attestation will empower you to perform SGX attestation against a desired TCB baseline. It is always recommended for [Azure Confidential Computing](../confidential-computing/overview.md) (ACC) SGX customers to install the latest PSW version supported by Intel and configure their SGX attestation policy with the latest TCB baseline supported by Azure. ## Why use custom TCB baseline enforcement feature? 
c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value); - If the PSW version of ACC node is lower than the minimum PSW version of the TCB baseline configured in SGX attestation policy, attestation scenarios will fail - If the PSW version of ACC node is greater than or equal to the minimum PSW version of the TCB baseline configured in SGX attestation policy, attestation scenarios will pass - For customers who do not configure a custom TCB baseline in attestation policy, attestation will be performed against the Azure default TCB baseline-- For customers using an attestation policy without configurationrules section, attestation will be performed against the Azure default TCB baseline+- For customers using an attestation policy without configurationrules section, attestation will be performed against the Azure default TCB baseline |
automation | Migrate Run As Accounts Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md | Title: Migrate from a Run As account to a managed identity description: This article describes how to migrate from a Run As account to a managed identity in Azure Automation. Previously updated : 10/17/2022 Last updated : 02/11/2023 A managed identity can be [system assigned](enable-managed-identity-for-automati ## Prerequisites -Before you migrate from a Run As account to a managed identity: +Before you migrate from a Run As account or Classic Run As account to a managed identity: 1. Create a [system-assigned](enable-managed-identity-for-automation.md) or [user-assigned](add-user-assigned-identity.md) managed identity, or create both types. To learn more about the differences between them, see [Managed identity types](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). Before you migrate from a Run As account to a managed identity: For example, if the Automation account is required only to start or stop an Azure VM, then the permissions assigned to the Run As account need to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from Azure Blob Storage. For more information, see [Azure Automation security guidelines](../automation/automation-security-guidelines.md#authentication-certificate-and-identities). +1. If you are using Classic Run As accounts, ensure that you have [migrated](../virtual-machines/classic-vm-deprecation.md) resources deployed through classic deployment model to Azure Resource Manager. + ## Migrate from an Automation Run As account to a managed identity -To migrate from an Automation Run As account to a managed identity for your runbook authentication, follow these steps: +To migrate from an Automation Run As account or Classic Run As account to a managed identity for your runbook authentication, follow these steps: 1. Change the runbook code to use a managed identity. |
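As a companion to step 1 above, here is a minimal sketch of a runbook fragment that replaces Run As authentication with the system-assigned managed identity. The resource group and VM names are placeholders, and the identity is assumed to already hold the role needed to start the VM:

```azurepowershell
# Sign in with the Automation account's system-assigned managed identity
# instead of the Run As certificate.
Connect-AzAccount -Identity

# Example of the narrowly scoped action granted to the identity: start a single VM.
Start-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
```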
azure-arc | Managed Instance Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md | Azure Arc-enabled SQL Managed Instance share a common code base with the latest | Feature | Azure Arc-enabled SQL Managed Instance | |--|--| | JSON | Yes |-| Query Store | Yes | +| Query Store | No | | Temporal | Yes | | Native XML support | Yes | | XML indexing | Yes | |
azure-arc | Privacy Data Collection And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md | -This article describes the data that Azure Arc-enabled data services transmits to Microsoft. +This article describes the data that Azure Arc-enabled data services transmit to Microsoft. -Neither Azure Arc-enabled data services nor any of the applicable data services store any customer data. This applies to Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL. +Neither Azure Arc-enabled data services nor any of the applicable data services store any customer data. This applies to: -## Related products +- Azure Arc-enabled SQL Managed Instance +- Azure Arc-enabled PostgreSQL ++## Azure Arc-enabled data services Azure Arc-enabled data services may use some or all of the following products: Azure Arc-enabled data services may use some or all of the following products: - Azure CLI (az) -## Directly connected +### Directly connected When a cluster is configured to be directly connected to Azure, some data is automatically transmitted to Microsoft. The following table describes the type of data, how it is sent, and requirement. |Data category|What data is sent?|How is it sent?|Is it required? |:-|:-|:-|:-|-|Operational Data|Metrics and logs|Automatic, when configured to do so|No -Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Automatic |Yes +|Operational Data|Metrics and logs|Automatically, when configured to do so|No +Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Automatically |Yes Diagnostics|Diagnostic information for troubleshooting purposes|Manually exported and provided to Microsoft Support|Only for the scope of troubleshooting and follows the standard [privacy policies](https://privacy.microsoft.com/privacystatement) -## Indirectly connected +### Indirectly connected When a cluster not configured to be directly connected to Azure, it does not automatically transmit operational, or billing and inventory data to Microsoft. To transmit data to Microsoft, you need to configure the export. The following table describes the type of data, how it is sent, and requirement. |Data category|What data is sent?|How is it sent?|Is it required? |:-|:-|:-|:-|-|Operational Data|Metrics and logs|Manual|No -Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Manual |Yes +|Operational Data|Metrics and logs|Manually|No +Billing & inventory data|Inventory such as number of instances, and usage such as number of vCores consumed|Manually |Yes Diagnostics|Diagnostic information for troubleshooting purposes|Manually exported and provided to Microsoft Support|Only for the scope of troubleshooting and follows the standard [privacy policies](https://privacy.microsoft.com/privacystatement) -## Detailed description of data --This section provides more details about the information included with the Azure Arc-enabled data services transmits to Microsoft. --### Operational data +## Operational data Operational data is collected for all database instances and for the Azure Arc-enabled data services platform itself. 
There are two types of operational data: The operational data does not leave your environment unless you chooses to expor If the data is sent to Azure Monitor or Log Analytics, you can choose which Azure region or datacenter the Log Analytics workspace resides in. After that, access to view or copy it from other locations can be controlled by you. -### Billing and inventory data +## Inventory data -Billing data is used for purposes of tracking usage that is billable. This data is essential for running of the service and needs to be transmitted manually or automatically in all modes. +The collected inventory data is represented by several Azure resource types. The following sections show the properties, types, and descriptions that are collected for each resource type: Every database instance and the data controller itself will be reflected in Azure as an Azure resource in Azure Resource Manager. There are three resource types: The following sections show the properties, types, and descriptions that are collected and stored about each type of resource: -### Data controller +### SQL Server - Azure Arc ++| Description | Property name | Property type| +|:--|:--|:--| +| Computer name | name | string | +| SQL Server instance name| instanceName | string | +| SQL Server Version | version | string | +| SQL Server Edition | edition | string | +| Containing server resource ID | containerResourceId | string | +| Virtual cores | vCore | string | +| Connectivity status | status | string | +| SQL Server patch level | patchLevel | string | +| Collation | collation | string | +| Current version | currentVersion | string | +| TCP dynamic ports | tcpDynamicPorts | string | +| TCP static ports | tcpStaticPorts | string | +| Product ID | productId | string | +| License type | licenseType | string | +| Microsoft Defender status | azureDefenderStatus | string | +| Microsoft Defender status last updated | azureDefenderStatusLastUpdated | string | +| Provisioning state | provisioningState | string | ++The following JSON document is an example of the SQL Server - Azure Arc resource. ++```json +{ + + "name": "SQL22-EE_PAYGTEST", + "version": "SQL Server 2022", + "edition": "Enterprise", + "containerResourceId": "/subscriptions/a5082b19-8a6e-4bc5-8fdd-8ef39dfebc39/resourcegroups/sashan-arc-eastasia/providers/Microsoft.HybridCompute/machines/SQL22-EE", + "vCore": "8", + "status": "Connected", + "patchLevel": "16.0.1000.6", + "collation": "SQL_Latin1_General_CP1_CI_AS", + "currentVersion": "16.0.1000.6", + "instanceName": "PAYGTEST", + "tcpDynamicPorts": "61394", + "tcpStaticPorts": "", + "productId": "00488-00010-05000-AB944", + "licenseType": "PAYG", + "azureDefenderStatusLastUpdated": "2023-02-08T07:57:37.5597421Z", + "azureDefenderStatus": "Protected", + "provisioningState": "Succeeded" +} +``` ++### SQL Server database - Azure Arc ++| Description | Property name | Property type| +|:--|:--|:--| +| Database name | name | string | +| Collation | collationName | string | +| Database creation date | databaseCreationDate | System.DateTime | +| Compatibility level | compatibilityLevel | string | +| Database state | state | string | +| Readonly mode | isReadOnly | boolean | +| Recovery mode | recoveryMode | boolean | +| Auto close enabled | isAutoCloseOn | boolean | +| Auto shrink enabled | isAutoShrinkOn | boolean | +| Auto create stats enabled | isAutoCreateStatsOn | boolean | +| Auto update stats enabled | isAutoUpdateStatsOn | boolean | +| Remote data archive enabled | isRemoteDataArchiveEnabled | boolean | +! 
Memory optimization enabled | isMemoryOptimizationEnabled | boolean | +| Encryption enabled | isEncrypted | boolean | +| Trustworthy mode enabled | isTrustworthyOn | boolean | +| Backup information | backupInformation | | +| Provisioning state | provisioningState | string | ++The following JSON document is an example of the SQL Server database - Azure Arc resource. ++```json +{ + "name": "newDb80", + "collationName": "SQL_Latin1_General_CP1_CI_AS", + "databaseCreationDate": "2023-01-09T03:40:45Z", + "compatibilityLevel": 150, + "state": "Online", + "isReadOnly": false, + "recoveryMode": "Full", + "databaseOptions": { + "isAutoCloseOn": false, + "isAutoShrinkOn": false, + "isAutoCreateStatsOn": true, + "isAutoUpdateStatsOn": true, + "isRemoteDataArchiveEnabled": false, + "isMemoryOptimizationEnabled": true, + "isEncrypted": false, + "isTrustworthyOn": false + }, + "backupInformation": {}, + "provisioningState": "Succeeded" +} +``` ++### Azure Arc data controller ++| Description | Property name | Property type| +|:--|:--|:--| +| Location information | OnPremiseProperty | public: OnPremiseProperty | +| The raw Kubernetes information (`kubectl get datacontroller`) | K8sRaw | object | +| Last uploaded date from on-premises cluster | LastUploadedDate | System.DateTime | +| Data controller state | ProvisioningState | string | ++#### Data controller - Location information - `public OnPremiseProperty OnPremiseProperty` The following sections show the properties, types, and descriptions that are col - Data controller state - `string: ProvisioningState` -### Azure Arc-enabled PostgreSQL +++### PostgreSQL server - Azure Arc ++| Description | Property name | Property type| +|:--|:--|:--| +| The data controller ID | DataControllerId | string | +| The instance admin name | Admin | string | +| Username and password for basic authentication | BasicLoginInformation | public: BasicLoginInformation | +| The raw Kubernetes information (`kubectl get postgres12`) | K8sRaw | object | +| Last uploaded date from on premises cluster | LastUploadedDate | System.DateTime | +| Group provisioning state | ProvisioningState | string | ++#### Azure Arc-enabled PostgreSQL - The data controller ID - `string: DataControllerId` The following sections show the properties, types, and descriptions that are col - Group provisioning state - `string: ProvisioningState` -### SQL Managed Instance +### SQL managed instance - Azure Arc ++| Description | Property name | Property type| +|:--|:--|:--| +| The managed instance ID | DataControllerId | string | +| The instance admin username | Admin | string | +| The instance start time | StartTime | string | +| The instance end time | EndTime | string | +| The raw kubernetes information (`kubectl get sqlmi`) | K8sRaw | object | +| Username and password for basic authentication | BasicLoginInformation | BasicLoginInformation | +| Last uploaded date from on-premises cluster | LastUploadedDate | System.DateTime | +| SQL managed instance provisioning state | ProvisioningState | string | ++The following JSON document is an example of the SQL managed instance - Azure Arc resource. ++#### SQL managed instance - The managed instance ID - `public string: DataControllerId` The following sections show the properties, types, and descriptions that are col - SQL managed instance provisioning state - `public string: ProvisioningState` -### Examples +## Examples Example of resource inventory data JSON document that is sent to Azure to create Azure resources in your subscription. 
Example of resource inventory data JSON document that is sent to Azure to create { "customObjectName": "<resource type>-2020-29-5-23-13-17-164711", - "uid": "4bc3dc6b-9148-4c7a-b7dc-01afc1ef5373", - "instanceName": "sqlInstance001", - "instanceNamespace": "arc", - "instanceType": "<resource>", - "location": "eastus", - "resourceGroupName": "production-resources", - "subscriptionId": "<subscription_id>", - "isDeleted": false, - "externalEndpoint": "32.191.39.83:1433", - "vCores": "2", - "createTimestamp": "05/29/2020 23:13:17", - "updateTimestamp": "05/29/2020 23:13:17" - } ``` - --Billing data captures the start time (“created”) and end time (“deleted”) of a given instance.as well as any start and time whenever a change in the number of cores available to a given instance (“core limit”) happens. --```json -{ -- "requestType": "usageUpload", -- "clusterId": "4b0917dd-e003-480e-ae74-1a8bb5e36b5d", -- "name": "DataControllerTestName", -- "subscriptionId": "<subscription_id>", +## Billing data - "resourceGroup": "production-resources", -- "location": "eastus", -- "uploadRequest": { -- "exportType": "usages", -- "dataTimestamp": "2020-06-17T22:32:24Z", -- "data": "[{\"name\":\"sqlInstance001\", -- \"namespace\":\"arc\", -- \"type\":\"<resource type>\", -- \"eventSequence\":1, -- \"eventId\":\"50DF90E8-FC2C-4BBF-B245-CB20DC97FF24\", +Billing data is used for purposes of tracking usage that is billable. This data is essential for running of the service and needs to be transmitted manually or automatically in all modes. - \"startTime\":\"2020-06-17T19:11:47.7533333\", +### Arc-enabled data services - \"endTime\":\"2020-06-17T19:59:00\", +Billing data captures the start time (“created”) and end time (“deleted”) of a given instance, as well as any start and time whenever a change in the number of cores available to a given instance (“core limit”) happens. - \"quantity\":1, +```json +{ + "requestType": "usageUpload", + "clusterId": "4b0917dd-e003-480e-ae74-1a8bb5e36b5d", + "name": "DataControllerTestName", + "subscriptionId": "<subscription_id>", + "resourceGroup": "production-resources", + "location": "eastus", + "uploadRequest": { + "exportType": "usages", + "dataTimestamp": "2020-06-17T22:32:24Z", + "data": + "[{\"name\":\"sqlInstance001\", + \"namespace\":\"arc\", + \"type\":\"<resource type>\", + \"eventSequence\":1, + \"eventId\":\"50DF90E8-FC2C-4BBF-B245-CB20DC97FF24\", + \"startTime\":\"2020-06-17T19:11:47.7533333\", + \"endTime\":\"2020-06-17T19:59:00\", + \"quantity\":1, + \"id\":\"<subscription_id>\"}]", + "signature":"MIIE7gYJKoZIhvcNAQ...2xXqkK" +``` - \"id\":\"<subscription_id>\"}]", +### Arc-enabled SQL Server - "signature":"MIIE7gYJKoZIhvcNAQ...2xXqkK" +Billing data captures a snapshot of the SQL Server instance properties as well as the machine properties every hour and compose the usage upload payload to report usage. There is a snapshot time in the payload for each SQL Server instance.  
+```json +{ + "hostType": "Unknown", + "osType": "Windows", + "manufacturer": "Microsoft", + "model": "Hyper-V", + "isVirtualMachine": true, + "serverName": "TestArcServer", + "serverId": "<server id>", + "location": "eastus", + "timestamp": "2021-07-08T01:42:15.0388467Z", + "uploadRequest": { + "exportType": "usages", + "dataTimestamp": "2020-06-17T22:32:24Z", + "data": + "[{\"hostType\":\"VirtualMachine\", + \"numberOfCores\":4, + \"numberOfProcessors\":1, + \"numberOfLogicalProcessors\":4, + \"subscriptionId\":\"<subscription id>\",\"resourceGroup\":\"ArceeBillingPipelineStorage_Test\", + \"location\":\"eastus2euap\", + \"version\":\"Sql2019\", + \"edition\":\"Enterprise\", + \"editionOriginalString\":\"Enterprise Edition: Core based licensing\", + \"coreInfoOriginalString\":\"using 16 logical processors based on SQL Server licensing\", + \"vCore\":4, + \"instanceName\":\"INSTANCE01\", + \"licenseType\":\"LicenseOnly\", + \"hostLicenseType\":\"Paid\", + \"instanceLicenseType\":\"Paid\", + \"serverName\":\"TestArcServer\", + \"isRunning\":false, + \"eventId\":\"00000000-0000-0000-0000-000000000000\", + \"snapshotTime\":\"2020-06-17T19:59:00\", + \"isAzureBilled\":\"Enabled\", + \"hasSoftwareAssurance\":\"Undefined\"}]" + } +} ``` -### Diagnostic data +## Diagnostic data In support situations, you may be asked to provide database instance logs, Kubernetes logs, and other diagnostic logs. The support team will provide a secure location for you to upload to. Dynamic management views (DMVs) may also provide diagnostic data. The DMVs or queries used could contain database schema metadata details but typically not customer data. Diagnostic data does not contain any passwords, cluster IPs or individually identifiable data. These are cleaned and the logs are made anonymous for storage when possible. They are not transmitted automatically and administrator has to manually upload them. |Field name |Notes |-||| +|:--|:--| |Error logs |Log files capturing errors may contain customer or personal data (see below) are restricted and shared by user | |DMVs  |Dynamic management views can contain query and query plans but are restricted and shared by user | |Views |Views can contain customer data but are restricted and shared only by user  | |
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md | The following providers and their corresponding Kubernetes distributions have su | Provider name | Distribution name | Version | | | -- | - | | RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6 |-| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 | +| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKG 2.1.0; upstream K8s v1.24.9_vmware.1 <br> TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 | | Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) | | SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 | | Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 | |
azure-arc | Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md | Last updated 02/06/2023 + # Azure Arc resource bridge (preview) deployment command overview [Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge. When deploying Arc resource bridge with a corresponding partner product, the Azure CLI commands may be combined into an automation script, along with additional provider-specific commands. To learn about installing Arc resource bridge with a corresponding partner product, see:-Creates the configuration files used by Arc resource bridge. Credentials that are provided during `createconfig`, such as vCenter credentials for VMware vSphere, are stored in a configuration file and locally within Arc resource bridge. These credentials should be a separate user account used only by Arc resource bridge, with permission to view, create, delete, and manage on-premises resources. If the credentials change, then the credentials on the resource bridge should be rotated. +This command creates the configuration files used by Arc resource bridge. Credentials that are provided during `createconfig`, such as vCenter credentials for VMware vSphere, are stored in a configuration file and locally within Arc resource bridge. These credentials should be a separate user account used only by Arc resource bridge, with permission to view, create, delete, and manage on-premises resources. If the credentials change, then the credentials on the resource bridge should be updated. ++The `createconfig` command features two modes: interactive and non-interactive. Interactive mode provides helpful prompts that explain the parameter and what to pass. To initiate interactive mode, pass only the three required parameters. Non-interactive mode allows you to pass all the parameters needed to create the configuration files without being prompted, which saves time and is useful for automation scripts. ++Three configuration files are generated: resource.yaml, appliance.yaml and infra.yaml. These files should be kept and stored in a secure location, as they're required for maintenance of Arc resource bridge. -The `createconfig` command features two modes: interactive and non-interactive. Interactive mode provides helpful prompts that explain the parameter and what to pass. To initiate interactive mode, pass only the three required parameters. Non-interactive mode allows you to pass all the parameters needed to create the configuration files without being prompted, which saves time and is useful for automation scripts. Three configuration files are generated: resource.yaml, appliance.yaml and infra.yaml. These files should be kept and stored in a secure location, as they're required for maintenance of Arc resource bridge. +This command also calls the `validate` command to check the configuration files. > [!NOTE] > Azure Stack HCI and Hybrid AKS use different commands to create the Arc resource bridge configuration files. + ## az arcappliance validate -Checks the configuration files for a valid schema, cloud and core validations (such as management machine connectivity to required URLs), network settings, and no proxy settings. +The `validate` command checks the configuration files for a valid schema, cloud and core validations (such as management machine connectivity to required URLs), network settings, and proxy settings. 
It also performs tests on identity privileges and role assignments, network configuration, loadbalancer configuration and content delivery network connectivity. + ## az arcappliance prepare -Downloads the OS images from Microsoft and uploads them to the on-premises cloud image gallery to prepare for the creation of the appliance VM. +This command downloads the OS images from Microsoft that are used to deploy the on-premises appliance VM. Once downloaded, the images are then uploaded to the local cloud image gallery to prepare for the creation of the appliance VM. ++This command takes about 10-30+ minutes to complete, depending on the network speed. Allow the command to complete before continuing with the deployment. -This command can take up to 30 minutes to complete, depending on the network download speed. Allow the command to complete before continuing with the deployment. ## az arcappliance deploy -Deploys an on-premises instance of Arc resource bridge as an appliance VM, bootstrapped to be a Kubernetes management cluster. Gets all necessary pods into a running state. +The `deploy` command deploys an on-premises instance of Arc resource bridge as an appliance VM, bootstrapped to be a Kubernetes management cluster. This command gets all necessary pods and agents within the Kubernetes cluster into a running state. Once the appliance VM is up, the kubeconfig file is generated. + ## az arcappliance create -Creates Arc resource bridge in Azure as an ARM resource, then establishes the connection between the ARM resource and on-premises appliance VM. +This command creates Arc resource bridge in Azure as an ARM resource, then establishes the connection between the ARM resource and on-premises appliance VM. ++Once the `create` command initiates the connection, it will return in the terminal even though the connection between the ARM resource and on-premises appliance VM is not yet complete. The resource bridge needs about 5 minutes to establish the connection between the ARM resource and the on-premises VM. -Running this command is the last step in the deployment process. ## az arcappliance show -Gets the ARM resource information for Arc resource bridge. This information helps you monitor the status of the appliance. Successful appliance creation results in `ProvisioningState = Succeeded` and `Status = Running`. +The `show` command gets the status of the Arc resource bridge and ARM resource information. It can be used to check the progress of the connection between the ARM resource and on-premises appliance VM. ++While the Arc resource bridge is connecting the ARM resource to the on-premises VM, the resource bridge will progress through the stages below: ++`ProvisioningState` will be `Creating`, `Created`, `Failed`, `Deleting`, or `Succeeded`. ++`Status` will transition between `WaitingForHeartbeat` -> `Validating` -> `Connected` -> `Running`. ++Successful Arc resource bridge creation results in `ProvisioningState = Succeeded` and `Status = Running`. + ## az arcappliance delete -Deletes the appliance VM and Azure resources. It doesn't clean up the OS image, which remains in the on-premises cloud gallery. +This command deletes the appliance VM and Azure resources. It doesn't clean up the OS image, which remains in the on-premises cloud gallery. ++If a deployment fails, run this command to clean up the environment before you attempt to deploy again. -If a deployment fails, you must run this command to clean up the environment before you attempt to deploy again. ## Next steps |
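The command descriptions above map to a deployment sequence along the following lines. This is a sketch only: it assumes the VMware fabric and made-up resource names, and the exact subcommands, required parameters, and configuration file names vary by partner product and CLI version, so follow the partner-specific instructions linked in the article.

```powershell
# Sketch of the Arc resource bridge deployment flow (VMware fabric assumed; names are placeholders).
az arcappliance createconfig vmware --resource-group "rg-example" --name "arc-rb-example" --location "eastus"
az arcappliance validate vmware --config-file ".\arc-rb-example-appliance.yaml"
az arcappliance prepare vmware --config-file ".\arc-rb-example-appliance.yaml"   # downloads and uploads OS images; about 10-30+ minutes
az arcappliance deploy vmware --config-file ".\arc-rb-example-appliance.yaml"    # creates the appliance VM; generates the kubeconfig
az arcappliance create vmware --config-file ".\arc-rb-example-appliance.yaml" --kubeconfig ".\kubeconfig"
az arcappliance show --resource-group "rg-example" --name "arc-rb-example"       # watch ProvisioningState and Status until Succeeded/Running
```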
azure-arc | Ssh Arc Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md | Possible errors: - {"level":"fatal","msg":"sshproxy: error copying information from the connection: read tcp 192.168.1.180:60887-\u003e40.122.115.96:443: wsarecv: An existing connection was forcibly closed by the remote host.","time":"2022-02-24T13:50:40-05:00"} Resolution:-+ - Ensure that the SSHD service is running on the Arc-enabled server. + - Ensure that port 22 (or other non-default port) is listed in allowed incoming connections. Run `azcmagent config list` on the Arc-enabled server in an elevated session. The ssh port (22) isn't set by default, so you must add it. This setting is used by other services, like admin center, so just add port 22 without deleting previously added ports. ++ ```powershell + # Set 22 port: + azcmagent config list + azcmagent config get incomingconnections.port + azcmagent config set incomingconnections.port 22 + azcmagent config + + # Add multiple ports: + azcmagent config set incomingconnections.port 22,3516 + ``` + ## Azure permissions issues ### Incorrect role assignments |
azure-functions | Quickstart Netherite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-netherite.md | The snippet above is just a *minimal* configuration. Later, you may want to cons Your app is now ready for local development: You can start the Function app to test it. One way to do this is to run `func host start` on your application's root and executing a simple orchestrator Function. -While the function app is running, Netherite will publish load information about its active partitions to an Azure Storage table named "DurableTaskPartitions". You can use [Azure Storage Explorer](/azure/vs-azure-tools-storage-manage-with-storage-explorer) to check that it's working as expected. If Netherite is running correctly, the table won't be empty; see the example below. +While the function app is running, Netherite will publish load information about its active partitions to an Azure Storage table named "DurableTaskPartitions". You can use [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) to check that it's working as expected. If Netherite is running correctly, the table won't be empty; see the example below.  While the function app is running, Netherite will publish load information about ## Run your app on Azure -You need to create an Azure Functions app on Azure. To do this, follow the instructions in the **Create a function app** section of [these instructions](/azure/azure-functions/functions-create-function-app-portal#create-a-function-app-a-function). +You need to create an Azure Functions app on Azure. To do this, follow the instructions in the **Create a function app** section of [these instructions](../functions-create-function-app-portal.md). ### Set up Event Hubs You will need to set up an Event Hubs namespace to run Netherite on Azure. You c #### Create an Event Hubs namespace -Follow [these steps](/azure/event-hubs/event-hubs-create#create-an-event-hubs-namespace) to create an Event Hubs namespace on the Azure portal. When creating the namespace, you may be prompted to: +Follow [these steps](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace) to create an Event Hubs namespace on the Azure portal. When creating the namespace, you may be prompted to: 1. Choose a *resource group*: Use the same resource group as the Function app. 2. Choose a *plan* and provision *throughput units*. Select the defaults, this setting can be changed later. |
azure-functions | Functions Create First Quarkus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md | public String funqyHello() { } ``` -Azure Functions Java has its own set of Azure-specific annotations, but these annotations aren't necessary when you're using Quarkus on Azure Functions in a simple capacity as we're doing here. For more information about Azure Functions Java annotations, see the [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java). +Azure Functions Java has its own set of Azure-specific annotations, but these annotations aren't necessary when you're using Quarkus on Azure Functions in a simple capacity as we're doing here. For more information about Azure Functions Java annotations, see the [Azure Functions Java developer guide](./functions-reference-java.md). Unless you specify otherwise, the function's name is the same as the method name. You can also use the following command to define the function name with a parameter to the annotation: Sign in to [the portal](https://aka.ms/publicportal) and ensure that you've sele Now that you've opened your Azure function in the portal, here are more features that you can access from the portal: -* Monitor the performance of your Azure function. For more information, see [Monitoring Azure Functions](/azure/azure-functions/monitor-functions). -* Explore telemetry. For more information, see [Analyze Azure Functions telemetry in Application Insights](/azure/azure-functions/analyze-telemetry-data). -* Set up logging. For more information, see [Enable streaming execution logs in Azure Functions](/azure/azure-functions/streaming-logs). +* Monitor the performance of your Azure function. For more information, see [Monitoring Azure Functions](./monitor-functions.md). +* Explore telemetry. For more information, see [Analyze Azure Functions telemetry in Application Insights](./analyze-telemetry-data.md). +* Set up logging. For more information, see [Enable streaming execution logs in Azure Functions](./streaming-logs.md). ## Clean up resources In this article, you learned how to: To learn more about Azure Functions and Quarkus, see the following articles and references: -* [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java) -* [Quickstart: Create a Java function in Azure using Visual Studio Code](/azure/azure-functions/create-first-function-vs-code-java) -* [Azure Functions documentation](/azure/azure-functions/) -* [Quarkus guide to deploying on Azure](https://quarkus.io/guides/deploying-to-azure-cloud) +* [Azure Functions Java developer guide](./functions-reference-java.md) +* [Quickstart: Create a Java function in Azure using Visual Studio Code](./create-first-function-vs-code-java.md) +* [Azure Functions documentation](./index.yml) +* [Quarkus guide to deploying on Azure](https://quarkus.io/guides/deploying-to-azure-cloud) |
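The naming behavior described in the change above (function name defaults to the method name, overridable via the annotation parameter) can be illustrated with a hedged Java sketch; the package and class names are made up for the example.

```java
package org.acme.funqy;

import io.quarkus.funqy.Funq;

public class GreetingFunctions {

    // By default the exported function name is the method name: "funqyHello".
    @Funq
    public String funqyHello() {
        return "hello funqy";
    }

    // Passing a value to @Funq overrides the exported function name ("greet" here).
    @Funq("greet")
    public String sayHello(String name) {
        return "Hello, " + name;
    }
}
```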
azure-functions | Functions Triggers Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md | You can connect your function to other services by using input or output binding [!INCLUDE [Full bindings table](../../includes/functions-bindings.md)] -For information about which bindings are in preview or are approved for production use, see [Supported languages](supported-languages.md). +For information about which bindings are in preview or are approved for production use, see [Supported languages](supported-languages.md). ++Specific binding extension versions are only supported while the underlying service SDK is supported. Changes to support in the underlying service SDK version affect the support for the consuming extension. ## Bindings code examples |
azure-functions | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions.md | FunctionAppLogs Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks. -If you're creating or running an application that run on Functions [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer other types of alerts. +If you're creating or running an application that runs on Functions, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) may offer other types of alerts. The following table lists common and recommended alert rules for Functions. |

azure-maps | How To Dev Guide Java Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md | public class Demo{ [maven]: /azure/developer/java/sdk/get-started-maven [Identity library]: /java/api/overview/azure/identity-readme?source=recommendations&view=azure-java-stable [defaultazurecredential]: /azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential-[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources +[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources <!-- Java SDK Developers Guide > [java search package]: https://repo1.maven.org/maven2/com/azure/azure-maps-search public class Demo{ [java timezone sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-timezone/src/samples/java/com/azure/maps/timezone/samples [java elevation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-elevation [java elevation readme]: https://github.com/Azure/azure-sdk-for-jav-[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples +[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples |
azure-maps | Release Notes Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md | Stay up to date on Azure Maps: [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1 [2.2.3]: https://www.npmjs.com/package/azure-maps-control/v/2.2.3 [2.2.2]: https://www.npmjs.com/package/azure-maps-control/v/2.2.2-[Azure AD]: /azure/active-directory/develop/v2-overview +[Azure AD]: ../active-directory/develop/v2-overview.md [adal-angular]: https://github.com/AzureAD/azure-activedirectory-library-for-js [@azure/msal-browser]: https://github.com/AzureAD/microsoft-authentication-library-for-js-[migration guide]: /azure/active-directory/develop/msal-compare-msal-js-and-adal-js +[migration guide]: ../active-directory/develop/msal-compare-msal-js-and-adal-js.md [CameraBoundsOptions]: /javascript/api/azure-maps-control/atlas.cameraboundsoptions?view=azure-maps-typescript-latest [Map.setCamera(options)]: /javascript/api/azure-maps-control/atlas.map?view=azure-maps-typescript-latest#azure-maps-control-atlas-map-setcamera [language mapping]: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-maps/supported-languages.md#azure-maps-supported-languages Stay up to date on Azure Maps: [StyleOptions]: /javascript/api/azure-maps-control/atlas.styleoptions [TrafficControlOptions]: /javascript/api/azure-maps-control/atlas.trafficcontroloptions [Azure Maps Samples]: https://samples.azuremaps.com-[Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog +[Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Your migration plan to the Azure Monitor Agent should take into account: - If you're setting up a *new environment* with resources, such as deployment scripts and onboarding templates, assess the effort of migrating to Azure Monitor Agent later. If the setup will take a significant amount of rework, install Azure Monitor Agent together with a legacy agent in your new environment to decrease the migration effort later. - Azure Monitor Agent **can run alongside the legacy Log Analytics agents on the same machine** so that you can continue to use existing functionality during evaluation or migration. You can begin the transition, but ensure you understand the **limitations below**: - Be careful when you collect duplicate data from the same machine, as this could skew query results, affect downstream features like alerts, dashboards, workbooks and generate more charges for data ingestion and retention. To avoid data duplication, ensure the agents are *collecting data from different machines* or *sending the data to different destinations*. Additionally,- - For **Defender for Cloud**, you will only be [billed once per machine](/azure/defender-for-cloud/auto-deploy-azure-monitoring-agent#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents + - For **Defender for Cloud**, you will only be [billed once per machine](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) when running both agents - For **Sentinel**, you can easily [disable the legacy connector](../../sentinel/ama-migrate.md#recommended-migration-plan) to stop ingestion of logs from legacy agents. - Running two telemetry agents on the same machine consumes double the resources, including but not limited to CPU, memory, storage space, and network bandwidth. For more information, see: - [Azure Monitor Agent overview](agents-overview.md) - [Azure Monitor Agent migration for Microsoft Sentinel](../../sentinel/ama-migrate.md)-- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent)+- [Frequently asked questions for Azure Monitor Agent migration](/azure/azure-monitor/faq#azure-monitor-agent) |
azure-monitor | Create New Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md | Sign in to the [Azure portal](https://portal.azure.com) and create an Applicatio | **Resource mode** | `Classic` or `Workspace-based` | Workspace-based resources allow you to send your Application Insights telemetry to a common Log Analytics workspace. For more information, see [Workspace-based Application Insights resources](create-workspace-resource.md). > [!NOTE]-> You can use the same resource name across different resource groups, but it can be beneficial to use a globally unique name. If you plan to [perform cross-resource queries](../logs/cross-workspace-query.md#identifying-an-application), using a globally unique name simplifies the required syntax. +> You can use the same resource name across different resource groups, but it can be beneficial to use a globally unique name. If you plan to [perform cross-resource queries](../logs/cross-workspace-query.md#identify-an-application), using a globally unique name simplifies the required syntax. Enter the appropriate values in the required fields. Select **Review + create**. |
azure-monitor | Data Model Context | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-context.md | Every telemetry item might have a strongly typed context field. Every field enab Information in the application context fields is always about the application that's sending the telemetry. The application version is used to analyze trend changes in the application behavior and its correlation to the deployments. -Maximum length: 1,024 +**Maximum length:** 1,024 ## Client IP address -This field is the IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user that initiated the operation in the service. Application Insights extract the geo-location information from the client IP and then truncate it. The client IP by itself can't be used as user identifiable information. +This field is the IP address of the client device. IPv4 and IPv6 are supported. When telemetry is sent from a service, the location context is about the user who initiated the operation in the service. Application Insights extract the geo-location information from the client IP and then truncate it. The client IP by itself can't be used as user identifiable information. -Maximum length: 46 +**Maximum length:** 46 ## Device type Originally, this field was used to indicate the type of the device the user of the application is using. Today it's used primarily to distinguish JavaScript telemetry with the device type `Browser` from server-side telemetry with the device type `PC`. -Maximum length: 64 +**Maximum length:** 64 ## Operation ID This field is the unique identifier of the root operation. This identifier allows grouping telemetry across multiple components. For more information, see [Telemetry correlation](./correlation.md). The operation ID is created by either a request or a page view. All other telemetry sets this field to the value for the containing request or page view. -Maximum length: 128 +**Maximum length:** 128 ## Parent operation ID This field is the unique identifier of the telemetry item's immediate parent. For more information, see [Telemetry correlation](./correlation.md). -Maximum length: 128 +**Maximum length:** 128 ## Operation name This field is the name (group) of the operation. The operation name is created by either a request or a page view. All other telemetry items set this field to the value for the containing request or page view. The operation name is used for finding all the telemetry items for a group of operations (for example, `GET Home/Index`). This context property is used to answer questions like What are the typical exceptions thrown on this page? -Maximum length: 1,024 +**Maximum length:** 1,024 ## Synthetic source of the operation This field is the name of the synthetic source. Some telemetry from the application might represent synthetic traffic. It might be the web crawler indexing the website, site availability tests, or traces from diagnostic libraries like the Application Insights SDK itself. -Maximum length: 1,024 +**Maximum length:** 1,024 ## Session ID Session ID is the instance of the user's interaction with the app. Information in the session context fields is always about the user. When telemetry is sent from a service, the session context is about the user who initiated the operation in the service. -Maximum length: 64 +**Maximum length:** 64 ## Anonymous user ID -Anonymous user ID (User.Id) represents the user of the application. 
When telemetry is sent from a service, the user context is about the user who initiated the operation in the service. +The anonymous user ID (User.Id) represents the user of the application. When telemetry is sent from a service, the user context is about the user who initiated the operation in the service. [Sampling](./sampling.md) is one of the techniques to minimize the amount of collected telemetry. A sampling algorithm attempts to either sample in or out all the correlated telemetry. An anonymous user ID is used for sampling score generation, so an anonymous user ID should be a random enough value. User IDs can be cross referenced with session IDs to provide unique telemetry di Using an anonymous user ID to store a username is a misuse of the field. Use an authenticated user ID. -Maximum length: 128 +**Maximum length:** 128 ## Authenticated user ID Use the Application Insights SDK to initialize the authenticated user ID with a User IDs can be cross referenced with session IDs to provide unique telemetry dimensions and establish user activity over a session duration. -Maximum length: 1,024 +**Maximum length:** 1,024 ## Account ID -The account ID, in multi-tenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when a user ID and an authenticated user ID aren't sufficient. Examples might be a subscription ID for Azure portal or the blog name for a blogging platform. +The account ID, in multi-tenant applications, is the tenant account ID or name that the user is acting with. It's used for more user segmentation when a user ID and an authenticated user ID aren't sufficient. Examples might be a subscription ID for the Azure portal or the blog name for a blogging platform. -Maximum length: 1,024 +**Maximum length:** 1,024 ## Cloud role This field is the name of the role of which the application is a part. It maps directly to the role name in Azure. It can also be used to distinguish micro services, which are part of a single application. -Maximum length: 256 +**Maximum length:** 256 ## Cloud role instance This field is the name of the instance where the application is running. For example, it's the computer name for on-premises or the instance name for Azure. -Maximum length: 256 +**Maximum length:** 256 ## Internal: SDK version For more information, see this [SDK version article](https://github.com/MohanGsk/ApplicationInsights-Home/blob/master/EndpointSpecs/SDK-VERSIONS.md). -Maximum length: 64 +**Maximum length:** 64 ## Internal: Node name This field represents the node name used for billing purposes. Use it to override the standard detection of nodes. -Maximum length: 256 +**Maximum length:** 256 ## Next steps - Learn how to [extend and filter telemetry](./api-filtering-sampling.md).-- See [Application Insights telemetry data model](data-model.md) for Application Insights types and data model.+- See the [Application Insights telemetry data model](data-model.md) for Application Insights types and data model. - Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet). |
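Several of the context fields documented in the change above (cloud role, cloud role instance, authenticated user ID) are typically stamped with a telemetry initializer. The following C# sketch uses illustrative values and assumes the standard `Microsoft.ApplicationInsights` SDK types; the role name and user ID shown are placeholders.

```csharp
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Illustrative initializer that sets context fields described in the data model article.
public class ContextFieldsInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Cloud.RoleName = "checkout-api";                 // Cloud role (example value)
        telemetry.Context.Cloud.RoleInstance = Environment.MachineName;    // Cloud role instance
        telemetry.Context.User.AuthenticatedUserId = "user@contoso.com";   // Authenticated user ID (example value)
    }
}
```

A registered initializer runs for every telemetry item, which is how a single value ends up on all requests, dependencies, and traces.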
azure-monitor | Get Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md | Title: Get-Metric in Azure Monitor Application Insights -description: Learn how to effectively use the GetMetric() call to capture locally pre-aggregated metrics for .NET and .NET Core applications with Azure Monitor Application Insights -+description: Learn how to effectively use the GetMetric() call to capture locally pre-aggregated metrics for .NET and .NET Core applications with Azure Monitor Application Insights. + Last updated 04/28/2020 ms.devlang: csharp-The Azure Monitor Application Insights .NET and .NET Core SDKs have two different methods of collecting custom metrics, `TrackMetric()`, and `GetMetric()`. The key difference between these two methods is local aggregation. `TrackMetric()` lacks pre-aggregation while `GetMetric()` has pre-aggregation. The recommended approach is to use aggregation, therefore, `TrackMetric()` is no longer the preferred method of collecting custom metrics. This article will walk you through using the GetMetric() method, and some of the rationale behind how it works. +The Azure Monitor Application Insights .NET and .NET Core SDKs have two different methods of collecting custom metrics: `TrackMetric()` and `GetMetric()`. The key difference between these two methods is local aggregation. The `TrackMetric()` method lacks pre-aggregation. The `GetMetric()` method has pre-aggregation. We recommend that you use aggregation, so `TrackMetric()` is no longer the preferred method of collecting custom metrics. This article walks you through using the `GetMetric()` method and some of the rationale behind how it works. -## Pre-aggregating vs non pre-aggregating API +## Pre-aggregating vs. non-pre-aggregating API -`TrackMetric()` sends raw telemetry denoting a metric. It's inefficient to send a single telemetry item for each value. `TrackMetric()` is also inefficient in terms of performance since every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors. Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then only submits an aggregated summary metric at a fixed interval of one minute. So if you need to closely monitor some custom metric at the second or even millisecond level you can do so while only incurring the storage and network traffic cost of only monitoring every minute. This behavior also greatly reduces the risk of throttling occurring since the total number of telemetry items that need to be sent for an aggregated metric are greatly reduced. +The `TrackMetric()` method sends raw telemetry denoting a metric. It's inefficient to send a single telemetry item for each value. The `TrackMetric()` method is also inefficient in terms of performance because every `TrackMetric(item)` goes through the full SDK pipeline of telemetry initializers and processors. -In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where alerting you may have built around those metrics could become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert will fire. But since custom metrics aren't sampled, there are some potential concerns. 
+Unlike `TrackMetric()`, `GetMetric()` handles local pre-aggregation for you and then only submits an aggregated summary metric at a fixed interval of one minute. If you need to closely monitor some custom metric at the second or even millisecond level, you can do so while only incurring the storage and network traffic cost of only monitoring every minute. This behavior also greatly reduces the risk of throttling occurring because the total number of telemetry items that need to be sent for an aggregated metric are greatly reduced. -Trend tracking in a metric every second, or at an even more granular interval can result in: +In Application Insights, custom metrics collected via `TrackMetric()` and `GetMetric()` aren't subject to [sampling](./sampling.md). Sampling important metrics can lead to scenarios where alerting you might have built around those metrics could become unreliable. By never sampling your custom metrics, you can generally be confident that when your alert thresholds are breached, an alert fires. Because custom metrics aren't sampled, there are some potential concerns. -- Increased data storage costs. There's a cost associated with how much data you send to Azure Monitor. (The more data you send the greater the overall cost of monitoring.)-- Increased network traffic/performance overhead. (In some scenarios this overhead could have both a monetary and application performance cost.)-- Risk of ingestion throttling. (The Azure Monitor service drops ("throttles") data points when your app sends a high rate of telemetry in a short time interval.)+Trend tracking in a metric every second, or at an even more granular interval, can result in: -Throttling is a concern as it can lead to missed alerts. The condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint due to too much data being sent. We don't recommend using `TrackMetric()` for .NET and .NET Core unless you've implemented your own local aggregation logic. If you're trying to track every instance an event occurs over a given time period, you may find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Though keep in mind that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` even without writing your own local pre-aggregation, but if you do so be aware of the pitfalls. +- **Increased data storage costs.** There's a cost associated with how much data you send to Azure Monitor. The more data you send, the greater the overall cost of monitoring. +- **Increased network traffic or performance overhead.** In some scenarios, this overhead could have both a monetary and application performance cost. +- **Risk of ingestion throttling.** Azure Monitor drops ("throttles") data points when your app sends a high rate of telemetry in a short time interval. -In summary `GetMetric()` is the recommended approach since it does pre-aggregation, it accumulates values from all the Track() calls and sends a summary/aggregate once every minute. `GetMetric()` can significantly reduce the cost and performance overhead by sending fewer data points, while still collecting all relevant information. +Throttling is a concern because it can lead to missed alerts. The condition to trigger an alert could occur locally and then be dropped at the ingestion endpoint because of too much data being sent. We don't recommend using `TrackMetric()` for .NET and .NET Core unless you've implemented your own local aggregation logic. 
If you're trying to track every instance an event occurs over a given time period, you might find that [`TrackEvent()`](./api-custom-events-metrics.md#trackevent) is a better fit. Keep in mind that unlike custom metrics, custom events are subject to sampling. You can still use `TrackMetric()` even without writing your own local pre-aggregation. But if you do so, be aware of the pitfalls. ++In summary, we recommend `GetMetric()` because it does pre-aggregation, it accumulates values from all the `Track()` calls, and sends a summary/aggregate once every minute. The `GetMetric()` method can significantly reduce the cost and performance overhead by sending fewer data points while still collecting all relevant information. > [!NOTE]-> Only the .NET and .NET Core SDKs have a GetMetric() method. If you are using Java, see [sending custom metrics using micrometer](./java-standalone-config.md#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics but the metrics implementation is different. +> Only the .NET and .NET Core SDKs have a `GetMetric()` method. If you're using Java, see [Sending custom metrics using micrometer](./java-standalone-config.md#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics). For JavaScript and Node.js, you would still use `TrackMetric()`, but keep in mind the caveats that were outlined in the previous section. For Python, you can use [OpenCensus.stats](./opencensus-python.md#metrics) to send custom metrics, but the metrics implementation is different. ++## Get started with GetMetric -## Getting started with GetMetric +For our examples, we're going to use a basic .NET Core 3.1 worker service application. If you want to replicate the test environment used with these examples, follow steps 1-6 in the [Monitoring worker service article](worker-service.md#net-core-lts-worker-service-application). These steps add Application Insights to a basic worker service project template. The concepts apply to any general application where the SDK can be used, including web apps and console apps. -For our examples, we're going to use a basic .NET Core 3.1 worker service application. If you would like to replicate the test environment used with these examples, follow steps 1-6 of the [monitoring worker service article](worker-service.md#net-core-lts-worker-service-application). These steps will add Application Insights to a basic worker service project template and the concepts apply to any general application where the SDK can be used including web apps and console apps. -### Sending metrics +### Send metrics Replace the contents of your `worker.cs` file with the following code: namespace WorkerService3 } ``` -When running the sample code, you'll see the while loop repeatedly executing with no telemetry being sent in the Visual Studio output window. A single telemetry item will be sent by around the 60-second mark, which in our test looks as follows: +When you run the sample code, you see the `while` loop repeatedly executing with no telemetry being sent in the Visual Studio output window. 
A single telemetry item is sent by around the 60-second mark, which in our test looks like: ```json Application Insights Telemetry: {"name":"Microsoft.ApplicationInsights.Dev.00000000-0000-0000-0000-000000000000.Metric", "time":"2019-12-28T00:54:19.0000000Z", Application Insights Telemetry: {"name":"Microsoft.ApplicationInsights.Dev.00000 "DeveloperMode":"true"}}}} ``` -This single telemetry item represents an aggregate of 41 distinct metric measurements. Since we were sending the same value over and over again we have a *standard deviation (stDev)* of 0 with an identical *maximum (max)*, and *minimum (min)* values. The *value* property represents a sum of all the individual values that were aggregated. +This single telemetry item represents an aggregate of 41 distinct metric measurements. Because we were sending the same value over and over again, we have a standard deviation (`stDev`) of `0` with identical maximum (`max`) and minimum (`min`) values. The `value` property represents a sum of all the individual values that were aggregated. > [!NOTE]-> GetMetric does not support tracking the last value (i.e. "gauge") or tracking histograms/distributions. +> The `GetMetric` method doesn't support tracking the last value (for example, `gauge`) or tracking histograms or distributions. -If we examine our Application Insights resource in the Logs (Analytics) experience, the individual telemetry item would look as follows: +If we examine our Application Insights resource in the **Logs (Analytics)** experience, the individual telemetry item would look like the following screenshot. - + > [!NOTE]-> While the raw telemetry item did not contain an explicit sum property/field once ingested we create one for you. In this case both the `value` and `valueSum` property represent the same thing. +> While the raw telemetry item didn't contain an explicit sum property/field once ingested, we create one for you. In this case, both the `value` and `valueSum` property represent the same thing. ++You can also access your custom metric telemetry in the [_Metrics_](../essentials/metrics-charts.md) section of the portal as both a [log-based and custom metric](pre-aggregated-metrics-log-metrics.md). The following screenshot is an example of a log-based metric. -You can also access your custom metric telemetry in the [_Metrics_](../essentials/metrics-charts.md) section of the portal. As both a [log-based, and custom metric](pre-aggregated-metrics-log-metrics.md). (The screenshot below is an example of log-based.) - + -### Caching metric reference for high-throughput usage +### Cache metric reference for high-throughput usage -Metric values may be observed frequently in some cases. For example, a high-throughput service that processes 500 requests/second may want to emit 20 telemetry metrics for each request. The result means tracking 10,000 values per second. In such high-throughput scenarios, users may need to help the SDK by avoiding some lookups. +Metric values might be observed frequently in some cases. For example, a high-throughput service that processes 500 requests per second might want to emit 20 telemetry metrics for each request. The result means tracking 10,000 values per second. In such high-throughput scenarios, users might need to help the SDK by avoiding some lookups. -For example, the example above performed a lookup for a handle for the metric "ComputersSold" and then tracked an observed value 42. 
Instead, the handle may be cached for multiple track invocations: +For example, the preceding example performed a lookup for a handle for the metric `ComputersSold` and then tracked an observed value of `42`. Instead, the handle might be cached for multiple track invocations: ```csharp //... For example, the example above performed a lookup for a handle for the metric "C ``` -In addition to caching the metric handle, the example above also reduced the `Task.Delay` to 50 milliseconds so that the loop would execute more frequently resulting in 772 `TrackValue()` invocations. +In addition to caching the metric handle, the preceding example also reduced `Task.Delay` to 50 milliseconds so that the loop would execute more frequently. The result is 772 `TrackValue()` invocations. -## Multi-dimensional metrics +## Multidimensional metrics -The examples in the previous section show zero-dimensional metrics. Metrics can also be multi-dimensional. We currently support up to 10 dimensions. +The examples in the previous section show zero-dimensional metrics. Metrics can also be multidimensional. We currently support up to 10 dimensions. Here's an example of how to create a one-dimensional metric: The examples in the previous section show zero-dimensional metrics. Metrics can ``` -Running the sample code for at least 60 seconds will result in three distinct telemetry items being sent to Azure, each representing the aggregation of one of the three form factors. As before you can further examine in the Logs (Analytics) view: +Running the sample code for at least 60 seconds results in three distinct telemetry items being sent to Azure. Each item represents the aggregation of one of the three form factors. As before, you can further examine in the **Logs (Analytics)** view. - + -In the Metrics explorer experience: +In the metrics explorer: - + -However, you'll notice that you aren't able to split the metric by your new custom dimension, or view your custom dimension with the metrics view: +Notice that you can't split the metric by your new custom dimension or view your custom dimension with the metrics view. - + -By default multi-dimensional metrics within the Metric explorer experience aren't turned on in Application Insights resources. +By default, multidimensional metrics within the metric explorer aren't turned on in Application Insights resources. -### Enable multi-dimensional metrics +### Enable multidimensional metrics -To enable multi-dimensional metrics for an Application Insights resource, Select **Usage and estimated costs** > **Custom Metrics** > **Enable alerting on custom metric dimensions** > **OK**. More details about can be found [here](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation). +To enable multidimensional metrics for an Application Insights resource, select **Usage and estimated costs** > **Custom Metrics** > **Enable alerting on custom metric dimensions** > **OK**. For more information, see [Custom metrics dimensions and pre-aggregation](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation). -Once you have made that change and send new multi-dimensional telemetry, you'll be able to **Apply splitting**. +After you've made that change and sent new multidimensional telemetry, you can select **Apply splitting**. > [!NOTE] > Only newly sent metrics after the feature was turned on in the portal will have dimensions stored. 
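For reference, here's a minimal sketch of the kind of multidimensional telemetry that supports splitting (assuming a configured `TelemetryClient` named `_telemetryClient`; the `FormFactor` values shown are illustrative):

```csharp
// Minimal sketch, assuming a configured TelemetryClient named _telemetryClient.
// Each distinct FormFactor value becomes its own aggregated time series that
// "Apply splitting" can break out in metrics explorer.
var computersSold = _telemetryClient.GetMetric("ComputersSold", "FormFactor");

computersSold.TrackValue(42, "Desktop");
computersSold.TrackValue(20, "Laptop");
computersSold.TrackValue(12, "Tablet");
```

The specific values aren't special; any dimension values sent after the feature is enabled become available for splitting.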
- + -And view your metric aggregations for each _FormFactor_ dimension: +View your metric aggregations for each `FormFactor` dimension. - + -### How to use MetricIdentifier when there are more than three dimensions +### Use MetricIdentifier when there are more than three dimensions -Currently 10 dimensions are supported however, greater than three dimensions requires the use of `MetricIdentifier`: +Currently, 10 dimensions are supported. More than three dimensions requires the use of `MetricIdentifier`: ```csharp // Add "using Microsoft.ApplicationInsights.Metrics;" to use MetricIdentifier computersSold.TrackValue(110,"Laptop", "Nvidia", "DDR4", "39Wh", "1TB"); ## Custom metric configuration -If you want to alter the metric configuration, you need make alterations in the place where the metric is initialized. +If you want to alter the metric configuration, you must make alterations in the place where the metric is initialized. ### Special dimension names -Metrics don't use the telemetry context of the `TelemetryClient` used to access them, special dimension names available as constants in `MetricDimensionNames` class is the best workaround for this limitation. +Metrics don't use the telemetry context of the `TelemetryClient` used to access them. Using special dimension names available as constants in the `MetricDimensionNames` class is the best workaround for this limitation. -Metric aggregates sent by the below "Special Operation Request Size"-metric will **not** have their `Context.Operation.Name` set to "Special Operation". Whereas `TrackMetric()` or any other TrackXXX() will have `OperationName` set correctly to "Special Operation". +Metric aggregates sent by the following `Special Operation Request Size` metric *won't* have `Context.Operation.Name` set to `Special Operation`. The `TrackMetric()` method or any other `TrackXXX()` method will have `OperationName` set correctly to `Special Operation`. ``` csharp //... Metric aggregates sent by the below "Special Operation Request Size"-metric will } ``` -In this circumstance, use the special dimension names listed in the `MetricDimensionNames` class in order to specify `TelemetryContext` values. +In this circumstance, use the special dimension names listed in the `MetricDimensionNames` class to specify the `TelemetryContext` values. -For example, when the metric aggregate resulting from the next statement is sent to the Application Insights cloud endpoint, its `Context.Operation.Name` data field will be set to "Special Operation": +For example, when the metric aggregate resulting from the next statement is sent to the Application Insights cloud endpoint, its `Context.Operation.Name` data field will be set to `Special Operation`: ```csharp _telemetryClient.GetMetric("Request Size", MetricDimensionNames.TelemetryContext.Operation.Name).TrackValue(requestSize, "Special Operation"); ``` -The values of this special dimension will be copied into the `TelemetryContext` and won't be used as a 'normal' dimension. If you want to also keep an operation dimension for normal metric exploration, you need to create a separate dimension for that purpose: +The values of this special dimension will be copied into `TelemetryContext` and won't be used as a *normal* dimension. 
If you want to also keep an operation dimension for normal metric exploration, you need to create a separate dimension for that purpose: ```csharp _telemetryClient.GetMetric("Request Size", "Operation Name", MetricDimensionNames.TelemetryContext.Operation.Name).TrackValue(requestSize, "Special Operation", "Special Operation"); _telemetryClient.GetMetric("Request Size", "Operation Name", MetricDimensionName ### Dimension and time-series capping - To prevent the telemetry subsystem from accidentally using up your resources, you can control the maximum number of data series per metric. The default limits are no more than 1000 total data series per metric, and no more than 100 different values per dimension. + To prevent the telemetry subsystem from accidentally using up your resources, you can control the maximum number of data series per metric. The default limits are no more than 1,000 total data series per metric, and no more than 100 different values per dimension. > [!IMPORTANT] > Use low cardinal values for dimensions to avoid throttling. - In the context of dimension and time series capping, we use `Metric.TrackValue(..)` to make sure that the limits are observed. If the limits are already reached, `Metric.TrackValue(..)` will return "False" and the value won't be tracked. Otherwise it will return "True". This behavior is useful if the data for a metric originates from user input. + In the context of dimension and time series capping, we use `Metric.TrackValue(..)` to make sure that the limits are observed. If the limits are already reached, `Metric.TrackValue(..)` returns `False` and the value won't be tracked. Otherwise, it returns `True`. This behavior is useful if the data for a metric originates from user input. The `MetricConfiguration` constructor takes some options on how to manage different series within the respective metric and an object of a class implementing `IMetricSeriesConfiguration` that specifies aggregation behavior for each individual series of the metric: computersSold.TrackValue(100, "Dim1Value1", "Dim2Value3"); // The above call does not track the metric, and returns false. ``` -* `seriesCountLimit` is the max number of data time series a metric can contain. Once this limit is reached, calls to `TrackValue()` that would normally result in a new series will return false. +* `seriesCountLimit` is the maximum number of data time series a metric can contain. When this limit is reached, calls to `TrackValue()` that would normally result in a new series return `false`. * `valuesPerDimensionLimit` limits the number of distinct values per dimension in a similar manner. * `restrictToUInt32Values` determines whether or not only non-negative integer values should be tracked. SeverityLevel.Error); ## Next steps -* [Learn more ](./worker-service.md)about monitoring worker service applications. -* For further details on [log-based and pre-aggregated metrics](./pre-aggregated-metrics-log-metrics.md). -* [Metric Explorer](../essentials/metrics-getting-started.md) -* How to enable Application Insights for [ASP.NET Core Applications](asp-net-core.md) -* How to enable Application Insights for [ASP.NET Applications](asp-net.md) +* [Learn more](./worker-service.md) about monitoring worker service applications. +* Use [log-based and pre-aggregated metrics](./pre-aggregated-metrics-log-metrics.md). +* Get started with [metrics explorer](../essentials/metrics-getting-started.md). +* Learn how to enable Application Insights for [ASP.NET Core applications](asp-net-core.md). 
+* Learn how to enable Application Insights for [ASP.NET applications](asp-net.md). |
azure-monitor | Java Standalone Arguments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md | For more information, see [Application monitoring for Azure App Service and Java ## Azure Functions -For more information, see [Monitoring Azure Functions with Azure Monitor Application Insights](./monitor-functions.md#distributed-tracing-for-java-applications-public-preview). +For more information, see [Monitoring Azure Functions with Azure Monitor Application Insights](./monitor-functions.md#distributed-tracing-for-java-applications-preview). ## Spring Boot |
azure-monitor | Javascript Angular Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md | Title: Angular plugin for Application Insights JavaScript SDK -description: How to install and use Angular plugin for Application Insights JavaScript SDK. + Title: Angular plug-in for Application Insights JavaScript SDK +description: Learn how to install and use the Angular plug-in for the Application Insights JavaScript SDK. ibiza ms.devlang: javascript -# Angular plugin for Application Insights JavaScript SDK +# Angular plug-in for the Application Insights JavaScript SDK -The Angular plugin for the Application Insights JavaScript SDK, enables: +The Angular plug-in for the Application Insights JavaScript SDK enables: -- Tracking of router changes-- Tracking uncaught exceptions+- Tracking of router changes. +- Tracking uncaught exceptions. > [!WARNING]-> Angular plugin is NOT ECMAScript 3 (ES3) compatible. +> The Angular plug-in *isn't* ECMAScript 3 (ES3) compatible. -> [!IMPORTANT] -> When we add support for a new Angular version, our NPM package becomes incompatible with down-level Angular versions. Continue to use older NPM packages until you're ready to upgrade your Angular version. +When we add support for a new Angular version, our npm package becomes incompatible with down-level Angular versions. Continue to use older npm packages until you're ready to upgrade your Angular version. -## Getting started +## Get started -Install npm package: +Install an npm package: ```bash npm install @microsoft/applicationinsights-angularplugin-js @microsoft/applicationinsights-web --save export class AppComponent { } ``` -To track uncaught exceptions, setup ApplicationinsightsAngularpluginErrorService in `app.module.ts`: +To track uncaught exceptions, set up `ApplicationinsightsAngularpluginErrorService` in `app.module.ts`: ```js import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applicationinsights-angularplugin-js'; import { ApplicationinsightsAngularpluginErrorService } from '@microsoft/applica export class AppModule { } ``` -## Enable Correlation +## Enable correlation -Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. +Correlation generates and sends data that enables distributed tracing and powers [Application Map](../app/app-map.md), the [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. -In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). +In JavaScript, correlation is turned off by default to minimize the telemetry we send by default. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing). ### Route tracking -The Angular Plugin automatically tracks route changes and collects other Angular specific telemetry. +The Angular plug-in automatically tracks route changes and collects other Angular-specific telemetry. > [!NOTE]-> `enableAutoRouteTracking` should be set to `false` if it set to true then when the route changes duplicate PageViews may be sent. +> Set `enableAutoRouteTracking` to `false`. 
If it's set to `true`, when the route changes, duplicate `PageViews` might be sent. ### PageView -If a custom `PageView` duration is not provided, `PageView` duration defaults to a value of 0. +If a custom `PageView` duration isn't provided, the `PageView` duration defaults to a value of `0`. ## Next steps -- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md)-- [Angular plugin on GitHub](https://github.com/microsoft/applicationinsights-angularplugin-js)+- To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md). +- See the [Angular plug-in on GitHub](https://github.com/microsoft/applicationinsights-angularplugin-js). |
azure-monitor | Kubernetes Codeless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/kubernetes-codeless.md | Title: Monitor applications on Azure Kubernetes Service (AKS) with Application Insights - Azure Monitor | Microsoft Docs -description: Azure Monitor seamlessly integrates with your application running on Kubernetes, and allows you to spot the problems with your apps in no time. + Title: Monitor applications on AKS with Application Insights - Azure Monitor | Microsoft Docs +description: Azure Monitor integrates seamlessly with your application running on Azure Kubernetes Service and allows you to spot the problems with your apps quickly. Last updated 11/15/2022 -> Currently you can enable monitoring for your Java apps running on Kubernetes without instrumenting your code - use the [Java standalone agent](./opentelemetry-enable.md?tabs=java). -> While the solution to seamlessly enabling application monitoring is in the works for other languages, use the SDKs to monitor your apps running on AKS: [ASP.NET Core](./asp-net-core.md), [ASP.NET](./asp-net.md), [Node.js](./nodejs.md), [JavaScript](./javascript.md), and [Python](./opencensus-python.md). +> Currently, you can enable monitoring for your Java apps running on Azure Kubernetes Service (AKS) without instrumenting your code by using the [Java standalone agent](./opentelemetry-enable.md?tabs=java). +> While the solution to seamlessly enable application monitoring is in process for other languages, use the SDKs to monitor your apps running on AKS. Use [ASP.NET Core](./asp-net-core.md), [ASP.NET](./asp-net.md), [Node.js](./nodejs.md), [JavaScript](./javascript.md), and [Python](./opencensus-python.md). ## Application monitoring without instrumenting the code-Currently, only Java lets you enable application monitoring without instrumenting the code. To monitor applications in other languages use the SDKs. +Currently, only Java lets you enable application monitoring without instrumenting the code. To monitor applications in other languages, use the SDKs. -For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). +For a list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). ## Java-Once enabled, the Java agent will automatically collect a multitude of requests, dependencies, logs, and metrics from the most widely used libraries and frameworks. +After the Java agent is enabled, it automatically collects a multitude of requests, dependencies, logs, and metrics from the most widely used libraries and frameworks. -Follow [the detailed instructions](./opentelemetry-enable.md?tabs=java) to monitor your Java apps running in Kubernetes apps, as well as other environments. +Follow [the detailed instructions](./opentelemetry-enable.md?tabs=java) to monitor your Java apps running in Kubernetes apps and other environments. ## Other languages -For the applications in other languages we currently recommend using the SDKs: +For the applications in other languages, we currently recommend using the SDKs: * [ASP.NET Core](./asp-net-core.md) * [ASP.NET](./asp-net.md) * [Node.js](./nodejs.md) For the applications in other languages we currently recommend using the SDKs: ## Troubleshooting +Troubleshoot the following issue. 
+ [!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)] ## Next steps -* Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md) -* Get an overview of [Distributed Tracing](./distributed-tracing.md) and see what [Application Map](./app-map.md?tabs=net) can do for your business +* Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md). +* Get an overview of [distributed tracing](./distributed-tracing.md) and see what [Application Map](./app-map.md?tabs=net) can do for your business. |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | Last updated 02/09/2023 -# Monitoring Azure Functions with Azure Monitor Application Insights +# Monitor Azure Functions with Azure Monitor Application Insights -[Azure Functions](../../azure-functions/functions-overview.md) offers built-in integration with Azure Application Insights to monitor functions. For languages other than .NET and .NET Core, other language-specific workers/extensions are needed to get the full benefits of distributed tracing. +[Azure Functions](../../azure-functions/functions-overview.md) offers built-in integration with Application Insights to monitor functions. For languages other than .NET and .NET Core, other language-specific workers/extensions are needed to get the full benefits of distributed tracing. -Application Insights collects log, performance, and error data, and automatically detects performance anomalies. Application Insights includes powerful analytics tools to help you diagnose issues and to understand how your functions are used. When you have the visibility into your application data, you can continuously improve performance and usability. You can even use Application Insights during local function app project development. +Application Insights collects log, performance, and error data and automatically detects performance anomalies. Application Insights includes powerful analytics tools to help you diagnose issues and understand how your functions are used. When you have visibility into your application data, you can continually improve performance and usability. You can even use Application Insights during local function app project development. -The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid connection string to connect your function app to an Application Insights resource. The connection string should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have a connection string, you can set it manually. For more information, read more about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd) and [connection strings](sdk-connection-string.md). +The required Application Insights instrumentation is built into Azure Functions. All you need is a valid connection string to connect your function app to an Application Insights resource. The connection string should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have a connection string, you can set it manually. For more information, see [Monitor executions in Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd) and [Connection strings](sdk-connection-string.md). [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] -For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). +For a list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers). 
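For .NET in-process functions, no extra SDK calls are needed once the connection string is set. As an illustration only (the function name, trigger, and message are hypothetical), traces written through `ILogger` flow to Application Insights alongside the request telemetry that Functions already captures:

```csharp
// Minimal sketch, assuming the in-process C# model and that the
// APPLICATIONINSIGHTS_CONNECTION_STRING application setting is configured.
// The function name and message are illustrative only.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    [FunctionName("HelloFunction")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Traces written through ILogger appear in Application Insights,
        // correlated with the request telemetry that Functions records for you.
        log.LogInformation("HelloFunction processed a request.");
        return new OkObjectResult("Hello from a monitored function.");
    }
}
```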
-## Distributed tracing for Java applications (public preview) +## Distributed tracing for Java applications (preview) -> [!IMPORTANT] -> This feature is currently in public preview for Java Azure Functions both Windows and Linux +This feature is currently in public preview for Java Azure Functions for both Windows and Linux. > [!Note]-> This feature used to have a 8-9 second cold startup implication, which has been reduced to less than 1 sec. If you were an early adopter of this feature (i.e. prior to Feb 2023), then review the troubleshooting section to update to the current version and benefit from the new faster startup. +> This feature used to have an 8- to 9-second cold startup implication, which has been reduced to less than 1 second. If you were an early adopter of this feature (for example, prior to February 2023), review the "Troubleshooting" section to update to the current version and benefit from the new faster startup. -To view more data from your Java-based Azure Functions applications than is [collected by default](../../azure-functions/functions-monitoring.md?tabs=cmd), you can enable the [Application Insights Java 3.x agent](./java-in-process-agent.md). This agent allows Application Insights to automatically collect and correlate dependencies, logs, and metrics from popular libraries and Azure SDKs, in addition to the request telemetry already captured by Functions. +To view more data from your Java-based Azure Functions applications than is [collected by default](../../azure-functions/functions-monitoring.md?tabs=cmd), enable the [Application Insights Java 3.x agent](./java-in-process-agent.md). This agent allows Application Insights to automatically collect and correlate dependencies, logs, and metrics from popular libraries and Azure SDKs. This telemetry is in addition to the request telemetry already captured by Functions. -By using the application map and having a more complete view of end-to-end transactions, you can better diagnose issues and see a topological view of how systems interact, along with data on average performance and error rates. You have more data for end-to-end diagnostics and the ability to use the application map to easily find the root cause of reliability issues and performance bottlenecks on a per request basis. +By using the application map and having a more complete view of end-to-end transactions, you can better diagnose issues. You have a topological view of how systems interact along with data on average performance and error rates. You also have more data for end-to-end diagnostics. You can use the application map to easily find the root cause of reliability issues and performance bottlenecks on a per-request basis. -For more advanced use cases, you're able to modify telemetry (add spans, update span status, add span attributes) or send custom telemetry using standard APIs. +For more advanced use cases, you can modify telemetry by adding spans, updating span status, and adding span attributes. You can also send custom telemetry by using standard APIs. -### How to enable distributed tracing for Java Function apps +### Enable distributed tracing for Java function apps -Navigate to the functions app Overview pane and go to configurations. Under Application Settings, select "+ New application setting". +On the function app **Overview** pane, go to **Configuration**. Under **Application settings**, select **New application setting**. 
> [!div class="mx-imgBorder"]->  +>  -Add the following application settings with below values, then select Save on the upper left. DONE! +Add application settings with the following values and select **Save**. ``` APPLICATIONINSIGHTS_ENABLE_AGENT: true APPLICATIONINSIGHTS_ENABLE_AGENT: true ### Troubleshooting -Your Java Functions may have slow startup times if you adopted this feature before Feb 2023. Follow the steps to fix the issue. +Your Java functions might have slow startup times if you adopted this feature before February 2023. Follow the steps to fix the issue. #### Windows -1. Check to see if the following settings exist and remove them. +1. Check to see if the following settings exist and remove them: -``` -XDT_MicrosoftApplicationInsights_Java -> 1 -ApplicationInsightsAgent_EXTENSION_VERSION -> ~2 -``` --2. Enable the latest version by adding this setting. + ``` + XDT_MicrosoftApplicationInsights_Java -> 1 + ApplicationInsightsAgent_EXTENSION_VERSION -> ~2 + ``` -``` -APPLICATIONINSIGHTS_ENABLE_AGENT: true -``` +1. Enable the latest version by adding this setting: + + ``` + APPLICATIONINSIGHTS_ENABLE_AGENT: true + ``` #### Linux Dedicated/Premium -1. Check to see if the following settings exist and remove it. +1. Check to see if the following settings exist and remove them: -``` -ApplicationInsightsAgent_EXTENSION_VERSION -> ~3 -``` --2. Enable the latest version by adding this setting. + ``` + ApplicationInsightsAgent_EXTENSION_VERSION -> ~3 + ``` -``` -APPLICATIONINSIGHTS_ENABLE_AGENT: true -``` +1. Enable the latest version by adding this setting: + + ``` + APPLICATIONINSIGHTS_ENABLE_AGENT: true + ``` > [!NOTE]-> If the latest version of the Application Insights Java agent isn't available in Azure Function, you can upload it manually by following [these instructions](https://github.com/Azure/azure-functions-java-worker/wiki/Distributed-Tracing-for-Java-Azure-Functions#customize-distribute-agent). +> If the latest version of the Application Insights Java agent isn't available in Azure Functions, upload it manually by following [these instructions](https://github.com/Azure/azure-functions-java-worker/wiki/Distributed-Tracing-for-Java-Azure-Functions#customize-distribute-agent). [!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)] -## Distributed tracing for Python Function apps +## Distributed tracing for Python function apps -To collect custom telemetry from services such as Redis, Memcached, MongoDB, and more, you can use the [OpenCensus Python Extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure) and [log your telemetry](../../azure-functions/functions-reference-python.md?tabs=azurecli-linux%2capplication-level#log-custom-telemetry). You can find the list of supported services [here](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib). +To collect custom telemetry from services such as Redis, Memcached, and MongoDB, use the [OpenCensus Python extension](https://github.com/census-ecosystem/opencensus-python-extensions-azure) and [log your telemetry](../../azure-functions/functions-reference-python.md?tabs=azurecli-linux%2capplication-level#log-custom-telemetry). You can find the list of supported services in this [GitHub folder](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib). 
-## Next Steps +## Next steps -* Read more instructions and information about monitoring [Monitoring Azure Functions](../../azure-functions/functions-monitoring.md) -* Get an overview of [Distributed Tracing](./distributed-tracing.md) -* See what [Application Map](./app-map.md?tabs=net) can do for your business -* Read about [requests and dependencies for Java apps](./java-in-process-agent.md) -* Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md) +* Read more instructions and information about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md). +* Get an overview of [distributed tracing](./distributed-tracing.md). +* See what [Application Map](./app-map.md?tabs=net) can do for your business. +* Read about [requests and dependencies for Java apps](./java-in-process-agent.md). +* Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md). |
azure-monitor | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md | These articles provide detailed information about each of the main steps you'll | Article | Description | |:|:|-| [Planning](best-practices-plan.md) | Things that you should consider before starting your implementation. Includes design decisions and information about your organization and requirements that you should gather. | -| [Configure data collection](best-practices-data-collection.md) | Tasks required to collect monitoring data from your Azure and hybrid applications and resources. | -| [Analysis and visualizations](best-practices-analysis.md) | Standard features and additional visualizations that you can create to analyze collected monitoring data. | -| [Alerts and automated responses](best-practices-alerts.md) | Configure notifications and processes that are automatically triggered when an alert is created. | +| [Plan your implementation](best-practices-plan.md) |Things that you should consider before starting your implementation. Includes design decisions and information about your organization and requirements that you should gather. | +| [Configure data collection](best-practices-data-collection.md) |Tasks required to collect monitoring data from your Azure and hybrid applications and resources. | +| [Analysis and visualizations](best-practices-analysis.md) |Get to know the standard features and additional visualizations that you can create to analyze collected monitoring data. | +| [Configure alerts and automated responses](best-practices-alerts.md) |Configure notifications and processes that are automatically triggered when an alert is fired. | | [Optimize costs](best-practices-cost.md) | Reduce your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. | |
azure-monitor | Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md | The following table lists the available curated visualizations and information a |Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description | |:--|:--|:--|:--|-|Compute|||| +|**Compute**|||| | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. | | [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |-|Networking|||| +|**Networking**|||| | [Azure Network Insights](../../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by searching for your website name. |-|Storage|||| +|**Storage**|||| | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | | [Azure Backup](../../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |-|Databases|||| +|**Databases**|||| | [Azure Cosmos DB Insights](../../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | | [Azure Monitor for Azure Cache for Redis (preview)](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. 
|+|**Analytics**|||| | [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. | | [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |-|Security|||| +|**Security**|||| | [Azure Key Vault Insights (preview)](../../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |-|Monitor|||| +|**Monitor**|||| | [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. | | [Azure activity Log Insights](../essentials/activity-log-insights.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |- | [Azure Monitor for Resource Groups](resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. | -|Integration|||| - | [Azure Service Bus Insights](../../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. | +| [Azure Monitor for Resource Groups](resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. 
| +|**Integration**|||| +| [Azure Service Bus Insights](../../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. | [Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |-|Workloads|||| +|**Workloads**|||| | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. | | [Azure Monitor for SAP solutions](../../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | Preview | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. |-|Other|||| - | [Azure Virtual Desktop Insights](../../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. | - | [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. | -|Not in Azure portal Insight hub|||| +|**Other**|||| +| [Azure Virtual Desktop Insights](../../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. 
| +| [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. | +| [Windows Update for Business](/windows/deployment/update/wufb-reports-overview) | GA | [Yes](https://ms.portal.azure.com/#view/AppInsightsExtension/WorkbookViewerBlade/Type/updatecompliance-insights/ComponentId/Azure%20Monitor/GalleryResourceType/Azure%20Monitor/ConfigurationId/community-Workbooks%2FUpdateCompliance%2FUpdateComplianceHub) | Detailed deployment monitoring, compliance assessment and failure troubleshooting for all Windows 10/11 devices.| +|**Not in Azure portal Insight hub**|||| | [Azure Monitor Workbooks for Azure Active Directory](../../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | General availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. | | [Azure HDInsight](../../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.|-|Analytics|||| +++ ## Next steps |
azure-monitor | Analyze Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md | let freeTables = dynamic([ Usage | where DataType !in (freeTables) | where TimeGenerated > ago(30d) -| where IsBillable = false +| where IsBillable == false | summarize MonthlyPotentialUnderbilledGB=sum(Quantity)/1000 by DataType ``` |
azure-monitor | Cross Workspace Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md | Title: Query across resources with Azure Monitor | Microsoft Docs -description: This article describes how you can query against resources from multiple workspaces and App Insights app in your subscription. +description: This article describes how you can query against resources from multiple workspaces and an Application Insights app in your subscription. Last updated 04/28/2022 # Create a log query across multiple workspaces and apps in Azure Monitor -Azure Monitor Logs support querying across multiple Log Analytics workspaces and Application Insights apps in the same resource group, another resource group, or another subscription. This provides you with a system-wide view of your data. +Azure Monitor Logs support querying across multiple Log Analytics workspaces and Application Insights apps in the same resource group, another resource group, or another subscription. This capability provides you with a systemwide view of your data. If you manage subscriptions in other Azure Active Directory (Azure AD) tenants through [Azure Lighthouse](../../lighthouse/overview.md), you can include [Log Analytics workspaces created in those customer tenants](../../lighthouse/how-to/monitor-at-scale.md) in your queries. -There are two methods to query data that is stored in multiple workspace and apps: +There are two methods to query data that's stored in multiple workspaces and apps: -1. Explicitly by specifying the workspace and app details. This technique is detailed in this article. -2. Implicitly using [resource-context queries](manage-access.md#access-mode). When you query in the context of a specific resource, resource group or a subscription, the relevant data will be fetched from all workspaces that contains data for these resources. Application Insights data that is stored in apps, will not be fetched. +* Explicitly by specifying the workspace and app information. This technique is used in this article. +* Implicitly by using [resource-context queries](manage-access.md#access-mode). When you query in the context of a specific resource, resource group, or a subscription, the relevant data will be fetched from all workspaces that contain data for these resources. Application Insights data that's stored in apps won't be fetched. > [!IMPORTANT]-> If you are using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the workspace() expression to write a query that includes applications in multiple workspaces. For multiple applications in the same workspace, you don't need a cross workspace query. +> If you're using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the `workspace()` expression to write a query that includes applications in multiple workspaces. For multiple applications in the same workspace, you don't need a cross-workspace query. -## Cross-resource query limits +## Cross-resource query limits * The number of Application Insights resources and Log Analytics workspaces that you can include in a single query is limited to 100. * Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). 
If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md).-* References to cross resource such as another workspace, should be explicit and cannot be parameterized. See [Identifying workspace resources](#identifying-workspace-resources) for examples. +* References to a cross resource, such as another workspace, should be explicit and can't be parameterized. See [Identify workspace resources](#identify-workspace-resources) for examples. +## Query across Log Analytics workspaces and from Application Insights +To reference another workspace in your query, use the [workspace](../logs/workspace-expression.md) identifier. For an app from Application Insights, use the [app](./app-expression.md) identifier. -## Querying across Log Analytics workspaces and from Application Insights -To reference another workspace in your query, use the [*workspace*](../logs/workspace-expression.md) identifier, and for an app from Application Insights, use the [*app*](./app-expression.md) identifier. +### Identify workspace resources +The following examples demonstrate queries across Log Analytics workspaces to return summarized counts of logs from the Update table on a workspace named `contosoretail-it`. -### Identifying workspace resources -The following examples demonstrate queries across Log Analytics workspaces to return summarized counts of logs from the Update table on a workspace named *contosoretail-it*. +You can identify a workspace in one of several ways: -Identifying a workspace can be accomplished one of several ways: --* Resource name - is a human-readable name of the workspace, sometimes referred to as *component name*. +* **Resource name**: This human-readable name of the workspace is sometimes referred to as the *component name*. >[!IMPORTANT]- >Because app and workspace names are not unique, this identifier might be ambiguous. It's recommended that reference is by Qualified name, Workspace ID, or Azure Resource ID. + >Because app and workspace names aren't unique, this identifier might be ambiguous. We recommend that the reference uses a qualified name, workspace ID, or Azure Resource ID. `workspace("contosoretail-it").Update | count` -* Qualified name - is the "full name" of the workspace, composed of the subscription name, resource group, and component name in this format: *subscriptionName/resourceGroup/componentName*. +* **Qualified name**: This "full name" of the workspace is composed of the subscription name, resource group, and component name in the format *subscriptionName/resourceGroup/componentName*. `workspace('contoso/contosoretail/contosoretail-it').Update | count` >[!NOTE]- >Because Azure subscription names are not unique, this identifier might be ambiguous. + >Because Azure subscription names aren't unique, this identifier might be ambiguous. -* Workspace ID - A workspace ID is the unique, immutable, identifier assigned to each workspace represented as a globally unique identifier (GUID). +* **Workspace ID**: A workspace ID is the unique, immutable, identifier assigned to each workspace represented as a globally unique identifier (GUID). `workspace("b459b4u5-912x-46d5-9cb1-p43069212nb4").Update | count` -* Azure Resource ID ΓÇô the Azure-defined unique identity of the workspace. You use the Resource ID when the resource name is ambiguous. For workspaces, the format is: */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/workspaces/componentName*. 
+* **Azure Resource ID**: This ID is the Azure-defined unique identity of the workspace. You use the Resource ID when the resource name is ambiguous. For workspaces, the format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/workspaces/componentName*. For example:+ ``` workspace("/subscriptions/e427519-5645-8x4e-1v67-3b84b59a1985/resourcegroups/ContosoAzureHQ/providers/Microsoft.OperationalInsights/workspaces/contosoretail-it").Update | count ``` -### Identifying an application -The following examples return a summarized count of requests made against an app named *fabrikamapp* in Application Insights. +### Identify an application +The following examples return a summarized count of requests made against an app named *fabrikamapp* in Application Insights. -Identifying an application in Application Insights can be accomplished with the *app(Identifier)* expression. The *Identifier* argument specifies the app using one of the following: +You can identify an application in Application Insights with the `app(Identifier)` expression. The `Identifier` argument specifies the app by using one of the following names or IDs: -* Resource name - is a human readable name of the app, sometimes referred to as the *component name*. +* **Resource name**: This human readable name of the app is sometimes referred to as the *component name*. `app("fabrikamapp")` >[!NOTE] >Identifying an application by name assumes uniqueness across all accessible subscriptions. If you have multiple applications with the specified name, the query fails because of the ambiguity. In this case, you must use one of the other identifiers. -* Qualified name - is the ΓÇ£full nameΓÇ¥ of the app, composed of the subscription name, resource group, and component name in this format: *subscriptionName/resourceGroup/componentName*. +* **Qualified name**: This "full name" of the app is composed of the subscription name, resource group, and component name in the format *subscriptionName/resourceGroup/componentName*. `app("AI-Prototype/Fabrikam/fabrikamapp").requests | count` >[!NOTE]- >Because Azure subscription names are not unique, this identifier might be ambiguous. + >Because Azure subscription names aren't unique, this identifier might be ambiguous. > -* ID - the app GUID of the application. +* **ID**: This ID is the app GUID of the application. `app("b459b4f6-912x-46d5-9cb1-b43069212ab4").requests | count` -* Azure Resource ID - the Azure-defined unique identity of the app. You use the Resource ID when the resource name is ambiguous. The format is: */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/components/componentName*. +* **Azure Resource ID**: This ID is the Azure-defined unique identity of the app. You use the resource ID when the resource name is ambiguous. The format is */subscriptions/subscriptionId/resourcegroups/resourceGroup/providers/microsoft.OperationalInsights/components/componentName*. For example:+ ``` app("/subscriptions/b459b4f6-912x-46d5-9cb1-b43069212ab4/resourcegroups/Fabrikam/providers/microsoft.insights/components/fabrikamapp").requests | count ``` -### Performing a query across multiple resources -You can query multiple resources from any of your resource instances, these can be workspaces and apps combined. - -Example for query across two workspaces: +### Perform a query across multiple resources +You can query multiple resources from any of your resource instances. 
These resources can be workspaces and apps combined. ++Example for a query across two workspaces: ``` union Update, workspace("contosoretail-it").Update, workspace("b459b4u5-912x-46d5-9cb1-p43069212nb4").Update union Update, workspace("contosoretail-it").Update, workspace("b459b4u5-912x-46d | summarize dcount(Computer) by Classification ``` -## Using cross-resource query for multiple resources -When using cross-resource queries to correlate data from multiple Log Analytics workspaces and Application Insights resources, the query can become complex and difficult to maintain. You should leverage [functions in Azure Monitor log queries](./functions.md) to separate the query logic from the scoping of the query resources, which simplifies the query structure. The following example demonstrates how you can monitor multiple Application Insights resources and visualize the count of failed requests by application name. +## Use a cross-resource query for multiple resources +When you use cross-resource queries to correlate data from multiple Log Analytics workspaces and Application Insights resources, the query can become complex and difficult to maintain. You should make use of [functions in Azure Monitor log queries](./functions.md) to separate the query logic from the scoping of the query resources. This method simplifies the query structure. The following example demonstrates how you can monitor multiple Application Insights resources and visualize the count of failed requests by application name. -Create a query like the following that references the scope of Application Insights resources. The `withsource= SourceApp` command adds a column that designates the application name that sent the log. [Save the query as function](./functions.md#create-a-function) with the alias _applicationsScoping_. +Create a query like the following example that references the scope of Application Insights resources. The `withsource= SourceApp` command adds a column that designates the application name that sent the log. [Save the query as a function](./functions.md#create-a-function) with the alias `applicationsScoping`. ```Kusto // crossResource function that scopes my Application Insights resources app('Contoso-app4').requests, app('Contoso-app5').requests ``` ---You can now [use this function](./functions.md#use-a-function) in a cross-resource query like the following. The function alias _applicationsScoping_ returns the union of the requests table from all the defined applications. The query then filters for failed requests and visualizes the trends by application. The _parse_ operator is optional in this example. It extracts the application name from _SourceApp_ property. +You can now [use this function](./functions.md#use-a-function) in a cross-resource query like the following example. The function alias `applicationsScoping` returns the union of the requests table from all the defined applications. The query then filters for failed requests and visualizes the trends by application. The `parse` operator is optional in this example. It extracts the application name from the `SourceApp` property. ```Kusto applicationsScoping applicationsScoping ``` >[!NOTE]-> This method can't be used with log alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isnΓÇÖt supported. 
If you prefer to use function for resource scoping in log alerts, you need to edit the alert rule in the portal or with a Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log alert query. -+> This method can't be used with log alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use a function for resource scoping in log alerts, you must edit the alert rule in the portal or with an Azure Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log alert query. - + ## Next steps -- Review [Analyze log data in Azure Monitor](./log-query-overview.md) for an overview of log queries and how Azure Monitor log data is structured.+See [Analyze log data in Azure Monitor](./log-query-overview.md) for an overview of log queries and how Azure Monitor log data is structured. |
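As a quick, hedged illustration of the cross-resource pattern the updated article describes, the sketch below unions a Log Analytics workspace table with an Application Insights table and counts records per source. The names `contosoretail-it` and `fabrikamapp` are the placeholder resource names used in the article's own examples, not real resources.

```Kusto
// Minimal cross-resource sketch; "contosoretail-it" and "fabrikamapp" are placeholder names.
union withsource = SourceResource
    workspace("contosoretail-it").Update,
    app("fabrikamapp").requests
| summarize Records = count() by SourceResource
```

Wrapping a scoping union like this in a saved function, as the article suggests, keeps the list of resources out of each individual query.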
azure-monitor | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md | Customer-Managed key is provided on dedicated cluster and these operations are r - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters", you can still create the cluster without Double encryption, by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body. - Double encryption settings cannot be changed after the cluster has been created. -Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster. +Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster. 

- Customer-managed key encryption applies to newly ingested data after the configuration time. Data that was ingested prior to the configuration, remains encrypted with Microsoft key. You can query data ingested before and after the Customer-managed key configuration seamlessly. |
azure-monitor | Delete Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/delete-workspace.md | Title: Delete and recover Azure Log Analytics workspace | Microsoft Docs + Title: Delete and recover an Azure Log Analytics workspace | Microsoft Docs description: Learn how to delete your Log Analytics workspace if you created one in a personal subscription or restructure your workspace model. -# Delete and recover Azure Log Analytics workspace +# Delete and recover an Azure Log Analytics workspace -This article explains the concept of Azure Log Analytics workspace soft-delete and how to recover deleted workspace. +This article explains the concept of Azure Log Analytics workspace soft-delete and how to recover a deleted workspace. -## Considerations when deleting a workspace +## Considerations when you delete a workspace -When you delete a Log Analytics workspace, a soft-delete operation is performed to allow the recovery of the workspace including its data and connected agents within 14 days, whether the deletion was accidental or intentional. -After the soft-delete period, the workspace resource and its data are non-recoverable and queued for purged completely within 30 days. The workspace name is 'released' and you can use it to create a new workspace. +When you delete a Log Analytics workspace, a soft-delete operation is performed to allow the recovery of the workspace, including its data and connected agents, within 14 days. This process occurs whether the deletion was accidental or intentional. ++After the soft-delete period, the workspace resource and its data are nonrecoverable and queued for purge completely within 30 days. The workspace name is released and you can use it to create a new workspace. > [!NOTE] > If you want to override the soft-delete behavior and permanently delete your workspace, follow the steps in [Permanent workspace delete](#permanent-workspace-delete). -You want to exercise caution when you delete a workspace because there might be important data and configuration that may negatively impact your service operation. Review what agents, solutions and other Azure services store their data in Log Analytics, such as: +Be careful when you delete a workspace because there might be important data and configuration that might negatively affect your service operation. Review what agents, solutions, and other Azure services store their data in Log Analytics, such as: -* Management solutions -* Azure Automation -* Agents running on Windows and Linux virtual machines -* Agents running on Windows and Linux computers in your environment -* System Center Operations Manager +* Management solutions. +* Azure Automation. +* Agents running on Windows and Linux virtual machines. +* Agents running on Windows and Linux computers in your environment. +* System Center Operations Manager. -The soft-delete operation deletes the workspace resource and any associated users' permission is broken. If users are associated with other workspaces, then they can continue using Log Analytics with those other workspaces. +The soft-delete operation deletes the workspace resource, and any associated users' permission is broken. If users are associated with other workspaces, they can continue using Log Analytics with those other workspaces. ## Soft-delete behavior -The workspace delete operation removes the workspace Resource Manager resource, but its configuration and data are kept for 14 days, while giving the appearance that the workspace is deleted. 
Any agents and System Center Operations Manager management groups configured to report to the workspace remain in an orphaned state during the soft-delete period. The service further provides a mechanism for recovering the deleted workspace including its data and connected resources, essentially undoing the deletion. +The workspace delete operation removes the workspace Azure Resource Manager resource. Its configuration and data are kept for 14 days, although it will look as if the workspace is deleted. Any agents and System Center Operations Manager management groups configured to report to the workspace remain in an orphaned state during the soft-delete period. The service provides a mechanism for recovering the deleted workspace, including its data and connected resources, essentially undoing the deletion. -> [!NOTE] -> Installed solutions and linked services like your Azure Automation account are permanently removed from the workspace at deletion time and can't be recovered. These should be reconfigured after the recovery operation to bring the workspace to its previously configured state. +> [!NOTE] +> Installed solutions and linked services like your Azure Automation account are permanently removed from the workspace at deletion time and can't be recovered. These resources should be reconfigured after the recovery operation to bring the workspace back to its previously configured state. -You can delete a workspace using [PowerShell](/powershell/module/azurerm.operationalinsights/remove-azurermoperationalinsightsworkspace), [REST API](/rest/api/loganalytics/workspaces/delete), or in the [Azure portal](https://portal.azure.com). +You can delete a workspace by using [PowerShell](/powershell/module/azurerm.operationalinsights/remove-azurermoperationalinsightsworkspace), the [REST API](/rest/api/loganalytics/workspaces/delete), or the [Azure portal](https://portal.azure.com). ### Azure portal -1. Sign in to the [Azure portal](https://portal.azure.com). -2. In the Azure portal, select **All services**. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**. -3. In the list of Log Analytics workspaces, select a workspace and then click **Delete** from the top of the middle pane. -4. A confirmation page appears that shows the data ingestion to the workspace over the past week. -5. If you want to permanently delete the workspace removing the option to later recover it, select the **Delete the workspace permanently** checkbox. -6. Type in the name of the workspace to confirm and then click **Delete**. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the Azure portal, select **All services**. In the list of resources, enter **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**. +1. In the list of Log Analytics workspaces, select a workspace. Select **Delete**. +1. A confirmation page appears that shows the data ingestion to the workspace over the past week. +1. If you want to permanently delete the workspace and remove the option to later recover it, select the **Delete the workspace permanently** checkbox. +1. Enter the name of the workspace to confirm and then select **Delete**. 
-  +  ### PowerShell ```PowerShell PS C:\>Remove-AzOperationalInsightsWorkspace -ResourceGroupName "resource-group- ``` ## Permanent workspace delete-The soft-delete method may not fit in some scenarios such as development and testing, where you need to repeat a deployment with the same settings and workspace name. In such cases you can permanently delete your workspace and "override" the soft-delete period. The permanent workspace delete operation releases the workspace name and you can create a new workspace using the same name. +The soft-delete method might not fit in some scenarios, such as development and testing, where you need to repeat a deployment with the same settings and workspace name. In such cases, you can permanently delete your workspace and "override" the soft-delete period. The permanent workspace delete operation releases the workspace name. You can create a new workspace by using the same name. > [!IMPORTANT]-> Use permanent workspace delete operation with caution since its irreversible and you won't be able to recover your workspace and its data. +> Use the permanent workspace delete operation with caution because it's irreversible. You won't be able to recover your workspace and its data. -To permanently delete a workspace using the Azure portal, select the **Delete the workspace permanently** checkbox before clicking the **Delete** button. +To permanently delete a workspace by using the Azure portal, select the **Delete the workspace permanently** checkbox before you select **Delete**. -To permanently delete a workspace using PowerShell, add '-ForceDelete' tag to permanently delete your workspace. The '-ForceDelete' option is currently available with Az.OperationalInsights 2.3.0 or higher. +To permanently delete a workspace by using PowerShell, add a `-ForceDelete` tag to permanently delete your workspace. The `-ForceDelete` option is currently available with Az.OperationalInsights 2.3.0 or higher. ```powershell PS C:\>Remove-AzOperationalInsightsWorkspace -ResourceGroupName "resource-group-name" -Name "workspace-name" -ForceDelete ``` -## Recover workspace -When you delete a Log Analytics workspace accidentally or intentionally, the service places the workspace in a soft-delete state making it inaccessible to any operation. The name of the deleted workspace is preserved during the soft-delete period and can't be used for creating a new workspace. After the soft-delete period, the workspace is non-recoverable, it is scheduled for permanent deletion and its name it released and can be used for creating a new workspace. +## Recover a workspace +When you delete a Log Analytics workspace accidentally or intentionally, the service places the workspace in a soft-delete state and makes it inaccessible to any operation. The name of the deleted workspace is preserved during the soft-delete period. It can't be used to create a new workspace. After the soft-delete period, the workspace is nonrecoverable, it's scheduled for permanent deletion, and its name is released and can be used to create a new workspace. -You can recover your workspace during the soft-delete period including its data, configuration and connected agents. You need to have Contributor permissions to the subscription and resource group where the workspace was located before the soft-delete operation. 
The workspace recovery is performed by re-creating the Log Analytics workspace with the details of the deleted workspace including: +You can recover your workspace during the soft-delete period, including its data, configuration, and connected agents. You must have Contributor permissions to the subscription and resource group where the workspace was located before the soft-delete operation. The workspace recovery is performed by re-creating the Log Analytics workspace with the details of the deleted workspace, including: - Subscription ID-- Resource Group name+- Resource group name - Workspace name - Region > [!IMPORTANT]-> If your workspace was deleted as part of resource group delete operation, you must first re-create the resource group. +> If your workspace was deleted as part of a resource group delete operation, you must first re-create the resource group. ### Azure portal -1. Sign in to the [Azure portal](https://portal.azure.com). -2. In the Azure portal, select **All services**. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**. You see the list of workspaces you have in the selected scope. -3. Click **Recover** on the top left menu to open a page with workspaces in soft-delete state that can be recovered. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the Azure portal, select **All services**. In the list of resources, enter **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**. You see the list of workspaces you have in the selected scope. +1. Select **Recover** on the top left menu to open a page with workspaces in a soft-delete state that can be recovered. -  +  -4. Select the workspace and click **Recover** to recover that workspace. --  +1. Select the workspace. Then select **Recover** to recover the workspace. +  ### PowerShell ```PowerShell PS C:\>Select-AzSubscription "subscription-name-the-workspace-was-in" PS C:\>Restore-AzOperationalInsightsWorkspace -ResourceGroupName "resource-group-name-the-workspace-was-in" -Name "deleted-workspace-name" -Location "region-name-the-workspace-was-in" ``` -The workspace and all its data are brought back after the recovery operation. Solutions and linked services were permanently removed from the workspace when it was deleted and these should be reconfigured to bring the workspace to its previously configured state. Some of the data may not be available for query after the workspace recovery until the associated solutions are re-installed and their schemas are added to the workspace. +The workspace and all its data are brought back after the recovery operation. However, solutions and linked services were permanently removed from the workspace when it was deleted. These resources should be reconfigured to bring the workspace to its previously configured state. Some of the data might not be available for query after the workspace recovery until the associated solutions are reinstalled and their schemas are added to the workspace. ## Troubleshooting -You must have at least *Log Analytics Contributor* permissions to delete a workspace. +You must have at least Log Analytics Contributor permissions to delete a workspace. ++* If you aren't sure if a deleted workspace is in a soft-delete state and can be recovered, select [Open recycle bin](#recover-a-workspace) on the **Log Analytics workspaces** page to see a list of soft-deleted workspaces per subscription. 
Permanently deleted workspaces aren't included in the list. +* If you get the error message "This workspace name is already in use" or "conflict" when you create a workspace, it could be because: + * The workspace name isn't available because it's being used by someone in your organization or another customer. + * The workspace was deleted in the last 14 days and its name was kept reserved for the soft-delete period. To override the soft-delete and permanently delete your workspace to create a new workspace with the same name, follow these steps to recover the workspace first and then perform a permanent delete:<br> + 1. [Recover](#recover-a-workspace) your workspace. + 1. [Permanently delete](#permanent-workspace-delete) your workspace. + 1. Create a new workspace by using the same workspace name. -* If you aren't sure if deleted workspace is in soft-delete state and can be recovered, click [Open recycle bin](#recover-workspace) in *Log Analytics workspaces* page to see a list of soft-deleted workspaces per subscription. Permanently deleted workspaces aren't included in the list. -* If you get an error message *This workspace name is already in use* or *conflict* when creating a workspace, it could be since: - * The workspace name isn't available and being used by someone in your organization, or by other customer. - * The workspace was deleted in the last 14 days and its name kept reserved for the soft-delete period. To override the soft-delete and permanently delete your workspace to create a new workspace with the same name, follow these steps to recover the workspace first and then perform permanent delete:<br> - 1. [Recover](#recover-workspace) your workspace. - 2. [Permanently delete](#permanent-workspace-delete) your workspace. - 3. Create a new workspace using the same workspace name. - - After the deletion call is successfully completed on the back end, you can restore the workspace and complete the permanent delete operation in one of the methods suggested earlier. + After the deletion call is successfully completed on the back end, you can restore the workspace and finish the permanent delete operation by using one of the methods suggested earlier. -* If you get 204 response code with *Resource not found* when deleting a workspace, a consecutive retries operations may occurred. 204 is an empty response, which usually means that the resource doesn't exist, so the delete completed without doing anything. -* If you delete your resource group and your workspace included, you can see the deleted workspace in [Open recycle bin](#recover-workspace) page, however the recovery operation will fail with error code 404 since the resource group does not exist -- Re-create your resource group and try the recovery again. +* If you get a 204 response code with "Resource not found" when you delete a workspace, consecutive retries operations might have occurred. The 204 code is an empty response, which usually means that the resource doesn't exist, so the delete finished without doing anything. +* If you deleted your resource group and your workspace was included, you can see the deleted workspace on the [Open recycle bin](#recover-a-workspace) page. The recovery operation will fail with the error code 404 because the resource group doesn't exist. Re-create your resource group and try the recovery again. |
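The delete-and-recover article above advises reviewing which agents and solutions still store data in a workspace before you delete it. One hedged way to make that review concrete from Log Analytics is a query like the following sketch, which lists the tables that received records in the last week; it assumes only the standard `TimeGenerated` column and no particular solution.

```Kusto
// Rough pre-deletion check: which tables in this workspace still received data in the last 7 days.
union withsource = SourceTable *
| where TimeGenerated > ago(7d)
| summarize LastRecord = max(TimeGenerated), Records = count() by SourceTable
| order by LastRecord desc
```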
azure-monitor | Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/functions.md | Last updated 06/22/2022 # Functions in Azure Monitor log queries-A function is a log query in Azure Monitor that can be used in other log queries as though it's a command. Functions allow developers to provide solutions to different customers and for you to reuse query logic in your own environment. This article provides details on how to use functions and how to create your own. +A function is a log query in Azure Monitor that can be used in other log queries as though it's a command. You can use functions to provide solutions to different customers and also reuse query logic in your own environment. This article describes how to use functions and how to create your own. ## Types of functions There are two types of functions in Azure Monitor: -- **Solution function:** Pre-built functions included with Azure Monitor. These are available in all Log Analytics workspaces and can't be modified.-- **Workspace functions:** Functions installed in a particular Log Analytics workspace and can be modified and controlled by the user.+- **Solution functions:** Prebuilt functions are included with Azure Monitor. These functions are available in all Log Analytics workspaces and can't be modified. +- **Workspace functions:** These functions are installed in a particular Log Analytics workspace. They can be modified and controlled by the user. -## Viewing functions -You can view solution functions and workspace functions in the current workspace from the **Functions** tab in the left pane of a Log Analytics workspace. Use the **Filter** button to filter the functions included in the list and **Group by** to change their grouping. Type a string into the **Search** box to locate a particular function. Hover over a function to view details about it including a description and parameters. +## View functions +You can view solution functions and workspace functions in the current workspace on the **Functions** tab in the left pane of a Log Analytics workspace. Use **Filter** to filter the functions included in the list. Use **Group by** to change their grouping. Enter a string in the **Search** box to locate a particular function. Hover over a function to view details about it, including a description and parameters. ## Use a function-Use a function in a query by typing its name with values for any parameters just as you would type in a command. The output of the function can either be returned as results or piped to another command. +Use a function in a query by typing its name with values for any parameters the same as you would type in a command. The output of the function can either be returned as results or piped to another command. -Add a function to the current query by double-clicking on its name or hovering over it and selecting **Use in editor**. Functions in the workspace will also be included in intellisense as you type in a query. +Add a function to the current query by double-clicking on its name or hovering over it and selecting **Use in editor**. Functions in the workspace will also be included in IntelliSense as you type in a query. -If a query requires parameters, provide them using the syntax: `function_name(param1,param2,...)`. +If a query requires parameters, provide them by using the syntax `function_name(param1,param2,...)`. ## Create a function-To create a function from the current query in the editor, select **Save** and then **Save as function**. 
+To create a function from the current query in the editor, select **Save** > **Save as function**. -Create a function with Log Analytics in the Azure portal by clicking **Save** and then providing the information in the following table. +Create a function with Log Analytics in the Azure portal by selecting **Save** and then providing the information in the following table: | Setting | Description | |:|:|-| Function Name | Name for the function. This may not include a space or any special characters. It also may not start with an underscore (_) since this character is reserved for solution functions. | -| Legacy category | User defined category to help filter and group functions. | +| Function name | Name for the function. The name may not include a space or any special characters. It also may not start with an underscore (_) because this character is reserved for solution functions. | +| Legacy category | User-defined category to help filter and group functions. | | Save as computer group | Save the query as a [computer group](computer-groups.md). |-| Parameters | Add a parameter for each variable in the function that requires a value when it's used. See [Function parameters](#function-parameters) for details. | +| Parameters | Add a parameter for each variable in the function that requires a value when it's used. For more information, see [Function parameters](#function-parameters). | ## Function parameters -You can add parameters to a function so that you can provide values for certain variables when calling it. This allows the same function to be used in different queries, each providing different values for the parameters. Parameters are defined by the following properties. +You can add parameters to a function so that you can provide values for certain variables when you call it. As a result, the same function can be used in different queries, each providing different values for the parameters. Parameters are defined by the following properties: | Setting | Description | |:|:| | Type | Data type for the value. |-| Name | Name for the parameter. This is the name that must be used in the query to replace with the parameter value. | +| Name | Name for the parameter. This name must be used in the query to replace with the parameter value. | | Default value | Value to be used for the parameter if a value isn't provided. | -Parameters are ordered as they are created with any parameters that have no default value positioned in front of those that have a default value. +Parameters are ordered as they're created. Parameters that have no default value are positioned in front of parameters that have a default value. -## Working with function code -You can view the code of a function either to gain insight into how it works or to modify the code for a workspace function. Select **Load the function code** to add the function code to the current query in the editor. If you add it to an empty query or the first line of an existing query, then it will add the function name to the tab. If it's a workspace function, then this enables the option to edit the function details. +## Work with function code +You can view the code of a function either to gain insight into how it works or to modify the code for a workspace function. Select **Load the function code** to add the function code to the current query in the editor. +If you add the function code to an empty query or the first line of an existing query, the function name is added to the tab. 
A workspace function enables the option to edit the function details. + ## Edit a function-Edit the properties or the code of a function by creating a new query and then hover over the name of the function and select **load function code**. Make any modifications that you want to the code and select **Save** and then **Edit function details**. Make any changes you want to the properties and parameters of the function before clicking **Save**. +Edit the properties or the code of a function by creating a new query. Hover over the name of the function and select **Load function code**. Make any modifications that you want to the code and select **Save**. Then select **Edit function details**. Make any changes you want to the properties and parameters of the function and select **Save**. + ## Example-The following sample function returns all events in the Azure Activity log since a particular date and that match a particular category. +The following sample function returns all events in the Azure activity log since a particular date and that match a particular category. -Start with the following query using hardcoded values. This verifies that the query works as expected. +Start with the following query by using hardcoded values to verify that the query works as expected. ```Kusto AzureActivity AzureActivity | where TimeGenerated > todatetime("2021/04/05 5:40:01.032 PM") ``` -Next, replace the hardcoded values with parameter names and then save the function by selecting **Save** and then **Save as function**. +Next, replace the hardcoded values with parameter names. Then save the function by selecting **Save** > **Save as function**. ```Kusto AzureActivity AzureActivity | where TimeGenerated > DateParam ``` - Provide the following values for the function properties. + Provide the following values for the function properties: | Property | Value | |:|:| | Function name | AzureActivityByCategory | | Legacy category | Demo functions | -Define the following parameters before saving the function. +Define the following parameters before you save the function: | Type | Name | Default value | |:|:|:| | string | CategoryParam | "Administrative" | | datetime | DateParam | | --Create a new query and view the new function by hovering over it. Note the order of the parameters since this is the order they must be specified when you use the function. +Create a new query and view the new function by hovering over it. Look at the order of the parameters. They must be specified in this order when you use the function. -Select **Use in editor** to add the new function to a query and then add values for the parameters. Note that you don't need to specify a value for CategoryParam because it has a default value. - +Select **Use in editor** to add the new function to a query. Then add values for the parameters. You don't need to specify a value for `CategoryParam` because it has a default value. ## Next steps-See other lessons for writing Azure Monitor log queries: --- [String operations](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#string-operations)-+See [String operations](/azure/data-explorer/kusto/query/samples?&pivots=azuremonitor#string-operations) for more information on how to write Azure Monitor log queries. |
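To make the parameter ordering in the functions article above concrete, here's a hedged usage sketch of the saved `AzureActivityByCategory` function. Because `DateParam` has no default value, it's expected to come first; `CategoryParam` can be omitted to fall back to its default of "Administrative".

```Kusto
// Call the saved function; DateParam (no default value) comes first.
AzureActivityByCategory(datetime(2021-04-05 17:40:01))
// Or pass both parameters explicitly:
// AzureActivityByCategory(datetime(2021-04-05 17:40:01), "Administrative")
```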
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | Authorization: Bearer <token> - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body. - Double encryption setting can't be changed after the cluster has been created. -- Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster.+- Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster. 

## Troubleshooting |
azure-monitor | Migrate Splunk To Azure Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md | The benefits of migrating to Azure Monitor include: |Splunk offering|Azure offering| |||-|Splunk Observability|[Azure Monitor](../overview.md) is an end-to-end solution for collecting, analyzing, and acting on telemetry from your cloud, multicloud, and on-premises environments, built over a powerful data ingestion pipeline that's shared with Microsoft Sentinel. Azure Monitor offers enterprises a comprehensive solution for monitoring cloud, hybrid, and on-premises environments, with [network isolation](../logs/private-link-security.md), [resilience features and protection from data center failures](../logs/availability-zones.md), [reporting](../overview.md#insights-and-curated-visualizations), and [alerts and response](../overview.md#respond-to-critical-situations) capabilities.| +|Splunk Observability|[Azure Monitor](../overview.md) is an end-to-end solution for collecting, analyzing, and acting on telemetry from your cloud, multicloud, and on-premises environments, built over a powerful data ingestion pipeline that's shared with Microsoft Sentinel. Azure Monitor offers enterprises a comprehensive solution for monitoring cloud, hybrid, and on-premises environments, with [network isolation](../logs/private-link-security.md), [resilience features and protection from data center failures](../logs/availability-zones.md), [reporting](../overview.md#insights-and-visualizations), and [alerts and response](../overview.md#respond) capabilities.| |Splunk Security|[Microsoft Sentinel](../../sentinel/overview.md) is a cloud-native solution that runs over the Azure Monitor platform to provide intelligent security analytics and threat intelligence across the enterprise.| ## Introduction to key concepts -|Azure Monitor Logs |Similar Splunk concept|Description| -|||| -|[Log Analytics workspace](../logs/log-analytics-workspace-overview.md)|Namespace|A Log Analytics workspace is an environment in which you can collect log data from all Azure and non-Azure monitored resources. The data in the workspace is available for querying and analysis, Azure Monitor features, and other Azure services. Similar to a Splunk namespace, you can manage access to the data and artifacts, such as alerts and workbooks, in your Log Analytics workspace. | -|[Table management](../logs/manage-logs-tables.md)|Indexing|Azure Monitor Logs ingests log data into tables in a managed [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) database. 
During ingestion, the service automatically indexes and timestamps the data, which means you can store various types of data and access the data quickly using Kusto Query Language (KQL) queries.<br/>Use table properties to manage the table schema, data retention and archive, and whether to store the data for occasional auditing and troubleshooting or for ongoing analysis and use by features and services.<br/>For a comparison of Splunk and Azure Data Explorer data handling and querying concepts, see [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet).|| -|[Basic and Analytics log data plans](../logs/basic-logs-configure.md)| |Azure Monitor Logs offers two log data plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytics capabilities based on your needs.<br/>The **Analytics** plan makes log data available for interactive queries and use by features and services.<br/>The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance. | -|[Archiving and quick access to archived data](../logs/data-retention-archive.md)|Data bucket states (hot, warm, cold, thawed), archiving, Dynamic Data Active Archive (DDAA) |The cost-effective archive option keeps your logs in your Log Analytics workspace and lets you access archived log data immediately, when you need it. Archive configuration changes are effective immediately because data isn't physically transferred to external storage. You can [restore archived data](../logs/restore.md) or run a [search job](../logs/search-jobs.md) to make a specific time range of archived data available for real-time analysis. | -|[Access control](../logs/manage-access.md)|Role-based user access, permissions |Role-based access control lets you define which people in your organization have access to read, write, and perform operations in a Log Analytics workspace. You can configure permissions at the workspace level, at the resource level, and at the table level, so you have granular control over specific resources and log types.| -|[Data transformations](../essentials/data-collection-transformations.md)|Transforms, field extractions |Transformations let you filter or modify incoming data before it's sent to a Log Analytics workspace. Use transformations to remove sensitive data, enrich data in your Log Analytics workspace, perform calculations, and filter out data you don't need to reduce data costs. | -|[Data collection rules](../essentials/data-collection-rule-overview.md)|Data inputs, data pipeline|Define which data to collect, how to transform that data, and where to send the data. | -|[Kusto Query Language (KQL)](/azure/kusto/query/)|Splunk Search Processing Language (SPL)|Azure Monitor Logs uses a large subset of KQL that's suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. Use the [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet) to translate your Splunk SPL knowledge to KQL. 
You can also [learn KQL with tutorials](../logs/get-started-queries.md) and [KQL training modules](/training/modules/analyze-logs-with-kql/).| ++|Azure Monitor Logs|Similar Splunk concept|Description| +|||| +|[Log Analytics workspace](../logs/log-analytics-workspace-overview.md)|Namespace|A Log Analytics workspace is an environment in which you can collect log data from all Azure and non-Azure monitored resources. The data in the workspace is available for querying and analysis, Azure Monitor features, and other Azure services. Similar to a Splunk namespace, you can manage access to the data and artifacts, such as alerts and workbooks, in your Log Analytics workspace.| +|[Table management](../logs/manage-logs-tables.md)|Indexing|Azure Monitor Logs ingests log data into tables in a managed [Azure Data Explorer](/azure/data-explorer/data-explorer-overview) database. During ingestion, the service automatically indexes and timestamps the data, which means you can store various types of data and access the data quickly using Kusto Query Language (KQL) queries.<br/>Use table properties to manage the table schema, data retention and archive, and whether to store the data for occasional auditing and troubleshooting or for ongoing analysis and use by features and services.<br/>For a comparison of Splunk and Azure Data Explorer data handling and querying concepts, see [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet). | +|[Basic and Analytics log data plans](../logs/basic-logs-configure.md)| |Azure Monitor Logs offers two log data plans that let you reduce log ingestion and retention costs and take advantage of Azure Monitor's advanced features and analytics capabilities based on your needs.<br/>The **Analytics** plan makes log data available for interactive queries and use by features and services.<br/>The **Basic** log data plan provides a low-cost way to ingest and retain logs for troubleshooting, debugging, auditing, and compliance.| +|[Archiving and quick access to archived data](../logs/data-retention-archive.md)|Data bucket states (hot, warm, cold, thawed), archiving, Dynamic Data Active Archive (DDAA)|The cost-effective archive option keeps your logs in your Log Analytics workspace and lets you access archived log data immediately, when you need it. Archive configuration changes are effective immediately because data isn't physically transferred to external storage. You can [restore archived data](../logs/restore.md) or run a [search job](../logs/search-jobs.md) to make a specific time range of archived data available for real-time analysis. | +|[Access control](../logs/manage-access.md)|Role-based user access, permissions|Role-based access control lets you define which people in your organization have access to read, write, and perform operations in a Log Analytics workspace. You can configure permissions at the workspace level, at the resource level, and at the table level, so you have granular control over specific resources and log types.| +|[Data transformations](../essentials/data-collection-transformations.md)|Transforms, field extractions|Transformations let you filter or modify incoming data before it's sent to a Log Analytics workspace. 
Use transformations to remove sensitive data, enrich data in your Log Analytics workspace, perform calculations, and filter out data you don't need to reduce data costs.| +|[Data collection rules](../essentials/data-collection-rule-overview.md)|Data inputs, data pipeline|Define which data to collect, how to transform that data, and where to send the data.| +|[Kusto Query Language (KQL)](/azure/kusto/query/)|Splunk Search Processing Language (SPL)|Azure Monitor Logs uses a large subset of KQL that's suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. Use the [Splunk to Kusto Query Language map](/azure/data-explorer/kusto/query/splunk-cheat-sheet) to translate your Splunk SPL knowledge to KQL. You can also [learn KQL with tutorials](../logs/get-started-queries.md) and [KQL training modules](/training/modules/analyze-logs-with-kql/). | |[Log Analytics](../logs/log-analytics-tutorial.md)|Splunk Web, Search app, Pivot tool|A tool in the Azure portal for editing and running log queries in Azure Monitor Logs. Log Analytics also provides a rich set of tools for exploring and visualizing data without using KQL.|-|[Cost optimization](../../azure-monitor/best-practices-cost.md)||Azure Monitor provides [tools and best practices to help you understand, monitor, and optimize your costs](../../azure-monitor/best-practices-cost.md) based on your needs. | +|[Cost optimization](../../azure-monitor/best-practices-cost.md)| |Azure Monitor provides [tools and best practices to help you understand, monitor, and optimize your costs](../../azure-monitor/best-practices-cost.md) based on your needs.| ## 1. Understand your current usage This table lists Splunk artifacts and links to guidance for setting up the equiv |Alert actions|[Action groups](../alerts/action-groups.md)| |Apps|[Azure Monitor Insights](../insights/insights-overview.md) are a set of ready-to-use, curated monitoring experiences with pre-configured data inputs, searches, alerts, and visualizations to get you started analyzing data quickly and effectively. 
| |Dashboards|[Workbooks](../visualize/workbooks-overview.md)|-|Lookups|Azure Monitor provides various ways to enrich data, including:<br>- [Data collection rules](../essentials/data-collection-rule-overview.md), which let you send data from multiple sources to a Log Analytics workspace, and perform calculations and transformations before ingesting the data.<br>- KQL operators, such as the [join operator](/data-explorer/kusto/query/joinoperator?pivots=azuremonitor), which combines data from different tables, and the [externaldata operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor), which returns data from external storage.<br>- Integration with services, such as [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning) or [Azure Event Hubs](/azure/event-hubs/event-hubs-about), to apply advanced machine learning and stream in additional data.| +|Lookups|Azure Monitor provides various ways to enrich data, including:<br>- [Data collection rules](../essentials/data-collection-rule-overview.md), which let you send data from multiple sources to a Log Analytics workspace, and perform calculations and transformations before ingesting the data.<br>- KQL operators, such as the [join operator](/data-explorer/kusto/query/joinoperator?pivots=azuremonitor), which combines data from different tables, and the [externaldata operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor), which returns data from external storage.<br>- Integration with services, such as [Azure Machine Learning](../../machine-learning/overview-what-is-azure-machine-learning.md) or [Azure Event Hubs](../../event-hubs/event-hubs-about.md), to apply advanced machine learning and stream in additional data.| |Namespaces|You can grant or limit permission to artifacts in Azure Monitor based on [access control](../logs/manage-access.md) you define on your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) or [Azure resource groups](../../azure-resource-manager/management/manage-resource-groups-portal.md).| |Permissions|[Access management](../logs/manage-access.md)| |Reports|Azure Monitor offers a range of options for analyzing, visualizing, and sharing data, including:<br>- [Integration with Grafana](../visualize/grafana-plugin.md)<br>- [Insights](../insights/insights-overview.md)<br>- [Workbooks](../visualize/workbooks-overview.md)<br>- [Dashboards](../visualize/tutorial-logs-dashboards.md)<br>- [Integration with Power BI](../logs/log-powerbi.md)<br>- [Integration with Excel](../logs/log-excel.md)| To export your historical data from Splunk: - Learn more about using [Log Analytics](../logs/log-analytics-overview.md) and the [Log Analytics Query API](../logs/api/overview.md). - [Enable Microsoft Sentinel on your Log Analytics workspace](../../sentinel/quickstart-onboard.md).-- Take the [Analyze logs in Azure Monitor with KQL training module](/training/modules/analyze-logs-with-kql/).---+- Take the [Analyze logs in Azure Monitor with KQL training module](/training/modules/analyze-logs-with-kql/). |
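As a small, hedged illustration of the SPL-to-KQL translation the migration article above points to, a Splunk search that counts error events by host might map roughly to the following KQL against the `Syslog` table. The SPL line and the target table are illustrative assumptions; the right translation depends on how your Splunk data is indexed and which Azure Monitor table receives it.

```Kusto
// Splunk SPL (illustrative): index=main sourcetype=syslog error | stats count by host
// Approximate KQL equivalent against the Syslog table:
Syslog
| where SyslogMessage has "error"
| summarize count() by Computer
```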
azure-monitor | Move Workspace Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace-region.md | The following procedures show how to prepare the workspace and resources for the | summarize max(TimeGenerated) by Type ``` -After data sources are connected to the target workspace, ingested data is stored in the target workspace. Older data stays in the original workspace and is subject to the retention policy. You can perform a [cross-workspace query](./cross-workspace-query.md#performing-a-query-across-multiple-resources). If both workspaces were assigned the same name, use a qualified name (*subscriptionName/resourceGroup/componentName*) in the workspace reference. +After data sources are connected to the target workspace, ingested data is stored in the target workspace. Older data stays in the original workspace and is subject to the retention policy. You can perform a [cross-workspace query](./cross-workspace-query.md#perform-a-query-across-multiple-resources). If both workspaces were assigned the same name, use a qualified name (*subscriptionName/resourceGroup/componentName*) in the workspace reference. Here's an example for a query across two workspaces that have the same name: If you want to discard the source workspace, delete the exported resources or th ## Clean up -While new data is being ingested to your new workspace, older data in the original workspace remains available for query and is subject to the retention policy defined in the workspace. We recommend that you keep the original workspace for as long as you need older data to [query across](./cross-workspace-query.md#performing-a-query-across-multiple-resources) workspaces. +While new data is being ingested to your new workspace, older data in the original workspace remains available for query and is subject to the retention policy defined in the workspace. We recommend that you keep the original workspace for as long as you need older data to [query across](./cross-workspace-query.md#perform-a-query-across-multiple-resources) workspaces. If you no longer need access to older data in the original workspace: |
azure-monitor | Parse Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/parse-text.md | Title: Parse text data in Azure Monitor logs | Microsoft Docs -description: Describes different options for parsing log data in Azure Monitor records when the data is ingested and when it's retrieved in a query, comparing the relative advantages for each. +description: This article describes options for parsing log data in Azure Monitor records when the data is ingested and when it's retrieved in a query and compares the relative advantages for each. Last updated 10/20/2021 # Parse text data in Azure Monitor logs-Some log data collected by Azure Monitor will include multiple pieces of information in a single property. Parsing this data into multiple properties make it easier to use in queries. A common example is a [custom log](../agents/data-sources-custom-logs.md) that collects an entire log entry with multiple values into a single property. By creating separate properties for the different values, you can search and aggregate on each. +Some log data collected by Azure Monitor will include multiple pieces of information in a single property. Parsing this data into multiple properties makes it easier to use in queries. A common example is a [custom log](../agents/data-sources-custom-logs.md) that collects an entire log entry with multiple values into a single property. By creating separate properties for the different values, you can search and aggregate on each one. This article describes different options for parsing log data in Azure Monitor when the data is ingested and when it's retrieved in a query, comparing the relative advantages for each. - ## Parsing methods-You can parse data either at ingestion time when the data is collected or at query time when analyzing the data with a query. Each strategy has unique advantages as described below. +You can parse data either at ingestion time when the data is collected or at query time when you analyze the data with a query. Each strategy has unique advantages. ### Parse data at collection time-When you parse data at collection time, you configure [Custom Fields](../logs/custom-fields.md) that create new properties in the table. Queries don't have to include any parsing logic and simply use these properties as any other field in the table. +When you parse data at collection time, you configure [custom fields](../logs/custom-fields.md) that create new properties in the table. Queries don't have to include any parsing logic and use these properties as any other field in the table. ++**Advantages:** -Advantages to this method include the following: +- Easier to query the collected data because you don't need to include parse commands in the query. +- Better query performance because the query doesn't need to perform parsing. -- Easier to query the collected data since you don't need to include parse commands in the query.-- Better query performance since the query doesn't need to perform parsing.- -Disadvantages to this method include the following: +**Disadvantages:** - Must be defined in advance. Can't include data that's already been collected. - If you change the parsing logic, it will only apply to new data. Disadvantages to this method include the following: - Increases latency time for collecting data. - Errors can be difficult to handle. - ### Parse data at query time When you parse data at query time, you include logic in your query to parse data into multiple fields. 
The actual table itself isn't modified. -Advantages to this method include the following: +**Advantages:** - Applies to any data, including data that's already been collected. - Changes in logic can be applied immediately to all data.-- Flexible parsing options including predefined logic for particular data structures.- -Disadvantages to this method include the following: +- Flexible parsing options, including predefined logic for particular data structures. ++**Disadvantages:** -- Requires more complex queries. This can be mitigated by using [functions to simulate a table](#use-function-to-simulate-a-table).+- Requires more complex queries. This drawback can be mitigated by using [functions to simulate a table](#use-a-function-to-simulate-a-table). - Must replicate parsing logic in multiple queries. Can share some logic through functions.-- Can create overhead when running complex logic against very large record sets (billions of records).+- Can create overhead when you run complex logic against very large record sets (billions of records). ## Parse data as it's collected-See [Create custom fields in Azure Monitor](../logs/custom-fields.md) for details on parsing data as it's collected. This creates custom properties in the table that can be used by queries just like any other property. +For more information on parsing data as it's collected, see [Create custom fields in Azure Monitor](../logs/custom-fields.md). This approach creates custom properties in the table that can be used by queries like any other property. -## Parse data in query using patterns -When the data you want to parse can be identified by a pattern repeated across records, you can use different operators in the [Kusto query language](/azure/kusto/query/) to extract the specific piece of data into one or more new properties. +## Parse data in a query by using patterns +When the data you want to parse can be identified by a pattern repeated across records, you can use different operators in the [Kusto Query Language](/azure/kusto/query/) to extract the specific piece of data into one or more new properties. ### Simple text patterns -Use the [parse](/azure/kusto/query/parseoperator) operator in your query to create one or more custom properties that can be extracted from a string expression. You specify the pattern to be identified and the names of the properties to create. This is particularly useful for data with key-value strings with a form similar to _key=value_. +Use the [parse](/azure/kusto/query/parseoperator) operator in your query to create one or more custom properties that can be extracted from a string expression. You specify the pattern to be identified and the names of the properties to create. This approach is useful for data with key-value strings with a form similar to `key=value`. -Consider a custom log with data in the following format. +Consider a custom log with data in the following format: ``` Time=2018-03-10 01:34:36 Event Code=207 Status=Success Message=Client 05a26a97-272a-4bc9-8f64-269d154b0e39 connected Time=2018-03-10 01:38:22 Event Code=302 Status=Error Message=Application could n Time=2018-03-10 01:31:34 Event Code=303 Status=Error Message=Application lost connection to database ``` -The following query would parse this data into individual properties. The line with _project_ is added to only return the calculated properties and not _RawData_, which is the single property holding the entire entry from the custom log. +The following query would parse this data into individual properties. 
The line with `project` is added to only return the calculated properties and not `RawData`, which is the single property that holds the entire entry from the custom log. ```Kusto MyCustomLog_CL MyCustomLog_CL | project EventTime, Code, Status, Message ``` -Following is another example that breaks out the user name of a UPN in the _AzureActivity_ table. +This example breaks out the user name of a UPN in the `AzureActivity` table. ```Kusto AzureActivity AzureActivity | distinct UPNUserPart, Caller ``` - ### Regular expressions-If your data can be identified with a regular expression, you can use [functions that use regular expressions](/azure/kusto/query/re2) to extract individual values. The following example uses [extract](/azure/kusto/query/extractfunction) to break out the _UPN_ field from _AzureActivity_ records and then return distinct users. +If your data can be identified with a regular expression, you can use [functions that use regular expressions](/azure/kusto/query/re2) to extract individual values. The following example uses [extract](/azure/kusto/query/extractfunction) to break out the `UPN` field from `AzureActivity` records and then return distinct users. ```Kusto AzureActivity AzureActivity | distinct UPNUserPart, Caller ``` -To enable efficient parsing at large scale, Azure Monitor uses re2 version of Regular Expressions, which is similar but not identical to some of the other regular expression variants. Refer to the [re2 expression syntax](https://aka.ms/kql_re2syntax) for details. -+To enable efficient parsing at large scale, Azure Monitor uses the re2 version of Regular Expressions, which is similar but not identical to some of the other regular expression variants. For more information, see the [re2 expression syntax](https://aka.ms/kql_re2syntax). ## Parse delimited data in a query-Delimited data separates fields with a common character such as a comma in a CSV file. Use the [split](/azure/kusto/query/splitfunction) function to parse delimited data using a delimiter that you specify. You can use this with [extend](/azure/kusto/query/extendoperator) operator to return all fields in the data or to specify individual fields to be included in the output. +Delimited data separates fields with a common character, like a comma in a CSV file. Use the [split](/azure/kusto/query/splitfunction) function to parse delimited data by using a delimiter that you specify. You can use this approach with the [extend](/azure/kusto/query/extendoperator) operator to return all fields in the data or to specify individual fields to be included in the output. > [!NOTE]-> Since split returns a dynamic object, the results may need to be explicitly cast to data types such as string to be used in operators and filters. +> Because split returns a dynamic object, the results might need to be explicitly cast to data types, such as string to be used in operators and filters. -Consider a custom log with data in the following CSV format. +Consider a custom log with data in the following CSV format: ``` 2018-03-10 01:34:36, 207,Success,Client 05a26a97-272a-4bc9-8f64-269d154b0e39 connected Consider a custom log with data in the following CSV format. 2018-03-10 01:31:34, 303,Error,Application lost connection to database ``` -The following query would parse this data and summarize by two of the calculated properties. The first line splits the _RawData_ property into a string array. 
Each of the next lines gives a name to individual properties and adds them to the output using functions to convert them to the appropriate data type. +The following query would parse this data and summarize by two of the calculated properties. The first line splits the `RawData` property into a string array. Each of the next lines gives a name to individual properties and adds them to the output by using functions to convert them to the appropriate data type. ```Kusto MyCustomCSVLog_CL MyCustomCSVLog_CL ``` ## Parse predefined structures in a query-If your data is formatted in a known structure, you may be able to use one of the functions in the [Kusto query language](/azure/kusto/query/) for parsing predefined structures: +If your data is formatted in a known structure, you might be able to use one of the functions in the [Kusto Query Language](/azure/kusto/query/) for parsing predefined structures: - [JSON](/azure/kusto/query/parsejsonfunction) - [XML](/azure/kusto/query/parse-xmlfunction) If your data is formatted in a known structure, you may be able to use one of th - [User agent](/azure/kusto/query/parse-useragentfunction) - [Version string](/azure/kusto/query/parse-versionfunction) -The following example query parses the _Properties_ field of the _AzureActivity_ table, which is structured in JSON. It saves the results to a dynamic property called _parsedProp_, which includes the individual named value in the JSON. These values are used to filter and summarize the query results. +The following example query parses the `Properties` field of the `AzureActivity` table, which is structured in JSON. It saves the results to a dynamic property called `parsedProp`, which includes the individual named value in the JSON. These values are used to filter and summarize the query results. ```Kusto AzureActivity AzureActivity | summarize count() by ResourceGroup, tostring(parsedProp.tags.businessowner) ``` -These parsing functions can be processor intensive, so they should be used only when your query uses multiple properties from the formatted data. Otherwise, simple pattern matching processing will be faster. +These parsing functions can be processor intensive. Only use them when your query uses multiple properties from the formatted data. Otherwise, simple pattern matching processing is faster. -The following example shows the breakdown of domain controller TGT Preauth type. The type exists only in the EventData field, which is an XML string, but no other data from this field is needed. In this case, [parse](/azure/kusto/query/parseoperator) is used to pick out the required piece of data. +The following example shows the breakdown of the domain controller `TGT Preauth` type. The type exists only in the `EventData` field, which is an XML string. No other data from this field is needed. In this case, [parse](/azure/kusto/query/parseoperator) is used to pick out the required piece of data. ```Kusto SecurityEvent SecurityEvent | summarize count() by PreAuthType ``` -## Use function to simulate a table -You may have multiple queries that perform the same parsing of a particular table. In this case, [create a function](../logs/functions.md) that returns the parsed data instead of replicating the parsing logic in each query. You can then use the function alias in place of the original table in other queries. +## Use a function to simulate a table +You might have multiple queries that perform the same parsing of a particular table. 
In this case, [create a function](../logs/functions.md) that returns the parsed data instead of replicating the parsing logic in each query. You can then use the function alias in place of the original table in other queries. -Consider the comma-delimited custom log example above. In order to use the parsed data in multiple queries, create a function using the following query and save it with the alias _MyCustomCSVLog_. +Consider the preceding comma-delimited custom log example. To use the parsed data in multiple queries, create a function by using the following query and save it with the alias `MyCustomCSVLog`. ```Kusto MyCustomCSVLog_CL MyCustomCSVLog_CL | extend Message = tostring(CSVFields[3]) ``` -You can now use the alias _MyCustomCSVLog_ in place of the actual table name in queries like the following. +You can now use the alias `MyCustomCSVLog` in place of the actual table name in queries like the following example: ```Kusto MyCustomCSVLog | summarize count() by Status,Code ``` - ## Next steps-* Learn about [log queries](./log-query-overview.md) to analyze the data collected from data sources and solutions. +Learn about [log queries](./log-query-overview.md) to analyze the data collected from data sources and solutions. |
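Related to the delimited-data note in the article above: because `split()` returns dynamic values, cast them before using them in filters or other operators. A minimal sketch, with assumed table and column names:

```Kusto
// Hypothetical sketch: cast the dynamic values returned by split() before filtering.
// MyCustomCSVLog_CL and RawData are assumed names for a comma-delimited custom log.
MyCustomCSVLog_CL
| extend CSVFields = split(RawData, ',')
| extend Status = tostring(CSVFields[2])    // dynamic -> string
| where Status == "Error"
| summarize Errors = count()
```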
azure-monitor | Powershell Workspace Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/powershell-workspace-configuration.md | When you create a workspace that was deleted in the last 14 days and is in a [so - If you provide the same workspace name, resource group, subscription, and region as in the deleted workspace, your workspace will be recovered. The recovered workspace includes data, configuration, and connected agents. - A workspace name must be unique per resource group. If you use a workspace name that already exists and is also in soft delete in your resource group, you'll get an error. The error will state "The workspace name 'workspace-name' is not unique" or "conflict." To override the soft delete, permanently delete your workspace, and create a new workspace with the same name, follow these steps to recover the workspace first and then perform a permanent delete: - * [Recover](../logs/delete-workspace.md#recover-workspace) your workspace. + * [Recover](../logs/delete-workspace.md#recover-a-workspace) your workspace. * [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace. * Create a new workspace by using the same workspace name. |
azure-monitor | Query Packs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-packs.md | Last updated 06/22/2022 # Query packs in Azure Monitor Logs-Query packs act as containers for log queries in Azure Monitor and let you save log queries and share them across workspaces and other contexts in Log Analytics. +Query packs act as containers for log queries in Azure Monitor. They let you save log queries and share them across workspaces and other contexts in Log Analytics. ## View query packs-You can view and manage query packs in the Azure portal from the **Log Analytics query packs** menu. Select a query pack to view and edit its permissions. See below for details on creating a query pack using the API. +You can view and manage query packs in the Azure portal from the **Log Analytics query packs** menu. Select a query pack to view and edit its permissions. This article describes how to create a query pack by using the API. -[](media/query-packs/view-query-pack.png#lightbox) +[](media/query-packs/view-query-pack.png#lightbox) ## Permissions You can set the permissions on a query pack when you view it in the Azure portal. Users require the following permissions to use query packs: -- **Reader** - User can see and run all queries in the query pack.-- **Contributor** - User can modify existing queries and add new queries to the query pack.+- **Reader**: Users can see and run all queries in the query pack. +- **Contributor**: Users can modify existing queries and add new queries to the query pack. ## Default query pack-A query pack, called **DefaultQueryPack** is automatically created in each subscription in a resource group called **LogAnalyticsDefaultResources** when the first query is saved. You can create queries in this query pack or create additional query packs depending on your requirements. +A query pack, called `DefaultQueryPack`, is automatically created in each subscription in a resource group called `LogAnalyticsDefaultResources` when the first query is saved. You can create queries in this query pack or create other query packs depending on your requirements. -## Using multiple query packs -The single default query pack will be sufficient for most users to save and reuse queries. There are reasons that you may want to create multiple query packs for users in your organization though, including loading different sets of queries in different Log Analytics sessions and providing different permissions for different collections of queries. +## Use multiple query packs +The single default query pack will be sufficient for most users to save and reuse queries. But there are reasons that you might want to create multiple query packs for users in your organization. For example, you might want to load different sets of queries in different Log Analytics sessions and provide different permissions for different collections of queries. -When you create a new query pack using the API, you can add tags that classify queries according to your business requirements. For example, you could tag a query pack to relate it to a particular department in your organization or to severity of issues that the included queries are meant to address. This allows you to create different sets of queries intended for different sets of users and different situations. +When you create a new query pack by using the API, you can add tags that classify queries according to your business requirements. 
For example, you could tag a query pack to relate it to a particular department in your organization or to severity of issues that the included queries are meant to address. By using tags, you can create different sets of queries intended for different sets of users and different situations. ## Query pack definition-Each query pack is defined in a JSON file that includes the definition for one or more queries. Each query is represented by a block as follows: +Each query pack is defined in a JSON file that includes the definition for one or more queries. Each query is represented by a block. ```json { Each query pack is defined in a JSON file that includes the definition for one o } ``` - ## Query properties-Each query in the query pack has the following properties. -+Each query in the query pack has the following properties: | Property | Description | |:|:|-| displayName | Display name listed in Log Analytics for each query. | +| displayName | Display name listed in Log Analytics for each query. | | description | Description of the query displayed in Log Analytics for each query. |-| body | Query written in KQL. | -| related | Related categories, resource types, and solutions for the query. Used for grouping and filtering in Log Analytics by the user to help locate their query. Each query can have up to ten of each type. Retrieve allowed values from https://api.loganalytics.io/v1/metadata?select=resourceTypes,solutions,categories. | -| tags | Additional tags used by the user for sorting and filtering in Log Analytics. Each tag will be added to Category, Resource Type, and Solution when [grouping and filtering queries](queries.md#find-and-filter-queries). | +| body | Query written in Kusto Query Language. | +| related | Related categories, resource types, and solutions for the query. Used for grouping and filtering in Log Analytics by the user to help locate their query. Each query can have up to 10 of each type. Retrieve allowed values from https://api.loganalytics.io/v1/metadata?select=resourceTypes,solutions,categories. | +| tags | Other tags used by the user for sorting and filtering in Log Analytics. Each tag will be added to Category, Resource Type, and Solution when you [group and filter queries](queries.md#find-and-filter-queries). | ## Create a query pack-You can create a query pack using the REST API or from the **Log Analytics query packs** pane in the Azure portal. Currently the **Log Analytics query packs** pane shows up under **Other** category of **All services** page in the Azure portal. +You can create a query pack by using the REST API or from the **Log Analytics query packs** pane in the Azure portal. To open the **Log Analytics query packs** pane in the portal, select **All services** > **Other**. -### Create token -You require a token for authentication of the API request. There are multiple methods to get a token including using **armclient**. +### Create a token +You must have a token for authentication of the API request. There are multiple methods to get a token. One method is to use `armclient`. -First log in to Azure using the following command: +First, sign in to Azure by using the following command: ``` armclient login ``` -Then create the token with the following command. The token is automatically copied to the clipboard so you can paste it into another tool. +Then create the token by using the following command. The token is automatically copied to the clipboard so that you can paste it into another tool. 
``` armclient token ``` -### Create payload -The payload of the request is the JSON defining one or more queries and the location where the query pack should be stored. The name of the query pack is specified in the API request described in the next section. +### Create a payload +The payload of the request is the JSON that defines one or more queries and the location where the query pack should be stored. The name of the query pack is specified in the API request described in the next section. ```json { The payload of the request is the JSON defining one or more queries and the loca } ``` -### Create request -Use the following request to create a new query pack using the REST API. The request should use bearer token authorization. Content type should be application/json. +### Create a request +Use the following request to create a new query pack by using the REST API. The request should use bearer token authorization. The content type should be `application/json`. ```rest POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-resource-group/providers/Microsoft.Insights/querypacks/my-query-pack?api-version=2019-09-01 ``` -Use a tool that can submit a REST API request such as Fiddler or Postman to submit the request using the payload described in the previous section. The query ID will be generated and returned in the payload. +Use a tool that can submit a REST API request, such as Fiddler or Postman, to submit the request by using the payload described in the previous section. The query ID will be generated and returned in the payload. ## Update a query pack To update a query pack, submit the following request with an updated payload. This command requires the query pack ID. POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000 ## Next steps -- See [Using queries in Azure Monitor Log Analytics](queries.md) to see how users interact with query packs in Log Analytics.+See [Using queries in Azure Monitor Log Analytics](queries.md) to see how users interact with query packs in Log Analytics. |
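For context on the `body` property in the table above: it holds the KQL text of the saved query as a single JSON string, so any newlines must be escaped (for example, as `\n`) in the payload. A hypothetical example of a query that might be saved in a query pack:

```Kusto
// Hypothetical saved query for a query pack; table and column names are illustrative only.
AzureActivity
| where TimeGenerated > ago(1d)
| where ActivityStatusValue == "Failure"
| summarize FailedOperations = count() by ResourceGroup, OperationNameValue
| order by FailedOperations desc
```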
azure-monitor | Quick Create Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/quick-create-workspace.md | When you create a workspace that was deleted in the last 14 days and in [soft-de 1. If you provide the same workspace name, resource group, subscription, and region as in the deleted workspace, your workspace will be recovered including its data, configuration, and connected agents. 1. Workspace names must be unique for a resource group. If you use a workspace name that already exists, or is soft deleted, an error is returned. To permanently delete your soft-deleted name and create a new workspace with the same name, follow these steps: - 1. [Recover](../logs/delete-workspace.md#recover-workspace) your workspace. + 1. [Recover](../logs/delete-workspace.md#recover-a-workspace) your workspace. 1. [Permanently delete](../logs/delete-workspace.md#permanent-workspace-delete) your workspace. 1. Create a new workspace by using the same workspace name. |
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | Title: 'Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates)' description: Tutorial on how to send data to a Log Analytics workspace in Azure Monitor by using the REST API Azure Resource Manager template version. Previously updated : 07/15/2022 Last updated : 02/01/2023 # Tutorial: Send data to Azure Monitor Logs using REST API (Resource Manager templates) Use the **Tables - Update** API to create the table with the following PowerShel "description": "Additional message properties" }, {- "name": "ExtendedColumn", + "name": "CounterName", "type": "string",- "description": "An additional column extended at ingestion time" + "description": "Name of the counter" + }, + { + "name": "CounterValue", + "type": "real", + "description": "Value collected for the counter" } ] } A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accep :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-json.png" alt-text="Screenshot that shows the DCE resource ID."::: ## Create a data collection rule-The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of data that's being sent to the HTTP endpoint. It also defines the transformation that will be applied to it. The DCR also defines the destination workspace and table the transformed data will be sent to. +The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of data that's being sent to the HTTP endpoint and the [transformation](../essentials/data-collection-transformations.md) that will be applied to it before it's sent to the workspace. The DCR also defines the destination workspace and table the transformed data will be sent to. 1. In the Azure portal's search box, enter **template** and then select **Deploy a custom template**. The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of - `dataCollectionEndpointId`: Identifies the Resource ID of the data collection endpoint. - `streamDeclarations`: Defines the columns of the incoming data. - `destinations`: Specifies the destination workspace.- - `dataFlows`: Matches the stream with the destination workspace and specifies the transformation query and the destination table. + - `dataFlows`: Matches the stream with the destination workspace and specifies the transformation query and the destination table. The output of the destination query is what will be sent to the destination table. ```json { The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of }, "location": { "type": "string",- "defaultValue": "westus2", - "allowedValues": [ - "westus2", - "eastus2", - "eastus2euap" - ], "metadata": { "description": "Specifies the location in which to create the Data Collection Rule." 
} The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of "destinations": [ "clv2ws1" ],- "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, ExtendedColumn=tostring(jsonContext.CounterName)", + "transformKql": "source | extend jsonContext = parse_json(AdditionalContext) | project TimeGenerated = Time, Computer, AdditionalContext = jsonContext, CounterName=tostring(jsonContext.CounterName), CounterValue=jsonContext.CounterValue", "outputStream": "Custom-MyTable_CL" } ] The [DCR](../essentials/data-collection-rule-overview.md) defines the schema of :::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-details.png" alt-text="Screenshot that shows DCR details."::: -1. Copy the **Resource ID** for the DCR. You'll use it in the next step. +1. Copy the **Immutable ID** for the DCR. You'll use it in a later step when you send sample data using the API. - :::image type="content" source="media/tutorial-workspace-transformations-api/data-collection-rule-json-view.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-json-view.png" alt-text="Screenshot that shows DCR JSON view."::: + :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-rule-json-view.png" lightbox="media/tutorial-workspace-transformations-api/data-collection-rule-json-view.png" alt-text="Screenshot that shows DCR JSON view."::: > [!NOTE] > All the properties of the DCR, such as the transformation, might not be displayed in the Azure portal even though the DCR was successfully created with those properties. The following PowerShell code sends data to the endpoint by using HTTP REST fund #information needed to send data to the DCR endpoint $dcrImmutableId = "dcr-000000000000000"; #the immutableId property of the DCR object $dceEndpoint = "https://my-dcr-name.westus2-1.ingest.monitor.azure.com"; #the endpoint property of the Data Collection Endpoint object+ $streamName = "Custom-MyTableRawData"; #name of the stream in the DCR that represents the destination table ################## ### Step 1: Obtain a bearer token used later to authenticate against the DCE. The following PowerShell code sends data to the endpoint by using HTTP REST fund ################## $body = $staticData; $headers = @{"Authorization"="Bearer $bearerToken";"Content-Type"="application/json"};- $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/Custom-MyTableRawData?api-version=2021-11-01-preview" + $uri = "$dceEndpoint/dataCollectionRules/$dcrImmutableId/streams/$($streamName)?api-version=2021-11-01-preview" $uploadResponse = Invoke-RestMethod -Uri $uri -Method "Post" -Body $body -Headers $headers ``` |
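The `transformKql` value in the DCR above is a single-line string. Formatted as a multi-line query, the same transformation is easier to read; in this sketch, `source` stands for the incoming `Custom-MyTableRawData` stream declared in the DCR:

```Kusto
// The DCR transformation from the template above, reformatted for readability only.
// 'source' represents the incoming stream; the output column names match the custom table.
source
| extend jsonContext = parse_json(AdditionalContext)
| project
    TimeGenerated = Time,
    Computer,
    AdditionalContext = jsonContext,
    CounterName = tostring(jsonContext.CounterName),
    CounterValue = jsonContext.CounterValue
```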
azure-monitor | Monitor Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md | - Title: What is monitored by Azure Monitor -description: Reference of all services and other resources monitored by Azure Monitor. ---- Previously updated : 09/08/2022----# What is monitored by Azure Monitor? --This article is a reference of the different applications and services that are monitored by Azure Monitor. --Azure Monitor data is collected and stored based on resource provider namespaces. Each resource in Azure has a unique ID. The resource provider namespace is part of all unique IDs. For example, a key vault resource ID would be similar to `/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys ` . *Microsoft.KeyVault* is the resource provider namespace. *Microsoft.KeyVault/vaults/* is the resource provider. --For a list of Azure resource provider namespaces, see [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md). --For a list of resource providers that support Azure Monitor --- **Metrics** - See [Supported metrics in Azure Monitor](essentials/metrics-supported.md).-- **Metric alerts** - See [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md).-- **Prometheus metrics** - See [Prometheus metrics overview](essentials/prometheus-metrics-overview.md#enable).-- **Resource logs** - See [Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md).-- **Activity log** - All entries in the activity log are available for query, alerting and routing to Azure Monitor Logs store regardless of resource provider.--## Services that require agents --Azure Monitor can't see inside a service running its own application, operating system or container. That type of service requires one or more agents to be installed. The agent then runs as well to collect metrics, logs, traces and changes and forward them to Azure Monitor. The following services require agents for this reason. --- [Azure Cloud Services](../cloud-services-extended-support/index.yml)-- [Azure Virtual Machines](../virtual-machines/index.yml)-- [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) -- [Azure Service Fabric](../service-fabric/index.yml) --In addition, applications also require either the Application Insights SDK or auto-instrumentation (via an agent) to collect information and write it to the Azure Monitor data platform. --## Services with Insights --Some services have curated monitoring experiences call "insights". Insights are meant to be a starting point for monitoring a service or set of services. Some insights may also automatically pull additional data that's not captured or stored in Azure Monitor. For more information on monitoring insights, see [Insights Overview](insights/insights-overview.md). --## Product integrations --The services and [older monitoring solutions](insights/solutions.md) in the following table store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor. --| Product/Service | Description | -|:|:| -| [Azure Automation](../automation/index.yml) | Manage operating system updates and track changes on Windows and Linux computers. See [Change tracking](../automation/change-tracking/overview.md) and [Update management](../automation/update-management/overview.md). 
| -| [Azure Information Protection](/azure/information-protection/) | Classify and optionally protect documents and emails. See [Central reporting for Azure Information Protection](/azure/information-protection/reports-aip#configure-a-log-analytics-workspace-for-the-reports). | -| [Defender for the Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) | Collect and analyze security events and perform threat analysis. See [Data collection in Defender for the Cloud](../defender-for-cloud/monitoring-components.md). | -| [Microsoft Sentinel](../sentinel/index.yml) | Connect to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). | -| [Microsoft Intune](/intune/) | Create a diagnostic setting to send logs to Azure Monitor. See [Send log data to storage, Event Hubs, or log analytics in Intune (preview)](/intune/fundamentals/review-logs-using-azure-monitor). | -| Network [Traffic Analytics](../network-watcher/traffic-analytics.md) | Analyze Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. | -| [System Center Operations Manager](/system-center/scom) | Collect data from Operations Manager agents by connecting their management group to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md).<br> Assess the risk and health of your System Center Operations Manager management group with the [Operations Manager Assessment](insights/scom-assessment.md) solution. | -| [Microsoft Teams Rooms](/microsoftteams/room-systems/azure-monitor-deploy) | Integrated, end-to-end management of Microsoft Teams Rooms devices. | -| [Visual Studio App Center](/appcenter/) | Build, test, and distribute applications and then monitor their status and usage. See [Start analyzing your mobile app with App Center and Application Insights](https://github.com/Microsoft/appcenter). | -| Windows | [Windows Update Compliance](/windows/deployment/update/update-compliance-get-started) - Assess your Windows desktop upgrades.<br>[Desktop Analytics](/configmgr/desktop-analytics/overview) - Integrates with Configuration Manager to provide insight and intelligence to make more informed decisions about the update readiness of your Windows clients. | -| **The following solutions also integrate with parts of Azure Monitor. Note that solutions, which are based on Azure Monitor Logs and Log Analytics, are no longer under active development. Use [Insights](insights/insights-overview.md) instead.** | | -| Network - [Network Performance Monitor solution](insights/network-performance-monitor.md) | -| Network - [Azure Application Gateway solution](insights/azure-networking-analytics.md#azure-application-gateway-analytics) | . -| [Office 365 solution](insights/solution-office-365.md) | Monitor your Office 365 environment. Updated version with improved onboarding available through Microsoft Sentinel. | -| [SQL Analytics solution](insights/azure-sql.md) | Use SQL Insights instead. | -| [Surface Hub solution](insights/surface-hubs.md) | | --## Third-party integration --| Integration | Description | -|:|:| -| [ITSM](alerts/itsmc-overview.md) | The IT Service Management (ITSM) Connector allows you to connect Azure and a supported ITSM product/service. | -| [Azure Monitor Partners](./partners.md) | A list of partners that integrate with Azure Monitor in some form. 
| -| [Azure Monitor Partner integrations](../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms if you've already built on them. Examples include Datadog and Elastic.| --## Resources outside of Azure --Azure Monitor can collect data from resources outside of Azure by using the methods listed in the following table. --| Resource | Method | -|:|:| -| Applications | Monitor web applications outside of Azure by using Application Insights. See [What is Application Insights?](./app/app-insights-overview.md). | -| Virtual machines | Use agents to collect data from the guest operating system of virtual machines in other cloud environments or on-premises. See [Overview of Azure Monitor agents](agents/agents-overview.md). | -| REST API Client | Separate APIs are available to write data to Azure Monitor Logs and Metrics from any REST API client. See [Send log data to Azure Monitor with the HTTP Data Collector API](logs/data-collector-api.md) for Logs. See [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) for Metrics. | --## Next steps --- Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md).-- Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md).-- Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).-- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md). |
azure-monitor | Observability Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/observability-data.md | - Title: Observability data in Azure Monitor -description: Describes the --- Previously updated : 08/18/2022---# Observability data in Azure Monitor -Enabling observability across today's complex computing environments running distributed applications that rely on both cloud and on-premises services, requires collection of operational data from every layer and every component of the distributed system. You need to be able to perform deep insights on this data and consolidate it into a single pane of glass with different perspectives to support the multitude of stakeholders in your organization. --[Azure Monitor](overview.md) collects and aggregates data from various sources into a common data platform where it can be used for analysis, visualization, and alerting. It provides a consistent experience on top of data from multiple sources, which gives you deep insights across all your monitored resources and even with data from other services that store their data in Azure Monitor. ----## Pillars of observability --Metrics, logs, distributed traces, and changes are commonly referred to as the pillars of observability. These are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Observability can be achieved by correlating data from multiple pillars and aggregating data across the entire set of resources being monitored. Because Azure Monitor stores data from multiple sources together, the data can be correlated and analyzed using a common set of tools. It also correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. --Azure resources generate a significant amount of monitoring data. Azure Monitor consolidates this data along with monitoring data from other sources into either a Metrics or Logs platform. Each is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost effective manner. Insights in Azure Monitor such as [Application Insights](app/app-insights-overview.md) or [VM insights](vm/vminsights-overview.md) have analysis tools that allow you to focus on the particular monitoring scenario without having to understand the differences between the two types of data. ---## Metrics -[Metrics](essentials/data-platform-metrics.md) are numerical values that describe some aspect of a system at a particular point in time. They are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using a variety of algorithms, compared to other metrics, and analyzed for trends over time. --Metrics in Azure Monitor are stored in a time-series database which is optimized for analyzing time-stamped data. This makes metrics ideal for alerting and fast detection of issues. They can tell you how your system is performing but typically need to be combined with logs to identify the root cause of issues. --Metrics are available for interactive analysis in the Azure portal with [Azure Metrics Explorer](essentials/metrics-getting-started.md). 
They can be added to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data and used for near-real time [alerting](alerts/alerts-metric.md). --Read more about Azure Monitor Metrics including their sources of data in [Metrics in Azure Monitor](essentials/data-platform-metrics.md). --## Logs -[Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free-form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume. --Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [Azure Data Explorer](/azure/data-explorer/) which provides a powerful analysis engine and [rich query language](/azure/kusto/query/). Logs typically provide enough information to provide complete context of the issue being identified and are valuable for identifying root case of issues. --> [!NOTE] -> It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources. --You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal or add the results to an [Azure dashboard](app/tutorial-app-dashboards.md) for visualization in combination with other data. You can also create [log alerts](alerts/alerts-log.md) which will trigger an alert based on the results of a schedule query. --Read more about Azure Monitor Logs including their sources of data in [Logs in Azure Monitor](logs/data-platform-logs.md). --## Distributed traces -Traces are series of related events that follow a user request through a distributed system. They can be used to determine behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components. --Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md), and trace data is stored with other application log data collected by Application Insights. This makes it available to the same analysis tools as other log data including log queries, dashboards, and alerts. --Read more about distributed tracing at [What is Distributed Tracing?](app/distributed-tracing.md). --## Changes --Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal. 
For web app in-guest changes, you can enable the [Change Analysis tool via the Change Analysis portal](./change/change-analysis-enable.md#enable-azure-functions-and-web-app-in-guest-change-collection-via-the-change-analysis-portal). --Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data. --Read more about Change Analysis at [Use Change Analysis in Azure Monitor](./change/change-analysis.md). [Try Change Analysis for observability into your Azure subscriptions](https://aka.ms/cahome). --## Next steps --- Read more about [Metrics in Azure Monitor](essentials/data-platform-metrics.md).-- Read more about [Logs in Azure Monitor](logs/data-platform-logs.md).-- Learn about the [monitoring data available](data-sources.md) for different resources in Azure. |
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | -Azure Monitor helps you maximize the availability and performance of your applications and services. It delivers a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. This information helps you understand how your applications are performing and proactively identify issues that affect them and the resources they depend on. +Azure Monitor is a comprehensive monitoring solution for collecting, analyzing, and responding to telemetry from your cloud and on-premises environments. You can use Azure Monitor to maximize the availability and performance of your applications and services. +Azure Monitor collects and aggregates the data from every layer and component of your system into a common data platform. It correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. Because this data is stored together, it can be correlated and analyzed using a common set of tools. The data can then be used for analysis and visualizations to help you understand how your applications are performing and respond automatically to system events. 
 -A few examples of what you can do with Azure Monitor include: +Azure Monitor also includes Azure Monitor SCOM Managed Instance, which allows you to move your on-premises System Center Operations Manager (Operations Manager) installation to the cloud in Azure. 
 -- Detect and diagnose issues across applications and dependencies with [Application Insights](app/app-insights-overview.md).-- Correlate infrastructure issues with [VM insights](vm/vminsights-overview.md) and [Container insights](containers/container-insights-overview.md).-- Drill into your monitoring data with [Log Analytics](logs/log-query-overview.md) for troubleshooting and deep diagnostics.-- Support operations at scale with [automated actions](alerts/alerts-action-rules.md).-- Create visualizations with Azure [dashboards](visualize/tutorial-logs-dashboards.md) and [workbooks](visualize/workbooks-overview.md).-- Collect data from [monitored resources](./monitor-reference.md) by using [Azure Monitor Metrics](./essentials/data-platform-metrics.md).-- Investigate change data for routine monitoring or for triaging incidents by using [Change Analysis](./change/change-analysis.md).+Use Azure Monitor to monitor these types of resources in Azure, other clouds, or on-premises:
 + - Applications
 + - Virtual machines
 + - Guest operating systems
 + - Containers
 + - Databases
 + - Security events in combination with Azure Sentinel
 + - Networking events and health in combination with Network Watcher
 + - Custom sources that use the APIs to get data into Azure Monitor
 + 
 +You can also export monitoring data from Azure Monitor into other systems so you can: 
 + - Integrate with other third-party and open-source monitoring and visualization tools
 + - Integrate with ticketing and other ITSM systems
 +## Monitoring and observability 
 -## Overview -The following diagram gives a high-level view of Azure Monitor. -- The stores for the **[data platform](data-platform.md)** are at the center of the diagram. Azure Monitor stores these fundamental types of data: metrics, logs, traces, and changes.-- The **[sources of monitoring data](data-sources.md)** that populate these data stores are on the left.-- The different functions that Azure Monitor performs with this collected data are on the right.
This includes such actions as analysis, alerting.-- At the bottom is a layer of integration pieces. These are actually integrated throughout other parts of the diagram, but that is too complex to show visually.---## Observability and the Azure Monitor data platform -Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. Observability can be achieved by aggregating and correlating these different types of data across the entire system being monitored. --Natively, Azure Monitor stores data as metrics, logs, or changes. Traces are stored in the Logs store. Each storage platform is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. It's important for you to understand the differences between features such as data analysis, visualizations, or alerting, so that you can implement your required scenario in the most efficient and cost effective manner. --| Pillar | Description | -|:|:| -| Metrics | Metrics are numerical values that describe some aspect of a system at a particular point in time. Metrics are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using various algorithms, compared to other metrics, and analyzed for trends over time.<br><br>Metrics in Azure Monitor are stored in a time-series database, which is optimized for analyzing time-stamped data. For more information, see [Azure Monitor Metrics](essentials/data-platform-metrics.md). | -| Logs | [Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free-form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.<br><br>Azure Monitor stores logs in the Azure Monitor Logs store. The store allows you to segregate logs into separate "Log Analytics workspaces". There you can analyze them using the Log Analytics tool. Log Analytics workspaces are based on [Azure Data Explorer](/azure/data-explorer/), which provides a powerful analysis engine and the [Kusto rich query language](/azure/kusto/query/). For more information, see [Azure Monitor Logs](logs/data-platform-logs.md). | -| Distributed traces | Traces are series of related events that follow a user request through a distributed system. They can be used to determine behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.<br><br>Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md). Trace data is stored with other application log data collected by Application Insights and stored in Azure Monitor Logs. For more information, see [What is Distributed Tracing?](app/distributed-tracing.md). | -| Changes | Changes are tracked using [Change Analysis](change/change-analysis.md). Changes are a series of events that occur in your Azure application and resources. Change Analysis is a subscription-level observability tool that's built on the power of Azure Resource Graph. <br><br> Once Change Analysis is enabled, the `Microsoft.ChangeAnalysis` resource provider is registered with an Azure Resource Manager subscription. 
Change Analysis' integrations with Monitoring and Diagnostics tools provide data to help users understand what changes might have caused the issues. Read more about Change Analysis in [Use Change Analysis in Azure Monitor](./change/change-analysis.md). | --Azure Monitor aggregates and correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. Because this data is stored together, it can be correlated and analyzed using a common set of tools. --> [!NOTE] -> It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources. --For many Azure resources, you'll see data collected by Azure Monitor right in their overview page in the Azure portal. Look at any virtual machine (VM), for example, and you'll see several charts that display performance metrics. Select any of the graphs to open the data in [Metrics Explorer](essentials/metrics-charts.md) in the Azure portal. With Metrics Explorer, you can chart the values of multiple metrics over time. You can view the charts interactively or pin them to a dashboard to view them with other visualizations. -- --Log data collected by Azure Monitor can be analyzed with [queries](logs/log-query-overview.md) to quickly retrieve, consolidate, and analyze collected data. You can create and test queries by using the [Log Analytics](./logs/log-query-overview.md) user interface in the Azure portal. You can then either directly analyze the data by using different tools or save queries for use with [visualizations](best-practices-analysis.md) or [alert rules](alerts/alerts-overview.md). --Azure Monitor Logs uses a version of the [Kusto Query Language](/azure/kusto/query/) that's suitable for simple log queries but also includes advanced functionality such as aggregations, joins, and smart analytics. You can quickly learn the query language by using [multiple lessons](logs/get-started-queries.md). Particular guidance is provided to users who are already familiar with [SQL](/azure/data-explorer/kusto/query/sqlcheatsheet) and [Splunk](/azure/data-explorer/kusto/query/splunk-cheat-sheet). +Observability is the ability to assess an internal system's state based on the data it produces. An observability solution analyzes output data, provides an assessment of the system's health, and offers actionable insights for addressing problems across your IT infrastructure. - +Observability wouldn't be possible without monitoring. Monitoring is the collection and analysis of data pulled from IT systems. -Change Analysis alerts you to live site issues, outages, component failures, or other change data. It also provides insights into those application changes, increases observability, and reduces the mean time to repair. You automatically register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription by going to Change Analysis via the Azure portal.
For web app in-guest changes, you can enable Change Analysis by using the [Diagnose and solve problems tool](./change/change-analysis-enable.md#enable-azure-functions-and-web-app-in-guest-change-collection-via-the-change-analysis-portal). +The pillars of observability are the different kinds of data that a monitoring tool must collect and analyze to provide sufficient observability of a monitored system. Metrics, logs, and distributed traces are commonly referred to as the pillars of observability. Azure Monitor adds "changes" to these pillars. -Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data. +When a system is observable, a user can identify the root cause of a performance problem by looking at the data it produces without additional testing or coding. +Azure Monitor achieves observability by correlating data from multiple pillars and aggregating data across the entire set of monitored resources. Azure Monitor provides a common set of tools to correlate and analyze the data from multiple Azure subscriptions and tenants, in addition to data hosted for other services. -## What data can Azure Monitor collect? +## High-level architecture -Azure Monitor can collect data from [sources](monitor-reference.md) that range from your application to any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers: --- **Application** - Data about the performance and functionality of the code you've written, regardless of its platform.-- **Container** - Data about containers and applications running inside containers, such as Azure Kubernetes. -- **Guest operating system** - Data about the operating system on which your application is running. The system could be running in Azure, another cloud, or on-premises.-- **Azure resource** - Data about the operation of an Azure resource. For a list of the resources that have metrics and/or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md).-- **Azure subscription** - Data about the operation and management of an Azure subscription, and data about the health and operation of Azure itself.-- **Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory.-- **Azure resource changes** - Data about changes within your Azure resources and how to address and triage incidents and issues.--As soon as you create an Azure subscription and add resources such as VMs and web apps, Azure Monitor starts collecting data. [Activity logs](essentials/platform-logs-overview.md) record when resources are created or modified. [Metrics](essentials/data-platform-metrics.md) tell you how the resource is performing and the resources that it's consuming. --[Enable diagnostics](essentials/platform-logs-overview.md) to extend the data you're collecting into the internal operation of the resources. [Add an agent](agents/agents-overview.md) to compute resources to collect telemetry from their guest operating systems. --Enable monitoring for your application with [Application Insights](app/app-insights-overview.md) to collect detailed information including page views, application requests, and exceptions.
Further verify the availability of your application by configuring an [availability test](app/monitor-web-app-availability.md) to simulate user traffic. --### Custom sources --Azure Monitor can collect log data from any REST client by using the [Data Collector API](logs/data-collector-api.md). You can create custom monitoring scenarios and extend monitoring to resources that don't expose telemetry through other sources. --## Insights and curated visualizations --Monitoring data is only useful if it can increase your visibility into the operation of your computing environment. Some Azure resource providers have a "curated visualization," which gives you a customized monitoring experience for that particular service or set of services. They generally require minimal configuration. Larger, scalable, curated visualizations are known as "insights" and marked with that name in the documentation and the Azure portal. --For more information, see [List of insights and curated visualizations using Azure Monitor](insights/insights-overview.md). Some of the larger insights are described here. --### Application Insights +The following diagram gives a high-level view of Azure Monitor. -[Application Insights](app/app-insights-overview.md) monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It takes advantage of the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. You can use it to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. - +The diagram depicts the Azure Monitor system components: +- The **[data sources](data-sources.md)** are the types of data collected from each monitored resource. The data is collected and routed to the **data platform**. +- The **[data platform](data-platform.md)** is made up of the data stores for collected data. Azure Monitor's data platform has stores for metrics, logs, traces, and changes. +- The functions and components that consume data include analysis, visualizations, insights, and responses. +- Services that integrate with Azure Monitor to provide additional functionality and are integrated throughout the system. -### Container insights +## Data sources +Azure Monitor can collect data from multiple sources, including from your application, operating systems, the services they rely on, and from the platform itself. -[Container insights](containers/container-insights-overview.md) monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. +You can integrate monitoring data from sources outside Azure, including on-premises and other non-Microsoft clouds, using the application, infrastructure, and custom data sources. 
- +Azure Monitor collects these types of data: -### VM insights +|Data Type |Description | +||| +|Application|Data about the performance and functionality of your application code on any platform.| +|Infrastructure|**- Container.** Data about containers, such as Azure Kubernetes, and about the applications running inside containers.<br>**- Operating system.** Data about the guest operating system on which your application is running.| +|Azure Platform|**- Azure resource**. The operation of an Azure resource.<br>**- Azure subscription.** The operation and management of an Azure subscription, and data about the health and operation of Azure itself.<br>**- Azure tenant.** Data about the operation of tenant-level Azure services, such as Azure Active Directory.<br>**- Azure resource changes.** Data about changes within your Azure resources and how to address and triage incidents and issues. | +|Custom Sources|Use the Azure Monitor REST API to send custom metric or log data to Azure Monitor and incorporate monitoring of resources that don't expose monitoring data through other methods.| -[VM insights](vm/vminsights-overview.md) monitors your Azure VMs at scale. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes. The solution includes support for monitoring performance and application dependencies for VMs hosted on-premises or another cloud provider. +For detailed information about each of the data sources, see [data sources](./data-sources.md). - +## Data collection and routing -## Respond to critical situations +Azure Monitor collects and routes monitoring data using several mechanisms, depending on the data being routed and the destination data platform stores. -In addition to allowing you to interactively analyze monitoring data, an effective monitoring solution must be able to proactively respond to critical conditions identified in the data that it collects. The response could be sending a text or email to an administrator responsible for investigating an issue. Or you could launch an automated process that attempts to correct an error condition. +|Collection method|Description | +||| +|Direct data routing|Platform metrics are sent automatically to Azure Monitor Metrics by default and without configuration.| +|[Diagnostic settings](essentials/diagnostic-settings.md)|Use diagnostic settings to determine where to send resource and activity log data on the data platform.| +|[Data collection rules](essentials/data-collection-rule-overview.md)|Use data collection rules to specify what data should be collected, how to transform that data, and where to send that data.| +|[Application SDK](app/app-insights-overview.md)|Add the Application Insights SDK to your application code to receive, store, and explore your monitoring data. 
The SDK pre-processes telemetry and metrics before sending the data to Azure where it's ingested and processed further before being stored in Azure Monitor Logs.| +|[Azure Monitor REST API](logs/logs-ingestion-api-overview.md)|The Logs Ingestion API in Azure Monitor lets you send data to a Log Analytics workspace from any REST API client.| +|[Azure Monitor Agents](agents/agents-overview.md)|Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as Microsoft Sentinel and Microsoft Defender for Cloud.| -### Alerts +For detailed information about data collection, see [data collection](./best-practices-data-collection.md). -[Alerts in Azure Monitor](alerts/alerts-overview.md) proactively notify you of critical conditions and potentially attempt to take corrective action. Alert rules based on metrics provide near-real-time alerts based on numeric values. Rules based on logs allow for complex logic across data from multiple sources. +## Data platform -Alert rules in Azure Monitor use [action groups](alerts/action-groups.md), which contain unique sets of recipients and actions that can be shared across multiple rules. Based on your requirements, action groups can perform such actions as using webhooks to have alerts start external actions or to integrate with your IT service management tools. +Azure Monitor stores data in data stores for each of the pillars of observability: metrics, logs, distributed traces, and changes. Each store is optimized for specific types of data and monitoring scenarios. - -### Autoscale +|Pillar of Observability/<br>Data Store|Description| +||| +|[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus based metrics](/articles/azure-monitor/essentials/prometheus-metrics-overview.md).| +|[Azure Monitor Logs](logs/data-platform-logs.md)|Logs are recorded system events. Logs can contain different types of data, be structured or free-form text, and they contain a timestamp. Azure Monitor stores structured and unstructured log data of all types in [Azure Monitor Logs](./logs/data-platform-logs.md). You can route data to [Log Analytics workspaces](./logs/log-analytics-overview.md) for querying and analysis.| +|Traces|Distributed traces identify the series of related events that follow a user request through a distributed system. A trace measures the operation and performance of your application across the entire set of components in your system. Traces can be used to determine the behavior of application code and the performance of different transactions. Azure Monitor gets distributed trace data from the Application Insights SDK. The trace data is stored in a separate workspace in Azure Monitor Logs.| +|Changes|Changes are a series of events in your application and resources. 
They're tracked and stored when you use the [Change Analysis](./change/change-analysis.md) service, which uses [Azure Resource Graph](../governance/resource-graph/overview.md) as its store. Change Analysis helps you understand which changes, such as deploying updated code, may have caused issues in your systems.| -Autoscale allows you to have the right amount of resources running to handle the load on your application. Create rules that use metrics collected by Azure Monitor to determine when to automatically add resources when load increases. Save money by removing resources that are sitting idle. You specify a minimum and maximum number of instances and the logic for when to increase or decrease resources. +## The Azure portal - +The Azure portal is a web-based, unified console that provides an alternative to command-line tools. With the Azure portal, you can manage your Azure subscription using a graphical user interface. You can build, manage, and monitor everything from simple web apps to complex cloud deployments in the portal. +The Monitor section of the Azure portal provides a visual interface that gives you access to the data collected for Azure resources and an easy way to access the tools, insights, and visualizations in Azure Monitor. -## Visualize monitoring data -[Visualizations](best-practices-analysis.md) such as charts and tables are effective tools for summarizing monitoring data and presenting it to different audiences. Azure Monitor has its own features for visualizing monitoring data and uses other Azure services for publishing it to different audiences. +## Insights and Visualizations -### Dashboards +Insights and visualizations help increase your visibility into the operation of your computing environment. Some Azure resource providers have curated visualizations that provide a customized monitoring experience and require minimal configuration. -[Azure dashboards](../azure-portal/azure-portal-dashboards.md) allow you to combine different kinds of data into a single pane in the [Azure portal](https://portal.azure.com). You can optionally share the dashboard with other Azure users. Add the output of any log query or metrics chart to an Azure dashboard. For example, you could create a dashboard that combines tiles that show a graph of metrics, a table of Activity logs, a usage chart from Application Insights, and the output of a log query. +### Insights - +Insights are large, scalable, curated visualizations. For more information, see List of insights and curated visualizations using Azure Monitor. +The following table describes the three major insights: -### Workbooks +|Insight |Description | +||| +|[Application Insights](app/app-insights-overview.md)|Application Insights takes advantage of the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. You can use it to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes.| +|[Container Insights](containers/container-insights-overview.md)|Container Insights gives you performance visibility into container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. 
Container Insights collects container logs and metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux.| +|[VM Insights](vm/vminsights-overview.md)|VM Insights monitors your Azure VMs. It analyzes the performance and health of your Windows and Linux VMs and identifies their different processes and interconnected dependencies on external processes. The solution includes support for monitoring performance and application dependencies for VMs hosted on-premises or another cloud provider.| -[Workbooks](visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports in the Azure portal. You can use them to tap into multiple data sources from across Azure and combine them into unified interactive experiences. Use workbooks provided with Insights or create your own from predefined templates. +### Visualize - +Visualizations such as charts and tables are effective tools for summarizing monitoring data and presenting it to different audiences. Azure Monitor has its own features for visualizing monitoring data and uses other Azure services for publishing it to different audiences. -### Power BI +|Visualization|Description | +||| +|[Dashboards](visualize/tutorial-logs-dashboards.md)|Azure dashboards allow you to combine different kinds of data into a single pane in the Azure portal. You can optionally share the dashboard with other Azure users. You can add the output of any log query or metrics chart to an Azure dashboard. For example, you could create a dashboard that combines tiles that show a graph of metrics, a table of activity logs, a usage chart from Application Insights, and the output of a log query.| +|[Workbooks](visualize/workbooks-overview.md)|Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports in the Azure portal. You can use them to query data from multiple data sources. Workbooks can combine and correlate data from multiple data sets in one visualization giving you easy visual representation of your system. Workbooks are interactive and can be shared across teams with data updating in real time. Use workbooks provided with Insights, utilize the library of templates, or create your own.| +|[Power BI](logs/log-powerbi.md)|Power BI is a business analytics service that provides interactive visualizations across various data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to automatically import log data from Azure Monitor to take advantage of these visualizations.| +|[Grafana](visualize/grafana-plugin.md)|Grafana is an open platform that excels in operational dashboards. Grafana has popular plug-ins and dashboard templates for APM tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass. All versions of Grafana include the Azure Monitor data source plug-in to visualize your Azure Monitor metrics and logs. Azure Managed Grafana also optimizes this experience for Azure-native data stores such as Azure Monitor and Azure Data Explorer. 
In this way, you can easily connect to any resource in your subscription and view all resulting monitoring data in a familiar Grafana dashboard. It also supports pinning charts from Azure Monitor metrics and logs to Grafana dashboards.| -[Power BI](https://powerbi.microsoft.com) is a business analytics service that provides interactive visualizations across various data sources. It's an effective means of making data available to others within and outside your organization. You can configure Power BI to [automatically import log data from Azure Monitor](./logs/log-powerbi.md) to take advantage of these visualizations. +## Analyze +The Azure portal contains built in tools that allow you to analyze monitoring data. - +|Tool |Description | +||| +|[Metrics explorer](essentials/metrics-getting-started.md)|Use the Azure Monitor metrics explorer user interface in the Azure portal to investigate the health and utilization of your resources. Metrics explorer helps you plot charts, visually correlate trends, and investigate spikes and dips in metric values. Metrics explorer contains features for applying dimensions and filtering, and for customizing charts. These features help you analyze exactly the data you need in a visually intuitive way.| +|[Log Analytics](logs/log-analytics-overview.md)|The Log Analytics user interface in the Azure portal helps you query the log data collected by Azure Monitor so that you can quickly retrieve, consolidate, and analyze collected data. After creating test queries, you can then directly analyze the data with Azure Monitor tools, or you can save the queries for use with visualizations or alert rules. Log Analytics workspaces are based on Azure Data Explorer, using a powerful analysis engine and the rich Kusto query language (KQL).Azure Monitor Logs uses a version of the Kusto Query Language suitable for simple log queries, and advanced functionality such as aggregations, joins, and smart analytics. You can [get started with KQL](logs/get-started-queries.md) quickly and easily.| +|[Change Analysis](change/change-analysis.md)| The Change Analysis user interface in the Azure portal gives you insight into the cause of live site issues, outages, or component failures. Change Analysis uses the power of [Azure Resource Graph](../governance/resource-graph/overview.md) to detect various types of changes, from the infrastructure layer through application deployment. Change Analysis is a subscription-level Azure resource provider that checks resource changes in the subscription and provides data for diagnostic tools to help users understand what changes might have caused issues.| -## Integrate and export data -You'll often have the requirement to integrate Azure Monitor with other systems and to build custom solutions that use your monitoring data. Other Azure services work with Azure Monitor to provide this integration. +## Respond -### Event Hubs +An effective monitoring solution proactively responds to critical events, without the need for an individual or team to notice the issue. The response could be a text or email to an administrator, or an automated process that attempts to correct an error condition. -[Azure Event Hubs](../event-hubs/index.yml) is a streaming platform and event ingestion service. It can transform and store data by using any real-time analytics provider or batching/storage adapters. Use Event Hubs to [stream Azure Monitor data](essentials/stream-monitoring-data-event-hubs.md) to partner SIEM and monitoring tools. 
+- **[Alerts](alerts/alerts-overview.md)** notify you of critical conditions and can take corrective action. Alert rules can be based on metric or log data. Metric alert rules provide near-real-time alerts based on collected metrics. Log alerts rules based on logs allow for complex logic across data from multiple sources. +Alert rules use action groups, which can perform actions like sending email or SMS notifications. Action groups can send notifications using webhooks to trigger external processes or to integrate with your IT service management tools. Action groups, actions, and sets of recipients can be shared across multiple rules. +- **[Autoscale](autoscale/autoscale-overview.md)** allows you to dynamically control the number of resources running to handle the load on your application. You can create rules that use Azure Monitor metrics to determine when to automatically add resources when the load increases or remove resources that are sitting idle. You can specify a minimum and maximum number of instances, and the logic for when to increase or decrease resources to save money and to increase performance. -### Logic Apps +## Integrate -[Azure Logic Apps](https://azure.microsoft.com/services/logic-apps) is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor. +You may need to integrate Azure Monitor with other systems or to build custom solutions that use your monitoring data. These Azure services work with Azure Monitor to provide integration capabilities. -### API -Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor. +|Azure service |Description | +||| +|[Event Hubs](../event-hubs/event-hubs-about.md)|Azure Event Hubs is a streaming platform and event ingestion service. It can transform and store data by using any real-time analytics provider or batching/storage adapters. Use Event Hubs to stream Azure Monitor data to partner SIEM and monitoring tools.| +|[Logic Apps](../logic-apps/logic-apps-overview.md)|Azure Logic Apps is a service you can use to automate tasks and business processes by using workflows that integrate with different systems and services. Activities are available that read and write metrics and logs in Azure Monitor.| +|[API](/rest/api/monitor/)|Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor.| ## Next steps--Learn more about: -* [Metrics and logs](./data-platform.md#metrics) for the data collected by Azure Monitor. -* [Data sources](data-sources.md) for how the different components of your application send telemetry. -* [Log queries](logs/log-query-overview.md) for analyzing collected data. -* [Best practices](/azure/architecture/best-practices/monitoring) for monitoring cloud applications and services. +- [Getting started with Azure Monitor](getting-started.md) +- [Sources of monitoring data for Azure Monitor](data-sources.md) +- [Data collection in Azure Monitor](essentials/data-collection.md) |
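The alert-and-action-group flow described in the updated Azure Monitor overview above can also be expressed as infrastructure as code. The following Bicep sketch is illustrative only and is not part of the documented changes: the resource names, email address, threshold, and API versions are assumptions to replace with your own values.

```bicep
// Illustrative sketch: an action group shared by alert rules, plus one metric alert.
// 'vmId' and 'alertEmail' are hypothetical inputs, not values from the article.
param vmId string
param alertEmail string = 'oncall@contoso.com'

resource opsActionGroup 'Microsoft.Insights/actionGroups@2022-06-01' = {
  name: 'ops-team'
  location: 'global'
  properties: {
    groupShortName: 'ops'
    enabled: true
    emailReceivers: [
      {
        name: 'on-call'
        emailAddress: alertEmail
        useCommonAlertSchema: true
      }
    ]
  }
}

// Near-real-time metric alert: fires when average CPU stays above 80% for 5 minutes.
resource highCpuAlert 'Microsoft.Insights/metricAlerts@2018-03-01' = {
  name: 'high-cpu-alert'
  location: 'global'
  properties: {
    severity: 2
    enabled: true
    scopes: [
      vmId
    ]
    evaluationFrequency: 'PT1M'
    windowSize: 'PT5M'
    criteria: {
      'odata.type': 'Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria'
      allOf: [
        {
          criterionType: 'StaticThresholdCriterion'
          name: 'HighCpu'
          metricNamespace: 'Microsoft.Compute/virtualMachines'
          metricName: 'Percentage CPU'
          operator: 'GreaterThan'
          threshold: 80
          timeAggregation: 'Average'
        }
      ]
    }
    actions: [
      {
        actionGroupId: opsActionGroup.id
      }
    ]
  }
}
```

Keeping the action group as its own resource lets several alert rules reuse the same recipients, which matches the action-group model the overview describes.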
azure-monitor | Resource Manager Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md | In the request body, provide a link to your template and parameter file. - [Metric alert rules](alerts/resource-manager-alerts-metric.md): Configure alerts from metrics that use different kinds of logic. - [Application Insights](app/resource-manager-app-resource.md) - [Diagnostic settings](essentials/resource-manager-diagnostic-settings.md): Create diagnostic settings to forward logs and metrics from different resource types.+- [Enable Prometheus metrics](essentials/prometheus-metrics-enable.md?tabs=resource-manager#enable-prometheus-metric-collection): Install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace. - [Log queries](logs/resource-manager-log-queries.md): Create saved log queries in a Log Analytics workspace. - [Log Analytics workspace](logs/resource-manager-workspace.md): Create a Log Analytics workspace and configure a collection of data sources from the Log Analytics agent. - [Workbooks](visualize/resource-manager-workbooks.md): Create workbooks. |
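As a companion to the template samples listed in this entry, here is a minimal Bicep sketch of a Log Analytics workspace plus a diagnostic setting that forwards a resource's logs and metrics to it. The workspace and Key Vault names, retention value, and API versions are assumptions for illustration, not values taken from the linked samples.

```bicep
param location string = resourceGroup().location

// Minimal Log Analytics workspace; name and retention are illustrative.
resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
  name: 'law-monitoring-demo'
  location: location
  properties: {
    sku: {
      name: 'PerGB2018'
    }
    retentionInDays: 30
  }
}

// Hypothetical existing resource to monitor.
resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
  name: 'kv-monitoring-demo'
}

// Diagnostic setting that routes the vault's logs and metrics to the workspace.
resource diagnostics 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'send-to-log-analytics'
  scope: keyVault
  properties: {
    workspaceId: logAnalytics.id
    logs: [
      {
        categoryGroup: 'allLogs'
        enabled: true
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
  }
}
```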
azure-monitor | Tutorial Logs Dashboards | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/tutorial-logs-dashboards.md | -Log Analytics dashboards can visualize all of your saved log queries, giving you the ability to find, correlate, and share IT operational data in the organization. This tutorial covers creating a log query that will be used to support a shared dashboard that will be accessed by your IT operations support team. You learn how to: +Log Analytics dashboards can visualize all of your saved log queries. Visualizations give you the ability to find, correlate, and share IT operational data in your organization. This tutorial covers creating a log query that will be used to support a shared dashboard that can be accessed by your IT operations support team. You learn how to: > [!div class="checklist"]-> * Create a shared dashboard in the Azure portal -> * Visualize a performance log query -> * Add a log query to a shared dashboard -> * Customize a tile in a shared dashboard +> * Create a shared dashboard in the Azure portal. +> * Visualize a performance log query. +> * Add a log query to a shared dashboard. +> * Customize a tile in a shared dashboard. -To complete the example in this tutorial, you must have an existing virtual machine [connected to the Log Analytics workspace](../vm/monitor-virtual-machine.md). - -## Sign in to Azure portal -Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). +To complete the example in this tutorial, you must have an existing virtual machine [connected to the Log Analytics workspace](../vm/monitor-virtual-machine.md). ++## Sign in to the Azure portal +Sign in to the [Azure portal](https://portal.azure.com). ## Create a shared dashboard-Select **Dashboard** to open your default [dashboard](../../azure-portal/azure-portal-dashboards.md). Your dashboard will look different than the example below. +Select **Dashboard** to open your default [dashboard](../../azure-portal/azure-portal-dashboards.md). Your dashboard will look different from the following example. - + -Here you can bring together operational data that is most important to IT across all your Azure resources, including telemetry from Azure Log Analytics. Before we step into visualizing a log query, let's first create a dashboard and share it. We can then focus on our example performance log query, which will render as a line chart, and add it to the dashboard. +Here you can bring together operational data that's most important to IT across all your Azure resources, including telemetry from Azure Log Analytics. Before we visualize a log query, let's first create a dashboard and share it. We can then focus on our example performance log query, which will render as a line chart, and add it to the dashboard. > [!NOTE]-> The following chart types are supported in Azure dashboards using log queries: -> - areachart -> - columnchart -> - piechart (will render in dashboard as donut) -> - scatterchart -> - timechart +> The following chart types are supported in Azure dashboards by using log queries: +> - `areachart` +> - `columnchart` +> - `piechart` (will render in dashboard as a donut) +> - `scatterchart` +> - `timechart` -To create a dashboard, select the **New dashboard** button next to the current dashboard's name. +To create a dashboard, select **New dashboard**. - + -This action creates a new, empty, private dashboard and puts you into customization mode where you can name your dashboard and add or rearrange tiles. 
Edit the name of the dashboard and specify *Sample Dashboard* for this tutorial, and then select **Done customizing**.<br><br>  +This action creates a new, empty, private dashboard. It opens in a customization mode where you can name your dashboard and add or rearrange tiles. Edit the name of the dashboard and specify **Sample Dashboard** for this tutorial. Then select **Done customizing**.<br><br>  -When you create a dashboard, it is private by default, which means you are the only person who can see it. To make it visible to others, use the **Share** button that appears alongside the other dashboard commands. +When you create a dashboard, it's private by default, so you're the only person who can see it. To make it visible to others, select **Share**. - + -You are asked to choose a subscription and resource group for your dashboard to be published to. For convenience, the portal's publishing experience guides you towards a pattern where you place dashboards in a resource group called **dashboards**. Verify the subscription selected and then click **Publish**. Access to the information displayed in the dashboard is controlled with [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md). +Choose a subscription and resource group for your dashboard to be published to. For convenience, you're guided toward a pattern where you place dashboards in a resource group called **dashboards**. Verify the subscription selected and then select **Publish**. Access to the information displayed in the dashboard is controlled with [Azure role-based access control](../../role-based-access-control/role-assignments-portal.md). ## Visualize a log query-[Log Analytics](../logs/log-analytics-tutorial.md) is a dedicated portal used to work with log queries and their results. Features include the ability to edit a query on multiple lines, selectively execute code, context sensitive Intellisense, and Smart Analytics. In this tutorial, you will use Log Analytics to create a performance view in graphical form, save it for a future query, and pin it to the shared dashboard created earlier. +[Log Analytics](../logs/log-analytics-tutorial.md) is a dedicated portal used to work with log queries and their results. Features include the ability to edit a query on multiple lines and selectively execute code. Log Analytics also uses context-sensitive IntelliSense and Smart Analytics. ++In this tutorial, you'll use Log Analytics to create a performance view in graphical form and save it for a future query. Then you'll pin it to the shared dashboard you created earlier. -Open Log Analytics by selecting **Logs** in the Azure Monitor menu. It starts with a new blank query. +Open Log Analytics by selecting **Logs** on the Azure Monitor menu. It starts with a new blank query. - + -Enter the following query to return processor utilization records for both Windows and Linux computers, grouped by Computer and TimeGenerated, and displayed in a visual chart. Click **Run** to run the query and view the resulting chart. +Enter the following query to return processor utilization records for both Windows and Linux computers. The records are grouped by `Computer` and `TimeGenerated` and displayed in a visual chart. Select **Run** to run the query and view the resulting chart. ```Kusto Perf Perf | render timechart ``` -Save the query by selecting the **Save** button from the top of the page. +Save the query by selecting **Save**. 
-In the **Save Query** control panel, provide a name such as *Azure VMs - Processor Utilization* and a category such as *Dashboards* and then click **Save**. This way you can create a library of common queries that you can use and modify. Finally, pin this to the shared dashboard created earlier by selecting the **Pin to dashboard** button from the top right corner of the page and then selecting the dashboard name. +In the **Save Query** control panel, provide a name such as **Azure VMs - Processor Utilization** and a category such as **Dashboards**. Select **Save**. This way you can create a library of common queries that you can use and modify. Finally, pin this query to the shared dashboard you created earlier. Select the **Pin to dashboard** button in the upper-right corner of the page and then select the dashboard name. -Now that we have a query pinned to the dashboard, you will notice it has a generic title and comment below it. +Now that we have a query pinned to the dashboard, you'll notice that it has a generic title and comment underneath it. - We should rename it to something meaningful that can be easily understood by those viewing it. Click the edit button to customize the title and subtitle for the tile, and then click **Update**. A banner will appear asking you to publish changes or discard. Click **Save a copy**. + Rename the query with a meaningful name that can be easily understood by anyone who views it. Select **Edit** to customize the title and subtitle for the tile, and then select **Update**. A banner appears that asks you to publish changes or discard. Select **Save a copy**. ## Next steps-In this tutorial, you learned how to create a dashboard in the Azure portal and add a log query to it. Follow this link to see pre-built Log Analytics script samples. +In this tutorial, you learned how to create a dashboard in the Azure portal and add a log query to it. Follow this link to see prebuilt Log Analytics script samples. > [!div class="nextstepaction"] > [Log Analytics script samples](../powershell-samples.md) |
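The tutorial above saves the processor-utilization query through the portal. If you would rather keep such queries under source control, a saved search can also be declared in a template. This is a hedged sketch: the workspace name is hypothetical and the query body is an illustrative stand-in, not the tutorial's exact query.

```bicep
// Assumes an existing workspace; the name is illustrative.
resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' existing = {
  name: 'law-monitoring-demo'
}

// Saved search in the 'Dashboards' category, mirroring the portal steps in the tutorial.
resource processorUtilizationQuery 'Microsoft.OperationalInsights/workspaces/savedSearches@2020-08-01' = {
  parent: logAnalytics
  name: 'azure-vms-processor-utilization'
  properties: {
    category: 'Dashboards'
    displayName: 'Azure VMs - Processor Utilization'
    query: 'Perf | where ObjectName == "Processor" and CounterName == "% Processor Time" | summarize avg(CounterValue) by bin(TimeGenerated, 1h), Computer | render timechart'
    version: 2
  }
}
```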
azure-monitor | View Designer Filters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/view-designer-filters.md | Title: Filters in Azure Monitor views | Microsoft Docs -description: A filter in an Azure Monitor view allows users to filter the data in the view by the value of a particular property without modifying the view itself. This article describes how to use a filter and add one to a custom view. +description: A filter in an Azure Monitor view allows users to filter the data in the view by the value of a particular property without modifying the view itself. This article describes how to use a filter and add one to a custom view. Last updated 06/22/2018 Last updated 06/22/2018 # Filters in Azure Monitor views > [!IMPORTANT]-> Views in Azure Monitor [are being retired](https://azure.microsoft.com/updates/view-designer-in-azure-monitor-is-retiring-on-31-august-2023/) and transitioned to [workbooks](workbooks-overview.md) which provide additional functionality. See [Azure Monitor view designer to workbooks transition guide](view-designer-conversion-overview.md) for details on converting your existing views to workbooks. +> Views in Azure Monitor [are being retired](https://azure.microsoft.com/updates/view-designer-in-azure-monitor-is-retiring-on-31-august-2023/) and transitioned to [workbooks](workbooks-overview.md), which provide more functionality. For details on converting your existing views to workbooks, see [Azure Monitor View Designer to workbooks transition guide](view-designer-conversion-overview.md). -A **filter** in an [Azure Monitor view](view-designer.md) allows users to filter the data in the view by the value of a particular property without modifying the view itself. For example, you could allow users of your view to filter the view for data only from a particular computer or set of computers. You can create multiple filters on a single view to allow users to filter by multiple properties. This article describes how to use a filter and add one to a custom view. +A *filter* in an [Azure Monitor view](view-designer.md) allows users to filter the data in the view by the value of a particular property without modifying the view itself. For example, you could allow users of your view to filter the view for data only from a particular computer or set of computers. You can create multiple filters on a single view to allow users to filter by multiple properties. This article describes how to use a filter and add one to a custom view. -## Using a filter -Click the date time range at the top of a view to open the drop down where you can change the date time range for the view. +## Use a filter +Select the date time range at the top of a view to open the dropdown where you can change the date time range for the view. - + -Click the **+** to add a filter using custom filters that are defined for the view. Either select a value for the filter from the dropdown or type in a value. Continue to add filters by clicking the **+**. +Select **+** to add a filter by using custom filters that are defined for the view. Either select a value for the filter from the dropdown or enter a value. Continue to add filters by selecting **+**. + - +If you remove all the values for a filter, that filter will no longer be applied. -If you remove all of the values for a filter, then that filter will no longer be applied. +## Create a filter +Create a filter from the **Filters** tab when you [edit a view](view-designer.md). 
The filter is global for the view and applies to all parts in the view. -## Creating a filter --Create a filter from the **Filters** tab when [editing a view](view-designer.md). The filter is global for the view and applies to all parts in the view. -- + The following table describes the settings for a filter. | Setting | Description | |:|:|-| Field Name | Name of the field used for filtering. This field must match the summarize field in **Query for Values**. | -| Query for Values | Query to run to populate filter dropdown for the user. This query must use either [summarize](/azure/kusto/query/summarizeoperator) or [distinct](/azure/kusto/query/distinctoperator) to provide unique values for a particular field, and it must match the **Field Name**. You can use [sort](/azure/kusto/query/sortoperator) to sort the values that are displayed to the user. | +| Field Name | Name of the field used for filtering. This field must match the summarize field in **Query for Values**. | +| Query for Values | Query to run to populate the **Filter** dropdown for the user. This query must use either [summarize](/azure/kusto/query/summarizeoperator) or [distinct](/azure/kusto/query/distinctoperator) to provide unique values for a particular field. It must match the **Field Name**. You can use [sort](/azure/kusto/query/sortoperator) to sort the values that are displayed to the user. | | Tag | Name for the field that's used in queries supporting the filter and is also displayed to the user. | ### Examples -The following table includes a few examples of common filters. +The following table includes examples of common filters. -| Field Name | Query for Values | Tag | +| Field name | Query for values | Tag | |:--|:--|:--| | Computer | Heartbeat | distinct Computer | sort by Computer asc | Computers | | EventLevelName | Event | distinct EventLevelName | Severity | | SeverityLevel | Syslog | distinct SeverityLevel | Severity | | SvcChangeType | ConfigurationChange | distinct svcChangeType | ChangeType | - ## Modify view queries -For a filter to have any effect, you must modify any queries in the view to filter on the selected values. If you don't modify any queries in the view, then any values the user selects will have no effect. +For a filter to have any effect, you must modify any queries in the view to filter on the selected values. If you don't modify any queries in the view, any values the user selects will have no effect. -The syntax for using a filter value in a query is: +The syntax for using a filter value in a query is: `where ${filter name}` -For example, if your view has a query that returns events and uses a filter called _Computers_, you could use the following query. +For example, if your view has a query that returns events and uses a filter called `Computers`, you could use the following query: ```kusto Event | where ${Computers} | summarize count() by EventLevelName ``` -If you added another filter called Severity, you could use the following query to use both filters. +If you added another filter called `Severity`, you could use the following query to use both filters: ```kusto Event | where ${Computers} | where ${Severity} | summarize count() by EventLevelName ``` ## Next steps-* Learn more about the [Visualization Parts](view-designer-parts.md) you can add to your custom view. +Learn more about the [visualization parts](view-designer-parts.md) you can add to your custom view. |
azure-monitor | Vmext Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/vmext-troubleshoot.md | Title: Troubleshoot Azure Log Analytics VM Extension + Title: Troubleshoot the Azure Log Analytics VM extension description: Describe the symptoms, causes, and resolution for the most common issues with the Log Analytics VM extension for Windows and Linux Azure VMs. Last updated 06/06/2019 -# Troubleshooting the Log Analytics VM extension in Azure Monitor -This article provides help troubleshooting errors you might experience with the Log Analytics VM extension for Windows and Linux virtual machines running on Microsoft Azure, and suggests possible solutions to resolve them. +# Troubleshoot the Log Analytics VM extension in Azure Monitor +This article provides help troubleshooting errors you might experience with the Log Analytics VM extension for Windows and Linux virtual machines running on Azure. The article suggests possible solutions to resolve them. -To verify the status of the extension, perform the following steps from the Azure portal. +To verify the status of the extension: -1. Sign into the [Azure portal](https://portal.azure.com). -2. In the Azure portal, click **All services**. In the list of resources, type **virtual machines**. As you begin typing, the list filters based on your input. Select **Virtual machines**. -3. In your list of virtual machines, find and select it. -3. On the virtual machine, click **Extensions**. -4. From the list, check to see if the Log Analytics extension is enabled or not. For Linux, the agent is listed as **OMSAgentforLinux** and for Windows, the agent is listed as **MicrosoftMonitoringAgent**. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. In the portal, select **All services**. In the list of resources, enter **virtual machines**. As you begin typing, the list filters based on your input. Select **Virtual machines**. +1. In your list of virtual machines, find and select it. +1. On the virtual machine, select **Extensions**. +1. From the list, check to see if the Log Analytics extension is enabled or not. For Linux, the agent is listed as **OMSAgentforLinux**. For Windows, the agent is listed as **MicrosoftMonitoringAgent**. -  +  -4. Click on the extension to view details. +1. Select the extension to view details. -  +  -## Troubleshooting Azure Windows VM extension +## Troubleshoot the Azure Windows VM extension -If the *Microsoft Monitoring Agent* VM extension is not installing or reporting, you can perform the following steps to troubleshoot the issue. +If the Microsoft Monitoring Agent VM extension isn't installing or reporting, perform the following steps to troubleshoot the issue: -1. Check if the Azure VM agent is installed and working correctly by using the steps in [KB 2965986](https://support.microsoft.com/kb/2965986#mt1). - * You can also review the VM agent log file `C:\WindowsAzure\logs\WaAppAgent.log` - * If the log does not exist, the VM agent is not installed. - * [Install the Azure VM Agent](../../virtual-machines/extensions/agent-windows.md#install-the-vm-agent) -2. Review the Microsoft Monitoring Agent VM extension log files in `C:\Packages\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent` -3. Ensure the virtual machine can run PowerShell scripts -4. Ensure permissions on C:\Windows\temp havenΓÇÖt been changed -5. 
View the status of the Microsoft Monitoring Agent by typing the following in an elevated PowerShell window on the virtual machine `(New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg').GetCloudWorkspaces() | Format-List` -6. Review the Microsoft Monitoring Agent setup log files in `C:\WindowsAzure\Logs\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\1.0.18053.0\`. Note that this path will change based on the version number of the agent. +1. Check if the Azure VM agent is installed and working correctly by using the steps in [KB 2965986](https://support.microsoft.com/kb/2965986#mt1): + * You can also review the VM agent log file `C:\WindowsAzure\logs\WaAppAgent.log`. + * If the log doesn't exist, the VM agent isn't installed. + * [Install the Azure VM Agent](../../virtual-machines/extensions/agent-windows.md#install-the-vm-agent). +1. Review the Microsoft Monitoring Agent VM extension log files in `C:\Packages\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent`. +1. Ensure the virtual machine can run PowerShell scripts. +1. Ensure permissions on C:\Windows\temp haven't been changed. +1. View the status of the Microsoft Monitoring Agent by entering `(New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg').GetCloudWorkspaces() | Format-List` in an elevated PowerShell window on the virtual machine. +1. Review the Microsoft Monitoring Agent setup log files in `C:\WindowsAzure\Logs\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\1.0.18053.0\`. This path changes based on the version number of the agent. -For more information, see [troubleshooting Windows extensions](../../virtual-machines/extensions/oms-windows.md). +For more information, see [Troubleshooting Windows extensions](../../virtual-machines/extensions/oms-windows.md). -## Troubleshooting Linux VM extension -If the *Log Analytics agent for Linux* VM extension is not installing or reporting, you can perform the following steps to troubleshoot the issue. +## Troubleshoot the Linux VM extension +If the Log Analytics agent for Linux VM extension isn't installing or reporting, perform the following steps to troubleshoot the issue: -1. If the extension status is *Unknown* check if the Azure VM agent is installed and working correctly by reviewing the VM agent log file `/var/log/waagent.log` - * If the log does not exist, the VM agent is not installed. - * [Install the Azure VM Agent on Linux VMs](../../virtual-machines/extensions/agent-linux.md#installation) -2. For other unhealthy statuses, review the Log Analytics agent for Linux VM extension logs files in `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/extension.log` and `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/CommandExecution.log` -3. If the extension status is healthy, but data is not being uploaded review the Log Analytics agent for Linux log files in `/var/opt/microsoft/omsagent/log/omsagent.log` +1. If the extension status is **Unknown**, check if the Azure VM agent is installed and working correctly by reviewing the VM agent log file `/var/log/waagent.log`. + * If the log doesn't exist, the VM agent isn't installed. + * [Install the Azure VM Agent on Linux VMs](../../virtual-machines/extensions/agent-linux.md#installation). +1. 
For other unhealthy statuses, review the Log Analytics agent for Linux VM extension log files in `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/extension.log` and `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/CommandExecution.log`. +1. If the extension status is healthy but data isn't being uploaded, review the Log Analytics agent for Linux log files in `/var/opt/microsoft/omsagent/log/omsagent.log`. ## Next steps -For additional troubleshooting guidance related to the Log Analytics agent for Linux, see [Troubleshoot Azure Log Analytics Linux Agent](../agents/agent-linux-troubleshoot.md). +For more troubleshooting guidance related to the Log Analytics agent for Linux, see [Troubleshoot issues with the Log Analytics agent for Linux](../agents/agent-linux-troubleshoot.md). |
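When the checks above show that the extension is missing or stuck in a failed state, a common remediation is to redeploy it. The following Bicep sketch shows roughly what that looks like for the Linux agent. The VM name, extension version, and API versions are assumptions; the workspace ID and key come from your own Log Analytics workspace.

```bicep
param location string = resourceGroup().location
param workspaceCustomerId string

@secure()
param workspaceKey string

// Hypothetical existing Linux VM that should report to the workspace.
resource vm 'Microsoft.Compute/virtualMachines@2022-11-01' existing = {
  name: 'vm-linux-demo'
}

// Redeploys the Log Analytics (OMS) agent extension for Linux.
resource omsAgent 'Microsoft.Compute/virtualMachines/extensions@2022-11-01' = {
  parent: vm
  name: 'OmsAgentForLinux'
  location: location
  properties: {
    publisher: 'Microsoft.EnterpriseCloud.Monitoring'
    type: 'OmsAgentForLinux'
    typeHandlerVersion: '1.14' // illustrative; pick a current handler version
    autoUpgradeMinorVersion: true
    settings: {
      workspaceId: workspaceCustomerId
    }
    protectedSettings: {
      workspaceKey: workspaceKey
    }
  }
}
```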
azure-monitor | Monitor Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md | The articles in this guide provide guidance on configuring VM insights and using ## Security monitoring-Azure Monitor focuses on operational data like Activity logs, Metrics, and Log Analytics supported sources, including Windows Events (excluding security events), performance counters, logs, and Syslog. Security monitoring in Azure is performed by [Microsoft Defender for Cloud](/azure/defender-for-cloud/) and [Microsoft Sentinel](/azure/sentinel/). Configuration of these services is not included in this guide. +Azure Monitor focuses on operational data like Activity logs, Metrics, and Log Analytics supported sources, including Windows Events (excluding security events), performance counters, logs, and Syslog. Security monitoring in Azure is performed by [Microsoft Defender for Cloud](../../defender-for-cloud/index.yml) and [Microsoft Sentinel](../../sentinel/index.yml). Configuration of these services is not included in this guide. > [!IMPORTANT] > The security services have their own cost independent of Azure Monitor. Before you configure these services, refer to their pricing information to determine your appropriate investment in their usage. See [Design a Log Analytics workspace architecture](../logs/workspace-design.md) ## Next steps -[Deploy the Azure Monitor agent to your virtual machines](monitor-virtual-machine-agent.md) +[Deploy the Azure Monitor agent to your virtual machines](monitor-virtual-machine-agent.md) |
azure-netapp-files | Faq Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-integration.md | Using Azure NetApp Files NFS or SMB volumes with AVS for *Guest OS mounts* is su ## Which Unicode Character Encoding is supported by Azure NetApp Files for the creation and display of file and directory names? -Azure NetApp Files only supports file and directory names that are encoded with the UTF-8 Unicode Character Encoding format for both NFS and SMB volumes. +Azure NetApp Files only supports file and directory names that are encoded with the [UTF-8 Unicode Character Encoding](https://en.wikipedia.org/wiki/UTF-8), *C locale* (or _C.UTF-8_) format for both NFS and SMB volumes. As such, only strict ASCII characters are valid. -If you try to create files or directories with names that use supplementary characters or surrogate pairs such as non-regular characters and emoji that are not supported by UTF-8, the operation will fail. In this case, an error from a Windows client might read "The file name you specified is not valid or too long. Specify a different file name." +If you try to create files or directories with names that use supplementary characters or surrogate pairs such as non-regular characters and emoji that are not supported by C.UTF-8, the operation will fail. In this case, an error from a Windows client might read "The file name you specified is not valid or too long. Specify a different file name." ## Next steps |
azure-netapp-files | Snapshots Manage Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/snapshots-manage-policy.md | A snapshot policy enables you to specify the snapshot creation frequency in hour > [!IMPORTANT] > For *monthly* snapshot policy definition, be sure to specify a day that will work for all intended months. If you intend for the monthly snapshot configuration to work for all months in the year, pick a day of the month between 1 and 28. For example, if you specify `31` (day of the month), the monthly snapshot configuration is skipped for the months that have less than 31 days. > + + > [!NOTE] + > Using [policy-based backups for Azure NetApp Files](backup-configure-policy-based.md#configure-a-backup-policy) might affect the number of snapshots to keep. Backup policies involve snapshot policies. And Azure NetApp Files prevents you from deleting the latest backup. + See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) about the maximum number of snapshots allowed for a volume. The following example shows hourly snapshot policy configuration. |
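The day-of-month guidance above is easiest to see in a concrete schedule definition. The following Bicep sketch of a snapshot policy is illustrative only; the account name, retention count, and API version are assumptions. The important detail is that `daysOfMonth` stays between 1 and 28 so the monthly schedule is valid for every month of the year.

```bicep
param location string = resourceGroup().location

// Hypothetical existing NetApp account.
resource netAppAccount 'Microsoft.NetApp/netAppAccounts@2022-05-01' existing = {
  name: 'anf-demo-account'
}

// Monthly snapshots on day 28, so the schedule never skips a short month.
resource snapshotPolicy 'Microsoft.NetApp/netAppAccounts/snapshotPolicies@2022-05-01' = {
  parent: netAppAccount
  name: 'monthly-snapshots'
  location: location
  properties: {
    enabled: true
    monthlySchedule: {
      snapshotsToKeep: 6
      daysOfMonth: '28'
      hour: 1
      minute: 0
    }
  }
}
```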
azure-resource-manager | Linter Rule Decompiler Cleanup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-decompiler-cleanup.md | Title: Linter rule - decompiler cleanup description: Linter rule - decompiler cleanup Previously updated : 11/01/2022 Last updated : 02/10/2023 # Linter rule - decompiler cleanup Use the following value in the [Bicep configuration file](bicep-config-linter.md To increase the readability, update these names with more meaningful names. +The following example fails this test because the two variable names appear to have originated from a naming conflict during a decompilation from JSON. ++```bicep +var hostingPlanName_var = functionAppName +var storageAccountName_var = 'azfunctions${uniqueString(resourceGroup().id)}' +``` ++This example passes this test. ++```bicep +var hostingPlanName = functionAppName +var storageAccountName = 'azfunctions${uniqueString(resourceGroup().id)}' +``` ++Consider using <kbd>F2</kbd> in Visual Studio Code to replace symbols. + ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Linter Rule No Hardcoded Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-hardcoded-location.md | Title: Linter rule - no hardcoded locations description: Linter rule - no hardcoded locations Previously updated : 1/6/2022 Last updated : 02/10/2023 # Linter rule - no hardcoded locations The following example fails this test because the resource's `location` property location: 'westus' } ```+ You can fix it by creating a new `location` string parameter (which may optionally have a default value - resourceGroup().location is frequently used as a default): ```bicep You can fix it by creating a new `location` string parameter (which may optional } ``` +Use **Quick Fix** to create a location parameter and replace the string literal with the parameter name. See the following screenshot: ++ The following example fails this test because the resource's `location` property uses a variable with a string literal. ```bicep module m1 'module1.bicep' = { } } ```+ where module1.bicep is:+ ```bicep param location string resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = { ``` You can fix the failure by creating a new parameter for the value:+ ```bicep param location string // optionally with a default value module m1 'module1.bicep' = { |
azure-resource-manager | Linter Rule No Unnecessary Dependson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-unnecessary-dependson.md | Title: Linter rule - no unnecessary dependsOn entries description: Linter rule - no unnecessary dependsOn entries Previously updated : 12/14/2021 Last updated : 02/10/2023 # Linter rule - no unnecessary dependsOn entries Use the following value in the [Bicep configuration file](bicep-config-linter.md ## Solution -To reduce confusion in your template, delete any dependsOn entries which are not necessary. Bicep automatically infers most resource dependencies as long as template expressions reference other resources via symbolic names rather than strings with hard-coded IDs or names. +To reduce confusion in your template, delete any dependsOn entries that aren't necessary. Bicep automatically infers most resource dependencies as long as template expressions reference other resources via symbolic names rather than strings with hard-coded IDs or names. The following example fails this test because the dependsOn entry `appServicePlan` is automatically inferred by Bicep implied by the expression `appServicePlan.id` (which references resource symbolic name `appServicePlan`) in the `serverFarmId` property's value. ```bicep-resource appServicePlan 'Microsoft.Web/serverfarms@2020-12-01' = { +param location string = resourceGroup().location ++resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = { name: 'name'- location: resourceGroup().location + location: location sku: { name: 'F1' capacity: 1 } } -resource webApplication 'Microsoft.Web/sites@2018-11-01' = { +resource webApplication 'Microsoft.Web/sites@2022-03-01' = { name: 'name'- location: resourceGroup().location + location: location properties: { serverFarmId: appServicePlan.id } resource webApplication 'Microsoft.Web/sites@2018-11-01' = { You can fix it by removing the unnecessary dependsOn entry. ```bicep-resource appServicePlan 'Microsoft.Web/serverfarms@2020-12-01' = { +param location string = resourceGroup().location ++resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = { name: 'name'- location: resourceGroup().location + location: location sku: { name: 'F1' capacity: 1 } } -resource webApplication 'Microsoft.Web/sites@2018-11-01' = { +resource webApplication 'Microsoft.Web/sites@2022-03-01' = { name: 'name'- location: resourceGroup().location + location: location properties: { serverFarmId: appServicePlan.id } } ``` +Use **Quick Fix** to remove the unnecessary dependsOn entry. ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Linter Rule No Unused Existing Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-unused-existing-resources.md | Title: Linter rule - no unused existing resources description: Linter rule - no unused existing resources Previously updated : 07/21/2022 Last updated : 02/10/2023 # Linter rule - no unused existing resources Use the following value in the [Bicep configuration file](bicep-config-linter.md To reduce confusion in your template, delete any [existing resources](./existing-resource.md) that are defined but not used. This test finds any existing resource that isn't used anywhere in the template. +The following example fails this test because the existing resource **stg** is declared but never used: ++```bicep +resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' existing = { + name: 'examplestorage' +} +``` ++Use **Quick Fix** to remove the unused existing resource. ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Linter Rule No Unused Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-unused-parameters.md | Title: Linter rule - no unused parameters description: Linter rule - no unused parameters Previously updated : 11/18/2021 Last updated : 02/10/2023 # Linter rule - no unused parameters Use the following value in the [Bicep configuration file](bicep-config-linter.md To reduce confusion in your template, delete any parameters that are defined but not used. This test finds any parameters that aren't used anywhere in the template. Eliminating unused parameters also makes it easier to deploy your template because you don't have to provide unnecessary values. +You can use **Quick Fix** to remove the unused parameters: ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
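The entry above describes the rule and its Quick Fix but doesn't show code, so here is a small hedged sketch of a template that would trigger the warning; the parameter and resource names are made up for illustration.

```bicep
// 'unusedSuffix' is declared but never referenced, so the linter reports no-unused-params.
param location string = resourceGroup().location
param storageAccountName string
param unusedSuffix string = '001'

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAccountName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

Deleting `unusedSuffix`, or referencing it (for example in the account name), clears the warning, which is what the Quick Fix does for you.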
azure-resource-manager | Linter Rule No Unused Variables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-unused-variables.md | Title: Linter rule - no unused variables description: Linter rule - no unused variables Previously updated : 11/18/2021 Last updated : 02/10/2023 # Linter rule - no unused variables Use the following value in the [Bicep configuration file](bicep-config-linter.md To reduce confusion in your template, delete any variables that are defined but not used. This test finds any variables that aren't used anywhere in the template. +You can use **Quick Fix** to remove the unused variables: ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
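For the unused-variable case, a minimal hedged example along the same lines; the names are illustrative.

```bicep
// 'storageSku' is declared but never used, so the linter reports no-unused-vars.
param location string = resourceGroup().location

var storageSku = 'Standard_LRS'

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'examplestorage'
  location: location
  sku: {
    name: 'Standard_GRS'
  }
  kind: 'StorageV2'
}
```

Removing `storageSku`, or using it as the value of `sku.name`, resolves the diagnostic.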
azure-resource-manager | Linter Rule Outputs Should Not Contain Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-outputs-should-not-contain-secrets.md | Title: Linter rule - outputs should not contain secrets description: Linter rule - outputs should not contain secrets Previously updated : 12/09/2022 Last updated : 02/10/2023 # Linter rule - outputs should not contain secrets The output from a template is stored in the deployment history, so a user with r The following example fails because it includes a secure parameter in an output value. ```bicep- @secure() param secureParam string ΓÇï The following example fails because the output name contains 'password', indicat output accountPassword string = '...' ``` -To fix it, you will need to remove the secret data from the output. The recommended practice is to output the resourceId of the resource containing the secret and retrieve the secret when the resource needing the information is created or updated. Secrets may also be stored in KeyVault for more complex deployment scenarios. +To fix it, you need to remove the secret data from the output. The recommended practice is to output the resourceId of the resource containing the secret and retrieve the secret when the resource needing the information is created or updated. Secrets may also be stored in KeyVault for more complex deployment scenarios. The following example shows a secure pattern for retrieving a storageAccount key from a module. someProperty: listKeys(myStorageModule.outputs.storageId.value, '2021-09-01').ke ## Silencing false positives -Sometimes this rule will alert on template outputs that do not actually contain secrets. For instance, not all [`list*`](./bicep-functions-resource.md#list) functions actually return sensitive data. In these cases, you can disable the warning for this line by adding `#disable-next-line outputs-should-not-contain-secrets` before the line with the warning. +Sometimes this rule alerts on template outputs that don't actually contain secrets. For instance, not all [`list*`](./bicep-functions-resource.md#list) functions actually return sensitive data. In these cases, you can disable the warning for this line by adding `#disable-next-line outputs-should-not-contain-secrets` before the line with the warning. ```bicep-#disable-next-line outputs-should-not-contain-secrets // Does not contain a password +#disable-next-line outputs-should-not-contain-secrets // Doesn't contain a password output notAPassword string = '...' ``` -It is good practice to add a comment explaining why the rule does not apply to this line. +It's good practice to add a comment explaining why the rule doesn't apply to this line. ## Next steps |
azure-resource-manager | Linter Rule Prefer Interpolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-prefer-interpolation.md | Title: Linter rule - prefer interpolation description: Linter rule - prefer interpolation Previously updated : 11/18/2021 Last updated : 02/10/2023 # Linter rule - prefer interpolation Use the following value in the [Bicep configuration file](bicep-config-linter.md ## Solution -Use string interpolation instead of the concat function. +Use string interpolation instead of the `concat` function. -The following example fails this test because the concat function is used. +The following example fails this test because the `concat` function is used. ```bicep param suffix string = '001' var vnetName = concat('vnet-', suffix) ``` -You can fix it by replacing concat with string interpolation. The following example passes this test. +You can fix it by replacing `concat` with string interpolation. The following example passes this test. ```bicep param suffix string = '001' var vnetName = 'vnet-${suffix}' ``` +Optionally, you can use **Quickfix** to replace the `concat` with string interpolation: ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Linter Rule Prefer Unquoted Property Names | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-prefer-unquoted-property-names.md | Title: Linter rule - prefer unquoted property names description: Linter rule - prefer unquoted property names Previously updated : 07/29/2022 Last updated : 02/10/2023 # Linter rule - prefer unquoted property names var x2 = obj['1'] var x3 = obj.myProp ``` +Optionally, you can use **Quick Fix** to fix the issues: ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
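To make the failing and passing forms concrete, here is a hedged sketch with an illustrative object; the property and variable names are made up.

```bicep
// Fails: quoted property name and bracket access where dot access would work.
var quotedSettings = {
  'enabled': true
}
var quotedValue = quotedSettings['enabled']

// Passes: unquoted property name and dot access.
var plainSettings = {
  enabled: true
}
var plainValue = plainSettings.enabled
```

Bracket access remains appropriate when the property name isn't a valid identifier, such as `obj['1']` in the example above.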
azure-resource-manager | Linter Rule Secure Parameter Default | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-secure-parameter-default.md | Title: Linter rule - secure parameter default description: Linter rule - secure parameter default Previously updated : 11/18/2021 Last updated : 02/10/2023 # Linter rule - secure parameter default You can fix it by removing the default value. param adminPassword string ``` +Optionally, you can use **Quick Fix** to remove the insecure default value: ++ Or, by providing an empty string for the default value. ```bicep |
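The failing declaration that the quick fix targets looks like this minimal sketch; the parameter names and default value are illustrative only.

```bicep
// Fails: a secured parameter ships a non-empty default value.
@secure()
param adminPassword string = 'ChangeMe123!'

// Passes: no default at all, or an empty-string default.
@secure()
param adminPasswordFixed string = ''
```

Keeping real credentials out of defaults matters because default values end up in source control alongside the template.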
azure-resource-manager | Linter Rule Secure Secrets In Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-secure-secrets-in-parameters.md | Title: Linter rule - secure secrets in parameters description: Linter rule - secure secrets in parameters Previously updated : 08/01/2022 Last updated : 02/10/2023 # Linter rule - secure secrets in parameters You can fix it by adding the secure decorator: param mypassword string ``` +Optionally, you can use **Quick Fix** to add the secure decorator: ++ ## Silencing false positives Sometimes this rule alerts on parameters that don't actually contain secrets. In these cases, you can disable the warning for this line by adding `#disable-next-line secure-secrets-in-params` before the line with the warning. For example: |
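For context, the before-and-after of the quick fix is roughly the following sketch; the parameter names are placeholders.

```bicep
// Fails: the name suggests a secret, but the parameter isn't marked secure.
param mypassword string

// Passes: the quick fix adds the @secure() decorator.
@secure()
param mypasswordFixed string
```

The decorator keeps the value out of deployment logs and the deployment history.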
azure-resource-manager | Linter Rule Simplify Interpolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-simplify-interpolation.md | Title: Linter rule - simplify interpolation description: Linter rule - simplify interpolation Previously updated : 11/18/2021 Last updated : 02/10/2023 # Linter rule - simplify interpolation The following example fails this test because it just references a parameter. ```bicep param AutomationAccountName string -resource AutomationAccount 'Microsoft.Automation/automationAccounts@2020-01-13-preview' = { +resource AutomationAccount 'Microsoft.Automation/automationAccounts@2022-08-08' = { name: '${AutomationAccountName}' ... } You can fix it by removing the string interpolation syntax. ```bicep param AutomationAccountName string -resource AutomationAccount 'Microsoft.Automation/automationAccounts@2020-01-13-preview' = { +resource AutomationAccount 'Microsoft.Automation/automationAccounts@2022-08-08' = { name: AutomationAccountName ... } ``` +Optionally, you can use **Quick Fix** to remove the string interpolation syntax: ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Linter Rule Use Recent Api Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-recent-api-versions.md | Title: Linter rule - use recent API versions description: Linter rule - use recent API versions Previously updated : 09/30/2022 Last updated : 02/13/2023 # Linter rule - use recent API versions Use the following value in the [Bicep configuration file](bicep-config-linter.md Use the most recent API version, or one that is no older than 730 days. +Use **Quick Fix** to use the latest API versions: ++ ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
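A minimal sketch of an outdated API version next to its quick-fixed counterpart; the resource names are placeholders, and the exact versions the linter accepts depend on the API versions available to your Bicep tooling at build time.

```bicep
param location string = resourceGroup().location

// Fails: this API version is well past the 730-day window.
resource storageOld 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'stgexampleold'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

// Passes: a recent API version for the same resource type.
resource storageNew 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stgexamplenew'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```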
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | To learn more about limits on a more granular level, such as document size, quer ## Azure Communications Gateway limits -Some of the following default limits and quotas can be increased. To request a change, create a [change request](/azure/communications-gateway/request-changes.md) stating the limit you want to change. +Some of the following default limits and quotas can be increased. To request a change, create a [change request](../../communications-gateway/request-changes.md) stating the limit you want to change. [!INCLUDE [communications-gateway-general-restrictions](../../communications-gateway/includes/communications-gateway-general-restrictions.md)] There are limits, per subscription, for deploying resources using Compute Galler * [Understand Azure limits and increases](https://azure.microsoft.com/blog/2014/06/04/azure-limits-quotas-increase-requests/) * [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) * [Sizes for Azure Cloud Services](../../cloud-services/cloud-services-sizes-specs.md)-* [Naming rules and restrictions for Azure resources](resource-name-rules.md) +* [Naming rules and restrictions for Azure resources](resource-name-rules.md) |
batch | Batch Pool Vm Sizes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-vm-sizes.md | Title: Choose VM sizes and images for pools description: How to choose from the available VM sizes and OS versions for compute nodes in Azure Batch pools Previously updated : 09/02/2021 Last updated : 02/13/2023 When you select a node size for an Azure Batch pool, you can choose from almost ### Pools in Virtual Machine configuration -Batch pools in the Virtual Machine configuration support almost all [VM sizes](../virtual-machines/sizes.md). The supported VM sizes in a region can be obtained via [Batch Management APIs](batch-apis-tools.md#batch-management-apis), as well as the [command line tools](batch-apis-tools.md#batch-command-line-tools) (PowerShell cmdlets and Azure CLI). For example, the [Azure Batch CLI command](/cli/azure/batch/location#az-batch-location-list-skus) to list supported VM sizes in a region is: +Batch pools in the Virtual Machine configuration support almost all [VM sizes](../virtual-machines/sizes.md) available in Azure. +The supported VM sizes in a region can be obtained via the Batch Management API. You can use one of the following methods to +return a list of VM sizes supported by Batch in a region: ++- PowerShell: [Get-AzBatchSupportedVirtualMachineSku](/powershell/module/az.batch/get-azbatchsupportedvirtualmachinesku) +- Azure CLI: [az batch location list-skus](/cli/azure/batch/location#az-batch-location-list-skus) +- [Batch Management APIs](batch-apis-tools.md#batch-management-apis): [List Supported Virtual Machine SKUs](/rest/api/batchmanagement/location/list-supported-virtual-machine-skus) ++For example, using the Azure CLI, you can obtain the list of skus for a particular Azure region with the following command: ```azurecli-interactive-az batch location list-skus --location - [--filter] - [--maxresults] - [--subscription] +az batch location list-skus --location <azure-region> ``` -For each VM series, the following table also lists whether the VM series and VM sizes are supported by Batch. --| VM series | Supported sizes | -||| -| Basic A | All sizes *except* Basic_A0 (A0) | -| A | All sizes *except* Standard_A0, Standard_A8, Standard_A9, Standard_A10, Standard_A11 | -| Av2 | All sizes | -| B | Not supported | -| DCsv2 | All sizes | -| Dv2, DSv2 | All sizes | -| Dv3, Dsv3 | All sizes | -| Dav4, Dasv4 | All sizes | -| Ddv4, Ddsv4 | All sizes | -| Dv4, Dsv4 | Not supported | -| Ev3, Esv3 | All sizes, except for E64is_v3 | -| Eav4, Easv4 | All sizes | -| Edv4, Edsv4 | All sizes | -| Ev4, Esv4 | Not supported | -| F, Fs | All sizes | -| Fsv2 | All sizes | -| FX<sup>1</sup> | All sizes | -| G, Gs | All sizes | -| H | All sizes | -| HB | All sizes | -| HBv2 | All sizes | -| HBv3 | All sizes | -| HC | All sizes | -| Ls | All sizes | -| Lsv2 | All sizes | -| M | All sizes | -| Mv2<sup>1</sup> | All sizes | -| NC | All sizes | -| NCv2 | All sizes | -| NCv3 | All sizes | -| NCasT4_v3 | All sizes | -| NC_A100_v4 | All sizes | -| ND | All sizes | -| NDv4 | All sizes | -| NDv2 | None - not yet available | -| NP | All sizes | -| NV | All sizes | -| NVv3 | All sizes | -| NVv4 | All sizes | -| SAP HANA | Not supported | --<sup>1</sup> These VM series can only be used with generation 2 VM Images. +> [!TIP] +> Batch **does not** support any VM SKU sizes that have only remote storage. A local temporary disk is required for Batch. 
+> For example, Batch supports [ddv4 and ddsv4](../virtual-machines/ddv4-ddsv4-series.md), but does not support +> [dv4 and dsv4](../virtual-machines/dv4-dsv4-series.md). ### Using Generation 2 VM Images -Some VM series, such as [Mv2](../virtual-machines/mv2-series.md), can only be used with [generation 2 VM images](../virtual-machines/generation-2.md). Generation 2 VM images are specified like any VM image, using the 'sku' property of the ['imageReference'](/rest/api/batchservice/pool/add#imagereference) configuration; the 'sku' strings have a suffix such as "-g2" or "-gen2". To get a list of VM images supported by Batch, including generation 2 images, use the ['list supported images'](/rest/api/batchservice/account/listsupportedimages) API, [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). +Some VM series, such as [FX](../virtual-machines/fx-series.md) and [Mv2](../virtual-machines/mv2-series.md), can only be used +with [generation 2 VM images](../virtual-machines/generation-2.md). Generation 2 VM images are specified like any VM image, +using the `sku` property of the [`imageReference`](/rest/api/batchservice/pool/add#imagereference) configuration; the `sku` +strings have a suffix such as `-g2` or `-gen2`. To get a list of VM images supported by Batch, including generation 2 images, +use the ['list supported images'](/rest/api/batchservice/account/listsupportedimages) API, +[PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). ### Pools in Cloud Services Configuration Batch pools in Cloud Services Configuration support all [VM sizes for Cloud Serv - **Application requirements** - Consider the characteristics and requirements of the application you'll run on the nodes. Aspects like whether the application is multithreaded and how much memory it consumes can help determine the most suitable and cost-effective node size. For multi-instance [MPI workloads](batch-mpi.md) or CUDA applications, consider specialized [HPC](../virtual-machines/sizes-hpc.md) or [GPU-enabled](../virtual-machines/sizes-gpu.md) VM sizes, respectively. For more information, see [Use RDMA-capable or GPU-enabled instances in Batch pools](batch-pool-compute-intensive-sizes.md). -- **Tasks per node** - It's typical to select a node size assuming one task runs on a node at a time. However, it might be advantageous to have multiple tasks (and therefore multiple application instances) [run in parallel](batch-parallel-node-tasks.md) on compute nodes during job execution. In this case, it is common to choose a multicore node size to accommodate the increased demand of parallel task execution.+- **Tasks per node** - It's typical to select a node size assuming one task runs on a node at a time. However, it might be advantageous to have multiple tasks (and therefore multiple application instances) [run in parallel](batch-parallel-node-tasks.md) on compute nodes during job execution. In this case, it's common to choose a multicore node size to accommodate the increased demand of parallel task execution. - **Load levels for different tasks** - All of the nodes in a pool are the same size. If you intend to run applications with differing system requirements and/or load levels, we recommend that you use separate pools. 
Batch pools in Cloud Services Configuration support all [VM sizes for Cloud Serv Use one of the following APIs to return a list of Windows and Linux VM images currently supported by Batch, including the node agent SKU IDs for each image: -- Batch Service REST API: [List Supported Images](/rest/api/batchservice/account/listsupportedimages) - PowerShell: [Get-AzBatchSupportedImage](/powershell/module/az.batch/get-azbatchsupportedimage) - Azure CLI: [az batch pool supported-images](/cli/azure/batch/pool/supported-images)+- [Batch Service APIs](batch-apis-tools.md#batch-service-apis): [List Supported Images](/rest/api/batchservice/account/listsupportedimages) ++For example, using the Azure CLI, you can obtain the list of supported VM images with the following command: ++```azurecli-interactive +az batch pool supported-images list +``` -It is strongly recommended to avoid images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). Please see the [Batch best practices guide](best-practices.md) for more information regarding Batch pool VM image selection. +It's recommended to avoid images with impending Batch support end of life (EOL) dates. These dates can be discovered via +the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), +[PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). +For more information, see the [Batch best practices guide](best-practices.md) regarding Batch pool VM image selection. ## Next steps |
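To complement the CLI commands above, the following hedged Bicep sketch shows where a chosen VM size and image land in a Batch pool definition. The account name, pool name, image reference, and node agent SKU are assumptions; confirm valid values for your region with `az batch location list-skus` and `az batch pool supported-images list`.

```bicep
resource batchAccount 'Microsoft.Batch/batchAccounts@2022-06-01' existing = {
  name: 'batchexample'
}

resource pool 'Microsoft.Batch/batchAccounts/pools@2022-06-01' = {
  parent: batchAccount
  name: 'ubuntu-d2s-v3-pool'
  properties: {
    // Must be one of the sizes returned by 'az batch location list-skus' for your region.
    vmSize: 'STANDARD_D2S_V3'
    deploymentConfiguration: {
      virtualMachineConfiguration: {
        // Must match an entry from 'az batch pool supported-images list'.
        imageReference: {
          publisher: 'canonical'
          offer: '0001-com-ubuntu-server-focal'
          sku: '20_04-lts'
          version: 'latest'
        }
        nodeAgentSkuId: 'batch.node.ubuntu 20.04'
      }
    }
    scaleSettings: {
      fixedScale: {
        targetDedicatedNodes: 2
      }
    }
  }
}
```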
cloud-shell | Private Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/private-vnet.md | container resources: the internal resources from outside. - Accessible from specified networks: In this configuration, administrators must access the Azure portal from a computer running in the appropriate network to be able to use Cloud Shell.+- Disabled: When the networking for relay is set to disabled, the computer running Azure Cloud Shell + must be able to reach the private endpoint connected to the relay. ## Storage requirements |
cognitive-services | Gaming Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/gaming-concepts.md | It's not unusual that players in the same game session natively speak different For an example, see the [Speech translation quickstart](get-started-speech-translation.md). > [!NOTE]-> Besides the Speech service, you can also use the [Translator service](/azure/cognitive-services/translator/translator-overview). To execute text translation between supported source and target languages in real time see [Text translation](/azure/cognitive-services/translator/text-translation-overview). +> Besides the Speech service, you can also use the [Translator service](../translator/translator-overview.md). To execute text translation between supported source and target languages in real time see [Text translation](../translator/text-translation-overview.md). ## Next steps * [Azure gaming documentation](/gaming/azure/) * [Text-to-speech quickstart](get-started-text-to-speech.md) * [Speech-to-text quickstart](get-started-speech-to-text.md)-* [Speech translation quickstart](get-started-speech-translation.md) +* [Speech translation quickstart](get-started-speech-translation.md) |
cognitive-services | Cognitive Services Apis Create Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-apis-create-account.md | -Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze. +Azure Cognitive Services is cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They're available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze. ## Types of Cognitive Services resources The multi-service resource is named **Cognitive Services** in the portal. The mu * **Decision** - Content Moderator * **Language** - Language, Translator * **Speech** - Speech-* **Vision** - Computer Vision, Custom Vision, Face +* **Vision** - Computer Vision, Custom Vision, Form Recognizer, Face 1. You can select this link to create an Azure Cognitive multi-service resource: [Create a Cognitive Services resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne). 1. On the **Create** page, provide the following information: [!INCLUDE [Create Azure resource for subscription](./includes/quickstarts/cognitive-resource-project-details.md)]- + :::image type="content" source="media/cognitive-services-apis-create-account/resource_create_screen-multi.png" alt-text="Multi-service resource creation screen"::: 1. Configure other settings for your resource as needed, read and accept the conditions (as applicable), and then select **Review + create**. The multi-service resource is named **Cognitive Services** in the portal. The mu ### [Speech](#tab/speech) -1. Select the following links to create a Speech resource: +1. Select the following links to create a Speech resource: - [Speech Services](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) 1. On the **Create** page, provide the following information: If you want to clean up and remove a Cognitive Services subscription, you can de 1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Resource Groups** to display the list of your resource groups. 1. Locate the resource group containing the resource to be deleted. 1. If you want to delete the entire resource group, select the resource group name. On the next page, Select **Delete resource group**, and confirm.-1. If you want to delete only the Cognitive Service resource, select the resource group to see all the resources within it. On the next page, select the resource that you want to delete, select the ellipsis menu for that row, and select **Delete**. +1. If you want to delete only the Cognitive Service resource, select the resource group to see all the resources within it. On the next page, select the resource that you want to delete, select the ellipsis menu for that row, and select **Delete**. 
If you need to recover a deleted resource, see [Recover deleted Cognitive Services resources](manage-resources.md). |
cognitive-services | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/disaster-recovery.md | The cross-region disaster recovery feature, also known as Single Resource Multip ## Routing profiles -Azure Traffic Manager routes requests among the selected regions. The SRMR currently supports [Priority](/azure/traffic-manager/traffic-manager-routing-methods#priority-traffic-routing-method), [Performance](/azure/traffic-manager/traffic-manager-routing-methods#performance-traffic-routing-method) and [Weighted](/azure/traffic-manager/traffic-manager-routing-methods#weighted-traffic-routing-method) profiles and is currently available for the following +Azure Traffic Manager routes requests among the selected regions. The SRMR currently supports [Priority](../traffic-manager/traffic-manager-routing-methods.md#priority-traffic-routing-method), [Performance](../traffic-manager/traffic-manager-routing-methods.md#performance-traffic-routing-method) and [Weighted](../traffic-manager/traffic-manager-routing-methods.md#weighted-traffic-routing-method) profiles and is currently available for the following -* [Computer Vision](/azure/cognitive-services/computer-vision/overview) -* [Immersive Reader](/azure/applied-ai-services/immersive-reader/overview) -* [Univariate Anomaly Detector](/azure/cognitive-services/anomaly-detector/overview) +* [Computer Vision](./computer-vision/overview.md) +* [Immersive Reader](../applied-ai-services/immersive-reader/overview.md) +* [Univariate Anomaly Detector](./anomaly-detector/overview.md) > [!NOTE] > SRMR is not supported for multi-service resources or free tier resources. -If you use Priority or Weighted traffic manager profiles, your configuration will behave according to the [Traffic Manager documentation](/azure/traffic-manager/traffic-manager-routing-methods). +If you use Priority or Weighted traffic manager profiles, your configuration will behave according to the [Traffic Manager documentation](../traffic-manager/traffic-manager-routing-methods.md). ## Enable SRMR |
cognitive-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/service-limits.md | Conversational language understanding is only available in some Azure regions. S | North Central US | | ✓ | | North Europe | ✓ | ✓ | | Norway East | | ✓ |+| Qatar Central | | ✓ | | South Africa North | | ✓ | | South Central US | ✓ | ✓ | | Southeast Asia | | ✓ | Conversational language understanding is only available in some Azure regions. S | UK South | ✓ | ✓ | | West Central US | | ✓ | | West Europe | ✓ | ✓ |+| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | ## API limits The following limits are observed for the conversational language understanding. |Item|Lower Limit| Upper Limit | | | | |-|Count of utterances per project | 1 | 25,000| +|Number of utterances per project | 1 | 25,000| |Utterance length in characters (authoring) | 1 | 500 | |Utterance length in characters (prediction) | 1 | 1000 |-|Count of intents per project | 1 | 500| -|Count of entities per project | 1 | 500| -|Count of list synonyms per entity| 0 | 20,000 | -|Count of prebuilt components per entity| 0 | 7 | -|Count of regular expressions per project| 0 | 20 | -|Count of trained models per project| 0 | 10 | -|Count of deployments per project| 0 | 10 | +|Number of intents per project | 1 | 500| +|Number of entities per project | 0 | 350| +|Number of list synonyms per entity| 0 | 20,000 | +|Number of list synonyms per project| 0 | 2,000,000 | +|Number of prebuilt components per entity| 0 | 7 | +|Number of regular expressions per project| 0 | 20 | +|Number of trained models per project| 0 | 10 | +|Number of deployments per project| 0 | 10 | ## Naming limits |
cognitive-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md | Custom named entity recognition is only available in some Azure regions. Some re | North Central US | | ✓ | | North Europe | ✓ | ✓ | | Norway East | | ✓ |+| Qatar Central | | ✓ | | South Africa North | | ✓ | | South Central US | ✓ | ✓ | | Southeast Asia | | ✓ | Custom named entity recognition is only available in some Azure regions. Some re | UK South | ✓ | ✓ | | West Central US | | ✓ | | West Europe | ✓ | ✓ |+| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | ## API limits |
cognitive-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/service-limits.md | Custom text classification is only available in some Azure regions. Some regions | North Central US | | ✓ | | North Europe | ✓ | ✓ | | Norway East | | ✓ |+| Qatar Central | | ✓ | | South Africa North | | ✓ | | South Central US | ✓ | ✓ | | Southeast Asia | | ✓ | Custom text classification is only available in some Azure regions. Some regions | UK South | ✓ | ✓ | | West Central US | | ✓ | | West Europe | ✓ | ✓ |+| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | ## API limits |
cognitive-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/orchestration-workflow/service-limits.md | Orchestration workflow is only available in some Azure regions. Some regions are | North Central US | | ✓ | | North Europe | ✓ | ✓ | | Norway East | | ✓ |+| Qatar Central | | ✓ | | South Africa North | | ✓ | | South Central US | ✓ | ✓ | | Southeast Asia | | ✓ | Orchestration workflow is only available in some Azure regions. Some regions are | UK South | ✓ | ✓ | | West Central US | | ✓ | | West Europe | ✓ | ✓ |+| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | ## API limits |
cognitive-services | Content Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md | The table below outlines the various ways content filtering can appear: ```json {- "prompt":"Text example" - , "n": 3 - , "stream": false + "prompt":"Text example", + "n": 3, + "stream": false } ``` The table below outlines the various ways content filtering can appear: ```json {- "prompt":"Text example" - , "n": 3 - , "stream": false + "prompt":"Text example", + "n": 3, + "stream": false } ``` The table below outlines the various ways content filtering can appear: ```json {- "prompt":"Text example" - , "n": 3 - , "stream": true + "prompt":"Text example", + "n": 3, + "stream": true } ``` The table below outlines the various ways content filtering can appear: ```json {- "prompt":"Text example" - , "n": 3 - , "stream": true + "prompt":"Text example", + "n": 3, + "stream": true } ``` The table below outlines the various ways content filtering can appear: ```json {- "prompt":"Text example" - , "n": 1 - , "stream": false + "prompt":"Text example", + "n": 1, + "stream": false } ``` |
cognitive-services | Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/manage-costs.md | + + Title: Plan to manage costs for Azure OpenAI +description: Learn how to plan for and manage costs for Azure OpenAI Service by using cost analysis in the Azure portal. ++++++ Last updated : 02/10/2023++++# Plan to manage costs for Azure OpenAI Service ++This article describes how you plan for and manage costs for Azure OpenAI Service. Before you deploy the service, you can use the Azure pricing calculator to estimate costs for Azure OpenAI. Later, as you deploy Azure resources, review the estimated costs. After you've started using Azure OpenAI resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure OpenAI Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure OpenAI, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services. ++## Prerequisites ++Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../../../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ++## Estimate costs before using Azure OpenAI Service ++Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the costs of using Azure OpenAI. ++## Understand the full billing model for Azure OpenAI Service ++Azure OpenAI Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that there could be other additional infrastructure costs that might accrue. ++### How you're charged for Azure OpenAI Service ++### Base series and Codex series models ++Azure OpenAI base series and Codex series models are charged per 1,000 tokens. Costs vary depending on which model series you choose: Ada, Babbage, Curie, Davinci, or Code-Cushman. ++Our models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text. ++### Base Series and Codex series fine-tuned models ++Azure OpenAI fine-tuned models are charged based on three factors: ++- Training hours +- Hosting hours +- Inference per 1,000 tokens ++The hosting hours cost is important to be aware of since once a fine-tuned model is deployed it continues to incur an hourly cost regardless of whether you're actively using it. Fine-tuned model costs should be monitored closely. ++### Other costs that might accrue with Azure OpenAI Service ++Keep in mind that enabling capabilities like sending data to Azure Monitor Logs, alerting, etc. incurs additional costs for those services. These costs are visible under those other services and at the subscription level, but aren't visible when scoped just to your Azure OpenAI resource. 
++### Using Azure Prepayment with Azure OpenAI Service ++You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace. ++## Monitor costs ++As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ++When you use cost analysis, you view Azure OpenAI costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. ++To view Azure OpenAI costs in cost analysis: ++1. Sign in to the Azure portal. +2. Select one of your Azure OpenAI resources. +3. Under **Resource Management** select **Cost analysis** +4. By default cost analysis is scoped to the individual Azure OpenAI resource. +++To understand the breakdown of what makes up that cost, it can help to modify **Group by** to **Meter** and in this case switching the chart type to **Line**. You can now see that for this particular resource the source of the costs is from three different model series with **Text-Davinci Tokens** representing the bulk of the costs. +++It's important to understand scope when evaluating costs associated with Azure OpenAI. If your resources are part of the same resource group you can scope Cost Analysis at that level to understand the effect on costs. If your resources are spread across multiple resource groups you can scope to the subscription level. ++However, when scoped at a higher level you often need to add additional filters to be able to zero in on Azure OpenAI usage. When scoped at the subscription level we see a number of other resources that we may not care about in the context of Azure OpenAI cost management. When scoping at the subscription level, we recommend navigating to the full **Cost analysis tool** under the **Cost Management** service. Search for **"Cost Management"** in the top Azure search bar to navigate to the full service experience, which includes more options like creating budgets. +++If you try to add a filter by service, you'll find that you can't find Azure OpenAI in the list. This is because technically Azure OpenAI is part of Cognitive Services so the service level filter is **Cognitive Services**, but if you want to see all Azure OpenAI resources across a subscription without any other type of Cognitive Services resources you need to instead scope to **Service tier: Azure OpenAI**: +++## Create budgets ++You can create [budgets](../../../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. 
Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy. ++Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). ++> [!IMPORTANT] +> While OpenAI has an option for hard limits that will prevent you from going over your budget, Azure OpenAI does not currently provide this functionality. You are able to kick off automation from action groups as part of your budget notifications to take more advanced actions, but this requires additional custom development on your part. ++## Export cost data ++You can also [export your cost data](../../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets. ++## Next steps ++- Learn [how to optimize your cloud investment with Azure Cost Management](../../../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +- Learn more about managing costs with [cost analysis](../../../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +- Learn about how to [prevent unexpected costs](../../../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course. |
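The budgets described above can also be declared in a template. The following is a hedged Bicep sketch of a subscription-scoped monthly budget with a single 80% notification; the name, amount, start date, and contact address are placeholders, and the file must be deployed at subscription scope.

```bicep
targetScope = 'subscription'

resource openAiBudget 'Microsoft.Consumption/budgets@2021-10-01' = {
  name: 'openai-monthly-budget'
  properties: {
    category: 'Cost'
    amount: 500
    timeGrain: 'Monthly'
    timePeriod: {
      startDate: '2023-03-01'
    }
    notifications: {
      // Notify when actual spend reaches 80% of the budgeted amount.
      actualGreaterThan80Percent: {
        enabled: true
        operator: 'GreaterThan'
        threshold: 80
        contactEmails: [
          'finance@contoso.com'
        ]
      }
    }
  }
}
```

A `filter` block can narrow the budget to a specific resource group if your Azure OpenAI resources are isolated there; as the article notes, the budget alerts but doesn't hard-stop usage.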
cognitive-services | Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/monitoring.md | + + Title: Monitoring Azure OpenAI Service +description: Start here to learn how to monitor Azure OpenAI Service ++++++ Last updated : 02/13/2023+++# Monitoring Azure OpenAI Service ++When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. ++This article describes the monitoring data generated by Azure OpenAI Service. Azure OpenAI is part of Cognitive Services, which uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). ++## Monitoring data ++Azure OpenAI collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources). ++## Collection and routing ++Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. ++Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. ++See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. ++Keep in mind that using diagnostic settings and sending data to Azure Monitor Logs has additional costs associated with it. To understand more, consult the [Azure Monitor cost calculation guide](/azure/azure-monitor/logs/cost-logs). ++The metrics and logs you can collect are discussed in the following sections. ++## Analyzing metrics ++You can analyze metrics for *Azure OpenAI* by opening **Metrics** which can be found underneath the **Monitoring** section when viewing your Azure OpenAI resource in the Azure portal. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started) for details on using this tool. ++Azure OpenAI is a part of Cognitive Services. For a list of all platform metrics collected for Cognitive Services and Azure OpenAI, see [Cognitive Services supported metrics](/azure/azure-monitor/essentials/metrics-supported#microsoftcognitiveservicesaccounts). ++For the current subset of metrics available in Azure OpenAI: ++### Azure OpenAI Metrics ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|BlockedCalls |Yes |Blocked Calls |Count |Total |Number of calls that exceeded rate or quota limit. |ApiName, OperationName, Region, RatelimitKey | +|ClientErrors |Yes |Client Errors |Count |Total |Number of calls with client side error (HTTP response code 4xx). |ApiName, OperationName, Region, RatelimitKey | +|DataIn |Yes |Data In |Bytes |Total |Size of incoming data in bytes. |ApiName, OperationName, Region | +|DataOut |Yes |Data Out |Bytes |Total |Size of outgoing data in bytes. 
|ApiName, OperationName, Region | +|FineTunedTrainingHours |Yes |Processed FineTuned Training Hours |Count |Total |Number of Training Hours Processed on an OpenAI FineTuned Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region | +|Latency |Yes |Latency |MilliSeconds |Average |Latency in milliseconds. |ApiName, OperationName, Region, RatelimitKey | +|Ratelimit |Yes |Ratelimit |Count |Total |The current ratelimit of the ratelimit key. |Region, RatelimitKey | +|ServerErrors |Yes |Server Errors |Count |Total |Number of calls with service internal error (HTTP response code 5xx). |ApiName, OperationName, Region, RatelimitKey | +|SuccessfulCalls |Yes |Successful Calls |Count |Total |Number of successful calls. |ApiName, OperationName, Region, RatelimitKey | +|TokenTransaction |Yes |Processed Inference Tokens |Count |Total |Number of Inference Tokens Processed on an OpenAI Model |ApiName, ModelDeploymentName, FeatureName, UsageChannel, Region | +|TotalCalls |Yes |Total Calls |Count |Total |Total number of calls. |ApiName, OperationName, Region, RatelimitKey | +|TotalErrors |Yes |Total Errors |Count |Total |Total number of calls with error response (HTTP response code 4xx or 5xx). |ApiName, OperationName, Region, RatelimitKey | ++## Analyzing logs ++Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. ++All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/resource-logs-schema). ++The [Activity log](/azure/azure-monitor/essentials/activity-log) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. ++For a list of the types of resource logs available for Azure OpenAI and other Cognitive Services, see [Resource provider operations for Cognitive Services](/azure/role-based-access-control/resource-provider-operations#microsoftcognitiveservices) ++### Kusto queries ++> [!IMPORTANT] +> When you select **Logs** from the Azure OpenAI menu, Log Analytics is opened with the query scope set to the current Azure OpenAI resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/logs/scope) for details. ++To explore and get a sense of what type of information is available for your Azure OpenAI resource a useful query to start with once you have deployed a model and sent some completion calls through the playground is as follows: ++```kusto +AzureDiagnostics +| take 100 +| project TimeGenerated, _ResourceId, Category,OperationName, DurationMs, ResultSignature, properties_s +``` ++Here we return a sample of 100 entries and are displaying a subset of the available columns of data in the logs. The results are as follows: +++If you wish to see all available columns of data, you can remove the scoping that is provided by the `| project` line: ++```kusto +AzureDiagnostics +| take 100 +``` ++You can also select the arrow next to the table name to view all available columns and associated data types. 
++To examine AzureMetrics run: ++```kusto +AzureMetrics +| take 100 +| project TimeGenerated, MetricName, Total, Count, TimeGrain, UnitName +``` +++## Alerts ++Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have different benefits and drawbacks. ++Every organization's alerting needs are going to vary, and will also evolve over time. Generally all alerts should be actionable, with a specific intended response if the alert occurs. If there's no action for someone to take, then it might be something you want to capture in a report, but not in an alert. Some use cases may require alerting anytime certain error conditions exist. But in many environments, it might only be in cases where errors exceed a certain threshold for a period of time where sending an alert is warranted. ++Errors below certain thresholds can often be evaluated through regular analysis of data in Azure Monitor Logs. As you analyze your log data over time, you may also find that a certain condition not occurring for a long enough period of time might be valuable to track with alerts. Sometimes the absence of an event in a log is just as important a signal as an error. ++Depending on what type of application you're developing in conjunction with your use of Azure OpenAI, [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional monitoring benefits at the application layer. ++## Next steps ++- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. +- Read [Understand log searches in Azure Monitor logs](../../../azure-monitor/logs/log-query-overview.md). |
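Because resource logs aren't collected until a diagnostic setting routes them somewhere, a declarative version of that step may be useful alongside the portal instructions above. This is a hedged Bicep sketch; the account and workspace names are assumptions, and it routes all log categories plus platform metrics to Log Analytics.

```bicep
resource openAiAccount 'Microsoft.CognitiveServices/accounts@2022-12-01' existing = {
  name: 'openai-example'
}

resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' existing = {
  name: 'law-example'
}

// Diagnostic settings are an extension resource scoped to the Azure OpenAI account.
resource diagnostics 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'send-to-log-analytics'
  scope: openAiAccount
  properties: {
    workspaceId: logAnalytics.id
    logs: [
      {
        categoryGroup: 'allLogs'
        enabled: true
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
  }
}
```

Data routed this way is billed by Azure Monitor Logs separately from the Azure OpenAI resource itself.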
cognitive-services | Prepare Dataset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/prepare-dataset.md | The first step of customizing your model is to prepare a high quality dataset. T ## Best practices -Customization performs better with high-quality examples and the more you have, generally the better the model will perform. We recommend that you provide at least a few hundred high-quality examples to achieve a model that will perform better than using well-designed prompts with a base model. From there, performance tends to linearly increase with every doubling of the number of examples. Increasing the number of examples is usually the best and most reliable way of improving performance. +Customization performs better with high-quality examples and the more you have, generally the better the model performs. We recommend that you provide at least a few hundred high-quality examples to achieve a model that performs better than using well-designed prompts with a base model. From there, performance tends to linearly increase with every doubling of the number of examples. Increasing the number of examples is usually the best and most reliable way of improving performance. If you're fine-tuning on a pre-existing dataset rather than writing prompts from scratch, be sure to manually review your data for offensive or inaccurate content if possible, or review as many random samples of the dataset as possible if it's large. ## Specific guidelines -Fine-tuning can solve a variety of problems, and the optimal way to use it may depend on your specific use case. Below, we've listed the most common use cases for fine-tuning and corresponding guidelines. +Fine-tuning can solve various problems, and the optimal way to use it may depend on your specific use case. Below, we've listed the most common use cases for fine-tuning and corresponding guidelines. ### Classification The dataset might look something like the following: In the example above, we used a structured input containing the name of the company, the product, and the associated ad. As a separator we used `\nSupported:` which clearly separated the prompt from the completion. With a sufficient number of examples, the separator you choose doesn't make much of a difference (usually less than 0.4%) as long as it doesn't appear within the prompt or the completion. -For this use case we fine-tuned an ada model since it will be faster and cheaper, and the performance will be comparable to larger models because it's a classification task. +For this use case we fine-tuned an ada model since it is faster and cheaper, and the performance is comparable to larger models because it's a classification task. Now we can query our model by making a Completion request. Which will return: #### Case study: Categorization for Email triage -Let's say you'd like to categorize incoming email into one of a large number of predefined categories. For classification into a large number of categories, we recommend you convert those categories into numbers, which will work well up to ~500 categories. We've observed that adding a space before the number sometimes slightly helps the performance, due to tokenization. You may want to structure your training data as follows: +Let's say you'd like to categorize incoming email into one of a large number of predefined categories. 
For classification into a large number of categories, we recommend you convert those categories into numbers, which will work well with up to approximately 500 categories. We've observed that adding a space before the number sometimes slightly helps the performance, due to tokenization. You may want to structure your training data as follows: ```json-{"prompt":"Subject: <email_subject>\nFrom:<customer_name>\nDate:<date>\nContent:<email_body>\n\n###\n\n", "completion":" <numerical_category>"} +{ + "prompt":"Subject: <email_subject>\nFrom:<customer_name>\nDate:<date>\nContent:<email_body>\n\n###\n\n", "completion":" <numerical_category>" +} ``` For example: ```json-{"prompt":"Subject: Update my address\nFrom:Joe Doe\nTo:support@ourcompany.com\nDate:2021-06-03\nContent:Hi,\nI would like to update my billing address to match my delivery address.\n\nPlease let me know once done.\n\nThanks,\nJoe\n\n###\n\n", "completion":" 4"} +{ + "prompt":"Subject: Update my address\nFrom:Joe Doe\nTo:support@ourcompany.com\nDate:2021-06-03\nContent:Hi,\nI would like to update my billing address to match my delivery address.\n\nPlease let me know once done.\n\nThanks,\nJoe\n\n###\n\n", + "completion":" 4" +} ``` In the example above we used an incoming email capped at 2043 tokens as input. (This allows for a four token separator and a one token completion, summing up to 2048.) As a separator we used `\n\n###\n\n` and we removed any occurrence of ### within the email. Conditional generation is a problem where the content needs to be generated give - Aim for at least ~500 examples. - Ensure that the prompt + completion doesn't exceed 2048 tokens, including the separator. - Ensure the examples are of high quality and follow the same desired format.-- Ensure that the dataset used for fine-tuning is very similar in structure and type of task as what the model will be used for.+- Ensure that the dataset used for fine-tuning is similar in structure and type of task as what the model will be used for. - Using Lower learning rate and only 1-2 epochs tends to work better for these use cases. #### Case study: Write an engaging ad based on a Wikipedia article Conditional generation is a problem where the content needs to be generated give This is a generative use case so you would want to ensure that the samples you provide are of the highest quality, as the fine-tuned model will try to imitate the style (and mistakes) of the given examples. A good starting point is around 500 examples. A sample dataset might look like this: ```json-{"prompt":"<Product Name>\n<Wikipedia description>\n\n###\n\n", "completion":" <engaging ad> END"} +{ + "prompt":"<Product Name>\n<Wikipedia description>\n\n###\n\n", + "completion":" <engaging ad> END" +} ``` For example: ```json-{"prompt":"Samsung Galaxy Feel\nThe Samsung Galaxy Feel is an Android smartphone developed by Samsung Electronics exclusively for the Japanese market. The phone was released in June 2017 and was sold by NTT Docomo. It runs on Android 7.0 (Nougat), has a 4.7 inch display, and a 3000 mAh battery.\nSoftware\nSamsung Galaxy Feel runs on Android 7.0 (Nougat), but can be later updated to Android 8.0 (Oreo).\nHardware\nSamsung Galaxy Feel has a 4.7 inch Super AMOLED HD display, 16 MP back facing and 5 MP front facing cameras. It has a 3000 mAh battery, a 1.6 GHz Octa-Core ARM Cortex-A53 CPU, and an ARM Mali-T830 MP1 700 MHz GPU. It comes with 32GB of internal storage, expandable to 256GB via microSD. 
Aside from its software and hardware specifications, Samsung also introduced a unique a hole in the phone's shell to accommodate the Japanese perceived penchant for personalizing their mobile phones. The Galaxy Feel's battery was also touted as a major selling point since the market favors handsets with longer battery life. The device is also waterproof and supports 1seg digital broadcasts using an antenna that is sold separately.\n\n###\n\n", "completion":"Looking for a smartphone that can do it all? Look no further than Samsung Galaxy Feel! With a slim and sleek design, our latest smartphone features high-quality picture and video capabilities, as well as an award winning battery life. END"} +{ + "prompt":"Samsung Galaxy Feel\nThe Samsung Galaxy Feel is an Android smartphone developed by Samsung Electronics exclusively for the Japanese market. The phone was released in June 2017 and was sold by NTT Docomo. It runs on Android 7.0 (Nougat), has a 4.7 inch display, and a 3000 mAh battery.\nSoftware\nSamsung Galaxy Feel runs on Android 7.0 (Nougat), but can be later updated to Android 8.0 (Oreo).\nHardware\nSamsung Galaxy Feel has a 4.7 inch Super AMOLED HD display, 16 MP back facing and 5 MP front facing cameras. It has a 3000 mAh battery, a 1.6 GHz Octa-Core ARM Cortex-A53 CPU, and an ARM Mali-T830 MP1 700 MHz GPU. It comes with 32GB of internal storage, expandable to 256GB via microSD. Aside from its software and hardware specifications, Samsung also introduced a unique a hole in the phone's shell to accommodate the Japanese perceived penchant for personalizing their mobile phones. The Galaxy Feel's battery was also touted as a major selling point since the market favors handsets with longer battery life. The device is also waterproof and supports 1seg digital broadcasts using an antenna that is sold separately.\n\n###\n\n", + "completion":"Looking for a smartphone that can do it all? Look no further than Samsung Galaxy Feel! With a slim and sleek design, our latest smartphone features high-quality picture and video capabilities, as well as an award winning battery life. END" +} ``` Here we used a multiline separator, as Wikipedia articles contain multiple paragraphs and headings. We also used a simple end token, to ensure that the model knows when the completion should finish. #### Case study: Entity extraction -This is similar to a language transformation task. To improve the performance, it's best to either sort different extracted entities alphabetically or in the same order as they appear in the original text. This will help the model to keep track of all the entities which need to be generated in order. The dataset could look as follows: +This is similar to a language transformation task. To improve the performance, it's best to either sort different extracted entities alphabetically or in the same order as they appear in the original text. This helps the model to keep track of all the entities which need to be generated in order. The dataset could look as follows: ```json-{"prompt":"<any text, for example news article>\n\n###\n\n", "completion":" <list of entities, separated by a newline> END"} +{ + "prompt":"<any text, for example news article>\n\n###\n\n", + "completion":" <list of entities, separated by a newline> END" +} ``` For example: ```json-{"prompt":"Portugal will be removed from the UK's green travel list from Tuesday, amid rising coronavirus cases and concern over a \"Nepal mutation of the so-called Indian variant\". 
It will join the amber list, meaning holidaymakers should not visit and returnees must isolate for 10 days...\n\n###\n\n", "completion":" Portugal\nUK\nNepal mutation\nIndian variant END"} +{ + "prompt":"Portugal will be removed from the UK's green travel list from Tuesday, amid rising coronavirus cases and concern over a \"Nepal mutation of the so-called Indian variant\". It will join the amber list, meaning holidaymakers should not visit and returnees must isolate for 10 days...\n\n###\n\n", + "completion":" Portugal\nUK\nNepal mutation\nIndian variant END" +} ``` A multi-line separator works best, as the text will likely contain multiple lines. Ideally there will be a high diversity of the types of input prompts (news articles, Wikipedia pages, tweets, legal documents), which reflect the likely texts which will be encountered when extracting entities. #### Case study: Customer support chatbot -A chatbot will normally contain relevant context about the conversation (order details), summary of the conversation so far as well as most recent messages. For this use case the same past conversation can generate multiple rows in the dataset, each time with a slightly different context, for every agent generation as a completion. This use case will require a few thousand examples, as it will likely deal with different types of requests, and customer issues. To ensure the performance is of high quality we recommend vetting the conversation samples to ensure the quality of agent messages. The summary can be generated with a separate text transformation fine tuned model. The dataset could look as follows: +A chatbot will normally contain relevant context about the conversation (order details), summary of the conversation so far, and most recent messages. For this use case the same past conversation can generate multiple rows in the dataset, each time with a slightly different context, for every agent generation as a completion. This use case requires a few thousand examples, as it likely deals with different types of requests, and customer issues. To ensure the performance is of high quality, we recommend vetting the conversation samples to ensure the quality of agent messages. The summary can be generated with a separate text transformation fine tuned model. The dataset could look as follows: ```json {"prompt":"Summary: <summary of the interaction so far>\n\nSpecific information:<for example order details in natural language>\n\n###\n\nCustomer: <message1>\nAgent: <response1>\nCustomer: <message2>\nAgent:", "completion":" <response2>\n"} Here we purposefully separated different types of input information, but maintai Here it's important to convert the input data into a natural language, which will likely lead to superior performance. For example, the following format: ```json-{"prompt":"Item=handbag, Color=army_green, price=$99, size=S->", "completion":" This stylish small green handbag will add a unique touch to your look, without costing you a -fortune."} +{ + "prompt":"Item=handbag, Color=army_green, price=$99, size=S->", + "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune." +} ``` Won't work as well as: ```json-{"prompt":"Item is a handbag. Colour is army green. Price is midrange. Size is small.->", "completion":" This stylish small green handbag will add a unique touch to your look, without costing you a fortune."} +{ + "prompt":"Item is a handbag. Colour is army green. Price is midrange. 
Size is small.->", + "completion":"This stylish small green handbag will add a unique touch to your look, without costing you a fortune." +} ``` For high performance, ensure that the completions were based on the description provided. If external content is often consulted, then adding such content in an automated way would improve the performance. If the description is based on images, it may help to use an algorithm to extract a textual description of the image. Since completions are only one sentence long, we can use `.` as the stop sequence during inference. For this type of problem we recommend: - Leave the prompt empty. - No need for any separators.-- You'll normally want a very large number of examples, at least a few thousand.+- You'll normally want a large number of examples, at least a few thousand. - Ensure the examples cover the intended domain or the desired tone of voice. #### Case study: Maintaining company voice -Many companies will have a large amount of high quality content generated in a specific voice. Ideally all generations from our API should follow that voice for the different use cases. Here we can use the trick of leaving the prompt empty, and feeding in all the documents which are good examples of the company voice. A fine-tuned model can be used to solve a number of different use cases with similar prompts to the ones used for base models, but the outputs are going to follow the company voice much more closely than previously. +Many companies have a large amount of high quality content generated in a specific voice. Ideally all generations from our API should follow that voice for the different use cases. Here we can use the trick of leaving the prompt empty, and feeding in all the documents which are good examples of the company voice. A fine-tuned model can be used to solve many different use cases with similar prompts to the ones used for base models, but the outputs are going to follow the company voice much more closely than previously. ```json {"prompt":"", "completion":" <company voice textual content>"} Many companies will have a large amount of high quality content generated in a s A similar technique could be used for creating a virtual character with a particular personality, style of speech and topics the character talks about. -Generative tasks have a potential to leak training data when requesting completions from the model, so additional care needs to be taken that this is addressed appropriately. For example personal or sensitive company information should be replaced by generic information or not be included into fine-tuning in the first place. +Generative tasks have a potential to leak training data when requesting completions from the model, so extra care needs to be taken that this is addressed appropriately. For example personal or sensitive company information should be replaced by generic information or not be included into fine-tuning in the first place. ## Next steps |
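The case studies above share a few formatting conventions that are easy to get wrong by hand: the `\n\n###\n\n` prompt separator, completions that begin with a single space, and an explicit ` END` stop token for generative tasks. The following Python sketch shows how training records in that shape might be assembled into a JSONL file. The separator, stop token, and field layout come from the examples above; the helper names, sample values, and file path are illustrative assumptions only.

```python
import json

SEPARATOR = "\n\n###\n\n"  # prompt/completion separator used in the case studies above
STOP = " END"              # explicit stop token for generative completions


def classification_record(subject, sender, date, body, category):
    """One classification example: the prompt ends with the separator and
    the completion is a single space followed by the numerical category."""
    body = body.replace("###", "")  # strip literal ### so it can't collide with the separator
    prompt = f"Subject: {subject}\nFrom:{sender}\nDate:{date}\nContent:{body}{SEPARATOR}"
    return {"prompt": prompt, "completion": f" {category}"}


def generation_record(source_text, target_text):
    """One generative example (ad copy, entity lists, and so on): the
    completion starts with a space and ends with the END stop token."""
    return {"prompt": f"{source_text}{SEPARATOR}",
            "completion": f" {target_text}{STOP}"}


records = [
    classification_record("Update my address", "Joe Doe", "2021-06-03",
                          "Hi,\nI would like to update my billing address.", 4),
    generation_record("Portugal will be removed from the UK's green travel list from Tuesday...",
                      "Portugal\nUK"),
]

# Fine-tuning expects JSON Lines: exactly one JSON object per line.
with open("training_data.jsonl", "w", encoding="utf-8") as jsonl_file:
    for record in records:
        jsonl_file.write(json.dumps(record) + "\n")
```

A tokenizer library such as tiktoken can then be used to verify that each prompt plus completion, including the separator, stays within the 2,048-token budget discussed above.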
cognitive-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the quotas and limits t | Limit Name | Limit Value | |--|--| | OpenAI resources per region | 2 | -| Requests per second per deployment | 20 requests per second for: text-davinci-003, text-davinci-002, text-davinci-fine-tune-002, code-cushman-002, code-davinci-002, code-davinci-fine-tune-002 <br ><br> 50 requests per second for all other text models. - | -| Max fine-tuned model deployments | 2 | +| Requests per minute per model* | Davinci-models (002 and later): 120 <br> All other models: 300 | +| Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> All other models: 120,000 | +| Max fine-tuned model deployments* | 2 | | Ability to deploy same model to multiple deployments | Not allowed | | Total number of training jobs per resource | 100 | | Max simultaneous running training jobs per resource | 1 | The following sections provide you with a quick guide to the quotas and limits t | Max Files per resource | 50 | | Total size of all files per resource | 1 GB | | Max training job time (job will fail if exceeded) | 120 hours |-| Max training job size (tokens in training file * # of epochs) | **Ada**: 40-M tokens <br> **Babbage**: 40-M tokens <br> **Curie**: 40-M tokens <br> **Cushman**: 40-M tokens <br> **Davinci**: 10-M | +| Max training job size (tokens in training file) x (# of epochs) | **Ada**: 40-M tokens <br> **Babbage**: 40-M tokens <br> **Curie**: 40-M tokens <br> **Cushman**: 40-M tokens <br> **Davinci**: 10-M | +*The limits are subject to change. We anticipate that you will need higher limits as you move toward production and your solution scales. When you know your solution requirements, please reach out to us by applying for a quota increase here: <https://aka.ms/oai/quotaincrease> ### General best practices to mitigate throttling during autoscaling To minimize issues related to throttling, it's a good idea to use the following The next sections describe specific cases of adjusting quotas. -### Request an increase to a limit on transactions-per-second or number of fine-tuned models deployed +### How to request an increase to the transactions-per-minute, number of fine-tuned models deployed or token per minute quotas. -The limit of concurrent requests defines how high the service can scale before it starts to throttle your requests. --#### Have the required information ready --- OpenAI Resource ID-- Region-- Deployment Name - -How to get this information: --1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. -1. Select the Azure OpenAI resource for which you would like to increase the request limit. -1. From the **Resource Management** group, select **Properties**. -1. Copy and save the values of the following fields: - - **Resource ID** - - **Location** (your endpoint region) -1. From the **Resource Management** group, select **Deployments**. - - Copy and save the name of the Deployment you're requesting a limit increase --## Create and submit a support request --Initiate the increase of the limit for concurrent requests for your resource, or if necessary check the current limit, by submitting a support request. Here's how: --1. Ensure you have the required information listed in the previous section. -1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. -1. 
Select the OpenAI service resource for which you would like to increase (or to check) the concurrency request limit. -1. In the **Support + troubleshooting** group, select **New support request**. A new window will appear, with auto-populated information about your Azure subscription and Azure resource. -1. In **Summary**, describe what you want (for example, "Increase OpenAI request limit"). -1. In **Problem type**, select **Quota or Subscription issues**. -1. In **Problem subtype**, select **Increasing limits or access to specific functionality** -1. Select **Next: Solutions**. Proceed further with the request creation. -1. On the **Details** tab, in the **Description** field, enter the following: - - Include details on which limit you're requesting an increase for. - - The Azure resource information you [collected previously](#have-the-required-information-ready). - - Any other required information. -1. On the **Review + create** tab, select **Create**. -1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request. +If you need to increase the limit, you can apply for a quota increase here: <https://aka.ms/oai/quotaincrease> ## Next steps |
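One row in the quota table above is a product rather than a single number: the tokens in the training file multiplied by the number of epochs must stay under the per-model cap. The short Python sketch below illustrates that arithmetic using the caps listed in the table; the dictionary and function names are assumptions made for the example, not part of the service.

```python
# Per-model caps on (tokens in training file) x (number of epochs),
# taken from the quota table above (40M for Ada/Babbage/Curie/Cushman, 10M for Davinci).
TRAINING_JOB_TOKEN_CAPS = {
    "ada": 40_000_000,
    "babbage": 40_000_000,
    "curie": 40_000_000,
    "cushman": 40_000_000,
    "davinci": 10_000_000,
}


def training_job_fits(model: str, tokens_in_file: int, n_epochs: int) -> bool:
    """Return True if tokens_in_file * n_epochs is within the model's cap."""
    return tokens_in_file * n_epochs <= TRAINING_JOB_TOKEN_CAPS[model]


# A 4M-token file trained for 4 epochs totals 16M tokens:
print(training_job_fits("curie", 4_000_000, 4))    # True  (within the 40M cap)
print(training_job_fits("davinci", 4_000_000, 4))  # False (exceeds the 10M cap)
```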
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md | keywords: ```json {ΓÇï-"training_file": "file-XGinujblHPwGLSztz8cPS8XY" ,ΓÇï -"hyperparams": { ΓÇï - "batch_size": 4,ΓÇï - "learning_rate_multiplier": 0.1,ΓÇï - "n_epochs": 4,ΓÇï - "prompt_loss_weight": 0.1, ΓÇï - }ΓÇï + "training_file": "file-XGinujblHPwGLSztz8cPS8XY",ΓÇï + "hyperparams": { ΓÇï + "batch_size": 4,ΓÇï + "learning_rate_multiplier": 0.1,ΓÇï + "n_epochs": 4,ΓÇï + "prompt_loss_weight": 0.1,ΓÇï + }ΓÇï } ``` keywords: ```json {ΓÇï-"training_file": "file-XGinujblHPwGLSztz8cPS8XY" ,ΓÇï -"batch_size": 4,ΓÇï -ΓÇ£learning_rate_multiplier": 0.1,ΓÇï -"n_epochs": 4,ΓÇï -"prompt_loss_weight": 0.1, ΓÇï + "training_file": "file-XGinujblHPwGLSztz8cPS8XY",ΓÇï + "batch_size": 4,ΓÇï + "learning_rate_multiplier": 0.1,ΓÇï + "n_epochs": 4,ΓÇï + "prompt_loss_weight": 0.1,ΓÇï } ``` |
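The two snippets above differ only in whether the training hyperparameters are nested under a `hyperparams` object or passed at the top level of the request body. Building the body programmatically and serializing it with a JSON library sidesteps the trailing commas and stray characters that hand-edited payloads tend to pick up. A minimal sketch reusing the training file ID and hyperparameter values shown above; how the resulting body is submitted depends on your API version and is not shown here.

```python
import json

# Hyperparameter values shown in the snippets above.
hyperparams = {
    "batch_size": 4,
    "learning_rate_multiplier": 0.1,
    "n_epochs": 4,
    "prompt_loss_weight": 0.1,
}

# Shape 1: hyperparameters nested under a "hyperparams" object.
nested_body = {
    "training_file": "file-XGinujblHPwGLSztz8cPS8XY",
    "hyperparams": hyperparams,
}

# Shape 2: the same hyperparameters passed at the top level of the body.
flat_body = {"training_file": "file-XGinujblHPwGLSztz8cPS8XY", **hyperparams}

# json.dumps always emits strictly valid JSON (no trailing commas),
# ready to use as the body of a create fine-tune request.
print(json.dumps(nested_body, indent=2))
print(json.dumps(flat_body, indent=2))
```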
communication-services | Calling Chat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md | To enable calling and chat between your Communication Services users and your Te ## Enabling calling and chat interoperability in your Teams tenant-Azure AD user with [Teams administrator role](/azure/active-directory/roles/permissions-reference#teams-administrator) can run PowerShell cmdlet with MicrosoftTeams module to enable the Communication Services resource in the tenant. First, open the PowerShell and validate the existence of the Teams module with the following command: +Azure AD user with [Teams administrator role](../../../active-directory/roles/permissions-reference.md#teams-administrator) can run PowerShell cmdlet with MicrosoftTeams module to enable the Communication Services resource in the tenant. First, open the PowerShell and validate the existence of the Teams module with the following command: ```script Get-module *teams* While in private preview, a Communication Services user can do various actions u ## Privacy Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting. -Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced. You must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation. +Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced. You must communicate this fact in real time to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation. |
communication-services | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/security.md | Microsoft Teams handles security using a combination of technologies and process Azure Communication Services handles security by implementing various security measures to prevent and mitigate common security threats. These measures include data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and authentication mechanisms to verify the identity of users. The security framework for Azure Communication Services is based on industry standards and best practices. Azure also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure Communication Services integrates with other Azure security services, such as Azure Active Directory, to provide customers with a comprehensive security solution. Customers can also control access to the services and manage their security settings through the Azure portal. You can learn here more about [Azure security baseline](/security/benchmark/azure/baselines/azure-communication-services-security-baseline?toc=/azure/communication-services/toc.json), about security of [call flows](../../call-flows.md) and [call flow topologies](../../detailed-call-flows.md). ## Azure Active Directory-Azure Active Directory provides a range of security features for Microsoft Teams to help handle common security threats and provide a secure collaboration environment. Azure AD helps to secure user authentication and authorization, allowing administrators to manage user access to Teams and other applications through a single, centralized platform. Azure AD also integrates with Teams to provide multi-factor authentication and conditional access policies, which can be used to enforce security policies and control access to sensitive information. The security framework for Azure Active Directory is based on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security that covers all stages of development. Azure AD undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure AD integrates with other Azure security services, such as Azure Information Protection, to provide customers with a comprehensive security solution. You can learn here more about [Azure identity management security](/azure/security/fundamentals/identity-management-overview). +Azure Active Directory provides a range of security features for Microsoft Teams to help handle common security threats and provide a secure collaboration environment. Azure AD helps to secure user authentication and authorization, allowing administrators to manage user access to Teams and other applications through a single, centralized platform. Azure AD also integrates with Teams to provide multi-factor authentication and conditional access policies, which can be used to enforce security policies and control access to sensitive information. The security framework for Azure Active Directory is based on the Microsoft Security Development Lifecycle (SDL), a comprehensive and standardized approach to software security that covers all stages of development. Azure AD undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. 
Additionally, Azure AD integrates with other Azure security services, such as Azure Information Protection, to provide customers with a comprehensive security solution. You can learn here more about [Azure identity management security](../../../../security/fundamentals/identity-management-overview.md). |
communication-services | Quickstart Botframework Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/quickstart-botframework-integration.md | In some use cases, two bots need to be added to the same chat thread to provide You can verify the Communication Services user identity of a message sender in the activity's `From.Id` property. Check to see whether it belongs to another bot. Then, take the required action to prevent a bot-to-bot communication flow. If this type of scenario results in high call volumes, the Communication Services Chat channel throttles the requests and a bot can't send and receive messages. -Learn more about [throttle limits](/azure/communication-services/concepts/service-limits#chat). +Learn more about [throttle limits](../../concepts/service-limits.md#chat). ## Troubleshoot Verify that the bot's Communication Services ID is used correctly when a request ## Next steps -Try the [chat bot demo app](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App) for a 1:1 chat between a chat user and a bot via the BotFramework WebChat UI component. +Try the [chat bot demo app](https://github.com/Azure/communication-preview/tree/master/samples/AzureBotService-Sample-App) for a 1:1 chat between a chat user and a bot via the BotFramework WebChat UI component. |
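The check described above (read the sender's Communication Services identity from the activity's `From.Id` property and stop replying when it belongs to another bot) can be expressed as a small message handler. The sketch below assumes the Bot Framework SDK for Python, where the sender is exposed as `from_property` rather than `From`; the set of known bot IDs is an illustrative placeholder.

```python
from botbuilder.core import ActivityHandler, TurnContext

# Communication Services identities of the other bots in the chat thread.
# These values are placeholders for illustration only.
KNOWN_BOT_IDS = {"8:acs:bot-one-placeholder-id", "8:acs:bot-two-placeholder-id"}


class ThreadBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        sender_id = turn_context.activity.from_property.id

        # If the sender is another bot, don't respond. This breaks the
        # bot-to-bot loop that would otherwise drive up call volumes
        # and trigger throttling on the Chat channel.
        if sender_id in KNOWN_BOT_IDS:
            return

        await turn_context.send_activity(f"You said: {turn_context.activity.text}")
```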
communications-gateway | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md | Microsoft Teams only sends traffic to domains that you've confirmed that you own ## Next steps -- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)+- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md) |
communications-gateway | Monitor Azure Communications Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md | Last updated 01/25/2023 When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Communications Gateway and how you can use the features of Azure Monitor to analyze and alert on this data. -This article describes the monitoring data generated by Azure Communications Gateway. Azure Communications Gateway uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). +This article describes the monitoring data generated by Azure Communications Gateway. Azure Communications Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md). ## What is Azure Monitor? The following sections build on this article by describing the specific data gat Azure Communications Gateway collects metrics. See [Monitoring Azure Communications Gateway data reference](monitoring-azure-communications-gateway-data-reference.md) for detailed information on the metrics created by Azure Communications Gateway. Azure Communications Gateway doesn't collect logs. - For clarification on the different types of metrics available in Azure Monitor, see [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-Azure-resources). + For clarification on the different types of metrics available in Azure Monitor, see [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources). ## Analyzing metrics Azure Communications Gateway doesn't currently support alerts. ## Next steps -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources. |
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | This step guides you through creating a Key Vault to store a secret for the App The App registration you created in [3. Create an App registration to provide Azure Communications Gateway access to the Operator Connect API](#3-create-an-app-registration-to-provide-azure-communications-gateway-access-to-the-operator-connect-api) requires a dedicated Key Vault. The Key Vault is used to store the secret name and secret value (created in the next steps) for the App registration. -1. Create a Key Vault. Follow the steps in [Create a Vault](/azure/key-vault/general/quick-create-portal). +1. Create a Key Vault. Follow the steps in [Create a Vault](../key-vault/general/quick-create-portal.md). 1. Provide your onboarding team with the ResourceID and the Vault URI of your Key Vault. 1. Your onboarding team will use the ResourceID to request a Private-Endpoint. That request triggers two approval requests to appear in the Key Vault. 1. Approve these requests. This step must be performed on your tenant. It gives Azure Communications Gatewa Ensure your network is set up as shown in the following diagram and has been configured in accordance with the *Network Connectivity Specification* that you've been issued. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md). -To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](/azure/internet-peering/walkthrough-communications-services-partner). +To configure MAPS, follow the instructions in [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md). :::image type="content" source="media/azure-communications-gateway-redundancy.png" alt-text="Network diagram of an Azure Communications Gateway that uses MAPS as its peering service between Azure and an operators network."::: ## 6. Collect basic information for deploying an Azure Communications Gateway Access to Azure Communications Gateway is restricted. When you've completed the ## Next steps -- [Create an Azure Communications Gateway resource](deploy.md)+- [Create an Azure Communications Gateway resource](deploy.md) |
communications-gateway | Reliability Communications Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md | A single deployment of Azure Communications Gateway is designed to handle your O - Select from the list of available Azure regions. You can see the Azure regions that can be selected as service regions on the [Products by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) page. - Choose regions near to your own premises and the peering locations between your network and Microsoft to reduce call latency.-- Prefer [regional pairs](/azure/reliability/cross-region-replication-azure#azure-cross-region-replication-pairings-for-all-geographies) to minimize the recovery time if a multi-region outage occurs.+- Prefer [regional pairs](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) to minimize the recovery time if a multi-region outage occurs. Choose a management region from the following list: The reliability design described in this document is implemented by Microsoft an ## Next steps > [!div class="nextstepaction"]-> [Prepare to deploy an Azure Communications Gateway resource](prepare-to-deploy.md) +> [Prepare to deploy an Azure Communications Gateway resource](prepare-to-deploy.md) |
communications-gateway | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/security.md | The customer data Azure Communications Gateway handles can be split into: Azure Communications Gateway doesn't store content data, but it does store customer data and provide statistics based on it. This data is stored for a maximum of 30 days. After this period, it's no longer accessible to perform diagnostics or analysis of individual calls. Anonymized statistics and logs produced based on customer data are available after the 30 days limit. -Azure Communications Gateway doesn't support [Customer Lockbox for Microsoft Azure](/azure/security/fundamentals/customer-lockbox-overview). However Microsoft engineers can only access data on a just-in-time basis, and only for diagnostic purposes. +Azure Communications Gateway doesn't support [Customer Lockbox for Microsoft Azure](../security/fundamentals/customer-lockbox-overview.md). However Microsoft engineers can only access data on a just-in-time basis, and only for diagnostic purposes. Azure Communications Gateway stores all data at rest securely, including any customer data that has to be temporarily stored, such as call records. It uses standard Azure infrastructure, with platform-managed encryption keys, to provide server-side encryption compliant with a range of security standards including FedRAMP. For more information, see [encryption of data at rest](../security/fundamentals/encryption-overview.md). The following cipher suites are used for encrypting SIP and RTP. ## Next steps -- Learn about [how Azure Communications Gateway communicates with Microsoft Teams and your network](interoperability.md).-+- Learn about [how Azure Communications Gateway communicates with Microsoft Teams and your network](interoperability.md). |
container-registry | Container Registry Oci Artifacts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oci-artifacts.md | To remove the artifact from your registry, use the `oras manifest delete` comman <!-- LINKS - internal --> [acr-landing]: https://aka.ms/acr-[acr-authentication]: /azure/container-registry/container-registry-authentication?tabs=azure-cli -[az-acr-create]: /azure/container-registry/container-registry-get-started-azure-cli +[acr-authentication]: ./container-registry-authentication.md?tabs=azure-cli +[az-acr-create]: ./container-registry-get-started-azure-cli.md [az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete-[azure-cli-install]: /cli/azure/install-azure-cli +[azure-cli-install]: /cli/azure/install-azure-cli |
container-registry | Container Registry Oras Artifacts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md | In this article, a graph of supply chain artifacts is created, discovered, promo [oras-cli]: https://oras.land/cli_reference/ <!-- LINKS - internal -->-[acr-authentication]: /azure/container-registry/container-registry-authentication?tabs=azure-cli -[az-acr-create]: /azure/container-registry/container-registry-get-started-azure-cli +[acr-authentication]: ./container-registry-authentication.md?tabs=azure-cli +[az-acr-create]: ./container-registry-get-started-azure-cli.md [az-acr-build]: /cli/azure/acr#az_acr_build [az-acr-manifest-metadata]: /cli/azure/acr/manifest/metadata [az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete-[azure-cli-install]: /cli/azure/install-azure-cli +[azure-cli-install]: /cli/azure/install-azure-cli |
container-registry | Container Registry Tasks Scheduled | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md | timertask linux Enabled BASE_IMAGE, TIMER Also, a simple example, of the task running with source code context. The following task triggers running the `hello-world` image from Microsoft Container Registry every day at 21:00 UTC. -Follow the [Prerequisites](/azure/container-registry/container-registry-tutorial-quick-task#prerequisites) to build the source code context and then create a scheduled task with context. +Follow the [Prerequisites](./container-registry-tutorial-quick-task.md#prerequisites) to build the source code context and then create a scheduled task with context. ```azurecli az acr task create \ For examples of tasks triggered by source code commits or base image updates, se [az-acr-task-timer-update]: /cli/azure/acr/task/timer#az_acr_task_timer_update [az-acr-task-run]: /cli/azure/acr/task#az_acr_task_run [az-acr-task]: /cli/azure/acr/task-[azure-cli-install]: /cli/azure/install-azure-cli +[azure-cli-install]: /cli/azure/install-azure-cli |
cost-management-billing | Troubleshoot Threshold Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-threshold-billing.md | However, if the authorization on the card is declined, you're asked to update th ## How am I notified by Microsoft for a threshold billing authorization? -If the payment authorization is approved by the bank, it will immediately be reversed. You won't receive a notification. However, if the payment authorization is declined, you'll receive an email, text message, and Azure portal notification asking you to update your payment method before your account is disabled. +If the payment authorization is approved by the bank, it will immediately be reversed. You won't receive a notification. However, if the payment authorization is declined, you'll receive an email and an Azure portal notification asking you to update your payment method before your account is disabled. ## When does Microsoft release withholding funds on my credit card? |
databox-online | Azure Stack Edge Gpu Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-overview.md | The users are charged a monthly, recurring subscription fee for an Azure Stack E Currency conversion and discounts are handled centrally by the Azure Commerce billing platform, and you get one unified, itemized bill at the end of each month. -Billing starts 14 days after a device is marked as **Shipped** and ends when you initiate return of your device. +Standard [storage rates and transaction fees](https://azure.microsoft.com/pricing/details/storage/blobs/) are charged separately as applicable. Monthly subscription fee billing starts after delivery whether the appliance is activated or not. The billing happens against the order resource. If you activate the device against a different resource, the order and billing details move to the new resource. |
ddos-protection | Ddos Protection Sku Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md | Azure DDoS Network Protection, combined with application design best practices, > DDoS IP Protection is currently only available in Azure Preview PowerShell. > [!NOTE]-> Protecting a public IP resource attached to a Public Load Balancer is not supported for DDoS IP Proteciton SKU. +> Protecting a public IP resource attached to a Public Load Balancer is not supported for DDoS IP Protection SKU. ## SKUs |
defender-for-cloud | Defender For Cloud Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md | Title: Defender for Cloud glossary description: The glossary provides a brief description of important Defender for Cloud platform terms and concepts. Previously updated : 01/24/2023 Last updated : 02/13/2023 This glossary provides a brief description of important terms and concepts for t | Term | Description | Learn more | |--|--|--| |**AAC**|Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines. |[Adaptive Application Controls](adaptive-application-controls.md)+|**AAD**| Azure Active Directory (Azure AD) is a cloud-based identity and access management service.| [Adaptive Application Controls](../active-directory/fundamentals/active-directory-whatis.md) | **ACR Tasks** | A suite of features within Azure container registry | [Frequently asked questions - Azure Container Registry](../container-registry/container-registry-faq.yml) |+|**Adaptive network hardening**|Adaptive network hardening provides recommendations to further harden the [network security groups (NSG)](../virtual-network/network-security-groups-overview.md) rules.|[What is Adaptive Network Hardening?](../defender-for-cloud/adaptive-network-hardening.md#what-is-adaptive-network-hardening) | |**ADO**|Azure DevOps provides developer services for allowing teams to plan work, collaborate on code development, and build and deploy applications.|[What is Azure DevOps?](/azure/devops/user-guide/what-is-azure-devops) | |**AKS**| Azure Kubernetes Service, Microsoft's managed service for developing, deploying, and managing containerized applications.| [Kubernetes Concepts](/azure-stack/aks-hci/kubernetes-concepts)| |**Alerts**| Alerts defend your workloads in real-time so you can react immediately and prevent security events from developing.|[Security alerts and incidents](alerts-overview.md)| |**ANH** | Adaptive network hardening| [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md) |**APT** | Advanced Persistent Threats | [Video: Understanding APTs](/events/teched-2012/sia303)| | **Arc-enabled Kubernetes**| Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center.|[What is Azure Arc-enabled Logic Apps? 
(Preview)](../logic-apps/azure-arc-enabled-logic-apps-overview.md)+|**ARG**| Azure Resource Graph-an Azure service designed to extend Azure Resource Management by providing resource exploration with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment.| [Azure Resource Graph Overview](../governance/resource-graph/overview.md)| |**ARM**| Azure Resource Manager-the deployment and management service for Azure.| [Azure Resource Manager Overview](../azure-resource-manager/management/overview.md)| |**ASB**| Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on Azure.| [Azure Security Benchmark](/security/benchmark/azure/baselines/security-center-security-baseline) |+|**Attack Path Analysis**| A graph-based algorithm that scans the cloud security graph, exposes attack paths and suggests recommendations as to how best remediate issues that will break the attack path and prevent successful breach.| [What is attack path analysis?](concept-attack-path.md#what-is-attack-path-analysis) | |**Auto-provisioning**| To make sure that your server resources are secure, Microsoft Defender for Cloud uses agents installed on your servers to send information about your servers to Microsoft Defender for Cloud for analysis. You can use auto provisioning to quietly deploy the Azure Monitor Agent on your servers.| [Configure auto provision](../iot-dps/quick-setup-auto-provision.md)| ## B | Term | Description | Learn more | |--|--|--|+|**Bicep**| Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse.| [Bicep tutorial](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)| |**Blob storage**| Azure Blob Storage is the high scale object storage service for Azure and a key building block for data storage in Azure.| [what is Azure blob storage?](../storage/blobs/storage-blobs-introduction.md)| ## C | Term | Description | Learn more | |--|--|--|-|**Cacls** | Change access control list, Microsoft Windows native command-line utility often used for modifying the security permission on folders and files.| [access-control-lists](/windows/win32/secauthz/access-control-lists) | +|**Cacls** | Change access control list, Microsoft Windows native command-line utility often used for modifying the security permission on folders and files.| [Access control lists](/windows/win32/secauthz/access-control-lists) | |**CIS Benchmark** | (Kubernetes) Center for Internet Security benchmark| [CIS](../aks/cis-kubernetes.md)|+|**Cloud security graph** | The cloud security graph is a graph-based context engine that exists within Defender for Cloud. 
The cloud security graph collects data from your multicloud environment and other data sources| [What is the cloud security graph?](concept-attack-path.md#what-is-cloud-security-graph)| |**CORS**| Cross origin resource sharing, an HTTP feature that enables a web application running under one domain to access resources in another domain.| [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services)|+|**CNAPP**|Cloud Native Application Protection Platform|[Build cloud native applications in Azure](https://azure.microsoft.com/solutions/cloud-native-apps/)| |**CNCF**|Cloud Native Computing Foundation|[Build CNCF projects by using Azure Kubernetes service](/azure/architecture/example-scenario/apps/build-cncf-incubated-graduated-projects-aks)| |**CSPM**|Cloud Security Posture Management| [Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md)| |**CWPP** | Cloud Workload Protection Platform | [CWPP](./overview-page.md)| This glossary provides a brief description of important terms and concepts for t | Term | Description | Learn more | |--|--|--|+|**EASM**| External Attack Surface Management|[EASM Overview](how-to-manage-attack-path.md#external-attack-surface-management-easm)| |**EDR**| Endpoint Detection and Response|[Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)| |**EKS**| Amazon Elastic Kubernetes Service, Amazon's managed service for running Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.|[EKS](https://aws.amazon.com/eks/)| |**eBPF**|Extended Berkley Packet Filter |[What is eBPF?](https://ebpf.io/)| This glossary provides a brief description of important terms and concepts for t |--|--|--| |**GCP**| Google Cloud Platform | [Onboard a GPC Project](../active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md)| |**GKE**| Google Kubernetes Engine, Google's managed environment for deploying, managing, and scaling applications using GCP infrastructure.|[Deploy a Kubernetes workload using GPU sharing on your Azure Stack Edge Pro](../databox-online/azure-stack-edge-gpu-deploy-kubernetes-gpu-sharing.md)|+|**Governance**| A set of rules and policies adopted by companies that run services in the cloud. The goal of cloud governance is to enhance data security, manage risk, and enable the smooth operation of cloud systems.|[Governance Overview](governance-rules.md)| ++## I ++| Term | Description | Learn more | +|--|--|--| +| **IaaS** | Infrastructure as a service, a type of cloud computing service that offers essential compute, storage, and networking resources on demand, on a pay-as-you-go basis. |[What is IaaS?](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-iaas/) +| **IAM** | Identity and Access management |[Introduction to IAM](https://www.microsoft.com/security/business/security-101/what-is-identity-access-management-iam)| ## J This glossary provides a brief description of important terms and concepts for t | Term | Description | Learn more | |--|--|--|+|**Kill Chain**|The series of steps that describe the progression of a cyberattack from reconnaissance to data exfiltration. Defender for Cloud's supported kill chain intents are based on the MITRE ATT&CK matrix. 
| [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/)| |**KQL**|Kusto Query Language-a tool to explore your data and discover patterns, identify anomalies and outliers, create statistical modeling, and more.| [KQL Overview](/azure/data-explorer/kusto/query/)| ## L This glossary provides a brief description of important terms and concepts for t | Term | Description | Learn more | |--|--|--|+|**MCSB**| Microsoft Cloud Security Benchmark | [MCSB in Defender for Cloud](concept-regulatory-compliance.md#microsoft-cloud-security-benchmark-in-defender-for-cloud)| |**MDC**| Microsoft Defender for Cloud is a Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) for all of your Azure, on-premises, and multicloud (Amazon AWS and Google GCP) resources. | [What is Microsoft Defender for Cloud?](defender-for-cloud-introduction.md)| |**MDE**| Microsoft Defender for Endpoint is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats.|[Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)| |**MFA**|multi factor authentication, a process in which users are prompted during the sign-in process for an extra form of identification, such as a code on their cellphone or a fingerprint scan.|[How it works: Azure Multi Factor Authentication](../active-directory/authentication/concept-mfa-howitworks.md)| This glossary provides a brief description of important terms and concepts for t |--|--|--| |**NGAV**| Next Generation Anti-Virus | **NIST** | National Institute of Standards and Technology|[National Institute of Standards and Technology](https://www.nist.gov/)+|**NSG**| Network Security Group |[network security groups (NSGs)](../virtual-network/network-security-groups-overview.md)| ++## P ++| Term | Description | Learn more | +|--|--|--| +|**PaaS**| Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. |[What is PaaS?](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-paas/) ## R This glossary provides a brief description of important terms and concepts for t |**RBAC**| Azure role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. 
| [RBAC Overview](../role-based-access-control/overview.md)| |**RDP** | Remote Desktop Protocol (RDP) is a sophisticated technology that uses various techniques to perfect the server's remote graphics' delivery to the client device.| [RDP Bandwidth Requirements](../virtual-desktop/rdp-bandwidth.md)| |**Recommendations**|Recommendations secure your workloads with step-by-step actions that protect your workloads from known security risks.| [What are security policies, initiatives, and recommendations?](security-policy-concept.md)|-**Regulatory Compliance** | Regulatory compliance refers to the discipline and process of ensuring that a company follows the laws enforced by governing bodies in their geography or rules required | [Regulatory Compliance Overview](/azure/cloud-adoption-framework/govern/policy-compliance/regulatory-compliance) | +|**Regulatory Compliance** | Regulatory compliance refers to the discipline and process of ensuring that a company follows the laws enforced by governing bodies in their geography or rules required | [Regulatory Compliance Overview](/azure/cloud-adoption-framework/govern/policy-compliance/regulatory-compliance) | ## S | Term | Description | Learn more | |--|--|--|+|**SAS**| Shared access signature that provides secure delegated access to resources in your storage account.|[Storage SAS Overview (https://learn.microsoft.com/azure/storage/common/storage-sas-overview)| +|**SaaS**| Software as a service (SaaS) allows users to connect to and use cloud-based apps over the Internet. Common examples are email, calendaring, and office tools (such as Microsoft Office 365). SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud service provider.|[What is SaaS?](https://azure.microsoft.com/resources/cloud-computing-dictionary/what-is-saas/)| |**Secure Score**|Defender for Cloud continually assesses your cross-cloud resources for security issues. It then aggregates all the findings into a single score that represents your current security situation: the higher the score, the lower the identified risk level.|[Security posture for Microsoft Defender for Cloud](secure-score-security-controls.md)|+|**Security Alerts**|Security alerts are the notifications generated by Defender for Cloud and Defender for Cloud plans when threats are identified in your cloud, hybrid, or on-premises environment.|[What are security alerts?](../defender-for-cloud/alerts-overview.md#what-are-security-alerts)| |**Security Initiative** | A collection of Azure Policy Definitions, or rules, that are grouped together towards a specific goal or purpose. | [What are security policies, initiatives, and recommendations?](security-policy-concept.md) |**Security Policy**| An Azure rule about specific security conditions that you want controlled.|[Understanding Security Policies](security-policy-concept.md)|+|**SIEM**| Security Information and Event Management.| [What is SIEM?](https://www.microsoft.com/security/business/security-101/what-is-siem?rtc=1)| |**SOAR**| Security Orchestration Automated Response, a collection of software tools designed to collect data about security threats from multiple sources and respond to low-level security events without human assistance.| [SOAR](../sentinel/automation.md)| ## T |
defender-for-cloud | Devops Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md | Ensure that you've [onboarded your repositories](/azure/defender-for-cloud/quick ### Secret scan didn't run on my code -To ensure your code is scanned for secrets, make sure you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Defender for Cloud. +To ensure your code is scanned for secrets, make sure you've [onboarded your repositories](./quickstart-onboard-devops.md?branch=main) to Defender for Cloud. -In addition to onboarding resources, you must have the [Microsoft Security DevOps (MSDO) Azure DevOps extension](/azure/defender-for-cloud/azure-devops-extension?branch=main) configured for your pipelines. The extension runs secret scan along with other scanners. +In addition to onboarding resources, you must have the [Microsoft Security DevOps (MSDO) Azure DevOps extension](./azure-devops-extension.md?branch=main) configured for your pipelines. The extension runs secret scan along with other scanners. If no secrets are identified through scans, the total exposed secret for the resource shows `Healthy` in Defender for Cloud. Data is stored within the region your connector is created in. You should consid Defender for DevOps currently doesn't process or store your code, build, and audit logs. +Learn more about [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?LinkID=521839&clcid=0x9). + ### Is Exemptions capability available and tracked for app sec vulnerability management? Exemptions are not available for Defender for DevOps within Microsoft Defender for Cloud. |
defender-for-iot | Alert Engine Messages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md | Title: Microsoft Defender for IoT alert types and descriptions -description: This article provides a reference of all alerts that are generated by Microsoft Defender for IoT network sensors. + Title: Microsoft Defender for IoT alert reference +description: This article provides a reference of all alerts that are generated by Microsoft Defender for IoT network sensors, inclduing a list of all alert types and descriptions. Last updated 11/23/2022 -# Microsoft Defender for IoT alert types and descriptions +# Microsoft Defender for IoT alert reference -This article provides a reference of the [alerts](how-to-manage-cloud-alerts.md) that are generated by Microsoft Defender for IoT network sensors. You might use this reference to [map alerts into playbooks](iot-advanced-threat-monitoring.md#automate-response-to-defender-for-iot-alerts), [define forwarding rules](how-to-forward-alert-information-to-partners.md) on an OT network sensor, or other custom activity. +This article provides a reference of the [alerts](how-to-manage-cloud-alerts.md) that are generated by Microsoft Defender for IoT network sensors, inclduing a list of all alert types and descriptions. You might use this reference to [map alerts into playbooks](iot-advanced-threat-monitoring.md#automate-response-to-defender-for-iot-alerts), [define forwarding rules](how-to-forward-alert-information-to-partners.md) on an OT network sensor, or other custom activity. > [!IMPORTANT] > The **Alerts** page in the Azure portal is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
defender-for-iot | Configure Sensor Settings Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md | To define OT sensor settings, make sure that you have the following: - **Permissions**: - - To view settings that others have defined, sign in with a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription. + - To view settings that others have defined, sign in with a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription. - - To define or update settings, sign in with [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role. + - To define or update settings, sign in with [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role. For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md). Select **Add VLAN** to add more VLANs as needed. > [Manage sensors from the Azure portal](how-to-manage-sensors-on-the-cloud.md) > [!div class="nextstepaction"]-> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) +> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) |
defender-for-iot | How To Set Up Snmp Mib Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md | Supported SNMP versions are SNMP version 2 and version 3. The SNMP protocol util For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). -- To download the SNMP MIB file, make sure you can access the Azure portal as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user.+- To download the SNMP MIB file, make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/). Note that: ## Next steps -For more information, see [Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md) +For more information, see [Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md) |
defender-for-iot | Service Now Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrations/service-now-legacy.md | -# Tutorial: Integrate ServiceNow with Microsoft Defender for IoT (legacy) +# Integrate ServiceNow with Microsoft Defender for IoT (legacy) > [!NOTE] > A new [Operational Technology Manager](https://store.servicenow.com/sn_appstore_store.do#!/store/application/31eed0f72337201039e2cb0a56bf65ef/1.1.2?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%25253Btemplate%26q%3Doperational%2520technology%2520manager&sl=sh) integration is now available from the ServiceNow store. The new integration streamlines Microsoft Defender for IoT sensor appliances, OT assets, network connections, and vulnerabilities to ServiceNow's Operational Technology (OT) data model. |
defender-for-iot | Iot Advanced Threat Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-advanced-threat-monitoring.md | The following table describes the workbooks included in the **Microsoft Defender ## Automate response to Defender for IoT alerts -[Playbooks](/azure/sentinel/tutorial-respond-threats-playbook) are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively. +[Playbooks](../../sentinel/tutorial-respond-threats-playbook.md) are collections of automated remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response; it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively. The [Microsoft Defender for IoT](#install-the-defender-for-iot-solution) solution includes out-of-the-box playbooks that provide the following functionality: |
defender-for-iot | Iot Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/iot-solution.md | Before you start, make sure you have the following requirements on your workspac ## Connect your data from Defender for IoT to Microsoft Sentinel -Start by enabling the [Defender for IoT data connector](/azure/sentinel/data-connectors-reference.md#microsoft-defender-for-iot) to stream all your Defender for IoT events into Microsoft Sentinel. +Start by enabling the [Defender for IoT data connector](../../sentinel/data-connectors-reference.md#microsoft-defender-for-iot) to stream all your Defender for IoT events into Microsoft Sentinel. **To enable the Defender for IoT data connector**: The following types of updates generate new records in the **SecurityAlert** tab The [Microsoft Defender for IoT](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-unifiedmicrosoftsocforot?tab=Overview) solution is a set of bundled, out-of-the-box content that's configured specifically for Defender for IoT data, and includes analytics rules, workbooks, and playbooks. > [!div class="nextstepaction"]-> [Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md) +> [Install the **Microsoft Defender for IoT** solution](iot-advanced-threat-monitoring.md) |
defender-for-iot | Respond Ot Alert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/respond-ot-alert.md | Before you start, make sure that you have: - [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](iot-solution.md) - [Tutorial: Investigate and detect threats for IoT devices](iot-advanced-threat-monitoring.md) -- An alert details page open, accessed either from the Defender for IoT **Alerts** page in the [Azure portal](how-to-manage-cloud-alerts.md), a Defender for IoT [device details page](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory), or a Microsoft Sentinel [incident](/azure/sentinel/investigate-incidents).+- An alert details page open, accessed either from the Defender for IoT **Alerts** page in the [Azure portal](how-to-manage-cloud-alerts.md), a Defender for IoT [device details page](how-to-manage-device-inventory-for-organizations.md#view-the-device-inventory), or a Microsoft Sentinel [incident](../../sentinel/investigate-incidents.md). ## Investigate an alert from the Azure portal For example: :::image type="content" source="media/respond-ot-alert/change-alert-status.png" alt-text="Screenshot of changing an alert status on the Azure portal."::: > [!IMPORTANT]-> If you're integrating with Microsoft Sentinel, make sure to manage your alert status only from the [incident](/azure/sentinel/investigate-incidents) in Microsoft Sentinel. Alerts statuses are not synchronized from Defender for IoT to Microsoft Sentinel. +> If you're integrating with Microsoft Sentinel, make sure to manage your alert status only from the [incident](../../sentinel/investigate-incidents.md) in Microsoft Sentinel. Alerts statuses are not synchronized from Defender for IoT to Microsoft Sentinel. After updating the status, check the alert details page for the following details to aid in your investigation: For high severity alerts, you may want to take action immediately. ## Next steps > [!div class="nextstepaction"]-> [Enhance security posture with security recommendations](recommendations.md) --+> [Enhance security posture with security recommendations](recommendations.md) |
defender-for-iot | Tutorial Clearpass | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md | Title: Integrate ClearPass with Microsoft Defender for IoT description: In this tutorial, you will learn how to integrate Microsoft Defender for IoT with ClearPass. Last updated 02/07/2022-+ -# Tutorial: Integrate ClearPass with Microsoft Defender for IoT +# Integrate ClearPass with Microsoft Defender for IoT This tutorial will help you learn how to integrate ClearPass Policy Manager (CPPM) with Microsoft Defender for IoT. |
defender-for-iot | Tutorial Cyberark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-cyberark.md | Title: Integrate CyberArk with Microsoft Defender for IoT description: In this tutorial, you will learn how to integrate Microsoft Defender for IoT with CyberArk. Last updated 02/08/2022-+ -# Tutorial: Integrate CyberArk with Microsoft Defender for IoT +# Integrate CyberArk with Microsoft Defender for IoT This tutorial will help you learn how to integrate, and use CyberArk with Microsoft Defender for IoT. |
defender-for-iot | Tutorial Forescout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-forescout.md | Title: Integrate Forescout with Microsoft Defender for IoT description: In this tutorial, you'll learn how to integrate Microsoft Defender for IoT with Forescout. Last updated 02/08/2022-+ -# Tutorial: Integrate Forescout with Microsoft Defender for IoT +# Integrate Forescout with Microsoft Defender for IoT > [!Note] > References to CyberX refer to Microsoft Defender for IoT. |
defender-for-iot | Tutorial Fortinet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-fortinet.md | Title: Integrate Fortinet with Microsoft Defender for IoT description: In this article, you'll learn how to integrate Microsoft Defender for IoT with Fortinet. Last updated 01/01/2023-+ -# Tutorial: Integrate Fortinet with Microsoft Defender for IoT +# Integrate Fortinet with Microsoft Defender for IoT This tutorial will help you learn how to integrate, and use Fortinet with Microsoft Defender for IoT. |
defender-for-iot | Tutorial Palo Alto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-palo-alto.md | Last updated 01/01/2023 -# Tutorial: Integrate Palo-Alto with Microsoft Defender for IoT +# Integrate Palo-Alto with Microsoft Defender for IoT This tutorial will help you learn how to integrate, and use Palo Alto with Microsoft Defender for IoT. |
defender-for-iot | Tutorial Splunk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-splunk.md | Title: Integrate Splunk with Microsoft Defender for IoT description: In this tutorial, learn how to integrate Splunk with Microsoft Defender for IoT. Last updated 02/07/2022-+ -# Tutorial: Integrate Splunk with Microsoft Defender for IoT +# Integrate Splunk with Microsoft Defender for IoT This tutorial will help you learn how to integrate, and use Splunk with Microsoft Defender for IoT. |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | For more information, see [Define and view OT sensor settings from the Azure por ### Alerts GA in the Azure portal -The **Alerts** page in the Azure portal is now out for General Availability. Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events detected in your network. Alerts are triggered when OT or Enterprise IoT network sensors, or the [Defender for IoT micro agent](/azure/defender-for-iot/device-builders/), detect changes or suspicious activity in network traffic that need your attention. +The **Alerts** page in the Azure portal is now out for General Availability. Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events detected in your network. Alerts are triggered when OT or Enterprise IoT network sensors, or the [Defender for IoT micro agent](../device-builders/index.yml), detect changes or suspicious activity in network traffic that need your attention. Specific alerts triggered by the Enterprise IoT sensor currently remain in public preview. The following Defender for IoT options and configurations have been moved, remov ## Next steps -[Getting started with Defender for IoT](getting-started.md) +[Getting started with Defender for IoT](getting-started.md) |
dev-box | Quickstart Configure Dev Box Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md | Before users can create dev boxes based on the dev box pools in a project, you m 1. Select **Add** > **Add role assignment**. -1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal). +1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). |Setting |Value | ||| In this quickstart, you created a dev box project and the resources necessary to To learn about how to create and connect to a dev box, advance to the next quickstart: > [!div class="nextstepaction"]-> [Create a dev box](./quickstart-create-dev-box.md) --+> [Create a dev box](./quickstart-create-dev-box.md) |
dns | Dns Private Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md | Azure DNS Private Resolver is available in the following regions: | South Central US | France Central | Korea Central | | North Central US | Sweden Central | South Africa North| | West Central US | Switzerland North| Australia East |-| West US 3 | | Central India | +| West US 2 | | Central India | +| West US 3 | | | | Canada Central | | | | Brazil South | | | |
dns | Private Dns Privatednszone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md | To understand how many private DNS zones you can create in a subscription and ho | | | | |azclient.ms | azclient.us | azclient.cn |azure.com | azure.us | azure.cn- |azure-api.net | azure-api.us | azure-api.cn |cloudapp.net | usgovcloudapp.net | chinacloudapp.cn |core.windows.net | core.usgovcloudapi.net | core.chinacloudapi.cn |microsoft.com | microsoft.us | microsoft.cn |
event-hubs | Event Hubs Java Get Started Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send.md | Title: Send or receive events from Azure Event Hubs using Java (latest) -description: This article provides a walkthrough of creating a Java application that sends/receives events to/from Azure Event Hubs using the latest azure-messaging-eventhubs package. +description: This article provides a walkthrough of creating a Java application that sends/receives events to/from Azure Event Hubs. Previously updated : 12/21/2022 Last updated : 02/10/2023 ms.devlang: java-+ -# Use Java to send events to or receive events from Azure Event Hubs (azure-messaging-eventhubs) +# Use Java to send events to or receive events from Azure Event Hubs This quickstart shows how to send events to and receive events from an event hub using the **azure-messaging-eventhubs** Java package. This section shows you how to create a Java application to send events an event ### Add reference to Azure Event Hubs library -First, create a new **Maven** project for a console/shell application in your favorite Java development environment. Update the `pom.xml` file with the following dependency. The Java client library for Event Hubs is available in the [Maven Central Repository](https://search.maven.org/search?q=a:azure-messaging-eventhubs). +First, create a new **Maven** project for a console/shell application in your favorite Java development environment. Update the `pom.xml` file as follows. The Java client library for Event Hubs is available in the [Maven Central Repository](https://search.maven.org/search?q=a:azure-messaging-eventhubs). ++### [Passwordless (Recommended)](#tab/passwordless) ```xml <dependency> First, create a new **Maven** project for a console/shell application in your fa <artifactId>azure-messaging-eventhubs</artifactId> <version>5.15.0</version> </dependency>+ <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-identity</artifactId> + <version>1.8.0</version> + <scope>compile</scope> + </dependency> ``` +### [Connection String](#tab/connection-string) ++```xml + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-messaging-eventhubs</artifactId> + <version>5.15.0</version> + </dependency> +``` +++ > [!NOTE] > Update the version to the latest version published to the Maven repository. +### Authenticate the app to Azure ++ ### Write code to send messages to the event hub +## [Passwordless (Recommended)](#tab/passwordless) +Add a class named `Sender`, and add the following code to the class: ++> [!IMPORTANT] +> - Update `<NAMESPACE NAME>` with the name of your Event Hubs namespace. +> - Update `<EVENT HUB NAME>` with the name of your event hub. ++```java +package ehubquickstart; ++import com.azure.messaging.eventhubs.*; +import java.util.Arrays; +import java.util.List; ++import com.azure.identity.*; ++public class SenderAAD { ++ // replace <NAMESPACE NAME> with the name of your Event Hubs namespace. + // Example: private static final String namespaceName = "contosons.servicebus.windows.net"; + private static final String namespaceName = "<NAMESPACE NAME>.servicebus.windows.net"; ++ // Replace <EVENT HUB NAME> with the name of your event hug. + // Example: private static final String eventHubName = "ordersehub"; + private static final String eventHubName = "<EVENT HUB NAME>"; ++ public static void main(String[] args) { + publishEvents(); + } + /** + * Code sample for publishing events. 
+ * @throws IllegalArgumentException if the EventData is bigger than the max batch size. + */ + public static void publishEvents() { + // create a token using the default Azure credential + DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() + .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD) + .build(); ++ // create a producer client + EventHubProducerClient producer = new EventHubClientBuilder() + .fullyQualifiedNamespace(namespaceName) + .eventHubName(eventHubName) + .credential(credential) + .buildProducerClient(); ++ // sample events in an array + List<EventData> allEvents = Arrays.asList(new EventData("Foo"), new EventData("Bar")); ++ // create a batch + EventDataBatch eventDataBatch = producer.createBatch(); ++ for (EventData eventData : allEvents) { + // try to add the event from the array to the batch + if (!eventDataBatch.tryAdd(eventData)) { + // if the batch is full, send it and then create a new batch + producer.send(eventDataBatch); + eventDataBatch = producer.createBatch(); ++ // Try to add that event that couldn't fit before. + if (!eventDataBatch.tryAdd(eventData)) { + throw new IllegalArgumentException("Event is too large for an empty batch. Max size: " + + eventDataBatch.getMaxSizeInBytes()); + } + } + } + // send the last batch of remaining events + if (eventDataBatch.getCount() > 0) { + producer.send(eventDataBatch); + } + producer.close(); + } +} +``` ++## [Connection String](#tab/connection-string) Add a class named `Sender`, and add the following code to the class: > [!IMPORTANT] Add a method named `publishEvents` to the `Sender` class: producer.close(); } ```+ Build the program, and ensure that there are no errors. You'll run this program after you run the receiver program. Follow these steps to create an Azure Storage account. 1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal) 2. [Create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container)-3. [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md) +3. Authenticate to the blob container + +## [Passwordless (Recommended)](#tab/passwordless) - Note down the **connection string** and the **container name**. You'll use them in the receive code. +## [Connection String](#tab/connection-string) ++[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md) ++Note down the **connection string** and the **container name**. You use them in the receive code. ++ ### Add Event Hubs libraries to your Java project +## [Passwordless (Recommended)](#tab/passwordless) ++Add the following dependencies in the pom.xml file. 
++- [azure-messaging-eventhubs](https://search.maven.org/search?q=a:azure-messaging-eventhubs) +- [azure-messaging-eventhubs-checkpointstore-blob](https://search.maven.org/search?q=a:azure-messaging-eventhubs-checkpointstore-blob) +- [azure-identity](https://search.maven.org/search?q=a:azure-identity) ++```xml + <dependencies> + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-messaging-eventhubs</artifactId> + <version>5.15.0</version> + </dependency> + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-messaging-eventhubs-checkpointstore-blob</artifactId> + <version>1.16.1</version> + </dependency> + <dependency> + <groupId>com.azure</groupId> + <artifactId>azure-identity</artifactId> + <version>1.8.0</version> + <scope>compile</scope> + </dependency> + </dependencies> +``` +++### [Connection String](#tab/connection-string) + Add the following dependencies in the pom.xml file. - [azure-messaging-eventhubs](https://search.maven.org/search?q=a:azure-messaging-eventhubs) Add the following dependencies in the pom.xml file. </dependency> </dependencies> ```++++## [Passwordless (Recommended)](#tab/passwordless) ++1. Add the following `import` statements at the top of the Java file. ++ ```java + import com.azure.messaging.eventhubs.*; + import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore; + import com.azure.messaging.eventhubs.models.*; + import com.azure.storage.blob.*; + import java.util.function.Consumer; + + import com.azure.identity.*; + ``` +2. Create a class named `Receiver`, and add the following string variables to the class. Replace the placeholders with the correct values. ++ > [!IMPORTANT] + > Replace the placeholders with the correct values. + > - `<NAMESPACE NAME>` with the name of your Event Hubs namespace. + > - `<EVENT HUB NAME>` with the name of your event hub in the namespace. ++ ```java + private static final String namespaceName = "<NAMESPACE NAME>.servicebus.windows.net"; + private static final String eventHubName = "<EVENT HUB NAME>"; + ``` +3. Add the following `main` method to the class. ++ > [!IMPORTANT] + > Replace the placeholders with the correct values. + > - `<STORAGE ACCOUNT NAME>` with the name of your Azure Storage account. + > - `<CONTAINER NAME>` with the name of the blob container in the storage account ++ ```java + // create a token using the default Azure credential + DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() + .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD) + .build(); ++ // Create a blob container client that you use later to build an event processor client to receive and process events + BlobContainerAsyncClient blobContainerAsyncClient = new BlobContainerClientBuilder() + .credential(credential) + .endpoint("https://<STORAGE ACCOUNT NAME>.blob.core.windows.net") + .containerName("<CONTAINER NAME>") + .buildAsyncClient(); + + // Create an event processor client to receive and process events and errors. 
+ EventProcessorClient eventProcessorClient = new EventProcessorClientBuilder() + .fullyQualifiedNamespace(namespaceName) + .eventHubName(eventHubName) + .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME) + .processEvent(PARTITION_PROCESSOR) + .processError(ERROR_HANDLER) + .checkpointStore(new BlobCheckpointStore(blobContainerAsyncClient)) + .credential(credential) + .buildEventProcessorClient(); ++ System.out.println("Starting event processor"); + eventProcessorClient.start(); ++ System.out.println("Press enter to stop."); + System.in.read(); ++ System.out.println("Stopping event processor"); + eventProcessorClient.stop(); + System.out.println("Event processor stopped."); ++ System.out.println("Exiting process"); + ``` ++## [Connection String](#tab/connection-string) 1. Add the following **import** statements at the top of the Java file. Add the following dependencies in the pom.xml file. System.out.println("Exiting process"); } ```++ 4. Add the two helper methods (`PARTITION_PROCESSOR` and `ERROR_HANDLER`) that process events and errors to the `Receiver` class. ```java |
event-hubs | Event Hubs Kafka Connect Debezium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md | To complete this walk through, you'll require: An Event Hubs namespace is required to send and receive from any Event Hubs service. See [Creating an event hub](event-hubs-create.md) for instructions to create a namespace and an event hub. Get the Event Hubs connection string and fully qualified domain name (FQDN) for later use. For instructions, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). ## Set up and configure Azure Database for PostgreSQL-[Azure Database for PostgreSQL](../postgresql/overview.md) is a relational database service based on the community version of open-source PostgreSQL database engine, and is available in two deployment options: Single Server and Hyperscale (Citus). [Follow these instructions](../postgresql/quickstart-create-server-database-portal.md) to create an Azure Database for PostgreSQL server using the Azure portal. +[Azure Database for PostgreSQL](../postgresql/overview.md) is a relational database service based on the community version of open-source PostgreSQL database engine, and is available in three deployment options: Single Server, Flexible Server and Cosmos DB for PostgreSQL. [Follow these instructions](../postgresql/quickstart-create-server-database-portal.md) to create an Azure Database for PostgreSQL server using the Azure portal. ## Setup and run Kafka Connect This section will cover the following topics: |
event-hubs | Event Hubs Python Get Started Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-python-get-started-send.md | If you don't see events in the receiver window or the code reports an error, try * If you see authorization errors with *recv.py* when accessing storage, make sure you followed the steps in [Create an Azure storage account and a blob container](#create-an-azure-storage-account-and-a-blob-container) and assigned the **Storage Blob Data Contributor** role to the service principal. -* If you receive events with different partition IDs, this result is expected. Partitions are a data organization mechanism that relates to the downstream parallelism required in consuming applications. The number of partitions in an event hub directly relates to the number of concurrent readers you expect to have. For more information, see [Learn more about partitions](/azure/event-hubs/event-hubs-features#partitions). +* If you receive events with different partition IDs, this result is expected. Partitions are a data organization mechanism that relates to the downstream parallelism required in consuming applications. The number of partitions in an event hub directly relates to the number of concurrent readers you expect to have. For more information, see [Learn more about partitions](./event-hubs-features.md#partitions). ## Next steps In this quickstart, you've sent and received events asynchronously. To learn how to send and receive events synchronously, go to the [GitHub sync_samples page](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples/sync_samples). -For all the samples (both synchronous and asynchronous) on GitHub, go to [Azure Event Hubs client library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples). +For all the samples (both synchronous and asynchronous) on GitHub, go to [Azure Event Hubs client library for Python samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventhub/azure-eventhub/samples). |
external-attack-surface-management | Understanding Asset Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md | This section is comprised of high-level information that is key to understanding

|--|--|--|
| Asset Name | The name of an asset. | All |
| UUID | This 128-bit label represents the universally unique identifier (UUID) for the | All |-| Added to inventory | The date than an asset was added to inventory, whether automatically to the "Approved Inventory" state or in another state (e.g. "Candidate"). | All | 
+| Added to inventory | The date that an asset was added to inventory, whether automatically to the "Approved Inventory" state or in another state (e.g. "Candidate"). | All | 
| Status | The status of the asset within the RiskIQ system. Options include Approved Inventory, Candidate, Dependencies, or Requires Investigation. | All | 
| First seen (Global Security Graph) | The date that Microsoft first scanned the asset and added it to our comprehensive Global Security Graph. | All | 
| Last seen (Global Security Graph) | The date that Microsoft most recently scanned the asset. | All | 
| Discovered on | Indicates the creation date of the Discovery Group that detected the asset. | All |-| Last updated | The date that the asset was last updated, whether by new data discovered in a scan or manual user actions (e.g. a state change). | All | 
+| Last updated | The date that the asset was last updated by a manual user actions (e.g. a state change, asset removal). | All | 
| Country | The country of origin detected for this asset. | All | 
| State/Province | The state or province of origin detected for this asset. | All | 
| City | The city of origin detected for this asset. | All | |
firewall | Integrate With Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md | There's no double NAT with this architecture. Azure Firewall instances send th

> [!NOTE]
> Using Azure Virtual Network NAT is currently incompatible with Azure Firewall if you have deployed your [Azure Firewall across multiple availability zones](deploy-availability-zone-powershell.md).
>-> In addition, Azure Virtual Network NAT integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture refer to the [NAT gateway and Azure Firewall integration tutorial](/azure/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md). 
+> In addition, Azure Virtual Network NAT integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture refer to the [NAT gateway and Azure Firewall integration tutorial](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md). 

## Associate a NAT gateway with an Azure Firewall subnet - Azure PowerShell

az network vnet subnet update --name AzureFirewallSubnet --vnet-name nat-vnet -- 

## Next steps

- [Design virtual networks with NAT gateway](../virtual-network/nat-gateway/nat-gateway-resource.md)
- [Integrate NAT gateway with Azure Firewall in a hub and spoke network](/azure/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall)+- [Integrate NAT gateway with Azure Firewall in a hub and spoke network](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md) |
frontdoor | Create Front Door Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md | az afd endpoint create \ --enabled-state Enabled ``` -For more information about endpoints in Front Door, please read [Endpoints in Azure Front Door](/azure/frontdoor/endpoint). +For more information about endpoints in Front Door, please read [Endpoints in Azure Front Door](./endpoint.md). ### Create an origin group az afd origin create \ --https-port 443 ``` -For more information about origins, origin groups and health probes, please read [Origins and origin groups in Azure Front Door](/azure/frontdoor/origin) +For more information about origins, origin groups and health probes, please read [Origins and origin groups in Azure Front Door](./origin.md) ### Add a route az afd route create \ --link-to-default-domain Enabled ``` -To learn more about routes in Azure Front Door, please read [Traffic routing methods to origin](/azure/frontdoor/routing-methods). +To learn more about routes in Azure Front Door, please read [Traffic routing methods to origin](./routing-methods.md). ## Create a new security policy az network front-door waf-policy create \ > [!NOTE] > If you select `Detection` mode, your WAF doesn't block any requests. -To learn more about WAF policy settings for Front Door, please read [Policy settings for Web Application Firewall on Azure Front Door](/azure/web-application-firewall/afds/waf-front-door-policy-settings). +To learn more about WAF policy settings for Front Door, please read [Policy settings for Web Application Firewall on Azure Front Door](../web-application-firewall/afds/waf-front-door-policy-settings.md). ### Assign managed rules to the WAF policy az network front-door waf-policy managed-rules add \ --version 1.0 ``` -To learn more about managed rules in Front Door, please read [Web Application Firewall DRS rule groups and rules](/azure/web-application-firewall/afds/waf-front-door-drs). +To learn more about managed rules in Front Door, please read [Web Application Firewall DRS rule groups and rules](../web-application-firewall/afds/waf-front-door-drs.md). ### Create the security policy az group delete --name myRGFD Advance to the next article to learn how to add a custom domain to your Front Door. > [!div class="nextstepaction"]-> [Add a custom domain](standard-premium/how-to-add-custom-domain.md) +> [Add a custom domain](standard-premium/how-to-add-custom-domain.md) |
healthcare-apis | How To Enable Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md | If you choose to include your Log Analytics workspace as a destination option fo > [!WARNING] > The above custom query is not saved and will have to be recreated if you leave your Log Analytics workspace without saving the custom query. >-> To learn how to save a custom query in Log Analytics, see [Save a query in Azure Monitor Log Analytics](/azure/azure-monitor/logs/save-query) +> To learn how to save a custom query in Log Analytics, see [Save a query in Azure Monitor Log Analytics](../../azure-monitor/logs/save-query.md) > [!TIP] > To learn how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md). The MedTech service comes with pre-defined queries that can be used anytime in y > [!WARNING] > Any changes that you've made to the pre-defined queries are not saved and will have to be recreated if you leave your Log Analytics workspace without saving custom changes you've made to the pre-defined queries. >-> To learn how to save a query in Log Analytics, see [Save a query in Azure Monitor Log Analytics](/azure/azure-monitor/logs/save-query) +> To learn how to save a query in Log Analytics, see [Save a query in Azure Monitor Log Analytics](../../azure-monitor/logs/save-query.md) > [!TIP] > To learn how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md). In this article, you learned how to enable the diagnostics settings for the MedT To learn about the MedTech service frequently asked questions (FAQs), see > [!div class="nextstepaction"]-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) +> [Frequently asked questions about the MedTech service](frequently-asked-questions.md) |
healthcare-apis | Understand Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md | The MedTech service device message data processing follows these steps and in th ## Ingest Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub (`device message event hub`) and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device messages are processed. -The device message event hub uses the MedTech service's [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](/azure/role-based-access-control/overview) for secure access to the device message event hub. +The device message event hub uses the MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for secure access to the device message event hub. > [!NOTE] > JSON is the only supported format at this time for device message data. The MedTech service buffers the FHIR Observations resources created during the t ## Persist Persist is the final stage where the FHIR Observation resources from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation resource is new, it's created in the FHIR service. If the FHIR Observation resource already existed, it gets updated in the FHIR service. -The FHIR service uses the MedTech service's [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](/azure/role-based-access-control/overview) for secure access to the FHIR service. +The FHIR service uses the MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for secure access to the FHIR service. ## Next steps To learn how to configure the MedTech service device and FHIR destination mappin > [!div class="nextstepaction"] > [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md) -FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. +FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
iot-central | Concepts Device Implementation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md | Title: Device implementation in Azure IoT Central | Microsoft Docs + Title: Device implementation + description: This article introduces the key concepts and best practices for implementing a device that connects to your IoT Central application. Previously updated : 03/04/2022 Last updated : 02/13/2023 az iot central device manual-failover \ > [!TIP] > To find the **Application ID**, navigate to **Application > Management** in your IoT Central application. -If the command succeeds, you see output that looks like the following: +If the command succeeds, you see output that looks like the following example: ```output Command group 'iot central device' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus |
iot-central | Concepts Faq Scalability Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-scalability-availability.md | Title: Azure IoT Central scalability and high availability | Microsoft Docs + Title: Scalability and high availability + description: This article describes how IoT Central automatically scales to handle more devices, its high availability disaster recovery capabilities. Previously updated : 03/01/2022 Last updated : 02/13/2023 IoT Central automatically scales its IoT hubs based on the load profiles in your ## High availability and disaster recovery -For highly available device connectivity, an IoT Central application always have at least two IoT hubs. For exceptions to to this rule, see [Limitations](#limitations). The number of hubs can grow or shrink as IoT Central scales the application in response to changes in the load profile. +For highly available device connectivity, an IoT Central application always has at least two IoT hubs. For exceptions to this rule, see [Limitations](#limitations). The number of hubs can grow or shrink as IoT Central scales the application in response to changes in the load profile. IoT Central also uses [availability zones](../../availability-zones/az-overview.md#availability-zones) to make various services it uses highly available. |
iot-central | Howto Manage Devices Individually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md | Title: Manage devices individually in your Azure IoT Central application | Microsoft Docs + Title: Manage devices individually in your application + description: Learn how to manage devices individually in your Azure IoT Central application. Monitor, manage, create, delete, and update devices. Previously updated : 03/02/2022 Last updated : 02/13/2023 To view an individual device: 1. In the right-hand pane of the **Devices** page, you see a list of devices accessible to your organization created from that device template: - :::image type="content" source="media/howto-manage-devices-individually/device-list.png" alt-text="Screenshot showing the device list." lightbox="media/howto-manage-devices-individually/device-list.png"::: + :::image type="content" source="media/howto-manage-devices-individually/device-list.png" alt-text="Screenshot that shows the list of Thermostat devices." lightbox="media/howto-manage-devices-individually/device-list.png"::: Choose an individual device to see the device details page for that device. When a device connects to your IoT Central application, its device status change 1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must reset the device before it can resume sending data. When an operator unblocks a device the status returns to its previous value, **Registered** or **Provisioned**. -1. If the device status is **Waiting for Approval**, it means the **Auto approve** option is disabled. An operator must explicitly approve a device before it starts sending data. Devices not registered manually on the **Devices** page, but connected with valid credentials will have the device status **Waiting for Approval**. Operators can approve these devices from the **Devices** page using the **Approve** button. +1. If the device status is **Waiting for Approval**, it means the **Auto approve** option is disabled. An operator must explicitly approve a device before it starts sending data. Devices not registered manually on the **Devices** page, but connected with valid credentials have the device status **Waiting for Approval**. Operators can approve these devices from the **Devices** page using the **Approve** button. 1. If the device status is **Unassigned**, it means the device connecting to IoT Central isn't assigned to a device template. This situation typically happens in the following scenarios: |
iot-central | Overview Iot Central Developer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md | Title: Azure IoT Central device connectivity guide | Microsoft Docs -description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how IoT devices connect to your IoT Central application. After a device connects, it uses telemetry to send streaming data and properties to report device state. Iot Central can set device state using writable properties and call commands on a device. + Title: Device connectivity guide ++description: This guide describes how IoT devices connect to and communicate with your IoT Central application. The article describes telemetry, properties, and commands. Previously updated : 03/02/2022 Last updated : 02/13/2023 - # This article applies to device developers. |
iot-central | Tutorial Continuous Patient Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md | Title: Tutorial - Azure IoT continuous patient monitoring | Microsoft Docs -description: This tutorial shows you how to deploy and use the continuous patient monitoring application template for IoT Central. -- Previously updated : 12/23/2021+ Title: "Tutorial: Azure IoT continuous patient monitoring" ++description: In this tutorial, you deploy and use the continuous patient monitoring application template for IoT Central. ++ Last updated : 02/10/2023 -# Tutorial: Deploy and walkthrough the continuous patient monitoring application template +# Tutorial: Deploy and review the continuous patient monitoring application template -In the healthcare IoT space, continuous patient monitoring is one of the key enablers of reducing the risk of readmissions, managing chronic diseases more effectively, and improving patient outcomes. Continuous patient monitoring can be split into two major categories: +In the healthcare IoT space, continuous patient monitoring is a key enabler for reducing the risk of readmissions, managing chronic diseases more effectively, and improving patient outcomes. Continuous patient monitoring can be split into two categories: -1. **In-patient monitoring**: Using medical wearables and other devices in the hospital, care teams can monitor patient vital signs and medical conditions without having to send a nurse to check up on a patient multiple times a day. Care teams can understand the moment that a patient needs critical attention through notifications and prioritizes their time effectively. -1. **Remote patient monitoring**: By using medical wearables and patient reported outcomes (PROs) to monitor patients outside of the hospital, the risk of readmission can be lowered. Data from chronic disease patients and rehabilitation patients can be collected to ensure that patients are adhering to care plans and that alerts of patient deterioration can be surfaced to care teams before they become critical. +1. **In-patient monitoring**: Care teams use medical wearables and other devices to monitor patient vital signs and medical conditions without having to send a nurse to check up on a patient multiple times a day. Care teams can receive notifications when a patient needs critical attention and prioritize their time effectively. +1. **Remote patient monitoring**: Care teams use medical wearables to monitor patients outside of the hospital to lower the risk of readmission. Data collected from chronic disease patients and rehabilitation patients can help to ensure that patients are adhering to care plans and that alerts of patient deterioration are surfaced to care teams before they become critical. The application template enables you to: - Seamlessly connect different kinds of medical wearables to an IoT Central instance. - Monitor and manage the devices to ensure they remain healthy.-- Create custom rules around device data to trigger appropriate alerts.+- Create custom rules that use device data to trigger alerts. - Export your patient health data to the Azure API for FHIR, a compliant data store. - Export the aggregated insights into existing or new business applications. The application template enables you to: ### Bluetooth Low Energy (BLE) medical devices (1) -Many medical wearables used in healthcare IoT solutions are BLE devices. 
These devices can't communicate directly to the cloud and need to use a gateway to exchange data with your cloud solution. This architecture uses a mobile phone application as the gateway. +Many medical wearables used in healthcare IoT solutions are BLE devices. These devices can't communicate directly to the cloud and require a gateway to exchange data with your cloud solution. This architecture uses a mobile phone application as the gateway. ### Mobile phone gateway (2) -The mobile phone application's primary function is to collect BLE data from medical devices and communicate it to IoT Central. The app also guides patients through device setup and lets them view their personal health data. Other solutions could use a tablet gateway or a static gateway in a hospital room. An open-source sample mobile application is available for Android and iOS to use as a starting point for your application development. To learn more, see the [Continuous patient monitoring sample mobile app on GitHub](https://github.com/iot-for-all/iotc-cpm-sample). +The primary function of the mobile phone application is to collect BLE data from medical devices and send it to IoT Central. The application also guides patients through device setup and lets them view their personal health data. Other solutions could use a tablet gateway or a static gateway in a hospital room. An open-source sample mobile application is available for Android and iOS to use as a starting point for your application development. To learn more, see the [Continuous patient monitoring sample mobile app on GitHub](https://github.com/iot-for-all/iotc-cpm-sample). ### Export to Azure API for FHIR® (3) -Azure IoT Central is HIPAA-compliant and HITRUST® certified. You can also send patient health data to other services using the [Azure API for FHIR](../../healthcare-apis/fhir/overview.md). Azure API for FHIR is a standards-based API for clinical health data. The [Azure IoT connector for FHIR](../../healthcare-apis/fhir/iot-fhir-portal-quickstart.md) lets you use the Azure API for FHIR as a continuous data export destination from IoT Central. +Azure IoT Central is HIPAA-compliant and HITRUST® certified. You can send patient health data to other services using the [Azure API for FHIR](../../healthcare-apis/fhir/overview.md). Azure API for FHIR is a standards-based API for clinical health data. The [Azure IoT connector for FHIR](../../healthcare-apis/fhir/iot-fhir-portal-quickstart.md) lets you use the Azure API for FHIR as a continuous data export destination from IoT Central. ### Machine learning (4) Use machine learning models with your FHIR data to generate insights and support ### Provider dashboard (5) -Use the Azure API for FHIR data to build a patient insights dashboard or integrate it directly into an electronic medical record used by care teams. Care teams can use the dashboard to assist patients and identify early warning signs of deterioration. To learn more, see the [Build a Power BI provider dashboard](tutorial-health-data-triage.md) tutorial. +Use the Azure API for FHIR data to build a patient insights dashboard or integrate it directly into an electronic medical record used by care teams. Care teams can use the dashboard to assist patients and identify early warning signs of deterioration. In this tutorial, you learn how to: The following sections walk you through the key features of the application: ### Dashboards -After deploying the application template, you'll first land on the **Lamna in-patient monitoring dashboard**. 
Lamna Healthcare is a fictitious hospital system that contains two hospitals: Woodgrove Hospital and Burkville Hospital. On the Woodgrove Hospital operator dashboard, you can: +After you deploy the application template, navigate to the **Dashboards**. There are two dashboards. On the **Lamna in-patient monitoring dashboard** for the Woodgrove hospital, you can: * See device telemetry and properties such as the **battery level** of your device or its **connectivity** status. After deploying the application template, you'll first land on the **Lamna in-pa * Change the **patient status** of your device to indicate if the device is being used for an in-patient or remote scenario. -You can also select **Go to remote patient dashboard** to see the Burkville Hospital operator dashboard. This dashboard contains a similar set of actions, telemetry, and information. You can also see multiple devices in use and choose to **update the firmware** on each. +On the **Lamna remote patient monitoring dashboard** for the Burkville hospital, you can see a similar set of actions, telemetry, and information. You can also see multiple devices in use and choose to **update the firmware** on each. ### Device templates -If you select **Device templates**, you see the two device types in the template: +Navigate to **Device templates** to see the two device types in the application template: -- **Smart Vitals Patch**: This device represents a patch that measures various vital signs. It's used for monitoring patients in and outside the hospital. If you select the template, you see that the patch sends both device data such as battery level and device temperature, and patient health data such as respiratory rate and blood pressure.+- **Smart Vitals Patch**: This device represents a patch that measures various vital signs. It's used for monitoring patients in and outside the hospital. The patch sends both device data such as battery level and device temperature, and patient health data such as respiratory rate and blood pressure. -- **Smart Knee Brace**: This device represents a knee brace that patients use when recovering from a knee replacement surgery. If you select this template, you see capabilities such as device data, range of motion, and acceleration.+- **Smart Knee Brace**: This device represents a knee brace that patients use when recovering from a knee replacement surgery. The knee brace sends device data such as range of motion and acceleration. ### Device groups Use device groups to logically group a set of devices and then run bulk queries or operations on them. -If you select the device groups tab, you see a default device group for each device template in the application. There are also created two additional sample device groups called **Provision devices** and **Devices with outdated firmware**. You can use these sample device groups as inputs to run some of the [Jobs](#jobs) in the application. +Navigate to **Device groups** to see the default device groups for each device template. There are also two more device groups called **Provisioned devices** and **Devices with outdated firmware**. The [Jobs](#jobs) in the application use these device groups to run operations on sets of devices. ### Rules -If you select **Rules**, you see the three rules in the template: +Navigate to **Rules** to see the three rules in the application template: - **Brace temperature high**: This rule triggers when the device temperature of the smart knee brace is greater than 95°F over a 5-minute window. 
Use this rule to alert the patient and care team, and cool the device down remotely. If you select **Rules**, you see the three rules in the template: - **Patch battery low**: This rule is triggers when the battery level on the device goes below 10%. Use this rule to trigger a notification to the patient to charge their device. ### Jobs -Jobs let you run bulk operations on a set of devices, using [device groups](#device-groups) as the input. The application template has two sample jobs that an operator can run: +Use jobs to run bulk operations on a set of devices, using [device groups](#device-groups) to select the devices. The application template has two sample jobs that an operator can run: * **Update knee brace firmware**: This job finds devices in the device group **Devices with outdated firmware** and runs a command to update those devices to the latest firmware version. This sample job assumes that the devices can handle an **update** command and then fetch the firmware files from the cloud. -* **Re-provision devices**: You have a set of devices that have recently been returned to the hospital. This job finds devices in the device group **Provision devices** and runs a command to re-provision them for the next set of patients. +* **Re-provision devices**: You have a set of devices that have recently been returned to the hospital. This job finds devices in the device group **Provisioned devices** and runs a command to reprovision them for the next set of patients. ### Devices -Select the **Devices** tab and then select an instance of the **Smart Knee Brace**. There are three views to explore information about the particular device that you've selected. These views are created and published when you build the device template for your device. therefore, these views are consistent across all the devices that you connect or simulate. +Navigate to **Devices** and then select a **Smart Knee Brace** instance. There are three views to explore information about the particular device that you've selected. These views are created and published when you build the device template for your device. These views are consistent across all the devices of that type. The **Dashboard** view gives an overview of operator-oriented telemetry and properties from the device. The **Properties** tab lets you edit cloud properties and read/write device prop The **Commands** tab lets you run commands on the device. ## Clean up resources The **Commands** tab lets you run commands on the device. ## Next steps -Advance to the next article to learn how to create a provider dashboard that connects to your IoT Central application. +A suggested next step is to learn more about integrating IoT Central with other > [!div class="nextstepaction"]-> [Build a provider dashboard](tutorial-health-data-triage.md) +> [IoT Central data integration](../core/overview-iot-central-solution-builder.md) |
iot-central | Tutorial Health Data Triage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-health-data-triage.md | - Title: Tutorial - Create a health data triage dashboard with Azure IoT Central | Microsoft Docs -description: Tutorial - Learn to build a health data triage dashboard using Azure IoT Central application templates. -- Previously updated : 12/21/2021-------# Tutorial: Build a Power BI provider dashboard --When building your continuous patient monitoring solution, you can also create a dashboard for a hospital care team to visualize patient data. In this tutorial, you will learn how to create a Power BI real-time streaming dashboard from your IoT Central continuous patient monitoring application template. ---The basic architecture will follow this structure: ---In this tutorial, you learn how to: --- Export data from Azure IoT Central to Azure Event Hubs-- Set up a Power BI streaming dataset-- Connect your Logic App to Azure Event Hubs-- Stream data to Power BI from your Logic App-- Build a real-time dashboard for patient vitals---## Prerequisites --* An Azure subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/). --* An Azure IoT Central continuous patient monitoring application template. If you don't have one already, you can follow steps to [Deploy an application template](overview-iot-central-healthcare.md). --* An Azure [Event Hubs namespace and Event Hub](../../event-hubs/event-hubs-create.md). --* The Logic App that you want to access your Event Hub. To start your Logic App with an Azure Event Hubs trigger, you need a [blank Logic App](../../logic-apps/quickstart-create-first-logic-app-workflow.md). --* A Power BI service account. If you don't have one already, you can [create a free trial account for Power BI service](https://app.powerbi.com/). If you haven't used Power BI before, it might be helpful to go through [Get started with Power BI](/power-bi/service-get-started). ---## Set up a continuous data export to Azure Event Hubs --You will first need to set up a continuous data export from your Azure IoT Central application template to the Azure Event Hub in your subscription. You can do so by following the steps in this Azure IoT Central tutorial for [Exporting to Event Hubs](../core/howto-export-data.md). You will only need to export for the telemetry for the purposes of this tutorial. ---## Create a Power BI streaming dataset --1. Sign in to your Power BI account. --1. In your preferred Workspace, create a new streaming dataset by selecting the **+ Create** button in the upper-right corner of the toolbar. You will need to create a separate dataset for each patient that you would like to have on your dashboard. -- :::image type="content" source="media/create-streaming-dataset.png" alt-text="Create streaming dataset"::: ---1. Choose **API** for the source of your dataset. --1. Enter a **name** (for example, a patient's name) for your dataset and then fill out the values from your stream. You can see an example below based on values coming from the simulated devices in the continuous patient monitoring application template. The example has two patients: -- * Teddy Silvers, who has data from the Smart Knee Brace. - * Yesenia Sanford, who has data from the Smart Vitals Patch. 
-- :::image type="content" source="media/enter-dataset-values.png" alt-text="Enter dataset values"::: --To learn more about streaming datasets in Power BI, you can read this document on [real-time streaming in Power BI](/power-bi/service-real-time-streaming). ---## Connect your Logic App to Azure Event Hubs --To connect your Logic App to Azure Event Hubs, you can follow the instructions outlined in this document on [Sending events with Azure Event Hubs and Azure Logic Apps](../../connectors/connectors-create-api-azure-event-hubs.md#add-event-hubs-action). Here are some suggested parameters: --|Parameter|Value| -||| -|Content type|application/json| -|Interval|3| -|Frequency|Second| --At the end of this step, your Logic App Designer should look like this: -->[!div class="mx-imgBorder"] -> ----## Stream data to Power BI from your Logic App --The next step will be to parse the data coming from your Event Hub to stream it into the Power BI datasets that you have previously created. --Before you can do this, you will need to understand the JSON payload that is being sent from your device to your Event Hub. You can do so by looking at this [sample schema](../core/howto-export-data.md#telemetry-format) and modifying it to match your schema or using [Service Bus explorer](https://github.com/paolosalvatori/ServiceBusExplorer) to inspect the messages. If you are using the continuous patient monitoring applications, your messages will look like this: --**Smart Vitals Patch telemetry** --```json -{ - "HeartRate": 80, - "RespiratoryRate": 12, - "HeartRateVariability": 64, - "BodyTemperature": 99.08839032397609, - "BloodPressure": { - "Systolic": 23, - "Diastolic": 34 - }, - "Activity": "walking" -} -``` --**Smart Knee Brace telemetry** --```json -{ - "Acceleration": { - "x": 72.73510947763711, - "y": 72.73510947763711, - "z": 72.73510947763711 - }, - "RangeOfMotion": 123, - "KneeBend": 3 -} -``` --**Properties** --```json -{ - "iothub-connection-device-id": "1qsi9p8t5l2", - "iothub-connection-auth-method": "{\"scope\":\"device\",\"type\":\"sas\", \"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}", - "iothub-connection-auth-generation-id": "637063718586331040", - "iothub-enqueuedtime": 1571681440990, - "iothub-message-source": "Telemetry", - "iothub-interface-name": "Patient_health_data_3bk", - "x-opt-sequence-number": 7, - "x-opt-offset": "3672", - "x-opt-enqueued-time": 1571681441317 -} -``` --1. Now that you have inspected your JSON payloads, go back to your Logic App Designer and select **+ New Step**. Search and add **Initialize variable** as your next step and enter the following parameters: -- |Parameter|Value| - ||| - |Name|Interface Name| - |Type|String| -- Select **Save**. --1. Add another variable called **Body** with Type as **String**. Your Logic App will have these actions added: -- :::image type="content" source="media/initialize-string-variables.png" alt-text="Initialize variables"::: - -1. Select **+ New Step** and add a **Parse JSON** action. Rename this to **Parse Properties**. For the Content, choose **Properties** coming from the Event Hub. Select **Use sample payload to generate schema** at the bottom, and paste the sample payload from the Properties section above. --1. Next, choose the **Set variable** action and update your **Interface Name** variable with the **iothub-interface-name** from the parsed JSON properties. --1. Add a **Split** Control as your next action and choose the **Interface Name** variable as the On parameter. 
You will use this to funnel the data to the correct dataset. --1. In your Azure IoT Central application, find the Interface Name for the Smart Vitals Patch health data and the Smart Knee Brace health data from the **Device Templates** view. Create two different cases for the **Switch** Control for each Interface Name and rename the control appropriately. You can set the Default case to use the **Terminate** Control and choose what status you would like to show. -- :::image type="content" source="media/split-by-interface.png" alt-text="Split control"::: --1. For the **Smart Vitals Patch** case, add a **Parse JSON** action. For the Content, choose **Content** coming from the Event Hub. Copy and paste the sample payloads for the Smart Vitals Patch above to generate the schema. --1. Add a **Set variable** action and update the **Body** variable with the **Body** from the parsed JSON in Step 7. --1. Add a **Condition** Control as your next action and set the condition to **Body**, **contains**, **HeartRate**. This will make sure that you have the right set of data coming from the Smart Vitals Patch before populating the Power BI dataset. Steps 7-9 will look like this: -- :::image type="content" source="media/smart-vitals-pbi.png" alt-text="Smart Vitals add condition"::: --1. For the **True** case of the Condition, add an action that calls the **Add rows to a dataset** Power BI functionality. You will have to sign into Power BI for this. Your **False** case can again use the **Terminate** control. --1. Choose the appropriate **Workspace**, **Dataset**, and **Table**. Select **Add new parameter** > **Payload**. Map the parameters that you specified when creating your streaming dataset in Power BI to the parsed JSON values that are coming from your Event Hub. Then, enter valid JSON contents into the Payload field. Your filled-out actions should look like this: --  --1. For the **Smart Knee Brace** switch case, add a **Parse JSON** action to parse the content, similar to Step 7. Then **Add rows to a dataset** to update your Teddy Silvers dataset in Power BI. -- :::image type="content" source="media/knee-brace-pbi.png" alt-text="Screenshot that shows how to add rows to a datasets"::: --1. Press **Save** and then run your Logic App. ---## Build a real-time dashboard for patient vitals --Now go back to Power BI and select **+ Create** to create a new **Dashboard**. Give your dashboard a name and hit **Create**. --Select the three dots in the top navigation bar and then select **+ Add tile**. ---Choose the type of tile you would like to add and customize your app however you'd like. ---## Clean up resources --If you're not going to continue to use this application, delete your resources with the following steps: --1. From the Azure portal, you can delete the Event Hub and Logic Apps resources that you created. --1. For your IoT Central application, go to the **Application > Management** tab and select **Delete**. |
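For reference, the parse-and-push flow that the Logic App implements in the steps above can also be prototyped outside of Logic Apps. The following Python sketch posts one Smart Vitals Patch telemetry row to a Power BI streaming dataset. The push URL and the row field names are assumptions (copy the real URL from the dataset's API info in Power BI, and match the fields to the dataset you defined), so treat this as a minimal illustration rather than the tutorial's implementation.

```python
import json
import requests

# Push URL that Power BI displays under "API info" for the streaming dataset
# (hypothetical placeholder - copy the real URL from your own dataset).
PUSH_URL = "https://api.powerbi.com/beta/<tenant-id>/datasets/<dataset-id>/rows?key=<key>"

# One telemetry message from the Smart Vitals Patch, as exported to Event Hubs.
telemetry = {
    "HeartRate": 80,
    "RespiratoryRate": 12,
    "HeartRateVariability": 64,
    "BodyTemperature": 99.09,
    "BloodPressure": {"Systolic": 23, "Diastolic": 34},
    "Activity": "walking",
}

# Flatten the nested blood pressure values to match the streaming dataset fields
# (field names here are assumed - use the names you chose when creating the dataset).
row = {
    "HeartRate": telemetry["HeartRate"],
    "RespiratoryRate": telemetry["RespiratoryRate"],
    "BodyTemperature": telemetry["BodyTemperature"],
    "SystolicBP": telemetry["BloodPressure"]["Systolic"],
    "DiastolicBP": telemetry["BloodPressure"]["Diastolic"],
}

# The push endpoint accepts a JSON array of row objects.
response = requests.post(
    PUSH_URL,
    headers={"Content-Type": "application/json"},
    data=json.dumps([row]),
)
response.raise_for_status()
```

In the tutorial itself, the Logic App's **Add rows to a dataset** action performs this same push after the **Parse JSON** steps; the sketch is only useful for testing the dataset mapping independently.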
iot-central | Tutorial Micro Fulfillment Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md | Title: Tutorial - Azure IoT Micro-fulfillment center | Microsoft Docs + Title: "Tutorial: Azure IoT Micro-fulfillment center" + description: This tutorial shows you how to deploy and use the micro-fulfillment center application template for Azure IoT Central--++ Previously updated : 12/21/2021 Last updated : 02/13/2023 -# Tutorial: Deploy and walk through the micro-fulfillment center application template +# Tutorial: Deploy and review the micro-fulfillment center application template In the increasingly competitive retail landscape, retailers constantly face pressure to close the gap between demand and fulfillment. A new trend that has emerged to address the growing consumer demand is to house inventory near the end customers and the stores they visit. -The IoT Central _micro-fulfillment center_ application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers. +The IoT Central micro-fulfillment center application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers. The application template enables you to: -- Seamlessly connect different kinds of IoT sensors such as robots or condition monitoring sensors to an IoT Central application instance.-- Monitor and manage the health of the sensor network, and any gateway devices in the environment.-- Create custom rules around the environmental conditions within a fulfillment center to trigger appropriate alerts.-- Transform the environmental conditions within your fulfillment center into insights that the retail warehouse team can use.-- Export the aggregated insights into existing or new business applications for the benefit of the retail staff members.+* Seamlessly connect different kinds of IoT sensors such as robots or condition monitoring sensors to an IoT Central application instance. +* Monitor and manage the health of the sensor network, and any gateway devices in the environment. +* Create custom rules around the environmental conditions within a fulfillment center to trigger appropriate alerts. +* Transform the environmental conditions within your fulfillment center into insights that the retail warehouse team can use. +* Export the aggregated insights into existing or new business applications for the benefit of the retail staff members. :::image type="content" source="media/tutorial-micro-fulfillment-center-app/micro-fulfillment-center-architecture-frame.png" alt-text="Diagram showing the micro-fulfillment application architecture." border="false"::: ### Robotic carriers (1) -A micro-fulfillment center solution will likely have a large set of robotic carriers generating different kinds of telemetry signals. 
These signals can be ingested by a gateway device, aggregated, and then sent to IoT Central as reflected by the left side of the architecture diagram. +A micro-fulfillment center solution typically includes robotic carriers that generate different kinds of telemetry signals. These signals can be ingested by a gateway device, aggregated, and then sent to IoT Central as shown in the architecture diagram. ### Condition monitoring sensors (1) -An IoT solution starts with a set of sensors capturing meaningful signals from within your fulfillment center. It's reflected by different kinds of sensors on the far left of the architecture diagram above. +An IoT solution starts with a set of sensors capturing meaningful signals from within your fulfillment center. The architecture diagram shows different types on sensor. ### Gateway devices (2) Many IoT sensors can feed raw signals directly to the cloud or to a gateway devi The Azure IoT Central application ingests data from different kinds of IoT sensors, robots, as well gateway devices within the fulfillment center environment and generates a set of meaningful insights. -Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices. +Azure IoT Central also provides a tailored experience to store operators enabling them to remotely monitor and manage the infrastructure devices. ### Data transform (3,4) -The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a-Service) services that can perform data manipulation and enrich these insights before landing them in a business application. +The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to other platform services. These services can perform data manipulation and enrichment before delivering insights to a business application. ### Business application (5) Create the application using following steps: The following sections walk you through the key features of the application: -After successfully deploying the application template, you see the **Northwind Traders micro-fulfillment center dashboard**. Northwind Traders is a fictitious retailer that has a micro-fulfillment center being managed in this Azure IoT Central application. On this dashboard, you see information and telemetry about the devices in this template, along with a set of commands, jobs, and actions that you can take. The dashboard is logically split into two sections. On the left, you can monitor the environmental conditions within the fulfillment structure, and on the right, you can monitor the health of a robotic carrier within the facility. +After successfully deploying the application template, you see the **Northwind Traders micro-fulfillment center dashboard**. Northwind Traders is a fictitious retailer that has a micro-fulfillment center being managed in this Azure IoT Central application. On this dashboard, you see information and telemetry about the devices in this template, along with a set of commands, jobs, and actions that you can take. The dashboard is logically split into two sections. One section lets you monitor the environmental conditions within the fulfillment structure, and the other lets you monitor the health of a robotic carrier within the facility. 
From the dashboard, you can: -* See device telemetry, such as the number of picks, the number of orders processed, and properties, such as the structure system status. +* See device telemetry, such as the number of picks, the number of orders processed. +* See properties such as the structure system status. * View the floor plan and location of the robotic carriers within the fulfillment structure. * Trigger commands, such as resetting the control system, updating the carrier's firmware, and reconfiguring the network. * See an example of the dashboard that an operator can use to monitor conditions within the fulfillment center. From the dashboard, you can: If you select the device templates tab, you see that there are two different device types that are part of the template: -* **Robotic Carrier**: This device template represents the definition for a functioning robotic carrier that has been deployed in the fulfillment structure, and is performing appropriate storage and retrieval operations. If you select the template, you see that the robot is sending device data, such as temperature and axis position, and properties like the robotic carrier status. -* **Structure Condition Monitoring**: This device template represents a device collection that allows you to monitor environment condition, as well as the gateway device hosting various edge workloads to power your fulfillment center. The device sends telemetry data, such as the temperature, the number of picks, and the number of orders. It also sends information about the state and health of the compute workloads running in your environment. +* **Robotic Carrier**: This device template represents the definition for a functioning robotic carrier that has been deployed in the fulfillment structure, and is performing appropriate storage and retrieval operations. If you select the template, you see that the robot is sending device data, such as temperature and axis position, and properties like the robotic carrier status. +* **Structure Condition Monitoring**: This device template represents a device collection that lets you monitor environment conditions, and the gateway device that hosts various edge workloads. The device sends telemetry data, such as the temperature, the number of picks, and the number of orders. It also sends information about the state and health of the compute workloads running in your environment. :::image type="content" source="media/tutorial-micro-fulfillment-center-app/device-templates.png" alt-text="Screenshot of the micro-fulfillment center application device templates." lightbox="media/tutorial-micro-fulfillment-center-app/device-templates.png"::: Learn more about: > [!div class="nextstepaction"] > [IoT Central data integration](../core/overview-iot-central-solution-builder.md)- |
iot-dps | How To Troubleshoot Dps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md | Use this table to understand and resolve common errors. * For a 429 error, follow the retry pattern of IoT Hub that has exponential backoff with a random jitter. You can follow the retry-after header provided by the SDK. -* For 500-series server errors, retry your [connection](/azure/iot-dps/concepts-deploy-at-scale#iot-hub-connectivity-considerations) using cached credentials or a [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup#deviceregistrationresult) call. +* For 500-series server errors, retry your [connection](./concepts-deploy-at-scale.md#iot-hub-connectivity-considerations) using cached credentials or a [Device Registration Status Lookup API](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup#deviceregistrationresult) call. -For related best practices, such as retrying operations, see [Best practices for large-scale IoT device deployments](/azure/iot-dps/concepts-deploy-at-scale). +For related best practices, such as retrying operations, see [Best practices for large-scale IoT device deployments](./concepts-deploy-at-scale.md). ## Next Steps - To learn more about using Azure Monitor with DPS, see [Monitor Device Provisioning Service](monitor-iot-dps.md). -- To learn about metrics, logs, and schemas emitted for DPS in Azure Monitor, see [Monitoring Device Provisioning Service data reference](monitor-iot-dps-reference.md).+- To learn about metrics, logs, and schemas emitted for DPS in Azure Monitor, see [Monitoring Device Provisioning Service data reference](monitor-iot-dps-reference.md). |
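As a rough illustration of the retry guidance above, the sketch below wraps a device registration attempt in exponential backoff with random jitter, preferring a service-provided retry-after value when one is available. The `register_device` callable and the `TransientError` type are hypothetical placeholders for whatever provisioning call and error class your device SDK exposes.

```python
import random
import time


class TransientError(Exception):
    """Placeholder for a 429 or 5xx-style error raised by the provisioning call."""

    def __init__(self, message, retry_after=None):
        super().__init__(message)
        self.retry_after = retry_after


MAX_ATTEMPTS = 6
BASE_DELAY_SECONDS = 2


def register_with_retry(register_device):
    """Retry a provisioning call on transient failures with backoff and jitter."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return register_device()
        except TransientError as err:
            if attempt == MAX_ATTEMPTS:
                raise
            # Prefer the retry-after hint returned by the service (for 429s).
            delay = err.retry_after or BASE_DELAY_SECONDS * (2 ** (attempt - 1))
            delay += random.uniform(0, 1)  # jitter to avoid synchronized retries
            time.sleep(delay)
```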
iot-edge | How To Store Data Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-store-data-blob.md | You can use [Azure Storage Explorer](https://azure.microsoft.com/features/storag 1. Download and install Azure Storage Explorer

+1. The latest version of Azure Storage Explorer uses a newer storage API version that isn't supported by the blob storage module. Start Azure Storage Explorer. Select the **Edit** menu. Verify that **Target Azure Stack Hub APIs** is selected. If it isn't, select **Target Azure Stack Hub**. Restart Azure Storage Explorer for the change to take effect. This configuration is required for compatibility with your IoT Edge environment.
+
1. Connect to Azure Storage using a connection string

1. Provide connection string: `DefaultEndpointsProtocol=http;BlobEndpoint=http://<host device name>:11002/<your local account name>;AccountName=<your local account name>;AccountKey=<your local account key>;` |
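Besides Azure Storage Explorer, you can reach the module's local blob endpoint programmatically. Below is a minimal Python sketch using the `azure-storage-blob` package and the connection string format shown above; the host name, account name, key, and container name are placeholders, and, as with Storage Explorer, you may need an SDK version whose storage API version the blob storage module supports.

```python
from azure.storage.blob import BlobServiceClient

# Connection string for the blob storage module running on the IoT Edge device
# (placeholders - use your own host name, local account name, and key).
conn_str = (
    "DefaultEndpointsProtocol=http;"
    "BlobEndpoint=http://<host device name>:11002/<your local account name>;"
    "AccountName=<your local account name>;"
    "AccountKey=<your local account key>;"
)

service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("sensor-data")  # placeholder container name

# Create the container if it doesn't exist, then upload a small test blob.
try:
    container.create_container()
except Exception:
    pass  # the container probably already exists

container.upload_blob(name="hello.txt", data=b"hello from the edge", overwrite=True)
for blob in container.list_blobs():
    print(blob.name)
```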
iot-fundamentals | Iot Security Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-security-architecture.md | When you design and architect an IoT solution, it's important to understand the Microsoft recommends using a threat modeling process as part of your IoT solution design. If you're not familiar with threat modeling and the secure development lifecycle, see: - [Threat modeling](https://www.microsoft.com/securityengineering/sdl/threatmodeling)-- [Secure development best practices on Azure](/azure/security/develop/secure-dev-overview)-- [Getting started guide](/azure/security/develop/threat-modeling-tool-getting-started)+- [Secure development best practices on Azure](../security/develop/secure-dev-overview.md) +- [Getting started guide](../security/develop/threat-modeling-tool-getting-started.md) ## Security in IoT Each zone is separated by a _trust boundary_, shown as the dotted red line in th - Denial of service - Elevation of privilege -To learn more, see the [STRIDE model](/azure/security/develop/threat-modeling-tool-threats#stride-model). +To learn more, see the [STRIDE model](../security/develop/threat-modeling-tool-threats.md#stride-model). :::image type="content" source="media/iot-security-architecture/iot-security-architecture-fig1.png" alt-text="A diagram that shows the zones and trust boundaries in a typical IoT solution architecture." border="false"::: The following table shows example mitigations to the storage threats: ## See also -Read about IoT Hub security in [Control access to IoT Hub](../iot-hub/iot-hub-devguide-security.md) in the IoT Hub developer guide. +Read about IoT Hub security in [Control access to IoT Hub](../iot-hub/iot-hub-devguide-security.md) in the IoT Hub developer guide. |
iot-hub | Iot Hub Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-upgrade.md | The maximum limit of device-to-cloud partitions for basic tier and standard tier ## Next steps -Get more details about [How to choose the right IoT Hub tier](iot-hub-scaling.md). +Get more details about [How to choose the right IoT Hub tier](iot-hub-scaling.md). |
key-vault | Monitor Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/monitor-key-vault.md | Here are some queries that you can enter into the **Log search** bar to help you Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system preemptively. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md). -If you are creating or running an application which runs on Azure Key Vault, [Azure Monitor Application Insights](../../azure-monitor/overview.md#application-insights) may offer additional types of alerts. +If you are creating or running an application which runs on Azure Key Vault, [Azure Monitor Application Insights](../../azure-monitor/app/app-insights-overview.md) may offer additional types of alerts. Here are some common and recommended alert rules for Azure Key Vault - |
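Before wiring up an alert rule, it can help to test the underlying query. The following Python sketch runs a Log Analytics query over Key Vault diagnostic logs with the `azure-monitor-query` package. The workspace ID is a placeholder, and the query assumes the classic `AzureDiagnostics` table and its `httpStatusCode_d` column; adjust it if your vault sends resource-specific logs.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

# Count failed Key Vault requests (HTTP 4xx/5xx) per operation over the last day.
query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where httpStatusCode_d >= 400
| summarize FailedRequests = count() by OperationName
| order by FailedRequests desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(row)
```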
lighthouse | Tenants Users Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/tenants-users-roles.md | All [built-in roles](../../role-based-access-control/built-in-roles.md) are curr In some cases, a role that had previously been supported with Azure Lighthouse may become unavailable. For example, if the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission is added to a role that previously didn't have that permission, that role can no longer be used when onboarding new delegations. Users who had already been assigned the role will still be able to work on previously delegated resources, but they won't be able to perform tasks that use the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission. > [!IMPORTANT]-> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md) specified for each role. In some cases, even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported, the actions included in a role may allow access to data, where data is exposed through access keys and not accessed via the user's identity. For example, the [Virtual Machine Contributor](/azure/role-based-access-control/built-in-roles) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data. +> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md) specified for each role. In some cases, even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported, the actions included in a role may allow access to data, where data is exposed through access keys and not accessed via the user's identity. For example, the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data. > [!NOTE] > As soon as a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly-added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md). Similarly, if a role becomes unavailable, you may still see it in Partner Center for a period of time; however, you won't be able to publish new offers using such roles. The only exception is if the subscription is transferred to an Azure AD tenant t ## Next steps - Learn about [recommended security practices for Azure Lighthouse](recommended-security-practices.md).-- Onboard your customers to Azure Lighthouse, either by [using Azure Resource Manager templates](../how-to/onboard-customer.md) or by [publishing a private or public managed services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md).+- Onboard your customers to Azure Lighthouse, either by [using Azure Resource Manager templates](../how-to/onboard-customer.md) or by [publishing a private or public managed services offer to Azure Marketplace](../how-to/publish-managed-services-offers.md). |
load-balancer | Load Balancer Multiple Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip.md | In this section, you'll create a virtual network for the load balancer and virtu | AzureBastionSubnet address space | Enter **10.1.1.0/26** | | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. | -12. Select the **Review + create** tab select **Review + create**. +12. Select the **Review + create** tab or select the blue **Review + create** button at the bottom of the page. 13. Select **Create**. |
load-balancer | Monitor Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md | For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks
-If you're creating or running an application, which run on Load Balancer [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer other types of alerts.
+If you're creating or running an application that runs on Load Balancer, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) may offer other types of alerts.

The following table lists common and recommended alert rules for Load Balancer. |
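To check the metric values you intend to alert on, you can query them directly. This Python sketch uses the `azure-monitor-query` package to read a Load Balancer metric over the last hour. The resource ID is a placeholder, and the metric name `VipAvailability` (data path availability) is an assumption; confirm the exact metric names in the Load Balancer metrics reference before relying on them.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Full resource ID of the load balancer (placeholder values).
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Network/loadBalancers/<lb-name>"
)

response = client.query_resource(
    resource_id,
    metric_names=["VipAvailability"],  # data path availability (assumed name)
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```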
logic-apps | Biztalk Server To Azure Integration Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-to-azure-integration-services-overview.md | You can install and run BizTalk Server on your own hardware, on-premises virtual - Availability and redundancy - In Azure, [availability zones](../reliability/availability-zones-overview.md#availability-zones) provide resiliency, distributed availability, and active-active-active zone scalability. To increase availability for your logic app workloads, you can [enable availability zone support](/azure/logic-apps/set-up-zone-redundancy-availability-zones), but only when you create your logic app. You'll need at least three separate availability zones in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information, see [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability). + In Azure, [availability zones](../reliability/availability-zones-overview.md#availability-zones) provide resiliency, distributed availability, and active-active-active zone scalability. To increase availability for your logic app workloads, you can [enable availability zone support](./set-up-zone-redundancy-availability-zones.md), but only when you create your logic app. You'll need at least three separate availability zones in any Azure region that supports and enables zone redundancy. The Azure Logic Apps platform distributes these zones and logic app workloads across these zones. This capability is a key requirement for enabling resilient architectures and providing high availability if datacenter failures happen in a region. For more information, see [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability). - Isolated and dedicated environment You've learned more about how Azure Integration Services compares to BizTalk Ser > [!div class="nextstepaction"] > [Choose the best Azure Integration Services offerings for your scenario](azure-integration-services-choose-capabilities.md) >-> [Migration approaches for BizTalk Server to Azure Integration Services](biztalk-server-azure-integration-services-migration-approaches.md) +> [Migration approaches for BizTalk Server to Azure Integration Services](biztalk-server-azure-integration-services-migration-approaches.md) |
machine-learning | How To Access Azureml Behind Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md | The following terms and information are used throughout this article: * __Azure service tags__: A service tag is an easy way to specify the IP ranges used by an Azure service. For example, the `AzureMachineLearning` tag represents the IP addresses used by the Azure Machine Learning service. > [!IMPORTANT]- > Azure service tags are only supported by some Azure services. For a list of service tags supported with network security groups and Azure Firewall, see the [Virtual network service tags](/azure/virtual-network/service-tags-overview) article. + > Azure service tags are only supported by some Azure services. For a list of service tags supported with network security groups and Azure Firewall, see the [Virtual network service tags](../virtual-network/service-tags-overview.md) article. > > If you are using a non-Azure solution such as a 3rd party firewall, download a list of [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519). Extract the file and search for the service tag within the file. The IP addresses may change periodically. __Inbound traffic__ | `AzureLoadBalancer` | Any | `VirtualNetwork` | 44224 | Inbound to compute instance/cluster. __Only needed if the instance/cluster is configured to use a public IP address__. | > [!TIP]-> A network security group (NSG) is created by default for this traffic. For more information, see [Default security rules](/azure/virtual-network/network-security-groups-overview#inbound). +> A network security group (NSG) is created by default for this traffic. For more information, see [Default security rules](../virtual-network/network-security-groups-overview.md#inbound). __Outbound traffic__ __Outbound traffic__ > If a compute instance or compute cluster is configured for no public IP, they can't access the public internet by default. However, they do need to communicate with the resources listed above. To enable outbound communication, you have two possible options: > > * __User-defined route and firewall__: Create a user-defined route in the subnet that contains the compute. The __Next hop__ for the route should reference the private IP address of the firewall, with an address prefix of 0.0.0.0/0.- > * __Azure Virtual Network NAT with a public IP__: For more information on using Virtual Network Nat, see the [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) documentation. + > * __Azure Virtual Network NAT with a public IP__: For more information on using Virtual Network Nat, see the [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) documentation. ### Recommended configuration for training and deploying models |
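If you're configuring a non-Azure firewall from the downloaded Azure IP Ranges and Service Tags file, a small script can pull out just the ranges you need. The sketch below reads the downloaded JSON and prints the address prefixes for the `AzureMachineLearning` tag; the file name is a placeholder, and the structure assumed here is the standard layout of that download.

```python
import json

# Path to the downloaded "Azure IP Ranges and Service Tags" JSON (placeholder name).
with open("ServiceTags_Public.json", encoding="utf-8") as f:
    service_tags = json.load(f)

# Each entry in "values" describes one service tag and its address prefixes.
for tag in service_tags["values"]:
    if tag["name"] == "AzureMachineLearning":
        for prefix in tag["properties"]["addressPrefixes"]:
            print(prefix)
```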
machine-learning | How To Add Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md | title.suffix: Azure Machine Learning description: Add users to your data labeling project so that they can label data, but not see the rest of your workspace. -+ |
machine-learning | How To Create Image Labeling Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md | |
machine-learning | How To Create Text Labeling Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md | |
machine-learning | How To Inference Server Http | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md | This article focuses on the Azure Machine Learning inference HTTP server. The following table provides an overview of scenarios to help you choose what works best for you. -| Scenario | Inference HTTP Server | Local endpoint | +| Scenario | Inference HTTP server | Local endpoint | | -- | | -- | | Update local Python environment **without** Docker image rebuild | Yes | No | | Update scoring script | Yes | Yes | Now you can modify the scoring script (`score.py`) and test your changes by runn There are two ways to use Visual Studio Code (VS Code) and [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to debug with [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package ([Launch and Attach modes](https://code.visualstudio.com/docs/editor/debugging#_launch-versus-attach-configurations)). -- **Launch mode**: set up the `launch.json` in VS Code and start the AzureML Inference HTTP Server within VS Code.+- **Launch mode**: set up the `launch.json` in VS Code and start the AzureML inference HTTP server within VS Code. 1. Start VS Code and open the folder containing the script (`score.py`). 1. Add the following configuration to `launch.json` for that workspace in VS Code: There are two ways to use Visual Studio Code (VS Code) and [Python Extension](ht 1. Start debugging session in VS Code. Select "Run" -> "Start Debugging" (or `F5`). -- **Attach mode**: start the AzureML Inference HTTP Server in a command line and use VS Code + Python Extension to attach to the process.+- **Attach mode**: start the AzureML inference HTTP server in a command line and use VS Code + Python Extension to attach to the process. > [!NOTE] > If you're using Linux environment, first install the `gdb` package by running `sudo apt-get install -y gdb`. 1. Add the following configuration to `launch.json` for that workspace in VS Code: The following steps explain how the Azure Machine Learning inference HTTP server ## Understanding logs -Here we describe logs of the AzureML Inference HTTP Server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints. +Here we describe logs of the AzureML inference HTTP server. You can get the log when you run the `azureml-inference-server-http` locally, or [get container logs](how-to-troubleshoot-online-endpoints.md#get-container-logs) if you're using online endpoints. > [!NOTE] > The logging format has changed since version 0.8.0. If you find your log in different style, update the `azureml-inference-server-http` package to the latest version. |
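As a concrete reference for local debugging, here's a minimal scoring script the inference HTTP server can host. It's a sketch with a stand-in for a real model, and it assumes the request body is JSON such as `{"data": [1, 2, 3]}`. Save it as `score.py`, start the server with `azmlinfsrv --entry_script score.py` (the CLI installed by the `azureml-inference-server-http` package), and POST to the local `/score` endpoint to step through `run()` under the debugger.

```python
import json
import logging

model = None


def init():
    # Called once when the server starts - load your real model here.
    global model
    model = lambda values: [v * 2 for v in values]  # placeholder "model"
    logging.info("init() complete")


def run(raw_data):
    # Called for every scoring request; raw_data is the request body as a string.
    payload = json.loads(raw_data)
    result = model(payload["data"])
    return {"result": result}
```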
machine-learning | How To Integrate Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-integrate-azure-policy.md | You can also assign policies by using [Azure PowerShell](../governance/policy/as ## Conditional access policies +To control who can access your Azure Machine Learning workspace, use Azure Active Directory [Conditional Access](../active-directory/conditional-access/overview.md). + > [!IMPORTANT]-> [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview) is __not__ supported with Azure Machine Learning. +> Azure Machine Learning studio cannot be added in cloud apps in Azure AD Conditional Access, as the studio UI is a client application. ## Enable self-service using landing zones |
machine-learning | How To Outsource Data Labeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-outsource-data-labeling.md | -Learn how to engage a data labeling vendor company to help you label your data. You can learn more about these companies and the labeling services they provide in their listing pages in [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/consulting-services?page=1&search=AzureMLVend). +Learn how to engage a data labeling vendor company to help you label your data. Learn more about these companies, and the labeling services they provide, in their [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/consulting-services?page=1&search=AzureMLVend) listing pages. ## Workflow summary Before you create your data labeling project: 1. Contact and enter into a contract with the labeling service provider. -Once the contract with the vendor labeling company is in place: +Once you have the contract with the vendor labeling company in place: -1. Create the labeling project in [Azure Machine Learning studio](https://ml.azure.com). For more details on creating a project, see how to create an [image labeling project](how-to-create-image-labeling-projects.md) or [text labeling project](how-to-create-text-labeling-projects.md). -1. You're not limited to using a data labeling provider from Azure Marketplace. But if you do use a provider from Azure Marketplace: +1. Create the labeling project in the [Azure Machine Learning studio](https://ml.azure.com). To learn more about project creation, see how to create an [image labeling project](how-to-create-image-labeling-projects.md) or [text labeling project](how-to-create-text-labeling-projects.md). +1. You're not limited the data labeling providers listed in the Azure Marketplace. However, if you do use a provider from the Azure Marketplace: 1. Select **Use a vendor labeling company from Azure Marketplace** in the workforce step. 1. Select the appropriate data labeling company in the dropdown. > [!NOTE]- > The vendor labeling company name cannot be changed after you create the labeling project. + > You cannot change the vendor labeling company name after you create the labeling project. -1. For any provider, whether found through Azure Marketplace or elsewhere, enable access (`labeler` role, `techlead` role) to the vendor labeling company using Azure Role Based Access (RBAC). These roles will allow the company to access resources to annotate your data. +1. For any provider, found through Azure Marketplace or somewhere else, use Azure Role Based Access (RBAC) to enable access (`labeler` role, `techlead` role) to the vendor labeling company. These roles will allow the company to access resources to annotate your data. ## <a name="review"></a> Select a company -Microsoft has identified some labeling service providers with knowledge and experience who may be able to meet your needs. You can learn about the labeling service providers and choose a provider, taking into account the needs and requirements of your project(s) in their listing pages in [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/consulting-services?page=1&search=AzureMLVend). +Microsoft has identified some labeling service providers, with knowledge and experience, who can potentially meet your needs. 
Taking into account the needs and requirements of your project(s), you can learn about the labeling service providers, and choose a provider, in the provider listing pages at the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/consulting-services?page=1&search=AzureMLVend). > [!IMPORTANT]-> You can learn more about these companies and the labeling services they provide in their listing pages in Azure Marketplace. You are responsible for any decision to use a labeling company that offers services through Azure Marketplace, and you should independently assess whether a labeling company and its experience, services, staffing, terms, etc. will meet your project requirements. You may contact a labeling company that offers services through Azure Marketplace using the **Contact me** option in Azure Marketplace, and you can expect to hear from a contacted company within three business days. You will contract with and make payment to the labeling company directly. +> You can learn more about these companies, and the labeling services they provide, in their listing pages in Azure Marketplace. You are responsible for any decision to use a labeling company that offers services through Azure Marketplace, and you should independently assess whether a labeling company and its experience, services, staffing, terms, etc. will meet your project requirements. You may contact a labeling company that offers services through Azure Marketplace using the **Contact me** option in Azure Marketplace, and you can expect to hear from a contacted company within three business days. You will contract with and make payment to the labeling company directly. Microsoft periodically reviews the list of potential labeling service providers in Azure Marketplace and may add or remove providers from the list at any time. -* If a provider is removed, it won't affect any existing projects, or that company's access to those projects. +* If a provider is removed, it won't affect any existing projects, or the access of that company to those projects. * If you use a provider who is no longer listed in Azure Marketplace, don't select the **Use a vendor labeling company from Azure Marketplace** option in your new project. * A removed provider will no longer have a listing in Azure Marketplace. * A removed provider will no longer be able to be contacted through Azure Marketplace. Below are vendor labeling companies who might help in getting your data labeled * [Quadrant Resource](https://azuremarketplace.microsoft.com/marketplace/consulting-services/quadrantresourcellc1587325810226.quadrant_resource_data_labeling) -## Enter into a contract +## Enter into a contract -After you have selected the labeling company you want to work with, you need to enter into a contract directly with the labeling company setting forth the terms of your engagement. Microsoft is not a party to this agreement and plays no role in determining or negotiating its terms. Amounts payable under this agreement will be paid directly to the labeling company. +After you select the labeling company you want to work with, you must enter into a contract directly with that labeling company, setting forth the terms of your engagement. Microsoft is not a party to this agreement, and plays no role in determining or negotiating its terms. Amounts payable under this agreement will be paid directly to the labeling company. 
-If you enable ML Assisted labeling in a labeling project, Microsoft will charge you separately for the compute resources consumed in connection with this service. All other charges associated with your use of Azure Machine Learning (such as storage of data used in your Azure Machine Learning workspace) are governed by the terms of your agreement with Microsoft. +If you enable ML Assisted labeling in a labeling project, Microsoft will charge you separately for the compute resources consumed in connection with this service. The terms of your agreement with Microsoft govern all other charges associated with your use of Azure Machine Learning (for example, storage of data used in your Azure Machine Learning workspace). ## Enable access -In order for the vendor labeling company to have access into your projects, you'll next [add them as labelers to your project](how-to-add-users.md). If you are planning to use multiple vendor labeling companies for different labeling projects, we recommend you create separate workspaces for each company. +In order for the vendor labeling company to have access to your project resources, you'll next [add them as labelers to your project](how-to-add-users.md). If you plan to use multiple vendor labeling companies for different labeling projects, we recommend that you create separate workspaces for each company. > [!IMPORTANT]-> You, and not Microsoft, are responsible for all aspects of your engagement with a labeling company, including but not limited to issues relating to scope, quality, schedule, and pricing. +> You, and not Microsoft, are responsible for all aspects of your engagement with a labeling company, including but not limited to issues involving scope, quality, schedule, and pricing. ## Next steps |
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md | The following configurations are in addition to those listed in the [Prerequisit For more information on the outbound traffic that is used by Azure Machine Learning, see the following articles: - [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).- - [Azure's outbound connectivity methods](/azure/load-balancer/load-balancer-outbound-connections#scenarios). + - [Azure's outbound connectivity methods](../load-balancer/load-balancer-outbound-connections.md#scenarios). - For more information on service tags that can be used with Azure Firewall, see the [Virtual network service tags](/azure/virtual-network/service-tags-overview) article. + For more information on service tags that can be used with Azure Firewall, see the [Virtual network service tags](../virtual-network/service-tags-overview.md) article. Use the following information to create a compute instance or cluster with no public IP address: This article is part of a series on securing an Azure Machine Learning workflow. * [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)-* [Use a firewall](how-to-access-azureml-behind-firewall.md) +* [Use a firewall](how-to-access-azureml-behind-firewall.md) |
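For orientation, provisioning a compute cluster into the secured subnet with no public IP can also be done from the Python SDK v2. The following is a sketch, not a complete setup: it assumes an existing `MLClient` named `ml_client`, and the virtual network, subnet, cluster name, and VM size values are placeholders.

```python
from azure.ai.ml.entities import AmlCompute, NetworkSettings

cluster = AmlCompute(
    name="cpu-cluster",            # placeholder cluster name
    size="STANDARD_DS3_V2",        # placeholder VM size
    min_instances=0,
    max_instances=4,
    enable_node_public_ip=False,   # no public IP on the cluster nodes
    network_settings=NetworkSettings(
        vnet_name="my-vnet",       # placeholder virtual network
        subnet="training-subnet",  # placeholder subnet
    ),
)

# Assumes ml_client is an authenticated MLClient for the workspace.
ml_client.compute.begin_create_or_update(cluster).result()
```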
machine-learning | How To Setup Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md | Learn how to set up authentication to your Azure Machine Learning workspace from Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](how-to-assign-roles.md). +Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only. + ## Prerequisites * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md). print(ml_client) ## Use Conditional Access +As an administrator, you can enforce [Azure AD Conditional Access policies](../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you +can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to Machine Learning Cloud app. + > [!IMPORTANT]-> [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview) is __not__ supported with Azure Machine Learning. +> Azure Machine Learning studio cannot be added in cloud apps in Azure AD Conditional Access, as the studio UI is a client application. ## Next steps |
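To tie the workflow together, here's a short sketch of connecting to the workspace with the Python SDK v2, falling back to an interactive browser sign-in (which is where any Conditional Access policy would be evaluated). The subscription, resource group, and workspace names are placeholders.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential

try:
    credential = DefaultAzureCredential()
    # Fail fast if the default credential chain can't provide a token.
    credential.get_token("https://management.azure.com/.default")
except Exception:
    # Fall back to browser sign-in, where Conditional Access is enforced.
    credential = InteractiveBrowserCredential()

ml_client = MLClient(
    credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)
print(ml_client)
```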
machine-learning | How To Troubleshoot Data Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-data-access.md | In this guide, learn how to identify and resolve known issues with data access w ## Error Codes -Data access error codes are hierarchical. Error codes are delimited by the full stop character `.` and are more specific the more segments there are. +Data access error codes are hierarchical. The full stop character `.` delimits error codes, and become more specific with more segments available. ## ScriptExecution.DatabaseConnection ### ScriptExecution.DatabaseConnection.NotFound -The database or server defined in the datastore couldn't be found or no longer exists. Check if the database still exists in Azure portal or linked to from the Azure Machine Learning studio datastore details page. If it doesn't exist, recreating it with the same name will enable the existing datastore to be used. If a new server name or database is used, the datastore will have to be deleted, and recreated to use the new name. +The database or server defined in the datastore cannot be found, or no longer exists. Check if the database still exists in Azure portal, or if the Azure Machine Learning studio datastore details page links to it. If it doesn't exist, you will enable the existing datastore for use if you recreate it with the same name. To use a new server name or database, you must delete and recreate the datastore to use the new name. ### ScriptExecution.DatabaseConnection.Authentication -The authentication failed while trying to connect to the database. The authentication method is stored inside the datastore and supports SQL authentication, service principal, or no stored credential (identity based access). Enabling workspace MSI makes the authentication use the workspace MSI when previewing data in Azure Machine Learning studio. A SQL server user needs to be created for the service principal and workspace MSI (if applicable) and granted classic database permissions. More info can be found [here](/azure/azure-sql/database/authentication-aad-service-principal-tutorial#create-the-service-principal-user-in-azure-sql-database). +The authentication failed while trying to connect to the database. The authentication method is stored inside the datastore, and supports SQL authentication, service principal, or no stored credential (identity based access). When previewing data in Azure Machine Learning studio, workspace MSI enabling makes the authentication use the workspace MSI. A SQL server user needs to be created for the service principal and workspace MSI (if applicable) and granted classic database permissions. More info can be found [here](/azure/azure-sql/database/authentication-aad-service-principal-tutorial#create-the-service-principal-user-in-azure-sql-database). Contact your data admin to verify or add the correct permissions to the service principal or user identity. Errors also include: - ScriptExecution.DatabaseConnection.Authentication.AzureIdentityAccessTokenResolution.InvalidResource- - The server under the subscription and resource group couldn't be found. Check that the subscription ID and resource group defined in the datastore matches that of the server and update the values if needed. + - The server under the subscription and resource group couldn't be found. Check that the subscription ID and resource group defined in the datastore match those of the server, and update the values if necessary. 
> [!NOTE]- > Use the subscription ID and resource group of the server and not of the workspace. If the datastore is cross subscription or cross resource group server, these will be different. + > Use the subscription ID and resource group of the server, not of the workspace. If the datastore is cross subscription or cross resource group server, these will differ. - ScriptExecution.DatabaseConnection.Authentication.AzureIdentityAccessTokenResolution.FirewallSettingsResolutionFailure- - The identity doesn't have permission to read firewall settings of the target server. Contact your data admin to the Reader role to the workspace MSI. + - The identity doesn't have permission to read the target server firewall settings. Contact your data admin for the workspace MSI Reader role. ## ScriptExecution.DatabaseQuery ### ScriptExecution.DatabaseQuery.TimeoutExpired -The executed SQL query took too long and timed out. The timeout can be specified at time of data asset creation. If a new timeout is needed, a new asset must be created, or a new version of the current asset must be created. In Azure Machine Learning studio SQL preview, there will have a fixed query timeout, but the defined value will always be honored for jobs. +The executed SQL query took too long and timed out. You can specify the timeout at time of data asset creation. If a new timeout is needed, a new asset must be created, or a new version of the current asset must be created. In Azure Machine Learning studio SQL preview, there will have a fixed query timeout, but the defined value will always be honored for jobs. ## ScriptExecution.StreamAccess ### ScriptExecution.StreamAccess.Authentication -The authentication failed while trying to connect to the storage account. The authentication method is stored inside the datastore and depending on the datastore type, can support account key, SAS token, service principal or no stored credential (identity based access). Enabling workspace MSI makes the authentication use the workspace MSI when previewing data in Azure Machine Learning studio. +The authentication failed while trying to connect to the storage account. The authentication method is stored inside the datastore, and depending on the datastore type, it can support account key, SAS token, service principal or no stored credential (identity based access). When previewing data in Azure Machine Learning studio, workspace MSI enabling makes the authentication use the workspace MSI. Contact your data admin to verify or add the correct permissions to the service principal or user identity. Errors also include: - ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.FirewallSettingsResolutionFailure - The identity doesn't have permission to read firewall settings of the target storage account. Contact your data admin to the Reader role to the workspace MSI. - ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.PrivateEndpointResolutionFailure- - The target storage account is using a virtual network but the logged in session isn't connecting to the workspace via a private endpoint. Add a private endpoint to the workspace and ensure that the virtual network or subnet of the private endpoint is allowed by the storage virtual network settings. Add the logged in session's public IP to the storage firewall allowlist. + - The target storage account uses a virtual network, but the logged-in session isn't connecting to the workspace via a private endpoint. 
Add a private endpoint to the workspace, and ensure that the storage virtual network settings allows the virtual network or subnet of the private endpoint. Add the logged in session's public IP to the storage firewall allowlist. - ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.NetworkIsolationViolated- - The target storage account's firewall settings don't permit this data access. Check that your logged in session is within compatible network settings with the storage account. If Workspace MSI is used, check that it has Reader access to the storage account and to the private endpoints associated with the storage account. + - The target storage account firewall settings don't permit this data access. Check that your logged in session falls within compatible network settings with the storage account. If Workspace MSI is used, check that it has Reader access to the storage account and to the private endpoints associated with the storage account. - ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.InvalidResource- - The storage account under the subscription and resource group couldn't be found. Check that the subscription ID and resource group defined in the datastore matches that of the storage account and update the values if needed. + - The storage account under the subscription and resource group couldn't be found. Check that the subscription ID and resource group defined in the datastore match those of the storage account, and update the values if needed. > [!NOTE]- > Use the subscription ID and resource group of the server and not of the workspace. If the datastore is cross subscription or cross resource group server, these will be different. + > Use the subscription ID and resource group of the server, and not of the workspace. These will be different for a cross subscription or cross resource group server. ### ScriptExecution.StreamAccess.NotFound -The specified file or folder path doesn't exist. Check that the provided path exists in Azure portal or if using a datastore, that the right datastore is used (including the datastore's account and container). If the storage account is an HNS enabled Blob storage, otherwise known as ADLS Gen2, or an `abfs[s]` URI, that storage ACLs may restrict particular folders or paths. This error will appear as a "NotFound" error instead of an "Authentication" error. +The specified file or folder path doesn't exist. Check that the provided path exists in Azure portal, or if using a datastore, that the right datastore is used (including the datastore's account and container). If the storage account is an HNS enabled Blob storage, otherwise known as ADLS Gen2, or an `abfs[s]` URI, that storage ACLs may restrict particular folders or paths. This error will appear as a "NotFound" error instead of an "Authentication" error. ### ScriptExecution.StreamAccess.Validation |
machine-learning | Migrate To V2 Assets Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md | In V1, an AzureML dataset can either be a `Filedataset` or a `Tabulardataset`. In V2, an AzureML data asset can be a `uri_folder`, `uri_file` or `mltable`. You can conceptually map `Filedataset` to `uri_folder` and `uri_file`, `Tabulardataset` to `mltable`. -* URIs (`uri_folder`, `uri_file`) - a Uniform Resource Identifier that is a reference to a storage location on your local computer or in the cloud that makes it easy to access data in your jobs. -* MLTable - a method to abstract the schema definition for tabular data so that it's easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe. +* URIs (`uri_folder`, `uri_file`) - a Uniform Resource Identifier that is a reference to a storage location on your local computer or in the cloud, that makes it easy to access data in your jobs. +* MLTable - a method to abstract the tabular data schema definition, to make it easier for consumers of that data to materialize the table into a Pandas/Dask/Spark dataframe. -This article gives a comparison of data scenario(s) in SDK v1 and SDK v2. +This article compares data scenario(s) in SDK v1 and SDK v2. ## Create a `filedataset`/ uri type of data asset |
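As a rough, hedged illustration of the V2 asset types described above, the following Azure Machine Learning CLI (v2) sketch creates a `uri_file` and an `mltable` data asset. The names, paths, and workspace details are hypothetical, and the linked article covers the equivalent SDK calls.

```bash
# Assumes the Azure ML CLI extension is installed: az extension add -n ml
# Names, paths, and workspace details below are placeholders.
az ml data create --name sample-uri-file --version 1 \
  --type uri_file --path ./data/sample.csv \
  --resource-group <resource-group> --workspace-name <workspace-name>

az ml data create --name sample-mltable --version 1 \
  --type mltable --path ./data/mltable-folder \
  --resource-group <resource-group> --workspace-name <workspace-name>
```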
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md | In this article you learn how to secure the following training compute resources - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission isn't needed for Azure Resource Manager (ARM) template deployments. - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource. - For more information on Azure RBAC with networking, see the [Networking built-in roles](/azure/role-based-access-control/built-in-roles.md#networking) + For more information on Azure RBAC with networking, see the [Networking built-in roles](../../role-based-access-control/built-in-roles.md#networking) ## Limitations In this article you learn how to secure the following training compute resources * __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply: - * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](/azure/virtual-network/virtual-network-peering-overview). + * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md). * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. Guidance such as using NSG rules, user-defined routes, and input/output requirements, apply as normal when using a different region than the workspace. The following configurations are in addition to those listed in the [Prerequisit For more information on the outbound traffic that is used by Azure Machine Learning, see the following articles: - [Configure inbound and outbound network traffic](../how-to-access-azureml-behind-firewall.md).- - [Azure's outbound connectivity methods](/azure/load-balancer/load-balancer-outbound-connections#scenarios). + - [Azure's outbound connectivity methods](../../load-balancer/load-balancer-outbound-connections.md#scenarios). Use the following information to create a compute instance or cluster with no public IP address: The following configurations are in addition to those listed in the [Prerequisit > > If you have another NSG at the subnet level, the rules in the subnet level NSG mustn't conflict with the rules in the automatically created NSG. >- > To learn how the NSGs filter your network traffic, see [How network security groups filter network traffic](/azure/virtual-network/network-security-group-how-it-works). + > To learn how the NSGs filter your network traffic, see [How network security groups filter network traffic](../../virtual-network/network-security-group-how-it-works.md). * One load balancer The following configurations are in addition to those listed in the [Prerequisit For a compute instance, these resources are kept until the instance is deleted. Stopping the instance doesn't remove the resources. 
> [!IMPORTANT]- > These resources are limited by the subscription's [resource quotas](/azure/azure-resource-manager/management/azure-subscription-service-limits). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure Policy assignment which prohibits creation of network security groups. + > These resources are limited by the subscription's [resource quotas](../../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure Policy assignment which prohibits creation of network security groups. + In your VNet, allow **inbound** TCP traffic on port **44224** from the `AzureMachineLearning` service tag. > [!IMPORTANT] |
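If you manage the subnet's network security group yourself, the inbound rule called out above could be added with an Azure CLI command along these lines; the resource group and NSG names are placeholders.

```bash
# Allow inbound TCP 44224 from the AzureMachineLearning service tag (placeholder names).
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowAzureMLInbound \
  --priority 1010 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureMachineLearning \
  --source-port-ranges '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges 44224
```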
machine-learning | How To Setup Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md | Learn how to set up authentication to your Azure Machine Learning workspace. Aut Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](../how-to-assign-roles.md). +Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only. + ## Prerequisites * Create an [Azure Machine Learning workspace](../how-to-manage-workspace.md). ws = Workspace(subscription_id="your-sub-id", ## Use Conditional Access +As an administrator, you can enforce [Azure AD Conditional Access policies](../../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you +can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to Machine Learning Cloud app. + > [!IMPORTANT]-> [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview) is __not__ supported with Azure Machine Learning. +> Azure Machine Learning studio cannot be added in cloud apps in Azure AD Conditional Access, as the studio UI is a client application. ## Next steps |
marketplace | Azure App Metered Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-metered-billing.md | When it comes to defining the offer along with its pricing models, it is importa > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee. > [!Note]-> Offers will be billed to customers in the customers’ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies#how-we-convert-currency). +> Offers will be billed to customers in the customers’ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md#how-we-convert-currency). ## Sample offer As an example, Contoso is a publisher with a managed application service called Contoso Analytics (CoA). CoA allows customers to analyze large amounts of data for reporting and data warehousing. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish offers to Azure customers. There are two plans associated with CoA, outlined below: Follow the instruction in [Support for the commercial marketplace program in Par **Video tutorial** -- [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310)--+- [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310) |
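To make the guidance about emitting usage events above the base fee more concrete, here is a hedged sketch of posting a single usage event to the commercial marketplace metering service. The token, resource ID, plan, dimension, quantity, and timestamp are placeholder values; check the metering service API reference for the exact payload your offer type requires.

```bash
# Placeholder values throughout; reports one overage event for a metered dimension.
curl -X POST "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "resourceId": "<managed-app-resource-id>",
        "planId": "<plan-id>",
        "dimension": "<dimension-id>",
        "quantity": 5,
        "effectiveStartTime": "2023-03-01T00:00:00Z"
      }'
```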
marketplace | Azure Container Plan Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-plan-availability.md | When you remove a market, customers from that market who are using active deploy Select *Save* to continue. > [!NOTE]-> Offers will be billed to customers in the customers’ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies). +> Offers will be billed to customers in the customers’ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md). ## Pricing |
marketplace | Isv Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-customer.md | Use this page to define private offer terms, notification contacts, and pricing > [!NOTE]- > Customers can find their billing account ID in 2 ways. 1) In the [Azure portal](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** > **Properties** > **ID**. A user in the customer organization should have access to the billing account to see the ID in Azure Portal. 2) If the customer knows the subscription they plan to use for the purchase, click on **Subscriptions**, click on the relevant subscription > **Properties** (or Billing Properties) > **Billing Account ID**. See [Billing account scopes in the Azure portal](/azure/cost-management-billing/manage/view-all-accounts). + > Customers can find their billing account ID in 2 ways. 1) In the [Azure portal](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** > **Properties** > **ID**. A user in the customer organization should have access to the billing account to see the ID in Azure Portal. 2) If the customer knows the subscription they plan to use for the purchase, click on **Subscriptions**, click on the relevant subscription > **Properties** (or Billing Properties) > **Billing Account ID**. See [Billing account scopes in the Azure portal](../cost-management-billing/manage/view-all-accounts.md). :::image type="content" source="media/isv-customer/customer-properties.png" alt-text="Shows the offer Properties tab in Partner Center."::: The payout amount and agency fee that Microsoft charges is based on the private - [Private Offer Overview of ISV to Customer Offers](https://www.youtube.com/watch?v=SNfEMKNmstY) - [ISV to Customer Private Offer Creation](https://www.youtube.com/watch?v=WPSM2_v4JuE) - [ISV to Customer Private Offer Acceptance](https://www.youtube.com/watch?v=HWpLOOtfWZs)-- [ISV to Customer Private Offer Purchase Experience](https://www.youtube.com/watch?v=mPX7gqdHqBk)----+- [ISV to Customer Private Offer Purchase Experience](https://www.youtube.com/watch?v=mPX7gqdHqBk) |
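As a possible command-line alternative to the portal steps described in the note above, a customer with access to the billing account could list billing accounts and their IDs as sketched below; this assumes the `az billing` command group is available in their CLI version.

```bash
# The billing account ID is the account's "name" property in the output.
az billing account list --output table
```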
migrate | Migrate Support Matrix Vmware Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md | This table summarizes assessment support and limitations for VMware vSphere virt | **VMware vCenter Server** | Version 5.5, 6.0, 6.5, or 6.7. **VMware vSphere ESXi host** | Version 5.5, 6.0, 6.5, 6.7 or 7.0.-**vCenter Server permissions** | A read-only account for vCenter Server. +**vCenter Server permissions** | **VM discovery**: At least a read-only user<br/><br/> Data Center object -> Propagate to Child Object, role=Read-only.<br/><br/> **Replication**: Create a role (Azure Site Recovery) with the required permissions, and then assign the role to a VMware vSphere user or group<br/><br/> Data Center object -> Propagate to Child Object, role=Azure Site Recovery<br/><br/> Datastore -> Allocate space, browse datastore, low-level file operations, remove file, update virtual machine files<br/><br/> Network -> Network assign<br/><br/> Resource -> Assign VM to resource pool, migrate powered off VM, migrate powered on VM<br/><br/> Tasks -> Create task, update task<br/><br/> Virtual machine -> Configuration<br/><br/> Virtual machine -> Interact -> answer question, device connection, configure CD media, configure floppy media, power off, power on, VMware tools install<br/><br/> Virtual machine -> Inventory -> Create, register, unregister<br/><br/> Virtual machine -> Provisioning -> Allow virtual machine download, allow virtual machine files upload<br/><br/> Virtual machine -> Snapshots -> Remove snapshots.<br/><br/><br/>**Note**:<br/>User assigned at datacenter level, and has access to all the objects in the datacenter.<br/><br/> To restrict access, assign the **No access** role with the **Propagate to child** object, to the child objects (vSphere hosts, datastores, VMs, and networks). ### VM requirements (agent-based) |
mysql | Tutorial Deploy Wordpress On Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md | The server created has the below attributes: - Using public-access argument allow you to create a server with public access protected by firewall rules. By providing your IP address to add the firewall rule to allow access from your client machine. - Since the command is using Local context it will create the server in the resource group ```wordpress-project``` and in the region ```eastus```. -### Build your WordPress docker image --Download the [latest WordPress](https://wordpress.org/download/) version. Create new directory ```my-wordpress-app``` for your project and use this simple folder structure --```wordpress -ΓööΓöÇΓöÇΓöÇmy-wordpress-app - ΓööΓöÇΓöÇΓöÇpublic - Γö£ΓöÇΓöÇΓöÇwp-admin - Γöé Γö£ΓöÇΓöÇΓöÇcss - . . . . . . . - Γö£ΓöÇΓöÇΓöÇwp-content - Γöé Γö£ΓöÇΓöÇΓöÇplugins - . . . . . . . - ΓööΓöÇΓöÇΓöÇwp-includes - . . . . . . . - Γö£ΓöÇΓöÇΓöÇwp-config-sample.php - Γö£ΓöÇΓöÇΓöÇindex.php - . . . . . . . - ΓööΓöÇΓöÇΓöÇ Dockerfile +## Container definitions -``` --Rename ```wp-config-sample.php``` to ```wp-config.php``` and replace lines from beginingin of ```// ** MySQL settings - You can get this info from your web host ** //``` until the line ```define( 'DB_COLLATE', '' );``` with the code snippet below. The code below is reading the database host, username and password from the Kubernetes manifest file. --```php -//Using environment variables for DB connection information --// ** MySQL settings - You can get this info from your web host ** // -/** The name of the database for WordPress */ --$connectstr_dbhost = getenv('DATABASE_HOST'); -$connectstr_dbusername = getenv('DATABASE_USERNAME'); -$connectstr_dbpassword = getenv('DATABASE_PASSWORD'); -$connectst_dbname = getenv('DATABASE_NAME'); --/** MySQL database name */ -define('DB_NAME', $connectst_dbname); --/** MySQL database username */ -define('DB_USER', $connectstr_dbusername); --/** MySQL database password */ -define('DB_PASSWORD',$connectstr_dbpassword); --/** MySQL hostname */ -define('DB_HOST', $connectstr_dbhost); +In the following example, we're creating two containers, a Nginx web server and a PHP FastCGI processor, based on official Docker images `nginx` and `wordpress` ( `fpm` version with FastCGI support), published on Docker Hub. -/** Database Charset to use in creating database tables. */ -define('DB_CHARSET', 'utf8'); --/** The Database Collate type. Don't change this if in doubt. */ -define('DB_COLLATE', ''); ---/** SSL*/ -define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL); -``` --### Create a Dockerfile --Create a new Dockerfile and copy this code snippet. This Dockerfile in setting up Apache web server with PHP and enabling mysqli extension. --```docker -FROM php:7.2-apache -COPY public/ /var/www/html/ -RUN docker-php-ext-install mysqli -RUN docker-php-ext-enable mysqli -``` --### Build your docker image --Make sure you're in the directory ```my-wordpress-app``` in a terminal using the ```cd``` command. Run the following command to build the image: --``` bash --docker build --tag myblog:latest . --``` --Deploy your image to [Docker hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container registry](../../container-registry/container-registry-get-started-azure-cli.md). 
+Alternatively you can build custom docker image(s) and deploy image(s) into [Docker hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container registry](../../container-registry/container-registry-get-started-azure-cli.md). > [!IMPORTANT] > If you are using Azure container regdistry (ACR), then run the ```az aks update``` command to attach ACR account with the AKS cluster. Deploy your image to [Docker hub](https://docs.docker.com/get-started/part3/#cre > az aks update -n myAKSCluster -g wordpress-project --attach-acr <your-acr-name> > ``` + ## Create Kubernetes manifest file A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named `mywordpress.yaml` and copy in the following YAML definition. > [!IMPORTANT] >-> - Replace ```[DOCKER-HUB-USER/ACR ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]``` with your actual WordPress docker image name and tag, for example ```docker-hub-user/myblog:latest```. > - Update ```env``` section below with your ```SERVERNAME```, ```YOUR-DATABASE-USERNAME```, ```YOUR-DATABASE-PASSWORD``` of your MySQL flexible server. ```yaml apiVersion: apps/v1 kind: Deployment metadata:- name: wordpress-blog + name: wp-blog spec: replicas: 1 selector: matchLabels:- app: wordpress-blog + app: wp-blog template: metadata: labels:- app: wordpress-blog + app: wp-blog spec: containers:- - name: wordpress-blog - image: [DOCKER-HUB-USER-OR-ACR-ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG] + - name: wp-blog-nginx + image: nginx ports: - containerPort: 80+ volumeMounts: + - name: config + mountPath: /etc/nginx/conf.d + - name: wp-persistent-storage + mountPath: /var/www/html ++ - name: wp-blog-php + image: wordpress:fpm + ports: + - containerPort: 9000 + volumeMounts: + - name: wp-persistent-storage + mountPath: /var/www/html env:- - name: DATABASE_HOST - value: "SERVERNAME.mysql.database.azure.com" #Update here - - name: DATABASE_USERNAME - value: "YOUR-DATABASE-USERNAME" #Update here - - name: DATABASE_PASSWORD - value: "YOUR-DATABASE-PASSWORD" #Update here - - name: DATABASE_NAME - value: "flexibleserverdb" + - name: WORDPRESS_DB_HOST + value: "<<SERVERNAME.mysql.database.azure.com>>" #Update here + - name: WORDPRESS_DB_USER + value: "<<YOUR-DATABASE-USERNAME>>" #Update here + - name: WORDPRESS_DB_PASSWORD + value: "<<YOUR-DATABASE-PASSWORD>>" #Update here + - name: WORDPRESS_DB_NAME + value: "<<flexibleserverdb>>" + - name: WORDPRESS_CONFIG_EXTRA # enable SSL connection for MySQL + value: | + define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL); + volumes: + - name: config + configMap: + name: wp-nginx-config + items: + - key: config + path: site.conf ++ - name: wp-persistent-storage + persistentVolumeClaim: + claimName: wp-pv-claim affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: spec: - key: "app" operator: In values:- - wordpress-blog + - wp-blog topologyKey: "kubernetes.io/hostname" apiVersion: v1+kind: PersistentVolumeClaim +metadata: + name: wp-pv-claim + labels: + app: wp-blog +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 20Gi ++apiVersion: v1 kind: Service metadata:- name: php-svc + name: blog-nginx-service spec: type: LoadBalancer ports: - port: 80 selector:- app: wordpress-blog + app: wp-blog ++apiVersion: v1 +kind: ConfigMap +metadata: + name: wp-nginx-config +data: + config : | + server { + listen 80; + server_name localhost; + root /var/www/html/; ++ access_log /var/log/nginx/wp-blog-access.log; + 
error_log /var/log/nginx/wp-blog-error.log error; + index https://docsupdatetracker.net/index.html index.htm index.php; ++ + location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { + expires max; + index index.php https://docsupdatetracker.net/index.html index.htm; + try_files $uri =404; + } ++ location / { + index index.php https://docsupdatetracker.net/index.html index.htm; + + if (-f $request_filename) { + expires max; + break; + } + + if (!-e $request_filename) { + rewrite ^(.+)$ /index.php?q=$1 last; + } + } ++ location ~ \.php$ { + fastcgi_split_path_info ^(.+\.php)(/.+)$; + fastcgi_pass localhost:9000; + fastcgi_index index.php; + include fastcgi_params; + fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; + fastcgi_param SCRIPT_NAME $fastcgi_script_name; + fastcgi_param PATH_INFO $fastcgi_path_info; + } + } ``` ## Deploy WordPress to AKS cluster |
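To round out the manifest above, deploying it and watching for the public IP of the load-balanced service would typically look like the following; the file name and service name match the sketch in this section, and the external IP can take a few minutes to appear.

```bash
# Apply the manifest and wait for the LoadBalancer service to receive an external IP.
kubectl apply -f mywordpress.yaml
kubectl get service blog-nginx-service --watch
```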
partner-solutions | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md | This document contains information about troubleshooting your solutions that use * The EA subscription doesn't allow Marketplace purchases. - Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support). + Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support). ## Unable to create Datadog - An Azure Native ISV Service resource If the Datadog agent has been configured with an incorrect key, navigate to the ## Next steps -- Learn about [managing your instance](manage.md) of Datadog.+- Learn about [managing your instance](manage.md) of Datadog. |
partner-solutions | Dynatrace Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md | Use the Azure portal to find Azure Native Dynatrace Service application. - **Send Azure resource logs for all defined sources** - Azure resource logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a Key Vault is a data plane operation. Or, making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type. - - **Send Azure Active Directory logs** – Azure Active Directory logs allow you to route the audit, sign-in, and provisioning logs to Dynatrace. The details are listed in [Azure AD activity logs in Azure Monitor](/azure/active-directory/reports-monitoring/concept-activity-logs-azure-monitor). The global administrator or security administrator for your Azure Active Directory (AAD) tenant can enable AAD logs. + - **Send Azure Active Directory logs** – Azure Active Directory logs allow you to route the audit, sign-in, and provisioning logs to Dynatrace. The details are listed in [Azure AD activity logs in Azure Monitor](../../active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md). The global administrator or security administrator for your Azure Active Directory (AAD) tenant can enable AAD logs. 1. To send subscription level logs to Dynatrace, select **Send subscription activity logs**. If this option is left unchecked, none of the subscription level logs are sent to Dynatrace. -1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories). +1. To send Azure resource logs to Dynatrace, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). When the checkbox for Azure resource logs is selected, by default, logs are forwarded for all resources. To filter the set of Azure resources sending logs to Dynatrace, use inclusion and exclusion rules and set the Azure resource tags: Use the Azure portal to find Azure Native Dynatrace Service application. ## Next steps -- [Manage the Dynatrace resource](dynatrace-how-to-manage.md)+- [Manage the Dynatrace resource](dynatrace-how-to-manage.md) |
partner-solutions | New Relic Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-create.md | Use the Azure portal to find the Azure Native New Relic Service application: | Property | Description | |--|--| | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. You must have owner access.|- | **Resource group** |Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution.| + | **Resource group** |Specify whether you want to create a new resource group or use an existing one. A [resource group](../../azure-resource-manager/management/overview.md) is a container that holds related resources for an Azure solution.| | **Resource name** |Specify a name for the New Relic resource. This name will be the friendly name of the New Relic account.| | **Region** |Select the region where the New Relic resource on Azure and the New Relic account will be created.| Your next step is to configure metrics and logs on the **Logs** tab. When you're 1. To send subscription-level logs to New Relic, select **Subscription activity logs**. If you leave this option cleared, no subscription-level logs will be sent to New Relic. - These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). These logs also include updates on service-health events. + These logs provide insight into the operations on your resources at the [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). These logs also include updates on service-health events. Use the activity log to determine what, who, and when for any write operations (`PUT`, `POST`, `DELETE`). There's a single activity log for each Azure subscription. -1. To send Azure resource logs to New Relic, select **Azure resource logs** for all supported resource types. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories). +1. To send Azure resource logs to New Relic, select **Azure resource logs** for all supported resource types. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](../../azure-monitor/essentials/resource-logs-categories.md). - These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type. + These logs provide insight into operations that were taken on an Azure resource at the [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type. :::image type="content" source="media/new-relic-create/new-relic-metrics.png" alt-text="Screenshot of the tab for logs in a New Relic resource, with resource logs selected."::: You can also skip this step and go directly to the **Review and Create** tab. 
## Next steps -- [Manage the New Relic resource](new-relic-how-to-manage.md)+- [Manage the New Relic resource](new-relic-how-to-manage.md) |
partner-solutions | New Relic How To Configure Prereqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-configure-prereqs.md | This article describes the prerequisites that you must complete in your Azure su ## Access control -To set up New Relic on Azure, you must have owner access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before you start the setup. +To set up New Relic on Azure, you must have owner access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before you start the setup. ## Resource provider registration To set up New Relic on Azure, you need to register the `NewRelic.Observability` resource provider in the specific Azure subscription: -- To register the resource provider in the Azure portal, follow the steps in [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).+- To register the resource provider in the Azure portal, follow the steps in [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). - To register the resource provider in the Azure CLI, use this command: To set up New Relic on Azure, you need to register the `NewRelic.Observability` ## Next steps - [Quickstart: Get started with New Relic](new-relic-create.md)-- [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md)-+- [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md) |
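The CLI command referenced above is quoted in full in the New Relic troubleshooting article later in this table; for convenience, it looks like this (substitute your own subscription ID).

```bash
az provider register --namespace NewRelic.Observability --subscription <subscription-id>
```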
partner-solutions | New Relic How To Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-manage.md | You can filter the list of resources by resource type, resource group name, regi The column **Logs to New Relic** indicates whether the resource is sending logs to New Relic. If the resource isn't sending logs, the reasons could be: -- **Resource does not support sending logs**: Only resource types with monitoring log categories can be configured to send logs. See [Supported categories](/azure/azure-monitor/essentials/resource-logs-categories).+- **Resource does not support sending logs**: Only resource types with monitoring log categories can be configured to send logs. See [Supported categories](../../azure-monitor/essentials/resource-logs-categories.md). - **Limit of five diagnostic settings reached**: Each Azure resource can have a maximum of five diagnostic settings. For more information, see [Diagnostic settings](/cli/azure/monitor/diagnostic-settings). - **Error**: The resource is configured to send logs to New Relic but is blocked by an error. - **Logs not configured**: Only Azure resources that have the appropriate resource tags are configured to send logs to New Relic. If you map more than one New Relic resource to the New Relic account by using th ## Next steps - [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md)-- [Quickstart: Get started with New Relic](new-relic-create.md)+- [Quickstart: Get started with New Relic](new-relic-create.md) |
partner-solutions | New Relic Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-troubleshoot.md | Try the troubleshooting information in this article first. If that doesn't work, ### Purchase fails -A purchase can fail because a valid credit card isn't connected to the Azure subscription, or because a payment method isn't associated with the subscription. To solve this problem, use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](/azure/cost-management-billing/manage/change-credit-card). +A purchase can fail because a valid credit card isn't connected to the Azure subscription, or because a payment method isn't associated with the subscription. To solve this problem, use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](../../cost-management-billing/manage/change-credit-card.md). -A purchase can also fail because an Enterprise Agreement (EA) subscription doesn't allow Azure Marketplace purchases. Try to use a different subscription. Or, check if your EA subscription is enabled for Azure Marketplace purchases. For more information, see [Enabling Azure Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases). +A purchase can also fail because an Enterprise Agreement (EA) subscription doesn't allow Azure Marketplace purchases. Try to use a different subscription. Or, check if your EA subscription is enabled for Azure Marketplace purchases. For more information, see [Enabling Azure Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). ### You can't create a New Relic resource To set up Azure Native New Relic Service, you must have owner access on the Azure subscription. Ensure that you have the appropriate access before you start the setup. -To find the New Relic offering on Azure and set up the service, you must first register the `NewRelic.Observability` resource provider in your Azure subscription. To register the resource provider by using the Azure portal, follow the guidance in [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types). +To find the New Relic offering on Azure and set up the service, you must first register the `NewRelic.Observability` resource provider in your Azure subscription. To register the resource provider by using the Azure portal, follow the guidance in [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). To register the resource provider from a command line, enter `az provider register --namespace NewRelic.Observability --subscription <subscription-id>`. ### Logs aren't being sent to New Relic -Only resource types in [supported categories](/azure/azure-monitor/essentials/resource-logs-categories) send logs to New Relic through the integration. To check whether the resource is set up to send logs to New Relic, go to the [Azure diagnostic settings](/azure/azure-monitor/platform/diagnostic-settings) for that resource. Then, verify that there's a New Relic diagnostic setting. 
+Only resource types in [supported categories](../../azure-monitor/essentials/resource-logs-categories.md) send logs to New Relic through the integration. To check whether the resource is set up to send logs to New Relic, go to the [Azure diagnostic settings](/azure/azure-monitor/platform/diagnostic-settings) for that resource. Then, verify that there's a New Relic diagnostic setting. ### You can't install or uninstall an extension on a virtual machine Only virtual machines that currently have the New Relic agent installed should b Resource monitoring in New Relic is enabled through the *ingest API key*, which you set up at the time of resource creation. Revoking the ingest API key from the New Relic portal disrupts monitoring of logs and metrics for all resources, including virtual machines and app services. You shouldn't revoke the ingest API key. If the API key is already revoked, contact New Relic support. -If your Azure subscription is suspended or deleted because of payment-related issues, resource monitoring in New Relic automatically stops. Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](/azure/cost-management-billing/manage/change-credit-card). +If your Azure subscription is suspended or deleted because of payment-related issues, resource monitoring in New Relic automatically stops. Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](../../cost-management-billing/manage/change-credit-card.md). New Relic manages the APIs for creating and managing resources, and for the storage and processing of customer telemetry data. The New Relic APIs might be on or outside Azure. If your Azure subscription and resource are working correctly but the New Relic portal shows problems with monitoring data, contact New Relic support. <!-- need some clarification here --> ## Next steps -- [Manage Azure Native New Relic Service](new-relic-how-to-manage.md)+- [Manage Azure Native New Relic Service](new-relic-how-to-manage.md) |
partner-solutions | Qumulo Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-create.md | In this quickstart, you create an instance of Azure Native Qumulo Scalable File For more information about permissions and how to check access, see [Troubleshoot Azure Native Qumulo Service](qumulo-troubleshoot.md). -1. Create a [delegated subnet](/azure/virtual-network/subnet-delegation-overview) to the Qumulo service: +1. Create a [delegated subnet](../../virtual-network/subnet-delegation-overview.md) to the Qumulo service: 1. Identify the region where you want to subscribe to the Qumulo service. 1. Create a new virtual network, or select an existing virtual network in the same region where you want to create the Qumulo service. In this quickstart, you create an instance of Azure Native Qumulo Scalable File | **Property** | **Description** | |--|--| |**Subscription** | From the dropdown list, select the Azure subscription where you have **Owner** access. |- |**Resource group** | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, seeΓÇ»[Azure resource group overview](/azure/azure-resource-manager/management/overview). | + |**Resource group** | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, seeΓÇ»[Azure resource group overview](../../azure-resource-manager/management/overview.md). | |**Resource name** | Enter the name of the Qumulo file system. The resource name should have fewer than 15 characters, and it can contain only alphanumeric characters and the hyphen symbol.| |**Region** | Select one of the available regions from the dropdown list. | |**Availability Zone** | Select an availability zone to pin the Qumulo file system resources to that zone in a region. | In this quickstart, you create an instance of Azure Native Qumulo Scalable File Only virtual networks in the specified region with subnets delegated to `Qumulo.Storage/fileSystems` appear on this page. If an expected virtual network is not listed, verify that it's in the chosen region and that the virtual network includes a subnet delegated to Qumulo. -1. Select **Review + Create** to create the resource. +1. Select **Review + Create** to create the resource. |
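A minimal Azure CLI sketch of the delegated-subnet prerequisite described above; the virtual network name, subnet name, and address prefix are placeholders.

```bash
# Create a subnet delegated to the Qumulo service (placeholder names and address space).
az network vnet subnet create \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name qumulo-subnet \
  --address-prefixes 10.0.1.0/24 \
  --delegations Qumulo.Storage/fileSystems
```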
partner-solutions | Qumulo Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-troubleshoot.md | Try the troubleshooting information in this article first. If that doesn't work, A purchase can fail because a valid credit card is not connected to the Azure subscription, or because a payment method is not associated with the subscription. -Try using a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Update the credit and payment method](/azure/cost-management-billing/manage/change-credit-card). +Try using a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Update the credit and payment method](../../cost-management-billing/manage/change-credit-card.md). ## You got a purchase error related to an Enterprise Agreement Some Microsoft Enterprise Agreement (EA) subscriptions don't allow Azure Marketplace purchases. -Try using a different subscription, or [enable your subscription for Azure Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases). +Try using a different subscription, or [enable your subscription for Azure Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). ## You can't create a resource For successful creation of a Qumulo service, custom role-based access control (R ## Next steps -- [Manage Azure Native Qumulo Scalable File Service Preview](qumulo-how-to-manage.md)+- [Manage Azure Native Qumulo Scalable File Service Preview](qumulo-how-to-manage.md) |
postgresql | Concepts Major Version Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md | + + Title: Major Version Upgrade - Azure Database for PostgreSQL - Flexible Server Preview +description: Learn about the concepts of in-place major version upgrade with Azure Database for PostgreSQL - Flexible Server +++ Last updated : 02/08/2023+++++++# Major Version Upgrade with PostgreSQL Flexible Server Preview ++++## Overview +Azure Database for PostgreSQL Flexible server supports PostgreSQL versions 11, 12, 13, and 14. The Postgres community releases a new major version containing new features about once a year. Additionally, each major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL Flexible service periodically updates the minor versions during the customer’s maintenance window. Major version upgrades are more complicated than minor version upgrades as they can include internal changes and new features that may not be backward-compatible with existing applications. ++Azure Database for PostgreSQL Flexible Server has now introduced an in-place major version upgrade feature that performs an in-place upgrade of the server with just a click. In-place major version upgrade simplifies the upgrade process, minimizing the disruption to users and applications accessing the server. In-place upgrades are a simpler way to upgrade the major version of the instance, as they retain the server name and other settings of the current server after the upgrade, and don't require data migration or changes to the application connection strings. In-place upgrades are faster and involve shorter downtime than data migration. +++## Process ++Here are some of the important considerations with in-place major version upgrade. ++- During the in-place major version upgrade process, Flexible Server runs a pre-check procedure to identify any potential issues that might cause the upgrade to fail. If the pre-check finds any incompatibilities, it creates a log event showing that the upgrade pre-check failed, along with an error message. ++- If the pre-check is successful, then Flexible Server stops the service and takes an implicit backup just before starting the upgrade. This backup can be used to restore the database instance to its previous version if there's an upgrade error. ++- Flexible Server uses the **pg_upgrade** utility to perform in-place major version upgrades and provides the flexibility to skip versions and upgrade directly to higher versions. ++- During an in-place major version upgrade of a High Availability (HA) enabled server, the service disables HA, performs the upgrade on the primary server, and then re-enables HA after the upgrade is complete. ++- Most extensions are automatically upgraded to higher versions during an in-place major version upgrade, with some exceptions. Refer to the **Limitations** section for more details. ++- The in-place major version upgrade process for Flexible Server automatically deploys the latest supported minor version. ++- The in-place major version upgrade process is an offline operation, and it involves a short downtime. ++- Long-running transactions or high workload before the upgrade might increase the time taken to shut down the database and increase upgrade time. 
++- If an in-place major version upgrade fails, the service restores the server to its previous state using a backup taken as part of step 2. ++- Once the in-place major version upgrade is successful, there are no automated ways to revert to the earlier version. However, you can perform a Point-In-Time Recovery (PITR) to a time prior to the upgrade to restore the previous version of the database instance. ++## Limitations ++During preview, if the in-place major version upgrade pre-check operation fails, it aborts with a detailed error message for any of the below limitations. ++- In-place major version upgrade currently doesn't support read replicas, so if you have a read replica enabled server, you need to delete the replica before performing the upgrade on the primary server. After the upgrade, you can recreate the replica. ++- In-place major version upgrade doesn't support certain extensions and there are some limitations to upgrading certain extensions. The extensions **Timescaledb**, **pgaudit**, **dblink**, and **postgres_fdw** are unsupported for all PostgreSQL versions during preview. ++- Upgrading the **PostGIS** extension is currently unsupported from PostgreSQL 12, while upgrading the **orafce** extension is unsupported from PostgreSQL 11. All other versions of these extensions are supported for in-place major version upgrade. ++- During preview, in-place major version upgrade is currently available in the following regions: + Australia East / Australia Southeast / Canada East / China North 3 / China East 3 / East Asia / East US / France South / India Central / India South / Japan East / Jio India West / Korea Central / Norway East / North Europe / South Central US / Sweden Central / Switzerland North / Switzerland West / UAE North / West Central US / West US / West US 3 / Qatar Central ++- Servers configured with logical replication slots aren't supported. ++- Major version upgrade (MVU) is currently not supported for PgBouncer enabled servers. +++## How to perform an in-place major version upgrade ++It's recommended to perform a dry run of the in-place major version upgrade in a non-production environment before upgrading the production server. A dry run allows you to identify any application incompatibilities and validate that the upgrade completes successfully before upgrading the production environment. You can perform a Point-In-Time Recovery (PITR) of your production server and test the upgrade in the non-production environment. Addressing these issues before the production upgrade minimizes downtime and ensures a smooth upgrade process. ++**Steps** ++1. You can perform an in-place major version upgrade using the Azure portal or the CLI (command-line interface). Click the **Upgrade** button in the Overview blade. +++++ :::image type="content" source="media/concepts-major-version-upgrade/upgrade-tab.png" alt-text="Diagram of Upgrade tab to perform in-place major version upgrade."::: +++++2. You'll see an option to select the major version of your choice, and you can skip versions to upgrade directly to a higher version. Choose the version and click **Upgrade**. +++++++++3. During the upgrade, users have to wait for the process to complete. You can resume accessing the server once it's back online. +++++++++++4. Once the upgrade is successful, you can expand the **Deployment details** tab and click **Operation details** to see more information about the upgrade process, such as duration and provisioning state. ++++++++++++5. You can click on the **Go to resource** tab to validate your upgrade. 
You'll notice that the server name remained unchanged, and that the PostgreSQL version was upgraded to the desired higher version with the latest minor version. ++++++++## Post Upgrade ++Run the **ANALYZE** operation to refresh the pg_statistic table. You should do this for every database on all your Flexible Servers. Optimizer statistics aren't transferred during a major version upgrade, so you need to regenerate all statistics to avoid performance issues. Run the command without any parameters to generate statistics for all regular tables in the current database, as follows: +++``` +ANALYZE VERBOSE +``` +> [!NOTE] +> +> The VERBOSE flag is optional, but using it shows you the progress. ++## Next steps ++- Learn about [business continuity](./concepts-business-continuity.md). +- Learn about [zone-redundant high availability](./concepts-high-availability.md). +- Learn about [backup and recovery](./concepts-backup-restore.md). + |
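For the CLI path mentioned in the steps above, a hedged sketch is shown below. It assumes the `az postgres flexible-server upgrade` command is available in your Azure CLI version; the resource group, server name, and target version are placeholders.

```bash
# Upgrade a flexible server in place to a higher major version (placeholder values).
az postgres flexible-server upgrade \
  --resource-group <resource-group> \
  --name <server-name> \
  --version 14
```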
private-5g-core | Azure Stack Edge Disconnects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-disconnects.md | Once ASE connectivity resumes, several features will resume: - **Monitoring** tabs will show metrics for sites after 10 minutes, but won't populate stats for the disconnected duration. - **Kubernetes Arc Insights** will show new stats after 10 minutes, but won't populate stats for the disconnected duration. - **Resource Health** views will be viewable immediately.-- [Workbooks](/azure/update-center/workbooks) for the ASE will be viewable immediately and will populate for the disconnected duration.+- [Workbooks](../update-center/workbooks.md) for the ASE will |