Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Functions For Customizing Application Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md | The NumFromDate function converts a DateTime value to Active Directory format th | Name | Required/ Repeating | Type | Notes | | | | | |-| **value** |Required | String | Date time string in the supported format. For supported formats, see https://msdn.microsoft.com/library/8kb3ddd4%28v=vs.110%29.aspx. | +| **value** |Required | String | Date time string in [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format. If the date variable is in a different format, use [FormatDateTime](#formatdatetime) function to convert the date to ISO 8601 format. | **Example:** * Workday example |
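The change above swaps the accepted input format to ISO 8601; the conversion NumFromDate performs can be sketched as follows. This is a minimal illustration (the function name and exact rounding here are assumptions, not the service's implementation): an ISO 8601 date-time string becomes the Active Directory/Windows FILETIME value, i.e. the number of 100-nanosecond intervals since 1601-01-01T00:00:00Z.

```javascript
// Sketch of the ISO 8601 -> Active Directory FILETIME conversion.
// Assumed semantics: ticks are 100-ns intervals since the 1601 epoch.
function isoToAdFileTime(value) {
  const EPOCH_1601_MS = Date.UTC(1601, 0, 1); // 1601 epoch in Unix ms (negative)
  const ms = Date.parse(value) - EPOCH_1601_MS;
  return BigInt(ms) * 10000n; // 10,000 ticks of 100 ns per millisecond
}

console.log(isoToAdFileTime("1601-01-01T00:00:01Z").toString()); // "10000000"
```

Dates in other formats would first go through FormatDateTime, as the updated parameter note describes.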
active-directory | Application Proxy Azure Front Door | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-azure-front-door.md | + + Title: Using Azure Front Door to provide geo-acceleration +description: How to optimize performance for global connectivity scenarios using Azure Front Door (for Geo-Acceleration) with Azure Active Directory Application Proxy. ++++++ Last updated : 08/22/2022+++++# Using Azure Front Door to achieve geo-acceleration ++This article explains how to configure Azure Active Directory (Azure AD) Application Proxy to work with Azure Front Door (AFD) to reduce latency and achieve better performance. + +## What is Azure Front Door? ++Azure Front Door helps deliver low-latency, high-throughput content at scale from the cloud or on-premises infrastructure to users anywhere. Accelerate static and dynamic content delivery with a unified platform built on the massively scalable Microsoft private global network. For more information about Azure Front Door, see [What is Azure Front Door?][front-door-overview]. ++## Deployment steps ++This article guides you through the steps to securely expose a web application on the Internet, by integrating the Azure AD Application Proxy with Azure Front Door. In this guide we'll be using the Azure portal. The reference architecture for this deployment is represented below. + ++## Prerequisites ++- A Front Door Service – Standard or Classic tier +- Apps that exist in a single region. +- A custom domain to use for the application. +- For licensing information, Application Proxy is available through an Azure AD Premium subscription. Refer here for a full listing of licensing options and features: [Azure Active Directory Pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing) ++### Application Proxy Configuration ++Follow these steps to configure Application Proxy for Front Door: +1. 
Install a connector for the location that your app instances will be in (for example, US West). For the connector group, assign the connector to the right region (for example, North America). +2. Set up your app instance with Application Proxy as follows: + - Set the Internal URL to the address users access the app from the internal network, for example contoso.org + - Set the External URL to the domain address you want the users to access the app from. For this you must configure a custom domain for your application here, for example, contoso.org. Reference: [Custom domains in Azure Active Directory Application Proxy][appproxy-custom-domain] + - Assign the application to the appropriate connector group (for example: North America) + - Note down the URL generated by Application Proxy to access the application. For example, contoso.msappproxy.net + - For the application, configure a CNAME entry in your DNS provider which points the external URL to the Front Door's endpoint, for example 'contoso.org' to contoso.msappproxy.net +3. In the Front Door service, utilize the URL generated for the app by Application Proxy as a backend for the backend pool. For example, contoso.msappproxy.net ++#### Sample Application Proxy Configuration +The following table shows a sample Application Proxy configuration. The sample scenario uses the sample application domain www.contoso.org as the External URL. ++| | Configuration | Additional Information | +|- | -- | - | +| **Internal URL** | nam.contoso.com | | +| **External URL** | contoso.org | Configure a custom domain for users to access the app from.| +| **Connector group** | North America | Select the connector group in the geo closest to where the app instance will be in for optimized performance.| ++### Front Door Configuration ++Azure Front Door is offered in different tiers including Standard, Premium, and Classic. Select a tier based on your preference. 
For more information on tier comparison, refer here: [Azure Front Door tier comparison][front-door-tier] ++For Front Door Standard Tier +The configuration steps that follow refer to the following definitions: +- Endpoint name: A globally unique name for the endpoint. You can onboard custom domains as well. For example, front door endpoint name: contoso-nam that will generate the Endpoint host name contoso-nam.azurefd.net and utilize custom domain host name: contoso.org +- Origin: Origins are your application servers. Front Door will route your client requests to origins, based on the type, ports, priority, and weight you specify here +- Origin Type: The type of resource you want to add. Front Door supports auto-discovery of your application backends from App Service, Cloud Service, or Storage. If you want a different resource in Azure or even a non-Azure backend, select Custom host. For example, select Custom host to use an Application Proxy service as the backend +- Origin host name: This represents the backend origin host name. For example, contoso.msappproxy.net +- Origin host header: This represents the host header value sent to the backend for each request. For example, contoso.org. For more information refer here: [Origins and origin groups – Azure Front Door][front-door-origin] ++Follow these steps to configure the Front Door Service (Standard): +1. Create a Front Door (Standard) with the configuration below: + - Add an Endpoint name for generating the Front Door's default domain i.e. azurefd.net. For example, contoso-nam, which generates the Endpoint hostname contoso-nam.azurefd.net + - Add an Origin Type for the type of backend resource. For example, Custom here for the Application Proxy resource + - Add an Origin host name to represent the backend host name. For example, contoso.msappproxy.net + - Optional: Enable Caching for the routing rule for Front Door to cache your static content. +2. 
Verify that the deployment is complete and the Front Door Service is ready +3. To give your Front Door service a user-friendly domain host name URL, create a CNAME record with your DNS provider for your Application Proxy External URL that points to Front Door's domain host name (generated by the Front Door service). For example, contoso.org points to contoso.azurefd.net. Reference: [How to add a custom domain - Azure Front Door][front-door-custom-domain] +4. As per the reference, on the Front Door Service Dashboard navigate to Front Door Manager and add a Domain with the Custom Hostname. For example, contoso.org +5. Navigate to the Origin groups in the Front Door Service Dashboard, select the origin name and validate that the Origin host header matches the domain of the backend. For example, here the Origin host header should be: contoso.org ++| | Configuration | Additional Information | +|- | -- | - | +| **Endpoint Name** | • Endpoint name: contoso-nam <br /> • Front door generated Hostname: <br /> contoso-nam.azurefd.net <br /> • Custom Domain Hostname: contoso.org| A custom domain host name must be utilized here.| +| **Origin hostname** | contoso.msappproxy.net | The URL generated for the app by Application Proxy must be utilized here.| +| **Connector group** | North America | Select the connector group in the geo closest to where the app instance will be in for optimized performance.| +++++## Next steps ++To prevent false positives, learn how to [Customize Web Application Firewall rules](../../web-application-firewall/ag/application-gateway-customize-waf-rules-portal.md), configure [Web Application Firewall exclusion lists](../../web-application-firewall/ag/application-gateway-waf-configuration.md?tabs=portal), or [Web Application Firewall custom rules](../../web-application-firewall/ag/create-custom-waf-rules.md). 
++[front-door-overview]: ../../frontdoor/front-door-overview.md +[front-door-origin]: ../../frontdoor/origin.md?pivots=front-door-standard-premium#origin-host-header +[front-door-tier]: ../../frontdoor/standard-premium/tier-comparison.md +[front-door-custom-domain]: ../../frontdoor/standard-premium/how-to-add-custom-domain.md +[appproxy-custom-domain]: ./application-proxy-configure-custom-domain.md +[private-dns]: ../../dns/private-dns-getstarted-portal.md +[waf-logs]: ../../application-gateway/application-gateway-diagnostics.md#firewall-log |
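The DNS change described in the steps above can be sketched as a zone-file entry. The names are the article's samples; the `www` host and TTL are illustrative additions, since a bare zone apex cannot carry a CNAME record:

```
; Illustrative BIND-style record (sample names from this article).
; The custom domain points at the Front Door endpoint, which in turn
; fronts the App Proxy URL (contoso.msappproxy.net) as its origin.
www.contoso.org.    3600    IN    CNAME    contoso-nam.azurefd.net.
```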
active-directory | Concepts Azure Multi Factor Authentication Prompts Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md | When a user selects **Yes** on the *Stay signed in?* option during sign-in, a pe If you have an Azure AD Premium 1 license, we recommend using Conditional Access policy for *Persistent browser session*. This policy overwrites the *Stay signed in?* setting and provides an improved user experience. If you don't have an Azure AD Premium 1 license, we recommend enabling the stay signed in setting for your users. -For more information on configuring the option to let users remain signed-in, see [Customize your Azure AD sign-in page](../fundamentals/customize-branding.md#learn-about-the-stay-signed-in-prompt). +For more information on configuring the option to let users remain signed-in, see [Customize your Azure AD sign-in page](../fundamentals/active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt). ### Remember Multi-Factor Authentication |
active-directory | How To Mfa Server Migration Utility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md | Admins can use the MFA Server Migration Utility to target single users or groups - The MFA Server Migration Utility requires a new build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically. - The MFA Server Migration Utility copies the data from the database file onto the user objects in Azure AD. During migration, users can be targeted for Azure AD MFA for testing purposes using [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). Staged migration lets you test without making any changes to your domain federation settings. Once migrations are complete, you must finalize your migration by making changes to your domain federation settings. - AD FS running Windows Server 2016 or higher is required to provide MFA authentication on any AD FS relying parties, not including Azure AD and Office 365. -- Review your AD FS claims rules and make sure none requires MFA to be performed on-premises as part of the authentication process.+- Review your AD FS access control policies and make sure none requires MFA to be performed on-premises as part of the authentication process. - Staged rollout can target a maximum of 500,000 users (10 groups containing a maximum of 50,000 users each). ## Migration guide Set the **Staged Rollout for Azure MFA** to **Off**. 
Users will once again be re ## Next steps - [Overview of how to migrate from MFA Server to Azure AD Multi-Factor Authentication](how-to-migrate-mfa-server-to-azure-mfa.md)-- [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md)+- [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) |
active-directory | Plan Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md | Microsoft provides [security defaults](../fundamentals/concept-fundamentals-secu ### Prerequisites * A working Azure AD tenant with Azure AD Premium or trial license enabled. If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-* An account with Conditional Access Administrator privileges. +* An account with privileges to create Conditional Access policies. * A test user (non-administrator) that allows you to verify policies work as expected before you impact real users. If you need to create a user, see [Quickstart: Add new users to Azure Active Directory](../fundamentals/add-users-azure-active-directory.md). * A group that the non-administrator user is a member of. If you need to create a group, see [Create a group and add members in Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md). +#### Permissions ++Conditional Access policies can be created or modified by anyone assigned the following roles: ++- Conditional Access Administrator +- Security Administrator +- Global Administrator ++Conditional Access policies can be read by anyone assigned the following roles: ++- Security Reader +- Global Reader + ## Understand Conditional Access policy components Policies answer questions about who should access your resources, what resources they should access, and under what conditions. Policies can be designed to grant access, limit access with session controls, or to block access. You [build a Conditional Access policy](concept-conditional-access-policies.md) by defining the if-then statements: **If an assignment is met, then apply the access controls**. |
active-directory | Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md | Create a location based Conditional Access policy that applies to service princi ### Create a risk-based Conditional Access policy -Create a location based Conditional Access policy that applies to service principals. +Create a risk-based Conditional Access policy that applies to service principals. :::image type="content" source="media/workload-identity/conditional-access-workload-identity-risk-policy.png" alt-text="Creating a Conditional Access policy with a workload identity and risk as a condition." lightbox="media/workload-identity/conditional-access-workload-identity-risk-policy.png"::: |
active-directory | Msal Logging Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-js.md | The loggerOptions object has the following properties: - `piiLoggingEnabled` (optional): if set to true, logs personal and organizational data. By default this is false so that your application doesn't log personal data. Personal data logs are never written to default outputs like Console, Logcat, or NSLog. ```javascript+import msal from "@azure/msal-browser" + const msalConfig = { auth: { clientId: "enter_client_id_here", const msalConfig = { }, system: { loggerOptions: {- loggerCallback: (level: LogLevel, message: string, containsPii: boolean): void => { + logLevel: msal.LogLevel.Verbose, + loggerCallback: (level, message, containsPii) => { if (containsPii) { return; } switch (level) {- case LogLevel.Error: + case msal.LogLevel.Error: console.error(message); return;- case LogLevel.Info: + case msal.LogLevel.Info: console.info(message); return;- case LogLevel.Verbose: + case msal.LogLevel.Verbose: console.debug(message); return;- case LogLevel.Warning: + case msal.LogLevel.Warning: console.warn(message); return; } }, piiLoggingEnabled: false },- windowHashTimeout: 60000, - iframeHashTimeout: 6000, - loadFrameTimeout: 0, - asyncPopups: false - }; -} --const msalInstance = new PublicClientApplication(msalConfig); + }, +}; ``` ## Next steps |
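The filtering behavior of the loggerCallback shown in the diff above can be exercised on its own. In this sketch, `LogLevel` is stubbed locally (the numeric values are illustrative) rather than imported from `@azure/msal-browser`, and the `sink` parameter is an addition for testability — the real callback writes straight to the console:

```javascript
// Local stand-in for msal's LogLevel enum (values are illustrative).
const LogLevel = { Error: 0, Warning: 1, Info: 2, Verbose: 3 };

// Same shape as the loggerCallback in the configuration above.
function loggerCallback(level, message, containsPii, sink = console) {
  if (containsPii) return; // personal data is never forwarded
  switch (level) {
    case LogLevel.Error:   sink.error(message); return;
    case LogLevel.Info:    sink.info(message);  return;
    case LogLevel.Verbose: sink.debug(message); return;
    case LogLevel.Warning: sink.warn(message);  return;
  }
}

loggerCallback(LogLevel.Info, "upn: someone@contoso.org", true);   // dropped (PII)
loggerCallback(LogLevel.Error, "token acquisition failed", false); // logged
```

Because `piiLoggingEnabled` defaults to false, the early return on `containsPii` is the path that keeps personal data out of default outputs.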
active-directory | Msal Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md | If any of your applications use the Azure Active Directory Authentication Librar ## Why switch to MSAL? +To understand 'Why MSAL?', it's important to first understand the differences between Microsoft identity platform (v2.0) and Azure Active Directory (v1.0) endpoints. The v1.0 endpoint is used by Azure AD Authentication Library (ADAL) while the v2.0 endpoint is used by Microsoft Authentication Library (MSAL). If you've developed apps against the v1.0 endpoint in the past, you're likely using ADAL. Because the v2.0 endpoint changed significantly, the new library (MSAL) was built entirely for the new endpoint. ++The following diagram shows the v2.0 vs v1.0 endpoint experience at a high level, including the app registration experience, SDKs, endpoints, and supported identities. ++![Diagram that shows the v1.0 versus the v2.0 architecture.](../azuread-dev/media/about-microsoft-identity-platform/about-microsoft-identity-platform.svg) ++MSAL leverages all the [benefits of Microsoft identity platform (v2.0) endpoint](../azuread-dev/azure-ad-endpoint-comparison.md). ++MSAL is designed to enable a secure solution without developers having to worry about the implementation details. It simplifies acquiring, managing, caching, and refreshing tokens, and uses best practices for resilience. We recommend you use MSAL to [increase the resilience of authentication and authorization in client applications that you develop](../fundamentals/resilience-client-app.md?tabs=csharp#use-the-microsoft-authentication-library-msal). 
+ MSAL provides multiple benefits over ADAL, including the following features: |Features|MSAL|ADAL| MSAL provides multiple benefits over ADAL, including the following features: | Proactive token renewal |![Proactive token renewal - MSAL provides the feature][y]|![Proactive token renewal - ADAL doesn't provide the feature][n]| | Throttling |![Throttling - MSAL provides the feature][y]|![Throttling - ADAL doesn't provide the feature][n]| +## Additional capabilities of MSAL over ADAL +- Auth broker support – Device-based Conditional Access policy +- Proof of possession tokens +- Azure AD certificate-based authentication (CBA) on mobile +- System browsers on mobile devices +- Where ADAL had only the authentication context class, MSAL exposes the notion of a collection of client apps (public client and confidential client). + ## AD FS support in MSAL.NET You can use MSAL.NET, MSAL Java, and MSAL Python to get tokens from Active Directory Federation Services (AD FS) 2019 or later. Earlier versions of AD FS, including AD FS 2016, are unsupported by MSAL. After identifying your apps that use ADAL, migrate them to MSAL depending on you [!INCLUDE [application type](includes/adal-msal-migration.md)] +MSAL supports a wide range of application types and scenarios. Refer to [Microsoft Authentication Library support for several application types](reference-v2-libraries.md#single-page-application-spa). ++ADAL-to-MSAL migration guides for the different platforms are available at the following links: 
+- [Migrate to MSAL iOS and MacOS](migrate-objc-adal-msal.md) +- [Migrate to MSAL Java](migrate-adal-msal-java.md) +- [Migrate to MSAL .Net](msal-net-migration.md) +- [Migrate to MSAL Node](msal-node-migration.md) +- [Migrate to MSAL Python](migrate-python-adal-msal.md) + ## Migration help If you have questions about migrating your app from ADAL to MSAL, here are some options: For more information about MSAL, including usage information and which libraries ![X indicating no.][n] | ![Green check mark.][y] | ![Green check mark.][y] | -- | --> [y]: media/common/yes.png-[n]: media/common/no.png +[n]: media/common/no.png |
active-directory | Tutorial V2 Javascript Spa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md | sampleApp/ In the next steps, you'll create a new folder for the JavaScript SPA and set up the user interface (UI). > [!TIP]-> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](/articles/active-directory/develop/application-model.md). +> When you set up an Azure Active Directory (Azure AD) account, you create a tenant. This is a digital representation of your organization. It's primarily associated with a domain, like Microsoft.com. If you want to learn how applications can work with multiple tenants, refer to the [application model](/azure/active-directory/develop/application-model). ## Create the SPA UI |
active-directory | Tutorial V2 Shared Device Mode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-shared-device-mode.md | private void loadAccount() } } @Override- public void on AccountChanged(@Nullable IAccount priorAccount, @Nullable Iaccount currentAccount) + public void onAccountChanged(@Nullable IAccount priorAccount, @Nullable Iaccount currentAccount) { if (currentAccount == null) { |
active-directory | Enterprise State Roaming Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md | -When you enable Enterprise State Roaming, your organization is automatically granted a free, limited-use license for Azure Rights Management protection from Azure Information Protection. This free subscription is limited to encrypting and decrypting enterprise settings and application data synced by Enterprise State Roaming. You must have [a paid subscription](https://azure.microsoft.com/services/information-protection/) to use the full capabilities of the Azure Rights Management service. - > [!NOTE] > This article applies to the Microsoft Edge Legacy HTML-based browser launched with Windows 10 in July 2015. The article does not apply to the new Microsoft Edge Chromium-based browser released on January 15, 2020. For more information on the Sync behavior for the new Microsoft Edge, see the article [Microsoft Edge Sync](/deployedge/microsoft-edge-enterprise-sync). |
active-directory | Groups Create Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-create-rule.md | The following status messages can be shown for **Dynamic rule processing** statu - **Processing error**: Processing couldn't be completed because of an error evaluating the membership rule. - **Update paused**: Dynamic membership rule updates have been paused by the administrator. MembershipRuleProcessingState is set to "Paused". +>[!NOTE] +>On this screen, you may now also choose to **Pause processing**. Previously, this option was only available through the modification of the membershipRuleProcessingState property. Global admins, group admins, user admins, and Intune admins can manage this setting and can pause and resume dynamic group processing. Group owners without the correct roles do not have the rights needed to edit this setting. + The following status messages can be shown for **Last membership change** status: - <**Date and time**>: The last time the membership was updated. |
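The note above mentions that pausing was previously done by modifying the membershipRuleProcessingState property. That programmatic path goes through Microsoft Graph; the sketch below only builds the PATCH request (the group id and token are placeholders, and nothing is sent):

```javascript
// Hedged sketch: pause dynamic membership processing for a group by
// setting membershipRuleProcessingState to "Paused" via Microsoft Graph.
function buildPauseRequest(groupId, accessToken) {
  return {
    method: "PATCH",
    url: `https://graph.microsoft.com/v1.0/groups/${groupId}`,
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    // "On" resumes processing; "Paused" suspends it.
    body: JSON.stringify({ membershipRuleProcessingState: "Paused" }),
  };
}

const req = buildPauseRequest("00000000-0000-0000-0000-000000000000", "<token>");
console.log(req.method, req.url);
```

Sending the request requires a caller holding one of the admin roles listed in the note; group owners without those roles cannot change this setting.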
active-directory | Active Directory Users Profile Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-profile-azure-portal.md | There are six categories of profile details you may be able to edit. - **Settings:** Decide whether the user can sign in to the Azure Active Directory tenant. You can also specify the user's global location. -- **On-premises:** Accounts synced from Windows Server Active Directory include additional values not applicable to Azure AD accounts.+- **On-premises:** Accounts synced from Windows Server Active Directory include other values not applicable to Azure AD accounts. >[!Note] >You must use Windows Server Active Directory to update the identity, contact info, or job info for users whose source of authority is Windows Server Active Directory. After you complete your update, you must wait for the next synchronization cycle to complete before you'll see the changes. In the **User settings** area of Azure AD, you can adjust several settings that Go to **Azure AD** > **User settings**. +### Learn about the 'Stay signed in?' prompt ++The **Stay signed in?** prompt appears after a user successfully signs in. This process is known as **Keep me signed in** (KMSI). If a user answers **Yes** to this prompt, the KMSI service gives them a persistent [refresh token](../develop/developer-glossary.md#refresh-token). For federated tenants, the prompt will show after the user successfully authenticates with the federated identity service. ++The following diagram shows the user sign-in flow for a managed tenant and federated tenant using the KMSI prompt. This flow contains smart logic so that the **Stay signed in?** option won't be displayed if the machine learning system detects a high-risk sign-in or a sign-in from a shared device. ++KMSI is only available on the default custom branding. It can't be added to language-specific branding. 
Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you uncheck the **Show option to remain signed in** option, your users may see other unexpected prompts during the sign-in process. ++![Diagram showing the user sign-in flow for a managed vs. federated tenant](media/customize-branding/kmsi-workflow.png) ++Configuring the 'keep me signed in' (KMSI) option requires one of the following licenses: ++- Azure AD Premium 1 +- Azure AD Premium 2 +- Office 365 (for Office apps) +- Microsoft 365 ++#### Troubleshoot 'Stay signed in?' issues ++If a user doesn't act on the **Stay signed in?** prompt but abandons the sign-in attempt, a sign-in log entry appears in the Azure AD **Sign-ins** page. The prompt the user sees is called an "interrupt." ++![Sample 'Stay signed in?' prompt](media/customize-branding/kmsi-stay-signed-in-prompt.png) ++Details about the sign-in error are found in the **Sign-in logs** in Azure AD. Select the impacted user from the list and locate the details below in the **Basic info** section. ++* **Sign in error code**: 50140 +* **Failure reason**: This error occurred due to "Keep me signed in" interrupt when the user was signing in. ++You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the advanced branding settings. This setting disables the KMSI prompt for all users in your Azure AD directory. ++You also can use the [persistent browser session controls in Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to prevent users from seeing the KMSI prompt. This option allows you to disable the KMSI prompt for a select group of users (such as the global administrators) without affecting sign-in behavior for everyone else in the directory. 
++To ensure that the KMSI prompt is shown only when it can benefit the user, the KMSI prompt is intentionally not shown in the following scenarios: ++* User is signed in via seamless SSO and integrated Windows authentication (IWA) +* User is signed in via Active Directory Federation Services and IWA +* User is a guest in the tenant +* User's risk score is high +* Sign-in occurs during user or admin consent flow +* Persistent browser session control is configured in a conditional access policy + ## Next steps - [Add or delete users](add-users-azure-active-directory.md) |
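The troubleshooting steps above locate KMSI interrupts by error code 50140 in the sign-in logs. The same entries can be queried through the Microsoft Graph `auditLogs/signIns` endpoint; this sketch only builds the query URL (running it requires a token with audit-log read permission):

```javascript
// Hedged sketch: build a Microsoft Graph query for sign-in log entries
// produced by the "Keep me signed in" interrupt (error code 50140).
function buildKmsiInterruptQuery() {
  const filter = "status/errorCode eq 50140";
  return (
    "https://graph.microsoft.com/v1.0/auditLogs/signIns" +
    "?$filter=" + encodeURIComponent(filter)
  );
}

console.log(buildKmsiInterruptQuery());
```

Each returned entry carries the failure reason quoted above, which distinguishes abandoned KMSI prompts from other interactive sign-in failures.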
active-directory | Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/customize-branding.md | Title: Add branding to your organization's sign-in page - Azure AD description: Instructions about how to add your organization's branding to the Azure Active Directory sign-in page. -+ Previously updated : 08/26/2022- Last updated : 11/21/2022+ -Create a consistent experience when users sign into your organization's web-based apps that use Azure Active Directory (Azure AD) as your identity provider, such as Microsoft 365. The sign-in process can include your company logo and customized experiences based on browser language. +When users authenticate into your corporate intranet or web-based applications, Azure Active Directory (Azure AD) provides the identity and access management (IAM) service. You can add company branding that applies to all these sign-in experiences to create a consistent experience for your users. ++This article covers how to customize the company branding for sign-in experiences for your users. ++An updated experience for adding company branding is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. Check out the updated documentation on [how to customize branding](how-to-customize-branding.md). ## License requirements -Adding custom branding and configuring the 'keep me signed in' (KMSI) option requires one of the following licenses: +Adding custom branding requires one of the following licenses: - Azure AD Premium 1 - Azure AD Premium 2 - Office 365 (for Office apps)-- Microsoft 365 (KMSI only) Azure AD Premium editions are available for customers in China using the worldwide instance of Azure AD. Azure AD Premium editions aren't currently supported in the Azure service operated by 21Vianet in China. 
For more information about licensing and editions, see [Sign up for Azure AD Premium](active-directory-get-started-premium.md). Custom branding appears after users sign in. Users that start the sign-in proces >[!IMPORTANT] > Transparent logos are supported with the square logo image. The color palette used in the transparent logo could conflict with backgrounds (such as, white, light grey, dark grey, and black backgrounds) used within Microsoft 365 apps and services that consume the square logo image. Solid color backgrounds may need to be used to ensure the square image logo is rendered correctly in all situations. - - **Show option to remain signed in** You can choose to let your users remain signed in to Azure AD until explicitly signing out. If you uncheck this option, users must sign in each time the browser is closed and reopened. This feature is covered in detail in the [Learn about the 'Stay signed in?' prompt](#learn-about-the-stay-signed-in-prompt) section of this article. + - **Show option to remain signed in** You can choose to let your users remain signed in to Azure AD until explicitly signing out. If you uncheck this option, users must sign in each time the browser is closed and reopened. For more information, see the [Add or update a user's profile](active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt) article. 3. After you've finished adding your branding, select **Save** in the upper-left corner of the configuration panel. We recommend adding **Sign-in page text** in the selected language. If custom branding has been added to your tenant, you can edit the details already provided. Refer to the details and descriptions of each setting in the [Add custom branding](#customize-the-default-sign-in-experience) section of this article. -1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. +1. 
Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory. 1. Go to **Azure Active Directory** > **Company branding**. If custom branding has been added to your tenant, you can edit the details alrea It can take up to an hour for any changes you made to the sign-in page branding to appear. -## Learn about the 'Stay signed in?' prompt --The **Stay signed in?** prompt appears after a user successfully signs in. This process is known as **Keep me signed in** (KMSI). If a user answers **Yes** to this prompt, the KMSI service gives them a persistent [refresh token](../develop/developer-glossary.md#refresh-token). For federated tenants, the prompt will show after the user successfully authenticates with the federated identity service. --The following diagram shows the user sign-in flow for a managed tenant and federated tenant using the KMSI prompt. This flow contains smart logic so that the **Stay signed in?** option won't be displayed if the machine learning system detects a high-risk sign-in or a sign-in from a shared device. --KMSI is only available on the default custom branding. It can't be added to language-specific branding. Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you uncheck the **Show option to remain signed in** option, your users may see other unexpected prompts during the sign-in process. --![Diagram showing the user sign-in flow for a managed vs. federated tenant](media/customize-branding/kmsi-workflow.png) --See the [License requirements](#license-requirements) section for using the KMSI service. --### Troubleshoot 'Stay signed in?' issues --If a user doesn't act on the **Stay signed in?** prompt but abandons the sign-in attempt, a sign-in log entry appears in the Azure AD **Sign-ins** page. The prompt the user sees is called an "interrupt." --![Sample 'Stay signed in?'
prompt](media/customize-branding/kmsi-stay-signed-in-prompt.png) --Details about the sign-in error are found in the **Sign-in logs** in Azure AD. Select the impacted user from the list and locate the details below in the **Basic info** section. --* **Sign in error code**: 50140 -* **Failure reason**: This error occurred due to "Keep me signed in" interrupt when the user was signing in. --You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the advanced branding settings. This setting disables the KMSI prompt for all users in your Azure AD directory. --You also can use the [persistent browser session controls in Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to prevent users from seeing the KMSI prompt. This option allows you to disable the KMSI prompt for a select group of users (such as the global administrators) without affecting sign-in behavior for everyone else in the directory. --To ensure that the KMSI prompt is shown only when it can benefit the user, the KMSI prompt is intentionally not shown in the following scenarios: --* User is signed in via seamless SSO and integrated Windows authentication (IWA) -* User is signed in via Active Directory Federation Services and IWA -* User is a guest in the tenant -* User's risk score is high -* Sign-in occurs during user or admin consent flow -* Persistent browser session control is configured in a conditional access policy - ## Next steps - [Add your organization's privacy info on Azure AD](./active-directory-properties-area.md) |
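The sign-in-log lookup described in the troubleshooting steps above can also be done programmatically. A minimal sketch, assuming access to the Microsoft Graph sign-in logs endpoint (`auditLogs/signIns`) and an already-acquired token with `AuditLog.Read.All`; the helper name is hypothetical:

```python
# Sketch: find sign-in attempts interrupted by the "Stay signed in?" (KMSI)
# prompt by filtering Azure AD sign-in logs on error code 50140, as the
# troubleshooting section describes. The endpoint and $filter syntax follow
# the Microsoft Graph auditLogs/signIns API; token acquisition is out of scope.

GRAPH_BASE = "https://graph.microsoft.com/v1.0"
KMSI_INTERRUPT_CODE = 50140  # "Keep me signed in" interrupt

def kmsi_signins_url(error_code: int = KMSI_INTERRUPT_CODE) -> str:
    """Build a Graph URL that filters sign-in logs to the given error code."""
    return f"{GRAPH_BASE}/auditLogs/signIns?$filter=status/errorCode eq {error_code}"

print(kmsi_signins_url())
# A real query would send this URL with an "Authorization: Bearer <token>"
# header, for example via the requests library or a Microsoft Graph SDK.
```

Each matching entry corresponds to a user who saw the interrupt and abandoned the sign-in, which is the population affected by the **Show option to remain signed in** setting.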
active-directory | How To Customize Branding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md | + + Title: Add company branding to your organization's sign-in page (preview) - Azure AD +description: Instructions about how to add your organization's branding to the sign-in experience. ++++++++ Last updated : 11/21/2022+++++++# Configure your company branding (preview) ++When users authenticate into your corporate intranet or web-based applications, Azure Active Directory (Azure AD) provides the identity and access management (IAM) service. You can add company branding that applies to all these sign-in experiences to create a consistent experience for your users. ++The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. ++For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++Instructions for the legacy company branding customization process can be found in the [Customize branding](customize-branding.md) article. ++## License requirements ++Adding custom branding requires one of the following licenses: ++- Azure AD Premium 1 +- Azure AD Premium 2 +- Office 365 (for Office apps) ++For more information about licensing and editions, see the [Sign up for Azure AD Premium](active-directory-get-started-premium.md) article. ++Azure AD Premium editions are available for customers in China using the worldwide instance of Azure AD. 
Azure AD Premium editions aren't currently supported in the Azure service operated by 21Vianet in China. ++## Before you begin ++You can customize the sign-in pages when users access your organization's tenant-specific apps, such as `https://outlook.com/woodgrove.com`, or when passing a domain variable, such as `https://passwordreset.microsoftonline.com/?whr=woodgrove.com`. ++Custom branding appears after users authenticate for the first time. Users that start the sign-in process at a site like www\.office.com won't see the branding. After the first sign-in, the branding may take at least 15 minutes to appear. ++**All branding elements are optional. Default settings will remain if left unchanged.** For example, if you specify a banner logo but no background image, the sign-in page shows your logo with a default background image from the destination site such as Microsoft 365. Additionally, sign-in page branding doesn't carry over to personal Microsoft accounts. If your users or guests authenticate using a personal Microsoft account, the sign-in page won't reflect the branding of your organization. ++**Images have different image and file size requirements.** Take note of the image requirements for each option. You may need to use a photo editor to create the right size images. The preferred image type for all images is PNG, but JPG is accepted. ++1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory. ++2. Go to **Azure Active Directory** > **Company branding** > **Customize**. + - If you currently have a customized sign-in experience, you'll see an **Edit** button. ++ ![Custom branding landing page with 'Company branding' highlighted in the side menu and 'Configure' button highlighted in the center of the page](media/how-to-customize-branding/customize-branding-getting-started.png) ++The sign-in experience process is grouped into sections.
At the end of each section, select the **Review + create** button to review what you have selected and submit your changes or the **Next** button to move to the next section. ++!['Review + create' and 'Next: Layout' buttons from the bottom of the configure custom branding page](media/how-to-customize-branding/customize-branding-buttons.png) ++## Basics ++- **Favicon**: Select a PNG or JPG of your logo that appears in the web browser tab. ++- **Background image**: Select a PNG or JPG to display as the main image on your sign-in page. This image will scale and crop according to the window size, but may be partially blocked by the sign-in prompt. ++- **Page background color**: If the background image isn't able to load because of a slower connection, your selected background color appears instead. ++## Layout ++- **Visual Templates**: Customize the layout of your sign-in page using templates or custom CSS. ++ - Choose one of two **Templates**: Full-screen or partial-screen background. The full-screen background could obscure your background image, so choose the partial-screen background if your background image is important. + - The details of the **Header** and **Footer** options are set on the next two sections of the process. ++- **Custom CSS**: Upload custom CSS to replace the Microsoft default style of the page. [Download the CSS template](https://download.microsoft.com/download/7/2/7/727f287a-125d-4368-a673-a785907ac5ab/custom-styles-template.css). ++## Header ++If you haven't enabled the header, go to the **Layout** section and select **Show header**. Once enabled, select a PNG or JPG to display in the header of the sign-in page. ++## Footer ++If you haven't enabled the footer, go to the **Layout** section and select **Show footer**. Once enabled, adjust the following settings. ++- **Show 'Privacy & Cookies'**: This option is selected by default and displays the [Microsoft 'Privacy & Cookies'](https://privacy.microsoft.com/privacystatement) link. 
+ + Uncheck this option to hide the default Microsoft link. Optionally provide your own **Display text** and **URL**. The text and links don't have to be related to privacy and cookies. ++- **Show 'Terms of Use'**: This option is also selected by default and displays the [Microsoft 'Terms of Use'](https://www.microsoft.com/servicesagreement/) link. ++ Uncheck this option to hide the default Microsoft link. Optionally provide your own **Display text** and **URL**. The text and links don't have to be related to your terms of use. ++ >[!IMPORTANT] + >The default Microsoft 'Terms of Use' link is not the same as the Conditional Access Terms of Use. Seeing the terms here doesn't mean you've accepted those terms and conditions. ++ ![Customize branding on the Footer section](media/how-to-customize-branding/customize-branding-footer.png) ++## Sign-in form ++- **Banner logo**: Select a PNG or JPG image file of a banner-sized logo (short and wide) to appear on the sign-in pages. ++- **Square logo (light theme)**: Select a square PNG or JPG image file of your logo to be used in browsers that are using a light color theme. This logo is used to represent your organization on the Azure AD web interface and in Windows. ++- **Square logo (dark theme)**: Select a square PNG or JPG image file of your logo to be used in browsers that are using a dark color theme. This logo is used to represent your organization on the Azure AD web interface and in Windows. If your logo looks good on light and dark backgrounds, there's no need to add a dark theme logo. ++- **Username hint text**: Enter hint text for the username input field on the sign-in page. If guests use the same sign-in page, we don't recommend using hint text here. ++- **Sign-in page text**: Enter text that appears on the bottom of the sign-in page. You can use this text to communicate additional information, such as the phone number to your help desk or a legal statement.
This page is public, so don't provide sensitive information here. This text must be Unicode and can't exceed 1024 characters. ++ To begin a new paragraph, use the enter key twice. You can also change text formatting to include bold, italics, an underline, or a clickable link. Use the following syntax to add formatting to text: ++ > Hyperlink: `[text](link)` + + > Bold: `**text**` or `__text__` + + > Italics: `*text*` or `_text_` + + > Underline: `++text++` + + > [!IMPORTANT] + > Hyperlinks that are added to the sign-in page text render as text in native environments, such as desktop and mobile applications. ++- **Self-service password reset**: + - Show self-service password reset (SSPR): Select the checkbox to turn on SSPR. + - Common URL: Enter the destination URL for where your users will reset their passwords. This URL appears on the username and password collection screens. + - Username collection display text: Replace the default text with your own custom username collection text. + - Password collection display text: Replace the default text with your own custom password collection text. ++## Review ++All of the available options appear in one list so you can review everything you've customized or left at the default setting. When you're done, select the **Create** button. ++Once your default sign-in experience is created, select the **Edit** button to make any changes. You can't delete a default sign-in experience after it's created, but you can remove all custom settings. ++## Customize the sign-in experience by browser language ++To create an inclusive experience for all of your users, you can customize the sign-in experience based on browser language. ++1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global Administrator account for the directory. ++2. Go to **Azure Active Directory** > **Company branding** > **Add browser language**.
++The process for customizing the experience is the same as the [default sign-in experience](#basics) process, except you must select a language from the dropdown list in the **Basics** section. We recommend adding custom text in the same areas as your default sign-in experience. ++## Next steps ++- [Learn more about default user permissions in Azure AD](../fundamentals/users-default-permissions.md) ++- [Manage the 'stay signed in' prompt](active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt) |
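The branding settings walked through above can also be applied through Microsoft Graph rather than the portal. A minimal sketch that builds a branding payload and enforces the 1024-character sign-in page text limit noted in the article; the property names follow the Graph `organizationalBranding` resource, and the tenant ID and locale in the comment are placeholders:

```python
# Sketch: assemble a company-branding payload for a Microsoft Graph PATCH.
# Property names (signInPageText, usernameHintText, backgroundColor) follow
# the Graph organizationalBranding resource; authentication and the actual
# HTTP call are out of scope for this sketch.

MAX_SIGNIN_TEXT = 1024  # sign-in page text limit noted in the article

def branding_payload(sign_in_text: str, hint: str = "", color: str = "#FFFFFF") -> dict:
    """Validate and assemble the JSON body for a branding update."""
    if len(sign_in_text) > MAX_SIGNIN_TEXT:
        raise ValueError(f"Sign-in page text can't exceed {MAX_SIGNIN_TEXT} characters")
    return {
        "signInPageText": sign_in_text,
        "usernameHintText": hint,
        "backgroundColor": color,
    }

payload = branding_payload("Contact the help desk at extension 1234.")
# A per-browser-language customization would PATCH a localization, e.g.:
#   /organization/{tenant-id}/branding/localizations/fr-FR
```

Validating the text length client-side mirrors the constraint the portal enforces, so a failed update surfaces before the request is sent.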
active-directory | Reference Reports Latencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-latencies.md | The following table lists the latency information for activity reports. ### How soon can I see activities data after getting a premium license? -If you already have activities data with your free license, then you can see it immediately on upgrade. If you donΓÇÖt have any data, then it will take one or two days for the data to show up in the reports after you upgrade to a premium license. +When you upgrade to Azure AD P1 or P2 from a free version of Azure AD, the reports associated with P1 and P2 will begin to retain and display data from your tenant. You should expect a delay of roughly 24 hours from when you upgrade your tenant before all premium reporting features show data. Many premium reporting features will only begin retaining data after this 24 hour period following your upgrade to P1 or P2. ## Security reports The following table lists the latency information for risk detections. * [Azure AD reports overview](overview-reports.md) * [Programmatic access to Azure AD reports](concept-reporting-api.md)-* [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md) +* [Azure Active Directory risk detections](../identity-protection/overview-identity-protection.md) |
active-directory | 10000Ftplans Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/10000ftplans-tutorial.md | |
active-directory | 123Formbuilder Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/123formbuilder-tutorial.md | |
active-directory | 15Five Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/15five-provisioning-tutorial.md | |
active-directory | 15Five Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/15five-tutorial.md | |
active-directory | 23Video Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/23video-tutorial.md | |
active-directory | 360Online Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/360online-tutorial.md | |
active-directory | 4Dx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/4dx-tutorial.md | |
active-directory | 4Me Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/4me-provisioning-tutorial.md | |
active-directory | 4Me Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/4me-tutorial.md | |
active-directory | 8X8 Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/8x8-provisioning-tutorial.md | |
active-directory | 8X8virtualoffice Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/8x8virtualoffice-tutorial.md | |
active-directory | A Cloud Guru Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/a-cloud-guru-tutorial.md | |
active-directory | Abbyy Flexicapture Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/abbyy-flexicapture-cloud-tutorial.md | |
active-directory | Abintegro Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/abintegro-tutorial.md | |
active-directory | Absorblms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/absorblms-tutorial.md | |
active-directory | Abstract Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/abstract-tutorial.md | |
active-directory | Academy Attendance Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/academy-attendance-tutorial.md | |
active-directory | Acadia Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/acadia-tutorial.md | |
active-directory | Accenture Academy Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/accenture-academy-tutorial.md | |
active-directory | Accredible Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/accredible-tutorial.md | |
active-directory | Achieve3000 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/achieve3000-tutorial.md | |
active-directory | Aclp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aclp-tutorial.md | |
active-directory | Acquireio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/acquireio-tutorial.md | |
active-directory | Active And Thriving Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/active-and-thriving-tutorial.md | |
active-directory | Active Directory Sso For Doubleyou Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/active-directory-sso-for-doubleyou-tutorial.md | |
active-directory | Acunetix 360 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/acunetix-360-tutorial.md | |
active-directory | Adaptive Shield Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adaptive-shield-tutorial.md | |
active-directory | Adaptivesuite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adaptivesuite-tutorial.md | |
active-directory | Adem Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adem-tutorial.md | |
active-directory | Adglobalview Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adglobalview-tutorial.md | |
active-directory | Adobe Creative Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-creative-cloud-tutorial.md | |
active-directory | Adobe Echosign Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-echosign-tutorial.md | |
active-directory | Adobe Identity Management Provisioning Oidc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md | |
active-directory | Adobe Identity Management Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md | |
active-directory | Adobe Identity Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-tutorial.md | |
active-directory | Adobecaptivateprime Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobecaptivateprime-tutorial.md | |
active-directory | Adobeexperiencemanager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobeexperiencemanager-tutorial.md | |
active-directory | Adoddle Csaas Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adoddle-csaas-platform-tutorial.md | |
active-directory | Adp Emea French Hr Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adp-emea-french-hr-portal-tutorial.md | |
active-directory | Adpfederatedsso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adpfederatedsso-tutorial.md | |
active-directory | Adra By Trintech Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adra-by-trintech-tutorial.md | |
active-directory | Adstream Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adstream-tutorial.md | |
active-directory | Advance Kerbf5 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/advance-kerbf5-tutorial.md | |
active-directory | Agile Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/agile-provisioning-tutorial.md | |
active-directory | Agiloft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/agiloft-tutorial.md | |
active-directory | Aha Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aha-tutorial.md | |
active-directory | Ahrtemis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ahrtemis-tutorial.md | |
active-directory | Air Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/air-tutorial.md | |
active-directory | Airstack Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airstack-provisioning-tutorial.md | |
active-directory | Airstack Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airstack-tutorial.md | |
active-directory | Airtable Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airtable-tutorial.md | |
active-directory | Airwatch Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/airwatch-tutorial.md | |
active-directory | Akamai Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/akamai-tutorial.md | |
active-directory | Akashi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/akashi-tutorial.md | |
active-directory | Alacritylaw Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alacritylaw-tutorial.md | |
active-directory | Alcumus Info Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alcumus-info-tutorial.md | |
active-directory | Alertmedia Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alertmedia-provisioning-tutorial.md | |
active-directory | Alertmedia Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alertmedia-tutorial.md | |
active-directory | Alertops Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alertops-tutorial.md | |
active-directory | Alexishr Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alexishr-provisioning-tutorial.md | |
active-directory | Alexishr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alexishr-tutorial.md | |
active-directory | Alibaba Cloud Service Role Based Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alibaba-cloud-service-role-based-sso-tutorial.md | |
active-directory | Alinto Protect Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alinto-protect-provisioning-tutorial.md | |
active-directory | Allbound Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/allbound-sso-tutorial.md | |
active-directory | Allocadia Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/allocadia-tutorial.md | |
active-directory | Ally Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ally-tutorial.md | |
active-directory | Altamira Hrm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/altamira-hrm-tutorial.md | |
active-directory | Altoura Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/altoura-tutorial.md | |
active-directory | Amazing People Schools Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazing-people-schools-tutorial.md | |
active-directory | Amazon Business Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-business-tutorial.md | In this tutorial, you'll learn how to integrate Amazon Business with Azure Activ * Enable your users to be automatically signed-in to Amazon Business with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal. +> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5cbi8] + ## Prerequisites To get started, you need the following items: |
active-directory | Amazon Managed Grafana Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-managed-grafana-tutorial.md | |
active-directory | Amazon Web Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md | |
active-directory | Amms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amms-tutorial.md | |
active-directory | Amplitude Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amplitude-tutorial.md | |
active-directory | Anaplan Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/anaplan-tutorial.md | |
active-directory | Anaqua Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/anaqua-tutorial.md | |
active-directory | Andfrankly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/andfrankly-tutorial.md | |
active-directory | Andromedascm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/andromedascm-tutorial.md | |
active-directory | Animaker Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/animaker-tutorial.md | |
active-directory | Answerhub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/answerhub-tutorial.md | |
active-directory | Anyone Home Crm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/anyone-home-crm-tutorial.md | |
active-directory | Apexportal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apexportal-tutorial.md | |
active-directory | Appaegis Isolation Access Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appaegis-isolation-access-cloud-provisioning-tutorial.md | |
active-directory | Appaegis Isolation Access Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appaegis-isolation-access-cloud-tutorial.md | |
active-directory | Appblade Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appblade-tutorial.md | |
active-directory | Appdynamics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appdynamics-tutorial.md | |
active-directory | Appian Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appian-tutorial.md | |
active-directory | Appinux Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appinux-tutorial.md | |
active-directory | Apple Business Manager Provision Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apple-business-manager-provision-tutorial.md | |
active-directory | Apple School Manager Provision Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apple-school-manager-provision-tutorial.md | |
active-directory | Applied Mental Health Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/applied-mental-health-tutorial.md | |
active-directory | Appneta Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appneta-tutorial.md | |
active-directory | Appraisd Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appraisd-tutorial.md | |
active-directory | Appremo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appremo-tutorial.md | |
active-directory | Appsec Flow Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/appsec-flow-sso-tutorial.md | |
active-directory | Apptio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/apptio-tutorial.md | |
active-directory | Aravo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aravo-tutorial.md | |
active-directory | Arc Facilities Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arc-facilities-tutorial.md | |
active-directory | Arc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arc-tutorial.md | |
active-directory | Arcgis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arcgis-tutorial.md | |
active-directory | Arcgisenterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arcgisenterprise-tutorial.md | |
active-directory | Archie Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/archie-tutorial.md | |
active-directory | Ardoq Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ardoq-tutorial.md | |
active-directory | Arena Eu Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arena-eu-tutorial.md | |
active-directory | Arena Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/arena-tutorial.md | |
active-directory | Ares For Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ares-for-enterprise-tutorial.md | |
active-directory | Ariba Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ariba-tutorial.md | |
active-directory | Articulate360 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/articulate360-tutorial.md | |
active-directory | Aruba User Experience Insight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aruba-user-experience-insight-tutorial.md | |
active-directory | Asana Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-provisioning-tutorial.md | |
active-directory | Asana Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-tutorial.md | |
active-directory | Asccontracts Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asccontracts-tutorial.md | |
active-directory | Ascentis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ascentis-tutorial.md | |
active-directory | Asignet Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asignet-sso-tutorial.md | |
active-directory | Askspoke Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/askspoke-provisioning-tutorial.md | |
active-directory | Askspoke Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/askspoke-tutorial.md | |
active-directory | Askyourteam Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/askyourteam-tutorial.md | |
active-directory | Asset Planner Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asset-planner-tutorial.md | |
active-directory | Assetbank Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/assetbank-tutorial.md | |
active-directory | Assetsonar Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/assetsonar-tutorial.md | |
active-directory | Astra Schedule Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/astra-schedule-tutorial.md | |
active-directory | Atea Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atea-provisioning-tutorial.md | |
active-directory | Athena Systems Login Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/athena-systems-login-platform-tutorial.md | |
active-directory | Atlassian Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md | |
active-directory | Atlassian Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md | |
active-directory | Atomiclearning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atomiclearning-tutorial.md | |
active-directory | Atp Spotlight And Chronicx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atp-spotlight-and-chronicx-tutorial.md | |
active-directory | Attendancemanagementservices Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/attendancemanagementservices-tutorial.md | |
active-directory | Auditboard Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/auditboard-provisioning-tutorial.md | |
active-directory | Auditboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/auditboard-tutorial.md | |
active-directory | Authomize Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/authomize-tutorial.md | |
active-directory | Autodesk Sso Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/autodesk-sso-provisioning-tutorial.md | |
active-directory | Autodesk Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/autodesk-sso-tutorial.md | |
active-directory | Autotaskendpointbackup Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/autotaskendpointbackup-tutorial.md | |
active-directory | Autotaskworkplace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/autotaskworkplace-tutorial.md | |
active-directory | Awardspring Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/awardspring-tutorial.md | |
active-directory | Awarego Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/awarego-tutorial.md | |
active-directory | Aws Clientvpn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-clientvpn-tutorial.md | |
active-directory | Aws Multi Accounts Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-multi-accounts-tutorial.md | |
active-directory | Aws Single Sign On Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md | |
active-directory | Aws Single Sign On Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md | |
active-directory | Axiad Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/axiad-cloud-tutorial.md | |
active-directory | Axway Csos Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/axway-csos-tutorial.md | |
active-directory | Baldwin Safety & Compliance Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/baldwin-safety-&-compliance-tutorial.md | |
active-directory | Balsamiq Wireframes Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/balsamiq-wireframes-tutorial.md | |
active-directory | Bamboo Hr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bamboo-hr-tutorial.md | |
active-directory | Bamboo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bamboo-tutorial.md | |
active-directory | Bambubysproutsocial Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bambubysproutsocial-tutorial.md | |
active-directory | Banyan Command Center Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/banyan-command-center-tutorial.md | |
active-directory | Battery Management Information System Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/battery-management-information-system-tutorial.md | |
active-directory | Bcinthecloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bcinthecloud-tutorial.md | |
active-directory | Bealink Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bealink-tutorial.md | |
active-directory | Beatrust Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beatrust-tutorial.md | |
active-directory | Beautiful.Ai Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beautiful.ai-tutorial.md | |
active-directory | Beekeeper Azure Ad Data Connector Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beekeeper-azure-ad-data-connector-tutorial.md | |
active-directory | Beeline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beeline-tutorial.md | |
active-directory | Benchling Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benchling-tutorial.md | |
active-directory | Benefithub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benefithub-tutorial.md | |
active-directory | Benefitsolver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benefitsolver-tutorial.md | |
active-directory | Benq Iam Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benq-iam-provisioning-tutorial.md | |
active-directory | Benq Iam Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benq-iam-tutorial.md | |
active-directory | Benselect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benselect-tutorial.md | |
active-directory | Bentley Automatic User Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bentley-automatic-user-provisioning-tutorial.md | |
active-directory | Bersin Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bersin-tutorial.md | |
active-directory | Betterworks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/betterworks-tutorial.md | |
active-directory | Beyond Identity Admin Console Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beyond-identity-admin-console-tutorial.md | |
active-directory | Bgsonline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bgsonline-tutorial.md | |
active-directory | Bic Cloud Design Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bic-cloud-design-provisioning-tutorial.md | |
active-directory | Bic Cloud Design Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bic-cloud-design-tutorial.md | |
active-directory | Bime Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bime-tutorial.md | |
active-directory | Birst Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/birst-tutorial.md | |
active-directory | Bis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bis-tutorial.md | |
active-directory | Bitabiz Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bitabiz-provisioning-tutorial.md | |
active-directory | Bitabiz Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bitabiz-tutorial.md | |
active-directory | Bitbucket Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bitbucket-tutorial.md | |
active-directory | Bitly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bitly-tutorial.md | |
active-directory | Bizagi Studio For Digital Process Automation Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bizagi-studio-for-digital-process-automation-provisioning-tutorial.md | |
active-directory | Bizagi Studio For Digital Process Automation Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bizagi-studio-for-digital-process-automation-tutorial.md | |
active-directory | Blackboard Learn Shibboleth Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blackboard-learn-shibboleth-tutorial.md | |
active-directory | Blackboard Learn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blackboard-learn-tutorial.md | |
active-directory | Bldng App Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bldng-app-provisioning-tutorial.md | |
active-directory | Blink Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blink-provisioning-tutorial.md | |
active-directory | Blink Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blink-tutorial.md | |
active-directory | Blinq Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blinq-provisioning-tutorial.md | |
active-directory | Blockbax Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blockbax-tutorial.md | |
active-directory | Blogin Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blogin-provisioning-tutorial.md | |
active-directory | Blogin Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blogin-tutorial.md | |
active-directory | Blue Access For Members Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blue-access-for-members-tutorial.md | |
active-directory | Blue Ocean Brain Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/blue-ocean-brain-tutorial.md | |
active-directory | Bluejeans Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bluejeans-provisioning-tutorial.md | |
active-directory | Bluejeans Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bluejeans-tutorial.md | |
active-directory | Bomgarremotesupport Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bomgarremotesupport-tutorial.md | |
active-directory | Bonos Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bonos-tutorial.md | |
active-directory | Bonus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bonus-tutorial.md | |
active-directory | Bonusly Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bonusly-provisioning-tutorial.md | |
active-directory | Boomi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/boomi-tutorial.md | |
active-directory | Borrowbox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/borrowbox-tutorial.md | |
active-directory | Box Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/box-tutorial.md | |
active-directory | Box Userprovisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/box-userprovisioning-tutorial.md | |
active-directory | Boxcryptor Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/boxcryptor-provisioning-tutorial.md | |
active-directory | Boxcryptor Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/boxcryptor-tutorial.md | |
active-directory | Bpanda Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bpanda-provisioning-tutorial.md | |
active-directory | Bpmonline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bpmonline-tutorial.md | |
active-directory | Brandfolder Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/brandfolder-tutorial.md | |
active-directory | Braze Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/braze-tutorial.md | |
active-directory | Bridge Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bridge-tutorial.md | |
active-directory | Bright Pattern Omnichannel Contact Center Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bright-pattern-omnichannel-contact-center-tutorial.md | |
active-directory | Brightidea Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/brightidea-tutorial.md | |
active-directory | Brightspace Desire2learn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/brightspace-desire2learn-tutorial.md | |
active-directory | Britive Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/britive-provisioning-tutorial.md | |
active-directory | Britive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/britive-tutorial.md | |
active-directory | Brivo Onair Identity Connector Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/brivo-onair-identity-connector-provisioning-tutorial.md | |
active-directory | Broadcom Dx Saas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/broadcom-dx-saas-tutorial.md | |
active-directory | Broker Groupe Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/broker-groupe-tutorial.md | |
active-directory | Browserstack Single Sign On Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/browserstack-single-sign-on-provisioning-tutorial.md | |
active-directory | Browserstack Single Sign On Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/browserstack-single-sign-on-tutorial.md | |
active-directory | Brushup Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/brushup-tutorial.md | |
active-directory | Bugsnag Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bugsnag-tutorial.md | |
active-directory | Bullseyetdp Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-provisioning-tutorial.md | |
active-directory | Bullseyetdp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-tutorial.md | |
active-directory | Burp Suite Enterprise Edition Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/burp-suite-enterprise-edition-tutorial.md | |
active-directory | Buttonwood Central Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/buttonwood-central-sso-tutorial.md | |
active-directory | Bynder Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bynder-tutorial.md | |
active-directory | C3m Cloud Control Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/c3m-cloud-control-tutorial.md | |
active-directory | Cakehr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cakehr-tutorial.md | |
active-directory | Campus Cafe Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/campus-cafe-tutorial.md | |
active-directory | Canvas Lms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canvas-lms-tutorial.md | |
active-directory | Cappm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cappm-tutorial.md | |
active-directory | Capriza Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/capriza-tutorial.md | |
active-directory | Carbonite Endpoint Backup Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/carbonite-endpoint-backup-tutorial.md | |
active-directory | Catchpoint Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/catchpoint-tutorial.md | |
active-directory | Cato Networks Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cato-networks-provisioning-tutorial.md | |
active-directory | Cbre Serviceinsight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cbre-serviceinsight-tutorial.md | |
active-directory | Cch Tagetik Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cch-tagetik-tutorial.md | |
active-directory | Central Desktop Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/central-desktop-tutorial.md | |
active-directory | Cequence Application Security Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cequence-application-security-tutorial.md | |
active-directory | Cerby Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cerby-provisioning-tutorial.md | |
active-directory | Cerby Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cerby-tutorial.md | |
active-directory | Ceridiandayforcehcm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ceridiandayforcehcm-tutorial.md | |
active-directory | Cernercentral Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cernercentral-provisioning-tutorial.md | |
active-directory | Cernercentral Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cernercentral-tutorial.md | |
active-directory | Certainadminsso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/certainadminsso-tutorial.md | |
active-directory | Certent Equity Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/certent-equity-management-tutorial.md | |
active-directory | Certify Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/certify-tutorial.md | |
active-directory | Cezannehrsoftware Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cezannehrsoftware-tutorial.md | |
active-directory | Change Process Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/change-process-management-tutorial.md | |
active-directory | Chaos Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chaos-provisioning-tutorial.md | |
active-directory | Chargebee Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chargebee-tutorial.md | |
active-directory | Chatwork Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chatwork-provisioning-tutorial.md | |
active-directory | Chatwork Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chatwork-tutorial.md | |
active-directory | Check Point Harmony Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/check-point-harmony-connect-tutorial.md | |
active-directory | Check Point Identity Awareness Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/check-point-identity-awareness-tutorial.md | |
active-directory | Check Point Remote Access Vpn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/check-point-remote-access-vpn-tutorial.md | |
active-directory | Checkpoint Infinity Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/checkpoint-infinity-portal-tutorial.md | |
active-directory | Checkproof Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/checkproof-provisioning-tutorial.md | |
active-directory | Checkproof Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/checkproof-tutorial.md | |
active-directory | Cheetah For Benelux Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cheetah-for-benelux-tutorial.md | |
active-directory | Cherwell Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cherwell-tutorial.md | |
active-directory | Chromeriver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chromeriver-tutorial.md | |
active-directory | Chronicx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chronicx-tutorial.md | |
active-directory | Chronus Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/chronus-saml-tutorial.md | |
active-directory | Cimpl Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cimpl-tutorial.md | |
active-directory | Cinode Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cinode-provisioning-tutorial.md | |
active-directory | Cirrus Identity Bridge For Azure Ad Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cirrus-identity-bridge-for-azure-ad-tutorial.md | |
active-directory | Cisco Anyconnect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md | |
active-directory | Cisco Intersight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-intersight-tutorial.md | |
active-directory | Cisco Spark Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-spark-tutorial.md | |
active-directory | Cisco Umbrella Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-tutorial.md | |
active-directory | Cisco Umbrella User Management Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md | |
active-directory | Cisco Webex Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-webex-provisioning-tutorial.md | |
active-directory | Cisco Webex Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-webex-tutorial.md | |
active-directory | Ciscocloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ciscocloud-tutorial.md | |
active-directory | Ciscocloudlock Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ciscocloudlock-tutorial.md | |
active-directory | Citrix Cloud Saml Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrix-cloud-saml-sso-tutorial.md | |
active-directory | Citrix Gotomeeting Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrix-gotomeeting-tutorial.md | |
active-directory | Citrix Netscaler Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrix-netscaler-tutorial.md | |
active-directory | Citrixgotomeeting Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citrixgotomeeting-provisioning-tutorial.md | |
active-directory | Civic Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/civic-platform-tutorial.md | |
active-directory | Clarivatewos Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clarivatewos-tutorial.md | |
active-directory | Clarizen One Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clarizen-one-provisioning-tutorial.md | |
active-directory | Clarizen Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clarizen-tutorial.md | |
active-directory | Claromentis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/claromentis-tutorial.md | |
active-directory | Clearcompany Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clearcompany-tutorial.md | |
active-directory | Clearreview Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clearreview-tutorial.md | |
active-directory | Clebex Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clebex-provisioning-tutorial.md | |
active-directory | Clebex Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clebex-tutorial.md | |
active-directory | Clever Nelly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clever-nelly-tutorial.md | |
active-directory | Clever Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clever-tutorial.md | |
active-directory | Clicktime Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clicktime-tutorial.md | |
active-directory | Clickup Productivity Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clickup-productivity-platform-tutorial.md | |
active-directory | Clockwork Recruiting Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/clockwork-recruiting-tutorial.md | |
active-directory | Cloud Academy Sso Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-academy-sso-provisioning-tutorial.md | |
active-directory | Cloud Academy Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-academy-sso-tutorial.md | |
active-directory | Cloud Service Picco Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloud-service-picco-tutorial.md | |
active-directory | Cloudcords Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudcords-tutorial.md | |
active-directory | Cloudknox Permissions Management Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudknox-permissions-management-platform-tutorial.md | |
active-directory | Cloudmore Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudmore-tutorial.md | |
active-directory | Cloudpassage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudpassage-tutorial.md | |
active-directory | Cloudsign Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudsign-tutorial.md | |
active-directory | Cloudtamer Io Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudtamer-io-tutorial.md | |
active-directory | Cobalt Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cobalt-tutorial.md | |
active-directory | Coda Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coda-provisioning-tutorial.md | |
active-directory | Coda Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coda-tutorial.md | |
active-directory | Code42 Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-provisioning-tutorial.md | |
active-directory | Code42 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-tutorial.md | |
active-directory | Codility Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/codility-tutorial.md | |
active-directory | Cofense Provision Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cofense-provision-tutorial.md | |
active-directory | Coggle Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coggle-tutorial.md | |
active-directory | Cognician Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cognician-tutorial.md | |
active-directory | Cognidox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cognidox-tutorial.md | |
active-directory | Collaborativeinnovation Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/collaborativeinnovation-tutorial.md | |
active-directory | Colortokens Ztna Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/colortokens-ztna-tutorial.md | |
active-directory | Comeet Recruiting Software Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/comeet-recruiting-software-provisioning-tutorial.md | |
active-directory | Comeetrecruitingsoftware Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/comeetrecruitingsoftware-tutorial.md | |
active-directory | Comm100livechat Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/comm100livechat-tutorial.md | |
active-directory | Communifire Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/communifire-tutorial.md | |
active-directory | Community Spark Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/community-spark-tutorial.md | |
active-directory | Competencyiq Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/competencyiq-tutorial.md | |
active-directory | Complianceelf Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/complianceelf-tutorial.md | |
active-directory | Concur Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/concur-provisioning-tutorial.md | |
active-directory | Concur Travel And Expense Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/concur-travel-and-expense-tutorial.md | |
active-directory | Concur Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/concur-tutorial.md | |
active-directory | Condeco Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/condeco-tutorial.md | |
active-directory | Confirmit Horizons Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confirmit-horizons-tutorial.md | |
active-directory | Confluence App Proxy Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluence-app-proxy-tutorial.md | |
active-directory | Confluencemicrosoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md | |
active-directory | Consent2go Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/consent2go-tutorial.md | |
active-directory | Contentful Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentful-provisioning-tutorial.md | |
active-directory | Contentful Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentful-tutorial.md | |
active-directory | Contentkalender Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentkalender-tutorial.md | |
active-directory | Contentsquare Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contentsquare-sso-tutorial.md | |
active-directory | Contractsafe Saml2 Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contractsafe-saml2-sso-tutorial.md | |
active-directory | Contractworks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contractworks-tutorial.md | |
active-directory | Contrast Security Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/contrast-security-tutorial.md | |
active-directory | Control Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/control-tutorial.md | |
active-directory | Convene Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/convene-tutorial.md | |
active-directory | Convercent Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/convercent-tutorial.md | |
active-directory | Coralogix Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coralogix-tutorial.md | |
active-directory | Cornerstone Ondemand Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cornerstone-ondemand-provisioning-tutorial.md | |
active-directory | Cornerstone Ondemand Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cornerstone-ondemand-tutorial.md | |
active-directory | Corporateexperience Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/corporateexperience-tutorial.md | |
active-directory | Corptax Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/corptax-tutorial.md | |
active-directory | Costpoint Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/costpoint-tutorial.md | |
active-directory | Count Me In Operations Dashboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/count-me-in-operations-dashboard-tutorial.md | |
active-directory | Coupa Risk Assess Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coupa-risk-assess-tutorial.md | |
active-directory | Coupa Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coupa-tutorial.md | |
active-directory | Coverity Static Application Security Testing Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/coverity-static-application-security-testing-tutorial.md | |
active-directory | Cpqsync By Cincom Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cpqsync-by-cincom-tutorial.md | |
active-directory | Crayon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/crayon-tutorial.md | |
active-directory | Createweb Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/createweb-tutorial.md | |
active-directory | Crises Control Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/crises-control-tutorial.md | |
active-directory | Crossknowledge Learning Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/crossknowledge-learning-suite-tutorial.md | |
active-directory | Crowd Log Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/crowd-log-tutorial.md | |
active-directory | Crowdstrike Falcon Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/crowdstrike-falcon-platform-tutorial.md | |
active-directory | Cs Stars Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cs-stars-tutorial.md | |
active-directory | Culture Shift Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/culture-shift-tutorial.md | |
active-directory | Curator Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/curator-tutorial.md | |
active-directory | Curricula Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/curricula-saml-tutorial.md | |
active-directory | Cwt Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cwt-tutorial.md | |
active-directory | Cyara Cx Assurance Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cyara-cx-assurance-platform-tutorial.md | |
active-directory | Cyberark Saml Authentication Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cyberark-saml-authentication-tutorial.md | |
active-directory | Cybersolutions Cybermail Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cybersolutions-cybermail-tutorial.md | |
active-directory | Cybersolutions Mailbase Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cybersolutions-mailbase-tutorial.md | |
active-directory | Cybsafe Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cybsafe-provisioning-tutorial.md | |
active-directory | Cylanceprotect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cylanceprotect-tutorial.md | |
active-directory | Cytric Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cytric-tutorial.md | |
active-directory | Dagster Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dagster-cloud-tutorial.md | |
active-directory | Darwinbox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/darwinbox-tutorial.md | |
active-directory | Databasics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/databasics-tutorial.md | |
active-directory | Databook Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/databook-tutorial.md | |
active-directory | Datacamp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datacamp-tutorial.md | |
active-directory | Datadog Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datadog-tutorial.md | |
active-directory | Datahug Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datahug-tutorial.md | |
active-directory | Datasite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datasite-tutorial.md | |
active-directory | Datava Enterprise Service Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datava-enterprise-service-platform-tutorial.md | |
active-directory | Datto File Protection Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datto-file-protection-tutorial.md | |
active-directory | Datto Workplace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/datto-workplace-tutorial.md | |
active-directory | Dealpath Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dealpath-tutorial.md | |
active-directory | Debroome Brand Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/debroome-brand-portal-tutorial.md | |
active-directory | Degreed Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/degreed-tutorial.md | |
active-directory | Deputy Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/deputy-tutorial.md | |
active-directory | Desknets Neo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/desknets-neo-tutorial.md | |
active-directory | Deskradar Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/deskradar-tutorial.md | |
active-directory | Dialpad Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dialpad-provisioning-tutorial.md | |
active-directory | Digicert Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/digicert-tutorial.md | |
active-directory | Digital Pigeon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/digital-pigeon-tutorial.md | |
active-directory | Dining Sidekick Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dining-sidekick-tutorial.md | |
active-directory | Direct Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/direct-tutorial.md | |
active-directory | Directprint Io Cloud Print Administration Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/directprint-io-cloud-print-administration-tutorial.md | |
active-directory | Directprint Io Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/directprint-io-provisioning-tutorial.md | |
active-directory | Discovery Benefits Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/discovery-benefits-sso-tutorial.md | |
active-directory | Displayr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/displayr-tutorial.md | |
active-directory | Dmarcian Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dmarcian-tutorial.md | |
active-directory | Documo Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/documo-provisioning-tutorial.md | |
active-directory | Documo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/documo-tutorial.md | |
active-directory | Docusign Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-provisioning-tutorial.md | |
active-directory | Docusign Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md | |
active-directory | Dome9arc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dome9arc-tutorial.md | |
active-directory | Dominknowone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dominknowone-tutorial.md | |
active-directory | Domo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/domo-tutorial.md | |
active-directory | Dossier Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dossier-tutorial.md | |
active-directory | Dotcom Monitor Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dotcom-monitor-tutorial.md | |
active-directory | Dovetale Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dovetale-tutorial.md | |
active-directory | Dowjones Factiva Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dowjones-factiva-tutorial.md | |
active-directory | Draup Inc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/draup-inc-tutorial.md | |
active-directory | Drawboard Projects Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/drawboard-projects-tutorial.md | |
active-directory | Drift Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/drift-tutorial.md | |
active-directory | Dropboxforbusiness Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md | |
active-directory | Dropboxforbusiness Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dropboxforbusiness-tutorial.md | |
active-directory | Drtrack Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/drtrack-tutorial.md | |
active-directory | Druva Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/druva-provisioning-tutorial.md | |
active-directory | Druva Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/druva-tutorial.md | |
active-directory | Dx Netops Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dx-netops-portal-tutorial.md | |
active-directory | Dynamic Signal Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dynamic-signal-provisioning-tutorial.md | |
active-directory | Dynamicsignal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dynamicsignal-tutorial.md | |
active-directory | Dynatrace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dynatrace-tutorial.md | |
active-directory | E Days Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/e-days-tutorial.md | |
active-directory | E2open Cm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/e2open-cm-tutorial.md | |
active-directory | E2open Lsp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/e2open-lsp-tutorial.md | |
active-directory | Eab Navigate Impl Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eab-navigate-impl-tutorial.md | |
active-directory | Eab Navigate Strategic Care Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eab-navigate-strategic-care-tutorial.md | |
active-directory | Eab Navigate Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eab-navigate-tutorial.md | |
active-directory | Eacomposer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eacomposer-tutorial.md | |
active-directory | Easysso For Bamboo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easysso-for-bamboo-tutorial.md | |
active-directory | Easysso For Bitbucket Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easysso-for-bitbucket-tutorial.md | |
active-directory | Easysso For Confluence Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easysso-for-confluence-tutorial.md | |
active-directory | Easysso For Jira Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easysso-for-jira-tutorial.md | |
active-directory | Easyterritory Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easyterritory-tutorial.md | |
active-directory | Ebsco Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ebsco-tutorial.md | |
active-directory | Eccentex Appbase For Azure Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eccentex-appbase-for-azure-tutorial.md | |
active-directory | Echospan Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/echospan-tutorial.md | |
active-directory | Ecornell Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ecornell-tutorial.md | |
active-directory | Edcor Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/edcor-tutorial.md | |
active-directory | Edigitalresearch Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/edigitalresearch-tutorial.md | |
active-directory | Ediwin Saas Edi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ediwin-saas-edi-tutorial.md | |
active-directory | Edubrite Lms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/edubrite-lms-tutorial.md | |
active-directory | Edx For Business Saml Integration Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/edx-for-business-saml-integration-tutorial.md | |
active-directory | Efidigitalstorefront Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/efidigitalstorefront-tutorial.md | |
active-directory | Egnyte Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/egnyte-tutorial.md | |
active-directory | Egress Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/egress-tutorial.md | |
active-directory | Ekarda Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ekarda-tutorial.md | |
active-directory | Ekincare Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ekincare-tutorial.md | |
active-directory | Elearnposh Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/elearnposh-tutorial.md | |
active-directory | Eletive Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eletive-provisioning-tutorial.md | |
active-directory | Elionboarding Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/elionboarding-tutorial.md | |
active-directory | Elium Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/elium-provisioning-tutorial.md | |
active-directory | Elium Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/elium-tutorial.md | |
active-directory | Elqano Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/elqano-sso-tutorial.md | |
active-directory | Eluminate Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eluminate-tutorial.md | |
active-directory | Embark Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/embark-tutorial.md | |
active-directory | Embed Signage Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/embed-signage-provisioning-tutorial.md | |
active-directory | Embed Signage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/embed-signage-tutorial.md | |
active-directory | Empactis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/empactis-tutorial.md | |
active-directory | Empcenter Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/empcenter-tutorial.md | |
active-directory | Emplifi Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/emplifi-platform-tutorial.md | |
active-directory | Enablon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/enablon-tutorial.md | |
active-directory | Encompass Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/encompass-tutorial.md | |
active-directory | Envimmis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/envimmis-tutorial.md | |
active-directory | Envoy Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/envoy-provisioning-tutorial.md | |
active-directory | Envoy Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/envoy-tutorial.md | |
active-directory | Ephoto Dam Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ephoto-dam-tutorial.md | |
active-directory | Eplatform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eplatform-tutorial.md | |
active-directory | Equifax Workforce Solutions Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/equifax-workforce-solutions-tutorial.md | |
active-directory | Equinix Federation App Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/equinix-federation-app-tutorial.md | |
active-directory | Equisolve Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/equisolve-tutorial.md | |
active-directory | Era Ehs Core Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/era-ehs-core-tutorial.md | |
active-directory | Esalesmanagerremix Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/esalesmanagerremix-tutorial.md | |
active-directory | Ethicspoint Incident Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ethicspoint-incident-management-tutorial.md | |
active-directory | Etouches Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/etouches-tutorial.md | |
active-directory | Euromonitor Passport Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/euromonitor-passport-tutorial.md | |
active-directory | Eventfinity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eventfinity-tutorial.md | |
active-directory | Everbridge Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/everbridge-tutorial.md | |
active-directory | Evercate Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/evercate-provisioning-tutorial.md | |
active-directory | Evergreen Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/evergreen-tutorial.md | |
active-directory | Evernote Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/evernote-tutorial.md | |
active-directory | Evidence Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/evidence-tutorial.md | |
active-directory | Evovia Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/evovia-tutorial.md | |
active-directory | Exactcare Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/exactcare-sso-tutorial.md | |
active-directory | Exceed Ai Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/exceed-ai-tutorial.md | |
active-directory | Excelity Hcm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/excelity-hcm-tutorial.md | |
active-directory | Excelityglobal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/excelityglobal-tutorial.md | |
active-directory | Exium Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/exium-provisioning-tutorial.md | |
active-directory | Exium Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/exium-tutorial.md | |
active-directory | Expensein Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/expensein-tutorial.md | |
active-directory | Expensify Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/expensify-tutorial.md | |
active-directory | Experience Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/experience-cloud-tutorial.md | |
active-directory | Expiration Reminder Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/expiration-reminder-tutorial.md | |
active-directory | Explanation Based Auditing System Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/explanation-based-auditing-system-tutorial.md | |
active-directory | Exponenthr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/exponenthr-tutorial.md | |
active-directory | Ezofficeinventory Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ezofficeinventory-tutorial.md | |
active-directory | Ezra Coaching Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ezra-coaching-tutorial.md | |
active-directory | Ezrentout Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ezrentout-tutorial.md | |
active-directory | F5 Big Ip Headers Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-headers-easy-button.md | |
active-directory | F5 Big Ip Oracle Enterprise Business Suite Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md | |
active-directory | F5 Big Ip Oracle Jd Edwards Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-oracle-jd-edwards-easy-button.md | |
active-directory | F5 Big Ip Sap Erp Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-sap-erp-easy-button.md | |
active-directory | Fabric Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fabric-tutorial.md | |
active-directory | Facebook Work Accounts Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/facebook-work-accounts-provisioning-tutorial.md | |
active-directory | Factset Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/factset-tutorial.md | |
active-directory | Fastly Edge Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fastly-edge-cloud-tutorial.md | |
active-directory | Fax Plus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fax-plus-tutorial.md | |
active-directory | Fcm Hub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fcm-hub-tutorial.md | |
active-directory | Federated Directory Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/federated-directory-provisioning-tutorial.md | |
active-directory | Fence Mobile Remotemanager Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fence-mobile-remotemanager-sso-tutorial.md | |
active-directory | Fexa Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fexa-tutorial.md | |
active-directory | Fidelity Planviewer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fidelity-planviewer-tutorial.md | |
active-directory | Fidelitynetbenefits Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fidelitynetbenefits-tutorial.md | |
active-directory | Field Id Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/field-id-tutorial.md | |
active-directory | Fieldglass Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fieldglass-tutorial.md | |
active-directory | Figbytes Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/figbytes-tutorial.md | |
active-directory | Figma Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/figma-provisioning-tutorial.md | |
active-directory | Figma Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/figma-tutorial.md | |
active-directory | Filecloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/filecloud-tutorial.md | |
active-directory | Fileorbis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fileorbis-tutorial.md | |
active-directory | Filesanywhere Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/filesanywhere-tutorial.md | |
active-directory | Finvari Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/finvari-tutorial.md | |
active-directory | Firmex Vdr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/firmex-vdr-tutorial.md | |
active-directory | Firmplay Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/firmplay-tutorial.md | |
active-directory | Firstbird Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/firstbird-tutorial.md | |
active-directory | Fiscalnote Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fiscalnote-tutorial.md | |
active-directory | Five9 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/five9-tutorial.md | |
active-directory | Fivetran Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fivetran-tutorial.md | |
active-directory | Flatter Files Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/flatter-files-tutorial.md | |
active-directory | Flexera One Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/flexera-one-tutorial.md | |
active-directory | Float Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/float-tutorial.md | |
active-directory | Flock Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/flock-provisioning-tutorial.md | |
active-directory | Flock Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/flock-tutorial.md | |
active-directory | Floqast Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/floqast-tutorial.md | |
active-directory | Fluxxlabs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fluxxlabs-tutorial.md | |
active-directory | Fm Systems Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fm-systems-tutorial.md | |
active-directory | Foko Retail Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foko-retail-tutorial.md | |
active-directory | Folloze Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/folloze-tutorial.md | |
active-directory | Foodee Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foodee-provisioning-tutorial.md | |
active-directory | Foodee Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foodee-tutorial.md | |
active-directory | Forcepoint Cloud Security Gateway Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-tutorial.md | |
active-directory | Foreseecxsuite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foreseecxsuite-tutorial.md | |
active-directory | Formcom Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/formcom-tutorial.md | |
active-directory | Fortes Change Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortes-change-cloud-provisioning-tutorial.md | |
active-directory | Fortes Change Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortes-change-cloud-tutorial.md | |
active-directory | Fortigate Ssl Vpn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md | |
active-directory | Fortisase Sia Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortisase-sia-tutorial.md | |
active-directory | Fortiweb Web Application Firewall Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortiweb-web-application-firewall-tutorial.md | |
active-directory | Foundu Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foundu-tutorial.md | |
active-directory | Fourkites Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fourkites-tutorial.md | |
active-directory | Framer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/framer-tutorial.md | |
active-directory | Frankli Io Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/frankli-io-provisioning-tutorial.md | |
active-directory | Freedcamp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freedcamp-tutorial.md | |
active-directory | Fresh Relevance Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fresh-relevance-tutorial.md | |
active-directory | Freshdesk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freshdesk-tutorial.md | |
active-directory | Freshgrade Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freshgrade-tutorial.md | |
active-directory | Freshservice Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freshservice-provisioning-tutorial.md | |
active-directory | Freshservice Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freshservice-tutorial.md | |
active-directory | Freshworks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/freshworks-tutorial.md | |
active-directory | Front Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/front-tutorial.md | |
active-directory | Frontify Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/frontify-tutorial.md | |
active-directory | Frontline Education Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/frontline-education-tutorial.md | |
active-directory | Fulcrum Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fulcrum-tutorial.md | |
active-directory | Fuse Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fuse-tutorial.md | |
active-directory | Fuze Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fuze-provisioning-tutorial.md | |
active-directory | Fuze Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fuze-tutorial.md | |
active-directory | G Suite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md | |
active-directory | Gaggleamp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gaggleamp-tutorial.md | |
active-directory | Gamba Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gamba-tutorial.md | |
active-directory | Getabstract Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/getabstract-provisioning-tutorial.md | |
active-directory | Getabstract Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/getabstract-tutorial.md | |
active-directory | Getthere Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/getthere-tutorial.md | |
active-directory | Ghae Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ghae-tutorial.md | |
active-directory | Gigya Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gigya-tutorial.md | |
active-directory | Github Ae Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-ae-provisioning-tutorial.md | |
active-directory | Github Ae Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-ae-tutorial.md | |
active-directory | Github Enterprise Cloud Enterprise Account Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-cloud-enterprise-account-tutorial.md | |
active-directory | Github Enterprise Managed User Oidc Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md | |
active-directory | Github Enterprise Managed User Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-provisioning-tutorial.md | |
active-directory | Github Enterprise Managed User Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-enterprise-managed-user-tutorial.md | |
active-directory | Github Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-provisioning-tutorial.md | |
active-directory | Github Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/github-tutorial.md | |
active-directory | Glassfrog Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/glassfrog-tutorial.md | |
active-directory | Glint Inc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/glint-inc-tutorial.md | |
active-directory | Global Relay Identity Sync Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/global-relay-identity-sync-provisioning-tutorial.md | |
active-directory | Globalone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/globalone-tutorial.md | |
active-directory | Globesmart Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/globesmart-tutorial.md | |
active-directory | Goalquest Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/goalquest-tutorial.md | |
active-directory | Golinks Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/golinks-provisioning-tutorial.md | |
active-directory | Golinks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/golinks-tutorial.md | |
active-directory | Gong Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md | |
active-directory | Goodpractice Toolkit Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/goodpractice-toolkit-tutorial.md | |
active-directory | Google Apps Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md | |
active-directory | Gr8 People Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gr8-people-tutorial.md | |
active-directory | Gradle Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gradle-enterprise-tutorial.md | |
active-directory | Grammarly Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grammarly-provisioning-tutorial.md | |
active-directory | Grammarly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grammarly-tutorial.md | |
active-directory | Grape Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grape-tutorial.md | |
active-directory | Greenhouse Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/greenhouse-tutorial.md | |
active-directory | Greenlight Compliant Access Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/greenlight-compliant-access-management-tutorial.md | |
active-directory | Greenlight Enterprise Business Controls Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/greenlight-enterprise-business-controls-platform-tutorial.md | |
active-directory | Greenlight Integration Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/greenlight-integration-platform-tutorial.md | |
active-directory | Greenorbit Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/greenorbit-tutorial.md | |
active-directory | Grok Learning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grok-learning-tutorial.md | |
active-directory | Grouptalk Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grouptalk-provisioning-tutorial.md | |
active-directory | Grovo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/grovo-tutorial.md | |
active-directory | Gtmhub Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gtmhub-provisioning-tutorial.md | |
active-directory | Gtnexus Sso Module Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gtnexus-sso-module-tutorial.md | |
active-directory | Guardium Data Protection Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/guardium-data-protection-tutorial.md | |
active-directory | H5mag Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/h5mag-provisioning-tutorial.md | |
active-directory | Hackerone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hackerone-tutorial.md | |
active-directory | Hacknotice Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hacknotice-tutorial.md | |
active-directory | Halogen Software Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/halogen-software-tutorial.md | |
active-directory | Halosys Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/halosys-tutorial.md | |
active-directory | Happyfox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/happyfox-tutorial.md | |
active-directory | Harmony Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/harmony-tutorial.md | |
active-directory | Harness Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/harness-provisioning-tutorial.md | |
active-directory | Harness Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/harness-tutorial.md | |
active-directory | Hcaptcha Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hcaptcha-enterprise-tutorial.md | |
active-directory | Header Citrix Netscaler Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/header-citrix-netscaler-tutorial.md | |
active-directory | Headspace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/headspace-tutorial.md | |
active-directory | Health Support System Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/health-support-system-tutorial.md | |
active-directory | Helloid Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/helloid-provisioning-tutorial.md | |
active-directory | Helper Helper Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/helper-helper-tutorial.md | |
active-directory | Helpscout Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/helpscout-tutorial.md | |
active-directory | Helpshift Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/helpshift-tutorial.md | |
active-directory | Heroku Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/heroku-tutorial.md | |
active-directory | Heybuddy Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/heybuddy-tutorial.md | |
active-directory | Highgear Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/highgear-tutorial.md | |
active-directory | Highground Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/highground-tutorial.md | |
active-directory | Hightail Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hightail-tutorial.md | |
active-directory | Hirebridge Ats Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hirebridge-ats-tutorial.md | |
active-directory | Hiretual Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hiretual-tutorial.md | |
active-directory | Hirevue Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hirevue-tutorial.md | |
active-directory | Hive Learning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hive-learning-tutorial.md | |
active-directory | Hive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hive-tutorial.md | |
active-directory | Holmes Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/holmes-cloud-provisioning-tutorial.md | |
active-directory | Holmes Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/holmes-tutorial.md | |
active-directory | Honestly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/honestly-tutorial.md | |
active-directory | Hootsuite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hootsuite-provisioning-tutorial.md | |
active-directory | Hootsuite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hootsuite-tutorial.md | |
active-directory | Hopsworks Ai Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hopsworks-ai-tutorial.md | |
active-directory | Hornbill Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hornbill-tutorial.md | |
active-directory | Hosted Heritage Online Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hosted-heritage-online-sso-tutorial.md | |
active-directory | Hosted Mycirqa Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hosted-mycirqa-sso-tutorial.md | |
active-directory | Hostedgraphite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hostedgraphite-tutorial.md | |
active-directory | Hownow Webapp Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hownow-webapp-sso-tutorial.md | |
active-directory | Hoxhunt Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hoxhunt-provisioning-tutorial.md | |
active-directory | Hoxhunt Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hoxhunt-tutorial.md | |
active-directory | Hpesaas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hpesaas-tutorial.md | |
active-directory | Hr2day Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hr2day-tutorial.md | |
active-directory | Hrworks Single Sign On Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hrworks-single-sign-on-tutorial.md | |
active-directory | Hsb Thoughtspot Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hsb-thoughtspot-tutorial.md | |
active-directory | Hub Planner Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hub-planner-tutorial.md | |
active-directory | Hubble Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hubble-tutorial.md | |
active-directory | Hubspot Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hubspot-tutorial.md | |
active-directory | Huddle Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/huddle-tutorial.md | |
active-directory | Humanage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/humanage-tutorial.md | |
active-directory | Hype Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hype-tutorial.md | |
active-directory | Hyperanna Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hyperanna-tutorial.md | |
active-directory | Iamip Patent Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iamip-patent-platform-tutorial.md | |
active-directory | Iauditor Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iauditor-tutorial.md | |
active-directory | Ibm Digital Business Automation On Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ibm-digital-business-automation-on-cloud-tutorial.md | |
active-directory | Ibmid Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ibmid-tutorial.md | |
active-directory | Ibmopenpages Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ibmopenpages-tutorial.md | |
active-directory | Ice Contact Center Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ice-contact-center-tutorial.md | |
active-directory | Icims Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/icims-tutorial.md | |
active-directory | Idc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/idc-tutorial.md | |
active-directory | Ideagen Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md | |
active-directory | Ideascale Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideascale-tutorial.md | |
active-directory | Ideo Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideo-provisioning-tutorial.md | |
active-directory | Idid Manager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/idid-manager-tutorial.md | |
active-directory | Idrive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/idrive-tutorial.md | |
active-directory | Idrive360 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/idrive360-tutorial.md | |
active-directory | Igloo Software Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/igloo-software-tutorial.md | |
active-directory | Igrafx Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/igrafx-platform-tutorial.md | |
active-directory | Ihasco Training Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ihasco-training-tutorial.md | |
active-directory | Illusive Networks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/illusive-networks-tutorial.md | |
active-directory | Ilms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ilms-tutorial.md | |
active-directory | Imagerelay Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/imagerelay-tutorial.md | |
active-directory | Imageworks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/imageworks-tutorial.md | |
active-directory | Imagineerwebvision Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/imagineerwebvision-tutorial.md | |
active-directory | Impacriskmanager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/impacriskmanager-tutorial.md | |
active-directory | Imperva Data Security Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/imperva-data-security-tutorial.md | |
active-directory | In Case Of Crisis Mobile Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/in-case-of-crisis-mobile-tutorial.md | |
active-directory | In Case Of Crisis Online Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/in-case-of-crisis-online-portal-tutorial.md | |
active-directory | Infinitecampus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infinitecampus-tutorial.md | |
active-directory | Infinityqs Proficient On Demand Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infinityqs-proficient-on-demand-tutorial.md | |
active-directory | Infogix Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infogix-tutorial.md | |
active-directory | Infor Cloud Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infor-cloud-suite-tutorial.md | |
active-directory | Infor Cloudsuite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infor-cloudsuite-provisioning-tutorial.md | |
active-directory | Informacast Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/informacast-tutorial.md | |
active-directory | Informatica Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/informatica-platform-tutorial.md | |
active-directory | Inforretailinformationmanagement Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/inforretailinformationmanagement-tutorial.md | |
active-directory | Infrascale Cloud Backup Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infrascale-cloud-backup-tutorial.md | |
active-directory | Inkling Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/inkling-tutorial.md | |
active-directory | Innotas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/innotas-tutorial.md | |
active-directory | Innovationhub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/innovationhub-tutorial.md | |
active-directory | Insidertrack Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insidertrack-tutorial.md | |
active-directory | Insideview Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insideview-tutorial.md | |
active-directory | Insight4grc Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insight4grc-provisioning-tutorial.md | |
active-directory | Insight4grc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insight4grc-tutorial.md | |
active-directory | Insigniasamlsso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insigniasamlsso-tutorial.md | |
active-directory | Insite Lms Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insite-lms-provisioning-tutorial.md | |
active-directory | Insperityexpensable Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insperityexpensable-tutorial.md | |
active-directory | Instavr Viewer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/instavr-viewer-tutorial.md | |
active-directory | Insuite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/insuite-tutorial.md | |
active-directory | Intacct Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intacct-tutorial.md | |
active-directory | Intelligencebank Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intelligencebank-tutorial.md | |
active-directory | International Sos Assistance Products Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/international-sos-assistance-products-tutorial.md | |
active-directory | Intime Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intime-tutorial.md | |
active-directory | Intralinks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intralinks-tutorial.md | |
active-directory | Introdus Pre And Onboarding Platform Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md | |
active-directory | Intsights Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/intsights-tutorial.md | |
active-directory | Invision Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/invision-provisioning-tutorial.md | |
active-directory | Invision Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/invision-tutorial.md | |
active-directory | Invitedesk Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/invitedesk-provisioning-tutorial.md | |
active-directory | Ip Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ip-platform-tutorial.md | |
active-directory | Ipass Smartconnect Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ipass-smartconnect-provisioning-tutorial.md | |
active-directory | Ipasssmartconnect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ipasssmartconnect-tutorial.md | |
active-directory | Ipoint Service Provider Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ipoint-service-provider-tutorial.md | |
active-directory | Iqnavigatorvms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iqnavigatorvms-tutorial.md | |
active-directory | Iqualify Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iqualify-tutorial.md | |
active-directory | Iris Intranet Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iris-intranet-provisioning-tutorial.md | |
active-directory | Iris Intranet Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iris-intranet-tutorial.md | |
active-directory | Iriusrisk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iriusrisk-tutorial.md | |
active-directory | Isams Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/isams-tutorial.md | |
active-directory | Iserver Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iserver-portal-tutorial.md | |
active-directory | Isight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/isight-tutorial.md | |
active-directory | Itrp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/itrp-tutorial.md | |
active-directory | Itslearning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/itslearning-tutorial.md | |
active-directory | Ivanti Service Manager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ivanti-service-manager-tutorial.md | |
active-directory | Ivm Smarthub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ivm-smarthub-tutorial.md | |
active-directory | Iwellnessnow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iwellnessnow-tutorial.md | |
active-directory | Iwt Procurement Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iwt-procurement-suite-tutorial.md | |
active-directory | Jamfprosamlconnector Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jamfprosamlconnector-tutorial.md | |
active-directory | Javelo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/javelo-tutorial.md | |
active-directory | Jdacloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jdacloud-tutorial.md | |
active-directory | Jedox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jedox-tutorial.md | |
active-directory | Jfrog Artifactory Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jfrog-artifactory-tutorial.md | |
active-directory | Jira52microsoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jira52microsoft-tutorial.md | |
active-directory | Jiramicrosoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md | |
active-directory | Jisc Student Voter Registration Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jisc-student-voter-registration-tutorial.md | |
active-directory | Jitbit Helpdesk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jitbit-helpdesk-tutorial.md | |
active-directory | Jive Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jive-provisioning-tutorial.md | |
active-directory | Jive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jive-tutorial.md | |
active-directory | Jll Tririga Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jll-tririga-tutorial.md | |
active-directory | Jobbadmin Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jobbadmin-tutorial.md | |
active-directory | Jobhub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jobhub-tutorial.md | |
active-directory | Jobscience Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jobscience-tutorial.md | |
active-directory | Jobscore Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jobscore-tutorial.md | |
active-directory | Joinedup Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/joinedup-tutorial.md | |
active-directory | Joinme Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/joinme-tutorial.md | |
active-directory | Jooto Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jooto-tutorial.md | |
active-directory | Josa Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/josa-tutorial.md | |
active-directory | Jostle Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jostle-provisioning-tutorial.md | |
active-directory | Jostle Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jostle-tutorial.md | |
active-directory | Joyn Fsm Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/joyn-fsm-provisioning-tutorial.md | |
active-directory | Juno Journey Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/juno-journey-provisioning-tutorial.md | |
active-directory | Juno Journey Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/juno-journey-tutorial.md | |
active-directory | Juriblox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/juriblox-tutorial.md | |
active-directory | Justlogin Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/justlogin-tutorial.md | |
active-directory | Kallidus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kallidus-tutorial.md | |
active-directory | Kanbanize Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kanbanize-tutorial.md | |
active-directory | Kantegassoforbamboo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kantegassoforbamboo-tutorial.md | |
active-directory | Kantegassoforbitbucket Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kantegassoforbitbucket-tutorial.md | |
active-directory | Kantegassoforconfluence Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kantegassoforconfluence-tutorial.md | |
active-directory | Kantegassoforfisheyecrucible Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kantegassoforfisheyecrucible-tutorial.md | |
active-directory | Kantegassoforjira Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kantegassoforjira-tutorial.md | |
active-directory | Kao Navi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kao-navi-tutorial.md | |
active-directory | Keepabl Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keepabl-provisioning-tutorial.md | |
active-directory | Keepabl Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keepabl-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. a. In the **Identifier** textbox, type a value using the following pattern: `keepabl_microsoft_azure_<OrganizationID>` - b. In the **Reply URL** text box, type one of the following URLs: -- | **Reply URL** | - || - | `https://app.keepabl.com/users/saml/auth` | - | `https://keepabl.herokuapp.com/users/saml/auth` | + b. In the **Reply URL** text box, type the URL: `https://app.keepabl.com/users/saml/auth` 1. Click **Set additional URLs** and perform the following step if you wish to configure the application in SP initiated mode: |
active-directory | Keeper Password Manager Digitalvault Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keeper-password-manager-digitalvault-provisioning-tutorial.md | |
active-directory | Keeperpasswordmanager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keeperpasswordmanager-tutorial.md | |
active-directory | Kemp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kemp-tutorial.md | |
active-directory | Kendis Scaling Agile Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kendis-scaling-agile-platform-tutorial.md | |
active-directory | Kenexasurvey Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kenexasurvey-tutorial.md | |
active-directory | Kerbf5 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kerbf5-tutorial.md | |
active-directory | Keystone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/keystone-tutorial.md | |
active-directory | Kfadvance Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kfadvance-tutorial.md | |
active-directory | Khoros Care Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/khoros-care-tutorial.md | |
active-directory | Kindling Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kindling-tutorial.md | |
active-directory | Kintone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kintone-tutorial.md | |
active-directory | Kisi Physical Security Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kisi-physical-security-provisioning-tutorial.md | |
active-directory | Kisi Physical Security Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kisi-physical-security-tutorial.md | |
active-directory | Kiteworks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kiteworks-tutorial.md | |
active-directory | Klaxoon Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-provisioning-tutorial.md | |
active-directory | Klaxoon Saml Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-saml-provisioning-tutorial.md | |
active-directory | Klaxoon Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-saml-tutorial.md | |
active-directory | Klue Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klue-tutorial.md | |
active-directory | Knowbe4 Security Awareness Training Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/knowbe4-security-awareness-training-provisioning-tutorial.md | |
active-directory | Knowbe4 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/knowbe4-tutorial.md | |
active-directory | Knowledge Anywhere Lms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/knowledge-anywhere-lms-tutorial.md | |
active-directory | Knowledgeowl Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/knowledgeowl-tutorial.md | |
active-directory | Kontiki Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kontiki-tutorial.md | |
active-directory | Korn Ferry 360 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/korn-ferry-360-tutorial.md | |
active-directory | Korn Ferry Alp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/korn-ferry-alp-tutorial.md | |
active-directory | Kpifire Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kpifire-provisioning-tutorial.md | |
active-directory | Kpifire Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kpifire-tutorial.md | |
active-directory | Kpmg Tool Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kpmg-tool-tutorial.md | |
active-directory | Kpn Grip Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kpn-grip-provisioning-tutorial.md | |
active-directory | Kronos Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kronos-tutorial.md | |
active-directory | Kronos Workforce Dimensions Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kronos-workforce-dimensions-tutorial.md | |
active-directory | Kudos Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kudos-tutorial.md | |
active-directory | Kumolus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kumolus-tutorial.md | |
active-directory | Lablog Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lablog-tutorial.md | |
active-directory | Landgorilla Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/landgorilla-tutorial.md | |
active-directory | Lanschool Air Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lanschool-air-provisioning-tutorial.md | |
active-directory | Lanschool Air Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lanschool-air-tutorial.md | |
active-directory | Lattice Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lattice-tutorial.md | |
active-directory | Launchdarkly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/launchdarkly-tutorial.md | |
active-directory | Lawvu Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lawvu-provisioning-tutorial.md | |
active-directory | Lawvu Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lawvu-tutorial.md | |
active-directory | Lcvista Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lcvista-tutorial.md | |
active-directory | Leadfamly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leadfamly-tutorial.md | |
active-directory | Lean Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lean-tutorial.md | |
active-directory | Leapsome Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leapsome-provisioning-tutorial.md | |
active-directory | Leapsome Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leapsome-tutorial.md | |
active-directory | Learning At Work Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/learning-at-work-tutorial.md | |
active-directory | Learningpool Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/learningpool-tutorial.md | |
active-directory | Learningseatlms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/learningseatlms-tutorial.md | |
active-directory | Learnster Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/learnster-tutorial.md | |
active-directory | Learnupon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/learnupon-tutorial.md | |
active-directory | Lecorpio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lecorpio-tutorial.md | |
active-directory | Lensesio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lensesio-tutorial.md | |
active-directory | Lessonly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lessonly-tutorial.md | |
active-directory | Lexion Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lexion-tutorial.md | |
active-directory | Lexonis Talentscape Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lexonis-talentscape-tutorial.md | |
active-directory | Lifesize Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lifesize-cloud-tutorial.md | |
active-directory | Lift Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lift-tutorial.md | |
active-directory | Limblecmms Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/limblecmms-provisioning-tutorial.md | |
active-directory | Lines Elibrary Advance Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lines-elibrary-advance-tutorial.md | |
active-directory | Linkedin Talent Solutions Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedin-talent-solutions-tutorial.md | |
active-directory | Linkedinelevate Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinelevate-provisioning-tutorial.md | |
active-directory | Linkedinelevate Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinelevate-tutorial.md | |
active-directory | Linkedinlearning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinlearning-tutorial.md | |
active-directory | Linkedinsalesnavigator Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinsalesnavigator-provisioning-tutorial.md | |
active-directory | Linkedinsalesnavigator Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/linkedinsalesnavigator-tutorial.md | |
active-directory | Liquidfiles Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/liquidfiles-tutorial.md | |
active-directory | Litmos Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/litmos-tutorial.md | |
active-directory | Litmus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/litmus-tutorial.md | |
active-directory | Lms And Education Management System Leaf Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lms-and-education-management-system-leaf-tutorial.md | |
active-directory | Logicgate Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/logicgate-provisioning-tutorial.md | |
active-directory | Logicmonitor Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/logicmonitor-tutorial.md | |
active-directory | Logmein Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/logmein-provisioning-tutorial.md | |
active-directory | Logmein Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/logmein-tutorial.md | |
active-directory | Logzio Cloud Observability For Engineers Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/logzio-cloud-observability-for-engineers-tutorial.md | |
active-directory | Looker Analytics Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/looker-analytics-platform-tutorial.md | |
active-directory | Looop Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/looop-provisioning-tutorial.md | |
active-directory | Loop Flow Crm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/loop-flow-crm-tutorial.md | |
active-directory | Lr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lr-tutorial.md | |
active-directory | Lucid All Products Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucid-all-products-provisioning-tutorial.md | |
active-directory | Lucid Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucid-tutorial.md | |
active-directory | Lucidchart Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucidchart-provisioning-tutorial.md | |
active-directory | Lucidchart Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lucidchart-tutorial.md | |
active-directory | Lusid Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lusid-tutorial.md | |
active-directory | Luum Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/luum-tutorial.md | |
active-directory | Lynda Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lynda-tutorial.md | |
active-directory | Lytx Drivecam Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lytx-drivecam-tutorial.md | |
active-directory | Lyve Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lyve-cloud-tutorial.md | |
active-directory | M Files Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/m-files-tutorial.md | |
active-directory | Mail Luck Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mail-luck-tutorial.md | |
active-directory | Mailgates Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mailgates-tutorial.md | |
active-directory | Manabipocket Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/manabipocket-tutorial.md | |
active-directory | Manifestly Checklists Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/manifestly-checklists-tutorial.md | |
active-directory | Mapbox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mapbox-tutorial.md | |
active-directory | Mapiq Essentials Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mapiq-essentials-tutorial.md | |
active-directory | Mapiq Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mapiq-tutorial.md | |
active-directory | Maptician Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maptician-provisioning-tutorial.md | |
active-directory | Maptician Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maptician-tutorial.md | |
active-directory | Marketo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/marketo-tutorial.md | |
active-directory | Maverics Identity Orchestrator Saml Connector Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md | |
active-directory | Maxient Conduct Manager Software Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maxient-conduct-manager-software-tutorial.md | |
active-directory | Maxxpoint Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/maxxpoint-tutorial.md | |
active-directory | Mcm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mcm-tutorial.md | |
active-directory | Fortigate Deployment Guide Converted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/media/fortigate-ssl-vpn-tutorial/fortigate-deployment-guide-converted.md | |
active-directory | Mediusflow Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mediusflow-provisioning-tutorial.md | |
active-directory | Menlosecurity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/menlosecurity-tutorial.md | |
active-directory | Meraki Dashboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meraki-dashboard-tutorial.md | |
active-directory | Mercell Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mercell-tutorial.md | |
active-directory | Mercerhrs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mercerhrs-tutorial.md | |
active-directory | Merchlogix Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/merchlogix-provisioning-tutorial.md | |
active-directory | Merchlogix Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/merchlogix-tutorial.md | |
active-directory | Meta Networks Connector Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md | |
active-directory | Meta Work Accounts Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-work-accounts-tutorial.md | |
active-directory | Meta4 Global Hr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta4-global-hr-tutorial.md | |
active-directory | Metanetworksconnector Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/metanetworksconnector-tutorial.md | |
active-directory | Metatask Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/metatask-tutorial.md | |
active-directory | Mevisio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mevisio-tutorial.md | |
active-directory | Michigan Data Hub Single Sign On Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/michigan-data-hub-single-sign-on-tutorial.md | |
active-directory | Mihcm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mihcm-tutorial.md | |
active-directory | Mimecast Personal Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mimecast-personal-portal-tutorial.md | |
active-directory | Mindflash Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mindflash-tutorial.md | |
active-directory | Mindtickle Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mindtickle-provisioning-tutorial.md | |
active-directory | Mindtickle Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mindtickle-tutorial.md | |
active-directory | Mindwireless Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mindwireless-tutorial.md | |
active-directory | Miro Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/miro-provisioning-tutorial.md | |
active-directory | Miro Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/miro-tutorial.md | |
active-directory | Mist Cloud Admin Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mist-cloud-admin-tutorial.md | |
active-directory | Mitel Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mitel-connect-tutorial.md | |
active-directory | Mixpanel Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mixpanel-provisioning-tutorial.md | |
active-directory | Mixpanel Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mixpanel-tutorial.md | |
active-directory | Mobi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mobi-tutorial.md | |
active-directory | Mobicontrol Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mobicontrol-tutorial.md | |
active-directory | Mobile Locker Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mobile-locker-tutorial.md | |
active-directory | Mobileiron Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mobileiron-tutorial.md | |
active-directory | Mobilexpense Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mobilexpense-tutorial.md | |
active-directory | Moconavi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/moconavi-tutorial.md | |
active-directory | Momenta Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/momenta-tutorial.md | |
active-directory | Mondaycom Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mondaycom-provisioning-tutorial.md | |
active-directory | Mondaycom Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mondaycom-tutorial.md | |
active-directory | Mongodb Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mongodb-cloud-tutorial.md | |
active-directory | Montageonline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/montageonline-tutorial.md | |
active-directory | Moqups Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/moqups-tutorial.md | |
active-directory | Motus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/motus-tutorial.md | |
active-directory | Moveittransfer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/moveittransfer-tutorial.md | |
active-directory | Moxiengage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/moxiengage-tutorial.md | |
active-directory | Moxtra Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/moxtra-tutorial.md | |
active-directory | Mozy Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mozy-enterprise-tutorial.md | |
active-directory | Ms Azure Sso Access For Ethidex Compliance Office Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-azure-sso-access-for-ethidex-compliance-office-tutorial.md | |
active-directory | Ms Confluence Jira Plugin Adminguide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md | |
active-directory | Mural Identity Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-provisioning-tutorial.md | |
active-directory | Mural Identity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-tutorial.md | |
active-directory | Mx3 Diagnostics Connector Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md | |
active-directory | My Ibisworld Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/my-ibisworld-tutorial.md | |
active-directory | Myaos Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myaos-tutorial.md | |
active-directory | Myaryaka Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myaryaka-tutorial.md | |
active-directory | Myawardpoints Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myawardpoints-tutorial.md | |
active-directory | Myday Provision Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myday-provision-tutorial.md | |
active-directory | Mypolicies Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mypolicies-provisioning-tutorial.md | |
active-directory | Mypolicies Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mypolicies-tutorial.md | |
active-directory | Myvr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myvr-tutorial.md | |
active-directory | Myworkdrive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/myworkdrive-tutorial.md | |
active-directory | N2f Expensereports Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/n2f-expensereports-tutorial.md | |
active-directory | Namely Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/namely-tutorial.md | |
active-directory | Nature Research Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nature-research-tutorial.md | |
active-directory | Navex Irm Keylight Lockpath Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/navex-irm-keylight-lockpath-tutorial.md | |
active-directory | Navex One Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/navex-one-tutorial.md | |
active-directory | Negometrixportal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/negometrixportal-tutorial.md | |
active-directory | Neogov Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/neogov-tutorial.md | |
active-directory | Neotalogicstudio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/neotalogicstudio-tutorial.md | |
active-directory | Netdocuments Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netdocuments-tutorial.md | |
active-directory | Netmotion Mobility Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netmotion-mobility-tutorial.md | |
active-directory | Netop Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netop-portal-tutorial.md | |
active-directory | Netpresenter Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netpresenter-provisioning-tutorial.md | |
active-directory | Netskope Administrator Console Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-administrator-console-provisioning-tutorial.md | |
active-directory | Netskope Cloud Security Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-cloud-security-tutorial.md | |
active-directory | Netskope User Authentication Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netskope-user-authentication-tutorial.md | |
active-directory | Netsparker Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netsparker-enterprise-tutorial.md | |
active-directory | Netsuite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netsuite-provisioning-tutorial.md | |
active-directory | Netsuite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netsuite-tutorial.md | |
active-directory | Netvision Compas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netvision-compas-tutorial.md | |
active-directory | Neustar Ultradns Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/neustar-ultradns-tutorial.md | |
active-directory | New Relic By Organization Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/new-relic-by-organization-provisioning-tutorial.md | |
active-directory | New Relic Limited Release Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/new-relic-limited-release-tutorial.md | |
active-directory | New Relic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/new-relic-tutorial.md | |
active-directory | Newsignature Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/newsignature-tutorial.md | |
active-directory | Nexonia Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nexonia-tutorial.md | |
active-directory | Nexsure Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nexsure-tutorial.md | |
active-directory | Nice Cxone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nice-cxone-tutorial.md | |
active-directory | Nimblex Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nimblex-tutorial.md | |
active-directory | Nimbus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nimbus-tutorial.md | |
active-directory | Nitro Productivity Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nitro-productivity-suite-tutorial.md | |
active-directory | Nodetrax Project Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nodetrax-project-tutorial.md | |
active-directory | Nomadesk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nomadesk-tutorial.md | |
active-directory | Nomadic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nomadic-tutorial.md | |
active-directory | Nordpass Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nordpass-provisioning-tutorial.md | |
active-directory | Notion Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/notion-tutorial.md | |
active-directory | Novatus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/novatus-tutorial.md | |
active-directory | Ns1 Sso Azure Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ns1-sso-azure-tutorial.md | |
active-directory | Nuclino Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nuclino-tutorial.md | |
active-directory | Nulab Pass Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/nulab-pass-tutorial.md | |
active-directory | Numlyengage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/numlyengage-tutorial.md | |
active-directory | Oc Tanner Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oc-tanner-tutorial.md | |
active-directory | Officespace Software Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/officespace-software-provisioning-tutorial.md | |
active-directory | Officespace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/officespace-tutorial.md | |
active-directory | Oktopost Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oktopost-saml-tutorial.md | |
active-directory | Olfeo Saas Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/olfeo-saas-provisioning-tutorial.md | |
active-directory | Olfeo Saas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/olfeo-saas-tutorial.md | |
active-directory | On24 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/on24-tutorial.md | |
active-directory | Onedesk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/onedesk-tutorial.md | |
active-directory | Oneteam Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oneteam-tutorial.md | |
active-directory | Onetrust Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/onetrust-tutorial.md | |
active-directory | Onit Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/onit-tutorial.md | |
active-directory | Onshape Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/onshape-tutorial.md | |
active-directory | Ontrack Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ontrack-tutorial.md | |
active-directory | Opal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opal-tutorial.md | |
active-directory | Open Text Directory Services Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/open-text-directory-services-provisioning-tutorial.md | |
active-directory | Openathens Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openathens-tutorial.md | |
active-directory | Openidoauth Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openidoauth-tutorial.md | |
active-directory | Openlearning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openlearning-tutorial.md | |
active-directory | Opentext Directory Services Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opentext-directory-services-tutorial.md | |
active-directory | Opentext Fax Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opentext-fax-tutorial.md | |
active-directory | Opsgenie Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opsgenie-tutorial.md | |
active-directory | Optimizely Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/optimizely-tutorial.md | |
active-directory | Oracle Cloud Infrastructure Console Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md | |
active-directory | Oracle Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-tutorial.md | |
active-directory | Oracle Fusion Erp Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-fusion-erp-provisioning-tutorial.md | |
active-directory | Oracle Fusion Erp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-fusion-erp-tutorial.md | |
active-directory | Orgchartnow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/orgchartnow-tutorial.md | |
active-directory | Orgvitality Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/orgvitality-sso-tutorial.md | |
active-directory | Origami Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/origami-tutorial.md | |
active-directory | Otsuka Shokai Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/otsuka-shokai-tutorial.md | |
active-directory | Ou Campus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ou-campus-tutorial.md | |
active-directory | Outsystems Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/outsystems-tutorial.md | |
active-directory | Overdrive Books Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/overdrive-books-tutorial.md | |
active-directory | Pacific Timesheet Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pacific-timesheet-tutorial.md | |
active-directory | Pagedna Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pagedna-tutorial.md | |
active-directory | Pagerduty Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pagerduty-tutorial.md | |
active-directory | Palantir Foundry Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palantir-foundry-tutorial.md | |
active-directory | Palo Alto Networks Cloud Identity Engine Cloud Authentication Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palo-alto-networks-cloud-identity-engine---cloud-authentication-service-tutorial.md | |
active-directory | Palo Alto Networks Cloud Identity Engine Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palo-alto-networks-cloud-identity-engine-provisioning-tutorial.md | |
active-directory | Palo Alto Networks Globalprotect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palo-alto-networks-globalprotect-tutorial.md | |
active-directory | Palo Alto Networks Scim Connector Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/palo-alto-networks-scim-connector-provisioning-tutorial.md | |
active-directory | Paloaltoadmin Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/paloaltoadmin-tutorial.md | |
active-directory | Paloaltonetworks Aperture Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/paloaltonetworks-aperture-tutorial.md | |
active-directory | Paloaltonetworks Captiveportal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/paloaltonetworks-captiveportal-tutorial.md | |
active-directory | Pandadoc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pandadoc-tutorial.md | |
active-directory | Panopto Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/panopto-tutorial.md | |
active-directory | Panorama9 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/panorama9-tutorial.md | |
active-directory | Panorays Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/panorays-tutorial.md | |
active-directory | Pantheon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pantheon-tutorial.md | |
active-directory | Papercut Cloud Print Management Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/papercut-cloud-print-management-provisioning-tutorial.md | |
active-directory | Parkalot Car Park Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parkalot-car-park-management-tutorial.md | |
active-directory | Parkhere Corporate Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parkhere-corporate-tutorial.md | |
active-directory | Parsable Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parsable-provisioning-tutorial.md | |
active-directory | Patentsquare Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/patentsquare-tutorial.md | |
active-directory | Pavaso Digital Close Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pavaso-digital-close-tutorial.md | |
active-directory | Paylocity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/paylocity-tutorial.md | |
active-directory | Peakon Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peakon-provisioning-tutorial.md | |
active-directory | Peakon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peakon-tutorial.md | |
active-directory | Pegasystems Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pegasystems-tutorial.md | |
active-directory | Pendo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pendo-tutorial.md | |
active-directory | Penji Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/penji-tutorial.md | |
active-directory | People Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/people-tutorial.md | |
active-directory | Peoplecart Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peoplecart-tutorial.md | |
active-directory | Per Angusta Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/per-angusta-tutorial.md | |
active-directory | Perceptionunitedstates Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/perceptionunitedstates-tutorial.md | |
active-directory | Perceptyx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/perceptyx-tutorial.md | |
active-directory | Percolate Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/percolate-tutorial.md | |
active-directory | Perforce Helix Core Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/perforce-helix-core-tutorial.md | |
active-directory | Performancecentre Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/performancecentre-tutorial.md | |
active-directory | Perimeter 81 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/perimeter-81-tutorial.md | |
active-directory | Perimeterx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/perimeterx-tutorial.md | |
active-directory | Peripass Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peripass-provisioning-tutorial.md | |
active-directory | Periscope Data Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/periscope-data-tutorial.md | |
active-directory | Petrovue Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/petrovue-tutorial.md | |
active-directory | Pexip Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pexip-tutorial.md | |
active-directory | Phenom Txm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/phenom-txm-tutorial.md | |
active-directory | Phraseanet Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/phraseanet-tutorial.md | |
active-directory | Picturepark Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/picturepark-tutorial.md | |
active-directory | Pingboard Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pingboard-provisioning-tutorial.md | |
active-directory | Pingboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pingboard-tutorial.md | |
active-directory | Pipedrive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pipedrive-tutorial.md | |
active-directory | Plandisc Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/plandisc-provisioning-tutorial.md | |
active-directory | Plangrid Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/plangrid-tutorial.md | |
active-directory | Planmyleave Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/planmyleave-tutorial.md | |
active-directory | Planview Enterprise One Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/planview-enterprise-one-tutorial.md | |
active-directory | Planview Id Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/planview-id-tutorial.md | |
active-directory | Planview Leankit Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/planview-leankit-tutorial.md | |
active-directory | Playvox Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/playvox-provisioning-tutorial.md | |
active-directory | Pluralsight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pluralsight-tutorial.md | |
active-directory | Pluto Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pluto-tutorial.md | |
active-directory | Podbean Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/podbean-tutorial.md | |
active-directory | Policystat Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/policystat-tutorial.md | |
active-directory | Poolparty Semantic Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/poolparty-semantic-suite-tutorial.md | |
active-directory | Postbeyond Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postbeyond-tutorial.md | |
active-directory | Postman Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postman-tutorial.md | |
active-directory | Powerschool Performance Matters Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/powerschool-performance-matters-tutorial.md | |
active-directory | Preciate Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/preciate-provisioning-tutorial.md | |
active-directory | Predictix Assortment Planning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/predictix-assortment-planning-tutorial.md | |
active-directory | Predictixordering Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/predictixordering-tutorial.md | |
active-directory | Predictixpricereporting Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/predictixpricereporting-tutorial.md | |
active-directory | Preset Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/preset-tutorial.md | |
active-directory | Presspage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/presspage-tutorial.md | |
active-directory | Prezi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prezi-tutorial.md | |
active-directory | Printer Logic Saas Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/printer-logic-saas-provisioning-tutorial.md | |
active-directory | Printerlogic Saas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/printerlogic-saas-tutorial.md | |
active-directory | Printix Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/printix-tutorial.md | |
active-directory | Priority Matrix Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/priority-matrix-provisioning-tutorial.md | |
active-directory | Prisma Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prisma-cloud-tutorial.md | |
active-directory | Procaire Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/procaire-tutorial.md | |
active-directory | Processunity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/processunity-tutorial.md | |
active-directory | Procoresso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/procoresso-tutorial.md | |
active-directory | Prodpad Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prodpad-provisioning-tutorial.md | |
active-directory | Prodpad Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prodpad-tutorial.md | |
active-directory | Productboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/productboard-tutorial.md | |
active-directory | Productive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/productive-tutorial.md | |
active-directory | Profitco Saml App Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/profitco-saml-app-tutorial.md | |
active-directory | Projectplace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/projectplace-tutorial.md | |
active-directory | Prolorus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prolorus-tutorial.md | |
active-directory | Promapp Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/promapp-provisioning-tutorial.md | |
active-directory | Promapp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/promapp-tutorial.md | |
active-directory | Promaster Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/promaster-tutorial.md | |
active-directory | Pronovos Analytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pronovos-analytics-tutorial.md | |
active-directory | Pronovos Ops Manager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pronovos-ops-manager-tutorial.md | |
active-directory | Proofpoint Ondemand Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proofpoint-ondemand-tutorial.md | |
active-directory | Proprofs Classroom Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proprofs-classroom-tutorial.md | |
active-directory | Proprofs Knowledge Base Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proprofs-knowledge-base-tutorial.md | |
active-directory | Proto.Io Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proto.io-tutorial.md | |
active-directory | Proware Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proware-provisioning-tutorial.md | |
active-directory | Proware Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proware-tutorial.md | |
active-directory | Proxyclick Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proxyclick-provisioning-tutorial.md | |
active-directory | Proxyclick Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proxyclick-tutorial.md | |
active-directory | Pulse Secure Pcs Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pulse-secure-pcs-tutorial.md | |
active-directory | Pulse Secure Virtual Traffic Manager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md | |
active-directory | Purecloud By Genesys Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/purecloud-by-genesys-provisioning-tutorial.md | |
active-directory | Purecloud By Genesys Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/purecloud-by-genesys-tutorial.md | |
active-directory | Purelyhr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/purelyhr-tutorial.md | |
active-directory | Pymetrics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/pymetrics-tutorial.md | |
active-directory | Qiita Team Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qiita-team-tutorial.md | |
active-directory | Qliksense Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qliksense-enterprise-tutorial.md | |
active-directory | Qmarkets Idea Innovation Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qmarkets-idea-innovation-management-tutorial.md | |
active-directory | Qprism Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qprism-tutorial.md | |
active-directory | Qreserve Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qreserve-tutorial.md | |
active-directory | Qualaroo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qualaroo-tutorial.md | |
active-directory | Qualtrics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qualtrics-tutorial.md | |
active-directory | Quantum Workplace Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/quantum-workplace-tutorial.md | |
active-directory | Questetra Bpm Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/questetra-bpm-suite-tutorial.md | |
active-directory | Quickhelp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/quickhelp-tutorial.md | |
active-directory | Qumucloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qumucloud-tutorial.md | |
active-directory | R And D Tax Credit Services Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/r-and-d-tax-credit-services-tutorial.md | |
active-directory | Rackspacesso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rackspacesso-tutorial.md | |
active-directory | Raketa Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/raketa-tutorial.md | |
active-directory | Rally Software Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rally-software-tutorial.md | |
active-directory | Raumfurraum Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/raumfurraum-tutorial.md | |
active-directory | Readcube Papers Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/readcube-papers-tutorial.md | |
active-directory | Real Links Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/real-links-provisioning-tutorial.md | |
active-directory | Real Links Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/real-links-tutorial.md | |
active-directory | Recognize Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/recognize-tutorial.md | |
active-directory | Recurly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/recurly-tutorial.md | |
active-directory | Redbrick Health Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/redbrick-health-tutorial.md | |
active-directory | Redvector Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/redvector-tutorial.md | |
active-directory | Reflektive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reflektive-tutorial.md | |
active-directory | Remotepc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/remotepc-tutorial.md | |
active-directory | Renraku Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/renraku-tutorial.md | |
active-directory | Replicon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/replicon-tutorial.md | |
active-directory | Reprints Desk Article Galaxy Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reprints-desk-article-galaxy-tutorial.md | |
active-directory | Rescana Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rescana-tutorial.md | |
active-directory | Resource Central Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/resource-central-tutorial.md | |
active-directory | Retail Zipline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/retail-zipline-tutorial.md | |
active-directory | Retrievermediadatabase Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/retrievermediadatabase-tutorial.md | |
active-directory | Reviewsnap Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reviewsnap-tutorial.md | |
active-directory | Reward Gateway Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reward-gateway-provisioning-tutorial.md | |
active-directory | Reward Gateway Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reward-gateway-tutorial.md | |
active-directory | Rewatch Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rewatch-tutorial.md | |
active-directory | Rfpio Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rfpio-provisioning-tutorial.md | |
active-directory | Rfpio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rfpio-tutorial.md | |
active-directory | Rhombus Systems Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rhombus-systems-tutorial.md | |
active-directory | Rightanswers Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rightanswers-tutorial.md | |
active-directory | Rightcrowd Workforce Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rightcrowd-workforce-management-tutorial.md | |
active-directory | Rightscale Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rightscale-tutorial.md | |
active-directory | Ringcentral Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ringcentral-provisioning-tutorial.md | |
active-directory | Ringcentral Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ringcentral-tutorial.md | |
active-directory | Risecom Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/risecom-tutorial.md | |
active-directory | Riskware Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/riskware-tutorial.md | |
active-directory | Riva Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/riva-tutorial.md | |
active-directory | Roadmunk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/roadmunk-tutorial.md | |
active-directory | Robin Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/robin-provisioning-tutorial.md | |
active-directory | Robin Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/robin-tutorial.md | |
active-directory | Rocketreach Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rocketreach-sso-tutorial.md | |
active-directory | Rolepoint Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rolepoint-tutorial.md | |
active-directory | Rollbar Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rollbar-provisioning-tutorial.md | |
active-directory | Rollbar Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rollbar-tutorial.md | |
active-directory | Rootly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rootly-tutorial.md | |
active-directory | Rouse Sales Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rouse-sales-provisioning-tutorial.md | |
active-directory | Rsa Archer Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rsa-archer-suite-tutorial.md | |
active-directory | Rstudio Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rstudio-connect-tutorial.md | |
active-directory | Rstudio Server Pro Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rstudio-server-pro-tutorial.md | |
active-directory | Runmyprocess Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/runmyprocess-tutorial.md | |
active-directory | S4 Digitsec Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/s4-digitsec-tutorial.md | |
active-directory | Saba Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saba-cloud-tutorial.md | |
active-directory | Safeconnect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/safeconnect-tutorial.md | |
active-directory | Safetynet Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/safetynet-tutorial.md | |
active-directory | Sailpoint Identitynow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sailpoint-identitynow-tutorial.md | |
active-directory | Salesforce Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-provisioning-tutorial.md | |
active-directory | Salesforce Sandbox Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-sandbox-provisioning-tutorial.md | |
active-directory | Salesforce Sandbox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-sandbox-tutorial.md | |
active-directory | Salesforce Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-tutorial.md | |
active-directory | Samanage Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/samanage-provisioning-tutorial.md | |
active-directory | Samanage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/samanage-tutorial.md | |
active-directory | Saml Toolkit Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-toolkit-tutorial.md | |
active-directory | Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-tutorial.md | |
active-directory | Samlssoconfluence Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/samlssoconfluence-tutorial.md | |
active-directory | Samlssojira Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/samlssojira-tutorial.md | |
active-directory | Samsara Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/samsara-tutorial.md | |
active-directory | Samsung Knox And Business Services Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/samsung-knox-and-business-services-tutorial.md | |
active-directory | Sansan Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sansan-tutorial.md | |
active-directory | Sap Analytics Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md | |
active-directory | Sap Cloud Platform Identity Authentication Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md | |
active-directory | Sap Customer Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-customer-cloud-tutorial.md | |
active-directory | Sap Fiori Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-fiori-tutorial.md | |
active-directory | Sap Hana Cloud Platform Identity Authentication Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md | |
active-directory | Sap Hana Cloud Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-hana-cloud-platform-tutorial.md | |
active-directory | Sap Netweaver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md | |
active-directory | Sap Successfactors Inbound Provisioning Cloud Only Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md | |
active-directory | Sap Successfactors Inbound Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md | |
active-directory | Sap Successfactors Writeback Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-successfactors-writeback-tutorial.md | |
active-directory | Sapboc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sapboc-tutorial.md | |
active-directory | Sapbusinessbydesign Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sapbusinessbydesign-tutorial.md | |
active-directory | Saphana Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saphana-tutorial.md | |
active-directory | Sapient Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sapient-tutorial.md | |
active-directory | Saucelabs Mobileandwebtesting Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saucelabs-mobileandwebtesting-tutorial.md | |
active-directory | Saviynt Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saviynt-tutorial.md | |
active-directory | Scalex Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/scalex-enterprise-tutorial.md | |
active-directory | Scclifecycle Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/scclifecycle-tutorial.md | |
active-directory | Schoolstream Asa Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/schoolstream-asa-provisioning-tutorial.md | |
active-directory | Schoox Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/schoox-tutorial.md | |
active-directory | Sciforma Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sciforma-tutorial.md | |
active-directory | Sciquest Spend Director Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sciquest-spend-director-tutorial.md | |
active-directory | Screencast Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/screencast-tutorial.md | |
active-directory | Screensteps Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/screensteps-tutorial.md | |
active-directory | Scuba Analytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/scuba-analytics-tutorial.md | |
active-directory | Sd Elements Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sd-elements-tutorial.md | |
active-directory | Sds Chemical Information Management Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sds-chemical-information-management-tutorial.md | |
active-directory | Secretserver On Premises Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/secretserver-on-premises-tutorial.md | |
active-directory | Sectigo Certificate Manager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sectigo-certificate-manager-tutorial.md | |
active-directory | Seculio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/seculio-tutorial.md | |
active-directory | Secure Deliver Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/secure-deliver-provisioning-tutorial.md | |
active-directory | Secure Login Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/secure-login-provisioning-tutorial.md | |
active-directory | Securedeliver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/securedeliver-tutorial.md | |
active-directory | Securejoinnow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/securejoinnow-tutorial.md | |
active-directory | Securitystudio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/securitystudio-tutorial.md | |
active-directory | Sedgwickcms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sedgwickcms-tutorial.md | |
active-directory | Seekout Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/seekout-tutorial.md | |
active-directory | Segment Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/segment-provisioning-tutorial.md | |
active-directory | Segment Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/segment-tutorial.md | |
active-directory | Seismic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/seismic-tutorial.md | |
active-directory | Sendpro Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sendpro-enterprise-tutorial.md | |
active-directory | Sendsafely Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sendsafely-tutorial.md | |
active-directory | Sensoscientific Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sensoscientific-tutorial.md | |
active-directory | Sentry Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sentry-provisioning-tutorial.md | |
active-directory | Sentry Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sentry-tutorial.md | |
active-directory | Sequr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sequr-tutorial.md | |
active-directory | Serraview Space Utilization Software Solutions Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/serraview-space-utilization-software-solutions-tutorial.md | |
active-directory | Servicechannel Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicechannel-tutorial.md | |
active-directory | Servicenow Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md | |
active-directory | Servicenow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md | |
active-directory | Servicessosafe Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicessosafe-tutorial.md | |
active-directory | Settlingmusic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/settlingmusic-tutorial.md | |
active-directory | Sevone Network Monitoring System Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sevone-network-monitoring-system-tutorial.md | |
active-directory | Sharefile Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharefile-tutorial.md | |
active-directory | Sharepoint On Premises Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md | |
active-directory | Sharevault Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharevault-tutorial.md | |
active-directory | Sharingcloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharingcloud-tutorial.md | |
active-directory | Shibumi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shibumi-tutorial.md | |
active-directory | Shiftplanning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shiftplanning-tutorial.md | |
active-directory | Shiftwizard Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shiftwizard-saml-tutorial.md | |
active-directory | Shiphazmat Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shiphazmat-tutorial.md | |
active-directory | Shmoopforschools Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shmoopforschools-tutorial.md | |
active-directory | Shopify Plus Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shopify-plus-provisioning-tutorial.md | |
active-directory | Shopify Plus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shopify-plus-tutorial.md | |
active-directory | Showpad Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/showpad-tutorial.md | |
active-directory | Shucchonavi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shucchonavi-tutorial.md | |
active-directory | Shutterstock Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shutterstock-tutorial.md | |
active-directory | Sigma Computing Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sigma-computing-provisioning-tutorial.md | |
active-directory | Sigma Computing Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sigma-computing-tutorial.md | |
active-directory | Signagelive Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/signagelive-provisioning-tutorial.md | |
active-directory | Signagelive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/signagelive-tutorial.md | |
active-directory | Signalfx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/signalfx-tutorial.md | |
active-directory | Sigstr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sigstr-tutorial.md | |
active-directory | Silkroad Life Suite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/silkroad-life-suite-tutorial.md | |
active-directory | Silverback Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/silverback-tutorial.md | |
active-directory | Simple Sign Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/simple-sign-tutorial.md | |
active-directory | Simplenexus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/simplenexus-tutorial.md | |
active-directory | Siteintel Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/siteintel-tutorial.md | |
active-directory | Skedda Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skedda-tutorial.md | |
active-directory | Sketch Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sketch-tutorial.md | |
active-directory | Skillcast Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skillcast-tutorial.md | |
active-directory | Skilljar Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skilljar-tutorial.md | |
active-directory | Skillport Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skillport-tutorial.md | |
active-directory | Skills Workflow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skills-workflow-tutorial.md | |
active-directory | Skillsbase Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skillsbase-tutorial.md | |
active-directory | Skillsmanager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skillsmanager-tutorial.md | |
active-directory | Skopenow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skopenow-tutorial.md | |
active-directory | Skybreathe Analytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skybreathe-analytics-tutorial.md | |
active-directory | Skydeskemail Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skydeskemail-tutorial.md | |
active-directory | Skyhighnetworks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skyhighnetworks-tutorial.md | |
active-directory | Skysite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skysite-tutorial.md | |
active-directory | Skytap Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skytap-tutorial.md | |
active-directory | Skyward Qmlativ Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/skyward-qmlativ-tutorial.md | |
active-directory | Slack Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-provisioning-tutorial.md | |
active-directory | Slack Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md | |
active-directory | Smallimprovements Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smallimprovements-tutorial.md | |
active-directory | Smallstep Ssh Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smallstep-ssh-provisioning-tutorial.md | |
active-directory | Smart Global Governance Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smart-global-governance-tutorial.md | |
active-directory | Smart360 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smart360-tutorial.md | |
active-directory | Smartdraw Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartdraw-tutorial.md | |
active-directory | Smarteru Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smarteru-tutorial.md | |
active-directory | Smartfile Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartfile-provisioning-tutorial.md | |
active-directory | Smartfile Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartfile-tutorial.md | |
active-directory | Smarthr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smarthr-tutorial.md | |
active-directory | Smarthub Infer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smarthub-infer-tutorial.md | |
active-directory | Smartkargo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartkargo-tutorial.md | |
active-directory | Smartlook Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartlook-tutorial.md | |
active-directory | Smartlpa Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartlpa-tutorial.md | |
active-directory | Smartrecruiters Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartrecruiters-tutorial.md | |
active-directory | Smartsheet Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartsheet-provisioning-tutorial.md | |
active-directory | Smartvid.Io Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/smartvid.io-tutorial.md | |
active-directory | Snackmagic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snackmagic-tutorial.md | |
active-directory | Snowflake Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-provisioning-tutorial.md | |
active-directory | Snowflake Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/snowflake-tutorial.md | |
active-directory | Softeon Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/softeon-tutorial.md | |
active-directory | Software Ag Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/software-ag-cloud-tutorial.md | |
active-directory | Solarwinds Orion Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/solarwinds-orion-tutorial.md | |
active-directory | Soloinsight Cloudgate Sso Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/soloinsight-cloudgate-sso-provisioning-tutorial.md | |
active-directory | Soloinsight Cloudgate Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/soloinsight-cloudgate-sso-tutorial.md | |
active-directory | Sonarqube Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sonarqube-tutorial.md | |
active-directory | Soonr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/soonr-tutorial.md | |
active-directory | Sosafe Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sosafe-provisioning-tutorial.md | |
active-directory | Spaceiq Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spaceiq-provisioning-tutorial.md | |
active-directory | Spaceiq Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spaceiq-tutorial.md | |
active-directory | Spacio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spacio-tutorial.md | |
active-directory | Spectrumu Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spectrumu-tutorial.md | |
active-directory | Speexx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/speexx-tutorial.md | |
active-directory | Spintr Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spintr-sso-tutorial.md | |
active-directory | Splan Visitor Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/splan-visitor-tutorial.md | |
active-directory | Splashtop Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/splashtop-provisioning-tutorial.md | |
active-directory | Splashtop Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/splashtop-tutorial.md | |
active-directory | Splunkenterpriseandsplunkcloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/splunkenterpriseandsplunkcloud-tutorial.md | |
active-directory | Spotinst Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spotinst-tutorial.md | |
active-directory | Spring Cm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spring-cm-tutorial.md | |
active-directory | Springerlink Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/springerlink-tutorial.md | |
active-directory | Sprinklr Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sprinklr-tutorial.md | |
active-directory | Ssogen Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ssogen-tutorial.md | |
active-directory | Stackby Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/stackby-tutorial.md | |
active-directory | Standard For Success Accreditation Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/standard-for-success-accreditation-tutorial.md | |
active-directory | Standard For Success Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/standard-for-success-tutorial.md | |
active-directory | Starleaf Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/starleaf-provisioning-tutorial.md | |
active-directory | Starmind Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/starmind-tutorial.md | |
active-directory | Statuspage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/statuspage-tutorial.md | |
active-directory | Storegate Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/storegate-provisioning-tutorial.md | |
active-directory | Stormboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/stormboard-tutorial.md | |
active-directory | Styleflow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/styleflow-tutorial.md | |
active-directory | Successfactors Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/successfactors-tutorial.md | |
active-directory | Sugarcrm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sugarcrm-tutorial.md | |
active-directory | Sumologic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sumologic-tutorial.md | |
active-directory | Sumtotalcentral Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sumtotalcentral-tutorial.md | |
active-directory | Supermood Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/supermood-tutorial.md | |
active-directory | Surfsecureid Azure Mfa Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/surfsecureid-azure-mfa-tutorial.md | |
active-directory | Surveymonkey Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/surveymonkey-enterprise-tutorial.md | |
active-directory | Swit Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/swit-provisioning-tutorial.md | |
active-directory | Swit Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/swit-tutorial.md | |
active-directory | Symantec Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/symantec-tutorial.md | |
active-directory | Symantec Web Security Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/symantec-web-security-service.md | |
active-directory | Synchronet Click Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/synchronet-click-tutorial.md | |
active-directory | Syncplicity Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/syncplicity-tutorial.md | |
active-directory | Syndio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/syndio-tutorial.md | |
active-directory | Synergi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/synergi-tutorial.md | |
active-directory | Synerise Ai Growth Ecosystem Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/synerise-ai-growth-ecosystem-tutorial.md | |
active-directory | Syniverse Customer Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/syniverse-customer-portal-tutorial.md | |
active-directory | Syxsense Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/syxsense-tutorial.md | |
active-directory | Tableau Online Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableau-online-provisioning-tutorial.md | |
active-directory | Tableauonline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableauonline-tutorial.md | |
active-directory | Tableauserver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableauserver-tutorial.md | |
active-directory | Talent Palette Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/talent-palette-tutorial.md | |
active-directory | Talentech Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/talentech-provisioning-tutorial.md | |
active-directory | Talentlms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/talentlms-tutorial.md | |
active-directory | Talentsoft Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/talentsoft-tutorial.md | |
active-directory | Tango Reserve Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tango-reserve-tutorial.md | |
active-directory | Tangoanalytics Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tangoanalytics-tutorial.md | |
active-directory | Tangoe Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tangoe-tutorial.md | |
active-directory | Tap App Security Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tap-app-security-provisioning-tutorial.md | |
active-directory | Tap App Security Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tap-app-security-tutorial.md | |
active-directory | Target Process Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/target-process-tutorial.md | |
active-directory | Tas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tas-tutorial.md | |
active-directory | Taskize Connect Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/taskize-connect-provisioning-tutorial.md | |
active-directory | Taskize Connect Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/taskize-connect-tutorial.md | |
active-directory | Teachme Biz Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teachme-biz-tutorial.md | |
active-directory | Teamgo Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamgo-provisioning-tutorial.md | |
active-directory | Teamgo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamgo-tutorial.md | |
active-directory | Teamphoria Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamphoria-tutorial.md | |
active-directory | Teamseer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamseer-tutorial.md | |
active-directory | Teamslide Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamslide-tutorial.md | |
active-directory | Teamsticker By Communitio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamsticker-by-communitio-tutorial.md | |
active-directory | Teamviewer Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamviewer-provisioning-tutorial.md | |
active-directory | Teamviewer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamviewer-tutorial.md | |
active-directory | Teamwork Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamwork-tutorial.md | |
active-directory | Teamzskill Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamzskill-tutorial.md | |
active-directory | Templafy Openid Connect Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/templafy-openid-connect-provisioning-tutorial.md | |
active-directory | Templafy Saml 2 Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/templafy-saml-2-provisioning-tutorial.md | |
active-directory | Templafy Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/templafy-tutorial.md | |
active-directory | Tendium Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tendium-tutorial.md | |
active-directory | Terraform Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terraform-cloud-tutorial.md | |
active-directory | Terraform Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terraform-enterprise-tutorial.md | |
active-directory | Terratrue Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terratrue-provisioning-tutorial.md | |
active-directory | Terratrue Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terratrue-tutorial.md | |
active-directory | Textexpander Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/textexpander-tutorial.md | |
active-directory | Textline Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/textline-tutorial.md | |
active-directory | Textmagic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/textmagic-tutorial.md | |
active-directory | The Funding Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/the-funding-portal-tutorial.md | |
active-directory | Theorgwiki Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/theorgwiki-provisioning-tutorial.md | |
active-directory | Thirdlight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thirdlight-tutorial.md | |
active-directory | Thirdpartytrust Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thirdpartytrust-tutorial.md | |
active-directory | Thoughtworks Mingle Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thoughtworks-mingle-tutorial.md | |
active-directory | Thousandeyes Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thousandeyes-provisioning-tutorial.md | |
active-directory | Thousandeyes Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thousandeyes-tutorial.md | |
active-directory | Thrive Lxp Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thrive-lxp-provisioning-tutorial.md | |
active-directory | Thrive Lxp Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thrive-lxp-tutorial.md | |
active-directory | Tic Tac Mobile Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tic-tac-mobile-provisioning-tutorial.md | |
active-directory | Ticketmanager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ticketmanager-tutorial.md | |
active-directory | Tickitlms Learn Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tickitlms-learn-tutorial.md | |
active-directory | Tidemark Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tidemark-tutorial.md | |
active-directory | Tigergraph Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tigergraph-tutorial.md | |
active-directory | Tigertext Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tigertext-tutorial.md | |
active-directory | Timeclock 365 Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeclock-365-provisioning-tutorial.md | |
active-directory | Timeclock 365 Saml Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeclock-365-saml-provisioning-tutorial.md | |
active-directory | Timeclock 365 Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeclock-365-saml-tutorial.md | |
active-directory | Timelive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timelive-tutorial.md | |
active-directory | Timeoffmanager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timeoffmanager-tutorial.md | |
active-directory | Timetabling Solutions Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timetabling-solutions-tutorial.md | |
active-directory | Timetrack Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timetrack-tutorial.md | |
active-directory | Timu Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/timu-tutorial.md | |
active-directory | Tinfoil Security Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tinfoil-security-tutorial.md | |
active-directory | Titanfile Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/titanfile-tutorial.md | |
active-directory | Tivitz Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tivitz-tutorial.md | |
active-directory | Tonicdm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tonicdm-tutorial.md | |
active-directory | Topdesk Public Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/topdesk-public-tutorial.md | |
active-directory | Topdesk Secure Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/topdesk-secure-tutorial.md | |
active-directory | Torii Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/torii-tutorial.md | |
active-directory | Tracker Software Technologies Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tracker-software-technologies-tutorial.md | |
active-directory | Trackvia Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trackvia-tutorial.md | |
active-directory | Traction Guest Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/traction-guest-tutorial.md | |
active-directory | Tradeshift Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tradeshift-tutorial.md | |
active-directory | Training Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/training-platform-tutorial.md | |
active-directory | Trakopolis Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trakopolis-tutorial.md | |
active-directory | Trakstar Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trakstar-tutorial.md | |
active-directory | Transperfect Globallink Dashboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/transperfect-globallink-dashboard-tutorial.md | |
active-directory | Tranxfer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tranxfer-tutorial.md | |
active-directory | Travelperk Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/travelperk-provisioning-tutorial.md | |
active-directory | Travelperk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/travelperk-tutorial.md | |
active-directory | Trelica Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trelica-tutorial.md | |
active-directory | Trello Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trello-tutorial.md | |
active-directory | Trend Micro Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trend-micro-tutorial.md | |
active-directory | Trendminer Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trendminer-tutorial.md | |
active-directory | Tribeloo Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tribeloo-provisioning-tutorial.md | |
active-directory | Tribeloo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tribeloo-tutorial.md | |
active-directory | Tripactions Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tripactions-tutorial.md | |
active-directory | Trisotechdigitalenterpriseserver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trisotechdigitalenterpriseserver-tutorial.md | |
active-directory | True Office Learning Lio Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/true-office-learning-lio-tutorial.md | |
active-directory | Truechoice Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/truechoice-tutorial.md | |
active-directory | Trunarrative Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/trunarrative-tutorial.md | |
active-directory | Tulip Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tulip-tutorial.md | |
active-directory | Turborater Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/turborater-tutorial.md | |
active-directory | Tutorial List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tutorial-list.md | |
active-directory | Tutorocean Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tutorocean-tutorial.md | |
active-directory | Tvu Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tvu-service-tutorial.md | |
active-directory | Twic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/twic-tutorial.md | |
active-directory | Twilio Sendgrid Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/twilio-sendgrid-tutorial.md | |
active-directory | Twingate Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/twingate-provisioning-tutorial.md | |
active-directory | Tyeexpress Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tyeexpress-tutorial.md | |
active-directory | Uber Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uber-provisioning-tutorial.md | |
active-directory | Uberflip Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uberflip-tutorial.md | |
active-directory | Ultipro Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ultipro-tutorial.md | |
active-directory | Ungerboeck Software Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ungerboeck-software-tutorial.md | |
active-directory | Unifi Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/unifi-provisioning-tutorial.md | |
active-directory | Unifi Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/unifi-tutorial.md | |
active-directory | Uniflow Online Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uniflow-online-tutorial.md | |
active-directory | Upshotly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/upshotly-tutorial.md | |
active-directory | Upwork Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/upwork-enterprise-tutorial.md | |
active-directory | Us Bank Prepaid Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/us-bank-prepaid-tutorial.md | |
active-directory | Useall Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/useall-tutorial.md | |
active-directory | Userecho Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/userecho-tutorial.md | |
active-directory | Usertesting Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/usertesting-tutorial.md | |
active-directory | Uservoice Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uservoice-tutorial.md | |
active-directory | Userzoom Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/userzoom-tutorial.md | |
active-directory | V Client Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/v-client-tutorial.md | |
active-directory | Valence Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/valence-tutorial.md | |
active-directory | Valid8me Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/valid8me-tutorial.md | |
active-directory | Validsign Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/validsign-tutorial.md | |
active-directory | Vault Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vault-platform-tutorial.md | |
active-directory | Vecos Releezme Locker Management System Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vecos-releezme-locker-management-system-tutorial.md | |
active-directory | Velpic Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/velpic-provisioning-tutorial.md | |
active-directory | Velpicsaml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/velpicsaml-tutorial.md | |
active-directory | Veracode Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veracode-tutorial.md | |
active-directory | Verasmart Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/verasmart-tutorial.md | |
active-directory | Vergesense Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vergesense-tutorial.md | |
active-directory | Veritas Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veritas-tutorial.md | |
active-directory | Verkada Command Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/verkada-command-tutorial.md | |
active-directory | Verme Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/verme-tutorial.md | |
active-directory | Versal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/versal-tutorial.md | |
active-directory | Veza Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/veza-tutorial.md | |
active-directory | Viareports Inativ Portal Europe Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/viareports-inativ-portal-europe-tutorial.md | |
active-directory | Vibehcm Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vibehcm-tutorial.md | |
active-directory | Vida Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vida-tutorial.md | |
active-directory | Vidyard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vidyard-tutorial.md | |
active-directory | Virtual Risk Manager Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/virtual-risk-manager-tutorial.md | |
active-directory | Virtual Risk Manager Usa Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/virtual-risk-manager-usa-tutorial.md | |
active-directory | Visibly Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visibly-provisioning-tutorial.md | |
active-directory | Visibly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visibly-tutorial.md | |
active-directory | Visitly Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visitly-provisioning-tutorial.md | |
active-directory | Visitly Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visitly-tutorial.md | |
active-directory | Visitorg Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visitorg-tutorial.md | |
active-directory | Visma Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/visma-tutorial.md | |
active-directory | Vmware Horizon Unified Access Gateway Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vmware-horizon-unified-access-gateway-tutorial.md | |
active-directory | Vocoli Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vocoli-tutorial.md | |
active-directory | Vodeclic Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vodeclic-tutorial.md | |
active-directory | Vonage Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vonage-provisioning-tutorial.md | |
active-directory | Vonage Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vonage-tutorial.md | |
active-directory | Voyance Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/voyance-tutorial.md | |
active-directory | Vtiger Crm Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vtiger-crm-saml-tutorial.md | |
active-directory | Vxmaintain Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vxmaintain-tutorial.md | |
active-directory | Vyond Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vyond-tutorial.md | |
active-directory | Walkme Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/walkme-saml-tutorial.md | |
active-directory | Wan Sign Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wan-sign-tutorial.md | |
active-directory | Wandera Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wandera-tutorial.md | |
active-directory | Watch By Colors Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/watch-by-colors-tutorial.md | |
active-directory | Waywedo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/waywedo-tutorial.md | |
active-directory | Wdesk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wdesk-tutorial.md | |
active-directory | Web Cargo Air Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/web-cargo-air-tutorial.md | |
active-directory | Webcargo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webcargo-tutorial.md | |
active-directory | Webmethods Integration Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webmethods-integration-cloud-tutorial.md | |
active-directory | Webroot Security Awareness Training Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/webroot-security-awareness-training-provisioning-tutorial.md | |
active-directory | Wedo Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wedo-provisioning-tutorial.md | |
active-directory | Wedo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wedo-tutorial.md | |
active-directory | Weekdone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/weekdone-tutorial.md | |
active-directory | Whatfix Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whatfix-tutorial.md | |
active-directory | Whimsical Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whimsical-provisioning-tutorial.md | |
active-directory | Whimsical Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whimsical-tutorial.md | |
active-directory | Whitesource Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whitesource-tutorial.md | |
active-directory | Whos On Location Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whos-on-location-tutorial.md | |
active-directory | Whosoffice Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/whosoffice-tutorial.md | |
active-directory | Wikispaces Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wikispaces-tutorial.md | |
active-directory | Wingspanetmf Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wingspanetmf-tutorial.md | |
active-directory | Wirewheel Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wirewheel-tutorial.md | |
active-directory | Wisdom By Invictus Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wisdom-by-invictus-tutorial.md | |
active-directory | Wiz Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wiz-sso-tutorial.md | |
active-directory | Wizergosproductivitysoftware Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wizergosproductivitysoftware-tutorial.md | |
active-directory | Wootric Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wootric-tutorial.md | |
active-directory | Work Com Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/work-com-tutorial.md | |
active-directory | Workable Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workable-tutorial.md | |
active-directory | Workboard Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workboard-tutorial.md | |
active-directory | Workday Inbound Cloud Only Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-cloud-only-tutorial.md | |
active-directory | Workday Inbound Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-inbound-tutorial.md | |
active-directory | Workday Mobile Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-mobile-tutorial.md | |
active-directory | Workday Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-tutorial.md | |
active-directory | Workday Writeback Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-writeback-tutorial.md | |
active-directory | Workfront Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workfront-tutorial.md | |
active-directory | Workgrid Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workgrid-provisioning-tutorial.md | |
active-directory | Workgrid Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workgrid-tutorial.md | |
active-directory | Workhub Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workhub-tutorial.md | |
active-directory | Workpath Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workpath-tutorial.md | |
active-directory | Workplace By Facebook Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workplace-by-facebook-provisioning-tutorial.md | |
active-directory | Workplacebyfacebook Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workplacebyfacebook-tutorial.md | |
active-directory | Workrite Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workrite-tutorial.md | |
active-directory | Workshop Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workshop-tutorial.md | |
active-directory | Worksmobile Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/worksmobile-tutorial.md | |
active-directory | Workspotcontrol Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workspotcontrol-tutorial.md | |
active-directory | Workstars Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workstars-tutorial.md | |
active-directory | Workteam Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workteam-provisioning-tutorial.md | |
active-directory | Workteam Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workteam-tutorial.md | |
active-directory | Workware Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workware-tutorial.md | |
active-directory | Wrike Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wrike-provisioning-tutorial.md | |
active-directory | Wrike Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wrike-tutorial.md | |
active-directory | Wuru App Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wuru-app-tutorial.md | |
active-directory | X Point Cloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/x-point-cloud-tutorial.md | |
active-directory | Xaitporter Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/xaitporter-tutorial.md | |
active-directory | Xcarrier Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/xcarrier-tutorial.md | |
active-directory | Xmatters Ondemand Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/xmatters-ondemand-tutorial.md | |
active-directory | Yardielearning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yardielearning-tutorial.md | |
active-directory | Yardione Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yardione-tutorial.md | |
active-directory | Yello Enterprise Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yello-enterprise-tutorial.md | |
active-directory | Yellowbox Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yellowbox-provisioning-tutorial.md | |
active-directory | Yodeck Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yodeck-tutorial.md | |
active-directory | Yonyx Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yonyx-tutorial.md | |
active-directory | Youearnedit Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/youearnedit-tutorial.md | |
active-directory | Yuhu Property Management Platform Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/yuhu-property-management-platform-tutorial.md | |
active-directory | Zapier Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zapier-provisioning-tutorial.md | |
active-directory | Zdiscovery Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zdiscovery-tutorial.md | |
active-directory | Zendesk Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-provisioning-tutorial.md | |
active-directory | Zendesk Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-tutorial.md | |
active-directory | Zengine Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zengine-tutorial.md | |
active-directory | Zenqms Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenqms-tutorial.md | |
active-directory | Zenya Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenya-provisioning-tutorial.md | |
active-directory | Zenya Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenya-tutorial.md | |
active-directory | Zephyrsso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zephyrsso-tutorial.md | |
active-directory | Zero Networks Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zero-networks-tutorial.md | |
active-directory | Zero Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zero-provisioning-tutorial.md | |
active-directory | Zeroheight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zeroheight-tutorial.md | |
active-directory | Zest Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zest-tutorial.md | |
active-directory | Ziflow Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ziflow-tutorial.md | |
active-directory | Zip Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zip-provisioning-tutorial.md | |
active-directory | Zip Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zip-tutorial.md | |
active-directory | Zivver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zivver-tutorial.md | |
active-directory | Zoho Mail Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoho-mail-tutorial.md | |
active-directory | Zoho One China Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoho-one-china-tutorial.md | |
active-directory | Zohoone Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zohoone-tutorial.md | |
active-directory | Zola Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zola-tutorial.md | |
active-directory | Zoom Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoom-provisioning-tutorial.md | |
active-directory | Zoom Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoom-tutorial.md | |
active-directory | Zscaler B2b User Portal Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-b2b-user-portal-tutorial.md | |
active-directory | Zscaler Beta Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-beta-provisioning-tutorial.md | |
active-directory | Zscaler Beta Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-beta-tutorial.md | |
active-directory | Zscaler Internet Access Administrator Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-internet-access-administrator-tutorial.md | |
active-directory | Zscaler One Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-one-provisioning-tutorial.md | |
active-directory | Zscaler One Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-one-tutorial.md | |
active-directory | Zscaler Private Access Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-private-access-provisioning-tutorial.md | |
active-directory | Zscaler Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-provisioning-tutorial.md | |
active-directory | Zscaler Three Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-three-provisioning-tutorial.md | |
active-directory | Zscaler Three Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-three-tutorial.md | |
active-directory | Zscaler Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-tutorial.md | |
active-directory | Zscaler Two Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-two-provisioning-tutorial.md | |
active-directory | Zscaler Two Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-two-tutorial.md | |
active-directory | Zscaler Zscloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-zscloud-provisioning-tutorial.md | |
active-directory | Zscaler Zscloud Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscaler-zscloud-tutorial.md | |
active-directory | Zscalerprivateaccess Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscalerprivateaccess-tutorial.md | |
active-directory | Zscalerprivateaccessadministrator Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zscalerprivateaccessadministrator-tutorial.md | |
active-directory | Zuddl Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zuddl-tutorial.md | |
active-directory | Zwayam Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zwayam-tutorial.md | |
active-directory | Zylo Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zylo-tutorial.md | |
aks | Ingress Basic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md | kubectl get services --namespace ingress-basic -o wide -w ingress-nginx-controll When the Kubernetes load balancer service is created for the NGINX ingress controller, an IP address is assigned under *EXTERNAL-IP*, as shown in the following example output: ```-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR ingress-nginx-controller LoadBalancer 10.0.65.205 EXTERNAL-IP 80:30957/TCP,443:32414/TCP 1m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx ``` |
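The EXTERNAL-IP column in output like the example above can also be extracted programmatically, for instance when scripting DNS record creation after the ingress controller deploys. A minimal shell sketch; the sample output and IP address below are hypothetical, hardcoded only to illustrate the parsing:

```shell
# Hypothetical sample of `kubectl get services` output, hardcoded for illustration.
output='NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.0.65.205   20.81.0.10    80:30957/TCP,443:32414/TCP   1m'

# EXTERNAL-IP is the fourth whitespace-separated column on the data row.
external_ip=$(printf '%s\n' "$output" | awk 'NR==2 {print $4}')
echo "$external_ip"
```

In practice, column parsing can be avoided entirely with a JSONPath query such as `kubectl get service ingress-nginx-controller --namespace ingress-basic -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`, which returns the assigned address directly.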
aks | Integrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md | Both extensions and add-ons are supported ways to add functionality to your AKS ## GitHub Actions -GitHub Actions helps you automate your software development workflows from within GitHub. For more details on using GitHub Actions with Azure, see [What is GitHub Actions for Azures][github-actions]. For an example of using GitHub Actions with an AKS cluster, see [Build, test, and deploy containers to Azure Kubernetes Service using GitHub Actions][github-actions-aks]. +GitHub Actions helps you automate your software development workflows from within GitHub. For more details on using GitHub Actions with Azure, see [What is GitHub Actions for Azure][github-actions]. For an example of using GitHub Actions with an AKS cluster, see [Build, test, and deploy containers to Azure Kubernetes Service using GitHub Actions][github-actions-aks]. ## Open source and third-party integrations |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | Title: Supported Kubernetes versions in Azure Kubernetes Service description: Understand the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS) Previously updated : 08/09/2021 Last updated : 11/21/2022 -The Kubernetes community releases minor versions roughly every three months. Recently, the Kubernetes community has [increased the support window for each version from 9 months to 12 months](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/), starting with version 1.19. +The Kubernetes community releases minor versions roughly every three months. Recently, the Kubernetes community has [increased the support window for each version from nine months to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/), starting with version 1.19. Minor version releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs. Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioni ``` [major].[minor].[patch] -Example: +Examples: 1.17.7 1.17.8 ``` Each number in the version indicates general compatibility with the previous ver * **Minor versions** change when functionality updates are made that are backwards compatible to the other minor releases. * **Patch versions** change when backwards-compatible bug fixes are made. -Aim to run the latest patch release of the minor version you're running. For example, your production cluster is on **`1.17.7`**. **`1.17.8`** is the latest available patch version available for the *1.17* series. You should upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported. 
+Aim to run the latest patch release of the minor version you're running. For example, if your production cluster is on **`1.17.7`**, **`1.17.8`** is the latest available patch version available for the *1.17* series. You should upgrade to **`1.17.8`** as soon as possible to ensure your cluster is fully patched and supported. ## Alias minor version > [!NOTE] > Alias minor version requires Azure CLI version 2.37 or above. Use `az upgrade` to install the latest version of the CLI. -Azure Kubernetes Service allows for you to create a cluster without specifying the exact patch version. When creating a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will be running **`1.21.7`**, which is the latest GA patch version of *1.21*. +With AKS, you can create a cluster without specifying the exact patch version. When you create a cluster without designating a patch, the cluster will run the minor version's latest GA patch. For example, if you create a cluster with **`1.21`**, your cluster will run **`1.21.7`**, which is the latest GA patch version of *1.21*. -When upgrading by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` will not trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. +When you upgrade by alias minor version, only a higher minor version is supported. For example, upgrading from `1.14.x` to `1.14` won't trigger an upgrade to the latest GA `1.14` patch, but upgrading to `1.15` will trigger an upgrade to the latest GA `1.15` patch. -To see what patch you are on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The property `currentKubernetesVersion` shows the whole Kubernetes version. 
+To see what patch you're on, run the `az aks show --resource-group myResourceGroup --name myAKSCluster` command. The `currentKubernetesVersion` property shows the whole Kubernetes version. ``` { To see what patch you are on, run the `az aks show --resource-group myResourceGr AKS defines a generally available version as a version enabled in all SLO or SLA measurements and available in all regions. AKS supports three GA minor versions of Kubernetes: -* The latest GA minor version that is released in AKS (which we'll refer to as N). +* The latest GA minor version released in AKS (which we'll refer to as N). * Two previous minor versions.- * Each supported minor version also supports a maximum of two (2) stable patches. + * Each supported minor version also supports a maximum of two (2) stable patches. -AKS may also support preview versions, which are explicitly labeled and subject to [Preview terms and conditions][preview-terms]. +AKS may also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms]. > [!NOTE] > AKS uses safe deployment practices which involve gradual region deployment. This means it may take up to 10 business days for a new release or a new version to be available in all regions. New minor version | Supported Version List Where ".letter" is representative of patch versions. -When a new minor version is introduced, the oldest minor version and patch releases supported are deprecated and removed. For example, the current supported version list is: +When a new minor version is introduced, the oldest minor version and patch releases supported are deprecated and removed. For example, if the current supported version list is: ``` 1.17.a When a new minor version is introduced, the oldest minor version and patch relea 1.15.f ``` -AKS releases 1.18.\*, removing all the 1.15.\* versions out of support in 30 days. +When AKS releases 1.18.\*, all the 1.15.\* versions go out of support 30 days later. 
> [!NOTE]-> If customers are running an unsupported Kubernetes version, they will be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases are not covered by the [AKS support policies](./support-policies.md). +> If customers are running an unsupported Kubernetes version, they'll be asked to upgrade when requesting support for the cluster. Clusters running unsupported Kubernetes releases aren't covered by the [AKS support policies](./support-policies.md). In addition to the above, AKS supports a maximum of two **patch** releases of a given minor version. So given the following supported versions: az aks install-cli ```powershell Install-AzAksKubectl -Version latest ```+ ## Release and deprecation process -You can reference upcoming version releases and deprecations on the [AKS Kubernetes Release Calendar](#aks-kubernetes-release-calendar). +You can reference upcoming version releases and deprecations on the [AKS Kubernetes release calendar](#aks-kubernetes-release-calendar). For new **minor** versions of Kubernetes:- * AKS publishes a pre-announcement with the planned date of a new version release and respective old version deprecation on the [AKS Release notes](https://aka.ms/aks/releasenotes) at least 30 days prior to removal. - * AKS uses [Azure Advisor](../advisor/advisor-overview.md) to alert users if a new version will cause issues in their cluster because of deprecated APIs. Azure Advisor is also used to alert the user if they are currently out of support. - * AKS publishes a [service health notification](../service-health/service-health-overview.md) available to all users with AKS and portal access, and sends an email to the subscription administrators with the planned version removal dates. 
- > [!NOTE] - > To find out who is your subscription administrators or to change it, please refer to [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator). +* AKS publishes a pre-announcement with the planned date of the new version release and respective old version deprecation. This announcement is published on the [AKS release notes](https://aka.ms/aks/releasenotes) at least 30 days before removal. +* AKS uses [Azure Advisor](../advisor/advisor-overview.md) to alert users if a new version will cause issues in their cluster because of deprecated APIs. Azure Advisor is also used to alert the user if they're currently out of support. +* AKS publishes a [service health notification](../service-health/service-health-overview.md) available to all users with AKS and portal access and sends an email to the subscription administrators with the planned version removal dates. ++ > [!NOTE] + > Visit [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator) to determine who your subscription administrators are and make any necessary changes. - * Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support. +* Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support. For new **patch** versions of Kubernetes:- * Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches will have a two month minimum lifecycle. - * In general, AKS does not broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. 
If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch. - * Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you will **no longer be able to create clusters or node pools once the version is deprecated/removed.** ++* Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches will have a two month minimum lifecycle. +* In general, AKS doesn't broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch. +* Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you'll **no longer be able to create clusters or node pools once the version is deprecated/removed.** ### Supported versions policy exceptions Specific patch releases may be skipped or rollout accelerated, depending on the ## Azure portal and CLI versions -When you deploy an AKS cluster in the portal, with the Azure CLI, or with Azure PowerShell, the cluster defaults to the N-1 minor version and latest patch. For example, if AKS supports *1.17.a*, *1.17.b*, *1.16.c*, *1.16.d*, *1.15.e*, and *1.15.f*, the default version selected is *1.16.c*. +When you deploy an AKS cluster with the Azure portal, Azure CLI, or Azure PowerShell, the cluster defaults to the N-1 minor version and latest patch. For example, if AKS supports *1.17.a*, *1.17.b*, *1.16.c*, *1.16.d*, *1.15.e*, and *1.15.f*, the default version selected is *1.16.c*. 
### [Azure CLI](#tab/azure-cli) To find out what versions are currently available for your subscription and region, use the-[az aks get-versions][az-aks-get-versions] command. The following example lists the available Kubernetes versions for the *EastUS* region: +[az aks get-versions][az-aks-get-versions] command. The following example lists available Kubernetes versions for the *EastUS* region: ```azurecli-interactive az aks get-versions --location eastus --output table ``` - ### [Azure PowerShell](#tab/azure-powershell) To find out what versions are currently available for your subscription and region, use the-[Get-AzAksVersion][get-azaksversion] cmdlet. The following example lists the available Kubernetes versions for the *EastUS* region: +[Get-AzAksVersion][get-azaksversion] cmdlet. The following example lists available Kubernetes versions for the *EastUS* region: ```azurepowershell-interactive Get-AzAksVersion -Location eastus Get-AzAksVersion -Location eastus -## AKS Kubernetes Release Calendar +## AKS Kubernetes release calendar For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes#History). For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub | 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | 1.28 GA | 1.26 | Dec 2022 | Jan 2023 | Mar 2023 | 1.29 GA +> [!NOTE] +> To see real-time updates of region release status and version release notes, visit the [AKS release status webpage][aks-release]. To learn more about the release status webpage, see [AKS release tracker][aks-tracker]. + ## FAQ -**How does Microsoft notify me of new Kubernetes versions?** +### How does Microsoft notify me of new Kubernetes versions? -The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases) as well as emails to subscription administrators who own clusters that are going to fall out of support. 
In addition to announcements, AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to notify the customer inside the Azure portal to alert users if they are out of support, as well as alerting them of deprecated APIs that will affect their application or development process. +The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in the AKS docs, our [GitHub](https://github.com/Azure/AKS/releases), and emails to subscription administrators who own clusters that are going to fall out of support. AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to alert customers in the Azure portal to notify users if they're out of support. It also alerts them of deprecated APIs that will affect their application or development processes. -**How often should I expect to upgrade Kubernetes versions to stay in support?** +### How often should I expect to upgrade Kubernetes versions to stay in support? -Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you will be able to upgrade at a minimum of once a year to stay on a supported version. +Starting with Kubernetes 1.19, the [open source community has expanded support to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you'll be able to upgrade at a minimum of once a year to stay on a supported version. -**What happens when a user upgrades a Kubernetes cluster with a minor version that isn't supported?** +### What happens when a user upgrades a Kubernetes cluster with a minor version that isn't supported? 
If you're on the *n-3* version or older, it means you're outside of support and will be asked to upgrade. When your upgrade from version n-3 to n-2 succeeds, you're back within our support policies. For example: -- If the oldest supported AKS version is *1.15.a* and you are on *1.14.b* or older, you're outside of support.-- When you successfully upgrade from *1.14.b* to *1.15.a* or higher, you're back within our support policies.+* If the oldest supported AKS version is *1.15.a* and you're on *1.14.b* or older, you're outside of support. +* When you successfully upgrade from *1.14.b* to *1.15.a* or higher, you're back within our support policies. -Downgrades are not supported. +Downgrades aren't supported. -**What does 'Outside of Support' mean** +### What does 'Outside of Support' mean? 'Outside of Support' means that:+ * The version you're running is outside of the supported versions list. * You'll be asked to upgrade the cluster to a supported version when requesting support, unless you're within the 30-day grace period after version deprecation. Additionally, AKS doesn't make any runtime or other guarantees for clusters outside of the supported versions list. -**What happens when a user scales a Kubernetes cluster with a minor version that isn't supported?** +### What happens when a user scales a Kubernetes cluster with a minor version that isn't supported? -For minor versions not supported by AKS, scaling in or out should continue to work. Since there are no Quality of Service guarantees, we recommend upgrading to bring your cluster back into support. +For minor versions not supported by AKS, scaling in or out should continue to work. Since there are no guarantees with quality of service, we recommend upgrading to bring your cluster back into support. -**Can a user stay on a Kubernetes version forever?** +### Can a user stay on a Kubernetes version forever? 
-If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you do not take further action, Azure reserves the right to automatically upgrade your cluster on your behalf. +If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you don't take further action, Azure reserves the right to automatically upgrade your cluster on your behalf. -**What version does the control plane support if the node pool is not in one of the supported AKS versions?** +### What version does the control plane support if the node pool isn't in one of the supported AKS versions? The control plane must be within a window of versions from all node pools. For details on upgrading the control plane or node pools, visit documentation on [upgrading node pools](use-multiple-node-pools.md#upgrade-a-cluster-control-plane-with-multiple-node-pools). -**Can I skip multiple AKS versions during cluster upgrade?** +### Can I skip multiple AKS versions during cluster upgrade? -When you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. Kubernetes control planes [version skew policy](https://kubernetes.io/releases/version-skew-policy/) does not support minor version skipping. For example, upgrades between: +When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. Kubernetes control planes [version skew policy](https://kubernetes.io/releases/version-skew-policy/) doesn't support minor version skipping. For example, upgrades between: - * *1.12.x* -> *1.13.x*: allowed. - * *1.13.x* -> *1.14.x*: allowed. - * *1.12.x* -> *1.14.x*: not allowed. +* *1.12.x* -> *1.13.x*: allowed. +* *1.13.x* -> *1.14.x*: allowed. +* *1.12.x* -> *1.14.x*: not allowed. To upgrade from *1.12.x* -> *1.14.x*:+ 1. 
Upgrade from *1.12.x* -> *1.13.x*.-1. Upgrade from *1.13.x* -> *1.14.x*. +2. Upgrade from *1.13.x* -> *1.14.x*. Skipping multiple versions can only be done when upgrading from an unsupported version back into the minimum supported version. For example, you can upgrade from an unsupported *1.10.x* to a supported *1.15.x* if *1.15* is the minimum supported minor version. - When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster. +When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster. -**Can I create a new 1.xx.x cluster during its 30 day support window?** +### Can I create a new 1.xx.x cluster during its 30 day support window? -No. Once a version is deprecated/removed, you cannot create a cluster with that version. As the change rolls out, you will start to see the old version removed from your version list. This process may take up to two weeks from announcement, progressively by region. +No. Once a version is deprecated/removed, you can't create a cluster with that version. As the change rolls out, you'll start to see the old version removed from your version list. This process may take up to two weeks from announcement, progressively by region. -**I am on a freshly deprecated version, can I still add new node pools? Or will I have to upgrade?** +### I'm on a freshly deprecated version, can I still add new node pools? Or will I have to upgrade? -No. You will not be allowed to add node pools of the deprecated version to your cluster. 
You can add node pools of a new version. However, this may require you to update the control plane first. +No. You won't be allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version, but this may require you to update the control plane first. -**How often do you update patches?** +### How often do you update patches? -Patches have a two month minimum lifecycle. To keep up to date when new patches are released, follow the [AKS Release Notes](https://github.com/Azure/AKS/releases). +Patches have a two month minimum lifecycle. To keep up to date when new patches are released, follow the [AKS release notes](https://github.com/Azure/AKS/releases). ## Next steps For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes <!-- LINKS - External --> [azure-update-channel]: https://azure.microsoft.com/updates/?product=kubernetes-service+[aks-release]: https://releases.aks.azure.com/ <!-- LINKS - Internal --> [aks-upgrade]: upgrade-cluster.md For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes [az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions [preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ [get-azaksversion]: /powershell/module/az.aks/get-azaksversion+[aks-tracker]: release-tracker.md |
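The no-minor-version-skipping rule described in the FAQ above can be sketched as a small validation helper. This is an illustrative sketch of the rule as stated, not an AKS API; the function name is invented:

```python
# Illustrative sketch of the upgrade rule described above: a supported AKS
# cluster may move to the same minor version (a patch upgrade) or to the
# next minor version, but may not skip a minor version. Not an AKS API.
def upgrade_allowed(current: str, target: str) -> bool:
    cur_major, cur_minor = (int(p) for p in current.split(".")[:2])
    tgt_major, tgt_minor = (int(p) for p in target.split(".")[:2])
    # Same major version, and at most one minor-version step forward.
    return cur_major == tgt_major and 0 <= tgt_minor - cur_minor <= 1

print(upgrade_allowed("1.12.7", "1.13.2"))  # True: one minor step
print(upgrade_allowed("1.17.7", "1.17.8"))  # True: patch upgrade
print(upgrade_allowed("1.12.7", "1.14.0"))  # False: skips 1.13
```

Note that, as the FAQ explains, the skipping restriction applies only to supported clusters; upgrading from an unsupported version back to the minimum supported minor version may skip versions, without functional guarantees.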
automation | Automation Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hybrid-runbook-worker.md | Title: Azure Automation Hybrid Runbook Worker overview description: Know about Hybrid Runbook Worker. How to install and run the runbooks on machines in your local datacenter or cloud provider. Previously updated : 11/11/2021 Last updated : 11/09/2022 Azure Automation provides native integration of the Hybrid Runbook Worker role t :::image type="content" source="./media/automation-hybrid-runbook-worker/hybrid-worker-group-platform-inline.png" alt-text="Screenshot of hybrid worker group showing platform field." lightbox="./media/automation-hybrid-runbook-worker/hybrid-worker-group-platform-expanded.png"::: -Here's a list of benefits available with the extension-based Hybrid Runbook Worker role: --| Benefit | Description | -||| -|Seamless onboarding| Removes dependency on a Log Analytics solution for onboarding Hybrid Runbook Workers, which is a multi-step process, is time consuming, and error-prone. | -|Unified onboarding experience| Installation is managed using the same supported methods for Azure and non-Azure machines. | -|Ease of Manageability| Native integration with ARM identity for Hybrid Runbook Worker and provides the flexibility for governance at scale through policies and templates. | -|Azure AD-based authentication| Uses VM system assigned-identities provided by Azure AD. This centralizes control and management of identities and resource credentials.| - For Hybrid Runbook Worker operations after installation, the process of executing runbooks on Hybrid Runbook Workers is the same. The purpose of the extension-based approach is to simplify the installation and management of the Hybrid Runbook Worker role and remove the complexity working with the agent-based version. The new extension-based installation doesn't affect the installation or management of an agent-based Hybrid Runbook Worker role. 
Both Hybrid Runbook Worker types can co-exist on the same machine. The extension-based Hybrid Runbook Worker only supports the user Hybrid Runbook Worker type, and doesn't include the system Hybrid Runbook Worker required for the Update Management feature. ->[!NOTE] -> PowerShell support to install the extension-based Hybrid Runbook Worker is not supported at this time. +## Benefits of extension-based User Hybrid Workers +The extension-based approach greatly simplifies the installation and management of the User Hybrid Runbook Worker, removing the complexity of working with the agent-based approach. Here are some key benefits: +- **Seamless onboarding** - The agent-based approach for onboarding a Hybrid Runbook Worker depends on the Log Analytics agent, which is a multi-step, time-consuming, and error-prone process. The extension-based approach is no longer dependent on the Log Analytics agent. +- **Ease of Manageability** - It offers native integration with ARM identity for the Hybrid Runbook Worker and provides the flexibility for governance at scale through policies and templates. +- **Azure Active Directory-based authentication** - It uses VM system-assigned managed identities provided by Azure Active Directory. This centralizes control and management of identities and resource credentials. +- **Unified experience** - It offers an identical experience for managing Azure and off-Azure Arc-enabled machines. +- **Multiple onboarding channels** - You can choose to onboard and manage extension-based workers through the Azure portal, PowerShell cmdlets, Bicep, ARM templates, REST API, and Azure CLI. You can also install the extension on an existing Azure VM or Arc-enabled server within the Azure portal experience of that machine through the Extensions blade. +- **Default Automatic upgrade** - It offers automatic upgrade of minor versions by default, significantly reducing the overhead of staying on the latest version. 
We recommend enabling Automatic upgrades to take advantage of any security or feature updates without the manual overhead. You can also opt out of automatic upgrades at any time. Any major version upgrades are currently not supported and should be managed manually. ## Runbook Worker types A Hybrid Runbook Worker doesn't have many of the [Azure sandbox](automation-runb To control the distribution of runbooks on Hybrid Runbook Workers and when or how the jobs are triggered, you can register the hybrid worker against different Hybrid Runbook Worker groups within your Automation account. Target the jobs against the specific group or groups in order to support your execution arrangement. +## Common Scenarios for User Hybrid Runbook Workers ++- To execute Azure Automation runbooks for in-guest VM management directly on an existing Azure virtual machine (VM) or off-Azure server registered as an Azure Arc-enabled server or Azure Arc-enabled VMware VM (preview). Azure Arc-enabled servers can be Windows and Linux physical servers and virtual machines hosted outside of Azure, on your corporate network, or other cloud providers. +- To overcome the Azure Automation sandbox limitation - common scenarios include executing long-running operations beyond the three-hour limit for cloud jobs, performing resource-intensive automation operations, interacting with local services running on-premises or in a hybrid environment, and running scripts that require elevated permissions. +- To overcome organization restrictions on keeping data in Azure for governance and security reasons - because you can't execute Automation jobs in the cloud, you can run them on an on-premises machine that is onboarded as a User Hybrid Runbook Worker. +- To automate operations on multiple off-Azure resources running in on-premises or multicloud environments. You can onboard one of those machines as a User Hybrid Runbook Worker and target automation on the remaining machines in the local environment. 
+- To access other services privately from the Azure Virtual Network (VNet) without opening an outbound internet connection, you can execute runbooks on a Hybrid Worker connected to the Azure VNet. ++ ## Hybrid Runbook Worker installation The process to install a user Hybrid Runbook Worker depends on the operating system. The table below defines the deployment types. |
automation | Automation Linux Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md | The Linux Hybrid Runbook Worker executes runbooks as a special user that can be After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment. > [!NOTE]-> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](./extension-based-hybrid-runbook-worker-install.md#install-extension-based-v2-on-existing-agent-based-v1-hybrid-worker). -+> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](./extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers). ## Prerequisites -Before you start, make sure that you have the following. +Before you start, make sure that you have the following. ### A Log Analytics workspace To install and configure a Linux Hybrid Runbook Worker, perform the following st - Using Azure Policy. - Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc-enabled server has the Log Analytics agent installed. 
If the agent isn't installed, it automatically deploys it using a remediation task. If you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent. + Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Microsoft Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc-enabled server has the Log Analytics agent installed. If the agent isn't installed, it automatically deploys it using a remediation task. If you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent. We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy. |
automation | Automation Windows Hrw Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md | Azure Automation stores and manages runbooks and then delivers them to one or mo After you successfully deploy a runbook worker, review [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md) to learn how to configure your runbooks to automate processes in your on-premises datacenter or other cloud environment. > [!NOTE]-> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](./extension-based-hybrid-runbook-worker-install.md#install-extension-based-v2-on-existing-agent-based-v1-hybrid-worker). +> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](./extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers). ## Prerequisites To install and configure a Windows Hybrid Runbook Worker, perform the following - Using Azure Policy. - Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Windows Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc-enabled server has the Log Analytics agent installed. If the agent isn't installed, it automatically deploys it using a remediation task.
If you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent. + Using this approach, you use the Azure Policy [Deploy Log Analytics agent to Linux or Microsoft Azure Arc machines](../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition to audit if the Arc-enabled server has the Log Analytics agent installed. If the agent isn't installed, it automatically deploys it using a remediation task. If you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../governance/policy/samples/built-in-initiatives.md#monitoring) initiative to install and configure the Log Analytics agent. We recommend installing the Log Analytics agent for Windows or Linux using Azure Policy. To check version of agent-based Windows Hybrid Runbook Worker, go to the followi `C:\ProgramFiles\Microsoft Monitoring Agent\Agent\AzureAutomation\` -The *AzureAutomation* folder has a sub-folder with the version number as the name of the sub-folder. +The *Azure Automation* folder has a sub-folder with the version number as the name of the sub-folder. ## Next steps |
automation | Disable Local Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md | Title: Disable local authentication in Azure Automation description: This article describes disabling local authentication in Azure Automation. Previously updated : 09/28/2021 Last updated : 09/30/2022+ #Customer intent: As an administrator, I want disable local authentication so that I can enhance security. |
automation | Extension Based Hybrid Runbook Worker Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md | Title: Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation (Preview) + Title: Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation description: This article provides information about deploying the extension-based User Hybrid Runbook Worker to run runbooks on Windows or Linux machines in your on-premises datacenter or other cloud environment. Previously updated : 04/13/2022 Last updated : 11/09/2022 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers. -# Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation (Preview) +# Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation The extension-based onboarding is only for **User** Hybrid Runbook Workers. This article describes how to: deploy a user Hybrid Runbook Worker on a Windows or Linux machine, remove the worker, and remove a Hybrid Runbook Worker group. Azure Automation stores and manages runbooks and then delivers them to one or mo > [!NOTE]-> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. If you install Extension based (V2)on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](#install-extension-based-v2-on-existing-agent-based-v1-hybrid-worker). +> A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. 
If you install Extension based (V2) on a hybrid worker already running Agent based (V1), then you would see two entries of the Hybrid Runbook Worker in the group. One with Platform Extension based (V2) and the other Agent based (V1). [**Learn more**](#migrate-an-existing-agent-based-to-extension-based-hybrid-workers). ## Prerequisites Azure Automation stores and manages runbooks and then delivers them to one or mo - Two cores - 4 GB of RAM+- **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. - The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server, or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the adding process.-- Non-Azure machines must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed.
To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) for Arc-enabled VMware vSphere VMs.-+ ### Supported operating systems | Windows | Linux (x64)| |||-| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core), and <br> ● Windows Server 2012, 2012 R2 | ● Debian GNU/Linux 7 and 8 <br> ● Ubuntu 18.04, and 20.04 LTS <br> ● SUSE Linux Enterprise Server 15, and 15.1 (SUSE didn't release versions numbered 13 or 14), and <br> ● Red Hat Enterprise Linux Server 7 and 8 | +| ● Windows Server 2022 (including Server Core) <br> ● Windows Server 2019 (including Server Core) <br> ● Windows Server 2016, version 1709 and 1803 (excluding Server Core), and <br> ● Windows Server 2012, 2012 R2 | ● Debian GNU/Linux 10 and 11 <br> ● Ubuntu 22.04 LTS <br> ● SUSE Linux Enterprise Server 15.2, and 15.3 <br> ● Red Hat Enterprise Linux Server 7 and 8 | ### Other Requirements Azure Automation stores and manages runbooks and then delivers them to one or mo | | | - | | PowerShell Core | To run PowerShell runbooks, PowerShell Core needs to be installed. For instructions, see [Installing PowerShell Core on Linux](/powershell/scripting/install/installing-powershell-core-on-linux) | 6.0.0 | +> [!NOTE] +> Hybrid Runbook Worker is currently not supported for Virtual Machine Scale Sets (VMSS).
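Because the prerequisites above require the system-assigned managed identity to be enabled, it can help to verify that up front. The following is only a sketch under stated assumptions — the resource group and VM names are placeholders, and `Get-AzVM`/`Update-AzVM` come from the Az.Compute module; it isn't part of the documented procedure:

```powershell
# Sketch: preflight check that a VM's system-assigned managed identity is
# enabled before adding it as a hybrid worker. Names are placeholders.
$vm = Get-AzVM -ResourceGroupName "rg-workers" -Name "worker-vm-01"
if ($vm.Identity -and $vm.Identity.Type -match "SystemAssigned") {
    Write-Output "System-assigned managed identity is already enabled."
}
else {
    # The add flow enables it automatically, but it can also be enabled up front:
    Update-AzVM -ResourceGroupName "rg-workers" -VM $vm -IdentityType SystemAssigned
}
```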
+ ## Network requirements ### Proxy server use $protectedsettings = @{ **Azure VMs** ```powershell-Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 0.1 -Settings $settings +Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$false ``` **Azure Arc-enabled VMs** ```powershell-New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 0.1 -Setting $settings -NoWait +New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Setting $settings -NoWait -EnableAutomaticUpgrade ``` # [Linux](#tab/linux) If you use a firewall to restrict access to the Internet, you must configure the ### CPU quota limit -There is a CPU quota limit of 5% while configuring extension-based Linux Hybrid Runbook worker. There is no such limit for Windows Hybrid Runbook Worker. +There is a CPU quota limit of 25% while configuring extension-based Linux Hybrid Runbook worker. There is no such limit for Windows Hybrid Runbook Worker. ## Create hybrid worker group -You can create a Hybrid Worker Group via the Azure portal. Currently, creating through the Azure Resource Manager (ARM) template is not supported. 
- To create a hybrid worker group in the Azure portal, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). You can also add machines to an existing hybrid worker group. 1. Select the checkbox next to the machine(s) you want to add to the hybrid worker group. - If you don't see your non-Azure machine listed, ensure Azure Arc Connected Machine agent is installed on the machine. + If you don't see your non-Azure machine listed, ensure the Azure Arc Connected Machine agent is installed on the machine. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. 1. Select **Add** to add the machine to the group. You can also add machines to an existing hybrid worker group. :::image type="content" source="./media/extension-based-hybrid-runbook-worker-install/hybrid-worker-group-platform-inline.png" alt-text="Screenshot of platform field showing agent or extension based." lightbox="./media/extension-based-hybrid-runbook-worker-install/hybrid-worker-group-platform-expanded.png"::: -## Install Extension-based (V2) on existing Agent-based (V1) Hybrid Worker +## Migrate an existing Agent based to Extension based Hybrid Workers -A hybrid worker can co-exist with both platforms: **Agent based (V1)** and **Extension based (V2)**. To install Extension based (V2) on a hybrid worker that already has an Agent based (V1): +To utilize the benefits of extension based Hybrid Workers, you must migrate all existing agent based User Hybrid Workers to extension based Workers. A hybrid worker machine can co-exist on both **Agent based (V1)** and **Extension based (V2)** platforms.
The extension-based installation doesn't affect the installation or management of an agent-based worker. -1. Under **Process Automation**, select **Hybrid Workers groups**, and then your existing hybrid worker group to go to the **Hybrid Worker Group** page. -1. Under **Hybrid worker group**, select **Hybrid Workers**. -1. Select **+ Add** to go to the **Add machines as hybrid worker** page. -1. Select the checkbox next to existing Agent based (V1) Hybrid worker. -1. Select **Add** to add the machine to the group. +To install the Hybrid Worker extension on an existing agent-based hybrid worker, follow these steps: -The **Platform** column shows the same worker as both **Agent based (V1)** and **Extension based (V2)**. Delete the Agent based (V1) Hybrid Worker after you are sure on the working of Extension based (V2) worker. +1. Under **Process Automation**, select **Hybrid worker groups**, and then select your existing hybrid worker group to go to the **Hybrid worker group** page. +1. Under **Hybrid worker group**, select **Hybrid Workers** > **+ Add** to go to the **Add machines as hybrid worker** page. +1. Select the checkbox next to the existing Agent based (V1) Hybrid worker. +1. Select **Add** to add the machine to the group. ++The Platform column shows the same Hybrid worker as both **Agent based (V1)** and **Extension based (V2)**. After you're confident in the extension-based Hybrid Worker experience, you can remove the agent-based worker. + +For at-scale migration of multiple Agent based Hybrid Workers, you can also use other [channels](#manage-hybrid-worker-extension-using-bicep--arm-templates-rest-api-azure-cli-and-powershell) such as Bicep, ARM templates, PowerShell cmdlets, REST API, and Azure CLI.
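For at-scale migration with PowerShell, the per-VM extension install can be scripted around the same `Set-AzVMExtension` call this article uses elsewhere. This is a hedged sketch, not the documented procedure — the VM list, resource group, and `AutomationAccountURL` value are assumptions to replace with your own:

```powershell
# Sketch only: install the Hybrid Worker extension on a list of Azure VMs.
# The VM names, resource group, and Automation account URL are placeholders.
$settings = @{ AutomationAccountURL = "<automationHybridServiceUrl>" }
$vms = @(
    @{ Name = "worker-vm-01"; ResourceGroup = "rg-workers"; Location = "eastus" },
    @{ Name = "worker-vm-02"; ResourceGroup = "rg-workers"; Location = "eastus" }
)
foreach ($vm in $vms) {
    # Same cmdlet and parameters as the single-VM install shown in this article.
    Set-AzVMExtension -ResourceGroupName $vm.ResourceGroup -Location $vm.Location `
        -VMName $vm.Name -Name "HybridWorkerExtension" `
        -Publisher "Microsoft.Azure.Automation.HybridWorker" `
        -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 `
        -Settings $settings -EnableAutomaticUpgrade $true
}
```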
-## Manage Hybrid Worker extension using ARM template, REST API, and Azure CLI ++## Automatic upgrade of extension ++Hybrid Worker extension supports [Automatic upgrade](/articles/virtual-machines/automatic-extension-upgrade.md) of minor versions by default. We recommend that you enable Automatic upgrades to take advantage of any security or feature updates without manual overhead. However, to prevent the extension from automatically upgrading (for example, if there are strict change windows and the extension can only be updated at specific times), you can opt out of this feature by setting the `enableAutomaticUpgrade` property to *false* in the ARM template, Bicep template, or PowerShell cmdlets. Set the same property to *true* whenever you want to re-enable automatic upgrade. ++```powershell +$extensionType = "HybridWorkerForLinux/HybridWorkerForWindows" +$extensionName = "HybridWorkerExtension" +$publisher = "Microsoft.Azure.Automation.HybridWorker" +Set-AzVMExtension -ResourceGroupName <RGName> -Location <Location> -VMName <vmName> -Name $extensionName -Publisher $publisher -ExtensionType $extensionType -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$false +``` +Major version upgrades must be managed manually. Run the cmdlets below with the latest `TypeHandlerVersion`. ++> [!NOTE] +> If you installed the Hybrid Worker extension during the public preview, make sure that you upgrade it to the latest major version.
++**Azure VMs** ++```powershell +Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$false +``` +**Azure Arc-enabled VMs** ++```powershell +New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Setting $settings -NoWait -EnableAutomaticUpgrade +``` +++## Manage Hybrid Worker extension using Bicep & ARM templates, REST API, Azure CLI, and PowerShell ++#### [Bicep template](#tab/bicep-template) ++You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/articles/azure-resource-manager/bicep/overview.md) ++```Bicep +param automationAccount string +param automationAccountLocation string +param workerGroupName string ++@description('Name of the virtual machine.') +param virtualMachineName string ++@description('Username for the Virtual Machine.') +param adminUsername string ++@description('Password for the Virtual Machine.') +@minLength(12) +@secure() +param adminPassword string ++@description('Location for the VM.') +param vmLocation string = 'North Central US' ++@description('Size of the virtual machine.') +param vmSize string = 'Standard_DS1_v2' ++@description('The Windows version for the VM. 
This will pick a fully patched image of this given Windows version.') +@allowed([ + '2008-R2-SP1' + '2012-Datacenter' + '2012-R2-Datacenter' + '2016-Nano-Server' + '2016-Datacenter-with-Containers' + '2016-Datacenter' + '2019-Datacenter' + '2019-Datacenter-Core' + '2019-Datacenter-Core-smalldisk' + '2019-Datacenter-Core-with-Containers' + '2019-Datacenter-Core-with-Containers-smalldisk' + '2019-Datacenter-smalldisk' + '2019-Datacenter-with-Containers' + '2019-Datacenter-with-Containers-smalldisk' +]) +param osVersion string = '2019-Datacenter' ++@description('DNS name for the public IP') +param dnsNameForPublicIP string ++var nicName_var = 'myVMNict' +var addressPrefix = '10.0.0.0/16' +var subnetName = 'Subnet' +var subnetPrefix = '10.0.0.0/24' +var subnetRef = resourceId('Microsoft.Network/virtualNetworks/subnets', virtualNetworkName_var, subnetName) +var vmName_var = virtualMachineName +var virtualNetworkName_var = 'MyVNETt' +var publicIPAddressName_var = 'myPublicIPt' +var networkSecurityGroupName_var = 'default-NSGt' +var UniqueStringBasedOnTimeStamp = uniqueString(resourceGroup().id) ++resource publicIPAddressName 'Microsoft.Network/publicIPAddresses@2020-08-01' = { + name: publicIPAddressName_var + location: vmLocation + properties: { + publicIPAllocationMethod: 'Dynamic' + dnsSettings: { + domainNameLabel: dnsNameForPublicIP + } + } +} ++resource networkSecurityGroupName 'Microsoft.Network/networkSecurityGroups@2020-08-01' = { + name: networkSecurityGroupName_var + location: vmLocation + properties: { + securityRules: [ + { + name: 'default-allow-3389' + properties: { + priority: 1000 + access: 'Allow' + direction: 'Inbound' + destinationPortRange: '3389' + protocol: 'Tcp' + sourceAddressPrefix: '*' + sourcePortRange: '*' + destinationAddressPrefix: '*' + } + } + ] + } +} ++resource virtualNetworkName 'Microsoft.Network/virtualNetworks@2020-08-01' = { + name: virtualNetworkName_var + location: vmLocation + properties: { + addressSpace: { + addressPrefixes: [ 
+ addressPrefix + ] + } + subnets: [ + { + name: subnetName + properties: { + addressPrefix: subnetPrefix + networkSecurityGroup: { + id: networkSecurityGroupName.id + } + } + } + ] + } +} ++resource nicName 'Microsoft.Network/networkInterfaces@2020-08-01' = { + name: nicName_var + location: vmLocation + properties: { + ipConfigurations: [ + { + name: 'ipconfig1' + properties: { + privateIPAllocationMethod: 'Dynamic' + publicIPAddress: { + id: publicIPAddressName.id + } + subnet: { + id: subnetRef + } + } + } + ] + } + dependsOn: [ ++ virtualNetworkName + ] +} ++resource vmName 'Microsoft.Compute/virtualMachines@2020-12-01' = { + name: vmName_var + location: vmLocation + identity: { + type: 'SystemAssigned' + } + properties: { + hardwareProfile: { + vmSize: vmSize + } + osProfile: { + computerName: vmName_var + adminUsername: adminUsername + adminPassword: adminPassword + } + storageProfile: { + imageReference: { + publisher: 'MicrosoftWindowsServer' + offer: 'WindowsServer' + sku: osVersion + version: 'latest' + } + osDisk: { + createOption: 'FromImage' + } + } + networkProfile: { + networkInterfaces: [ + { + id: nicName.id + } + ] + } + } +} ++resource automationAccount_resource 'Microsoft.Automation/automationAccounts@2021-06-22' = { + name: automationAccount + location: automationAccountLocation + properties: { + sku: { + name: 'Basic' + } + } +} ++resource automationAccount_workerGroupName 'Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups@2022-02-22' = { + parent: automationAccount_resource + name: workerGroupName + dependsOn: [ ++ vmName + ] +} ++resource automationAccount_workerGroupName_testhw_UniqueStringBasedOnTimeStamp 'Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers@2021-06-22' = { + parent: automationAccount_workerGroupName + name: guid('testhw', UniqueStringBasedOnTimeStamp) + properties: { + vmResourceId: resourceId('Microsoft.Compute/virtualMachines', virtualMachineName) + } + dependsOn: [ + 
vmName + ] +} ++resource virtualMachineName_HybridWorkerExtension 'Microsoft.Compute/virtualMachines/extensions@2022-03-01' = { + name: '${virtualMachineName}/HybridWorkerExtension' + location: vmLocation + properties: { + publisher: 'Microsoft.Azure.Automation.HybridWorker' + type: 'HybridWorkerForWindows' + typeHandlerVersion: '1.1' + autoUpgradeMinorVersion: true + enableAutomaticUpgrade: true + settings: { + AutomationAccountURL: automationAccount_resource.properties.automationHybridServiceUrl + } + } + dependsOn: [ + vmName + ] +} ++output output1 string = automationAccount_resource.properties.automationHybridServiceUrl +``` #### [ARM template](#tab/arm-template) You can use an Azure Resource Manager (ARM) template to create a new Azure Windo }, "virtualMachineName": { "type": "string",- "defaultValue": "simple-vm", "metadata": { "description": "Name of the virtual machine." } You can use an Azure Resource Manager (ARM) template to create a new Azure Windo }, "resources": [ {- "name": "[concat(parameters('workerGroupName'),'/',guid('AzureAutomationJobName', variables('UniqueStringBasedOnTimeStamp')))]", - "type": "hybridRunbookWorkerGroups/hybridRunbookWorkers", - "apiVersion": "2021-06-22", + "name": "[parameters('workerGroupName')]", + "type": "hybridRunbookWorkerGroups", + "apiVersion": "2022-02-22", "dependsOn": [ "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))]", "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ],- "properties": { - "vmResourceId": "[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" - } + "resources" : [ + { + "name": "[guid('testhw', variables('UniqueStringBasedOnTimeStamp'))]", + "type": "hybridRunbookWorkers", + "apiVersion": "2021-06-22", + "dependsOn": [ + "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))]", + "[resourceId('Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups', 
parameters('automationAccount'),parameters('workerGroupName'))]", + "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" + ], + "properties": { + "vmResourceId": "[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]" + } + } + ] } ] }, { "type": "Microsoft.Compute/virtualMachines/extensions", "name": "[concat(parameters('virtualMachineName'),'/HybridWorkerExtension')]",- "apiVersion": "2020-12-01", + "apiVersion": "2022-03-01", "location": "[parameters('vmLocation')]", "dependsOn": [ "[resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))]", You can use an Azure Resource Manager (ARM) template to create a new Azure Windo "properties": { "publisher": "Microsoft.Azure.Automation.HybridWorker", "type": "HybridWorkerForWindows",- "typeHandlerVersion": "0.1", + "typeHandlerVersion": "1.1", "autoUpgradeMinorVersion": true,+ "enableAutomaticUpgrade": true, "settings": { "AutomationAccountURL": "[reference(resourceId('Microsoft.Automation/automationAccounts', parameters('automationAccount'))).AutomationHybridServiceUrl]" } To install and use Hybrid Worker extension using REST API, follow these steps. T - To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg?view=azure-cli-latest) - To create, delete, and manage extension-based Hybrid Runbook Worker, see [az automation hrwg hrw | Microsoft Docs](/cli/azure/automation/hrwg/hrw?view=azure-cli-latest) +After creating new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker using [az vm extension set](/cli/azure/vm/extension?view=azure-cli-latest#az-vm-extension-set). 
+++#### [PowerShell](#tab/ps) ++You can use the following PowerShell cmdlets to manage Hybrid Runbook Workers and Hybrid Runbook Worker groups: ++| PowerShell cmdlet | Description | +| -- | -- | +|[`Get-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/get-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Gets Hybrid Runbook Worker group| +|[`Remove-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/remove-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Removes Hybrid Runbook Worker group| +|[`Set-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/set-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Updates Hybrid Worker group with Hybrid Worker credentials| +|[`New-AzAutomationHybridRunbookWorkerGroup`](/powershell/module/az.automation/new-azautomationhybridrunbookworkergroup?view=azps-9.1.0) | Creates new Hybrid Runbook Worker group| +|[`Get-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/get-azautomationhybridrunbookworker?view=azps-9.1.0) | Gets Hybrid Runbook Worker| +|[`Move-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/move-azautomationhybridrunbookworker?view=azps-9.1.0) | Moves Hybrid Worker from one group to another| +|[`New-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/new-azautomationhybridrunbookworker?view=azps-9.1.0) | Creates new Hybrid Runbook Worker| +|[`Remove-AzAutomationHybridRunbookWorker`](/powershell/module/az.automation/remove-azautomationhybridrunbookworker?view=azps-9.1.0)| Removes Hybrid Runbook Worker| ++After creating a new Hybrid Runbook Worker, you must install the extension on the Hybrid Worker.
++**Azure VMs** ++```powershell +Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$false +``` +**Azure Arc-enabled VMs** ++```powershell +New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Setting $settings -NoWait -EnableAutomaticUpgrade +``` ## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers |
automation | Desired State Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/desired-state-configuration.md | The following are the possible causes: - When you disable local authentication in Azure Automation. See [Disable local authentication](../disable-local-authentication.md). To fix it, see [re-enable local authentication](../disable-local-authentication.md#re-enable-local-authentication). +- The client computer's clock is several minutes off from the actual time. (To check the offset, use: *w32tm /stripchart /computer:time.windows.com /samples:6*.) + ### Resolution Use the following steps to reregister the failing DSC node. |
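Since clock skew is one of the listed causes, one way to quantify it on the client is to parse the `w32tm` samples the article suggests. This is a best-effort sketch, assuming the Windows Time tools are available and the default `/dataonly` output format (lines like `12:00:01, +00.0012345s`); the 300-second threshold is an illustrative assumption:

```powershell
# Sketch: measure local clock offset against time.windows.com and warn when
# the skew is large enough to break DSC node registration.
$samples = w32tm /stripchart /computer:time.windows.com /samples:3 /dataonly
# Keep only data lines of the form "HH:mm:ss, +NN.NNNNNNNs"; take the last one.
$last = $samples | Where-Object { $_ -match ',\s*[+-]' } | Select-Object -Last 1
if ($last) {
    $offsetSeconds = [double](($last -split ',')[-1].Trim().TrimEnd('s'))
    if ([math]::Abs($offsetSeconds) -gt 300) {
        Write-Warning "Clock skew is $offsetSeconds s; resync (w32tm /resync) and reregister the node."
    }
}
```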
automation | Extension Based Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md | Title: Troubleshoot extension-based Hybrid Runbook Worker issues in Azure Automation description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation extension-based Hybrid Runbook Workers. Previously updated : 09/28/2021 Last updated : 10/31/2022 To help troubleshoot issues with extension-based Hybrid Runbook Workers: - Check the error message shown in the Hybrid worker extension status/Detailed Status. It contains error message(s) and respective recommendation(s) to fix the issue. - Run the troubleshooter tool on the VM and it will generate an output file. Open the output file and verify the errors identified by the troubleshooter tool.- - For windows: you can find the troubleshooter at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\TroubleShootWindowsExtension.ps1`. - - For Linux: you can find the troubleshooter at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/troubleshootLinuxExtension.py`. + - For Windows: you can find the troubleshooter at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\TroubleShootWindowsExtension.ps1` + - For Linux: you can find the troubleshooter at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/troubleshootLinuxExtension.py` - For Linux machines, the Hybrid worker extension creates a `hweautomation` user and starts the Hybrid worker under that user. Check whether the user `hweautomation` is set up with the correct permissions. If your runbook is trying to access any local resources, ensure that `hweautomation` has the correct permissions to the local resources.
To help troubleshoot issues with extension-based Hybrid Runbook Workers: - For Linux: check the `hwd.` service. - Run the log collector tool and review the collected logs.- - For Windows: the logs are located at `C:\HybridWorkerExtensionLogs`. The tool is at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\PullLogs.ps1`. The extension log is `HybridWorkerExtensionHandler.log`. The worker log is `SME.log`. - - For Linux: the logs are located at `/home/nxautomation/run`. The tool is at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/logcollector.py`. The worker log is `worker.log`. The extension registration log is `register_log`. The extension startup log is `startup_log`. The extension log is `extension_out`. + - For Windows: the logs are located at `C:\HybridWorkerExtensionLogs`. The tool is at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\PullLogs.ps1`. + - For Linux: the logs are located at `/home/nxautomation/run`. The tool is at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/logcollector.py`. ++### Scenario: Hybrid Worker deployment fails with Private Link error ++#### Issue ++You are deploying an extension-based Hybrid Runbook Worker on a VM and it fails with error: *Authentication failed for private links*. ++#### Cause +The VM's virtual network is different from the virtual network of the Azure Automation account's private endpoint, **or** the two aren't connected. ++#### Resolution +Ensure that the Azure Automation account's private endpoint is connected to the same virtual network as the VM. Follow the steps mentioned in [Planning based on your network](../how-to/private-link-security.md#planning-based-on-your-network) to connect to a private endpoint.
Also [set public network access flags](../how-to/private-link-security.md#set-public-network-access-flags) to configure an Automation account to deny all public configuration and allow only connections through private endpoints. For more information on how to configure DNS settings for private endpoints, see [DNS configuration](../how-to/private-link-security.md#dns-configuration). ++### Scenario: Hybrid Worker deployment fails when the provided Hybrid Worker group does not exist ++#### Issue +You're deploying an extension-based Hybrid Runbook Worker on a VM, and it fails with the error: *Account/Group specified does not exist*. ++#### Cause +The Hybrid Runbook Worker group to which the Hybrid Worker is to be deployed has already been deleted. ++#### Resolution +Ensure that you create the Hybrid Runbook Worker group and add the VM as a Hybrid Worker in that group. Follow the steps in [create a Hybrid Runbook Worker group](../extension-based-hybrid-runbook-worker-install.md#create-hybrid-worker-group) using the Azure portal. ++### Scenario: Hybrid Worker deployment fails when system-assigned managed identity is not enabled on the VM ++#### Issue +You're deploying an extension-based Hybrid Runbook Worker on a VM, and it fails with the error: +*Unable to retrieve IMDS identity endpoint for non-Azure VM. Ensure that the Azure connected machine agent is installed and System-assigned identity is enabled.* ++#### Cause +You're deploying the extension-based Hybrid Worker on a non-Azure VM that doesn't have the Arc Connected Machine agent installed on it. ++#### Resolution +Non-Azure machines must have the Arc Connected Machine agent installed on them before you deploy them as extension-based Hybrid Runbook Workers.
To install the `AzureConnectedMachineAgent`, see [connect hybrid machines to Azure from the Azure portal](https://learn.microsoft.com/azure/azure-arc/servers/onboard-portal) +for Arc-enabled servers, or see [Manage VMware virtual machines Azure Arc](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure#enable-guest-management) to enable guest management for Arc-enabled VMware VMs. + ++### Scenario: Hybrid Worker deployment fails due to System assigned identity not enabled ++#### Issue +You're deploying an extension-based Hybrid Runbook Worker on a VM, and it fails with the error: *Invalid Authorization Token*. ++#### Cause +User-assigned managed identity of the VM is enabled, but system-assigned managed identity is not enabled. ++#### Resolution +Follow these steps: ++1. [Enable](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm) system-assigned managed identity on the VM. +2. [Delete](../extension-based-hybrid-runbook-worker-install.md#delete-a-hybrid-runbook-worker) the Hybrid Worker extension installed on the VM. +3. [Re-install]() the Hybrid Worker extension on the VM. +++### Scenario: Installation process of Hybrid Worker extension on Windows VM gets stuck ++#### Issue +You have installed the Hybrid Worker extension on a Windows VM from the portal, but don't get a notification that the process has completed successfully. ++#### Cause +Sometimes the installation process might get stuck. ++#### Resolution +Follow these steps to install the Hybrid Worker extension again: ++1. Open a PowerShell console. +1. Remove the registry entry, if present: *HKLM:/Software/Microsoft/Azure/HybridWorker* +1. Remove the registry entry, if present: *HKLM:/Software/Microsoft/HybridRunbookWorkerV2* +1. 
Go to the Hybrid Worker extension installation folder: + `cd "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>"` +1. Install the Hybrid Worker extension: `.\bin\install.ps1` +1. Enable the Hybrid Worker extension: `.\bin\enable.ps1` ++### Scenario: Uninstallation process of Hybrid Worker extension on Windows VM gets stuck ++#### Issue +You have uninstalled the Hybrid Worker extension on a Windows VM from the portal, but don't get a notification that the process has completed successfully. ++#### Cause +Sometimes the uninstallation process might get stuck. ++#### Resolution +1. Open a PowerShell console. +1. Go to the Hybrid Worker extension installation folder: + `cd "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>"` +1. Disable the Hybrid Worker extension: `.\bin\disable.cmd` +1. Uninstall the Hybrid Worker extension: `.\bin\uninstall.ps1` +1. Remove the registry entry, if present: *HKLM:/Software/Microsoft/Azure/HybridWorker* +1. Remove the registry entry, if present: *HKLM:/Software/Microsoft/HybridRunbookWorkerV2* +++### Scenario: Installation process of Hybrid Worker extension on Linux VM gets stuck ++#### Issue +You have installed the Hybrid Worker extension on a Linux VM from the portal, but don't get a notification that the process has completed successfully. ++#### Cause +Sometimes the installation process might get stuck. ++#### Resolution +1. Remove the state folder: `rm -r /home/hweautomation/state` +1. Go to the Hybrid Worker extension installation folder: */var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-<version>/* +1. In that folder, run `rm mrseq` +1. Install the Hybrid Worker extension: *"installCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -i"* +1. 
Enable Hybrid Worker extension: *"enableCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -e"* ++### Scenario: Uninstallation process of Hybrid Worker extension on Linux VM gets stuck ++#### Issue +You have uninstalled Hybrid Worker extension on a Linux VM from the portal, but don't get a notification that the process has completed successfully. ++#### Cause +Sometimes the uninstallation process might get stuck. ++#### Resolution +Follow the steps mentioned below to completely uninstall Hybrid Worker extension: ++1. Go to Hybrid Worker Extension installation folder: + */var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-<version>/* +1. Disable the extension: `"disableCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -d" ` +1. Uninstall the extension: `"uninstallCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -u"` + +### Scenario: Runbook execution fails ++#### Issue ++Runbook execution fails, and you receive the following error message: ++`The job action 'Activate' cannot be run, because the process stopped unexpectedly. The job action was attempted three times.` ++Your runbook is suspended shortly after it attempts to execute three times. There are conditions that can interrupt the runbook from completing. The related error message might not include any additional information. ++#### Cause ++The following are possible causes: ++* The runbooks can't authenticate with local resources. +* The hybrid worker is behind a proxy or firewall. +* The computer configured to run the Hybrid Runbook Worker doesn't meet the minimum hardware requirements. ++#### Resolution ++Verify that the computer has outbound access to **\*.azure-automation.net** on port 443. ++Computers running the Hybrid Runbook Worker should meet the minimum hardware requirements before the worker is configured to host this feature. 
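The outbound-access check described above (reaching `*.azure-automation.net` on port 443) can be sketched with a short TCP probe. This is an illustrative helper, not part of the troubleshooter tooling; the endpoint name in the comment is a placeholder for your region's actual Automation endpoint.

```python
import socket


def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, DNS failures, and refused connections
        return False


# Example (placeholder hostname - substitute your region's endpoint):
# can_reach("<your-region-endpoint>.azure-automation.net", 443)
```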
Runbooks and the background process they use might cause the system to be overused and cause runbook job delays or timeouts. ++Confirm the computer to run the Hybrid Runbook Worker feature meets the minimum hardware requirements. If it does, monitor CPU and memory use to determine any correlation between the performance of Hybrid Runbook Worker processes and Windows. Any memory or CPU pressure can indicate the need to upgrade resources. You can also select a different compute resource that supports the minimum requirements and scale when workload demands indicate an increase is necessary. ++Check the **Microsoft-SMA** event log for a corresponding event with the description `Win32 Process Exited with code [4294967295]`. The cause of this error is that you haven't configured authentication in your runbooks or specified the Run As credentials for the Hybrid Runbook Worker group. Review runbook permissions in [Running runbooks on a Hybrid Runbook Worker](../automation-hrw-run-runbooks.md) to confirm that you've correctly configured authentication for your runbooks. +++### Scenario: No certificate was found in the certificate store on the Hybrid Runbook Worker ++#### Issue ++A runbook running on a Hybrid Runbook Worker fails with the following error message: ++`Connect-AzAccount : No certificate was found in the certificate store with thumbprint 0000000000000000000000000000000000000000` +`At line:3 char:1` +`+ Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -Appl ...` +`+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~` +` + CategoryInfo : CloseError: (:) [Connect-AzAccount],ArgumentException` +` + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.ConnectAzAccountCommand` ++#### Cause ++This error occurs when you attempt to use a [Run As account](../automation-security-overview.md#run-as-accounts) in a runbook that runs on a Hybrid Runbook Worker where the Run As account certificate isn't present. 
Hybrid Runbook Workers don't have the certificate asset locally by default. The Run As account requires this asset to operate properly. ++#### Resolution ++If your Hybrid Runbook Worker is an Azure VM, you can use [runbook authentication with managed identities](../automation-hrw-run-runbooks.md#runbook-auth-managed-identities) instead. This scenario simplifies authentication by allowing you to authenticate to Azure resources using the managed identity of the Azure VM instead of the Run As account. When the Hybrid Runbook Worker is an on-premises machine, you need to install the Run As account certificate on the machine. To learn how to install the certificate, see the steps to run the PowerShell runbook **Export-RunAsCertificateToHybridWorker** in [Run runbooks on a Hybrid Runbook Worker](../automation-hrw-run-runbooks.md). +++### Scenario: Set-AzStorageBlobContent fails on a Hybrid Runbook Worker ++#### Issue ++The runbook fails when it tries to execute `Set-AzStorageBlobContent`, and you receive the following error message: ++`Set-AzStorageBlobContent : Failed to open file xxxxxxxxxxxxxxxx: Illegal characters in path` ++#### Cause ++This error is caused by the long file name behavior of calls to `[System.IO.Path]::GetFullPath()`, which adds UNC paths. ++#### Resolution ++As a workaround, you can create a configuration file named `OrchestratorSandbox.exe.config` with the following content: ++```xml +<configuration> + <runtime> + <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false" /> + </runtime> +</configuration> +``` ++Place this file in the same folder as the executable file `OrchestratorSandbox.exe`. For example, ++`%ProgramFiles%\Microsoft Monitoring Agent\Agent\AzureAutomation\7.3.702.0\HybridAgent` +++### Scenario: Microsoft Azure VMs automatically dropped from a hybrid worker group ++#### Issue ++You can't see the Hybrid Runbook Worker or VMs when the worker machine has been turned off for a long time.
++#### Cause ++The Hybrid Runbook Worker machine hasn't pinged Azure Automation for more than 30 days. As a result, Automation has purged the Hybrid Runbook Worker group or the System Worker group. ++#### Resolution ++Start the worker machine, and then re-register it with Azure Automation. For instructions on how to install the runbook environment and connect to Azure Automation, see [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md). ## Next steps |
automation | Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md | You have two options for resolving this issue: * Manually configure the worker machine to run in an Orchestrator sandbox. Then run a runbook created in the Azure Automation account on the worker to test the functionality. -### <a name="vm-automatically-dropped"></a>Scenario: Windows Azure VMs automatically dropped from a hybrid worker group +### <a name="vm-automatically-dropped"></a>Scenario: Microsoft Azure VMs automatically dropped from a hybrid worker group #### Issue The Hybrid Runbook Worker machine hasn't pinged Azure Automation for more than 3 #### Resolution -Start the worker machine, and then rereregister it with Azure Automation. For instructions on how to install the runbook environment and connect to Azure Automation, see [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md). +Start the worker machine, and then re-register it with Azure Automation. For instructions on how to install the runbook environment and connect to Azure Automation, see [Deploy a Windows Hybrid Runbook Worker](../automation-windows-hrw-install.md). ### <a name="no-cert-found"></a>Scenario: No certificate was found in the certificate store on the Hybrid Runbook Worker |
azure-cache-for-redis | Cache Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md | Use the **Maxmemory policy**, **maxmemory-reserved**, and **maxfragmentationmemo **Maxmemory policy** configures the eviction policy for the cache and allows you to choose from the following eviction policies: -- `volatile-lru` - The default eviction policy.-- `allkeys-lru`-- `volatile-random`-- `allkeys-random`-- `volatile-ttl`-- `noeviction`+- `volatile-lru`: The default eviction policy, removes the least recently used key out of all the keys with an expiration set. +- `allkeys-lru`: Removes the least recently used key. +- `volatile-random`: Removes a random key that has an expiration set. +- `allkeys-random`: Removes a random key. +- `volatile-ttl`: Removes the key with the shortest time to live based on the expiration set for it. +- `noeviction`: No eviction policy. Returns an error message if you attempt to insert data. +- `volatile-lfu`: Evicts the least frequently used keys out of all keys with an expire field set. +- `allkeys-lfu`: Evicts the least frequently used keys out of all keys. For more information about `maxmemory` policies, see [Eviction policies](https://redis.io/topics/lru-cache#eviction-policies). |
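To illustrate the difference between the `volatile-*` and `allkeys-*` policy families, here's a toy in-memory sketch of LRU eviction. It's a simplified model for illustration only, not Redis itself: capacity is counted in keys rather than bytes, and the LRU order is exact, whereas real Redis samples candidate keys.

```python
from collections import OrderedDict


class TinyCache:
    """Toy model of Redis maxmemory eviction policies (illustration only)."""

    def __init__(self, capacity, policy="volatile-lru"):
        self.capacity = capacity
        self.policy = policy
        self.data = OrderedDict()   # keys in least- to most-recently-used order
        self.volatile = set()       # keys that have an expiration set

    def set(self, key, value, expires=False):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self._evict()
        self.data[key] = value
        if expires:
            self.volatile.add(key)

    def get(self, key):
        self.data.move_to_end(key)  # reading a key makes it recently used
        return self.data[key]

    def _evict(self):
        if self.policy == "allkeys-lru":
            candidates = list(self.data)                    # any key is evictable
        elif self.policy == "volatile-lru":
            candidates = [k for k in self.data if k in self.volatile]
        else:
            candidates = []                                 # models noeviction
        if not candidates:
            raise MemoryError("nothing evictable: write is rejected")
        victim = candidates[0]                              # least recently used
        self.volatile.discard(victim)
        del self.data[victim]
```

With `volatile-lru`, a cache full of keys that have no expiration behaves like `noeviction`; with `allkeys-lru`, any key can be evicted once the memory limit is reached.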
azure-fluid-relay | Fluid Json Web Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/fluid-json-web-token.md | Each part is separated by a period (.) and separately Base64 encoded. | documentId | string | Generated by Azure Fluid Relay (AFR) service. Identifies the document for which the token is being generated. | | scope | string[] | Identifies the permissions required by the client on the document or summary. For every scope, you can define the permissions you want to give to the client. | | tenantId | string | Identifies the tenant. |-| user | JSON | *Optional* `{ displayName: <display_name>, id: <user_id>, name: <user_name>, }` Identifies users of your application. This is sent back to your application by Alfred, the ordering service. It can be used by your application to identify your users from the response it gets from Alfred. Azure Fluid Relay doesn't validate this information. | +| user | JSON | Identifies users of your application. It can be used by your application to identify your users by using the [Fluid Framework Audience](use-audience-in-fluid.md).<br>`{ id: <user_id>, name: <user_name>, additionalDetails: { email: <email>, date: <date>, }, }` | | iat | number, a UNIX timestamp | "Issued At" indicates when the authentication for this token occurred. | | exp | number, a UNIX timestamp | The "exp" (expiration time) claim identifies the expiration time on or after which the JWT must not be accepted for processing. The token lifetime can't be more than 1 hour. | | ver | string | Indicates the version of the access token. Must be `1.0` . | |
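As an illustration of the claims in the table above, the following sketch builds an HS256-signed JWT using only Python's standard library. The scope strings and user fields shown are examples based on the table, not an exhaustive list; in production, sign with your actual Fluid Relay tenant key and prefer a maintained JWT library.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_fluid_token(tenant_id, document_id, tenant_key, user):
    """Assemble and sign a JWT carrying the claims described in the table."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "documentId": document_id,
        "scope": ["doc:read", "doc:write", "summary:write"],  # example scopes
        "tenantId": tenant_id,
        "user": user,  # e.g. {"id": ..., "name": ..., "additionalDetails": {...}}
        "iat": now,
        "exp": now + 3600,  # the token lifetime can't exceed 1 hour
        "ver": "1.0",
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(tenant_key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"
```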
azure-functions | Functions Reference Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md | zone_pivot_groups: python-mode-functions # Azure Functions Python developer guide -This article is an introduction to developing Azure Functions using Python. The content below assumes that you've already read the [Azure Functions developers guide](functions-reference.md). +This guide is an introduction to developing Azure Functions by using Python. The article assumes that you've already read the [Azure Functions developers guide](functions-reference.md). > [!IMPORTANT] > This article supports both the v1 and v2 programming models for Python in Azure Functions. -> The v2 programming model is currently in preview. -> While the v1 model uses a functions.json file to define functions, the new v2 model lets you instead use a decorator-based approach. This new approach results in a simpler file structure and a more code-centric approach. Choose the **v2** selector at the top of the article to learn about this new programming model. . +> The Python v2 programming model is currently in preview. +> The Python v1 model uses a *function.json* file to define functions, and the new v2 model lets you instead use a decorator-based approach. This new approach results in a simpler file structure, and it's more code-centric. Choose the **v2** selector at the top of the article to learn about this new programming model. 
-As a Python developer, you may also be interested in one of the following articles: +As a Python developer, you might also be interested in one of the following articles: -| Getting started | Concepts| Scenarios/Samples | ++| Getting started | Concepts| Scenarios and samples | |--|--|--|-| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-configuration)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-configuration)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> | +| <ul><li>[Create Python functions by using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-configuration)</li><li>[Create Python functions by using a terminal or command prompt](./create-first-function-cli-python.md?pivots=python-mode-configuration)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <ul><li>[Image classification with PyTorch](machine-learning-pytorch.md)</li><li>[Azure Automation sample](/samples/azure-samples/azure-functions-python-list-resource-groups/azure-functions-python-sample-list-resource-groups/)</li><li>[Machine learning with TensorFlow](functions-machine-learning-tensorflow.md)</li><li>[Browse Python 
samples](/samples/browse/?products=azure-functions&languages=python)</li></ul> | ::: zone-end+ ::: zone pivot="python-mode-decorators" + | Getting started | Concepts| Samples |-|--|--|--| -| <ul><li>[Python function using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-decorators)</li><li>[Python function with terminal/command prompt](./create-first-function-cli-python.md?pivots=python-mode-decorators)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <li>[Code Examples](functions-bindings-triggers-python.md)</li> | +| | | | +| <ul><li>[Create Python functions by using Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-decorators)</li><li>[Create Python functions by using a terminal or command prompt](./create-first-function-cli-python.md?pivots=python-mode-decorators)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[Performance considerations](functions-best-practices.md)</li></ul> | <li>[Code Examples](functions-bindings-triggers-python.md)</li> | + ::: zone-end > [!NOTE]-> While you can develop your Python based Azure Functions locally on Windows, Python is only supported on a Linux based hosting plan when running in Azure. See the list of supported [operating system/runtime](functions-scale.md#operating-systemruntime) combinations. +> Although you can develop your Python-based Azure functions locally on Windows, Python is supported only on a Linux-based hosting plan when it's running in Azure. For more information, see the [list of supported operating system/runtime combinations](functions-scale.md#operating-systemruntime). 
## Programming model ::: zone pivot="python-mode-configuration" -Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the `__init__.py` file. You can also [specify an alternate entry point](#alternate-entry-point). +Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method called `main()` in the *\_\_init\_\_.py* file. You can also [specify an alternative entry point](#alternative-entry-point). -Data from triggers and bindings is bound to the function via method attributes using the `name` property defined in the *function.json* file. For example, the _function.json_ below describes a simple function triggered by an HTTP request named `req`: +You bind data to the function from triggers and bindings via method attributes that use the `name` property that's defined in the *function.json* file. For example, the following *function.json* file describes a simple function that's triggered by an HTTP request named `req`: :::code language="json" source="~/functions-quickstart-templates/Functions.Templates/Templates/HttpTrigger-Python/function.json"::: -Based on this definition, the `__init__.py` file that contains the function code might look like the following example: +Based on this definition, the *\_\_init\_\_.py* file that contains the function code might look like the following example: ```python def main(req): def main(req): return f'Hello, {user}!' ``` -You can also explicitly declare the attribute types and return type in the function using Python type annotations. This action helps you to use the IntelliSense and autocomplete features provided by many Python code editors. 
+You can also explicitly declare the attribute types and return type in the function by using Python type annotations. Doing so helps you to use the IntelliSense and autocomplete features that are provided by many Python code editors. ```python import azure.functions def main(req: azure.functions.HttpRequest) -> str: return f'Hello, {user}!' ``` -Use the Python annotations included in the [azure.functions.*](/python/api/azure-functions/azure.functions) package to bind input and outputs to your methods. +Use the Python annotations that are included in the [azure.functions.\*](/python/api/azure-functions/azure.functions) package to bind the input and outputs to your methods. + ::: zone-end ::: zone pivot="python-mode-decorators" -Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method in the `function_app.py` file. +Azure Functions expects a function to be a stateless method in your Python script that processes input and produces output. By default, the runtime expects the method to be implemented as a global method in the *function\_app.py* file. -Triggers and bindings can be declared and used in a function in a decorator based approach. They're defined in the same file, `function_app.py`, as the functions. As an example, the below _function_app.py_ file represents a function trigger by an HTTP request. +Triggers and bindings can be declared and used in a function in a decorator-based approach. They're defined in the same file, *function\_app.py*, as the functions. As an example, the following *function\_app.py* file represents a function triggered by an HTTP request. ```python @app.function_name(name="HttpTrigger1")
This helps you use the IntelliSense and autocomplete features provided by many Python code editors. +You can also explicitly declare the attribute types and return type in the function by using Python type annotations. Doing so helps you use the IntelliSense and autocomplete features that are provided by many Python code editors. ```python import azure.functions def main(req: azure.functions.HttpRequest) -> str: return f'Hello, {user}!' ``` -At this time, only specific triggers and bindings are supported by the v2 programming model. For more information, see [Triggers and inputs](#triggers-and-inputs). +At this time, only specific triggers and bindings are supported by the Python v2 programming model. For more information, see [Triggers and inputs](#triggers-and-inputs). To learn about known limitations with the v2 model and their workarounds, see [Troubleshoot Python errors in Azure Functions](./recover-python-functions.md?pivots=python-mode-decorators). ::: zone-end -## Alternate entry point +## Alternative entry point ::: zone pivot="python-mode-configuration" -You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the _function.json_ below tells the runtime to use the `customentry()` method in the _main.py_ file, as the entry point for your Azure Function. ++You can change the default behavior of a function by optionally specifying the `scriptFile` and `entryPoint` properties in the *function.json* file. For example, the following *function.json* tells the runtime to use the `customentry()` method in the *main.py* file as the entry point for your Azure function. ```json { You can change the default behavior of a function by optionally specifying the ` ::: zone-end ::: zone pivot="python-mode-decorators" -During preview, the entry point is only in the file `function_app.py`. 
However, functions within the project can be referenced in function_app.py using [blueprints](#blueprints) or by importing. +During preview, the entry point is only in the *function\_app.py* file. However, you can reference functions within the project in *function\_app.py* by using [blueprints](#blueprints) or by importing. ::: zone-end ## Folder structure ::: zone pivot="python-mode-configuration" -The recommended folder structure for a Python Functions project looks like the following example: +The recommended folder structure for a Python functions project looks like the following example: ``` <project_root>/ The recommended folder structure for a Python Functions project looks like the f | - requirements.txt | - Dockerfile ```-The main project folder (<project_root>) can contain the following files: ++The main project folder, *<project_root>*, can contain the following files: * *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). * *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure. * *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).-* *.vscode/*: (Optional) Contains store VSCode configuration. To learn more, see [VSCode setting](https://code.visualstudio.com/docs/getstarted/settings). +* *.vscode/*: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings). * *.venv/*: (Optional) Contains a Python virtual environment used by local development. 
* *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md). * *tests/*: (Optional) Contains the test cases of your function app.-* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published. +* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor setting, *.venv/* to ignore the local Python virtual environment, *tests/* to ignore test cases, and *local.settings.json* to prevent local app settings from being published. ++Each function has its own code file and binding configuration file, *function.json*. -Each function has its own code file and binding configuration file (function.json). ::: zone-end ::: zone pivot="python-mode-decorators" -The recommended folder structure for a Python Functions project looks like the following example: +The recommended folder structure for a Python functions project looks like the following example: ``` <project_root>/ The recommended folder structure for a Python Functions project looks like the f | - Dockerfile ``` -The main project folder (<project_root>) can contain the following files: -* *.venv/*: (Optional) Contains a Python virtual environment used by local development. -* *.vscode/*: (Optional) Contains store VSCode configuration. To learn more, see [VSCode setting](https://code.visualstudio.com/docs/getstarted/settings). -* *function_app.py*: This is the default location for all functions and their related triggers and bindings. -* *additional_functions.py*: (Optional) Any other Python files that contain functions (usually for logical grouping) that are referenced in `function_app.py` through blueprints. 
+The main project folder, *<project_root>*, can contain the following files: ++* *.venv/*: (Optional) Contains a Python virtual environment that's used by local development. +* *.vscode/*: (Optional) Contains the stored Visual Studio Code configuration. To learn more, see [Visual Studio Code settings](https://code.visualstudio.com/docs/getstarted/settings). +* *function_app.py*: The default location for all functions and their related triggers and bindings. +* *additional_functions.py*: (Optional) Any other Python files that contain functions (usually for logical grouping) that are referenced in *function\_app.py* through blueprints. * *tests/*: (Optional) Contains the test cases of your function app.-* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains `.vscode/` to ignore your editor setting, `.venv/` to ignore local Python virtual environment, `tests/` to ignore test cases, and `local.settings.json` to prevent local app settings being published. +* *.funcignore*: (Optional) Declares files that shouldn't get published to Azure. Usually, this file contains *.vscode/* to ignore your editor setting, *.venv/* to ignore local Python virtual environment, *tests/* to ignore test cases, and *local.settings.json* to prevent local app settings being published. * *host.json*: Contains configuration options that affect all functions in a function app instance. This file does get published to Azure. Not all options are supported when running locally. To learn more, see [host.json](functions-host-json.md).-* *local.settings.json*: Used to store app settings and connection strings when running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). -* *requirements.txt*: Contains the list of Python packages the system installs when publishing to Azure. 
+* *local.settings.json*: Used to store app settings and connection strings when you're running locally. This file doesn't get published to Azure. To learn more, see [local.settings.file](functions-develop-local.md#local-settings-file). +* *requirements.txt*: Contains the list of Python packages the system installs when you publish to Azure. * *Dockerfile*: (Optional) Used when publishing your project in a [custom container](functions-create-function-linux-custom-image.md). ::: zone-end -When you deploy your project to a function app in Azure, the entire contents of the main project (*<project_root>*) folder should be included in the package, but not the folder itself, which means `host.json` should be in the package root. We recommend that you maintain your tests in a folder along with other functions, in this example `tests/`. For more information, see [Unit Testing](#unit-testing). +When you deploy your project to a function app in Azure, the entire contents of the main project folder, *<project_root>*, should be included in the package, but not the folder itself, which means that *host.json* should be in the package root. We recommend that you maintain your tests in a folder along with other functions (in this example, *tests/*). For more information, see [Unit testing](#unit-testing). ::: zone pivot="python-mode-decorators" ## Blueprints -The v2 programming model introduces the concept of _blueprints_. A blueprint is a new class instantiated to register functions outside of the core function application. The functions registered in blueprint instances aren't indexed directly by function runtime. To get these blueprint functions indexed, the function app needs to register the functions from blueprint instances. +The Python v2 programming model introduces the concept of _blueprints_. A blueprint is a new class that's instantiated to register functions outside of the core function application.
The functions registered in blueprint instances aren't indexed directly by the function runtime. To get these blueprint functions indexed, the function app needs to register the functions from blueprint instances. Using blueprints provides the following benefits: -* Lets you break-up the function app into modular components enabling you to define functions in multiple Python files and divide them into different components per file. +* Lets you break up the function app into modular components, which enables you to define functions in multiple Python files and divide them into different components per file. * Provides extensible public function app interfaces to build and reuse your own APIs. The following example shows how to use blueprints: -First, in an `http_blueprint.py` file HTTP triggered function is first defined and added to a blueprint object. +First, in an *http_blueprint.py* file, an HTTP-triggered function is defined and added to a blueprint object. ```python import logging def default_template(req: func.HttpRequest) -> func.HttpResponse: if name: return func.HttpResponse( - f"Hello, {name}. This HTTP triggered function " + f"Hello, {name}. This HTTP-triggered function " f"executed successfully.") else: return func.HttpResponse( - "This HTTP triggered function executed successfully. " + "This HTTP-triggered function executed successfully. " "Pass a name in the query string or in the request body for a" " personalized response.", status_code=200 ) ``` -Next, in `function_app.py` the blueprint object is imported and its functions are registered to function app. +Next, in the *function\_app.py* file, the blueprint object is imported and its functions are registered to the function app. ```python import azure.functions as func app = func.FunctionApp() app.register_functions(bp) ``` + ::: zone pivot="python-mode-configuration" ## Import behavior -You can import modules in your function code using both absolute and relative references.
Based on the folder structure shown above, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*: +You can import modules in your function code by using both absolute and relative references. Based on the previously described folder structure, the following imports work from within the function file *<project_root>\my\_first\_function\\_\_init\_\_.py*: ```python from shared_code import my_first_helper_function #(absolute) from . import example #(relative) ``` > [!NOTE]-> The *shared_code/* folder needs to contain an \_\_init\_\_.py file to mark it as a Python package when using absolute import syntax. +> When you're using absolute import syntax, the *shared_code/* folder needs to contain an *\_\_init\_\_.py* file to mark it as a Python package. -The following \_\_app\_\_ import and beyond top-level relative import are deprecated, since it isn't supported by static type checker and not supported by Python test frameworks: +The following \_\_app\_\_ import and beyond top-level relative import are deprecated, because they're not supported by static type checkers or by Python test frameworks: ```python from __app__.shared_code import my_first_helper_function #(deprecated __app__ import) from ..shared_code import my_first_helper_function #(deprecated beyond top-level ## Triggers and inputs ::: zone pivot="python-mode-configuration" -Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're different in the `function.json` file, usage is identical in Python code. +Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're different in the *function.json* file, their usage is identical in Python code.
Connection strings or secrets for trigger and input sources map to values in the *local.settings.json* file when you're running locally, and to the application settings when you're running in Azure. -For example, the following code demonstrates the difference between the two: +For example, the following code demonstrates the difference between the two inputs: ```json // function.json import logging def main(req: func.HttpRequest, obj: func.InputStream): - logging.info(f'Python HTTP triggered function processed: {obj.read()}') + logging.info(f'Python HTTP-triggered function processed: {obj.read()}') ``` -When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the AzureWebJobsStorage app setting, which is the same storage account used by the function app. +When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage account based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is the connection string that's found in the `AzureWebJobsStorage` app setting, which is the same storage account that's used by the function app. ::: zone-end ::: zone pivot="python-mode-decorators" -Inputs are divided into two categories in Azure Functions: trigger input and other input. Although they're different in the `function.json` file, usage is identical in Python code. Connection strings or secrets for trigger and input sources map to values in the `local.settings.json` file when running locally, and the application settings when running in Azure. +Inputs are divided into two categories in Azure Functions: trigger input and other input.
Although they're defined using different decorators, their usage is similar in Python code. Connection strings or secrets for trigger and input sources map to values in the *local.settings.json* file when you're running locally, and to the application settings when you're running in Azure. -As an example, the following code demonstrates how to define a Blob storage input binding: +As an example, the following code demonstrates how to define a Blob Storage input binding: ```json // local.settings.json app = func.FunctionApp() def main(req: func.HttpRequest, obj: func.InputStream):- logging.info(f'Python HTTP triggered function processed: {obj.read()}') + logging.info(f'Python HTTP-triggered function processed: {obj.read()}') ``` -When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the storage account specified is the connection string found in the AzureWebJobsStorage app setting, which is the same storage account used by the function app. +When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage account based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is the connection string that's found in the `AzureWebJobsStorage` app setting, which is the same storage account that's used by the function app. -At this time, only specific triggers and bindings are supported by the v2 programming model. +At this time, only specific triggers and bindings are supported by the Python v2 programming model.
Supported triggers and bindings are as follows: -| Type | Trigger | Input Binding | Output Binding | -| | | | | +| Type | Trigger | Input binding | Output binding | +| | :: | :: | :: | | [HTTP](functions-bindings-triggers-python.md#http-trigger) | x | | | | [Timer](functions-bindings-triggers-python.md#timer-trigger) | x | | | | [Azure Queue Storage](functions-bindings-triggers-python.md#azure-queue-storage-trigger) | x | | x | For more examples, see [Python V2 model Azure Functions triggers and bindings (p ::: zone pivot="python-mode-configuration" Output can be expressed both in return value and output parameters. If there's only one output, we recommend using the return value. For multiple outputs, you'll have to use output parameters. -To use the return value of a function as the value of an output binding, the `name` property of the binding should be set to `$return` in `function.json`. +To use the return value of a function as the value of an output binding, the `name` property of the binding should be set to `$return` in the *function.json* file. To produce multiple outputs, use the `set()` method provided by the [`azure.functions.Out`](/python/api/azure-functions/azure.functions.out) interface to assign a value to the binding. For example, the following function can push a message to a queue and also return an HTTP response. To learn more about logging, see [Monitor Azure Functions](functions-monitoring. ### Log custom telemetry -By default, the Functions runtime collects logs and other telemetry data generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). 
This extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib). +By default, the Functions runtime collects logs and other telemetry data that are generated by your functions. This telemetry ends up as traces in Application Insights. Request and dependency telemetry for certain Azure services are also collected by default by [triggers and bindings](functions-triggers-bindings.md#supported-bindings). ++To collect custom request and custom dependency telemetry outside of bindings, you can use the [OpenCensus Python Extensions](https://github.com/census-ecosystem/opencensus-python-extensions-azure). This extension sends custom telemetry data to your Application Insights instance. You can find a list of supported extensions at the [OpenCensus repository](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib). >[!NOTE] >To use the OpenCensus Python extensions, you need to enable [Python worker extensions](#python-worker-extensions) in your function app by setting `PYTHON_ENABLE_WORKER_EXTENSIONS` to `1`. You also need to switch to using the Application Insights connection string by adding the [`APPLICATIONINSIGHTS_CONNECTION_STRING`](functions-app-settings.md#applicationinsights_connection_string) setting to your [application settings](functions-how-to-use-azure-function-app-settings.md#settings), if it's not already there. def main(req, context): ## HTTP trigger ::: zone pivot="python-mode-configuration" -The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function. +The HTTP trigger is defined in the *function.json* file. The `name` of the binding must match the named parameter in the function. In the previous examples, a binding name `req` is used. 
This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned. From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body. def main(req: func.HttpRequest) -> func.HttpResponse: ) ``` -In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method. +In this function, you obtain the value of the `name` query parameter from the `params` parameter of the [HttpRequest] object. You read the JSON-encoded message body by using the `get_json` method. Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object.+ ::: zone-end+ ::: zone pivot="python-mode-decorators" -The HTTP trigger is defined in the function.json file. The `name` of the binding must match the named parameter in the function. ++The HTTP trigger is defined in the *function.json* file. The `name` of the binding must match the named parameter in the function. + In the previous examples, a binding name `req` is used. This parameter is an [HttpRequest] object, and an [HttpResponse] object is returned. From the [HttpRequest] object, you can get request headers, query parameters, route parameters, and the message body. -The following example is from the HTTP trigger template for Python v2 programming model. It's the sample code provided when you create a function from Core Tools or VS Code. +The following example is from the HTTP trigger template for the Python v2 programming model. It's the sample code that's provided when you create a function by using Azure Functions Core Tools or Visual Studio Code. ```python @app.function_name(name="HttpTrigger1") def test_function(req: func.HttpRequest) -> func.HttpResponse: name = req_body.get('name') if name:- return func.HttpResponse(f"Hello, {name}. 
This HTTP triggered function executed successfully.") + return func.HttpResponse(f"Hello, {name}. This HTTP-triggered function executed successfully.") else: return func.HttpResponse(- "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", + "This HTTP-triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.", status_code=200 ) ``` -In this function, the value of the `name` query parameter is obtained from the `params` parameter of the [HttpRequest] object. The JSON-encoded message body is read using the `get_json` method. +In this function, you obtain the value of the `name` query parameter from the `params` parameter of the [HttpRequest] object. You read the JSON-encoded message body by using the `get_json` method. Likewise, you can set the `status_code` and `headers` for the response message in the returned [HttpResponse] object. -To pass in a name in this example, paste the URL provided when running the function, and append it with "?name={name}" +To pass in a name in this example, paste the URL that's provided when you're running the function, and then append `?name={name}` to it. ::: zone-end ## Web frameworks ::: zone pivot="python-mode-configuration" -You can use WSGI and ASGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks. +You can use Web Server Gateway Interface (WSGI)-compatible and Asynchronous Server Gateway Interface (ASGI)-compatible frameworks, such as Flask and FastAPI, with your HTTP-triggered Python functions. This section shows how to modify your functions to support these frameworks.
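As background, the WSGI contract that `func.WsgiMiddleware` adapts is deliberately small: an application is any callable that accepts `environ` and `start_response`. The following stdlib-only sketch (the names `simple_app` and `captured` are illustrative, not part of the Functions API) shows that contract in isolation:

```python
from wsgiref.util import setup_testing_defaults

# A WSGI application is any callable with this (environ, start_response)
# signature -- the kind of object (for example, Flask's app.wsgi_app)
# that a WSGI wrapper hands to the hosting environment.
def simple_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a WSGI app"]

# Call the app directly, the way a WSGI server (or middleware) would.
environ = {}
setup_testing_defaults(environ)  # fills in a minimal, valid WSGI environ
captured = {}

def start_response(status, headers):
    captured["status"] = status

body = b"".join(simple_app(environ, start_response))
print(body)
```

Flask's `app.wsgi_app` exposes exactly this kind of callable, which is what the WSGI wrapper approach passes to the Functions host.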
-First, the function.json file must be updated to include a `route` in the HTTP trigger, as shown in the following example: +First, the *function.json* file must be updated to include a `route` in the HTTP trigger, as shown in the following example: ```json { First, the function.json file must be updated to include a `route` in the HTTP t } ``` -The host.json file must also be updated to include an HTTP `routePrefix`, as shown in the following example. +The *host.json* file must also be updated to include an HTTP `routePrefix`, as shown in the following example: ```json { The host.json file must also be updated to include an HTTP `routePrefix`, as sho } ``` -Update the Python code file `init.py`, depending on the interface used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask: +Update the Python code file *\_\_init\_\_.py*, depending on the interface that's used by your framework. The following example shows either an ASGI handler approach or a WSGI wrapper approach for Flask: # [ASGI](#tab/asgi) def main(req: func.HttpRequest, context) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') return func.WsgiMiddleware(app).handle(req, context) ```-For a full example, see [Using Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/). +For a full example, see [Use Flask Framework with Azure Functions](/samples/azure-samples/flask-app-on-azure-functions/azure-functions-python-create-flask-app/). ::: zone-end+ ::: zone pivot="python-mode-decorators" -You can use ASGI and WSGI-compatible frameworks such as Flask and FastAPI with your HTTP-triggered Python functions.
You must first update the host.json file to include an HTTP `routePrefix`, as shown in the following example: ++You can use Asynchronous Server Gateway Interface (ASGI)-compatible and Web Server Gateway Interface (WSGI)-compatible frameworks, such as Flask and FastAPI, with your HTTP-triggered Python functions. You must first update the *host.json* file to include an HTTP `routePrefix`, as shown in the following example: ```json { app = func.AsgiFunctionApp(app=fast_app, # [WSGI](#tab/wsgi) -`WsgiFunctionApp` is top level function app class for constructing WSGI HTTP functions. +`WsgiFunctionApp` is the top-level function app class for constructing WSGI HTTP functions. ```python # function_app.py app = func.WsgiFunctionApp(app=flask_app.wsgi_app, ::: zone-end ## Scaling and performance -For scaling and performance best practices for Python function apps, see the [Python scale and performance article](python-scale-performance-reference.md). +For scaling and performance best practices for Python function apps, see the [Python scaling and performance](python-scale-performance-reference.md) article. ## Context -To get the invocation context of a function during execution, include the [`context`](/python/api/azure-functions/azure.functions.context) argument in its signature. +To get the invocation context of a function when it's running, include the [`context`](/python/api/azure-functions/azure.functions.context) argument in its signature. For example: def main(req: azure.functions.HttpRequest, return f'{context.invocation_id}' ``` -The [**Context**](/python/api/azure-functions/azure.functions.context) class has the following string attributes: --`function_directory` -The directory in which the function is running. +The [`Context`](/python/api/azure-functions/azure.functions.context) class has the following string attributes: -`function_name` -Name of the function. --`invocation_id` -ID of the current function invocation. 
--`trace_context` -Context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). --`retry_context` -Context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies). +| Attribute | Description | +| | | +| `function_directory` | The directory in which the function is running. | +| `function_name` | The name of the function. | +| `invocation_id` | The ID of the current function invocation. | +| `trace_context` | The context for distributed tracing. For more information, see [`Trace Context`](https://www.w3.org/TR/trace-context/). | +| `retry_context` | The context for retries to the function. For more information, see [`retry-policies`](./functions-bindings-errors.md#retry-policies). | ## Global variables -It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. In order to cache the results of an expensive computation, declare it as a global variable. +It isn't guaranteed that the state of your app will be preserved for future executions. However, the Azure Functions runtime often reuses the same process for multiple executions of the same app. To cache the results of an expensive computation, declare it as a global variable. ```python CACHED_DATA = None def main(req): ## Environment variables ::: zone pivot="python-mode-configuration" -In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code. +In Azure Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code.
| Method | Description | | | |-| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful. | -| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, returning null when unsuccessful. | +| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, and raises an error when it's unsuccessful. | +| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, and returns `None` when it's unsuccessful. | Both of these ways require you to declare `import os`. def main(req: func.HttpRequest) -> func.HttpResponse: logging.info(f'My app setting value:{my_app_setting_value}') ``` -For local development, application settings are [maintained in the local.settings.json file](functions-develop-local.md#local-settings-file). +For local development, application settings are [maintained in the *local.settings.json* file](functions-develop-local.md#local-settings-file). ::: zone-end ::: zone pivot="python-mode-decorators" -In Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code. +In Azure Functions, [application settings](functions-app-settings.md), such as service connection strings, are exposed as environment variables during execution. There are two main ways to access these settings in your code. | Method | Description | | | |-| **`os.environ["myAppSetting"]`** | Tries to get the application setting by key name, raising an error when unsuccessful.
| +| **`os.getenv("myAppSetting")`** | Tries to get the application setting by key name, and returns `None` when it's unsuccessful. | Both of these ways require you to declare `import os`. def main(req: func.HttpRequest) -> func.HttpResponse: logging.info(f'My app setting value:{my_app_setting_value}') ``` -For local development, application settings are [maintained in the local.settings.json file](functions-develop-local.md#local-settings-file). +For local development, application settings are [maintained in the *local.settings.json* file](functions-develop-local.md#local-settings-file). -When using the new programming model, the following app setting needs to be enabled in the file `localsettings.json` as follows. +When you're using the new programming model, enable the following app setting in the *local.settings.json* file, as shown here: ```json "AzureWebJobsFeatureFlags": "EnableWorkerIndexing" ``` -When deploying the function, this setting won't be automatically created. You must explicitly create this setting in your function app in Azure for it to run using the v2 model. +When you're deploying the function, this setting isn't created automatically. You must explicitly create this setting in your function app in Azure for it to run by using the v2 model. -Multiple Python workers aren't supported in v2 at this time. This means that setting `FUNCTIONS_WORKER_PROCESS_COUNT` to greater than 1 isn't supported for the functions using the v2 model. +The multiple Python workers setting isn't supported in the v2 programming model at this time. This means that setting `FUNCTIONS_WORKER_PROCESS_COUNT` to greater than `1` isn't supported for functions that are developed by using the v2 model. ::: zone-end Multiple Python workers aren't supported in v2 at this time.
This means that set Azure Functions supports the following Python versions: -| Functions version | Python<sup>*</sup> versions | -| -- | -- | +| Functions version | Python\* versions | +| -- | :--: | | 4.x | 3.9<br/> 3.8<br/>3.7 | | 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 | | 2.x | 3.7<br/>3.6 | -<sup>*</sup>Official Python distributions +\* Official Python distributions -To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created and can't be changed. +To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The Functions runtime version is set by the `--functions-version` option. The Python version is set when the function app is created, and it can't be changed. -The runtime uses the available Python version, when you run it locally. +The runtime uses the available Python version when you run it locally. ### Changing Python version -To set a Python function app to a specific language version, you need to specify the language and the version of the language in `LinuxFxVersion` field in site config. For example, to change Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`. +To set a Python function app to a specific language version, you need to specify the language and the version of the language in the `linuxFxVersion` field in the site configuration. For example, to change the Python app to use Python 3.8, set `linuxFxVersion` to `python|3.8`. To learn how to view and change the `linuxFxVersion` site setting, see [How to target Azure Functions runtime versions](set-runtime-version.md#manual-version-updates-on-linux).
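As an illustration, the `linuxFxVersion` site setting can also be updated directly with the Azure CLI. The app name and resource group shown here are placeholders for your own values:

```azurecli
az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --linux-fx-version "python|3.9"
```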
For more general information, see the [Azure Functions runtime support policy](. ## Package management -When developing locally using the Azure Functions Core Tools or Visual Studio Code, add the names and versions of the required packages to the `requirements.txt` file and install them using `pip`. +When you're developing locally by using Core Tools or Visual Studio Code, add the names and versions of the required packages to the *requirements.txt* file, and then install them by using `pip`. -For example, the following requirements file and pip command can be used to install the `requests` package from PyPI. +For example, you can use the following *requirements.txt* file and `pip` command to install the `requests` package from PyPI. ```txt requests==2.19.1 pip install -r requirements.txt ## Publishing to Azure -When you're ready to publish, make sure that all your publicly available dependencies are listed in the requirements.txt file. You can locate this file at the root of your project directory. +When you're ready to publish, make sure that all your publicly available dependencies are listed in the *requirements.txt* file. You can locate this file at the root of your project directory. -Project files and folders that are excluded from publishing, including the virtual environment folder, you can find them in the root directory of your project. +You can find the project files and folders that are excluded from publishing, including the virtual environment folder, in the root directory of your project. There are three build actions supported for publishing your Python project to Azure: remote build, local build, and builds using custom dependencies. -You can also use Azure Pipelines to build your dependencies and publish using continuous delivery (CD). To learn more, see [Continuous delivery with Azure Pipelines](functions-how-to-azure-devops.md). +You can also use Azure Pipelines to build your dependencies and publish by using continuous delivery (CD). 
To learn more, see [Continuous delivery with Azure Pipelines](functions-how-to-azure-devops.md). ### Remote build -When you use remote build, dependencies restored on the server and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url). +When you use remote build, dependencies are restored on the server, and native dependencies match the production environment. This results in a smaller deployment package to upload. Use remote build when you're developing Python apps on Windows. If your project has custom dependencies, you can [use remote build with extra index URL](#remote-build-with-extra-index-url). -Dependencies are obtained remotely based on the contents of the requirements.txt file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, the Azure Functions Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure. +Dependencies are obtained remotely based on the contents of the *requirements.txt* file. [Remote build](functions-deployment-technologies.md#remote-build) is the recommended build method. By default, Core Tools requests a remote build when you use the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish your Python project to Azure. ```bash func azure functionapp publish <APP_NAME> The [Azure Functions Extension for Visual Studio Code](./create-first-function-v ### Local build -Dependencies are obtained locally based on the contents of the requirements.txt file.
You can prevent doing a remote build by using the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish with a local build. +Dependencies are obtained locally based on the contents of the *requirements.txt* file. You can prevent doing a remote build by using the following [`func azure functionapp publish`](functions-run-local.md#publish) command to publish with a local build: ```command func azure functionapp publish <APP_NAME> --build local func azure functionapp publish <APP_NAME> --build local Remember to replace `<APP_NAME>` with the name of your function app in Azure. -When you use the `--build local` option, project dependencies are read from the requirements.txt file, and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason, you can't get requirements.txt file by Core Tools, you must use the custom dependencies option for publishing. +When you use the `--build local` option, project dependencies are read from the *requirements.txt* file, and those dependent packages are downloaded and installed locally. Project files and dependencies are deployed from your local computer to Azure. This results in a larger deployment package being uploaded to Azure. If for some reason you can't get the *requirements.txt* file by using Core Tools, you must use the custom dependencies option for publishing. -We don't recommend using local builds when developing locally on Windows. +We don't recommend using local builds when you're developing locally on Windows. ### Custom dependencies -When your project has dependencies not found in the [Python Package Index](https://pypi.org/), there are two ways to build the project. The build method depends on how you build the project. 
+When your project has dependencies that aren't found in the [Python Package Index](https://pypi.org/), there are two ways to build the project. The build method that you use depends on how you build the project.
#### Remote build with extra index URL
-When your packages are available from an accessible custom package index, use a remote build. Before publishing, make sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` using the `--extra-index-url` option. To learn more, see the [Python pip install documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
+When your packages are available from an accessible custom package index, use a remote build. Before you publish, be sure to [create an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named `PIP_EXTRA_INDEX_URL`. The value for this setting is the URL of your custom package index. Using this setting tells the remote build to run `pip install` by using the `--extra-index-url` option. To learn more, see the [Python `pip install` documentation](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
-You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in Python documentation.
+You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in the Python documentation.
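As an illustrative sketch (not part of the article), the `PIP_EXTRA_INDEX_URL` app setting could be created with the Azure CLI before you publish; the app name, resource group, and index URL below are placeholders:

```shell
# Sketch: create the PIP_EXTRA_INDEX_URL app setting with the Azure CLI.
# <APP_NAME>, <RESOURCE_GROUP>, and the index URL are placeholders for your own values.
az functionapp config appsettings set \
  --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --settings PIP_EXTRA_INDEX_URL=https://example.com/simple
```

With this setting in place, the remote build passes your index to `pip install` through the `--extra-index-url` option.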
#### Install local packages -If your project uses packages not publicly available to our tools, you can make them available to your app by putting them in the \_\_app\_\_/.python_packages directory. Before publishing, run the following command to install the dependencies locally: +If your project uses packages that aren't publicly available to our tools, you can make them available to your app by putting them in the *\_\_app\_\_/.python_packages* directory. Before you publish, run the following command to install the dependencies locally: ```command pip install --target="<PROJECT_DIR>/.python_packages/lib/site-packages" -r requirements.txt ``` -When using custom dependencies, you should use the `--no-build` publishing option, since you've already installed the dependencies into the project folder. +When you're using custom dependencies, you should use the `--no-build` publishing option, because you've already installed the dependencies into the project folder. ```command func azure functionapp publish <APP_NAME> --no-build Remember to replace `<APP_NAME>` with the name of your function app in Azure. ## Unit testing -Functions written in Python can be tested like other Python code using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your `requirements.txt` file as described in the [package management](#package-management) section above. +Functions that are written in Python can be tested like other Python code by using standard testing frameworks. For most bindings, it's possible to create a mock input object by creating an instance of an appropriate class from the `azure.functions` package. 
Since the [`azure.functions`](https://pypi.org/project/azure-functions/) package isn't immediately available, be sure to install it via your *requirements.txt* file as described in the [package management](#package-management) section above. -Take *my_second_function* as an example, following is a mock test of an HTTP triggered function: +With *my_second_function* as an example, the following is a mock test of an HTTP-triggered function: ::: zone pivot="python-mode-configuration" -First we need to create *<project_root>/my_second_function/function.json* file and define this function as an http trigger. ++First, create a *<project_root>/my_second_function/function.json* file, and then define this function as an HTTP trigger. ```json { First we need to create *<project_root>/my_second_function/function.json* file a } ``` -Now, we can implement the *my_second_function* and the *shared_code.my_second_helper_function*. +Next, you can implement `my_second_function` and `shared_code.my_second_helper_function`. ```python # <project_root>/my_second_function/__init__.py import logging # Use absolute import to resolve shared_code modules from shared_code import my_second_helper_function -# Define an http trigger which accepts ?value=<int> query parameter +# Define an HTTP trigger that accepts the ?value=<int> query parameter # Double the value and return the result in HttpResponse def main(req: func.HttpRequest) -> func.HttpResponse: logging.info('Executing my_second_function.') def double(value: int) -> int: return value * 2 ``` -We can start writing test cases for our http trigger. +You can start writing test cases for your HTTP trigger. ```python # <project_root>/tests/test_my_second_function.py class TestFunction(unittest.TestCase): ) ``` -Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result. 
+Inside your *.venv* Python virtual environment folder, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result. + ::: zone-end+ ::: zone pivot="python-mode-decorators" -First we need to create *<project_root>/function_app.py* file and implement *my_second_function* function as http trigger and the *shared_code.my_second_helper_function*. +First, create the *<project_root>/function_app.py* file and implement the `my_second_function` function as the HTTP trigger and `shared_code.my_second_helper_function`. ```python # <project_root>/function_app.py from shared_code import my_second_helper_function app = func.FunctionApp() -# Define http trigger which accepts ?value=<int> query parameter +# Define the HTTP trigger that accepts the ?value=<int> query parameter # Double the value and return the result in HttpResponse @app.function_name(name="my_second_function") @app.route(route="hello") def double(value: int) -> int: return value * 2 ``` -We can start writing test cases for our http trigger. +You can start writing test cases for your HTTP trigger. ```python # <project_root>/tests/test_my_second_function.py class TestFunction(unittest.TestCase): ) ``` -Inside your `.venv` Python virtual environment, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result. +Inside your *.venv* Python virtual environment folder, install your favorite Python test framework, such as `pip install pytest`. Then run `pytest tests` to check the test result. ::: zone-end ## Temporary files -The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is `/tmp`. Your application can use this directory to store temporary files generated and used by your functions during execution. +The `tempfile.gettempdir()` method returns a temporary folder, which on Linux is */tmp*. 
Your application can use this directory to store temporary files that are generated and used by your functions when they're running. > [!IMPORTANT] > Files written to the temporary directory aren't guaranteed to persist across invocations. During scale out, temporary files aren't shared between instances. -The following example creates a named temporary file in the temporary directory (`/tmp`): +The following example creates a named temporary file in the temporary directory (*/tmp*): ```python import logging from os import listdir filesDirListInTemp = listdir(tempFilePath) ``` -We recommend that you maintain your tests in a folder separate from the project folder. This action keeps you from deploying test code with your app. +We recommend that you maintain your tests in a folder that's separate from the project folder. This action keeps you from deploying test code with your app. ## Preinstalled libraries -There are a few libraries that come with the Python Functions runtime. +A few libraries come with the Python functions runtime. -### Python Standard Library +### The Python standard library -The Python Standard Library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, like file I/O. On Windows systems, these libraries are installed with Python. On the Unix-based systems, they're provided by package collections. +The Python standard library contains a list of built-in Python modules that are shipped with each Python distribution. Most of these libraries help you access system functionality, such as file input/output (I/O). On Windows systems, these libraries are installed with Python. On Unix-based systems, they're provided by package collections. 
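The temporary-file behavior described earlier can be sketched as a self-contained example; the helper name `write_temp` is illustrative, not from the article:

```python
import os
import tempfile

def write_temp(data: bytes) -> str:
    # tempfile.gettempdir() resolves to /tmp on Linux-based function hosts.
    tmp_dir = tempfile.gettempdir()
    # delete=False keeps the named file on disk after the handle closes.
    with tempfile.NamedTemporaryFile(dir=tmp_dir, suffix=".txt", delete=False) as f:
        f.write(data)
        return f.name

# Files written here aren't guaranteed to persist across invocations.
path = write_temp(b"scratch data")
```

Because the directory isn't shared between instances during scale-out, treat anything written there as scratch space for a single invocation.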
-To view the full details of the list of these libraries, see the links below: +To view the library for your Python version, go to: -* [Python 3.6 Standard Library](https://docs.python.org/3.6/library/) -* [Python 3.7 Standard Library](https://docs.python.org/3.7/library/) -* [Python 3.8 Standard Library](https://docs.python.org/3.8/library/) -* [Python 3.9 Standard Library](https://docs.python.org/3.9/library/) +* [Python 3.6 standard library](https://docs.python.org/3.6/library/) +* [Python 3.7 standard library](https://docs.python.org/3.7/library/) +* [Python 3.8 standard library](https://docs.python.org/3.8/library/) +* [Python 3.9 standard library](https://docs.python.org/3.9/library/) ### Azure Functions Python worker dependencies -The Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they may not be available to your code when running outside of Azure Functions. You can find a detailed list of dependencies in the **install\_requires** section in the [setup.py](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file. +The Azure Functions Python worker requires a specific set of libraries. You can also use these libraries in your functions, but they aren't a part of the Python standard. If your functions rely on any of these libraries, they might be unavailable to your code when it's running outside of Azure Functions. You'll find a detailed list of dependencies in the "install\_requires" section of the [*setup.py*](https://github.com/Azure/azure-functions-python-worker/blob/dev/setup.py#L282) file. > [!NOTE]-> If your function app's requirements.txt contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by Azure Functions platform, and we regularly update it with new features and bug fixes. 
Manually installing an old version of worker in requirements.txt may cause unexpected issues.
+> If your function app's *requirements.txt* file contains an `azure-functions-worker` entry, remove it. The functions worker is automatically managed by the Azure Functions platform, and we regularly update it with new features and bug fixes. Manually installing an old version of the worker in the *requirements.txt* file might cause unexpected issues.
> [!NOTE]-> If your package contains certain libraries that may collide with worker's dependencies (e.g. protobuf, tensorflow, grpcio), please configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring worker's dependencies. This feature is in preview.
+> If your package contains certain libraries that might collide with the worker's dependencies (for example, protobuf, tensorflow, or grpcio), configure [`PYTHON_ISOLATE_WORKER_DEPENDENCIES`](functions-app-settings.md#python_isolate_worker_dependencies-preview) to `1` in app settings to prevent your application from referring to the worker's dependencies. This feature is in preview.
-### Azure Functions Python library
+### The Azure Functions Python library
-Every Python worker update includes a new version of [Azure Functions Python library (azure.functions)](https://github.com/Azure/azure-functions-python-library). This approach makes it easier to continuously update your Python function apps, because each update is backwards-compatible.
For a list of releases of this library, go to [azure-functions PyPi](https://pypi.org/project/azure-functions/#history). -The runtime library version is fixed by Azure, and it can't be overridden by requirements.txt. The `azure-functions` entry in requirements.txt is only for linting and customer awareness. +The runtime library version is fixed by Azure, and it can't be overridden by *requirements.txt*. The `azure-functions` entry in *requirements.txt* is only for linting and customer awareness. -Use the following code to track the actual version of the Python Functions library in your runtime: +Use the following code to track the actual version of the Python functions library in your runtime: ```python getattr(azure.functions, '__version__', '< 1.2.1') getattr(azure.functions, '__version__', '< 1.2.1') ### Runtime system libraries -For a list of preinstalled system libraries in Python worker Docker images, see the links below: +For a list of preinstalled system libraries in Python worker Docker images, see the following: | Functions runtime | Debian version | Python versions | |||| For a list of preinstalled system libraries in Python worker Docker images, see The Python worker process that runs in Azure Functions lets you integrate third-party libraries into your function app. These extension libraries act as middleware that can inject specific operations during the lifecycle of your function's execution. -Extensions are imported in your function code much like a standard Python library module. Extensions are executed based on the following scopes: +Extensions are imported in your function code much like a standard Python library module. Extensions are run based on the following scopes: | Scope | Description | | | |-| **Application-level** | When imported into any function trigger, the extension applies to every function execution in the app. | -| **Function-level** | Execution is limited to only the specific function trigger into which it's imported. 
| +| Application-level | When imported into any function trigger, the extension applies to every function execution in the app. | +| Function-level | Execution is limited to only the specific function trigger into which it's imported. | -Review the information for a given extension to learn more about the scope in which the extension runs. +Review the information for each extension to learn more about the scope in which the extension runs. -Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function execution lifecycle. To learn more, see [Creating extensions](#creating-extensions). +Extensions implement a Python worker extension interface. This action lets the Python worker process call into the extension code during the function's execution lifecycle. To learn more, see [Create extensions](#creating-extensions). ### Using extensions -You can use a Python worker extension library in your Python functions by following these basic steps: +You can use a Python worker extension library in your Python functions by doing the following: -1. Add the extension package in the requirements.txt file for your project. +1. Add the extension package in the *requirements.txt* file for your project. 1. Install the library into your app.-1. Add the application setting `PYTHON_ENABLE_WORKER_EXTENSIONS`: - + Locally: add `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [local.settings.json file](functions-develop-local.md#local-settings-file). - + Azure: add `PYTHON_ENABLE_WORKER_EXTENSIONS=1` to your [app settings](functions-how-to-use-azure-function-app-settings.md#settings). +1. Add the following application settings: + + Locally: Enter `"PYTHON_ENABLE_WORKER_EXTENSIONS": "1"` in the `Values` section of your [*local.settings.json* file](functions-develop-local.md#local-settings-file). 
+ + Azure: Enter `PYTHON_ENABLE_WORKER_EXTENSIONS=1` in your [app settings](functions-how-to-use-azure-function-app-settings.md#settings).
1. Import the extension module into your function trigger.
-1. Configure the extension instance, if needed. Configuration requirements should be called-out in the extension's documentation.
+1. Configure the extension instance, if needed. Configuration requirements should be called out in the extension's documentation.
> [!IMPORTANT]-> Third-party Python worker extension libraries are not supported or warrantied by Microsoft. You must make sure that any extensions you use in your function app is trustworthy, and you bear the full risk of using a malicious or poorly written extension.
+> Third-party Python worker extension libraries aren't supported or warrantied by Microsoft. You must make sure that any extensions that you use in your function app are trustworthy, and you bear the full risk of using a malicious or poorly written extension.
-Third-parties should provide specific documentation on how to install and consume their specific extension in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
+Third parties should provide specific documentation on how to install and consume their extensions in your function app. For a basic example of how to consume an extension, see [Consuming your extension](develop-python-worker-extensions.md#consume-your-extension-locally).
Here are examples of using extensions in a function app, by scope:
def main(req, context):
### Creating extensions
-Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer design, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution.
These extensions can be published either to the PyPI registry or to GitHub repositories. +Extensions are created by third-party library developers who have created functionality that can be integrated into Azure Functions. An extension developer designs, implements, and releases Python packages that contain custom logic designed specifically to be run in the context of function execution. These extensions can be published either to the PyPI registry or to GitHub repositories. To learn how to create, package, publish, and consume a Python worker extension package, see [Develop Python worker extensions for Azure Functions](develop-python-worker-extensions.md). #### Application-level extensions -An extension inherited from [`AppExtensionBase`](https://github.com/Azure/azure-functions-python-library/blob/dev/azure/functions/extension/app_extension_base.py) runs in an _application_ scope. +An extension that's inherited from [`AppExtensionBase`](https://github.com/Azure/azure-functions-python-library/blob/dev/azure/functions/extension/app_extension_base.py) runs in an _application_ scope. `AppExtensionBase` exposes the following abstract class methods for you to implement: | Method | Description | | | | | **`init`** | Called after the extension is imported. |-| **`configure`** | Called from function code when needed to configure the extension. | -| **`post_function_load_app_level`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to local file in this directory fails. | +| **`configure`** | Called from function code when it's needed to configure the extension. | +| **`post_function_load_app_level`** | Called right after the function is loaded. The function name and function directory are passed to the extension. 
Keep in mind that the function directory is read-only, and any attempt to write to a local file in this directory fails. | | **`pre_invocation_app_level`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |-| **`post_invocation_app_level`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. | +| **`post_invocation_app_level`** | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. | #### Function-level extensions An extension that inherits from [FuncExtensionBase](https://github.com/Azure/azu | Method | Description | | | |-| **`__init__`** | This method is the constructor of the extension. It's called when an extension instance is initialized in a specific function. When implementing this abstract method, you may want to accept a `filename` parameter and pass it to the parent's method `super().__init__(filename)` for proper extension registration. | -| **`post_function_load`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to local file in this directory fails. | +| **`__init__`** | The constructor of the extension. It's called when an extension instance is initialized in a specific function. 
When you're implementing this abstract method, you might want to accept a `filename` parameter and pass it to the parent's method `super().__init__(filename)` for proper extension registration. | +| **`post_function_load`** | Called right after the function is loaded. The function name and function directory are passed to the extension. Keep in mind that the function directory is read-only, and any attempt to write to a local file in this directory fails. | | **`pre_invocation`** | Called right before the function is triggered. The function context and function invocation arguments are passed to the extension. You can usually pass other attributes in the context object for the function code to consume. |-| **`post_invocation`** | Called right after the function execution completes. The function context, function invocation arguments, and the invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. | +| **`post_invocation`** | Called right after the function execution finishes. The function context, function invocation arguments, and invocation return object are passed to the extension. This implementation is a good place to validate whether execution of the lifecycle hooks succeeded. | ## Cross-origin resource sharing [!INCLUDE [functions-cors](../../includes/functions-cors.md)] -CORS is fully supported for Python function apps. +Cross-origin resource sharing (CORS) is fully supported for Python function apps. ## Async By default, a host instance for Python can process only one function invocation ## <a name="shared-memory"></a>Shared memory (preview) -To improve throughput, Functions let your out-of-process Python language worker share memory with the Functions host process. 
When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB. +To improve throughput, Azure Functions lets your out-of-process Python language worker share memory with the Functions host process. When your function app is hitting bottlenecks, you can enable shared memory by adding an application setting named [FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED](functions-app-settings.md#functions_worker_shared_memory_data_transfer_enabled) with a value of `1`. With shared memory enabled, you can then use the [DOCKER_SHM_SIZE](functions-app-settings.md#docker_shm_size) setting to set the shared memory to something like `268435456`, which is equivalent to 256 MB. -For example, you might enable shared memory to reduce bottlenecks when using Blob storage bindings to transfer payloads larger than 1 MB. +For example, you might enable shared memory to reduce bottlenecks when you're using Blob Storage bindings to transfer payloads larger than 1 MB. -This functionality is available only for function apps running in Premium and Dedicated (App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory). +This functionality is available only for function apps that are running in Premium and Dedicated (Azure App Service) plans. To learn more, see [Shared memory](https://github.com/Azure/azure-functions-python-worker/wiki/Shared-Memory). 
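As a hedged sketch (not from the article), the two app settings described above could be applied together with the Azure CLI; the app name and resource group are placeholders:

```shell
# Sketch: enable shared memory data transfer and set its size to 256 MB.
# <APP_NAME> and <RESOURCE_GROUP> are placeholders for your own values.
az functionapp config appsettings set \
  --name <APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --settings FUNCTIONS_WORKER_SHARED_MEMORY_DATA_TRANSFER_ENABLED=1 DOCKER_SHM_SIZE=268435456
```

Remember that this applies only to Premium and Dedicated (App Service) plans, as noted above.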
## Known issues and FAQ -The following is a list of troubleshooting guides for common issues: +Here are two troubleshooting guides for common issues: * [ModuleNotFoundError and ImportError](recover-python-functions.md#troubleshoot-modulenotfounderror)-* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc). +* [Can't import 'cygrpc'](recover-python-functions.md#troubleshoot-cannot-import-cygrpc) -Following is a list of troubleshooting guides for known issues with the v2 programming model: +Here are two troubleshooting guides for known issues with the v2 programming model: * [Couldn't load file or assembly](recover-python-functions.md#troubleshoot-could-not-load-file-or-assembly)-* [Unable to resolve the Azure Storage connection named Storage](recover-python-functions.md#troubleshoot-unable-to-resolve-the-azure-storage-connection). +* [Unable to resolve the Azure Storage connection named Storage](recover-python-functions.md#troubleshoot-unable-to-resolve-the-azure-storage-connection) -All known issues and feature requests are tracked using [GitHub issues](https://github.com/Azure/azure-functions-python-worker/issues) list. If you run into a problem and can't find the issue in GitHub, open a new issue and include a detailed description of the problem. +All known issues and feature requests are tracked in a [GitHub issues list](https://github.com/Azure/azure-functions-python-worker/issues). If you run into a problem and can't find the issue in GitHub, open a new issue, and include a detailed description of the problem. 
## Next steps For more information, see the following resources: * [Azure Functions package API documentation](/python/api/azure-functions/azure.functions) * [Best practices for Azure Functions](functions-best-practices.md) * [Azure Functions triggers and bindings](functions-triggers-bindings.md)-* [Blob storage bindings](functions-bindings-storage-blob.md) -* [HTTP and Webhook bindings](functions-bindings-http-webhook.md) -* [Queue storage bindings](functions-bindings-storage-queue.md) -* [Timer trigger](functions-bindings-timer.md) +* [Blob Storage bindings](functions-bindings-storage-blob.md) +* [HTTP and webhook bindings](functions-bindings-http-webhook.md) +* [Queue Storage bindings](functions-bindings-storage-queue.md) +* [Timer triggers](functions-bindings-timer.md) -[Having issues? Let us know.](https://aka.ms/python-functions-ref-survey) +[Having issues with using Python? Tell us what's going on.](https://aka.ms/python-functions-ref-survey) [HttpRequest]: /python/api/azure-functions/azure.functions.httprequest |
azure-functions | Recover Python Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md | Title: Troubleshoot Python function apps in Azure Functions description: Learn how to troubleshoot Python functions. Previously updated : 10/25/2022 Last updated : 11/21/2022 ms.devlang: python zone_pivot_groups: python-mode-functions zone_pivot_groups: python-mode-functions # Troubleshoot Python errors in Azure Functions -This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose your desired model from the selector at the top of the article. The v2 model is currently in preview. For more information on Python programming models, see the [Python developer guide](./functions-reference-python.md). +This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose the model you want to use from the selector at the top of the article. The v2 model is currently in preview. For more information on Python programming models, see the [Python developer guide](./functions-reference-python.md). -The following is a list of troubleshooting sections for common issues in Python functions: +Here are the troubleshooting sections for common issues in Python functions: ::: zone pivot="python-mode-configuration" * [ModuleNotFoundError and ImportError](#troubleshoot-modulenotfounderror) General troubleshooting guides for Python Functions include: This section helps you troubleshoot module-related errors in your Python function app. These errors typically result in the following Azure Functions error message: -> `Exception: ModuleNotFoundError: No module named 'module_name'.` +"Exception: ModuleNotFoundError: No module named 'module_name'." 
This error occurs when a Python function app fails to load a Python module. The root cause for this error is one of the following issues: * [The package can't be found](#the-package-cant-be-found)-* [The package isn't resolved with proper Linux wheel](#the-package-isnt-resolved-with-proper-linux-wheel) +* [The package isn't resolved with proper Linux wheel](#the-package-isnt-resolved-with-the-proper-linux-wheel) * [The package is incompatible with the Python interpreter version](#the-package-is-incompatible-with-the-python-interpreter-version) * [The package conflicts with other packages](#the-package-conflicts-with-other-packages)-* [The package only supports Windows or macOS platforms](#the-package-only-supports-windows-or-macos-platforms) +* [The package supports only Windows and macOS platforms](#the-package-supports-only-windows-and-macos-platforms) ### View project files To identify the actual cause of your issue, you need to get the Python project files that run on your function app. If you don't have the project files on your local computer, you can get them in one of the following ways: -* If the function app has `WEBSITE_RUN_FROM_PACKAGE` app setting and its value is a URL, download the file by copy and paste the URL into your browser. -* If the function app has `WEBSITE_RUN_FROM_PACKAGE` and it's set to `1`, navigate to `https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages` and download the file from the latest `href` URL. -* If the function app doesn't have the app setting mentioned above, navigate to `https://<app-name>.scm.azurewebsites.net/api/settings` and find the URL under `SCM_RUN_FROM_PACKAGE`. Download the file by copy and paste the URL into your browser. -* If none of these suggestions resolve the issue, navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and view the content under `/home/site/wwwroot`. 
+* If the function app has a `WEBSITE_RUN_FROM_PACKAGE` app setting and its value is a URL, download the file by copying and pasting the URL into your browser. +* If the function app has `WEBSITE_RUN_FROM_PACKAGE` and it's set to `1`, go to `https://<app-name>.scm.azurewebsites.net/api/vfs/data/SitePackages` and download the file from the latest `href` URL. +* If the function app doesn't have either of the preceding app settings, go to `https://<app-name>.scm.azurewebsites.net/api/settings` and find the URL under `SCM_RUN_FROM_PACKAGE`. Download the file by copying and pasting the URL into your browser. +* If none of these suggestions resolves the issue, go to `https://<app-name>.scm.azurewebsites.net/DebugConsole` and view the content under `/home/site/wwwroot`. The rest of this article helps you troubleshoot potential causes of this error by inspecting your function app's content, identifying the root cause, and resolving the specific issue. This section details potential root causes of module-related errors. After you f #### The package can't be found -Browse to `.python_packages/lib/python3.6/site-packages/<package-name>` or `.python_packages/lib/site-packages/<package-name>`. If the file path doesn't exist, this missing path is likely the root cause. +Go to `.python_packages/lib/python3.6/site-packages/<package-name>` or `.python_packages/lib/site-packages/<package-name>`. If the file path doesn't exist, this missing path is likely the root cause. -Using third-party or outdated tools during deployment may cause this issue. +Using third-party or outdated tools during deployment might cause this issue. -See [Enable remote build](#enable-remote-build) or [Build native dependencies](#build-native-dependencies) for mitigation. +To mitigate this issue, see [Enable remote build](#enable-remote-build) or [Build native dependencies](#build-native-dependencies). 
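The "package can't be found" path check described above can also be scripted. The following is a minimal sketch (the helper name and the use of both `site-packages` locations are illustrative, not part of the platform):

```python
from pathlib import Path
from typing import Optional

# The two site-packages layouts that a remote build can produce, as described above.
_SITE_PACKAGE_DIRS = (
    ".python_packages/lib/python3.6/site-packages",
    ".python_packages/lib/site-packages",
)

def find_package(project_root: str, package_name: str) -> Optional[Path]:
    """Return the deployed path of a package, or None if it can't be found."""
    for rel in _SITE_PACKAGE_DIRS:
        candidate = Path(project_root) / rel / package_name
        if candidate.exists():
            return candidate
    return None
```

If this returns `None` for a package listed in *requirements.txt*, the missing path is likely the root cause described in the next section.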
-#### The package isn't resolved with proper Linux wheel +#### The package isn't resolved with the proper Linux wheel -Go to `.python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info` or `.python_packages/lib/site-packages/<package-name>-<version>-dist-info`. Use your favorite text editor to open the **wheel** file and check the **Tag:** section. If the value of the tag doesn't contain **linux**, this could be the issue. +Go to `.python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info` or `.python_packages/lib/site-packages/<package-name>-<version>-dist-info`. Use your favorite text editor to open the *wheel* file and check the **Tag:** section. The issue might be that the tag value doesn't contain **linux**. -Python functions run only on Linux in Azure: Functions runtime v2.x runs on Debian Stretch and the v3.x runtime on Debian Buster. The artifact is expected to contain the correct Linux binaries. When you use the `--build local` flag in Core Tools, third-party, or outdated tools it may cause older binaries to be used. +Python functions run only on Linux in Azure. The Functions runtime v2.x runs on Debian Stretch, and the v3.x runtime runs on Debian Buster. The artifact is expected to contain the correct Linux binaries. When you use the `--build local` flag in Core Tools, or when you use third-party or outdated tools, older binaries might be used. -See [Enable remote build](#enable-remote-build) or [Build native dependencies](#build-native-dependencies) for mitigation. +To mitigate the issue, see [Enable remote build](#enable-remote-build) or [Build native dependencies](#build-native-dependencies). #### The package is incompatible with the Python interpreter version -Go to `.python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info` or `.python_packages/lib/site-packages/<package-name>-<version>-dist-info`. Using a text editor, open the METADATA file and check the **Classifiers:** section.
If the section doesn't contains `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`, this means the package version is either too old, or most likely, the package is already out of maintenance. +Go to `.python_packages/lib/python3.6/site-packages/<package-name>-<version>-dist-info` or `.python_packages/lib/site-packages/<package-name>-<version>-dist-info`. In your text editor, open the *METADATA* file and check the **Classifiers:** section. If the section doesn't contain `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`, the package version is either too old or, more likely, it's already out of maintenance. -You can check the Python version of your function app from the [Azure portal](https://portal.azure.com). Navigate to your function app, choose **Resource explorer**, and select **Go**. +You can check the Python version of your function app from the [Azure portal](https://portal.azure.com). Navigate to your function app's **Overview** resource page to find the runtime version. The runtime version supports Python versions as described in the [Azure Functions runtime versions overview](./functions-versions.md). --After the explorer loads, search for **LinuxFxVersion**, which shows the Python version. --See [Update your package to the latest version](#update-your-package-to-the-latest-version) or [Replace the package with equivalents](#replace-the-package-with-equivalents) for mitigation. +To mitigate the issue, see [Update your package to the latest version](#update-your-package-to-the-latest-version) or [Replace the package with equivalents](#replace-the-package-with-equivalents). #### The package conflicts with other packages -If you've verified that the package is resolved correctly with the proper Linux wheels, there may be a conflict with other packages. In certain packages, the PyPi documentations may clarify the incompatible modules. 
For example in [`azure 4.0.0`](https://pypi.org/project/azure/4.0.0/), there's a statement as follows: +If you've verified that the package is resolved correctly with the proper Linux wheels, there might be a conflict with other packages. In certain packages, the PyPI documentation might clarify the incompatible modules. For example, in [`azure 4.0.0`](https://pypi.org/project/azure/4.0.0/), you'll find the following statement: -<pre>This package isn't compatible with azure-storage. -If you installed azure-storage, or if you installed azure 1.x/2.x and didn't uninstall azure-storage, -you must uninstall azure-storage first.</pre> +"This package isn't compatible with azure-storage. +If you installed azure-storage, or if you installed azure 1.x/2.x and didn't uninstall azure-storage, you must uninstall azure-storage first." You can find the documentation for your package version in `https://pypi.org/project/<package-name>/<package-version>`. -See [Update your package to the latest version](#update-your-package-to-the-latest-version) or [Replace the package with equivalents](#replace-the-package-with-equivalents) for mitigation. +To mitigate the issue, see [Update your package to the latest version](#update-your-package-to-the-latest-version) or [Replace the package with equivalents](#replace-the-package-with-equivalents). -#### The package only supports Windows or macOS platforms +#### The package supports only Windows and macOS platforms -Open the `requirements.txt` with a text editor and check the package in `https://pypi.org/project/<package-name>`. Some packages only run on Windows or macOS platforms. For example, pywin32 only runs on Windows. +Open the `requirements.txt` with a text editor and check the package in `https://pypi.org/project/<package-name>`. Some packages run only on Windows and macOS platforms. For example, pywin32 runs on Windows only. -The `Module Not Found` error may not occur when you're using Windows or macOS for local development.
However, the package fails to import on Azure Functions, which uses Linux at runtime. This issue is likely to be caused by using `pip freeze` to export virtual environment into requirements.txt from your Windows or macOS machine during project initialization. +The `Module Not Found` error might not occur when you're using Windows or macOS for local development. However, the package fails to import on Azure Functions, which uses Linux at runtime. This issue is likely to be caused by using `pip freeze` to export the virtual environment into *requirements.txt* from your Windows or macOS machine during project initialization. -See [Replace the package with equivalents](#replace-the-package-with-equivalents) or [Handcraft requirements.txt](#handcraft-requirementstxt) for mitigation. +To mitigate the issue, see [Replace the package with equivalents](#replace-the-package-with-equivalents) or [Handcraft requirements.txt](#handcraft-requirementstxt). ### Mitigate ModuleNotFoundError -The following are potential mitigations for module-related issues. Use the [diagnoses above](#diagnose-modulenotfounderror) to determine which of these mitigations to try. +The following are potential mitigations for module-related issues. Use the [previously mentioned diagnoses](#diagnose-modulenotfounderror) to determine which of these mitigations to try. #### Enable remote build -Make sure that remote build is enabled. The way that you do this depends on your deployment method. +Make sure that remote build is enabled. How you enable it depends on your deployment method. # [Visual Studio Code](#tab/vscode)-Make sure that the latest version of the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) is installed. Verify that `.vscode/settings.json` exists and it contains the setting `"azureFunctions.scmDoBuildDuringDeployment": true`.
If not, please create this file with the `azureFunctions.scmDoBuildDuringDeployment` setting enabled and redeploy the project. +Make sure that the latest version of the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) is installed. Verify that the *.vscode/settings.json* file exists and it contains the setting `"azureFunctions.scmDoBuildDuringDeployment": true`. If it doesn't, create the file with the `azureFunctions.scmDoBuildDuringDeployment` setting enabled, and then redeploy the project. # [Azure Functions Core Tools](#tab/coretools) If you're manually publishing your package into the `https://<app-name>.scm.azur #### Build native dependencies -Make sure that the latest version of both **docker** and [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools/releases) is installed. Go to your local function project folder, and use `func azure functionapp publish <app-name> --build-native-deps` for deployment. +Make sure that the latest versions of both Docker and [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools/releases) are installed. Go to your local function project folder, and use `func azure functionapp publish <app-name> --build-native-deps` for deployment. #### Update your package to the latest version -Browse the latest package version in `https://pypi.org/project/<package-name>` and check the **Classifiers:** section. The package should be `OS Independent`, or compatible with `POSIX` or `POSIX :: Linux` in **Operating System**. Also, the Programming Language should contain: `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`. +In the latest package version of `https://pypi.org/project/<package-name>`, check the **Classifiers:** section. The package should be `OS Independent`, or compatible with `POSIX` or `POSIX :: Linux` in **Operating System**. 
Also, the programming language should contain: `Python :: 3`, `Python :: 3.6`, `Python :: 3.7`, `Python :: 3.8`, or `Python :: 3.9`. -If these are correct, you can update the package to the latest version by changing the line `<package-name>~=<latest-version>` in requirements.txt. +If these package items are correct, you can update the package to the latest version by changing the line `<package-name>~=<latest-version>` in *requirements.txt*. #### Handcraft requirements.txt -Some developers use `pip freeze > requirements.txt` to generate the list of Python packages for their developing environments. Although this convenience should work in most cases, there can be issues in cross-platform deployment scenarios, such as developing functions locally on Windows or macOS, but publishing to a function app, which runs on Linux. In this scenario, `pip freeze` can introduce unexpected operating system-specific dependencies or dependencies for your local development environment. These dependencies can break the Python function app when running on Linux. +Some developers use `pip freeze > requirements.txt` to generate the list of Python packages for their developing environments. Although this convenience should work in most cases, there can be issues in cross-platform deployment scenarios, such as developing functions locally on Windows or macOS, but publishing to a function app, which runs on Linux. In this scenario, `pip freeze` can introduce unexpected operating system-specific dependencies or dependencies for your local development environment. These dependencies can break the Python function app when it's running on Linux. -The best practice is to check the import statement from each .py file in your project source code and only check-in those modules in requirements.txt file. This guarantees the resolution of packages can be handled properly on different operating systems. 
+The best practice is to check the import statement from each *.py* file in your project source code and then check in only the modules in the *requirements.txt* file. This practice guarantees that the resolution of packages can be handled properly on different operating systems. #### Replace the package with equivalents -First, we should take a look into the latest version of the package in `https://pypi.org/project/<package-name>`. Usually, this package has their own GitHub page, go to the **Issues** section on GitHub and search if your issue has been fixed. If so, update the package to the latest version. +First, take a look into the latest version of the package in `https://pypi.org/project/<package-name>`. This package usually has its own GitHub page. Go to the **Issues** section on GitHub and search to see whether your issue has been fixed. If it has been fixed, update the package to the latest version. -Sometimes, the package may have been integrated into [Python Standard Library](https://docs.python.org/3/library/) (such as `pathlib`). If so, since we provide a certain Python distribution in Azure Functions (Python 3.6, Python 3.7, Python 3.8, and Python 3.9), the package in your requirements.txt should be removed. +Sometimes, the package might have been integrated into [Python Standard Library](https://docs.python.org/3/library/) (such as `pathlib`). If so, because we provide a certain Python distribution in Azure Functions (Python 3.6, Python 3.7, Python 3.8, and Python 3.9), the package in your *requirements.txt* file should be removed. -However, if you're facing an issue that it hasn't been fixed and you're on a deadline. I encourage you to do some research and find a similar package for your project. Usually, the Python community will provide you with a wide variety of similar libraries that you can use. 
+However, if you're finding that the issue hasn't been fixed, and you're on a deadline, we encourage you to do some research to find a similar package for your project. Usually, the Python community will provide you with a wide variety of similar libraries that you can use. ## Troubleshoot cannot import 'cygrpc' -This section helps you troubleshoot 'cygrpc' related errors in your Python function app. These errors typically result in the following Azure Functions error message: +This section helps you troubleshoot 'cygrpc'-related errors in your Python function app. These errors typically result in the following Azure Functions error message: -> `Cannot import name 'cygrpc' from 'grpc._cython'` +"Cannot import name 'cygrpc' from 'grpc._cython'" This error occurs when a Python function app fails to start with a proper Python interpreter. The root cause for this error is one of the following issues: - [The Python interpreter mismatches OS architecture](#the-python-interpreter-mismatches-os-architecture) - [The Python interpreter isn't supported by Azure Functions Python Worker](#the-python-interpreter-isnt-supported-by-azure-functions-python-worker) -### Diagnose 'cygrpc' reference error +### Diagnose the 'cygrpc' reference error #### The Python interpreter mismatches OS architecture -This is most likely caused by a 32-bit Python interpreter is installed on your 64-bit operating system. +This mismatch is most likely caused by a 32-bit Python interpreter being installed on your 64-bit operating system. -If you're running on an x64 operating system, please ensure your Python 3.6, 3.7, 3.8, or 3.9 interpreter is also on 64-bit version. +If you're running on an x64 operating system, ensure that your Python version 3.6, 3.7, 3.8, or 3.9 interpreter is also on a 64-bit version. 
-You can check your Python interpreter bitness by the following commands: +You can check your Python interpreter bitness by running the following commands: -On Windows in PowerShell: `py -c 'import platform; print(platform.architecture()[0])'` +On Windows in PowerShell, run `py -c 'import platform; print(platform.architecture()[0])'`. -On Unix-like shell: `python3 -c 'import platform; print(platform.architecture()[0])'` +On a Unix-like shell, run `python3 -c 'import platform; print(platform.architecture()[0])'`. -If there's a mismatch between Python interpreter bitness and operating system architecture, please download a proper Python interpreter from [Python Software Foundation](https://www.python.org/downloads). +If there's a mismatch between Python interpreter bitness and the operating system architecture, download a proper Python interpreter from [Python Software Foundation](https://www.python.org/downloads). #### The Python interpreter isn't supported by Azure Functions Python Worker -The Azure Functions Python Worker only supports Python 3.6, 3.7, 3.8, and 3.9. -Check if your Python interpreter matches our expected version by `py --version` in Windows or `python3 --version` in Unix-like systems. Ensure the return result is Python 3.6.x, Python 3.7.x, Python 3.8.x, or Python 3.9.x. +The Azure Functions Python Worker supports only Python versions 3.6, 3.7, 3.8, and 3.9. ++Check to see whether your Python interpreter matches your expected version by `py --version` in Windows or `python3 --version` in Unix-like systems. Ensure that the return result is `Python 3.6.x`, `Python 3.7.x`, `Python 3.8.x`, or `Python 3.9.x`. -If your Python interpreter version doesn't meet the requirements for Functions, instead download the Python 3.6, 3.7, 3.8, or 3.9 interpreter from [Python Software Foundation](https://www.python.org/downloads). 
+If your Python interpreter version doesn't meet the requirements for Azure Functions, instead download the Python version 3.6, 3.7, 3.8, or 3.9 interpreter from [Python Software Foundation](https://www.python.org/downloads). -## Troubleshoot Python Exited With Code 137 +## Troubleshoot "python exited with code 137" Code 137 errors are typically caused by out-of-memory issues in your Python function app. As a result, you get the following Azure Functions error message: -> `Microsoft.Azure.WebJobs.Script.Workers.WorkerProcessExitException : python exited with code 137` +"Microsoft.Azure.WebJobs.Script.Workers.WorkerProcessExitException : python exited with code 137" -This error occurs when a Python function app is forced to terminate by the operating system with a SIGKILL signal. This signal usually indicates an out-of-memory error in your Python process. The Azure Functions platform has a [service limitation](functions-scale.md#service-limits) which will terminate any function apps that exceeded this limit. +This error occurs when a Python function app is forced to terminate by the operating system with a SIGKILL signal. This signal usually indicates an out-of-memory error in your Python process. The Azure Functions platform has a [service limitation](functions-scale.md#service-limits) that terminates any function apps that exceed this limit. Visit the tutorial section in [memory profiling on Python functions](python-memory-profiler-reference.md#memory-profiling-process) to analyze the memory bottleneck in your function app. -## Troubleshoot Python Exited With Code 139 +## Troubleshoot "python exited with code 139" This section helps you troubleshoot segmentation fault errors in your Python function app. 
These errors typically result in the following Azure Functions error message: -> `Microsoft.Azure.WebJobs.Script.Workers.WorkerProcessExitException : python exited with code 139` +"Microsoft.Azure.WebJobs.Script.Workers.WorkerProcessExitException : python exited with code 139" -This error occurs when a Python function app is forced to terminate by the operating system with a SIGSEGV signal. This signal indicates a memory segmentation violation, which can be caused by unexpectedly reading from or writing into a restricted memory region. In the following sections, we provide a list of common root causes. +This error occurs when a Python function app is forced to terminate by the operating system with a SIGSEGV signal. This signal indicates a memory segmentation violation, which can result from unexpectedly reading from or writing into a restricted memory region. In the following sections, we provide a list of common root causes. ### A regression from third-party packages -In your function app's requirements.txt, an unpinned package will be upgraded to the latest version in every Azure Functions deployment. Vendors of these packages may introduce regressions in their latest release. To recover from this issue, try commenting out the import statements, disabling the package references, or pinning the package to a previous version in requirements.txt. +In your function app's *requirements.txt* file, an unpinned package will be upgraded to the latest version in every Azure Functions deployment. Vendors of these packages might introduce regressions in their latest release. To recover from this issue, try commenting out the import statements, disabling the package references, or pinning the package to a previous version in *requirements.txt*.
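As a concrete sketch of the pinning mitigation, a *requirements.txt* entry might look like the following (the package name and versions are illustrative):

```
# Pin to the last known-good release instead of floating to the latest
some-package==1.4.2
# Or allow only patch updates within a tested minor version
other-package~=2.1.0
```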
-### Unpickling from a malformed .pkl file +### Unpickling from a malformed \.pkl file -If your function app is using the Python pickel library to load Python object from .pkl file, it's possible that the .pkl contains malformed bytes string, or invalid address reference in it. To recover from this issue, try commenting out the pickle.load() function. +If your function app is using the Python pickle library to load a Python object from a *\.pkl* file, it's possible that the file contains a malformed bytes string or an invalid address reference. To recover from this issue, try commenting out the `pickle.load()` function. ### Pyodbc connection collision -If your function app is using the popular ODBC database driver [pyodbc](https://github.com/mkleehammer/pyodbc), it is possible that multiple connections are opened within a single function app. To avoid this issue, please use the singleton pattern and ensure only one pyodbc connection is used across the function app. +If your function app is using the popular ODBC database driver [pyodbc](https://github.com/mkleehammer/pyodbc), it's possible that multiple connections are open within a single function app. To avoid this issue, use the singleton pattern, and ensure that only one pyodbc connection is used across the function app. ## Troubleshoot errors with Protocol Buffers -Version 4.x.x of the Protocol Buffers (protobuf) package introduces breaking changes. Because the Python worker process for Azure Functions relies on v3.x.x of this package, pinning your function app to use v4.x.x can break your app. At this time, you should also avoid using any libraries that themselves require protobuf v4.x.x. +Version 4.x.x of the Protocol Buffers (Protobuf) package introduces breaking changes. Because the Python worker process for Azure Functions relies on v3.x.x of this package, pinning your function app to use v4.x.x can break your app. At this time, you should also avoid using any libraries that require Protobuf v4.x.x. 
Example error logs: ```bash Example error logs: [Information] _descriptor.FieldDescriptor( [Information] File "/home/site/wwwroot/.python_packages/lib/site-packages/google/protobuf/descriptor.py", line 560, in __new__ [Information] _message.Message._CheckCalledFromGeneratedFile()- [Error] TypeError: Descriptors cannot not be created directly. + [Error] TypeError: Descriptors cannot be created directly. [Information] If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0. [Information] If you cannot immediately regenerate your protos, some other possible workarounds are: [Information] 1. Downgrade the protobuf package to 3.20.x or lower. [Information] 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower). [Information] More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates ```-There are two ways to mitigate this issue. -+ Set the application setting [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to a value of `1`. -+ Pin protobuf to a non-4.x.x. version, as in the following example: +You can mitigate this issue in either of two ways: ++* Set the application setting [PYTHON_ISOLATE_WORKER_DEPENDENCIES](functions-app-settings.md#python_isolate_worker_dependencies-preview) to a value of `1`. ++* Pin Protobuf to a non-4.x.x version, as in the following example: + ``` protobuf >= 3.19.3, == 3.* ``` There are two ways to mitigate this issue. ::: zone pivot="python-mode-decorators" ## Multiple Python workers not supported -Multiple Python workers aren't supported in the v2 programming model at this time. This means that enabling intelligent concurrency by setting `FUNCTIONS_WORKER_PROCESS_COUNT` greater than 1 isn't supported for functions developed using the V2 model.
+The multiple Python workers setting isn't supported in the v2 programming model at this time. More specifically, enabling intelligent concurrency by setting `FUNCTIONS_WORKER_PROCESS_COUNT` to greater than `1` isn't supported for functions that are developed by using the v2 model. -## Troubleshoot could not load file or assembly +## Troubleshoot "could not load file or assembly" -If you're facing this error, it may be the case that you are using the V2 programming model. This error is due to a known issue that will be resolved in an upcoming release. +If you receive this error, it might be because you're using the v2 programming model. This error results from a known issue that will be resolved in an upcoming release. -This specific error may ready: +This specific error might read: -> `DurableTask.Netherite.AzureFunctions: Could not load file or assembly 'Microsoft.Azure.WebJobs.Extensions.DurableTask, Version=2.0.0.0, Culture=neutral, PublicKeyToken=014045d636e89289'.` -> `The system cannot find the file specified.` +"DurableTask.Netherite.AzureFunctions: Could not load file or assembly 'Microsoft.Azure.WebJobs.Extensions.DurableTask, Version=2.0.0.0, Culture=neutral, PublicKeyToken=014045d636e89289'. +The system cannot find the file specified." -The reason this error may be occurring is because of an issue with how the extension bundle was cached. To detect if this is the issue, you can run the command with `--verbose` to see more details. +This error might occur because of an issue with how the extension bundle was cached. To troubleshoot this issue, you can run the following command with `--verbose` to see more details: > `func host start --verbose` -Upon running the command, if you notice that `Loading startup extension <>` is not followed by `Loaded extension <>` for each extension, it is likely that you are facing a caching issue. --To resolve this issue, --1. 
Find the path of `.azure-functions-core-tools` by running -```console -func GetExtensionBundlePath -``` +After you run the command, if you notice that `Loading startup extension <>` isn't followed by `Loaded extension <>` for each extension, it's likely that you have a caching issue. -2. Delete the directory `.azure-functions-core-tools` +To resolve this issue: -# [bash](#tab/bash) +1. Find the *\.azure-functions-core-tools* path by running: -```bash -rm -r <insert path>/.azure-functions-core-tools -``` + ```console + func GetExtensionBundlePath + ``` -# [PowerShell](#tab/powershell) +1. Delete the *\.azure-functions-core-tools* directory. -```powershell -Remove-Item <insert path>/.azure-functions-core-tools -``` + # [bash](#tab/bash) + + ```bash + rm -r <insert path>/.azure-functions-core-tools + ``` + + # [PowerShell](#tab/powershell) + + ```powershell + Remove-Item <insert path>/.azure-functions-core-tools + ``` + + # [Cmd](#tab/cmd) + + ```cmd + rmdir <insert path>/.azure-functions-core-tools + ``` + + -# [Cmd](#tab/cmd) +## Troubleshoot "unable to resolve the Azure Storage connection" -```cmd -rmdir <insert path>/.azure-functions-core-tools -``` +You might see this error in your local output as the following message: --## Troubleshoot unable to resolve the Azure Storage connection +"Microsoft.Azure.WebJobs.Extensions.DurableTask: Unable to resolve the Azure Storage connection named 'Storage'. +Value cannot be null. (Parameter 'provider')" -You may see this error in your local output as the following message: +This error is a result of how extensions are loaded from the bundle locally. To resolve this error, take one of the following actions: -> `Microsoft.Azure.WebJobs.Extensions.DurableTask: Unable to resolve the Azure Storage connection named 'Storage'.` -> `Value cannot be null. (Parameter 'provider')` +* Use a storage emulator such as [Azurite](../storage/common/storage-use-azurite.md). 
This option is a good one when you aren't planning to use a storage account in your function application. -This error is a result of how extensions are loaded from the bundle locally. To resolve this error, you can do one of the following: -* Use a storage emulator such as [Azurite](../storage/common/storage-use-azurite.md). This is a good option when you aren't planning to use a storage account in your function application. -* Create a storage account and add a connection string to the `AzureWebJobsStorage` environment variable in `localsettings.json`. Use this option when you are using a storage account trigger or binding with your application, or if you have an existing storage account. To get started, see [Create a storage account](../storage/common/storage-account-create.md). +* Create a storage account and add a connection string to the `AzureWebJobsStorage` environment variable in the *localsettings.json* file. Use this option when you're using a storage account trigger or binding with your application, or if you have an existing storage account. To get started, see [Create a storage account](../storage/common/storage-account-create.md). -## Issue with Deployment +## Issue with deployment -In the [Azure portal](https://portal.azure.com), navigate to **Settings** > **Configuration** and make sure that the `AzureWebJobsFeatureFlags` application setting has a value of `EnableWorkerIndexing`. If it is not found, add this setting to the function app. +In the [Azure portal](https://portal.azure.com), select **Settings** > **Configuration**, and then ensure that the `AzureWebJobsFeatureFlags` application setting has a value of `EnableWorkerIndexing`. If it's not found, add this setting to the function app. 
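The `AzureWebJobsFeatureFlags` check above can also be scripted. The following is a minimal sketch, not official guidance; the function app and resource group names are hypothetical placeholders, and the command only runs when the Azure CLI is installed:

```shell
# Hypothetical names -- replace with your own function app and resource group.
APP_NAME="my-func-app"
RESOURCE_GROUP="my-resource-group"

# Adds (or updates) the feature flag that enables v2 worker indexing.
SET_CMD="az functionapp config appsettings set --name $APP_NAME --resource-group $RESOURCE_GROUP --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing"

if command -v az >/dev/null 2>&1; then
  $SET_CMD || echo "Command failed; make sure you're signed in with 'az login'."
else
  echo "Azure CLI not found. Run this after installing it:"
  echo "$SET_CMD"
fi
```

If the setting already holds other flags, append `EnableWorkerIndexing` to the comma-separated list instead of overwriting the value.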
::: zone-end ## Next steps -If you're unable to resolve your issue, please report this to the Functions team: +If you're unable to resolve your issue, contact the Azure Functions team: > [!div class="nextstepaction"] > [Report an unresolved issue](https://github.com/Azure/azure-functions-python-worker/issues) |
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | We recommend that you enable automatic update of the agent by enabling the [Auto - Windows ```azurecli- az vm extension set -name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true + az vm extension set --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true ``` - Linux ```azurecli- az vm extension set -name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true + az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --vm-name <virtual-machine-name> --resource-group <resource-group-name> --enable-auto-upgrade true ``` ### Update on Azure Arc-enabled servers |
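As a follow-up to the `az vm extension set` commands above, you can read the extension back to confirm that automatic upgrade took effect. A sketch, assuming hypothetical VM and resource group names (substitute `AzureMonitorLinuxAgent` for Linux):

```shell
# Hypothetical names -- replace with your own VM and resource group.
VM_NAME="my-vm"
RESOURCE_GROUP="my-resource-group"

# Prints true when automatic upgrade is enabled on the agent extension.
SHOW_CMD="az vm extension show --name AzureMonitorWindowsAgent --vm-name $VM_NAME --resource-group $RESOURCE_GROUP --query enableAutomaticUpgrade --output tsv"

if command -v az >/dev/null 2>&1; then
  $SHOW_CMD || echo "Command failed; make sure you're signed in with 'az login'."
else
  echo "Azure CLI not found. Run this after installing it:"
  echo "$SHOW_CMD"
fi
```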
azure-monitor | Itsmc Secure Webhook Connections Servicenow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-servicenow.md | Ensure that you've met the following prerequisites: * Azure AD is registered. * You have the supported version of The ServiceNow Event Management - ITOM (version New York or later).-* [Application](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ac4c9c57dbb1d090561b186c1396191a/1.3.1?referer=%2Fstore%2Fsearch%3Flistingtype%3Dallintegrations%25253Bancillary_app%25253Bcertified_apps%25253Bcontent%25253Bindustry_solution%25253Boem%25253Butility%26q%3DEvent%2520Management%2520Connectors&sl=sh) installed on ServiceNow instance. +* [Application](https://store.servicenow.com/sn_appstore_store.do#!/store/application/ac4c9c57dbb1d090561b186c1396191a/2.2.0) installed on ServiceNow instance. ## Configure the ServiceNow connection |
azure-monitor | Automate Custom Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-custom-reports.md | Title: Automate custom reports with Application Insights data -description: Automate custom daily/weekly/monthly reports with Azure Monitor Application Insights data +description: Automate custom daily, weekly, and monthly reports with Azure Monitor Application Insights data. Last updated 05/20/2019 -Periodical reports help keep a team informed on how their business critical services are doing. Developers, DevOps/SRE teams, and their managers can be productive with automated reports reliably delivering insights without requiring everyone to sign in the portal. Such reports can also help identify gradual increases in latencies, load or failure rates that may not trigger any alert rules. +Periodic reports help keep a team informed on how their business-critical services are doing. Developers, DevOps/SRE teams, and their managers can be productive with automated reports that reliably deliver insights without requiring everyone to sign in to the portal. Such reports can also help identify gradual increases in latencies, load, or failure rates that might not trigger any alert rules. Each enterprise has its unique reporting needs, such as: -* Specific percentile aggregations of metrics, or custom metrics in a report. -* Have different reports for daily, weekly, and monthly roll-ups of data for different audiences. -* Segmentation by custom attributes like region, or environment. -* Group some AI resources together in a single report, even if they may be in different subscriptions or resource groups etc. -* Separate reports containing sensitive metrics sent to selective audience. -* Reports to stakeholders who may not have access to the portal resources. +* Specific percentile aggregations of metrics or custom metrics in a report. 
+* Different reports for daily, weekly, and monthly roll-ups of data for different audiences. +* Segmentation by custom attributes like region or environment. +* AI resources grouped together in a single report, even if they might be in different subscriptions or resource groups. +* Separate reports that contain sensitive metrics sent to selective audiences. +* Reports to stakeholders who might not have access to the portal resources. -> [!NOTE] -> The weekly Application Insights digest email did not allow any customization, and will be discontinued in favor of the custom options listed below. The last weekly digest email will be sent on June 11, 2018. Please configure one of the following options to get similar custom reports (use the query suggested below). +> [!NOTE] +> The weekly Application Insights digest email didn't allow any customization and will be discontinued in favor of the custom options listed here. The last weekly digest email was sent on June 11, 2018. Configure one of the following options to get similar custom reports. Use the query that's suggested in this article. -## To automate custom report emails +## Automate custom report emails You can [programmatically query Application Insights](https://dev.applicationinsights.io/) data to generate custom reports on a schedule. The following options can help you get started quickly: -* [Automate reports with Power Automate](../logs/logicapp-flow-connector.md) -* [Automate reports with Logic Apps](automate-with-logic-apps.md) -* Use the "Application Insights scheduled digest" [Azure function](../../azure-functions/functions-get-started.md) template in the Monitoring scenario. This function uses SendGrid to deliver the email. +* [Automate reports with Power Automate](../logs/logicapp-flow-connector.md). +* [Automate reports with Azure Logic Apps](automate-with-logic-apps.md). 
+* Use the **Application Insights scheduled digest** [Azure Functions](../../azure-functions/functions-get-started.md) template in the **Monitoring** scenario. This function uses SendGrid to deliver the email. - ![Azure function template](./media/automate-custom-reports/azure-function-template.png) + ![Screenshot that shows an Azure Functions template.](./media/automate-custom-reports/azure-function-template.png) ## Sample query for a weekly digest email-The following query shows joining across multiple datasets for a weekly digest email like report. Customize it as required and use it with any of the options listed above to automate a weekly report. +The following query shows joining across multiple datasets for a weekly digest email-like report. Customize it as required and use it with any of the options previously listed to automate a weekly report. ```AIQL let period=7d; availabilityResults ## Application Insights scheduled digest report -1. Create an Azure Function App.(Application Insights _On_ is required only if you want to monitor your new Function App with Application Insights) +1. Create an Azure Functions app. Application Insights **On** is required only if you want to monitor your new Azure Functions app with Application Insights. - Visit the Azure Functions documentation to learn how to [create a function app](../../azure-functions/functions-get-started.md) + See the Azure Functions documentation to learn how to [create a function app](../../azure-functions/functions-get-started.md). -2. Once your new Function App has completed deployment, select **Go to resource**. +1. After your new Azure Functions app has finished deployment, select **Go to resource**. -3. Select **New function**. +1. Select **New function**. - ![Create a new Function screenshot](./media/automate-custom-reports/new-function.png) + ![Screenshot that shows Create a new function.](./media/automate-custom-reports/new-function.png) -4. 
Select the **_Application Insights scheduled digest template_**. +1. Select the **Application Insights scheduled digest** template. > [!NOTE]- > By default, function apps are created with runtime version 3.x. You must [target Azure Functions runtime version](../../azure-functions/set-runtime-version.md) **1.x** to use the Application Insights scheduled digest template. Go to Configuration > Function Runtime settings to change the runtime version. ![runtime screenshot](./media/automate-custom-reports/change-runtime-v.png) + > By default, function apps are created with runtime version 3.x. You must [target Azure Functions runtime version](../../azure-functions/set-runtime-version.md) **1.x** to use the Application Insights scheduled digest template. Go to **Configuration** > **Function runtime settings** to change the runtime version. ![Screenshot that shows the Function runtime settings tab.](./media/automate-custom-reports/change-runtime-v.png) - ![New Function Application Insights Template screenshot](./media/automate-custom-reports/function-app-04.png) + ![Screenshot that shows New Function Application Insights Template.](./media/automate-custom-reports/function-app-04.png) -5. Enter an appropriate recipient e-mail address for your report and select **Create**. +1. Enter an appropriate recipient email address for your report and select **Create**. - ![Function Settings screenshot](./media/automate-custom-reports/scheduled-digest.png) + ![Screenshot that shows Function Settings.](./media/automate-custom-reports/scheduled-digest.png) -6. Select your **Function App** > **Platform features** > **Configuration**. +1. Select **Function Apps** > **Platform features** > **Configuration**. - ![Azure Function Application settings screenshot](./media/automate-custom-reports/config.png) + ![Screenshot that shows Azure Function Application settings.](./media/automate-custom-reports/config.png) -7. 
Create three new application settings with appropriate corresponding values ``AI_APP_ID``, ``AI_APP_KEY``, and ``SendGridAPI``. Select **Save**. +1. Create three new application settings with the appropriate corresponding values ``AI_APP_ID``, ``AI_APP_KEY``, and ``SendGridAPI``. Select **Save**. - ![Function integration interface screenshot](./media/automate-custom-reports/app-settings.png) + ![Screenshot that shows Function integration interface.](./media/automate-custom-reports/app-settings.png) - (The AI_ values can be found under API Access for the Application Insights Resource you want to report on. If you don't have an Application Insights API Key, there is the option to **Create API Key**.) + You can find the AI_ values under **API Access** for the Application Insights resource you want to report on. If you don't have an Application Insights API key, use the **Create API Key** option. * AI_APP_ID = Application ID * AI_APP_KEY = API Key * SendGridAPI = SendGrid API Key > [!NOTE]- > If you don't have a SendGrid account you can create one. SendGrid's documentation for Azure Functions is [here](../../azure-functions/functions-bindings-sendgrid.md). If just want a minimal explanation of how to setup SendGrid and generate an API key one is provided at the end of this article. + > If you don't have a SendGrid account, you can create one. For more information, see the [Azure Functions SendGrid bindings](../../azure-functions/functions-bindings-sendgrid.md) documentation. If you want a brief explanation of how to set up SendGrid and generate an API key, one is provided at the end of this article. -8. Select **Integrate** and under Outputs click **SendGrid ($return)**. +1. Select **Integrate**. Under **Outputs**, select **SendGrid ($return)**. - ![Output screenshot](./media/automate-custom-reports/integrate.png) + ![Screenshot that shows Outputs.](./media/automate-custom-reports/integrate.png) -9. 
Under the **SendGridAPI Key App Setting**, select your newly created App Setting for **SendGridAPI**. +1. Under the **SendGridAPI Key App Setting**, select your newly created app setting **SendGridAPI**. - ![Run Function App screenshot](./media/automate-custom-reports/sendgrid-output.png) + ![Screenshot that shows SendGridAPI.](./media/automate-custom-reports/sendgrid-output.png) -10. Run and test your Function App. +1. Run and test your function app. - ![Test Screenshot](./media/automate-custom-reports/function-app-11.png) + ![Screenshot that shows Test.](./media/automate-custom-reports/function-app-11.png) -11. Check your e-mail to confirm that the message was sent/received successfully. +1. Check your email to confirm that the message was sent or received successfully. - ![E-mail subject line screenshot](./media/automate-custom-reports/function-app-12.png) + ![Screenshot that shows the E-mail subject line.](./media/automate-custom-reports/function-app-12.png) ## SendGrid with Azure These steps only apply if you don't already have a SendGrid account configured. -1. From the Azure portal, select **Create a resource** > search for **SendGrid Email Delivery** > click **Create** > fill out the SendGrid specific create instructions. +1. In the Azure portal, select **Create a resource**. Search for **SendGrid Email Delivery** and select **Create**. Fill out the SendGrid instructions. - ![Create SendGrid Resource Screenshot](./media/automate-custom-reports/sendgrid.png) + ![Screenshot that shows the SendGrid Create button.](./media/automate-custom-reports/sendgrid.png) -2. Once created under SendGrid Accounts select **Manage**. +1. After the resource is created, under **SendGrid Accounts**, select **Manage**. - ![Settings API Key Screenshot](./media/automate-custom-reports/sendgrid-manage.png) + ![Screenshot that shows the Manage button.](./media/automate-custom-reports/sendgrid-manage.png) -3. 
This action opens SendGrid's site. Select **Settings** > **API Keys**. - ![Create and View API Key App Screenshot](./media/automate-custom-reports/function-app-15.png) + ![Screenshot that shows API Keys under Settings.](./media/automate-custom-reports/function-app-15.png) -4. Create an API Key > choose **Create & View**. (Review SendGrid's documentation on restricted access to determine what level of permissions is appropriate for your API Key. Full Access is selected here for example purposes only.) +1. To create an API key, select **Create & View**. Review SendGrid's documentation on restricted access to determine what level of permissions is appropriate for your API key. **Full Access** is selected here only as an example. - ![Full access screenshot](./media/automate-custom-reports/function-app-16.png) + ![Screenshot that shows Full Access.](./media/automate-custom-reports/function-app-16.png) -5. Copy the entire key, this value is what you need in your Function App settings as the value for SendGridAPI +1. Copy the entire key. This value is what you need in your function app settings as the value for `SendGridAPI`. - ![Copy API key screenshot](./media/automate-custom-reports/function-app-17.png) + ![Screenshot that shows the API Key Created pane.](./media/automate-custom-reports/function-app-17.png) ## Next steps * Learn more about creating [Analytics queries](../logs/get-started-queries.md). * Learn more about [programmatically querying Application Insights data](https://dev.applicationinsights.io/) * Learn more about [Logic Apps](../../logic-apps/logic-apps-overview.md).-* Learn more about [Microsoft Power Automate](https://ms.flow.microsoft.com). +* Learn more about [Power Automate](https://ms.flow.microsoft.com). |
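All of the automation options above ultimately call the Application Insights REST query API. The following curl sketch shows the shape of such a request; the app ID and API key are placeholders (take the real values from **API Access** on your resource), and the query is a trimmed-down version of the digest query:

```shell
# Placeholder credentials -- copy the real values from API Access on your resource.
AI_APP_ID="00000000-0000-0000-0000-000000000000"
AI_APP_KEY="your-api-key"

# A minimal Kusto query: availability results over the last 7 days.
QUERY="availabilityResults | where timestamp > ago(7d) | summarize count()"

# Percent-encode the characters that can't appear raw in a URL query string.
ENCODED=$(printf '%s' "$QUERY" | sed -e 's/ /%20/g' -e 's/|/%7C/g' -e 's/>/%3E/g')
URL="https://api.applicationinsights.io/v1/apps/$AI_APP_ID/query?query=$ENCODED"

if command -v curl >/dev/null 2>&1 && [ "$AI_APP_KEY" != "your-api-key" ]; then
  curl -s -H "x-api-key: $AI_APP_KEY" "$URL"
else
  echo "Fill in AI_APP_ID and AI_APP_KEY, then request: $URL"
fi
```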
azure-monitor | Availability Private Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md | -If you want to use availability tests on internal servers that run behind a firewall, there are two possible solutions: public availability test enablement and disconnected/no ingress scenarios. +If you want to use availability tests on internal servers that run behind a firewall, you have two possible solutions: public availability test enablement and disconnected/no ingress scenarios. ## Public availability test enablement > [!NOTE]-> If you donΓÇÖt want to allow any ingress to your environment, then use the method in the [Disconnected or no ingress scenarios](#disconnected-or-no-ingress-scenarios) section. +> If you don't want to allow any ingress to your environment, use the method in the [Disconnected or no ingress scenarios](#disconnected-or-no-ingress-scenarios) section. - Ensure you have a public DNS record for your internal website. The test will fail if the DNS cannot be resolved. [Create a custom domain name for internal application.](../../cloud-services/cloud-services-custom-domain-name-portal.md#add-an-a-record-for-your-custom-domain) + Ensure you have a public DNS record for your internal website. The test will fail if the DNS can't be resolved. For more information, see [Create a custom domain name for internal application](../../cloud-services/cloud-services-custom-domain-name-portal.md#add-an-a-record-for-your-custom-domain). Configure your firewall to permit incoming requests from our service. -- [Service tags](../../virtual-network/service-tags-overview.md) are a simple way to enable Azure services without having to authorize individual IPs or maintain an up-to-date list. Service tags can be used across Azure Firewall and Network Security Groups to allow our service access. 
**ApplicationInsightsAvailability** is the Service tag dedicated to our ping testing service, covering both URL ping tests and Standard availability tests.- 1. If you are using [Azure Network Security Groups](../../virtual-network/network-security-groups-overview.md), go to your Network Security group resource and select **inbound security rules** under *Settings* then select **Add**. +- [Service tags](../../virtual-network/service-tags-overview.md) are a simple way to enable Azure services without having to authorize individual IPs or maintain an up-to-date list. Service tags can be used across Azure Firewall and network security groups to allow our service access. The service tag **ApplicationInsightsAvailability** is dedicated to our ping testing service, which covers both URL ping tests and Standard availability tests. + 1. If you're using [Azure network security groups](../../virtual-network/network-security-groups-overview.md), go to your network security group resource and under **Settings**, select **inbound security rules**. Then select **Add**. - :::image type="content" source="media/availability-private-test/add.png" alt-text="Screenshot of the inbound security rules tab in the network security group resource."::: + :::image type="content" source="media/availability-private-test/add.png" alt-text="Screenshot that shows the inbound security rules tab in the network security group resource."::: - 1. Next, select *Service Tag* as the source and *ApplicationInsightsAvailability* as the source service tag. Use open ports 80 (http) and 443 (https) for incoming traffic from the service tag. + 1. Next, select **Service Tag** as the source and select **ApplicationInsightsAvailability** as the source service tag. Use open ports 80 (http) and 443 (https) for incoming traffic from the service tag. 
- :::image type="content" source="media/availability-private-test/service-tag.png" alt-text="Screenshot of the Add inbound security rules tab with a source of service tag."::: + :::image type="content" source="media/availability-private-test/service-tag.png" alt-text="Screenshot that shows the Add inbound security rules tab with a source of service tag."::: -- If your endpoints are hosted outside of Azure or Service Tags aren't available for your scenario, then you'll need to individually allowlist the [IP addresses of our web test agents](ip-addresses.md). You can query the IP ranges directly from PowerShell, Azure CLI, or a REST call using the [Service tag API](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) You can also download a [JSON file](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) to get a list of current service tags with IP addresses details.- 1. In your Network Security group resource and select **inbound security rules** under *Settings*, then select **Add**. - 1. Next, select *IP Addresses* as your source then add your IP addresses in a comma delimited list in source IP address/CIRD ranges. +- If your endpoints are hosted outside of Azure or service tags aren't available for your scenario, you'll need to individually allowlist the [IP addresses of our web test agents](ip-addresses.md). You can query the IP ranges directly from PowerShell, the Azure CLI, or a REST call by using the [Service Tag API](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api). You can also download a [JSON file](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) to get a list of current service tags with IP address details. + 1. In your network security group resource, under **Settings**, select **inbound security rules**. Then select **Add**. + 1. Next, select **IP Addresses** as your source. 
Then add your IP addresses in a comma-delimited list in source IP address/CIDR ranges. - :::image type="content" source="media/availability-private-test/ip-addresses.png" alt-text="Screenshot of the Add inbound security rules tab with a source of IP addresses."::: + :::image type="content" source="media/availability-private-test/ip-addresses.png" alt-text="Screenshot that shows the Add inbound security rules tab with a source of IP addresses."::: ## Disconnected or no ingress scenarios -Your test server will need to have outgoing access to the Application Insights ingestion endpoint, which is a significantly lower security risk than the alternative of permitting incoming requests. The results will appear in the availability web tests tab with a simplified experience from what is available for test created via the Azure portal. Custom availability test will also appear as availability results in Analytics, Search, and Metrics. +To use this method, your test server must have outgoing access to the Application Insights ingestion endpoint. This is a much lower security risk than the alternative of permitting incoming requests. The results will appear in the availability web tests tab with a simplified experience from what is available for tests created via the Azure portal. Custom availability tests will also appear as availability results in **Analytics**, **Search**, and **Metrics**. -1. 
Connect your Application Insights resource and disconnected environment by using [Azure Private Link](../logs/private-link-security.md). +1. Write custom code to periodically test your internal server or endpoints. You can run the code by using [Azure Functions](availability-azure-functions.md) or a background process on a test server behind your firewall. Your test process can send its results to Application Insights by using the `TrackAvailability()` API in the core SDK package. ## Troubleshooting -Dedicated [troubleshooting article](troubleshoot-availability.md). +For more information, see the [troubleshooting article](troubleshoot-availability.md). ## Next steps * [Azure Private Link](../logs/private-link-security.md)-* [Availability Alerts](availability-alerts.md) +* [Availability alerts](availability-alerts.md) * [URL tests](monitor-web-app-availability.md)-* [Create and run custom availability tests using Azure Functions](availability-azure-functions.md) +* [Create and run custom availability tests by using Azure Functions](availability-azure-functions.md) |
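The service tag and IP range lookups described in this article can also be done from the Azure CLI. A sketch, assuming you're signed in; the location value is only an example, and the JMESPath filter picks out the ApplicationInsightsAvailability tag:

```shell
# Example location -- any Azure region name works here.
LOCATION="eastus"

# Lists the address prefixes behind the ApplicationInsightsAvailability service tag.
TAG_CMD="az network list-service-tags --location $LOCATION --query \"values[?name=='ApplicationInsightsAvailability'].properties.addressPrefixes | [0]\" --output json"

if command -v az >/dev/null 2>&1; then
  eval "$TAG_CMD" || echo "Command failed; make sure you're signed in with 'az login'."
else
  echo "Azure CLI not found. Run this after installing it:"
  echo "$TAG_CMD"
fi
```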
azure-monitor | Java Standalone Arguments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md | See [Monitoring Azure Functions with Azure Monitor Application Insights](./monit Read the Spring Boot documentation [here](../app/java-in-process-agent.md). +## Third-party container images ++If you're using a third-party container image that you can't modify, mount the Application Insights Java agent JAR +into the container from outside, and set the following environment variable for the container: +`JAVA_TOOL_OPTIONS=-javaagent:/path/to/applicationinsights-agent.jar`. + ## Tomcat 8 (Linux) ### Tomcat installed via `apt-get` or `yum` |
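With Docker, the mount-and-environment-variable approach above might look like the following sketch. The image name and host path are hypothetical, so the command is printed rather than executed:

```shell
# Hypothetical host directory holding the downloaded agent JAR, and a hypothetical image.
AGENT_DIR="$HOME/appinsights"
IMAGE="example.azurecr.io/third-party-app:latest"

# Mount the agent read-only and let JAVA_TOOL_OPTIONS attach it at JVM startup,
# so the third-party image itself never has to change.
RUN_CMD="docker run --rm -v $AGENT_DIR:/agent:ro -e JAVA_TOOL_OPTIONS=-javaagent:/agent/applicationinsights-agent.jar $IMAGE"

echo "$RUN_CMD"
```

The same `JAVA_TOOL_OPTIONS` value works for other orchestrators (for example, as an `env` entry in a Kubernetes pod spec), as long as the JAR is mounted at the path the option references.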
azure-monitor | Mobile Center Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md | Title: Monitor mobile or universal Windows apps with Azure Monitor Application Insights -description: Provides instructions to quickly set up a mobile or universal Windows app for monitoring with Azure Monitor Application Insights and App Center + Title: Monitor mobile or Universal Windows Platform apps with Azure Monitor Application Insights +description: This tutorial provides instructions to quickly set up a mobile or Universal Windows Platform app for monitoring with Azure Monitor Application Insights and App Center. Last updated 11/15/2022 ms.devlang: java, swift This tutorial guides you through connecting your app's App Center instance to Application Insights. With Application Insights, you can query, segment, filter, and analyze your telemetry with more powerful tools than are available from the [Analytics](/mobile-center/analytics/) service of App Center. - In this tutorial, you learn how to: > [!div class="checklist"]-> * Connect an app's App Center instance to Application Insights -> * Modify your app to send custom telemetry to Application Insights -> * Query custom telemetry in Log Analytics -> * Analyze conversion, retention, and navigation patterns in your app +> * Connect an app's App Center instance to Application Insights. +> * Modify your app to send custom telemetry to Application Insights. +> * Query custom telemetry in Log Analytics. +> * Analyze conversion, retention, and navigation patterns in your app. ## Prerequisites To complete this tutorial, you need: - An Azure subscription.-- An iOS, Android, Xamarin, Universal Windows, or React Native app.- +- An iOS, Android, Xamarin, Universal Windows Platform (UWP), or React Native app. + If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. 
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] To begin, create an account and [sign up with App Center](https://appcenter.ms/s ## Onboard to App Center -Before you can use Application Insights with your mobile app, you need to onboard your app to [App Center](/mobile-center/). Application Insights does not receive telemetry from your mobile app directly. Instead, your app sends custom event telemetry to App Center. Then, App Center continuously exports copies of these custom events into Application Insights as the events are received. (This does not apply to the [Application Insights JS SDK](https://github.com/Microsoft/ApplicationInsights-JS) or the [React Native plugin](https://github.com/Microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-react-native) where telemetry is sent directly to Application Insights.) +Before you can use Application Insights with your mobile app, you need to onboard your app to [App Center](/mobile-center/). Application Insights doesn't receive telemetry from your mobile app directly. Instead, your app sends custom event telemetry to App Center. Then, App Center continuously exports copies of these custom events into Application Insights as the events are received. (This description doesn't apply to the [Application Insights JS SDK](https://github.com/Microsoft/ApplicationInsights-JS) or the [React Native plug-in](https://github.com/Microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-react-native) where telemetry is sent directly to Application Insights.) To onboard your app, follow the App Center quickstart for each platform your app supports. Create separate App Center instances for each platform: -* [iOS](/mobile-center/sdk/getting-started/ios). -* [Android](/mobile-center/sdk/getting-started/android). -* [Xamarin](/mobile-center/sdk/getting-started/xamarin). 
-* [Universal Windows](/mobile-center/sdk/getting-started/uwp). -* [React Native](/mobile-center/sdk/getting-started/react-native). +* [iOS](/mobile-center/sdk/getting-started/ios) +* [Android](/mobile-center/sdk/getting-started/android) +* [Xamarin](/mobile-center/sdk/getting-started/xamarin) +* [Universal Windows](/mobile-center/sdk/getting-started/uwp) +* [React Native](/mobile-center/sdk/getting-started/react-native) ## Track events in your app -After your app is onboarded to App Center, it needs to be modified to send custom event telemetry using the App Center SDK. +After your app is onboarded to App Center, it needs to be modified to send custom event telemetry by using the App Center SDK. -To send custom events from iOS apps, use the `trackEvent` or `trackEvent:withProperties` methods in the App Center SDK. [Learn more about tracking events from iOS apps.](/mobile-center/sdk/analytics/ios) +To send custom events from iOS apps, use the `trackEvent` or `trackEvent:withProperties` methods in the App Center SDK. Learn more about [tracking events from iOS apps](/mobile-center/sdk/analytics/ios). ```Swift MSAnalytics.trackEvent("Video clicked") ``` -To send custom events from Android apps, use the `trackEvent` method in the App Center SDK. [Learn more about tracking events from Android apps.](/mobile-center/sdk/analytics/android) +To send custom events from Android apps, use the `trackEvent` method in the App Center SDK. Learn more about [tracking events from Android apps](/mobile-center/sdk/analytics/android). ```Java Analytics.trackEvent("Video clicked") Analytics.trackEvent("Video clicked") To send custom events from other app platforms, use the `trackEvent` methods in their App Center SDKs. -To make sure your custom events are being received, go to the **Events** tab under the **Analytics** section in App Center. It can take a couple minutes for events to show up from when they're sent from your app. 
+To make sure your custom events are being received, go to the **Events** tab under the **Analytics** section in App Center. It can take a couple minutes for events to show up after they're sent from your app. ## Create an Application Insights resource -Once your app is sending custom events and these events are being received by App Center, you need to create an App Center-type Application Insights resource in the Azure portal: +After your app sends custom events and these events are received by App Center, you need to create an App Center-type Application Insights resource in the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com/).-2. Select **Create a resource** > **Developer tools** > **Application Insights**. +1. Select **Create a resource** > **Developer tools** > **Application Insights**. > [!NOTE]- > If this is your first time creating an Application Insights resource you can learn more by visiting the [Create an Application Insights Resource](../app/create-new-resource.md) doc. + > If this is your first time creating an Application Insights resource, you can learn more by reading [Create an Application Insights resource](../app/create-new-resource.md). - A configuration box will appear. Use the table below to fill out the input fields. + A configuration box appears. Use the following table to fill out the input fields. | Settings | Value | Description | | - |:-|:--|- | **Name** | Some globally unique value, like "myApp-iOS" | Name that identifies the app you are monitoring | - | **Resource Group** | A new resource group, or an existing one from the menu | The resource group in which to create the new Application Insights resource | - | **Location** | A location from the menu | Choose a location near you, or near where your app is hosted | + | Name | Some globally unique value, like "myApp-iOS" | Name that identifies the app you're monitoring. 
| + | Resource group | A new resource group, or an existing one from the menu | The resource group in which to create the new Application Insights resource. | + | Location | A location from the menu | Choose a location near you, or near where your app is hosted. | -3. Click **Create**. +1. Select **Create**. -If your app supports multiple platforms (iOS, Android, etc.), it's best to create separate Application Insights resources, one for each platform. +If your app supports multiple platforms like iOS and Android, it's best to create separate Application Insights resources. Create one for each platform. ## Export to Application Insights -In your new Application Insights resource on the **Overview** page. Copy the instrumentation key from your resource. +In your new Application Insights resource on the **Overview** page, copy the instrumentation key from your resource. In the [App Center](https://appcenter.ms/) instance for your app: -1. On the **Settings** page, click **Export**. -2. Choose **New Export**, pick **Application Insights**, then click **Customize**. -3. Paste your Application Insights instrumentation key into the box. -4. Consent to increasing the usage of the Azure subscription containing your Application Insights resource. Each Application Insights resource is free for the first 1 GB of data received per month. [Learn more about Application Insights pricing.](https://azure.microsoft.com/pricing/details/application-insights/) +1. On the **Settings** page, select **Export**. +1. Select **New Export** > **Application Insights** > **Customize**. +1. Paste your Application Insights instrumentation key into the box. +1. Consent to increasing the usage of the Azure subscription that contains your Application Insights resource. Each Application Insights resource is free for the first 1 GB of data received per month. Learn more about [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/). 
Remember to repeat this process for each platform your app supports. -Once [export](/mobile-center/analytics/export) is set up, each custom event received by App Center is copied into Application Insights. It can take several minutes for events to reach Application Insights, so if they don't show up immediately, wait a bit before diagnosing further. +After [export](/mobile-center/analytics/export) is set up, each custom event received by App Center is copied into Application Insights. It can take several minutes for events to reach Application Insights, so if they don't show up immediately, wait a few minutes before you diagnose further. To give you more data when you first connect, the most recent 48 hours of custom events in App Center are automatically exported to Application Insights. ## Start monitoring your app -Application Insights can query, segment, filter, and analyze the custom event telemetry from your apps, beyond the analytics tools App Center provides. +Application Insights can query, segment, filter, and analyze the custom event telemetry from your apps, beyond the analytics tools that App Center provides. ++### Query your custom event telemetry -1. **Query your custom event telemetry.** From the Application Insights **Overview** page, choose **Logs (Analytics)**. +1. On the Application Insights **Overview** page, select **Logs**. - The Application Insights Logs (Analytics) portal associated with your Application Insights resource will open. The Logs (Analytics) portal lets you directly query your data using the Log Analytics query language, so you can ask arbitrarily complex questions about your app and its users. - - Open a new tab in the Logs (Analytics) portal, then paste in the following query. It returns a count of how many distinct users have sent each custom event from your app in the last 24 hours, sorted by these distinct counts. + The Application Insights Logs portal associated with your Application Insights resource will open. 
The Logs portal lets you directly query your data by using the Log Analytics query language, so you can ask arbitrarily complex questions about your app and its users. ++1. Open a new tab in the Logs portal and paste in the following query. It returns a count of how many distinct users have sent each custom event from your app in the last 24 hours, sorted by these distinct counts. ```AIQL customEvents Application Insights can query, segment, filter, and analyze the custom event te | order by dcount_user_Id desc ``` - ![Logs (Analytics) portal](./media/mobile-center-quickstart/analytics-portal-001.png) + ![Screenshot that shows the Logs portal.](./media/mobile-center-quickstart/analytics-portal-001.png) 1. Select the query by clicking anywhere on the query in the text editor.- 2. Then click **Go** to run the query. + 1. Then select **Run** to run the query. Learn more about [Application Insights Analytics](../logs/log-query-overview.md) and the [Log Analytics query language](/azure/data-explorer/kusto/query/). +### Segment and filter your custom event telemetry ++On the Application Insights **Overview** page, select **Users** in the table of contents. -2. **Segment and filter your custom event telemetry.** From the Application Insights **Overview** page, choose **Users** in the table of contents. + ![Screenshot that shows the Users tool icon.](./media/mobile-center-quickstart/users-icon-001.png) - ![Users tool icon](./media/mobile-center-quickstart/users-icon-001.png) + The Users tool shows how many users of your app clicked certain buttons, visited certain screens, or performed any other action that you're tracking as an event with the App Center SDK. If you've been looking for a way to segment and filter your App Center events, the Users tool is a great choice. - The Users tool shows how many users of your app clicked certain buttons, visited certain screens, or performed any other action that you are tracking as an event with the App Center SDK. 
If you've been looking for a way to segment and filter your App Center events, the Users tool is a great choice. + ![Screenshot that shows the Users tool.](./media/mobile-center-quickstart/users-001.png) - ![Users tool](./media/mobile-center-quickstart/users-001.png) + For example, segment your usage by geography by selecting **Country or region** in the **Split by** dropdown box. - For example, segment your usage by geography by choosing **Country or region** in the **Split by** dropdown menu. +### Analyze conversion, retention, and navigation patterns in your app -3. **Analyze conversion, retention, and navigation patterns in your app.** From the Application Insights **Overview** page, choose **User Flows** in the table of contents. +On the Application Insights **Overview** page, select **User Flows** in the table of contents. - ![User Flows tool](./media/mobile-center-quickstart/user-flows-001.png) + ![Screenshot that shows the User Flows tool.](./media/mobile-center-quickstart/user-flows-001.png) - The User Flows tool visualizes which events users send after some starting event. It's useful for getting an overall picture of how users navigate through your app. It can also reveal places where many users are churning from your app, or repeating the same actions over and over. + The User Flows tool visualizes which events users send after some starting event. It's useful for getting an overall picture of how users navigate through your app. It can also reveal places where many users are churning from your app or repeating the same actions over and over. In addition to User Flows, Application Insights has several other user behavior analytics tools to answer specific questions: - * **Funnels** for analyzing and monitoring conversion rates. - * **Retention** for analyzing how well your app retains users over time. - * **Workbooks** for combining visualizations and text into a shareable report. 
- * **Cohorts** for naming and saving specific groups of users or events so they can be easily referenced from other analytics tools. + * **Funnels**: Analyze and monitor conversion rates. + * **Retention**: Analyze how well your app retains users over time. + * **Workbooks**: Combine visualizations and text into a shareable report. + * **Cohorts**: Name and save specific groups of users or events so they can be easily referenced from other analytics tools. ## Clean up resources -If you do not want to continue using Application Insights with App Center, turn off export in App Center and delete the Application Insights resource. This will prevent you from being charged further by Application Insights for this resource. +If you don't want to continue using Application Insights with App Center, turn off export in App Center and delete the Application Insights resource. This step will prevent you from being charged further by Application Insights for this resource. To turn off export in App Center: -1. In App Center, go to **Settings** and choose **Export**. -2. Click the Application Insights export you want to delete, then click **Delete export** at the bottom and confirm. +1. In App Center, go to **Settings** and select **Export**. +1. Select the Application Insights export you want to delete. Then select **Delete export** at the bottom and confirm. To delete the Application Insights resource: -1. In the left-hand menu of the Azure portal, click **Resource groups** and then choose the resource group in which your Application Insights resource was created. -2. Open the Application Insights resource you want to delete. Then click **Delete** in the top menu of the resource and confirm. This will permanently delete the copy of the data that was exported to Application Insights. +1. On the left menu in the Azure portal, select **Resource groups**. Then choose the resource group in which your Application Insights resource was created. +1. 
Open the Application Insights resource you want to delete. Then select **Delete** on the top menu of the resource and confirm. This action permanently deletes the copy of the data that was exported to Application Insights. ## Next steps |
azure-monitor | Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md | Title: Automate Azure Application Insights with PowerShell | Microsoft Docs -description: Automate creating and managing resources, alerts, and availability tests in PowerShell using an Azure Resource Manager template. + Title: Automate Application Insights with PowerShell | Microsoft Docs +description: Automate creating and managing resources, alerts, and availability tests in PowerShell by using an Azure Resource Manager template. Last updated 05/02/2020 -# Manage Application Insights resources using PowerShell +# Manage Application Insights resources by using PowerShell [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] -This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources automatically by using Azure Resource Management. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./monitor-web-app-availability.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](../logs/cost-logs.md#application-insights-billing), and create other Azure resources. +This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources by using Azure Resource Manager. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./monitor-web-app-availability.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](../logs/cost-logs.md#application-insights-billing), and create other Azure resources. -The key to creating these resources is JSON templates for [Azure Resource Manager](../../azure-resource-manager/management/manage-resources-powershell.md).
The basic procedure is: download the JSON definitions of existing resources; parameterize certain values such as names; and then run the template whenever you want to create a new resource. You can package several resources together, to create them all in one go - for example, an app monitor with availability tests, alerts, and storage for continuous export. There are some subtleties to some of the parameterizations, which we'll explain here. +The key to creating these resources is JSON templates for [Resource Manager](../../azure-resource-manager/management/manage-resources-powershell.md). The basic procedure is: -## One-time setup -If you haven't used PowerShell with your Azure subscription before: +- Download the JSON definitions of existing resources. +- Parameterize certain values, such as names. +- Run the template whenever you want to create a new resource. ++You can package several resources together to create them all in one go. For example, you can create an app monitor with availability tests, alerts, and storage for continuous export. There are some subtleties to some of the parameterizations, which we'll explain here. -Install the Azure PowerShell module on the machine where you want to run the scripts: +## One-time setup +If you haven't used PowerShell with your Azure subscription before, install the Azure PowerShell module on the machine where you want to run the scripts: 1. Install [Microsoft Web Platform Installer (v5 or higher)](https://www.microsoft.com/web/downloads/platform.aspx).-2. Use it to install Microsoft Azure PowerShell. +1. Use it to install Azure PowerShell. -In addition to using Resource Manager templates, there is a rich set of [Application Insights PowerShell cmdlets](/powershell/module/az.applicationinsights), which make it easy to configure Application Insights resources programatically. 
The capabilities enabled by the cmdlets include: +In addition to using Azure Resource Manager templates (ARM templates), there's a rich set of [Application Insights PowerShell cmdlets](/powershell/module/az.applicationinsights). These cmdlets make it easy to configure Application Insights resources programmatically. You can use the capabilities enabled by the cmdlets to: -* Create and delete Application Insights resources -* Get lists of Application Insights resources and their properties -* Create and manage Continuous Export -* Create and manage Application Keys -* Set the Daily Cap -* Set the Pricing Plan +* Create and delete Application Insights resources. +* Get lists of Application Insights resources and their properties. +* Create and manage continuous export. +* Create and manage application keys. +* Set the daily cap. +* Set the pricing plan. -## Create Application Insights resources using a PowerShell cmdlet +## Create Application Insights resources by using a PowerShell cmdlet -Here's how to create a new Application Insights resource in the Azure East US datacenter using the [New-AzApplicationInsights](/powershell/module/az.applicationinsights/new-azapplicationinsights) cmdlet: +Here's how to create a new Application Insights resource in the Azure East US datacenter by using the [New-AzApplicationInsights](/powershell/module/az.applicationinsights/new-azapplicationinsights) cmdlet: ```PS New-AzApplicationInsights -ResourceGroupName <resource group> -Name <resource name> -location eastus ``` +## Create Application Insights resources by using an ARM template -## Create Application Insights resources using a Resource Manager template +Here's how to create a new Application Insights resource by using an ARM template. -Here's how to create a new Application Insights resource using a Resource Manager template. +### Create the ARM template -### Create the Azure Resource Manager template --Create a new .json file - let's call it `template1.json` in this example.
Copy this content into it: +Create a new .json file. Let's call it `template1.json` in this example. Copy this content into it: ```JSON { Create a new .json file - let's call it `template1.json` in this example. Copy t } ``` -### Use the Resource Manager template to create a new Application Insights resource +### Use the ARM template to create a new Application Insights resource ++1. In PowerShell, sign in to Azure by using `$Connect-AzAccount`. +1. Set your context to a subscription with `Set-AzContext "<subscription ID>"`. +1. Run a new deployment to create a new Application Insights resource: -1. In PowerShell, sign in to Azure using `$Connect-AzAccount` -2. Set your context to a subscription with `Set-AzContext "<subscription ID>"` -2. Run a new deployment to create a new Application Insights resource: - ```PS New-AzResourceGroupDeployment -ResourceGroupName Fabrikam ` -TemplateFile .\template1.json ` -appName myNewApp ``` - + * `-ResourceGroupName` is the group where you want to create the new resources. * `-TemplateFile` must occur before the custom parameters.- * `-appName` The name of the resource to create. + * `-appName` is the name of the resource to create. -You can add other parameters - you'll find their descriptions in the parameters section of the template. +You can add other parameters. You'll find their descriptions in the parameters section of the template. ## Get the instrumentation key -After creating an application resource, you'll want the instrumentation key: +After you create an application resource, you'll want the instrumentation key: -1. `$Connect-AzAccount` -2. `Set-AzContext "<subscription ID>"` -3. `$resource = Get-AzResource -Name "<resource name>" -ResourceType "Microsoft.Insights/components"` -4. `$details = Get-AzResource -ResourceId $resource.ResourceId` -5. `$details.Properties.InstrumentationKey` +1. Sign in to Azure by using `$Connect-AzAccount`. +1. Set your context to a subscription with `Set-AzContext "<subscription ID>"`. 
+1. Then use: + 1. `$resource = Get-AzResource -Name "<resource name>" -ResourceType "Microsoft.Insights/components"` + 1. `$details = Get-AzResource -ResourceId $resource.ResourceId` + 1. `$details.Properties.InstrumentationKey` To see a list of many other properties of your Application Insights resource, use: To see a list of many other properties of your Application Insights resource, us Get-AzApplicationInsights -ResourceGroupName Fabrikam -Name FabrikamProd | Format-List ``` -Additional properties are available via the cmdlets: +More properties are available via the cmdlets: + * `Set-AzApplicationInsightsDailyCap` * `Set-AzApplicationInsightsPricingPlan` * `Get-AzApplicationInsightsApiKey` * `Get-AzApplicationInsightsContinuousExport` -Refer to the [detailed documentation](/powershell/module/az.applicationinsights) for the parameters for these cmdlets. +See the [detailed documentation](/powershell/module/az.applicationinsights) for the parameters for these cmdlets. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Set the data retention -Below are three methods to programmatically set the data retention on an Application Insights resource. +You can use the following three methods to programmatically set the data retention on an Application Insights resource. -### Setting data retention using a PowerShell commands +### Set data retention by using PowerShell commands Here's a simple set of PowerShell commands to set the data retention for your Application Insights resource: $Resource.Properties.RetentionInDays = 365 $Resource | Set-AzResource -Force ``` -### Setting data retention using REST +### Set data retention by using REST -To get the current data retention for your Application Insights resource, you can use the OSS tool [ARMClient](https://github.com/projectkudu/ARMClient). 
(Learn more about ARMClient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes.) Here's an example using `ARMClient`, to get the current retention: +To get the current data retention for your Application Insights resource, you can use the OSS tool [ARMClient](https://github.com/projectkudu/ARMClient). Learn more about ARMClient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes. Here's an example that uses `ARMClient` to get the current retention: ```PS armclient GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName?api-version=2018-05-01-preview To set the retention, the command is a similar PUT: armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName?api-version=2018-05-01-preview "{location: 'eastus', properties: {'retentionInDays': 365}}" ``` -To set the data retention to 365 days using the template above, run: +To set the data retention to 365 days by using the preceding template, run: ```PS New-AzResourceGroupDeployment -ResourceGroupName "<resource group>" ` New-AzResourceGroupDeployment -ResourceGroupName "<resource group>" ` -appName myApp ``` -### Setting data retention using a PowerShell script +### Set data retention by using a PowerShell script -The following script can also be used to change retention. Copy this script to save as `Set-ApplicationInsightsRetention.ps1`. +The following script can also be used to change retention. Copy this script to save it as `Set-ApplicationInsightsRetention.ps1`. 
```PS Param( Set-ApplicationInsightsRetention ` ## Set the daily cap -To get the daily cap properties, use the [Set-AzApplicationInsightsPricingPlan](/powershell/module/az.applicationinsights/set-azapplicationinsightspricingplan) cmdlet: +To get the daily cap properties, use the [Set-AzApplicationInsightsPricingPlan](/powershell/module/az.applicationinsights/set-azapplicationinsightspricingplan) cmdlet: ```PS Set-AzApplicationInsightsDailyCap -ResourceGroupName <resource group> -Name <resource name> | Format-List ``` -To set the daily cap properties, use same cmdlet. For instance, to set the cap to 300 GB/day, +To set the daily cap properties, use the same cmdlet. For instance, to set the cap to 300 GB per day: ```PS Set-AzApplicationInsightsDailyCap -ResourceGroupName <resource group> -Name <resource name> -DailyCapGB 300 armclient GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/ ## Set the daily cap reset time -To set the daily cap reset time, you can use [ARMClient](https://github.com/projectkudu/ARMClient). Here's an example using `ARMClient`, to set the reset time to a new hour (in this example 12:00 UTC): +To set the daily cap reset time, you can use [ARMClient](https://github.com/projectkudu/ARMClient). Here's an example using `ARMClient` to set the reset time to a new hour. 
This example shows 12:00 UTC: ```PS armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName/CurrentBillingFeatures?api-version=2018-05-01-preview "{'CurrentBillingFeatures':['Basic'],'DataVolumeCap':{'Cap':100,'WarningThreshold':80,'ResetTime':12}}" ``` <a id="price"></a>-## Set the pricing plan -To get current pricing plan, use the [Set-AzApplicationInsightsPricingPlan](/powershell/module/az.applicationinsights/set-azapplicationinsightspricingplan) cmdlet: +## Set the pricing plan ++To get the current pricing plan, use the [Set-AzApplicationInsightsPricingPlan](/powershell/module/az.applicationinsights/set-azapplicationinsightspricingplan) cmdlet: ```PS Set-AzApplicationInsightsPricingPlan -ResourceGroupName <resource group> -Name <resource name> | Format-List ``` -To set the pricing plan, use same cmdlet with the `-PricingPlan` specified: +To set the pricing plan, use the same cmdlet with the `-PricingPlan` specified: ```PS Set-AzApplicationInsightsPricingPlan -ResourceGroupName <resource group> -Name <resource name> -PricingPlan Basic ``` -You can also set the pricing plan on an existing Application Insights resource using the Resource Manager template above, omitting the "microsoft.insights/components" resource and the `dependsOn` node from the billing resource. For instance, to set it to the Per GB plan (formerly called the Basic plan), run: +You can also set the pricing plan on an existing Application Insights resource by using the preceding ARM template, omitting the "microsoft.insights/components" resource and the `dependsOn` node from the billing resource. 
For instance, to set it to the Per GB plan (formerly called the Basic plan), run: ```PS New-AzResourceGroupDeployment -ResourceGroupName "<resource group>" ` You can also set the pricing plan on an existing Application Insights resource u The `priceCode` is defined as: -|priceCode|plan| +|priceCode|Plan| ||| |1|Per GB (formerly named the Basic plan)| |2|Per Node (formerly named the Enterprise plan)| -Finally, you can use [ARMClient](https://github.com/projectkudu/ARMClient) to get and set pricing plans and daily cap parameters. To get the current values, use: +Finally, you can use [ARMClient](https://github.com/projectkudu/ARMClient) to get and set pricing plans and daily cap parameters. To get the current values, use: ```PS armclient GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName/CurrentBillingFeatures?api-version=2018-05-01-preview ``` -And you can set all of these parameters using: +You can set all of these parameters by using: ```PS armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/microsoft.insights/components/MyResourceName/CurrentBillingFeatures?api-version=2018-05-01-preview "{'CurrentBillingFeatures':['Basic'],'DataVolumeCap':{'Cap':200,'ResetTime':12,'StopSendNotificationWhenHitCap':true,'WarningThreshold':90,'StopSendNotificationWhenHitThreshold':true}}" ``` -This will set the daily cap to 200 GB/day, configure the daily cap reset time to 12:00 UTC, send emails both when the cap is hit and the warning level is met, and set the warning threshold to 90% of the cap.
## Add a metric alert -To automate the creation of metric alerts, consult the [metric alerts template article](../alerts/alerts-metric-create-templates.md#template-for-a-simple-static-threshold-metric-alert) -+To automate the creation of metric alerts, see the [Metric alerts template](../alerts/alerts-metric-create-templates.md#template-for-a-simple-static-threshold-metric-alert) article. ## Add an availability test -To automate availability tests, consult the [metric alerts template article](../alerts/alerts-metric-create-templates.md#template-for-an-availability-test-along-with-a-metric-alert). +To automate availability tests, see the [Metric alerts template](../alerts/alerts-metric-create-templates.md#template-for-an-availability-test-along-with-a-metric-alert) article. ## Add more resources -To automate the creation of any other resource of any kind, create an example manually, and then copy and parameterize its code from [Azure Resource Manager](https://resources.azure.com/). +To automate the creation of any other resource of any kind, create an example manually and then copy and parameterize its code from [Azure Resource Manager](https://resources.azure.com/). ++1. Open [Azure Resource Manager](https://resources.azure.com/). Navigate down through `subscriptions/resourceGroups/<your resource group>/providers/Microsoft.Insights/components` to your application resource. ++ ![Screenshot that shows navigation in Azure Resource Explorer.](./media/powershell/01.png) -1. Open [Azure Resource Manager](https://resources.azure.com/). Navigate down through `subscriptions/resourceGroups/<your resource group>/providers/Microsoft.Insights/components`, to your application resource. - - ![Navigation in Azure Resource Explorer](./media/powershell/01.png) - *Components* are the basic Application Insights resources for displaying applications. There are separate resources for the associated alert rules and availability web tests.-2. 
Copy the JSON of the component into the appropriate place in `template1.json`. -3. Delete these properties: - +1. Copy the JSON of the component into the appropriate place in `template1.json`. +1. Delete these properties: + * `id` * `InstrumentationKey` * `CreationDate` * `TenantId`-4. Open the `webtests` and `alertrules` sections and copy the JSON for individual items into your template. (Don't copy from the `webtests` or `alertrules` nodes: go into the items under them.) - +1. Open the `webtests` and `alertrules` sections and copy the JSON for individual items into your template. Don't copy from the `webtests` or `alertrules` nodes. Go into the items under them. + Each web test has an associated alert rule, so you have to copy both of them.- -5. Insert this line in each resource: - ++1. Insert this line in each resource: + `"apiVersion": "2015-05-01",` ### Parameterize the template-Now you have to replace the specific names with parameters. To [parameterize a template](../../azure-resource-manager/templates/syntax.md), you write expressions using a [set of helper functions](../../azure-resource-manager/templates/template-functions.md). +Now you have to replace the specific names with parameters. To [parameterize a template](../../azure-resource-manager/templates/syntax.md), you write expressions using a [set of helper functions](../../azure-resource-manager/templates/template-functions.md). -You can't parameterize just part of a string, so use `concat()` to build strings. +You can't parameterize only part of a string, so use `concat()` to build strings. Here are examples of the substitutions you'll want to make. There are several occurrences of each substitution. You might need others in your template. These examples use the parameters and variables we defined at the top of the template. 
-| find | replace with | +| Find | Replace with | | | | | `"hidden-link:/subscriptions/.../../components/MyAppName"` |`"[concat('hidden-link:',`<br/>`resourceId('microsoft.insights/components',` <br/> `parameters('appName')))]"` | | `"/subscriptions/.../../alertrules/myAlertName-myAppName-subsId",` |`"[resourceId('Microsoft.Insights/alertrules', variables('alertRuleName'))]",` | Azure should set up the resources in strict order. To make sure one setup comple `"dependsOn": ["[resourceId('Microsoft.Insights/webtests', variables('testName'))]"],` -- ## Next steps-Other automation articles: -* [Create an Application Insights resource](./create-new-resource.md#creating-a-resource-automatically) - quick method without using a template. -* [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert) -* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md) -* [Create release annotations](annotations.md) +See these other automation articles: ++* [Create an Application Insights resource](./create-new-resource.md#creating-a-resource-automatically) via a quick method without using a template. +* [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert). +* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md). +* [Create release annotations](annotations.md). |
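Pulled together, the substitutions and the `dependsOn` line described above might look like the following sketch of a single parameterized alert-rule resource. This fragment is illustrative only; `appName`, `alertRuleName`, and `testName` are assumed to be declared in the template's `parameters` and `variables` sections, and the remaining alert-rule properties are omitted:

```json
{
  "apiVersion": "2015-05-01",
  "type": "Microsoft.Insights/alertrules",
  "name": "[variables('alertRuleName')]",
  "location": "[resourceGroup().location]",
  "tags": {
    "[concat('hidden-link:', resourceId('microsoft.insights/components', parameters('appName')))]": "Resource"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Insights/webtests', variables('testName'))]"
  ]
}
```

Because template functions can't parameterize part of a string literal, the `hidden-link` tag name is built with `concat()` exactly as shown in the substitution table.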
azure-monitor | Telemetry Channels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md | Title: Telemetry channels in Azure Application Insights | Microsoft Docs -description: How to customize telemetry channels in Azure Application Insights SDKs for .NET and .NET Core. + Title: Telemetry channels in Application Insights | Microsoft Docs +description: How to customize telemetry channels in Application Insights SDKs for .NET and .NET Core. Last updated 05/14/2019 ms.devlang: csharp-Telemetry channels are an integral part of the [Azure Application Insights SDKs](./app-insights-overview.md). They manage buffering and transmission of telemetry to the Application Insights service. The .NET and .NET Core versions of the SDKs have two built-in telemetry channels: `InMemoryChannel` and `ServerTelemetryChannel`. This article describes each channel in detail, including how to customize channel behavior. +Telemetry channels are an integral part of the [Application Insights SDKs](./app-insights-overview.md). They manage buffering and transmission of telemetry to the Application Insights service. The .NET and .NET Core versions of the SDKs have two built-in telemetry channels: `InMemoryChannel` and `ServerTelemetryChannel`. This article describes each channel and shows how to customize channel behavior. ## What are telemetry channels? Telemetry channels are responsible for buffering telemetry items and sending them to the Application Insights service, where they're stored for querying and analysis. A telemetry channel is any class that implements the [`Microsoft.ApplicationInsights.ITelemetryChannel`](/dotnet/api/microsoft.applicationinsights.channel.itelemetrychannel) interface. -The `Send(ITelemetry item)` method of a telemetry channel is called after all telemetry initializers and telemetry processors are called. So, any items dropped by a telemetry processor won't reach the channel. 
`Send()` doesn't typically send the items to the back end instantly. Typically, it buffers them in memory and sends them in batches, for efficient transmission. +The `Send(ITelemetry item)` method of a telemetry channel is called after all telemetry initializers and telemetry processors are called. So, any items dropped by a telemetry processor won't reach the channel. The `Send()` method doesn't ordinarily send the items to the back end instantly. Typically, it buffers them in memory and sends them in batches for efficient transmission. [Live Metrics Stream](live-stream.md) also has a custom channel that powers the live streaming of telemetry. This channel is independent of the regular telemetry channel, and this document doesn't apply to it. The `Send(ITelemetry item)` method of a telemetry channel is called after all te The Application Insights .NET and .NET Core SDKs ship with two built-in channels: -* `InMemoryChannel`: A lightweight channel that buffers items in memory until they're sent. Items are buffered in memory and flushed once every 30 seconds, or whenever 500 items are buffered. This channel offers minimal reliability guarantees because it doesn't retry sending telemetry after a failure. This channel also doesn't keep items on disk, so any unsent items are lost permanently upon application shutdown (graceful or not). This channel implements a `Flush()` method that can be used to force-flush any in-memory telemetry items synchronously. This channel is well suited for short-running applications where a synchronous flush is ideal. +* `InMemoryChannel`: A lightweight channel that buffers items in memory until they're sent. Items are buffered in memory and flushed once every 30 seconds, or whenever 500 items are buffered. This channel offers minimal reliability guarantees because it doesn't retry sending telemetry after a failure. This channel also doesn't keep items on disk. 
So any unsent items are lost permanently upon application shutdown, whether it's graceful or not. This channel implements a `Flush()` method that can be used to force-flush any in-memory telemetry items synchronously. This channel is well suited for short-running applications where a synchronous flush is ideal. This channel is part of the larger Microsoft.ApplicationInsights NuGet package and is the default channel that the SDK uses when nothing else is configured. -* `ServerTelemetryChannel`: A more advanced channel that has retry policies and the capability to store data on a local disk. This channel retries sending telemetry if transient errors occur. This channel also uses local disk storage to keep items on disk during network outages or high telemetry volumes. Because of these retry mechanisms and local disk storage, this channel is considered more reliable and is recommended for all production scenarios. This channel is the default for [ASP.NET](./asp-net.md) and [ASP.NET Core](./asp-net-core.md) applications that are configured according to the official documentation. This channel is optimized for server scenarios with long-running processes. The [`Flush()`](#which-channel-should-i-use) method that's implemented by this channel isn't synchronous. +* `ServerTelemetryChannel`: A more advanced channel that has retry policies and the capability to store data on a local disk. This channel retries sending telemetry if transient errors occur. This channel also uses local disk storage to keep items on disk during network outages or high telemetry volumes. Because of these retry mechanisms and local disk storage, this channel is considered more reliable. We recommend it for all production scenarios. This channel is the default for [ASP.NET](./asp-net.md) and [ASP.NET Core](./asp-net-core.md) applications that are configured according to the official documentation. This channel is optimized for server scenarios with long-running processes. 
The [`Flush()`](#which-channel-should-i-use) method that's implemented by this channel isn't synchronous. - This channel is shipped as the Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel NuGet package and is acquired automatically when you use either the Microsoft.ApplicationInsights.Web or Microsoft.ApplicationInsights.AspNetCore NuGet package. + This channel is shipped as the Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel NuGet package and is acquired automatically when you use either the Microsoft.ApplicationInsights.Web or Microsoft.ApplicationInsights.AspNetCore NuGet package. ## Configure a telemetry channel -You configure a telemetry channel by setting it to the active telemetry configuration. For ASP.NET applications, configuration involves setting the telemetry channel instance to `TelemetryConfiguration.Active`, or by modifying `ApplicationInsights.config`. For ASP.NET Core applications, configuration involves adding the channel to the Dependency Injection Container. +You configure a telemetry channel by setting it to the active telemetry configuration. For ASP.NET applications, configuration involves setting the telemetry channel instance to `TelemetryConfiguration.Active` or by modifying `ApplicationInsights.config`. For ASP.NET Core applications, configuration involves adding the channel to the dependency injection container. -The following sections show examples of configuring the `StorageFolder` setting for the channel in various application types. `StorageFolder` is just one of the configurable settings. For the full list of configuration settings, see [the settings section](#configurable-settings-in-channels) later in this article. +The following sections show examples of configuring the `StorageFolder` setting for the channel in various application types. `StorageFolder` is just one of the configurable settings. 
For the full list of configuration settings, see the [Configurable settings in channels](#configurable-settings-in-channels) section later in this article. ### Configuration by using ApplicationInsights.config for ASP.NET applications The following section from [ApplicationInsights.config](configuration-with-appli ### Configuration in code for ASP.NET applications -The following code sets up a 'ServerTelemetryChannel' instance with `StorageFolder` set to a custom location. Add this code at the beginning of the application, typically in `Application_Start()` method in Global.aspx.cs. +The following code sets up a `ServerTelemetryChannel` instance with `StorageFolder` set to a custom location. Add this code at the beginning of the application, typically in the `Application_Start()` method in Global.aspx.cs. ```csharp using Microsoft.ApplicationInsights.Extensibility; public void ConfigureServices(IServiceCollection services) ``` > [!IMPORTANT]-> Configuring the channel by using `TelemetryConfiguration.Active` is not supported for ASP.NET Core applications. +> Configuring the channel by using `TelemetryConfiguration.Active` isn't supported for ASP.NET Core applications. ### Configuration in code for .NET/.NET Core console applications TelemetryConfiguration.Active.TelemetryChannel = serverTelemetryChannel; `ServerTelemetryChannel` stores arriving items in an in-memory buffer. The items are serialized, compressed, and stored into a `Transmission` instance once every 30 seconds, or when 500 items have been buffered. A single `Transmission` instance contains up to 500 items and represents a batch of telemetry that's sent over a single HTTPS call to the Application Insights service. -By default, a maximum of 10 `Transmission` instances can be sent in parallel. If telemetry is arriving at faster rates, or if the network or the Application Insights back end is slow, `Transmission` instances are stored in memory. 
The default capacity of this in-memory `Transmission` buffer is 5 MB. When the in-memory capacity has been exceeded, `Transmission` instances are stored on local disk up to a limit of 50 MB. `Transmission` instances are stored on local disk also when there are network problems. Only those items that are stored on a local disk survive an application crash. They're sent whenever the application starts again. If network issues persist the `ServerTelemetryChannel` will use an exponential backoff logic ranging from 10 seconds to 1 hour before retrying to send telemetry. +By default, a maximum of 10 `Transmission` instances can be sent in parallel. If telemetry is arriving at faster rates, or if the network or the Application Insights back end is slow, `Transmission` instances are stored in memory. The default capacity of this in-memory `Transmission` buffer is 5 MB. When the in-memory capacity has been exceeded, `Transmission` instances are stored on local disk up to a limit of 50 MB. ++`Transmission` instances are stored on local disk also when there are network problems. Only those items that are stored on a local disk survive an application crash. They're sent whenever the application starts again. If network issues persist, `ServerTelemetryChannel` will use an exponential backoff logic ranging from 10 seconds to 1 hour before retrying to send telemetry. ## Configurable settings in channels For the full list of configurable settings for each channel, see: * [InMemoryChannel](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/BASE/src/Microsoft.ApplicationInsights/Channel/InMemoryChannel.cs)- * [ServerTelemetryChannel](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/BASE/src/ServerTelemetryChannel/ServerTelemetryChannel.cs) Here are the most commonly used settings for `ServerTelemetryChannel`: -1. `MaxTransmissionBufferCapacity`: The maximum amount of memory, in bytes, used by the channel to buffer transmissions in memory. 
When this capacity is reached, new items are stored directly to local disk. The default value is 5 MB. Setting a higher value leads to less disk usage, but remember that items in memory will be lost if the application crashes. --1. `MaxTransmissionSenderCapacity`: The maximum number of `Transmission` instances that will be sent to Application Insights at the same time. The default value is 10. This setting can be configured to a higher number, which is recommended when a huge volume of telemetry is generated. High volume typically occurs during load testing or when sampling is turned off. --1. `StorageFolder`: The folder that's used by the channel to store items to disk as needed. In Windows, either %LOCALAPPDATA% or %TEMP% is used if no other path is specified explicitly. In environments other than Windows, you must specify a valid location or telemetry won't be stored to local disk. +- `MaxTransmissionBufferCapacity`: The maximum amount of memory, in bytes, used by the channel to buffer transmissions in memory. When this capacity is reached, new items are stored directly to local disk. The default value is 5 MB. Setting a higher value leads to less disk usage, but remember that items in memory will be lost if the application crashes. +- `MaxTransmissionSenderCapacity`: The maximum number of `Transmission` instances that will be sent to Application Insights at the same time. The default value is 10. This setting can be configured to a higher number, which we recommend when a huge volume of telemetry is generated. High volume typically occurs during load testing or when sampling is turned off. +- `StorageFolder`: The folder that's used by the channel to store items to disk as needed. In Windows, either %LOCALAPPDATA% or %TEMP% is used if no other path is specified explicitly. In environments other than Windows, you must specify a valid location or telemetry won't be stored to local disk. ## Which channel should I use? 
-`ServerTelemetryChannel` is recommended for most production scenarios involving long-running applications. The `Flush()` method implemented by `ServerTelemetryChannel` isn't synchronous, and it also doesn't guarantee sending all pending items from memory or disk. If you use this channel in scenarios where the application is about to shut down, we recommend that you introduce some delay after calling `Flush()`. The exact amount of delay that you might require isn't predictable. It depends on factors like how many items or `Transmission` instances are in memory, how many are on disk, how many are being transmitted to the back end, and whether the channel is in the middle of exponential back-off scenarios. +We recommend `ServerTelemetryChannel` for most production scenarios that involve long-running applications. The `Flush()` method implemented by `ServerTelemetryChannel` isn't synchronous. It also doesn't guarantee sending all pending items from memory or disk. ++If you use this channel in scenarios where the application is about to shut down, introduce some delay after you call `Flush()`. The exact amount of delay that you might require isn't predictable. It depends on factors like how many items or `Transmission` instances are in memory, how many are on disk, how many are being transmitted to the back end, and whether the channel is in the middle of exponential back-off scenarios. -If you need to do a synchronous flush, we recommend that you use `InMemoryChannel`. +If you need to do a synchronous flush, use `InMemoryChannel`. ## Frequently asked questions +This section provides answers to common questions. + ### Does the Application Insights channel guarantee telemetry delivery? If not, what are the scenarios in which telemetry can be lost? The short answer is that none of the built-in channels offer a transaction-type guarantee of telemetry delivery to the back end. 
`ServerTelemetryChannel` is more advanced compared with `InMemoryChannel` for reliable delivery, but it also makes only a best-effort attempt to send telemetry. Telemetry can still be lost in several situations, including these common scenarios: -1. Items in memory are lost when the application crashes. --1. Telemetry is lost during extended periods of network problems. Telemetry is stored to local disk during network outages or when problems occur with the Application Insights back end. However, items older than 48 hours are discarded. --1. The default disk locations for storing telemetry in Windows are %LOCALAPPDATA% or %TEMP%. These locations are typically local to the machine. If the application migrates physically from one location to another, any telemetry stored in the original location is lost. +- Items in memory are lost when the application crashes. +- Telemetry is lost during extended periods of network problems. Telemetry is stored to local disk during network outages or when problems occur with the Application Insights back end. However, items older than 48 hours are discarded. +- The default disk locations for storing telemetry in Windows are %LOCALAPPDATA% or %TEMP%. These locations are typically local to the machine. If the application migrates physically from one location to another, any telemetry stored in the original location is lost. +- In Azure Web Apps on Windows, the default disk-storage location is D:\local\LocalAppData. This location isn't persisted. It's wiped out in app restarts, scale-outs, and other such operations, which leads to loss of any telemetry stored there. You can override the default and specify storage to a persisted location like D:\home. However, such persisted locations are served by remote storage and so can be slow. -1. In Azure Web Apps on Windows, the default disk-storage location is D:\local\LocalAppData. This location isn't persisted. 
It's wiped out in app restarts, scale-outs, and other such operations, leading to loss of any telemetry stored there. You can override the default and specify storage to a persisted location like D:\home. However, such persisted locations are served by remote storage and so can be slow. --Though less likely, it is also possible that channel can cause duplicate -telemetry items. This occurs when `ServerTelemetryChannel` retries due to -network failure/timeout, when the telemetry was actually delivered to the -backend, but the response was lost due to network issues or there was timeout. +Although less likely, it's also possible that the channel can cause duplicate telemetry items. This behavior occurs when `ServerTelemetryChannel` retries because of network failure or timeout, when the telemetry was delivered to the back end, but the response was lost because of network issues or there was a timeout. ### Does ServerTelemetryChannel work on systems other than Windows? Although the name of its package and namespace includes "WindowsServer," this channel is supported on systems other than Windows, with the following exception. On systems other than Windows, the channel doesn't create a local storage folder by default. You must create a local storage folder and configure the channel to use it. After local storage has been configured, the channel works the same way on all systems. > [!NOTE]-> With the release 2.15.0-beta3 and greater local storage is now automatically created for Linux, Mac, and Windows. For non Windows systems the SDK will automatically create a local storage folder based on the following logic: -> - `${TMPDIR}` - if `${TMPDIR}` environment variable is set this location is used. -> - `/var/tmp` - if the previous location does not exist we try `/var/tmp`. -> - `/tmp` - if both the previous locations do not exist we try `tmp`. -> - If none of those locations exist local storage is not created and manual configuration is still required. 
[For full implementation details](https://github.com/microsoft/ApplicationInsights-dotnet/pull/1860). +> With the release 2.15.0-beta3 and greater, local storage is now automatically created for Linux, Mac, and Windows. For non-Windows systems, the SDK will automatically create a local storage folder based on the following logic: +> +> - `${TMPDIR}`: If the `${TMPDIR}` environment variable is set, this location is used. +> - `/var/tmp`: If the previous location doesn't exist, we try `/var/tmp`. +> - `/tmp`: If both the previous locations don't exist, we try `/tmp`. +> - If none of those locations exist, local storage isn't created and manual configuration is still required. For full implementation details, see [this GitHub repo](https://github.com/microsoft/ApplicationInsights-dotnet/pull/1860). ### Does the SDK create temporary local storage? Is the data encrypted at storage? The SDK stores telemetry items in local storage during network problems or during throttling. This data isn't encrypted locally. -For Windows systems, the SDK automatically creates a temporary local folder in the %TEMP% or %LOCALAPPDATA% directory, and restricts access to administrators and the current user only. +For Windows systems, the SDK automatically creates a temporary local folder in the %TEMP% or %LOCALAPPDATA% directory and restricts access to administrators and the current user only. -For systems other than Windows, no local storage is created automatically by the SDK, and so no data is stored locally by default. +For systems other than Windows, no local storage is created automatically by the SDK, so no data is stored locally by default. > [!NOTE]-> With the release 2.15.0-beta3 and greater local storage is now automatically created for Linux, Mac, and Windows. +> With the release 2.15.0-beta3 and greater, local storage is now automatically created for Linux, Mac, and Windows. - You can create a storage directory yourself and configure the channel to use it. 
In this case, you're responsible for ensuring that the directory is secured. -Read more about [data protection and privacy](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage). + You can create a storage directory yourself and configure the channel to use it. In this case, you're responsible for ensuring that the directory is secured. Read more about [data protection and privacy](data-retention-privacy.md#does-the-sdk-create-temporary-local-storage). ## Open-source SDK-Like every SDK for Application Insights, channels are open source. Read and contribute to the code, or report problems, at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet). +Like every SDK for Application Insights, channels are open source. Read and contribute to the code or report problems at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet). ## Next steps * [Sampling](./sampling.md)-* [SDK Troubleshooting](./asp-net-troubleshoot-no-data.md) -+* [SDK troubleshooting](./asp-net-troubleshoot-no-data.md) |
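As a sketch of the non-Windows storage guidance in this article, the following C# configures `ServerTelemetryChannel` with an explicit `StorageFolder`. The folder path is a hypothetical example, not a recommendation; on SDK versions earlier than 2.15.0-beta3 you must create the directory yourself and you're responsible for securing it:

```csharp
using System.IO;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;

// Hypothetical path for a non-Windows host. Ensure the app can write here
// and that access to the directory is appropriately restricted.
var storagePath = "/var/tmp/myapp-telemetry";
Directory.CreateDirectory(storagePath);

var configuration = TelemetryConfiguration.CreateDefault();
var channel = new ServerTelemetryChannel { StorageFolder = storagePath };
channel.Initialize(configuration);
configuration.TelemetryChannel = channel;
```

After local storage is configured this way, the channel behaves the same as it does on Windows, including disk buffering during network outages.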
azure-monitor | Best Practices Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md | Diagnostic settings don't allow granular filtering of resource logs. You might r See the documentation for other services that store their data in a Log Analytics workspace for recommendations on optimizing their data usage: -- **Container insights**: [Understand monitoring costs for Container insights](containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost)+- **Container insights**: [Understand monitoring costs for Container insights](containers/container-insights-cost.md#control-ingestion-to-reduce-cost) - **Microsoft Sentinel**: [Reduce costs for Microsoft Sentinel](../sentinel/billing-reduce-costs.md) - **Defender for Cloud**: [Setting the security event option at the workspace level](../defender-for-cloud/working-with-log-analytics-agent.md#data-collection-tier) |
azure-monitor | Container Insights Agent Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md | To configure and deploy your ConfigMap configuration file to your cluster: - To exclude specific namespaces for stdout log collection, configure the key/value by using the following example: `[log_collection_settings.stdout] enabled = true exclude_namespaces = ["my-namespace-1", "my-namespace-2"]`.- - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally. Then follow the steps [here](container-insights-manage-agent.md#how-to-disable-environment-variable-collection-on-a-container) to complete configuration for the specific container. + - To disable environment variable collection for a specific container, set the key/value `[log_collection_settings.env_var] enabled = true` to enable variable collection globally. Then follow the steps [here](container-insights-manage-agent.md#disable-environment-variable-collection-on-a-container) to complete configuration for the specific container. - To disable stderr log collection cluster-wide, configure the key/value by using the following example: `[log_collection_settings.stderr] enabled = false`. Save your changes in the editor. |
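Combined, the log-collection settings referenced above might look like the following fragment of the Container insights agent ConfigMap. This is a sketch; the namespace names are the article's own placeholder examples:

```toml
[log_collection_settings]
   [log_collection_settings.stdout]
      # Collect stdout logs cluster-wide, excluding these namespaces.
      enabled = true
      exclude_namespaces = ["my-namespace-1", "my-namespace-2"]
   [log_collection_settings.stderr]
      # Disable stderr log collection cluster-wide.
      enabled = false
   [log_collection_settings.env_var]
      # Enable environment variable collection globally; per-container
      # opt-out is configured separately, as described in the linked steps.
      enabled = true
```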
azure-monitor | Container Insights Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md | Title: Monitoring cost for Container insights | Microsoft Docs -description: This article describes the monitoring cost for metrics & inventory data collected by Container insights to help customers manage their usage and associated costs. +description: This article describes the monitoring cost for metrics and inventory data collected by Container insights to help customers manage their usage and associated costs. Last updated 08/29/2022-This article provides pricing guidance for Container insights to help you understand the following: +This article provides pricing guidance for Container insights to help you understand how to: -* How to estimate costs up-front before you enable Container Insights. -* How to measure costs after Container insights has been enabled for one or more containers -* How to control the collection of data and make cost reductions +* Estimate costs up front before you enable Container insights. +* Measure costs after Container insights has been enabled for one or more containers. +* Control the collection of data and make cost reductions. -Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster. +Azure Monitor Logs collects, indexes, and stores data generated by your Kubernetes cluster. -The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected, it is also dependent on the plan selected, and how long you chose to store data generated from your clusters. +The Azure Monitor pricing model is primarily based on the amount of data ingested in gigabytes per day into your Log Analytics workspace. The cost of a Log Analytics workspace isn't based only on the volume of data collected. 
It's also dependent on the plan selected and how long you choose to store data generated from your clusters. >[!NOTE] ->All sizes and pricing are for sample estimation only. Please refer to the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region. +>All sizes and pricing are for sample estimation only. See the Azure Monitor [pricing](https://azure.microsoft.com/pricing/details/monitor/) page for the most recent pricing based on your Azure Monitor Log Analytics pricing model and Azure region. -The following is a summary of what types of data are collected from a Kubernetes cluster with Container insights that influences cost and can be customized based on your usage: +The following types of data collected from a Kubernetes cluster with Container insights influence cost and can be customized based on your usage: -- Stdout, stderr container logs from every monitored container in every Kubernetes namespace in the cluster+- Stdout and stderr container logs from every monitored container in every Kubernetes namespace in the cluster - Container environment variables from every monitored container in the cluster-- Completed Kubernetes jobs/pods in the cluster that does not require monitoring+- Completed Kubernetes jobs/pods in the cluster that don't require monitoring - Active scraping of Prometheus metrics-- [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes master node logs in your AKS cluster to analyze log data generated by master components such as the *kube-apiserver* and *kube-controller-manager*.+- [Diagnostic log collection](../../aks/monitor-aks.md#configure-monitoring) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
-## What is collected from Kubernetes clusters +## What's collected from Kubernetes clusters? -Container insights includes a predefined set of metrics and inventory items collected that are written as log data in your Log Analytics workspace. All metrics listed below are collected every one minute. +Container insights includes a predefined set of metrics and inventory items that are collected and written as log data in your Log Analytics workspace. All the metrics listed here are collected every minute. ### Node metrics collected -The following list is the 24 metrics per node that are collected: +The 24 metrics per node that are collected: - cpuUsageNanoCores - cpuCapacityNanoCores The following list is the 24 metrics per node that are collected: ### Container metrics -The following list is the eight metrics per container collected: +The eight metrics per container that are collected: - cpuUsageNanoCores - cpuRequestNanoCores The following list is the eight metrics per container collected: ### Cluster inventory -The following list is the cluster inventory data collected by default: +The cluster inventory data that's collected by default: -- KubePodInventory – 1 per pod per minute-- KubeNodeInventory – 1 per node per minute-- KubeServices – 1 per service per minute-- ContainerInventory – 1 per container per minute+- KubePodInventory: 1 per pod per minute +- KubeNodeInventory: 1 per node per minute +- KubeServices: 1 per service per minute +- ContainerInventory: 1 per container per minute -## Estimating costs to monitor your AKS cluster +## Estimate costs to monitor your AKS cluster -The estimation below is based on an Azure Kubernetes Service (AKS) cluster with the following sizing example. Also, the estimate applies only for metrics and inventory data collected. For container logs (stdout, stderr, and environmental variables), it varies based on the log sizes generated by the workload, and they are excluded from our estimation. 
+The following estimation is based on an AKS cluster with the following sizing example. The estimate applies only for metrics and inventory data collected. For container logs like stdout, stderr, and environmental variables, the estimate varies based on the log sizes generated by the workload. They're excluded from our estimation. -If you enabled monitoring of an AKS cluster configured as follows, +If you enabled monitoring of an AKS cluster configured as follows: - Three nodes - Two disks per node You can see the tables and volume of data generated per hour in the assigned Log |KubeHealth | 0.1 | |KubeMonAgentEvents |0.005 | -Total = 31 MB/Hour = 23.1 GB/month (one month = 31 days) +Total = 31 MB/hour = 23.1 GB/month (one month = 31 days) -Using the default [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Log Analytics, which is a Pay-As-You-Go model, you can estimate the Azure Monitor cost per month. After including a capacity reservation, the price would be higher per month depending on the reservation selected. +By using the default [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Log Analytics, which is a pay-as-you-go model, you can estimate the Azure Monitor cost per month. After a capacity reservation is included, the price would be higher per month depending on the reservation selected. -## Controlling ingestion to reduce cost +## Control ingestion to reduce cost -Consider a scenario where your organization's different business unit shares Kubernetes infrastructure and a Log Analytics workspace. With each business unit separated by a Kubernetes namespace. You can visualize how much data is ingested in each workspace using the **Data Usage** runbook which is available from the **View Workbooks** dropdown. +Consider a scenario where your organization's different business units share Kubernetes infrastructure and a Log Analytics workspace. Each business unit is separated by a Kubernetes namespace. 
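The `Total = 31 MB/hour = 23.1 GB/month` figure above follows from simple arithmetic, using decimal units (1 GB = 1,000 MB). The per-GB price below is a placeholder assumption; check the Azure Monitor pricing page for the actual pay-as-you-go rate in your region:

```python
# Reproduce the monthly ingestion estimate: 31 MB/hour over a 31-day month.
MB_PER_HOUR = 31
HOURS_PER_MONTH = 24 * 31                     # one month = 31 days

gb_per_month = MB_PER_HOUR * HOURS_PER_MONTH / 1000   # decimal GB
print(f"{gb_per_month:.1f} GB/month")                 # 23.1 GB/month

ASSUMED_PRICE_PER_GB = 2.30   # placeholder rate in USD, not current pricing
print(f"~${gb_per_month * ASSUMED_PRICE_PER_GB:.2f}/month at the assumed rate")
```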
You can visualize how much data is ingested in each workspace by using the **Data Usage** runbook. The runbook is available from the **View Workbooks** dropdown list. -[![View workbooks dropdown](media/container-insights-cost/workbooks-dropdown.png)](media/container-insights-cost/workbooks-dropdown.png#lightbox) +[![Screenshot that shows the View Workbooks dropdown list.](media/container-insights-cost/workbooks-dropdown.png)](media/container-insights-cost/workbooks-dropdown.png#lightbox) +This workbook helps you visualize the source of your data without having to build your own library of queries from what we share in our documentation. In this workbook, you can view charts that present billable data such as the: -This workbook helps you to visualize the source of your data without having to build your own library of queries from what we share in our documentation. In this workbook, there are charts with which you can view billable data from such perspectives as: +- Total billable data ingested in GB by solution. +- Billable data ingested by Container logs (application logs). +- Billable container logs data ingested by Kubernetes namespace. +- Billable container logs data ingested segregated by Cluster name. +- Billable container log data ingested by log source entry. +- Billable diagnostic data ingested by diagnostic main node logs. 
-- Total billable data ingested in GB by solution-- Billable data ingested by Container logs(application logs)-- Billable container logs data ingested per by Kubernetes namespace-- Billable container logs data ingested segregated by Cluster name-- Billable container log data ingested by log source entry-- Billable diagnostic data ingested by diagnostic master node logs--[![Data usage workbook](media/container-insights-cost/data-usage-workbook.png)](media/container-insights-cost/data-usage-workbook.png#lightbox) +[![Screenshot that shows the Data Usage workbook.](media/container-insights-cost/data-usage-workbook.png)](media/container-insights-cost/data-usage-workbook.png#lightbox) To learn about managing rights and permissions to the workbook, review [Access control](../visualize/workbooks-overview.md#access-control). -After completing your analysis to determine which source or sources are generating the most data or more data that are exceeding your requirements, you can reconfigure data collection. Details on configuring collection of stdout, stderr, and environmental variables is described in the [Configure agent data collection settings](container-insights-agent-config.md) article. +After you finish your analysis to determine which sources are generating the data that's exceeding your requirements, you can reconfigure data collection. For more information on configuring collection of stdout, stderr, and environmental variables, see [Configure agent data collection settings](container-insights-agent-config.md). -The following are examples of what changes you can apply to your cluster by modifying the ConfigMap file to help control cost. +The following examples show what changes you can apply to your cluster by modifying the ConfigMap file to help control cost. -1. Disable stdout logs across all namespaces in the cluster by modifying the following in the ConfigMap file for the Azure Container Insights service pulling the metrics: +1. 
Disable stdout logs across all namespaces in the cluster by modifying the following code in the ConfigMap file for the Azure Container insights service that's pulling the metrics: ``` [log_collection_settings] The following are examples of what changes you can apply to your cluster by modi enabled = false ``` -2. Disable collecting stderr logs from your development namespace (for example, **dev-test**), and continue collecting stderr logs from other namespaces (for example, **prod** and **default**) by modifying the following in the ConfigMap file: +1. Disable collecting stderr logs from your development namespace. An example is `dev-test`. Continue collecting stderr logs from other namespaces, such as `prod` and `default`, by modifying the following code in the ConfigMap file: >[!NOTE]- >The kube-system log collection is disabled by default. The default setting is retained, adding **dev-test** namespace to the list of exclusion namespaces is applied to stderr log collection. + >The kube-system log collection is disabled by default. The default setting is retained. Adding the `dev-test` namespace to the list of exclusion namespaces is applied to stderr log collection. ``` [log_collection_settings.stderr] The following are examples of what changes you can apply to your cluster by modi exclude_namespaces = ["kube-system", "dev-test"] ``` -3. Disable environment variable collection across the cluster by modifying the following in the ConfigMap file. This is applicable to all containers in every Kubernetes namespace. +1. Disable environment variable collection across the cluster by modifying the following code in the ConfigMap file. This modification applies to all containers in every Kubernetes namespace. ``` [log_collection_settings.env_var] enabled = false ``` -4. To clean up completed jobs, specify the cleanup policy in the job definition by modifying the following in the ConfigMap file: +1.
To clean up jobs that are finished, specify the cleanup policy in the job definition by modifying the following code in the ConfigMap file: ``` apiVersion: batch/v1 The following are examples of what changes you can apply to your cluster by modi ttlSecondsAfterFinished: 100 ``` -After applying one or more of these changes to your ConfigMaps, apply it to your cluster withe the command `kubectl apply -f <configmap_yaml_file.yaml>`. For example, run the command `kubectl apply -f container-azm-ms-agentconfig.yaml` to open the file in your default editor to modify and then save it. +After you apply one or more of these changes to your ConfigMaps, apply it to your cluster with the command `kubectl apply -f <configmap_yaml_file.yaml>`. For example, run the command `kubectl apply -f container-azm-ms-agentconfig.yaml` to open the file in your default editor to modify and then save it. ### Prometheus metrics scraping -If you are utilizing [Prometheus metric scraping](container-insights-prometheus.md), ensure you consider the following to limit the number of metrics that you collect from your cluster: --- Ensure scraping frequency is set optimally (the default is 60 seconds). While you can increase the frequency to 15 seconds, you need to ensure that the metrics you are scraping are published at that frequency. Otherwise there will be many duplicate metrics scraped and sent to your Log Analytics workspace at intervals adding to data ingestion and retention costs, but are of less value. +If you use [Prometheus metric scraping](container-insights-prometheus.md), make sure that you limit the number of metrics you collect from your cluster: -- Container insights supports exclusion & inclusion lists by metric name. For example, if you are scraping **kubedns** metrics in your cluster, there might be hundreds of them that gets scraped by default, but you are most likely only interested in a subset.
Confirm you specified a list of metrics to scrape, or exclude others except a few to save on data ingestion volume. It is easy to enable scraping and not use many of those metrics, which will only add additional charges to your Log Analytics bill.+- Ensure that scraping frequency is optimally set. The default is 60 seconds. You can increase the frequency to 15 seconds, but you must ensure that the metrics you're scraping are published at that frequency. Otherwise, many duplicate metrics will be scraped and sent to your Log Analytics workspace at intervals that add to data ingestion and retention costs but are of less value. +- Container insights supports exclusion and inclusion lists by metric name. For example, if you're scraping **kubedns** metrics in your cluster, hundreds of them might get scraped by default. But you're most likely only interested in a subset of the metrics. Confirm that you specified a list of metrics to scrape, or exclude others except for a few to save on data ingestion volume. It's easy to enable scraping and not use many of those metrics, which will only add charges to your Log Analytics bill. +- When you scrape through pod annotations, ensure you filter by namespace so that you exclude scraping of pod metrics from namespaces that you don't use. An example is the `dev-test` namespace. -- When scraping through pod annotations, ensure you filter by namespace so that you exclude scraping of pod metrics from namespaces that you don't use (for example, **dev-test** namespace).- ### Configure Basic Logs You can save on data ingestion costs by configuring certain tables in your Log Analytics workspace that you primarily use for debugging, troubleshooting, and auditing as Basic Logs. For more information, including the limitations of Basic Logs, see [Configure Basic Logs](../best-practices-cost.md#configure-basic-logs). ContainerLogV2 is the configured version of Basic Logs that Container Insights uses. 
ContainerLogV2 includes verbose text-based log records. You must be on the ContainerLogV2 schema to configure Basic Logs. For more infor ## Next steps -For more information about how to understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in Log Analytics workspace](../logs/analyze-usage.md). +To help you understand what the costs are likely to be based on recent usage patterns from data collected with Container insights, see [Analyze usage in a Log Analytics workspace](../logs/analyze-usage.md). |
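The Prometheus inclusion and exclusion lists described above amount to filtering scraped metric names before they're ingested. A rough sketch of the idea (the metric names here are made-up examples, and the real filtering happens in the agent's ConfigMap, not in Python):

```python
# Keep-list filtering of scraped metric names, mirroring the idea of the
# agent's inclusion/exclusion lists. Names below are illustrative examples.

def filter_metrics(scraped: list[str], keep: set[str]) -> list[str]:
    return [name for name in scraped if name in keep]

scraped = [
    "kubedns_dns_request_count_total",
    "kubedns_dns_error_count_total",
    "kubedns_probe_latency_ms",
    "go_gc_duration_seconds",
]
keep = {"kubedns_dns_request_count_total", "kubedns_dns_error_count_total"}

print(filter_metrics(scraped, keep))  # only the request/error counters survive
```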
azure-monitor | Container Insights Log Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md | -# How to create log alerts from Container insights +# Create log alerts from Container insights -Container insights monitors the performance of container workloads that are deployed to managed or self-managed Kubernetes clusters. To alert on what matters, this article describes how to create log-based alerts for the following situations with AKS clusters: +Container insights monitors the performance of container workloads that are deployed to managed or self-managed Kubernetes clusters. To alert on what matters, this article describes how to create log-based alerts for the following situations with Azure Kubernetes Service (AKS) clusters: - When CPU or memory utilization on cluster nodes exceeds a threshold - When CPU or memory utilization on any container within a controller exceeds a threshold as compared to a limit that's set on the corresponding resource-- *NotReady* status node counts-- *Failed*, *Pending*, *Unknown*, *Running*, or *Succeeded* pod-phase counts+- `NotReady` status node counts +- `Failed`, `Pending`, `Unknown`, `Running`, or `Succeeded` pod-phase counts - When free disk space on cluster nodes exceeds a threshold -To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. While metric alerts have lower latency than log alerts, log alerts provide advanced querying and greater sophistication. Log alert queries compare a datetime to the present by using the *now* operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time (UTC) format.) +To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. 
Metric alerts have lower latency than log alerts, but log alerts provide advanced querying and greater sophistication. Log alert queries compare a datetime to the present by using the `now` operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time [UTC] format.) > [!IMPORTANT]-> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you create any alert rules. +> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create alert rules, see the "Alert rules" section in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). -If you're not familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log alerts in Azure Monitor](../alerts/alerts-unified-log.md). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md). +If you aren't familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log alerts in Azure Monitor](../alerts/alerts-unified-log.md). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md). 
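The log alert queries later in this article all anchor on a one-hour lookback window ending at `now()`, with timestamps in UTC. As a quick illustration, the equivalent window in Python:

```python
# One-hour UTC lookback window, matching the pattern
# `let endDateTime = now(); ... endDateTime - 1h` used by the queries below.
from datetime import datetime, timedelta, timezone

end_date_time = datetime.now(timezone.utc)
start_date_time = end_date_time - timedelta(hours=1)

assert end_date_time - start_date_time == timedelta(hours=1)
print(start_date_time.isoformat(), "->", end_date_time.isoformat())
```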
## Log query measurements [Log alerts](../alerts/alerts-unified-log.md) can measure two different things, which can be used to monitor virtual machines in different scenarios: -- [Result count](../alerts/alerts-unified-log.md#result-count): Counts the number of rows returned by the query, and can be used to work with events such as Windows event logs, syslog, application exceptions.-- [Calculation of a value](../alerts/alerts-unified-log.md#calculation-of-a-value): Makes a calculation based on a numeric column, and can be used to include any number of resources. For example, CPU percentage.-### Targeting resources and dimensions +- [Result count](../alerts/alerts-unified-log.md#result-count): Counts the number of rows returned by the query and can be used to work with events such as Windows event logs, Syslog, and application exceptions. +- [Calculation of a value](../alerts/alerts-unified-log.md#calculation-of-a-value): Makes a calculation based on a numeric column and can be used to include any number of resources. An example is CPU percentage. -You can monitor multiple instances' values with one rule using dimensions. For example, you would use dimensions if you want to monitor the CPU usage on multiple instances running your web site or app, and create an alert for CPU usage of over 80%. +### Target resources and dimensions -To create resource-centric alerts at scale for a subscription or resource group, you can **Split by dimensions**. When you want to monitor the same condition on multiple Azure resources, splitting by dimensions splits the alerts into separate alerts by grouping unique combinations using numerical or string columns. Splitting on Azure resource ID column makes the specified resource into the alert target. +You can use one rule to monitor the values of multiple instances by using dimensions.
For example, you would use dimensions if you wanted to monitor the CPU usage on multiple instances running your website or app, and create an alert for CPU usage of over 80%. -You may also decide not to split when you want a condition on multiple resources in the scope. For example, if you want to create an alert if at least five machines in the resource group scope have CPU usage over 80%. +To create resource-centric alerts at scale for a subscription or resource group, you can *split by dimensions*. When you want to monitor the same condition on multiple Azure resources, splitting by dimensions splits the alerts into separate alerts by grouping unique combinations by using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target. +You might also decide not to split when you want a condition on multiple resources in the scope. For example, you might want to create an alert if at least five machines in the resource group scope have CPU usage over 80%. +++You might want to see a list of the alerts by affected computer. You can use a custom workbook that uses a custom [resource graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook. -You might want to see a list of the alerts by affected computer. You can use a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook. ## Create a log query alert rule-[This example of a log query alert](../vm/monitor-virtual-machine-alerts.md#example-log-query-alert) provides a complete walkthrough of creating a log query alert rule. You can use these same processes to create alert rules for AKS clusters using queries similar to the ones in this article.
+[This example of a log query alert](../vm/monitor-virtual-machine-alerts.md#example-log-query-alert) provides a complete walkthrough of creating a log query alert rule. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article. -## Resource utilization +## Resource utilization -**Average CPU utilization as an average of member nodes' CPU utilization every minute (metric measurement)** +Average CPU utilization as an average of member nodes' CPU utilization every minute (metric measurement): ```kusto let endDateTime = now(); KubeNodeInventory | summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize), ClusterName ``` -**Average memory utilization as an average of member nodes' memory utilization every minute (metric measurement)** +Average memory utilization as an average of member nodes' memory utilization every minute (metric measurement): ```kusto let endDateTime = now(); KubeNodeInventory | summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize), ClusterName ``` - >[!IMPORTANT] >The following queries use the placeholder values \<your-cluster-name> and \<your-controller-name> to represent your cluster and controller. Replace them with values specific to your environment when you set up alerts. 
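The metric-measurement queries above reduce raw samples to one average per cluster per one-minute bin. The same rollup in Python, over fabricated sample records, shows what `summarize avg(UsagePercent) by bin(TimeGenerated, trendBinSize), ClusterName` computes:

```python
# Average utilization per (1-minute bin, cluster), mirroring the KQL
# summarize/bin pattern. The sample records are fabricated for the sketch.
from collections import defaultdict
from datetime import datetime

def bin_minute(ts: datetime) -> datetime:
    """Truncate a timestamp to the start of its minute, like KQL bin(ts, 1m)."""
    return ts.replace(second=0, microsecond=0)

records = [  # (TimeGenerated, ClusterName, UsagePercent)
    (datetime(2022, 8, 1, 10, 0, 5), "clusterA", 40.0),
    (datetime(2022, 8, 1, 10, 0, 35), "clusterA", 60.0),
    (datetime(2022, 8, 1, 10, 1, 10), "clusterA", 80.0),
]

totals = defaultdict(lambda: [0.0, 0])
for ts, cluster, usage in records:
    key = (bin_minute(ts), cluster)
    totals[key][0] += usage
    totals[key][1] += 1

aggregated = {key: s / n for key, (s, n) in totals.items()}
print(aggregated)  # the 10:00 bin averages to 50.0; the 10:01 bin is 80.0
```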
-**Average CPU utilization of all containers in a controller as an average of CPU utilization of every container instance in a controller every minute (metric measurement)** +Average CPU utilization of all containers in a controller as an average of CPU utilization of every container instance in a controller every minute (metric measurement): ```kusto let endDateTime = now(); KubePodInventory | summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize) , ContainerName ``` -**Average memory utilization of all containers in a controller as an average of memory utilization of every container instance in a controller every minute (metric measurement)** +Average memory utilization of all containers in a controller as an average of memory utilization of every container instance in a controller every minute (metric measurement): ```kusto let endDateTime = now(); KubePodInventory | summarize AggregatedValue = avg(UsagePercent) by bin(TimeGenerated, trendBinSize) , ContainerName ``` -## Resource availability +## Resource availability -**Nodes and counts that have a status of Ready and NotReady (metric measurement)** +Nodes and counts that have a status of Ready and NotReady (metric measurement): ```kusto let endDateTime = now(); KubeNodeInventory NotReadyCount = todouble(NotReadyCount) / ClusterSnapshotCount | order by ClusterName asc, Computer asc, TimeGenerated desc ```-The following query returns pod phase counts based on all phases: *Failed*, *Pending*, *Unknown*, *Running*, or *Succeeded*. ++The following query returns pod phase counts based on all phases: `Failed`, `Pending`, `Unknown`, `Running`, or `Succeeded`. ```kusto let endDateTime = now(); KubePodInventory ``` >[!NOTE]->To alert on certain pod phases, such as *Pending*, *Failed*, or *Unknown*, modify the last line of the query. 
For example, to alert on *FailedCount* use: <br/>`| summarize AggregatedValue = avg(FailedCount) by bin(TimeGenerated, trendBinSize)` +>To alert on certain pod phases, such as `Pending`, `Failed`, or `Unknown`, modify the last line of the query. For example, to alert on `FailedCount`, use `| summarize AggregatedValue = avg(FailedCount) by bin(TimeGenerated, trendBinSize)`. The following query returns cluster node disks whose used space exceeds 90%. To get the cluster ID, first run the following query and copy the value from the `ClusterId` property: InsightsMetrics | where AggregatedValue >= 90 ``` +Individual container restarts (number of results) alerts when the individual system container restart count exceeds a threshold for the last 10 minutes: --**Individual container restarts (number of results)**<br> -Alerts when the individual system container restart count exceeds a threshold for last 10 minutes. -- ```kusto let _threshold = 10m; let _alertThreshold = 2; KubePodInventory ## Next steps -- View [log query examples](container-insights-log-query.md) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters.-+- View [log query examples](container-insights-log-query.md) to see predefined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters. - To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md) and [View Kubernetes cluster health](./container-insights-overview.md). |
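As a quick illustration of what the pod-phase query in this article counts, here is the same rollup over a handful of fabricated pod records, with a `FailedCount` threshold check (the threshold value is an assumption for the sketch):

```python
# Count pods by phase and test a FailedCount alert condition.
from collections import Counter

pods = [  # (pod name, PodStatus) -- fabricated sample data
    ("web-1", "Running"), ("web-2", "Running"),
    ("job-1", "Succeeded"), ("job-2", "Failed"), ("etl-1", "Pending"),
]

phase_counts = Counter(phase for _, phase in pods)
failed_count = phase_counts.get("Failed", 0)

ALERT_THRESHOLD = 1  # assumed threshold for illustration
print(phase_counts)
print("alert" if failed_count >= ALERT_THRESHOLD else "ok")  # alert
```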
azure-monitor | Container Insights Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md | Title: How to manage the Container insights agent | Microsoft Docs -description: This article describes managing the most common maintenance tasks with the containerized Log Analytics agent used by Container insights. + Title: Manage the Container insights agent | Microsoft Docs +description: This article describes how to manage the most common maintenance tasks with the containerized Log Analytics agent used by Container insights. Last updated 07/21/2020 -# How to manage the Container insights agent --Container insights uses a containerized version of the Log Analytics agent for Linux. After initial deployment, there are routine or optional tasks you may need to perform during its lifecycle. This article details on how to manually upgrade the agent and disable collection of environmental variables from a particular container. +# Manage the Container insights agent +Container insights uses a containerized version of the Log Analytics agent for Linux. After initial deployment, you might need to perform routine or optional tasks during its lifecycle. This article explains how to manually upgrade the agent and disable collection of environmental variables from a particular container. >[!NOTE]->The Container Insights agent name has changed from OMSAgent to Azure Monitor Agent, along with a few other resoruce names. This doc reflects the new name. Please update your commands, alerts and scripts referencing the old name. Read more about the name change in [our blog post](https://techcommunity.microsoft.com/t5/azure-monitor-status-archive/name-update-for-agent-and-associated-resources-in-azure-monitor/ba-p/3576810). +>The Container Insights agent name has changed from OMSAgent to Azure Monitor Agent, along with a few other resource names. This article reflects the new name. 
Update your commands, alerts, and scripts that reference the old name. Read more about the name change in [our blog post](https://techcommunity.microsoft.com/t5/azure-monitor-status-archive/name-update-for-agent-and-associated-resources-in-azure-monitor/ba-p/3576810). > -## How to upgrade the Container insights agent +## Upgrade the Container insights agent -Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Arc enabled Kubernetes. +Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes. -If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod). +If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod). -### Upgrade agent on AKS cluster +### Upgrade the agent on an AKS cluster -The process to upgrade the agent on AKS clusters consists of two straight forward steps. The first step is to disable monitoring with Container insights using Azure CLI. Follow the steps described in the [Disable monitoring](container-insights-optout.md?#azure-cli) article. Using Azure CLI allows us to remove the agent from the nodes in the cluster without impacting the solution and the corresponding data that is stored in the workspace. 
+The process to upgrade the agent on an AKS cluster consists of two steps. The first step is to disable monitoring with Container insights by using the Azure CLI. Follow the steps described in the [Disable monitoring](container-insights-optout.md?#azure-cli) article. By using the Azure CLI, you can remove the agent from the nodes in the cluster without affecting the solution and the corresponding data that's stored in the workspace. >[!NOTE]->While you are performing this maintenance activity, the nodes in the cluster are not forwarding collected data, and performance views will not show data between the time you remove the agent and install the new version. +>While you're performing this maintenance activity, the nodes in the cluster aren't forwarding collected data. Performance views won't show data between the time you removed the agent and installed the new version. > -To install the new version of the agent, follow the steps described in the [enable monitoring using Azure CLI](container-insights-enable-new-cluster.md#enable-using-azure-cli), to complete this process. +The second step is to install the new version of the agent. Follow the steps described in [Enable monitoring by using the Azure CLI](container-insights-enable-new-cluster.md#enable-using-azure-cli) to finish this process. -After you've re-enabled monitoring, it might take about 15 minutes before you can view updated health metrics for the cluster. To verify the agent upgraded successfully, you can either: +After you've reenabled monitoring, it might take about 15 minutes before you can view updated health metrics for the cluster. You have two methods to verify the agent upgraded successfully: -* Run the command: `kubectl get pod <ama-logs-agent-pod-name> -n kube-system -o=jsonpath='{.spec.containers[0].image}'`. In the status returned, note the value under **Image** for Azure Monitor Agent in the *Containers* section of the output. 
-* On the **Nodes** tab, select the cluster node and on the **Properties** pane to the right, note the value under **Agent Image Tag**. +* Run the command `kubectl get pod <ama-logs-agent-pod-name> -n kube-system -o=jsonpath='{.spec.containers[0].image}'`. In the status returned, note the value under **Image** for Azure Monitor Agent in the **Containers** section of the output. +* On the **Nodes** tab, select the cluster node. On the **Properties** pane to the right, note the value under **Agent Image Tag**. The version of the agent shown should match the latest version listed on the [Release history](https://github.com/microsoft/docker-provider/tree/ci_feature_prod) page. -### Upgrade agent on hybrid Kubernetes cluster +### Upgrade the agent on a hybrid Kubernetes cluster -Perform the following steps to upgrade the agent on a Kubernetes cluster running on: +Perform the following steps to upgrade the agent on a Kubernetes cluster that runs on: -* Self-managed Kubernetes clusters hosted on Azure using AKS Engine. -* Self-managed Kubernetes clusters hosted on Azure Stack or on-premises using AKS Engine. +* Self-managed Kubernetes clusters hosted on Azure by using the AKS engine. +* Self-managed Kubernetes clusters hosted on Azure Stack or on-premises by using the AKS engine. If the Log Analytics workspace is in commercial Azure, run the following command: If the Log Analytics workspace is in Azure US Government, run the following comm $ helm upgrade --set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers ``` -## How to disable environment variable collection on a container +## Disable environment variable collection on a container -Container insights collects environmental variables from the containers running in a pod and presents them in the property pane of the selected container in the **Containers** view. 
You can control this behavior by disabling collection for a specific container either during deployment of the Kubernetes cluster, or after by setting the environment variable *AZMON_COLLECT_ENV*. This feature is available from the agent version – ciprod11292018 and higher. +Container insights collects environmental variables from the containers running in a pod and presents them in the property pane of the selected container in the **Containers** view. You can control this behavior by disabling collection for a specific container either during deployment of the Kubernetes cluster or after by setting the environment variable `AZMON_COLLECT_ENV`. This feature is available from the agent version ciprod11292018 and higher. -To disable collection of environmental variables on a new or existing container, set the variable **AZMON_COLLECT_ENV** with a value of **False** in your Kubernetes deployment yaml configuration file. +To disable collection of environmental variables on a new or existing container, set the variable `AZMON_COLLECT_ENV` with a value of `False` in your Kubernetes deployment YAML configuration file. ```yaml - name: AZMON_COLLECT_ENV value: "False" ``` -Run the following command to apply the change to Kubernetes clusters other than Azure Red Hat OpenShift): `kubectl apply -f <path to yaml file>`. To edit ConfigMap and apply this change for Azure Red Hat OpenShift clusters, run the command: +Run the following command to apply the change to Kubernetes clusters other than Azure Red Hat OpenShift: `kubectl apply -f <path to yaml file>`. To edit ConfigMap and apply this change for Azure Red Hat OpenShift clusters, run the following command: ```bash oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging ``` -This opens your default text editor. After setting the variable, save the file in the editor. +This command opens your default text editor. After you set the variable, save the file in the editor.
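To show the shape of the change, here is the `AZMON_COLLECT_ENV` entry being set on a container spec represented as the Python equivalent of the deployment YAML. In practice you edit the YAML directly and apply it with `kubectl`; the container below is a hypothetical example:

```python
# Set AZMON_COLLECT_ENV on a container spec (dict form of the YAML).
def set_collect_env(container: dict, value: str) -> dict:
    env = container.setdefault("env", [])
    for entry in env:
        if entry["name"] == "AZMON_COLLECT_ENV":
            entry["value"] = value   # update an existing entry in place
            return container
    env.append({"name": "AZMON_COLLECT_ENV", "value": value})
    return container

container = {"name": "myapp", "image": "myapp:1.0"}  # hypothetical container
set_collect_env(container, "False")
print(container["env"])  # [{'name': 'AZMON_COLLECT_ENV', 'value': 'False'}]
```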
-To verify the configuration change took effect, select a container in the **Containers** view in Container insights, and in the property panel, expand **Environment Variables**. The section should show only the variable created earlier - **AZMON_COLLECT_ENV=FALSE**. For all other containers, the Environment Variables section should list all the environment variables discovered. +To verify the configuration change took effect, select a container in the **Containers** view in Container insights. In the property pane, expand **Environment Variables**. The section should show only the variable created earlier, which is `AZMON_COLLECT_ENV=FALSE`. For all other containers, the **Environment Variables** section should list all the environment variables discovered. -To re-enable discovery of the environmental variables, apply the same process earlier and change the value from **False** to **True**, and then rerun the `kubectl` command to update the container. +To reenable discovery of the environmental variables, apply the same process you used earlier and change the value from `False` to `True`. Then rerun the `kubectl` command to update the container. ```yaml - name: AZMON_COLLECT_ENV To re-enable discovery of the environmental variables, apply the same process ea ## Next steps -If you experience issues while upgrading the agent, review the [troubleshooting guide](container-insights-troubleshoot.md) for support. +If you experience issues when you upgrade the agent, review the [troubleshooting guide](container-insights-troubleshoot.md) for support. |
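The `AZMON_COLLECT_ENV` setting described above lives under a container's `env` list. As context, a minimal deployment fragment might look like the following sketch; the deployment and container names are hypothetical, and only the `AZMON_COLLECT_ENV` entry comes from the article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                        # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-container          # hypothetical name
          image: sample-image:1.0         # hypothetical image
          env:
            - name: AZMON_COLLECT_ENV     # disables environment variable collection for this container
              value: "False"
```

Apply it with `kubectl apply -f <path to yaml file>`, as described in the article.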
azure-monitor | Container Insights Optout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md | Title: How to Stop Monitoring Your Azure Kubernetes Service cluster | Microsoft Docs + Title: Stop monitoring your Azure Kubernetes Service cluster | Microsoft Docs description: This article describes how you can discontinue monitoring of your Azure AKS cluster with Container insights. Last updated 05/24/2022-# How to stop monitoring your Azure Kubernetes Service (AKS) with Container insights --After you enable monitoring of your AKS cluster, you can stop monitoring the cluster if you decide you no longer want to monitor it. This article shows how to accomplish this using the Azure CLI or with the provided Azure Resource Manager templates. +# Stop monitoring your Azure Kubernetes Service cluster with Container insights +After you enable monitoring of your Azure Kubernetes Service (AKS) cluster, you can stop monitoring the cluster if you decide you no longer want to monitor it. This article shows you how to do this task by using the Azure CLI or the provided Azure Resource Manager templates (ARM templates). ## Azure CLI -Use the [az aks disable-addons](/cli/azure/aks#az-aks-disable-addons) command to disable Container insights. The command removes the agent from the cluster nodes, it does not remove the solution or the data already collected and stored in your Azure Monitor resource. +Use the [az aks disable-addons](/cli/azure/aks#az-aks-disable-addons) command to disable Container insights. The command removes the agent from the cluster nodes. It doesn't remove the solution or the data already collected and stored in your Azure Monitor resource. ```azurecli az aks disable-addons -a monitoring -n MyExistingManagedCluster -g MyExistingManagedClusterRG ``` -To re-enable monitoring for your cluster, see [Enable monitoring using Azure CLI](container-insights-enable-new-cluster.md#enable-using-azure-cli). 
+To reenable monitoring for your cluster, see [Enable monitoring by using the Azure CLI](container-insights-enable-new-cluster.md#enable-using-azure-cli). ## Azure Resource Manager template -Provided are two Azure Resource Manager template to support removing the solution resources consistently and repeatedly in your resource group. One is a JSON template specifying the configuration to stop monitoring and the other contains parameter values that you configure to specify the AKS cluster resource ID and resource group that the cluster is deployed in. +Two ARM templates are provided to support removing the solution resources consistently and repeatedly in your resource group. One is a JSON template that specifies the configuration to stop monitoring. The other template contains parameter values that you configure to specify the AKS cluster resource ID and resource group in which the cluster is deployed. If you're unfamiliar with the concept of deploying resources by using a template, see:-* [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md) -* [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md) ++* [Deploy resources with ARM templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md) +* [Deploy resources with ARM templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md) >[!NOTE]->The template needs to be deployed in the same resource group of the cluster. +>The template must be deployed in the same resource group as the cluster.
If you omit any other properties or add-ons when you use this template, they might be removed from the cluster. Examples are `enableRBAC` for Kubernetes RBAC policies implemented in your cluster, or `aksResourceTagValues`, if tags are specified for the AKS cluster. > -If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.27 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli). +If you choose to use the Azure CLI, you must install and use the CLI locally. You must be running the Azure CLI version 2.0.27 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli). -### Create template +### Create a template 1. Copy and paste the following JSON syntax into your file: If you choose to use the Azure CLI, you first need to install and use the CLI lo } ``` -2. Save this file as **OptOutTemplate.json** to a local folder. +1. Save this file as **OptOutTemplate.json** to a local folder. -3. Paste the following JSON syntax into your file: +1. Paste the following JSON syntax into your file: ```json { If you choose to use the Azure CLI, you first need to install and use the CLI lo } ``` -4. Edit the values for **aksResourceId** and **aksResourceLocation** by using the values of the AKS cluster, which you can find on the **Properties** page for the selected cluster. +1. Edit the values for **aksResourceId** and **aksResourceLocation** by using the values of the AKS cluster, which you can find on the **Properties** page for the selected cluster. 
- ![Container properties page](media/container-insights-optout/container-properties-page.png) + ![Screenshot that shows the Container properties page.](media/container-insights-optout/container-properties-page.png) - While you are on the **Properties** page, also copy the **Workspace Resource ID**. This value is required if you decide you want to delete the Log Analytics workspace later. Deleting the Log Analytics workspace is not performed as part of this process. + While you're on the **Properties** page, also copy the **Workspace Resource ID**. This value is required if you decide you want to delete the Log Analytics workspace later. Deleting the Log Analytics workspace isn't performed as part of this process. Edit the values for **aksResourceTagValues** to match the existing tag values specified for the AKS cluster. -5. Save this file as **OptOutParam.json** to a local folder. +1. Save this file as **OptOutParam.json** to a local folder. -6. You are ready to deploy this template. +Now you're ready to deploy this template. -### Remove the solution using Azure CLI +### Remove the solution by using the Azure CLI -Execute the following command with Azure CLI on Linux to remove the solution and clean up the configuration on your AKS cluster. +To remove the solution and clean up the configuration on your AKS cluster, run the following command with the Azure CLI on Linux: ```azurecli az login az account set --subscription "Subscription Name" az deployment group create --resource-group <ResourceGroupName> --template-file ./OptOutTemplate.json --parameters @./OptOutParam.json ``` -The configuration change can take a few minutes to complete. When it's completed, a message similar to the following that includes the result is returned: +The configuration change can take a few minutes to finish. 
The result is returned in a message similar to the following example: ```output ProvisioningState : Succeeded ``` -### Remove the solution using PowerShell +### Remove the solution by using PowerShell [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] -Execute the following PowerShell commands in the folder containing the template to remove the solution and clean up the configuration from your AKS cluster. +To remove the solution and clean up the configuration from your AKS cluster, run the following PowerShell commands in the folder that contains the template: ```powershell Connect-AzAccount Select-AzSubscription -SubscriptionName <yourSubscriptionName> New-AzResourceGroupDeployment -Name opt-out -ResourceGroupName <ResourceGroupName> -TemplateFile .\OptOutTemplate.json -TemplateParameterFile .\OptOutParam.json ``` -The configuration change can take a few minutes to complete. When it's completed, a message similar to the following that includes the result is returned: +The configuration change can take a few minutes to finish. The result is returned in a message similar to the following example: ```output ProvisioningState : Succeeded ``` - ## Next steps -If the workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace with the Azure portal](../logs/delete-workspace.md). Don't forget about the **Workspace Resource ID** copied earlier in step 4, you're going to need that. +If the workspace was created only to support monitoring the cluster and it's no longer needed, you must delete it manually. If you aren't familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace with the Azure portal](../logs/delete-workspace.md). Don't forget about the **Workspace Resource ID** copied earlier in step 4. You'll need that information. |
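The Next steps above point to deleting the workspace through the portal. As a sketch of an alternative, the Azure CLI can also delete a Log Analytics workspace; the resource names here are placeholders:

```azurecli
az monitor log-analytics workspace delete --resource-group <ResourceGroupName> --workspace-name <WorkspaceName>
```

Use the **Workspace Resource ID** copied earlier to confirm you're deleting the correct workspace before you run the command.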
azure-monitor | Container Insights Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md | Title: How to Troubleshoot Container insights | Microsoft Docs + Title: Troubleshoot Container insights | Microsoft Docs description: This article describes how you can troubleshoot and resolve issues with Container insights. Last updated 05/24/2022-# Troubleshooting Container insights +# Troubleshoot Container insights -When you configure monitoring of your Azure Kubernetes Service (AKS) cluster with Container insights, you may encounter an issue preventing data collection or reporting status. This article details some common issues and troubleshooting steps. +When you configure monitoring of your Azure Kubernetes Service (AKS) cluster with Container insights, you might encounter an issue that prevents data collection or reporting status. This article discusses some common issues and troubleshooting steps. ## Known error messages -The table below summarizes known errors you may encounter while using Container insights. +The following table summarizes known errors you might encounter when you use Container insights. | Error messages | Action | | - | |-| Error Message `No data for selected filters` | It may take some time to establish monitoring data flow for newly created clusters. 
Allow at least 10 to 15 minutes for data to appear for your cluster.<br><br>If data still doesn't show up, check if the configured log analytics workspace is configured for *disableLocalAuth = true*, if yes, update back to *disableLocalAuth = false*.<br><br>`az resource show --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]"`<br><br>`az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=False` | -| Error Message `Error retrieving data` | While Azure Kubernetes Service cluster is setting up for health and performance monitoring, a connection is established between the cluster and Azure Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error may occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, you'll need to re-enable monitoring of your cluster with Container insights and either specify an existing workspace or create a new one. To re-enable, you'll need to [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. | -| `Error retrieving data` after adding Container insights through az aks cli | When enable monitoring using `az aks cli`, Container insights may not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left-hand side. 
To resolve this issue, you'll need to redeploy the solution by following the instructions on [how to deploy Container insights](container-insights-onboard.md) | +| Error message "No data for selected filters" | It might take some time to establish monitoring data flow for newly created clusters. Allow at least 10 to 15 minutes for data to appear for your cluster.<br><br>If data still doesn't show up, check if the Log Analytics workspace is configured for `disableLocalAuth = true`. If yes, update back to `disableLocalAuth = false`.<br><br>`az resource show --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]"`<br><br>`az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=False` | +| Error message "Error retrieving data" | While an AKS cluster is setting up for health and performance monitoring, a connection is established between the cluster and a Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error might occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, reenable monitoring of your cluster with Container insights. Then specify an existing workspace or create a new one. To reenable, [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. | +| "Error retrieving data" after adding Container insights through `az aks cli` | When you enable monitoring by using `az aks cli`, Container insights might not be properly deployed. Check whether the solution is deployed. 
To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Solutions** from the pane on the left side. To resolve this issue, redeploy the solution. Follow the instructions in [Enable Container insights](container-insights-onboard.md). | To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_dev/scripts/troubleshoot). ## Authorization error during onboarding or update operation -While enabling Container insights or updating a cluster to support collecting metrics, you may receive an error resembling the following - *The client <userΓÇÖs Identity>' with object id '<userΓÇÖs objectId>' does not have authorization to perform action 'Microsoft.Authorization/roleAssignments/write' over scope* +When you enable Container insights or update a cluster to support collecting metrics, you might receive an error like "The client `<user's Identity>` with object id `<user's objectId>` does not have authorization to perform action `Microsoft.Authorization/roleAssignments/write` over scope." -During the onboarding or update process, granting the **Monitoring Metrics Publisher** role assignment is attempted on the cluster resource. The user initiating the process to enable Container insights or the update to support the collection of metrics must have access to the **Microsoft.Authorization/roleAssignments/write** permission on the AKS cluster resource scope. Only members of the **Owner** and **User Access Administrator** built-in roles are granted access to this permission. If your security policies require assigning granular level permissions, we recommend you view [custom roles](../../role-based-access-control/custom-roles.md) and assign it to the users who require it. +During the onboarding or update process, granting the **Monitoring Metrics Publisher** role assignment is attempted on the cluster resource. 
The user initiating the process to enable Container insights or the update to support the collection of metrics must have access to the **Microsoft.Authorization/roleAssignments/write** permission on the AKS cluster resource scope. Only members of the Owner and User Access Administrator built-in roles are granted access to this permission. If your security policies require you to assign granular-level permissions, see [Azure custom roles](../../role-based-access-control/custom-roles.md) and assign permission to the users who require it. -You can also manually grant this role from the Azure portal by performing the following steps: --1. Assign the **Publisher** role to the **Monitoring Metrics** scope. -- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). +You can also manually grant this role from the Azure portal: Assign the **Monitoring Metrics Publisher** role to the AKS cluster resource scope. For detailed steps, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md). ## Container insights is enabled but not reporting any information-Use the following steps to diagnose the problem if you can't view status information or no results are returned from a log query: +To diagnose the problem if you can't view status information or no results are returned from a log query: -1. Check the status of the agent by running the command: +1. Check the status of the agent by running the following command: `kubectl get ds ama-logs --namespace=kube-system` Use the following steps to diagnose the problem if you can't view status inform NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE ama-logs 2 2 2 2 2 beta.kubernetes.io/os=linux 1d ```-2. If you have Windows Server nodes, then check the status of the agent by running the command: ++1.
If you have Windows Server nodes, check the status of the agent by running the following command: `kubectl get ds omsagent-win --namespace=kube-system` Use the following steps to diagnose the problem if you can't view status inform NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE ama-logs-windows 2 2 2 2 2 beta.kubernetes.io/os=windows 1d ```-3. Check the deployment status with agent version *06072018* or later using the command: ++1. Check the deployment status with agent version **06072018** or later by using the following command: `kubectl get deployment ama-logs-rs -n=kube-system` Use the following steps to diagnose the problem if you can't view status inform ama-logs 1 1 1 1 3h ``` -4. Check the status of the pod to verify that it's running using the command: `kubectl get pods --namespace=kube-system` +1. Check the status of the pod to verify that it's running by using the command `kubectl get pods --namespace=kube-system`. - The output should resemble the following example with a status of *Running* for the omsagent: + The output should resemble the following example with a status of `Running` for the omsagent: ``` User@aksuser:~$ kubectl get pods --namespace=kube-system Use the following steps to diagnose the problem if you can't view status inform ama-logs-windows-6drwq 1/1 Running 0 1d ``` -## Container insights agent ReplicaSet Pods aren't scheduled on non-Azure Kubernetes cluster +## Container insights agent ReplicaSet Pods aren't scheduled on a non-AKS cluster -Container insights agent ReplicaSet Pods has a dependency on the following node selectors on the worker (or agent) nodes for the scheduling: +Container insights agent ReplicaSet Pods have a dependency on the following node selectors on the worker (or agent) nodes for the scheduling: ``` nodeSelector: nodeSelector: kubernetes.io/role: agent ``` -If your worker nodes donΓÇÖt have node labels attached, then agent ReplicaSet Pods will not get scheduled. 
Refer to [Kubernetes assign label selectors](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) for instructions on how to attach the label. +If your worker nodes don't have node labels attached, agent ReplicaSet Pods won't get scheduled. For instructions on how to attach the label, see [Kubernetes assign label selectors](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/). ## Performance charts don't show CPU or memory of nodes and containers on a non-Azure cluster Container insights agent pods use the cAdvisor endpoint on the node agent to gather the performance metrics. Verify the containerized agent on the node is configured to allow `cAdvisor port: 10255` to be opened on all nodes in the cluster to collect performance metrics. -## Non-Azure Kubernetes cluster aren't showing in Container insights +## Non-AKS clusters aren't showing in Container insights -To view the non-Azure Kubernetes cluster in Container insights, Read access is required on the Log Analytics workspace supporting this Insight and on the Container Insights solution resource **ContainerInsights (*workspace*)**. +To view the non-AKS cluster in Container insights, read access is required on the Log Analytics workspace that supports this insight and on the Container insights solution resource **ContainerInsights (*workspace*)**. ## Metrics aren't being collected 1. Verify that the cluster is in a [supported region for custom metrics](../essentials/metrics-custom-overview.md#supported-regions). -2. Verify that the **Monitoring Metrics Publisher** role assignment exists using the following CLI command: +1.
Verify that the **Monitoring Metrics Publisher** role assignment exists by using the following CLI command: ``` azurecli az role assignment list --assignee "SP/UserassignedMSI for Azure Monitor Agent" --scope "/subscriptions/<subid>/resourcegroups/<RG>/providers/Microsoft.ContainerService/managedClusters/<clustername>" --role "Monitoring Metrics Publisher" ```- For clusters with MSI, the user assigned client ID for Azure Monitor Agent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current msi client ID. + For clusters with MSI, the user-assigned client ID for Azure Monitor Agent changes every time monitoring is enabled and disabled, so the role assignment should exist on the current MSI client ID. -3. For clusters with Azure Active Directory pod identity enabled and using MSI: +1. For clusters with Azure Active Directory pod identity enabled and using MSI: - - Verify the required label **kubernetes.azure.com/managedby: aks** is present on the Azure Monitor Agent pods using the following command: + - Verify that the required label **kubernetes.azure.com/managedby: aks** is present on the Azure Monitor Agent pods by using the following command: `kubectl get pods --show-labels -n kube-system | grep ama-logs` - - Verify that exceptions are enabled when pod identity is enabled using one of the supported methods at https://github.com/Azure/aad-pod-identity#1-deploy-aad-pod-identity. + - Verify that exceptions are enabled when pod identity is enabled by using one of the supported methods at https://github.com/Azure/aad-pod-identity#1-deploy-aad-pod-identity. 
Run the following command to verify: `kubectl get AzurePodIdentityException -A -o yaml` - You should receive output similar to the following: + You should receive output similar to the following example: ``` apiVersion: "aadpodidentity.k8s.io/v1" To view the non-Azure Kubernetes cluster in Container insights, Read access is r kubernetes.azure.com/managedby: aks ``` -## Installation of Azure Monitor Containers Extension fail with an error containing ΓÇ£manifests contain a resource that already existsΓÇ¥ on Azure Arc Enabled Kubernetes cluster -The error _manifests contain a resource that already exists_ indicates that resources of the Container Insights agent already exist on the Azure Arc Enabled Kubernetes cluster. This indicates that the container insights agent is already installed, either through azuremonitor-containers HELM chart or Monitoring Addon if it's AKS Cluster that's connected Azure Arc. The solution to this issue is to clean up the existing resources of container insights agent if it exists. Then enable Azure Monitor Containers Extension. +## Installation of Azure Monitor Containers Extension fails on an Azure Arc-enabled Kubernetes cluster +The error "manifests contain a resource that already exists" indicates that resources of the Container insights agent already exist on the Azure Arc-enabled Kubernetes cluster. This error indicates that the Container insights agent is already installed. It's installed either through an azuremonitor-containers Helm chart or the Monitoring Add-on if it's an AKS cluster that's connected via Azure Arc. ++The solution to this issue is to clean up the existing resources of the Container insights agent if it exists. Then enable the Azure Monitor Containers Extension. -### For non-AKS clusters -1. Against the K8s cluster that's connected to Azure Arc, run below command to verify whether the azmon-containers-release-1 helm chart release exists or not: +### For non-AKS clusters +1. 
Against the K8s cluster that's connected to Azure Arc, run the following command to verify whether the `azmon-containers-release-1` Helm chart release exists or not: `helm list -A` -2. If the output of the above command indicates that azmon-containers-release-1 exists, delete the helm chart release: +1. If the output of the preceding command indicates that the `azmon-containers-release-1` exists, delete the Helm chart release: `helm del azmon-containers-release-1` ### For AKS clusters-1. Run the following commands and look for Azure Monitor Agent addon profile to verify whether the AKS monitoring addon is enabled: +1. Run the following commands and look for the Azure Monitor Agent add-on profile to verify whether the AKS Monitoring Add-on is enabled: ``` az account set -s <clusterSubscriptionId> az aks show -g <clusterResourceGroup> -n <clusterName> ``` -2. If the output includes an Azure Monitor Agent addon profile config with a log analytics workspace resource ID, this indicates that AKS Monitoring addon is enabled and needs to be disabled: +1. If the output includes an Azure Monitor Agent add-on profile config with a Log Analytics workspace resource ID, this information indicates that the AKS Monitoring Add-on is enabled and must be disabled: `az aks disable-addons -a monitoring -g <clusterResourceGroup> -n <clusterName>` -If above steps didnΓÇÖt resolve the installation of Azure Monitor Containers Extension issues, create a ticket to Microsoft for further investigation. -+If the preceding steps didn't resolve the installation of Azure Monitor Containers Extension issues, create a support ticket to send to Microsoft for further investigation. ## Next steps -With monitoring enabled to capture health metrics for both the AKS cluster nodes and pods, these health metrics are available in the Azure portal. To learn how to use Container insights, see [View Azure Kubernetes Service health](container-insights-analyze.md). 
+When monitoring is enabled to capture health metrics for the AKS cluster nodes and pods, these health metrics are available in the Azure portal. To learn how to use Container insights, see [View Azure Kubernetes Service health](container-insights-analyze.md). |
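The agent-status checks walked through in the troubleshooting steps above can be run back to back as a quick health pass. This is a sketch that reuses the same commands and resource names from the article; adjust them if your cluster differs:

```
kubectl get ds ama-logs --namespace=kube-system
kubectl get deployment ama-logs-rs -n=kube-system
kubectl get pods --namespace=kube-system
```

The DaemonSet, deployment, and pods should all report ready replicas or a `Running` status before you expect data to appear in the workspace.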
azure-monitor | Analyze Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md | find where TimeGenerated between(startofday(ago(1d))..startofday(now())) project > [!TIP] > For workspaces with large data volumes, queries such as the ones shown in this section, which query large volumes of raw data, might need to be restricted to a single day. To track trends over time, consider setting up a [Power BI report](./log-powerbi.md) and using [incremental refresh](./log-powerbi.md#collect-data-with-power-bi-dataflows) to collect data volumes per resource once a day. +## Querying for data volumes excluding known free data types +The following query returns the monthly data volume in GB, excluding all data types that are expected to be free of data ingestion charges: ++```kusto +let freeTables = dynamic([ +"AppAvailabilityResults","AppSystemEvents","ApplicationInsights","AzureActivity","AzureNetworkAnalyticsIPDetails_CL", +"AzureNetworkAnalytics_CL","AzureTrafficAnalyticsInsights_CL","ComputerGroup","DefenderIoTRawEvent","Heartbeat", +"MAApplication","MAApplicationHealth","MAApplicationHealthIssues","MAApplicationInstance","MAApplicationInstanceReadiness", +"MAApplicationReadiness","MADeploymentPlan","MADevice","MADeviceNotEnrolled","MADeviceReadiness","MADriverInstanceReadiness", +"MADriverReadiness","MAProposedPilotDevices","MAWindowsBuildInfo","MAWindowsCurrencyAssessment", +"MAWindowsCurrencyAssessmentDailyCounts","MAWindowsDeploymentStatus","NTAIPDetails_CL","NTANetAnalytics_CL", +"OfficeActivity","Operation","SecurityAlert","SecurityIncident","UCClient","UCClientReadinessStatus", +"UCClientUpdateStatus","UCDOAggregatedStatus","UCDOStatus","UCDeviceAlert","UCServiceUpdateStatus","UCUpdateAlert", +"Usage","WUDOAggregatedStatus","WUDOStatus","WaaSDeploymentStatus","WaaSInsiderStatus","WaaSUpdateStatus"]); +Usage | where DataType !in (freeTables) | where TimeGenerated > ago(30d) | summarize
MonthlyGB=sum(Quantity)/1000 +``` + ## Querying for common data types If you find that you have excessive billable data for a particular data type, you might need to perform a query to analyze data in that table. The following queries provide samples for some common data types: |
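The free-table query above returns a single monthly total. A per-table breakdown is a small variation on it; this sketch reuses the `freeTables` list from the query above, abbreviated here to a few names for brevity:

```kusto
let freeTables = dynamic(["AzureActivity","Heartbeat","Operation","Usage"]); // abbreviated; use the full list from the query above
Usage
| where TimeGenerated > ago(30d)
| where DataType !in (freeTables)
| summarize MonthlyGB = sum(Quantity) / 1000 by DataType
| order by MonthlyGB desc
```

Sorting by `MonthlyGB` surfaces the tables that contribute most to billable ingestion, which is usually the next question after seeing the total.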
azure-monitor | Daily Cap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md | To receive an alert when the daily cap is reached, create a [log alert rule](../ | **Condition** | | | Signal type | Log | | Signal name | Custom log search |-| Query | `_LogOperation | where Operation =~ "Data collection stopped" | where Detail contains "OverQuota"` | +| Query | `_LogOperation | where Category =~ "Ingestion" | where Detail contains "OverQuota"` | | Measurement | Measure: *Table rows*<br>Aggregation type: Count<br>Aggregation granularity: 5 minutes | | Alert Logic | Operator: Greater than<br>Threshold value: 0<br>Frequency of evaluation: 5 minutes | | Actions | Select or add an [action group](../alerts/action-groups.md) to notify you when the threshold is exceeded. | |
azure-monitor | Data Retention Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md | For example: az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name Syslog --retention-time -1 --total-retention-time -1 ``` +# [PowerShell](#tab/PowerShell-1) ++Use the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable?view=azps-9.1.0) cmdlet to set the retention and archive duration for a table. This example sets the table's interactive retention to 30 days, and the total retention to two years, which means that the archive duration is 23 months: ++```powershell +Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName AzureMetrics -RetentionInDays 30 -TotalRetentionInDays 730 +``` ++To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [Update-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/Update-AzOperationalInsightsTable?view=azps-9.1.0) cmdlet with the `-RetentionInDays` and `-TotalRetentionInDays` parameters set to `-1`. ++For example: ++```powershell +Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -TableName Syslog -RetentionInDays -1 -TotalRetentionInDays -1 +``` + ## Get retention and archive policy by table For example: az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace --name SecurityEvent ``` +# [PowerShell](#tab/PowerShell-2) ++To get the retention policy of a particular table, run the [Get-AzOperationalInsightsTable](/powershell/module/az.operationalinsights/get-azoperationalinsightstable?view=azps-9.1.0) cmdlet. 
++For example: ++```powershell +Get-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName ContosoWorkspace -tableName SecurityEvent +``` + ## Purge retained data |
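The relationship between the three durations in the cmdlet examples above is simple arithmetic: the archive period is the total retention minus the interactive retention. A quick sketch to sanity-check values before running `Update-AzOperationalInsightsTable`:

```python
def archive_days(total_retention_days, interactive_retention_days):
    """Archive duration = total retention - interactive retention."""
    if interactive_retention_days > total_retention_days:
        raise ValueError("interactive retention cannot exceed total retention")
    return total_retention_days - interactive_retention_days

# The cmdlet example: 30 days interactive, 730 days (two years) total
print(archive_days(730, 30))               # 700
print(round(archive_days(730, 30) / 30))   # 23 -- roughly 23 months of archive
```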
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam You need to have *write* permissions on the cluster resource. -When deleting a cluster, you're losing access to all data in cluster, which was ingested from workspaces that were linked to it. This operation isn't reversible. If you delete your cluster while workspaces are linked, Workspaces get automatically unlinked from the cluster before the cluster delete, and new data sent to workspaces gets ingested to Log Analytics store instead. If the retention of data in workspaces older than the period it was linked to the cluster, you can query workspace for the time range before the link to cluster and after the unlink, and the service performs cross-cluster queries seamlessly. +When you delete a cluster, you lose access to all data in the cluster that was ingested from the workspaces that were linked to it. This operation isn't reversible. +The cluster's billing stops when it's deleted, regardless of the 30-day commitment tier. ++If you delete your cluster while workspaces are linked, the workspaces are automatically unlinked from the cluster before the cluster is deleted, and new data sent to the workspaces is ingested to the Log Analytics store instead. If the data retention in the workspaces is longer than the period they were linked to the cluster, you can query the workspaces for the time ranges before the link and after the unlink, and the service performs cross-cluster queries seamlessly. > [!NOTE] > - There is a limit of seven clusters per subscription, five active, plus two deleted in past 14 days. |
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | A [DCE](../essentials/data-collection-endpoint-overview.md) is required to accep 1. On the **Custom deployment** screen, specify a **Subscription** and **Resource group** to store the DCR and then provide values like a **Name** for the DCE. The **Location** should be the same location as the workspace. The **Region** will already be populated and will be used for the location of the DCE. - :::image type="content" source="media/tutorial-workspace-transformations-api/custom-deployment-values.png" lightbox="media/tutorial-workspace-transformations-api/custom-deployment-values.png" alt-text="Screenshot to edit custom deployment values."::: + :::image type="content" source="media/tutorial-logs-ingestion-api/data-collection-endpoint-custom-deploy.png" lightbox="media/tutorial-logs-ingestion-api/data-collection-endpoint-custom-deploy.png" alt-text="Screenshot to edit custom deployment values."::: 1. Select **Review + create** and then select **Create** after you review the details. |
azure-monitor | Usage Estimated Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/usage-estimated-costs.md | Use the following basic guidance for common resources: - **Virtual machines**: With typical monitoring enabled, a virtual machine generates from 1 GB to 3 GB of data per month. This range is highly dependent on the configuration of your agents. - **Application Insights**: For different methods to estimate data from your applications, see the following section.-- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster).+- **Container insights**: For guidance on estimating data for your Azure Kubernetes Service (AKS) cluster, see [Estimating costs to monitor your AKS cluster](containers/container-insights-cost.md#estimate-costs-to-monitor-your-aks-cluster). The [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) includes data volume estimation calculators for these three cases. |
azure-netapp-files | Create Active Directory Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md | Several features of Azure NetApp Files require that you have an Active Directory * Administrators * Account Operators * Azure AD DC Administrators _(Azure AD DS Only)_-- Alternatively, an AD domain user account with `msDS-SupportedEncryptionTypes` write permission on the AD connection admin account can also be used to set the Kerberos encryption type property on the AD connection admin account. + * Alternatively, an AD domain user account with `msDS-SupportedEncryptionTypes` write permission on the AD connection admin account can also be used to set the Kerberos encryption type property on the AD connection admin account. >[!NOTE]- >It's not recommended or required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the AD admin account. + >It's _neither_ recommended nor required to add the Azure NetApp Files AD admin account to the AD domain groups listed above. Nor is it recommended or required to grant `msDS-SupportedEncryptionTypes` write permission to the AD admin account. If you set both AES-128 and AES-256 Kerberos encryption on the admin account of the AD connection, the highest level of encryption supported by your AD DS will be used. |
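`msDS-SupportedEncryptionTypes` is a bit-flag attribute, so "the highest level of encryption supported by your AD DS will be used" amounts to taking the strongest flag present on both sides. A sketch of that selection, using the standard AD encryption-type flag values:

```python
# Standard bit-flag values of the msDS-SupportedEncryptionTypes attribute
ENC_TYPES = {
    0x1:  "DES-CBC-CRC",
    0x2:  "DES-CBC-MD5",
    0x4:  "RC4-HMAC",
    0x8:  "AES128-CTS-HMAC-SHA1-96",
    0x10: "AES256-CTS-HMAC-SHA1-96",
}

def highest_common_enc_type(account_mask, dc_mask):
    """Return the strongest encryption type enabled on both the admin
    account and the AD DS side, or None if there is no overlap."""
    common = account_mask & dc_mask
    for flag in sorted(ENC_TYPES, reverse=True):   # strongest flag first
        if common & flag:
            return ENC_TYPES[flag]
    return None

# Account has both AES flags set (0x8 | 0x10 = 0x18); AD DS supports AES-256
print(highest_common_enc_type(0x18, 0x1C))  # AES256-CTS-HMAC-SHA1-96
```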
azure-portal | How To Create Azure Support Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md | Title: How to create an Azure support request description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. Previously updated : 09/01/2022 Last updated : 11/18/2022 # Create an Azure support request We'll walk you through some steps to gather information about your problem and h ### Problem description -The first step of the support request process is to select an issue type. You'll then be prompted for more information, which can vary depending on what type of issue you selected. If you select **Technical**, you'll need to specify the service that your issue relates to. Depending on the service, you'll see additional options for **Problem type** and **Problem subtype**. +The first step of the support request process is to select an issue type. You'll then be prompted for more information, which can vary depending on what type of issue you selected. If you select **Technical**, you'll need to specify the service that your issue relates to. Depending on the service, you'll see additional options for **Problem type** and **Problem subtype**. Be sure to select the service (and problem type/subtype if applicable) that is most related to your issue. Selecting an unrelated service may result in delays in addressing your support request. > [!IMPORTANT] > In most cases, you'll need to specify a subscription. Be sure to choose the subscription where you are experiencing the problem. The support engineer assigned to your case will only be able to access resources in the subscription you specify. The access requirement serves as a point of confirmation that the support engineer is sharing information to the right audience, which is a key factor for ensuring the security and privacy of customer data. 
For details on how Azure treats customer data, see [Data Privacy in the Trusted Cloud](https://azure.microsoft.com/overview/trusted-cloud/privacy/). If you're still unable to resolve the issue, continue creating your support requ Next, we collect additional details about the problem. Providing thorough and detailed information in this step helps us route your support request to the right engineer. -1. Complete the **problem details** so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can optionally upload one file (or a compressed file such as .zip that contains multiple files), such as a log file or [browser trace](../capture-browser-trace.md). For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines). +1. Complete the **Problem details** so that we have more information about your issue. If possible, tell us when the problem started and any steps to reproduce it. You can optionally upload one file (or a compressed file such as .zip that contains multiple files), such as a log file or [browser trace](../capture-browser-trace.md). For more information on file uploads, see [File upload guidelines](how-to-manage-azure-support-request.md#file-upload-guidelines). 1. In the **Advanced diagnostic information** section, select **Yes** or **No**. Selecting **Yes** allows Azure support to gather [advanced diagnostic information](https://azure.microsoft.com/support/legal/support-diagnostic-information-collection/) from your Azure resources. If you prefer not to share this information, select **No**. See the [Advanced diagnostic information logs](#advanced-diagnostic-information-logs) section for more details about the types of files we might collect. - In some cases, there will be additional options to choose from. 
For example, for certain types of Virtual Machine problem types, you can choose whether to [allow access to a virtual machine's memory](#memory-dump-collection). + In some cases, there will be additional options to choose from. For example, for certain types of Virtual Machine problem types, you can choose whether to [allow access to a virtual machine's memory](#memory-dump-collection). -1. In the **Support method** section, select the severity of impact. The maximum severity level depends on your [support plan](https://azure.microsoft.com/support/plans). +1. In the **Support method** section, select the **Severity** level, depending on the business impact. The [maximum available severity level and time to respond](https://azure.microsoft.com/support/plans/response/) depends on your [support plan](https://azure.microsoft.com/support/plans). 1. Provide your preferred contact method, your availability, and your preferred support language. -1. Next, complete the **Contact info** section so we know how to contact you. +1. Complete the **Contact info** section so that we know how to reach you. Select **Next** when you've completed all of the necessary information. ### Review + create -Before you create your request, review all of the details that you'll send to support. You can select **Previous** to return to any tab if you need to make changes. When you're satisfied the support request is complete, select **Create**. +Before you create your request, review all of the details that you'll send to support. You can select **Previous** to return to any tab if you want to make changes. When you're satisfied that the support request is complete, select **Create**. A support engineer will contact you using the method you indicated. For information about initial response times, see [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/). 
Follow these links to learn more: - Engage with us on [Twitter](https://twitter.com/azuresupport) - Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure) - Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)-- [Azure Quotas overview](../../quotas/quotas-overview.md)+- [Azure Quotas overview](../../quotas/quotas-overview.md) |
azure-resource-manager | Bicep Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md | Bicep supports a configuration file named `bicepconfig.json`. Within this file, To customize values, create this file in the directory where you store Bicep files. You can add `bicepconfig.json` files in multiple directories. The configuration file closest to the Bicep file in the directory hierarchy is used. -To create a `bicepconfig.json` file in Visual Studio Code, open the Command Palette (**[CTRL/CMD]**+**[SHIRT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. For more information, see [Visual Studio Code](./visual-studio-code.md#create-bicep-configuration-file). +To create a `bicepconfig.json` file in Visual Studio Code, open the Command Palette (**[CTRL/CMD]**+**[SHIFT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. For more information, see [Visual Studio Code](./visual-studio-code.md#create-bicep-configuration-file). :::image type="content" source="./media/bicep-config/vscode-create-bicep-configuration-file.png" alt-text="Screenshot of how to create Bicep configuration file in VSCode."::: |
azure-resource-manager | Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md | The [Bicep configuration file (bicepconfig.json)](./bicep-config.md) can be used To create a Bicep configuration file: 1. Open Visual Studio Code.-1. From the **View** menu, select **Command Palette** (or press **[CTRL/CMD]**+**[SHIRT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. +1. From the **View** menu, select **Command Palette** (or press **[CTRL/CMD]**+**[SHIFT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. 1. Select the file directory where you want to place the file. 1. Save the configuration file when you're done. |
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | This section lists the most common service limits you might encounter as you use [!INCLUDE [sentinel-service-limits](../../../includes/sentinel-limits-machine-learning.md)] -## Multi workspace limits +### Multi workspace limits [!INCLUDE [sentinel-service-limits](../../../includes/sentinel-limits-multi-workspace.md)] |
bastion | Bastion Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md | description: Learn about frequently asked questions for Azure Bastion. Previously updated : 10/25/2022 Last updated : 11/21/2022 # Azure Bastion FAQ-### <a name="pricingpage"></a>What is the pricing? +### <a name="pricingpage"></a>How does pricing work? -For more information, see the [pricing page](https://aka.ms/BastionHostPricing). +Azure Bastion pricing is a combination of hourly pricing based on SKU and instances (scale units), plus data transfer rates. Hourly pricing starts from the moment Bastion is deployed, regardless of outbound data usage. For the latest pricing information, see the [Azure Bastion pricing](https://azure.microsoft.com/pricing/details/azure-bastion) page. ### <a name="ipv6"></a>Is IPv6 supported? |
bastion | Bastion Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-overview.md | For more information, see the [Configuration settings](configuration-settings.md ## <a name="pricing"></a>Pricing -Azure Bastion pricing involves a combination of hourly pricing based on SKU, scale units, and data transfer rates. Pricing information can be found on the [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion) page. +Azure Bastion pricing is a combination of hourly pricing based on SKU and instances (scale units), plus data transfer rates. Hourly pricing starts from the moment Bastion is deployed, regardless of outbound data usage. For the latest pricing information, see the [Azure Bastion pricing](https://azure.microsoft.com/pricing/details/azure-bastion) page. ## <a name="new"></a>What's new? |
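The pricing model described above — an hourly charge per instance that accrues from the moment of deployment, plus per-GB outbound data transfer — can be sketched as a back-of-the-envelope estimator. All rates below are placeholders, not actual Azure Bastion prices; check the pricing page for current figures:

```python
def bastion_monthly_cost(hourly_rate, instances, hours_deployed,
                         outbound_gb, per_gb_rate):
    # Hourly charge accrues from the moment Bastion is deployed,
    # regardless of outbound data usage; transfer is billed per GB.
    return hourly_rate * instances * hours_deployed + outbound_gb * per_gb_rate

# Placeholder rates for illustration only (not real pricing):
# 2 scale units deployed the whole month (~730 hours), 100 GB outbound
print(bastion_monthly_cost(hourly_rate=0.19, instances=2,
                           hours_deployed=730, outbound_gb=100,
                           per_gb_rate=0.087))
```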
batch | Create Pool Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md | The following extensions can currently be installed when creating a Batch pool: - [Azure Diagnostics extension for Windows VMs](../virtual-machines/windows/extensions-diagnostics.md) - [HPC GPU driver extension for Windows on AMD](../virtual-machines/extensions/hpccompute-amd-gpu-windows.md) - [HPC GPU driver extension for Windows on NVIDIA](../virtual-machines/extensions/hpccompute-gpu-windows.md)+- [HPC GPU driver extension for Linux on NVIDIA](../virtual-machines/extensions/hpccompute-gpu-linux.md) - [Microsoft Antimalware extension for Windows](../virtual-machines/extensions/iaas-antimalware-windows.md) You can request support for additional publishers and/or extension types by opening a support request. |
center-sap-solutions | Register Existing System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md | In this how-to guide, you'll learn how to register an existing SAP system with * - Check that you're trying to register a [supported SAP system configuration](#supported-systems) - Check that your Azure account has **Contributor** role access on the subscription or resource groups where you have the SAP system resources. - Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system.-- A **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.+- A **User-assigned managed identity** which has **Virtual Machine Contributor** role access to the Compute resource group and **Reader** role access to the Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource. - Make sure each virtual machine (VM) in the SAP system is currently running on Azure. These VMs include: - The ABAP SAP Central Services (ASCS) Server instance - The Application Server instance or instances The following SAP system configurations aren't supported in Azure Center for SAP ## Enable resource permissions -When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Contributor** role access to the Compute, Network and Storage resource groups of the SAP system. 
Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity). +When you register an existing SAP system as a VIS, Azure Center for SAP solutions service needs a **User-assigned managed identity** which has **Virtual Machine Contributor** role access to the Compute resource groups and **Reader** role access to the Network resource groups of the SAP system. Before you register an SAP system with Azure Center for SAP solutions, either [create a new user-assigned managed identity or update role access for an existing managed identity](#setup-user-assigned-managed-identity). Azure Center for SAP solutions uses this user-assigned managed identity to install VM extensions on the ASCS, Application Server and DB VMs. This step allows Azure Center for SAP solutions to discover the SAP system components, and other SAP system metadata. Azure Center for SAP solutions also needs this user-assigned managed identity to enable SAP system monitoring and management capabilities. Azure Center for SAP solutions uses this user-assigned managed identity to insta To provide permissions to the SAP system resources to a user-assigned managed identity: 1. [Create a new user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) if needed or use an existing one.-1. [Assign **Contributor** role access](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on all Resource Groups in which the SAP system resources exist. That is, Compute, Network and Storage Resource Groups. +1. 
[Assign **Virtual Machine Contributor** role access](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) to the user-assigned managed identity on the resource group(s) which have the Virtual Machines of the SAP system, and **Reader** role access on the resource group(s) which have the network components of the SAP system. 1. Once the permissions are assigned, this managed identity can be used in Azure Center for SAP solutions to register and manage SAP systems. ## Register SAP system To register an existing SAP system in Azure Center for SAP solutions: 1. For **SAP product**, select the SAP system product from the drop-down menu. 1. For **Environment**, select the environment type from the drop-down menu. For example, production or non-production environments. 1. For **Managed identity source**, select **Use existing user-assigned managed identity** option.- 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Contributor** role access to the [resources of this SAP system.](#enable-resource-permissions) + 1. For **Managed identity name**, select a **User-assigned managed identity** which has **Virtual Machine Contributor** and **Reader** role access to the [respective resources of this SAP system.](#enable-resource-permissions) 1. Select **Review + register** to discover the SAP system and begin the registration process. :::image type="content" source="media/register-existing-system/registration-page.png" alt-text="Screenshot of Azure Center for SAP solutions registration page, highlighting mandatory fields to identify the existing SAP system."
lightbox="media/register-existing-system/registration-page.png"::: The process of registering an SAP system in Azure Center for SAP solutions might - Command to start up sapstartsrv process on SAP VMs: /usr/sap/hostctrl/exe/hostexecstart -start - At least one Application Server and the Database aren't running for the SAP system that you chose. Make sure the Application Servers and Database VMs are in the **Running** state. - The user trying to register the SAP system doesn't have **Contributor** role permissions. For more information, see the [prerequisites for registering an SAP system](#prerequisites).-- The user-assigned managed identity doesn't have **Contributor** role access to the Azure subscription or resource groups where the SAP system exists. For more information, see [how to enable Azure Center for SAP solutions resource permissions](#enable-resource-permissions).+- The user-assigned managed identity doesn't have **Virtual Machine Contributor** role access to the Compute resources and **Reader** role access to the Network resource groups of the SAP system. For more information, see [how to enable Azure Center for SAP solutions resource permissions](#enable-resource-permissions). There's also a known issue with registering *S/4HANA 2021* version SAP systems. You might receive the error message: **Failed to discover details from the Db VM**. This error happens when the Database identifier is incorrectly configured on the SAP system. One possible cause is that the Application Server profile parameter `rsdb/dbid` has an incorrect identifier for the HANA Database. To fix the error: |
cognitive-services | Entity Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/concepts/entity-categories.md | This category contains the following entity: :::row-end::: -## Category: IP +## Category: IP Address This category contains the following entity: This category contains the following entity: :::column span=""::: **Entity** - IP + IPAddress :::column-end::: :::column span="2"::: This category contains the following entity: Network IP addresses. Returned as both PII and PHI. - To get this entity category, add `IP` to the `piiCategories` parameter. `IP` will be returned in the API response if detected. + To get this entity category, add `IPAddress` to the `piiCategories` parameter. `IPAddress` will be returned in the API response if detected. :::column-end::: |
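To illustrate the renamed parameter, here's a sketch of a request body that opts in to the `IPAddress` category. The overall shape follows the Language service's analyze-text request; treat the surrounding field names as assumptions to confirm against the current API reference:

```python
import json

# Sketch of a PII-detection request body opting in to the IPAddress
# category via the piiCategories parameter. Field names other than
# piiCategories/IPAddress are assumptions based on the analyze-text API.
body = {
    "kind": "PiiEntityRecognition",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en", "text": "Login from 10.0.0.1"}
        ]
    },
    "parameters": {"piiCategories": ["IPAddress"]},
}
print(json.dumps(body, indent=2))
```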
cognitive-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md | Azure Cognitive Service for Language is updated on an ongoing basis. To stay up- ## November 2022 * Conversational PII now supports up to 40,000 characters as document size.+* New versions of the text analysis client library are available in preview: + + ### [Java](#tab/java) + + [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.3.0-beta.1) + + [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav) + + [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) + + [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/azure-ai-textanalytics_5.3.0-beta.1/sdk/textanalytics/azure-ai-textanalytics/src/samples) + + ### [JavaScript](#tab/javascript) ++ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-language-text/v/1.1.0-beta.1) + + [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/CHANGELOG.md) + + [**ReadMe**](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cognitivelanguage/ai-language-text/README.md) + + [**Samples**](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/cognitivelanguage/ai-language-text/samples/v1) ++ ### [Python](#tab/python) + + [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.3.0b1/) + + [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md) + + [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/README.md) + + [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-textanalytics_5.3.0b1/sdk/textanalytics/azure-ai-textanalytics/samples) + + ## October 2022 Azure Cognitive Service for Language is updated 
on an ongoing basis. To stay up- * West US 2 * Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German, Italian, Portuguese and Hebrew. These languages are available when using a Docker container to deploy the API service. * The Azure.AI.TextAnalytics client library v5.2.0 is generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md).- - This release includes the following updates: - - ### [C#/.NET](#tab/csharp) - - [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) - - [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md) - - [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/README.md) - - [**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples) - - ### [Java](#tab/java) - - [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) - - [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav) - - [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) - - [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) - - ### [Python](#tab/python) - - [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.2.0/) - - [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md) - - [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/README.md) - -
[**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) - - + * Java + * [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0) + * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav) + * [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) + * [**Samples**](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) + * Python + * [**Package (PyPi)**](https://pypi.org/project/azure-ai-textanalytics/5.2.0/) + * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/CHANGELOG.md) + * [**ReadMe**](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/README.md) + * [**Samples**](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) + * C#/.NET + * [**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.TextAnalytics/5.2.0) + * [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/CHANGELOG.md) + * [**ReadMe**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/README.md) + * [**Samples**](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples) ## August 2022 |
communication-services | Manage Teams Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md | The application must declare Teams.ManageCalls and Teams.ManageChats permissions 1. Navigate to your Azure AD app in the Azure portal and select **API permissions** 1. Select **Add Permissions** 1. In the **Add Permissions** menu, select **Azure Communication Services**-1. Select the permissions **Teams.ManageCalls** and **Teams.ManageCalls**, then select **Add permissions** +1. Select the permissions **Teams.ManageCalls** and **Teams.ManageChats**, then select **Add permissions** ![Add Teams.ManageCalls and Teams.ManageChats permission to the Azure Active Directory application created in previous step.](./media/active-directory-permissions.png) |
cosmos-db | Access Key Vault Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-key-vault-managed-identity.md | In this step, create an access policy in Azure Key Vault using the previously ma ## Next steps * To use customer-managed keys in Azure Key Vault with your Azure Cosmos DB account, see [configure customer-managed keys](how-to-setup-cmk.md#using-managed-identity)-* To use Azure Key Vault to manage secrets, see [secure credentials](access-secrets-from-keyvault.md). +* To use Azure Key Vault to manage secrets, see [secure credentials](store-credentials-key-vault.md). |
cosmos-db | Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/tokens.md | + + Title: Tokens and the Token Function in Azure Cosmos DB for Apache Cassandra +description: Tokens and the Token Function in Azure Cosmos DB for Apache Cassandra. +++++ Last updated : 11/04/2022+++# Tokens and the Token Function in Azure Cosmos DB for Apache Cassandra +++This article discusses tokens and token function in Azure Cosmos DB for Apache Cassandra and clarifies the difference between the computation and usage of token in native Cassandra. ++## What is a Token ++A token is a hashed partition key used to distribute data across the cluster. When data is distributed in Apache Cassandra, a range of tokens are assigned to each node, and you can either assign a token range or this can be done by Cassandra. So, when data is ingested, Cassandra can calculate the token and use that in finding the node to store the newly ingested data. ++## What is the Token Function ++The Token Function is a function available via the CQL API of a Cassandra cluster. It provides a means to expose the partitioning function used by the cluster. As a cql function, Token differs from most other functions, since it restricts the parameters passed to it based on the table that you are querying. The number of parameters allowed for the function equates to the number of partition keys for the table being queried, and the data type of the parameters are also restricted to the data types of the corresponding partition keys. ++Note though, this type of restriction on Apache Cassandra is arbitrary, and is only applied on constant values being passed to the function. The most notable usage of the Token function is with applying relations on the token of the partition key. Azure Cosmos DB for Apache Cassandra allows for `SELECT` queries to make use of a `WHERE` clause filtering on the tokens of your data instead of the data itself. 
++```sql +SELECT token(accountid) FROM uprofile.accounts; ++system.token(accountid) +------------------------ + 2601062599670757427 + 2976626013207263698 ++``` ++```sql +SELECT token(accountid) +FROM uprofile.accounts +WHERE token(accountid)=2976626013207263698; ++ name | accountid | state | country +-------+-----------+-------+--------- + Devon | 405 | NYC | USA | ++``` ++> [!NOTE] +> In this usage, only the partition key columns can be specified as parameters to the Token function. +> This usage of the function is merely a placeholder to allow you to make filters directly on the partition hash, instead of the partition key value. This is very useful for breaking up scans into sub-parts and parallelizing the read of data from a table. +> Also, Azure Cosmos DB for Apache Cassandra does not allow range queries on the partition key itself. ++## How Token works in Azure Cosmos DB for Apache Cassandra ++Azure Cosmos DB for Apache Cassandra uses Murmur3Partitioner, the default partitioner for native Cassandra. It has better performance than other partitioners and hashes keys faster. We use the same Murmur3Partitioner function, with some variants, to ensure cross-compatibility with the host of third-party tools that work against the default Murmur3Partitioner in Apache Cassandra. ++There are certain limitations on usage of the Token function in Cosmos DB's Cassandra API: ++1. The Token function can only be used as a projection on the partition key columns. That is, it can only be used to project the token of the row(s). +2. For a given partition key value, the token value generated on Cosmos DB's Cassandra API will be different from the token value generated on Apache Cassandra. +3. The usage of the Token function in `WHERE` clauses is the same for both Cosmos DB Cassandra and Apache Cassandra. ++> [!NOTE] +> The token function should only be used for projecting the actual token(pk) of the row, or for token scans (where it's used in the LHS of where clauses).
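The note above mentions breaking a scan into sub-parts. As an illustrative sketch (not tied to any specific Cassandra driver), the full Murmur3 token space from -2^63 to 2^63-1 can be split into contiguous sub-ranges, each read in parallel with its own `token()` range filter. The table and column names below (`uprofile.accounts`, `accountid`) mirror the examples above; the generated CQL strings are assumptions, shown only to illustrate the pattern.

```python
# Sketch: split the full Murmur3 token space into N contiguous sub-ranges
# so a table scan can be parallelized with token() range filters.
# The table/column names mirror the examples above; the CQL produced here
# is illustrative, not a specific driver's API.

MIN_TOKEN = -(2**63)      # Murmur3Partitioner lower bound
MAX_TOKEN = 2**63 - 1     # Murmur3Partitioner upper bound

def token_sub_ranges(parts: int):
    """Return `parts` contiguous (lo, hi) ranges covering the token space."""
    total = MAX_TOKEN - MIN_TOKEN + 1
    step = total // parts
    ranges = []
    lo = MIN_TOKEN
    for i in range(parts):
        # Force the last range to end exactly at MAX_TOKEN.
        hi = MAX_TOKEN if i == parts - 1 else lo + step - 1
        ranges.append((lo, hi))
        lo = hi + 1
    return ranges

def scan_statements(parts: int):
    """Build one token-range SELECT per sub-range (hypothetical statements)."""
    return [
        f"SELECT accountid FROM uprofile.accounts "
        f"WHERE token(accountid) >= {lo} AND token(accountid) <= {hi};"
        for lo, hi in token_sub_ranges(parts)
    ]

for stmt in scan_statements(4):
    print(stmt)
```

Each printed statement could then be handed to a separate worker; because the ranges are contiguous and non-overlapping, every row is read exactly once.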
++### What scenarios are unsupported for Cosmos DB Cassandra API (but are supported on Apache Cassandra)? +The following scenarios are unsupported for Azure Cosmos DB for Apache Cassandra: +1. Token Function used as a projection on non-partition key columns. +2. Token Function used as a projection on constant values. +3. Token Function used on the right-hand side of a Token where clause. ++## Next steps ++- Get started with [creating an API for Cassandra account, database, and a table](manage-data-python.md) by using a Python application |
cosmos-db | Configure Periodic Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-periodic-backup-restore.md | After you restore the data, you get a notification about the name of the new acc The following are different ways to migrate data back to the original account: -* Use the [Azure Cosmos DB data migration tool](import-data.md). * Use the [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md). * Use the [change feed](change-feed.md) in Azure Cosmos DB. * You can write your own custom code. |
cosmos-db | Convert Vcore To Request Unit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/convert-vcore-to-request-unit.md | The table below summarizes the relationship between *vCores* and *vCPUs* for Azu * [Learn about Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/) * [Learn how to plan and manage costs for Azure Cosmos DB](plan-manage-costs.md) * [Review options for migrating to Azure Cosmos DB](migration-choices.md)-* [Migrate to Azure Cosmos DB for NoSQL](import-data.md) * [Plan your migration to Azure Cosmos DB for MongoDB](mongodb/pre-migration-steps.md). This doc includes links to different migration tools that you can use once you are finished planning. [regions]: https://azure.microsoft.com/regions/ |
cosmos-db | Database Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/database-security.md | Title: Database security - Azure Cosmos DB + Title: Database security overview + description: Learn how Azure Cosmos DB provides database protection and data security for your data. Previously updated : 07/18/2022 Last updated : 11/21/2022 -# Security in Azure Cosmos DB - overview +# Overview of database security in Azure Cosmos DB + [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] This article discusses database security best practices and key features offered by Azure Cosmos DB to help you prevent, detect, and respond to database breaches. ## What's new in Azure Cosmos DB security -Encryption at rest is now available for documents and backups stored in Azure Cosmos DB in all Azure regions. Encryption at rest is applied automatically for both new and existing customers in these regions. There is no need to configure anything; and you get the same great latency, throughput, availability, and functionality as before with the benefit of knowing your data is safe and secure with encryption at rest. Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md). +Encryption at rest is now available for documents and backups stored in Azure Cosmos DB in all Azure regions. Encryption at rest is applied automatically for both new and existing customers in these regions. There's no need to configure anything. You get the same great latency, throughput, availability, and functionality as before with the benefit of knowing your data is safe and secure with encryption at rest. 
Data stored in your Azure Cosmos DB account is automatically and seamlessly encrypted with keys managed by Microsoft using service-managed keys. Optionally, you can choose to add a second layer of encryption with keys you manage using [customer-managed keys or CMK](how-to-setup-cmk.md). ## How do I secure my database Let's dig into each one in detail. |Security requirement|Azure Cosmos DB's security approach| |||-|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB supports policy driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems, but they are expanded so that an Azure Cosmos DB database account is only accessible from an approved set of machines or cloud services. Learn more in [Azure Cosmos DB firewall support](how-to-configure-firewall.md) article.<br><br>Azure Cosmos DB enables you to enable a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>All requests originating from machines outside this allowed list are blocked by Azure Cosmos DB. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br> You can use [virtual network service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure Cosmos DB resources from the general Internet. Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, AzureCosmosDB) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service.| -|Authorization|Azure Cosmos DB uses hash-based message authentication code (HMAC) for authorization. 
<br><br>Each request is hashed using the secret account key, and the subsequent base-64 encoded hash is sent with each call to Azure Cosmos DB. To validate the request, the Azure Cosmos DB service uses the correct secret key and properties to generate a hash, then it compares the value with the one in the request. If the two values match, the operation is authorized successfully and the request is processed, otherwise there is an authorization failure and the request is rejected.<br><br>You can use either a [primary key](#primary-keys), or a [resource token](secure-access-to-data.md#resource-tokens) allowing fine-grained access to a resource such as a document.<br><br>Learn more in [Securing access to Azure Cosmos DB resources](secure-access-to-data.md).| +|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB supports policy-driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems. However, they're expanded so that an Azure Cosmos DB database account is only accessible from an approved set of machines or cloud services. Learn more in the [Azure Cosmos DB firewall support](how-to-configure-firewall.md) article.<br><br>Azure Cosmos DB lets you allow a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>All requests originating from machines outside this allowed list are blocked by Azure Cosmos DB. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br> You can use [virtual network service tags](../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure Cosmos DB resources from the general Internet. Use service tags in place of specific IP addresses when you create security rules.
By specifying the service tag name (for example, AzureCosmosDB) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service.| +|Authorization|Azure Cosmos DB uses hash-based message authentication code (HMAC) for authorization. <br><br>Each request is hashed using the secret account key, and the subsequent base-64 encoded hash is sent with each call to Azure Cosmos DB. To validate the request, the Azure Cosmos DB service uses the correct secret key and properties to generate a hash, then it compares the value with the one in the request. If the two values match, the operation is authorized successfully, and the request is processed. If they don't match, there's an authorization failure and the request is rejected.<br><br>You can use either a [primary key](#primary-keys), or a [resource token](secure-access-to-data.md#resource-tokens) allowing fine-grained access to a resource such as a document.<br><br>Learn more in [Securing access to Azure Cosmos DB resources](secure-access-to-data.md).| |Users and permissions|Using the primary key for the account, you can create user resources and permission resources per database. A resource token is associated with a permission in a database and determines whether the user has access (read-write, read-only, or no access) to an application resource in the database. Application resources include container, documents, attachments, stored procedures, triggers, and UDFs. The resource token is then used during authentication to provide or deny access to the resource.<br><br>Learn more in [Securing access to Azure Cosmos DB resources](secure-access-to-data.md).|-|Active directory integration (Azure RBAC)| You can also provide or restrict access to the Azure Cosmos DB account, database, container, and offers (throughput) using Access control (IAM) in the Azure portal. IAM provides role-based access control and integrates with Active Directory. 
You can use built in roles or custom roles for individuals and groups. See [Active Directory integration](role-based-access-control.md) article for more information.| -|Global replication|Azure Cosmos DB offers turnkey global distribution, which enables you to replicate your data to any one of Azure's world-wide datacenters with the click of a button. Global replication lets you scale globally and provide low-latency access to your data around the world.<br><br>In the context of security, global replication ensures data protection against regional failures.<br><br>Learn more in [Distribute data globally](distribute-data-globally.md).| -|Regional failovers|If you have replicated your data in more than one data center, Azure Cosmos DB automatically rolls over your operations should a regional data center go offline. You can create a prioritized list of failover regions using the regions in which your data is replicated. <br><br>Learn more in [Regional Failovers in Azure Cosmos DB](high-availability.md).| +|Active directory integration (Azure role-based access control)| You can also provide or restrict access to the Azure Cosmos DB account, database, container, and offers (throughput) using Access control (IAM) in the Azure portal. IAM provides role-based access control and integrates with Active Directory. You can use built-in roles or custom roles for individuals and groups. For more information, see [Active Directory integration](role-based-access-control.md).| +|Global replication|Azure Cosmos DB offers turnkey global distribution, which enables you to replicate your data to any one of Azure's world-wide datacenters with minimal configuration.
Global replication lets you scale globally and provide low-latency access to your data around the world.<br><br>In the context of security, global replication ensures data protection against regional failures.<br><br>Learn more in [Distribute data globally](distribute-data-globally.md).| +|Regional failovers|If you've replicated your data in more than one data center, Azure Cosmos DB automatically rolls over your operations should a regional data center go offline. You can create a prioritized list of failover regions using the regions in which your data is replicated. <br><br>Learn more in [Regional Failovers in Azure Cosmos DB](high-availability.md).| |Local replication|Even within a single data center, Azure Cosmos DB automatically replicates data for high availability giving you the choice of [consistency levels](consistency-levels.md). This replication guarantees a 99.99% [availability SLA](https://azure.microsoft.com/support/legal/sla/cosmos-db) for all single region accounts and all multi-region accounts with relaxed consistency, and 99.999% read availability on all multi-region database accounts.| |Automated online backups|Azure Cosmos DB databases are backed up regularly and stored in a geo redundant store. <br><br>Learn more in [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md).| |Restore deleted data|The automated online backups can be used to recover data you may have accidentally deleted up to ~30 days after the event. <br><br>Learn more in [Automatic online backup and restore with Azure Cosmos DB](online-backup-and-restore.md)| |Protect and isolate sensitive data|All data in the regions listed in What's new? 
is now encrypted at rest.<br><br>Personal data and other confidential data can be isolated to specific container and read-write, or read-only access can be limited to specific users.|-|Monitor for attacks|By using [audit logging and activity logs](./monitor.md), you can monitor your account for normal and abnormal activity. You can view what operations were performed on your resources, who initiated the operation, when the operation occurred, the status of the operation, and much more as shown in the screenshot following this table.| -|Respond to attacks|Once you have contacted Azure support to report a potential attack, a 5-step incident response process is kicked off. The goal of the 5-step process is to restore normal service security and operations as quickly as possible after an issue is detected and an investigation is started.<br><br>Learn more in [Microsoft Azure Security Response in the Cloud](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/).| +|Monitor for attacks|By using [audit logging and activity logs](./monitor.md), you can monitor your account for normal and abnormal activity. You can view what operations were performed on your resources. This data includes who initiated the operation, when the operation occurred, the status of the operation, and much more.| +|Respond to attacks|Once you have contacted Azure support to report a potential attack, a five-step incident response process is kicked off. The goal of the five-step process is to restore normal service security and operations.
The five-step process restores services as quickly as possible after an issue is detected and an investigation is started.<br><br>Learn more in [Microsoft Azure Security Response in the Cloud](https://azure.microsoft.com/resources/shared-responsibilities-for-cloud-computing/).| |Geo-fencing|Azure Cosmos DB ensures data governance for sovereign regions (for example, Germany, China, US Gov).| |Protected facilities|Data in Azure Cosmos DB is stored on SSDs in Azure's protected data centers.<br><br>Learn more in [Microsoft global datacenters](https://www.microsoft.com/en-us/cloud-platform/global-datacenters)|-|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB supports TLS levels up to 1.3 (included).<br>It is possible to enforce a minimum TLS level server-side. To do so, open an [Azure support ticket](https://azure.microsoft.com/support/options/).| +|HTTPS/SSL/TLS encryption|All connections to Azure Cosmos DB support HTTPS. Azure Cosmos DB supports TLS levels up to 1.3 (included).<br>It's possible to enforce a minimum TLS level server-side. To do so, open an [Azure support ticket](https://azure.microsoft.com/support/options/).| |Encryption at rest|All data stored into Azure Cosmos DB is encrypted at rest. 
Learn more in [Azure Cosmos DB encryption at rest](./database-encryption-at-rest.md)| |Patched servers|As a managed database, Azure Cosmos DB eliminates the need to manage and patch servers, that's done for you, automatically.| |Administrative accounts with strong passwords|It's hard to believe we even need to mention this requirement, but unlike some of our competitors, it's impossible to have an administrative account with no password in Azure Cosmos DB.<br><br> Security via TLS and HMAC secret based authentication is baked in by default.|-|Security and data protection certifications| For the most up-to-date list of certifications see the overall [Azure Compliance site](https://www.microsoft.com/en-us/trustcenter/compliance/complianceofferings) as well as the latest [Azure Compliance Document](https://azure.microsoft.com/mediahandler/files/resourcefiles/microsoft-azure-compliance-offerings/Microsoft%20Azure%20Compliance%20Offerings.pdf) with all certifications (search for Cosmos). +|Security and data protection certifications| For the most up-to-date list of certifications, see [Azure compliance](https://www.microsoft.com/en-us/trustcenter/compliance/complianceofferings) and the latest [Azure compliance document](https://azure.microsoft.com/mediahandler/files/resourcefiles/microsoft-azure-compliance-offerings/Microsoft%20Azure%20Compliance%20Offerings.pdf) with all Azure certifications including Azure Cosmos DB. The following screenshot shows how you can use audit logging and activity logs to monitor your account: :::image type="content" source="./media/database-security/nosql-database-security-application-logging.png" alt-text="Activity logs for Azure Cosmos DB"::: The following screenshot shows how you can use audit logging and activity logs t Primary/secondary keys provide access to all the administrative resources for the database account. Primary/secondary keys: - Provide access to accounts, databases, users, and permissions. 
-- Cannot be used to provide granular access to containers and documents.+- Can't be used to provide granular access to containers and documents. - Are created during the creation of an account. - Can be regenerated at any time. Each account consists of two keys: a primary key and secondary key. The purpose of dual keys is so that you can regenerate, or roll keys, providing continuous access to your account and data. -Primary/secondary keys come in two versions: read-write and read-only. The read-only keys only allow read operations on the account, but do not provide access to read permissions resources. +Primary/secondary keys come in two versions: read-write and read-only. The read-only keys only allow read operations on the account, but don't provide access to read permissions resources. ### <a id="key-rotation"></a> Key rotation and regeneration The process of key rotation and regeneration is simple. First, make sure that ** :::image type="content" source="./media/database-security/regenerate-secondary-key.png" alt-text="Screenshot of the Azure portal showing how to regenerate the secondary key" border="true"::: -# [Azure Cosmso DB for MongoDB](#tab/mongo-api) +# [Azure Cosmos DB for MongoDB](#tab/mongo-api) #### If your application is currently using the primary key The process of key rotation and regeneration is simple. First, make sure that ** ## Track the status of key regeneration -After you rotate or regenerate a key, you can track it's status from the Activity log. Use the following steps to track the status: +After you rotate or regenerate a key, you can track its status from the Activity log. Use the following steps to track the status: 1. Sign into the [Azure portal](https://portal.azure.com/) and navigate to your Azure Cosmos DB account. 
After you rotate or regenerate a key, you can track it's status from the Activit :::image type="content" source="./media/database-security/track-key-regeneration-status.png" alt-text="Status of key regeneration from Activity log" border="true"::: -1. You should see the key regeneration events along with it's status, time at which the operation was issued, details of the user who initiated key regeneration. The key generation operation initiates with **Accepted** status, it then changes to **Started** and then to **Succeeded** when the operation completes. +1. You should see the key regeneration events along with its status, the time at which the operation was issued, and details of the user who initiated key regeneration. The key regeneration operation starts with **Accepted** status; it then changes to **Started** and then to **Succeeded** when the operation completes. ## Next steps |
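The HMAC authorization flow described in the security table above (hash the request with the secret account key, then send the base-64 encoded hash with each call) can be sketched client-side in a few lines of Python. The payload layout below (lowercased verb, resource type, resource link, and RFC 1123 date, each newline-terminated) follows the commonly documented master-key scheme, but treat the exact format as an assumption and verify it against the current REST API reference before relying on it.

```python
import base64
import hashlib
import hmac
import urllib.parse

def build_auth_token(verb, resource_type, resource_link, utc_date, master_key_b64):
    """Compute a Cosmos DB master-key authorization token (sketch).

    The payload layout (lowercased verb/resource type/date, newline-separated,
    trailing blank line) is an assumption based on the documented scheme.
    """
    key = base64.b64decode(master_key_b64)  # account keys are base-64 encoded
    payload = (
        f"{verb.lower()}\n{resource_type.lower()}\n{resource_link}\n"
        f"{utc_date.lower()}\n\n"
    )
    digest = hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()
    # The header value is URL-encoded before being sent.
    return urllib.parse.quote(f"type=master&ver=1.0&sig={signature}", safe="")

# Example with a dummy key (never hard-code real account keys):
dummy_key = base64.b64encode(b"not-a-real-key").decode()
token = build_auth_token(
    "GET", "docs", "dbs/mydb/colls/mycoll/docs/doc1",
    "Tue, 01 Nov 2022 18:00:00 GMT", dummy_key,
)
print(token)
```

Because the date is part of the signed payload, the same `x-ms-date` header value must accompany the request, which is also how the service rejects stale or replayed signatures.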
cosmos-db | Distributed Nosql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/distributed-nosql.md | + + Title: Understanding distributed NoSQL databases ++description: Learn about distributed NoSQL databases and how you can use them together with your cloud-native global-scale applications at with flexible data schemas. +++++ Last updated : 11/21/2021+++# Understanding distributed NoSQL databases +++Azure Cosmos DB is a globally distributed database platform for both NoSQL and relational databases of any scale. This article explores distributed NoSQL databases in the context of Azure Cosmos DBΓÇÖs various NoSQL API options. ++For more information about other data storage options in Azure, see [choosing the right data store in the Azure Architecture Center](/azure/architecture/guide/technology-choices/data-store-overview). ++## Challenges ++One of the challenges when maintaining a database system is that many database engines apply locks and latches to enforce strict [ACID semantics](https://en.wikipedia.org/wiki/ACID). This approach is beneficial in scenarios where databases require high consistency of the state of the data no matter how itΓÇÖs accessed. While this approach promises high consistency, it makes heavy trade-offs with respect to concurrency, latency, and availability. This restriction is fundamentally an architectural restriction and will force any team with a high transactional workload to find workarounds like manually distributing, or sharding, data across many different databases or database nodes. These workarounds can be time consuming and challenging to implement. ++## NoSQL databases ++[NoSQL databases](https://en.wikipedia.org/wiki/NoSQL) refer to databases that were designed to simplify horizontal scaling by adjusting consistency to minimize the trade-offs to concurrency, latency, and availability. 
NoSQL databases offer configurable levels of consistency so that data can scale across many nodes with speed or availability that better maps to the needs of your application. ++## Distributed databases ++[Distributed databases](https://en.wikipedia.org/wiki/Distributed_database) refer to databases that scale across many different instances or locations. While many NoSQL databases are designed for scale, not all are necessarily distributed databases. Moreover, many NoSQL databases require time and effort to distribute across redundant nodes for local redundancy or globally for geo-redundancy. The planning, implementation, and networking requirements for a globally distributed database can be complex. ++## Azure Cosmos DB ++With a distributed database that is also a NoSQL database, high transactional workloads become easier to build and manage. [Azure Cosmos DB](introduction.md) is a database platform that offers distributed data APIs in both NoSQL and relational variants. Specifically, many of the NoSQL APIs offer various consistency options that allow you to fine-tune the level of consistency or availability that meets your real-world application requirements. Your database could be configured to offer high consistency with trade-offs to speed and availability. Similarly, your database could be configured to offer the best performance with predictable trade-offs to consistency and latency of your replicated data. Azure Cosmos DB will automatically and dynamically distribute your data across local instances or globally. Azure Cosmos DB can also provide ACID guarantees and scale throughput to map to your application's requirements.
++- [Learn about the various APIs](choose-api.md) +- [Get started with the API for NoSQL](nosql/quickstart-dotnet.md) +- [Get started with the API for MongoDB](mongodb/quickstart-nodejs.md) +- [Get started with the API for Apache Cassandra](cassandr) +- [Get started with the API for Apache Gremlin](gremlin/quickstart-python.md) +- [Get started with the API for Table](table/quickstart-dotnet.md) |
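The Challenges section above mentions manually distributing, or sharding, data across many databases as a common workaround. A minimal sketch of what such hand-rolled distribution looks like is below: hash a partition key to pick one of several independently managed database endpoints. All endpoint names here are hypothetical; the point is that the application, not the database, owns routing and rebalancing.

```python
import hashlib

# Hypothetical endpoints for independently managed database nodes. In a
# manual-sharding workaround, the application must track these itself.
SHARD_ENDPOINTS = [
    "db-node-0.internal:9042",
    "db-node-1.internal:9042",
    "db-node-2.internal:9042",
]

def shard_for(partition_key: str) -> str:
    """Route a partition key to one shard via a stable hash (sketch).

    Rebalancing data when SHARD_ENDPOINTS changes is the hard, error-prone
    part that a managed distributed database handles for you.
    """
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARD_ENDPOINTS)
    return SHARD_ENDPOINTS[index]

print(shard_for("user-42"))
```

Every read and write must go through this routing layer, and adding or removing a node moves keys between shards, which is exactly the operational burden the article describes.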
cosmos-db | Distributed Relational | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/distributed-relational.md | + + Title: Understanding distributed relational databases ++description: Learn about distributed relational databases and how you can use them together with your global-scale applications and your existing RDBMS development skills. +++++ Last updated : 11/21/2021+++# Understanding distributed relational databases +++Azure Cosmos DB is a globally distributed database platform for both NoSQL and relational databases of any scale. This article explores distributed relational databases in the context of Azure Cosmos DBΓÇÖs relational API option. ++For more information about other data storage options in Azure, see [choosing the right data store in the Azure Architecture Center](/azure/architecture/guide/technology-choices/data-store-overview). ++## Challenges ++Many times when you read about large volume or high transactional workloads, itΓÇÖs easy to think that these workloads are much larger than any that your application may face. The assumption that your workload will stay small can be a safe assumption at the start of a project, idea, or organization. However, that assumption can quickly lead to a scenario where your applicationΓÇÖs workload grows far beyond any predictions you have made. ItΓÇÖs not uncommon to hear stories of workloads that meet the maximum throughput or processing power of the single-instance database that was economical and performant at the start of a project. ++## Relational databases ++[Relational databases](https://en.wikipedia.org/wiki/Relational_database) organize data into a tabular (row/column) format with relations between different tables in the databases. Relational databases are common in various enterprises. 
These enterprises often have a wealth of software developers who have written code against a relational database or administrators who design schemas and manage relational database platforms. Relational databases also often support transactions with [ACID guarantees](https://en.wikipedia.org/wiki/ACID). ++Unfortunately, many relational database systems are initially configured by organizations in a single-node manner with upper constraints on compute, memory, and networking resources. This context can lead to an incorrect assumption that all relational databases are single node by their very nature. ++## Distributed databases ++In many cloud-native whitepapers, it's common to hear about the benefits of NoSQL databases, making it seem like relational databases aren't a reasonable choice for large-scale databases or distributed workloads. While many [distributed databases](https://en.wikipedia.org/wiki/Distributed_database) are non-relational, there are options out there for distributed relational database workloads. ++Many of these options for distributed relational databases require your organization to plan for large scale and distribution from the beginning of the project. This planning requirement can add significant complexity at the start of a project to make sure all relevant server nodes are configured, managed, and maintained by your team. The planning, implementation, and networking requirements for a globally distributed relational database can easily grow to be far more complex than standing up a single instance (or node).
[Azure Cosmos DB for PostgreSQL](postgresql/introduction.md) is a fully managed service, using Citus, that automatically gives you high availability without the need to manually plan, manage, and maintain individual server nodes. With the API for PostgreSQL, you can start with a fully managed single-node cluster, build your database solution and then scale it in a turnkey fashion as your application's needs grow over time. With the API for PostgreSQL, there's no need to plan a complex distribution project in advance or plan a project to migrate your data from a single-node to a distributed database down the road. ++## Next steps ++> [!div class="nextstepaction"] +> [Understanding distributed NoSQL databases](distributed-nosql.md) ++Want to get started with Azure Cosmos DB? ++- [Learn about the various APIs](choose-api.md) +- [Get started with the API for PostgreSQL](postgresql/quickstart-app-stacks-python.md) |
cosmos-db | Docker Emulator Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/docker-emulator-linux.md | + + Title: Run the Azure Cosmos DB Emulator on Docker for Linux +description: Learn how to run and use the Azure Cosmos DB Linux Emulator on Linux and macOS. Using the emulator you can develop and test your application locally for free, without an Azure subscription. ++++++ Last updated : 05/09/2022+++# Run the emulator on Docker for Linux (Preview) ++The Azure Cosmos DB Linux Emulator provides a local environment that emulates the Azure Cosmos DB service for development purposes. Currently, the Linux emulator only supports API for NoSQL and MongoDB. Using the Azure Cosmos DB Emulator, you can develop and test your application locally, without creating an Azure subscription or incurring any costs. When you're satisfied with how your application is working in the Azure Cosmos DB Linux Emulator, you can switch to using an Azure Cosmos DB account in the cloud. This article describes how to install and use the emulator on macOS and Linux environments. ++> [!NOTE] +> The Azure Cosmos DB Linux Emulator is currently in preview mode and supports only the APIs for NoSQL and MongoDB. Users may experience slight performance degradations in terms of the number of requests per second processed by the emulator when compared to the Windows version. The default number of physical partitions, which directly impacts the number of containers that can be provisioned, is 10. +> +> We do not recommend use of the emulator (Preview) in production. For heavier workloads, use our [Windows emulator](local-emulator.md). ++## How does the emulator work? ++The Azure Cosmos DB Linux Emulator provides a high-fidelity emulation of the Azure Cosmos DB service. The emulator supports functionality equivalent to that of the Azure Cosmos DB service. Functionality includes creating data, querying data, provisioning and scaling containers, and executing stored procedures and triggers.
You can develop and test applications using the Azure Cosmos DB Linux Emulator. You can also deploy applications to Azure at global scale by updating the Azure Cosmos DB connection endpoint from the emulator to a live account. ++Functionality that relies on the Azure infrastructure like global replication, single-digit millisecond latency for reads/writes, and tunable consistency levels isn't applicable when you use the emulator. ++## Differences between the Linux Emulator and the cloud service ++Since the Azure Cosmos DB Emulator provides an emulated environment that runs on the local developer workstation, there are some differences in functionality between the emulator and an Azure Cosmos DB account in the cloud: ++- Currently, the **Data Explorer** pane in the emulator fully supports API for NoSQL and MongoDB clients only. ++- With the Linux emulator, you can create an Azure Cosmos DB account in [provisioned throughput](set-throughput.md) mode only; currently it doesn't support [serverless](serverless.md) mode. ++- The Linux emulator isn't a scalable service and it doesn't support a large number of containers. When using the Azure Cosmos DB Emulator, by default, you can create up to 10 fixed-size containers at 400 RU/s (only supported using Azure Cosmos DB SDKs), or 5 unlimited containers. For more information on how to change this value, see the [Set the PartitionCount value](emulator-command-line-parameters.md#set-partitioncount) article. ++- While [consistency levels](consistency-levels.md) can be adjusted using command-line arguments for testing scenarios only (default setting is Session), a user might not expect the same behavior as in the cloud service. For instance, Strong and Bounded staleness consistency have no effect on the emulator, other than signaling to the Azure Cosmos DB SDK the default consistency of the account. ++- The Linux emulator doesn't offer [multi-region replication](distribute-data-globally.md).
++- Your Azure Cosmos DB Linux Emulator might not always be up to date with the most recent changes in the Azure Cosmos DB service. You should always refer to the [Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) to accurately estimate the throughput (RUs) needs of your application. ++- The Linux emulator supports a maximum ID property size of 254 characters. ++## Run the Linux Emulator on macOS ++> [!NOTE] +> The emulator only supports MacBooks with Intel processors. ++To get started, visit the Docker Hub and install [Docker Desktop for macOS](https://hub.docker.com/editions/community/docker-ce-desktop-mac/). Use the following steps to run the emulator on macOS: +++## Install the certificate ++1. After the emulator is running, using a different terminal, load the IP address of your local machine into a variable. ++ ```bash + ipaddr="`ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}' | head -n 1`" + ``` ++1. Next, download the certificate for the emulator. ++ ```bash + curl -k https://$ipaddr:8081/_explorer/emulator.pem > emulatorcert.crt + ``` +++## Consume the endpoint via UI ++The emulator is using a self-signed certificate to secure the connectivity to its endpoint and needs to be manually trusted. Use the following steps to consume the endpoint via the UI using your desired web browser: ++1. Make sure you've downloaded the emulator self-signed certificate ++ ```bash + curl -k https://$ipaddr:8081/_explorer/emulator.pem > emulatorcert.crt + ``` ++1. Open the **Keychain Access** app on your Mac to import the emulator certificate. ++1. Select **File** and **Import Items** and import the **emulatorcert.crt**. ++1. After the *emulatorcert.crt* is loaded into KeyChain, double-click on the **localhost** name and change the trust settings to **Always Trust**. ++1. 
You can now browse to `https://localhost:8081/_explorer/index.html` or `https://{your_local_ip}:8081/_explorer/index.html` and retrieve the connection string to the emulator. ++Optionally, you can disable TLS/SSL validation on your application. Disabling validation is only recommended for development purposes and shouldn't be done when running in a production environment. ++## Run the Linux Emulator on Linux OS ++To get started, use the `apt` package manager to install the latest version of Docker. ++```bash +sudo apt-get update +sudo apt-get install docker-ce docker-ce-cli containerd.io +``` ++If you're using Windows Subsystem for Linux (WSL), run the following command to get `ifconfig`: ++```bash +sudo apt-get install net-tools +``` ++Use the following steps to run the emulator on Linux: +++4. After the emulator is running, using a different terminal, load the IP address of your local machine into a variable. ++ ```bash + ipaddr="`ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}' | head -n 1`" + ``` ++5. Next, download the certificate for the emulator. Alternatively, the endpoint below, which downloads the self-signed emulator certificate, can also be used to signal when the emulator endpoint is ready to receive requests from another application. ++ ```bash + curl -k https://$ipaddr:8081/_explorer/emulator.pem > ~/emulatorcert.crt + ``` ++6. Copy the CRT file to the folder that contains custom certificates in your Linux distribution. On Debian-based distributions, it's commonly located at `/usr/local/share/ca-certificates/`. ++ ```bash + cp ~/emulatorcert.crt /usr/local/share/ca-certificates/ + ``` ++7. Update the TLS/SSL certificates, which will update the `/etc/ssl/certs/` folder.
++ ```bash + update-ca-certificates + ``` ++ For Java-based applications, the certificate must be imported to the [Java trusted store](local-emulator-export-ssl-certificates.md). ++ ```bash + keytool -keystore ~/cacerts -importcert -alias emulator_cert -file ~/emulatorcert.crt + java -ea -Djavax.net.ssl.trustStore=~/cacerts -Djavax.net.ssl.trustStorePassword="changeit" $APPLICATION_ARGUMENTS + ``` ++## Configuration options ++|Name |Default |Description | +|||| +| Ports: `-p` | | Currently, only ports `8081` and `10250-10255` are needed by the emulator endpoint. | +| `AZURE_COSMOS_EMULATOR_PARTITION_COUNT` | 10 | Controls the total number of physical partitions, which in turn controls the number of containers that can be created and can exist at a given point in time. We recommend starting small to improve the emulator startup time, for example, 3. | +| Memory: `-m` | | At least 3 GB of memory is required. | +| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least four CPU cores are recommended. | +|`AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE` | false | This setting, used by itself, persists the data between container restarts. | +|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the API for MongoDB endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, ``4.0``, and ``4.2``.) | ++## Troubleshoot issues ++This section provides tips to troubleshoot errors when using the Linux emulator. ++### Connectivity issues ++#### My app can't connect to emulator endpoint ("The TLS/SSL connection couldn't be established") or I can't start the Data Explorer ++- Ensure the emulator is running with the following command: ++ ```bash + docker ps --all + ``` ++- Verify that the specific emulator container is in a running state. ++- Verify that no other applications are using emulator ports: `8081` and `10250-10255`.
++- Verify that the container port `8081` is mapped correctly and accessible from an environment outside of the container. ++ ```bash + netstat -lt + ``` ++- Try to access the endpoint and port for the emulator using the Docker container's IP address instead of "localhost". ++- Make sure that the emulator self-signed certificate has been properly added to [KeyChain](#consume-the-endpoint-via-ui). ++- For Java applications, make sure you imported the certificate to the [Java Certificates Store section](#run-the-linux-emulator-on-linux-os). ++- For .NET applications, you can disable TLS/SSL validation: ++# [.NET Standard 2.1+](#tab/ssl-netstd21) ++For any application running in a framework compatible with .NET Standard 2.1 or later, we can use `CosmosClientOptions.HttpClientFactory`: ++[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/HttpClientFactory/Program.cs?name=DisableSSLNETStandard21)] ++# [.NET Standard 2.0](#tab/ssl-netstd20) ++For any application running in a framework compatible with .NET Standard 2.0, we can use `CosmosClientOptions.HttpClientFactory`: ++[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/HttpClientFactory/Program.cs?name=DisableSSLNETStandard20)] ++++#### My Node.js app is reporting a self-signed certificate error ++If you attempt to connect to the emulator via an address other than `localhost`, such as the container's IP address, Node.js will raise an error about the certificate being self-signed, even if the certificate has been installed. ++TLS verification can be disabled by setting the environment variable `NODE_TLS_REJECT_UNAUTHORIZED` to `0`: ++```bash +NODE_TLS_REJECT_UNAUTHORIZED=0 +``` ++This flag is only recommended for local development as it disables TLS for Node.js.
More information can be found in the [Node.js documentation](https://nodejs.org/api/cli.html#cli_node_tls_reject_unauthorized_value) and the [Azure Cosmos DB Emulator Certificates documentation](local-emulator-export-ssl-certificates.md#how-to-use-the-certificate-in-nodejs). ++#### The Docker container failed to start ++The emulator errors out with the following message: ++```bash +/palrun: ERROR: Invalid mapping of address 0x40037d9000 in reserved address space below 0x400000000000. Possible causes: +1. The process (itself, or via a wrapper) starts up its own running environment sets the stack size limit to unlimited via syscall setrlimit(2); +2. The process (itself, or via a wrapper) adjusts its own execution domain and flag the system its legacy personality via syscall personality(2); +3. Sysadmin deliberately sets the system to run on legacy VA layout mode by adjusting a sysctl knob vm.legacy_va_layout. +``` ++This error is likely because the current Docker host processor type is incompatible with our Docker image. For example, if the computer is using a unique chipset or processor architecture. ++#### My app received too many connectivity-related timeouts ++- The Docker container isn't provisioned with enough resources [(cores or memory)](#configuration-options). We recommend increasing the number of cores or, alternatively, reducing the number of physical partitions provisioned upon startup. ++- Ensure the number of TCP connections doesn't exceed your current OS settings. ++- Try reducing the size of the documents in your application. + +#### My app couldn't create databases or containers ++The number of physical partitions provisioned on the emulator is too low. Either delete your unused databases/collections or start the emulator with a [larger number of physical partitions](#configuration-options).
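The port checks described in the connectivity section above can be scripted. The following is a minimal sketch that assumes a bash shell; the `port_in_use` helper name is illustrative and isn't part of the emulator tooling:

```shell
# Check whether another process is already listening on the emulator ports.
# port_in_use is an illustrative helper, not part of the emulator tooling.
port_in_use() {
  # bash's /dev/tcp pseudo-device: the redirect succeeds only when
  # something is listening on the given local port.
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 8081 10250 10251 10252 10253 10254 10255; do
  if port_in_use "$port"; then
    echo "Port $port is already in use; the emulator may fail to bind it."
  fi
done
```

If a port is reported as in use, either stop the conflicting process or map the emulator to a different host port in the `docker run` command.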
++### Reliability and crashes ++- The emulator fails to start: ++ - Make sure you're [running the latest image of the Azure Cosmos DB emulator for Linux](#refresh-linux-container). Otherwise, see the section above regarding connectivity-related issues. ++ - If the Azure Cosmos DB emulator data folder is "volume mounted", ensure that the volume has enough space and is read/write. ++ - Confirm that creating a container with the recommended settings works. If yes, most likely the cause of failure was the extra settings passed via the respective Docker command upon starting the container. ++ - If the emulator fails to start with the following error: + + ```bash + "Failed loading Emulator secrets certificate. Error: 0x8009000f or similar, a new policy might have been added to your host that prevents an application such as Azure Cosmos DB Emulator from creating and adding self signed certificate files into your certificate store." + ``` ++ This failure can occur even when you run in Administrator context, since the specific policy usually added by your IT department takes priority over the local Administrator. Using a Docker image for the emulator instead might help in this case. The image can help as long as you still have the permission to add the self-signed emulator TLS/SSL certificate into your host machine context. The self-signed certificate is required by Java and .NET Azure Cosmos DB SDK client applications. ++- The emulator is crashing: ++ - Confirm that creating a container with the [recommended settings](#run-the-linux-emulator-on-linux-os) works. If yes, most likely the cause of failure is the extra settings passed via the respective Docker command upon starting the container. ++ - Start the emulator's Docker container in an attached mode (see `docker start -ai`). ++ - Collect the crash-related dump/data and follow the [steps outlined](#report-an-emulator-issue) to report the issue.
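The reporting step above asks for host details such as the OS and processor type. Gathering them can be scripted; this is a sketch, and the `report_environment` helper name is illustrative rather than part of the emulator tooling:

```shell
# Collect host details commonly requested in an emulator issue report.
# report_environment is an illustrative helper, not part of the emulator tooling.
report_environment() {
  echo "OS: $(uname -s)"
  echo "Kernel: $(uname -r)"
  echo "Architecture: $(uname -m)"
  # Only print the Docker version when Docker is installed.
  if command -v docker >/dev/null 2>&1; then
    echo "Docker: $(docker --version)"
  fi
}

report_environment
```

Attach the output of a script like this, along with the container logs, when filing an issue.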
++### Data explorer errors ++- I can't view my data: ++ - See the section regarding connectivity-related issues above. ++ - Make sure that the self-signed emulator certificate is properly imported and manually trusted in order for your browser to access the data explorer page. ++ - Try creating a database/container and inserting an item using the Data Explorer. If successful, most likely the cause of the issue resides within your application. If not, [contact the Azure Cosmos DB team](#report-an-emulator-issue). ++### Performance issues ++The number of requests per second is low, or the latency of requests is high: ++- The Docker container isn't provisioned with enough resources [(cores or memory)](#configuration-options). We recommend increasing the number of cores or, alternatively, reducing the number of physical partitions provisioned upon startup. ++## Refresh Linux container ++Use the following steps to refresh the Linux container: ++1. Run the following command to view all Docker containers. ++ ```bash + docker ps --all + ``` ++1. Remove the container using the ID retrieved from the above command. ++ ```bash + docker rm ID_OF_CONTAINER_FROM_ABOVE + ``` ++1. Next, list all Docker images. ++ ```bash + docker images + ``` ++1. Remove the image using the ID retrieved from the previous step. ++ ```bash + docker rmi ID_OF_IMAGE_FROM_ABOVE + ``` ++1. Pull the latest image of the Azure Cosmos DB Linux Emulator. ++ ```bash + docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator + ``` ++1. To start a stopped container, run the following command: ++ ```bash + docker start -ai ID_OF_CONTAINER + ``` ++## Report an emulator issue ++When reporting an issue with the Linux emulator, provide as much information as possible about your issue.
These details include: ++- Description of the error/issue encountered +- Environment (OS, host configuration) +- Computer and processor type +- Command used to create and start the emulator (YML file if Docker compose is used) +- Description of the workload +- Sample of the database/collection and item used +- Include the console output from starting the Docker container for the emulator in attached mode +- Post feedback on our [Azure Cosmos DB Q&A forums](/answers/topics/azure-cosmos-db.html). ++## Next steps ++In this article, you've learned how to use the Azure Cosmos DB Linux emulator for free local development. You can now proceed to the next articles: ++- [Export the Azure Cosmos DB Emulator certificates for use with Java, Python, and Node.js apps](local-emulator-export-ssl-certificates.md) +- [Debug issues with the emulator](troubleshoot-local-emulator.md) |
cosmos-db | Docker Emulator Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/docker-emulator-rest-api.md | + + Title: REST API with Docker container emulator ++description: Learn how to send secure requests to the REST API of the Azure Cosmos DB emulator running in a Docker container. +++++ Last updated : 11/21/2022+++# Use the REST API with the Azure Cosmos DB emulator Docker container +++You may find yourself in a situation where you need to start the emulator from the command line, create resources, and populate data without any UI intervention. For example, you may start the emulator as part of an automated test suite in a DevOps platform. The REST API for Azure Cosmos DB is available in the emulator to use for many of these requests. This guide will walk you through the steps necessary to interact with the REST API in the emulator. ++## Provide a test key when starting the emulator ++When you need to automate startup and data bootstrapping, the key you'll use should be known in advance. You can pass the key as an environment variable when starting the emulator. ++Consider this sample key that is stored as an environment variable. ++```bash +EMULATOR_KEY="C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==" +``` ++> [!IMPORTANT] +> It is strongly recommended you generate your own key using a tool like `ssh-keygen` instead of using the sample key in this article. ++Set the key when starting the emulator to the stored sample key. In this example command, other [sensible defaults](linux-emulator.md#run-the-linux-emulator-on-linux-os) are also used.
++```bash +docker run \ + -it --rm \ + --name cosmosdb \ + --detach -p 8081:8081 -p 10251-10254:10251-10254 \ + --memory 3g --cpus=2.0 \ + -e AZURE_COSMOS_EMULATOR_PARTITION_COUNT=3 \ + -e AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=false \ + -e AZURE_COSMOS_EMULATOR_KEY=$EMULATOR_KEY \ + mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator +``` ++## Wait for the emulator to start ++The emulator will take some time to start up. In the case where you have it running in the background using `--detach`, you can create a script to loop and check to see when the REST API is available: ++```bash +echo "Wait until the emulator REST API responds" ++until [ "$(curl -k -s -o /dev/null -w "%{http_code}" https://127.0.0.1:8081)" == "401" ]; do + sleep 2; +done; ++echo "Emulator REST API ready" +``` ++## Create authorization token ++The REST API for the emulator requires an authorization token to be present in the header. Because building the token requires multiple steps, it's easier to extract the token creation into a reusable function in the script. ++First, let's review a list of prerequisite commands and packages you'll need to create this function. ++- `tr` - to lowercase the date +- `openssl` - to sign the expected structure containing the API operation with a key +- `jq` - to encode the token as a URI ++Now, let's create a function named `create_cosmos_rest_token` that will build an authorization token. This code sample includes comments to explain each step.
++```bash +create_cosmos_rest_token() { + # HTTP-date + # https://www.rfc-editor.org/rfc/rfc7231#section-7.1.1.1 + # e.g., `TZ=GMT date '+%a, %d %b %Y %T %Z'` + ISSUE_DATE=$1 + ISSUE_DATE_LOWER=$(echo -n "$ISSUE_DATE" | tr '[:upper:]' '[:lower:]') + # Base64 encoded key + MASTER_KEY_BASE64=$2 + # Operation: + # Database operations: dbs + # Container operations: colls + # Stored Procedures: sprocs + # User Defined Functions: udfs + # Triggers: triggers + # Users: users + # Permissions: permissions + # Item level operations: docs + RESOURCE_TYPE=${3:-dbs} + # A link to the resource + RESOURCE_LINK=$4 + # HTTP verb in lowercase, e.g. post, get + VERB=$5 + # Read the bytes of a key + KEY=$(echo -n "$MASTER_KEY_BASE64" | base64 -d) + # Sign + SIG=$(printf "%s\n%s\n%s\n%s\n\n" "$VERB" "$RESOURCE_TYPE" "$RESOURCE_LINK" "$ISSUE_DATE_LOWER" | openssl sha256 -hmac "$KEY" -binary | base64) + # Encode and return + printf %s "type=master&ver=1.0&sig=$SIG"|jq -sRr @uri +} +``` ++Let's look at examples where we can create tokens for common operations. ++- First, creating a token to use when creating a new database ++ ```bash + ISSUE_DATE=$(TZ=GMT date '+%a, %d %b %Y %T %Z') + CREATE_DB_TOKEN=$( create_cosmos_rest_token "$ISSUE_DATE" "$EMULATOR_KEY" "dbs" "" "post" ) + ``` ++- Next, creating a token to pass to the API for container creation ++ ```bash + DATABASE_ID="<database-name>" + + ISSUE_DATE=$(TZ=GMT date '+%a, %d %b %Y %T %Z') + CREATE_COLL_TOKEN=$( create_cosmos_rest_token "$ISSUE_DATE" "$EMULATOR_KEY" "colls" "dbs/$DATABASE_ID" "post" ) + ``` ++## Add test data ++Here are some examples that utilize the above function that generates the token. 
++- **Create a database** ++ ```bash + DB_ID="<database-name>" + echo "Creating a database $DB_ID" + + ISSUE_DATE=$(TZ=GMT date '+%a, %d %b %Y %T %Z') + CREATE_DB_TOKEN=$( create_cosmos_rest_token "$ISSUE_DATE" "$EMULATOR_KEY" "dbs" "" "post" ) + + curl -k --data "{\"id\":\"$DB_ID\"}" \ + -H "Content-Type: application/json" \ + -H "x-ms-date: $ISSUE_DATE" \ + -H "Authorization: $CREATE_DB_TOKEN" \ + -H "x-ms-version: 2015-08-06" \ + https://127.0.0.1:8081/dbs + ``` ++- **Create a container** ++ ```bash + DB_ID="<database-name>" + CONTAINER_ID="baz" + echo "Creating a container $CONTAINER_ID in the database $DB_ID" ++ ISSUE_DATE=$(TZ=GMT date '+%a, %d %b %Y %T %Z') + CREATE_CT_TOKEN=$( create_cosmos_rest_token "$ISSUE_DATE" "$EMULATOR_KEY" "colls" "dbs/$DB_ID" "post" ) + + curl -k --data "{\"id\":\"$CONTAINER_ID\", \"partitionKey\":{\"paths\":[\"/id\"], \"kind\":\"Hash\", \"Version\":2}}" \ + -H "Content-Type: application/json" \ + -H "x-ms-date: $ISSUE_DATE" \ + -H "Authorization: $CREATE_CT_TOKEN" \ + -H "x-ms-version: 2015-08-06" \ + "https://127.0.0.1:8081/dbs/$DB_ID/colls" + ``` ++## Next steps ++In this article, you've learned how to generate an authorization token and use it in subsequent API requests to your emulated Cosmos DB instance. ++To learn more about the Linux emulator, check out these articles: ++- [Run the emulator on Docker for Linux](linux-emulator.md) +- [Use the emulator on Docker for Windows](local-emulator-on-docker-windows.md) |
cosmos-db | Docker Emulator Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/docker-emulator-windows.md | + + Title: Running the emulator on Docker for Windows +titleSuffix: Running the Azure Cosmos DB emulator on Docker for Windows +description: Learn how to run and use the Azure Cosmos DB Emulator on Docker for Windows. Using the emulator you can develop and test your application locally for free, without creating an Azure subscription. +++++ Last updated : 04/20/2021+++# <a id="run-on-windows-docker"></a>Use the emulator on Docker for Windows ++You can run the Azure Cosmos DB Emulator on a Windows Docker container. See [GitHub](https://github.com/Azure/azure-cosmos-db-emulator-docker) for the `Dockerfile` and more information. Currently, the emulator does not work on Docker for Oracle Linux. Use the following instructions to run the emulator on Docker for Windows: ++1. After you have [Docker for Windows](https://www.docker.com/docker-windows) installed, switch to Windows containers by right-clicking the Docker icon on the toolbar and selecting **Switch to Windows containers**. ++1. Next, pull the emulator image from Docker Hub by running the following command from your favorite shell. ++ ```bash + docker pull mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator + ``` ++1.
To start the image, run one of the following commands, depending on whether you use the command line or PowerShell: ++ # [Command line](#tab/cli) ++ ```bash ++ md %LOCALAPPDATA%\CosmosDBEmulator\bind-mount ++ docker run --name azure-cosmosdb-emulator --memory 2GB --mount "type=bind,source=%LOCALAPPDATA%\CosmosDBEmulator\bind-mount,destination=C:\CosmosDB.Emulator\bind-mount" --interactive --tty -p 8081:8081 -p 8900:8900 -p 8901:8901 -p 8902:8902 -p 10250:10250 -p 10251:10251 -p 10252:10252 -p 10253:10253 -p 10254:10254 -p 10255:10255 -p 10256:10256 -p 10350:10350 mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator + ``` + Windows-based Docker images might not be generally compatible with every Windows host OS. For instance, the default Azure Cosmos DB Emulator image is only compatible with Windows 10 and Windows Server 2016. If you need an image that is compatible with Windows Server 2019, run the following command instead: ++ ```bash + docker run --name azure-cosmosdb-emulator --memory 2GB --mount "type=bind,source=%hostDirectory%,destination=C:\CosmosDB.Emulator\bind-mount" --interactive --tty -p 8081:8081 -p 8900:8900 -p 8901:8901 -p 8902:8902 -p 10250:10250 -p 10251:10251 -p 10252:10252 -p 10253:10253 -p 10254:10254 -p 10255:10255 -p 10256:10256 -p 10350:10350 mcr.microsoft.com/cosmosdb/winsrv2019/azure-cosmos-emulator:latest + ``` ++ # [PowerShell](#tab/powershell) ++ ```powershell ++ md $env:LOCALAPPDATA\CosmosDBEmulator\bind-mount 2>null ++ docker run --name azure-cosmosdb-emulator --memory 2GB --mount "type=bind,source=$env:LOCALAPPDATA\CosmosDBEmulator\bind-mount,destination=C:\CosmosDB.Emulator\bind-mount" --interactive --tty -p 8081:8081 -p 8900:8900 -p 8901:8901 -p 8902:8902 -p 10250:10250 -p 10251:10251 -p 10252:10252 -p 10253:10253 -p 10254:10254 -p 10255:10255 -p 10256:10256 -p 10350:10350 mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator ++ ``` ++ The response looks similar to the following: ++ ```bash + Starting emulator + Emulator Endpoint:
https://172.20.229.193:8081/ + Primary Key: C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw== + Exporting SSL Certificate + You can import the SSL certificate from an administrator command prompt on the host by running: + cd /d %LOCALAPPDATA%\CosmosDBEmulatorCert + powershell .\importcert.ps1 + -- + Starting interactive shell + ``` + ++ > [!NOTE] + > When executing the `docker run` command, if you see a port conflict error (that is, if the specified port is already in use), pass a custom port by altering the port numbers. For example, you can change the "-p 8081:8081" parameter to "-p 443:8081" ++1. Now use the emulator endpoint and primary key from the response and import the TLS/SSL certificate into your host. To import the TLS/SSL certificate, run the following steps from an admin command prompt: ++ # [Command line](#tab/cli) ++ ```bash + cd %LOCALAPPDATA%\CosmosDBEmulator\bind-mount + powershell .\importcert.ps1 + ``` ++ # [PowerShell](#tab/powershell) ++ ```powershell + cd $env:LOCALAPPDATA\CosmosDBEmulator\bind-mount + .\importcert.ps1 + ``` + ++1. If you close the interactive shell after the emulator has started, it will shut down the emulator's container. To reopen the data explorer, navigate to the following URL in your browser. The emulator endpoint is provided in the response message shown above. ++ `https://<emulator endpoint provided in response>/_explorer/index.html` ++If you have a .NET client application running in a Linux Docker container and you are running the Azure Cosmos DB Emulator on a host machine, use the instructions in the next section to import the certificate into the Linux Docker container. ++## Regenerate the emulator certificates ++When running the emulator in a Docker container, the certificates associated with the emulator are regenerated every time you stop and restart the respective container.
Because of that, you have to re-import the certificates after each container start. To work around this limitation, you can use a Docker compose file to bind the Docker container to a particular IP address and a container image. ++For example, you can use the following configuration within the Docker compose file; make sure to format it per your requirements: ++```yml +version: '2.4' # Do not upgrade to 3.x yet, unless you plan to use swarm/docker stack: https://github.com/docker/compose/issues/4513 ++networks: + default: + external: false + ipam: + driver: default + config: + - subnet: "172.16.238.0/24" ++services: ++ # First create a directory that will hold the emulator traces and certificate to be imported + # set hostDirectory=C:\emulator\bind-mount + # mkdir %hostDirectory% ++ cosmosdb: + container_name: "azurecosmosemulator" + hostname: "azurecosmosemulator" + image: 'mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator' + platform: windows + tty: true + mem_limit: 3GB + ports: + - '8081:8081' + - '8900:8900' + - '8901:8901' + - '8902:8902' + - '10250:10250' + - '10251:10251' + - '10252:10252' + - '10253:10253' + - '10254:10254' + - '10255:10255' + - '10256:10256' + - '10350:10350' + networks: + default: + ipv4_address: 172.16.238.246 + volumes: + - '${hostDirectory}:C:\CosmosDB.Emulator\bind-mount' +``` ++## Next steps ++In this article, you've learned how to use the local emulator for free local development. You can now proceed to the next articles: ++* [Export the Azure Cosmos DB Emulator certificates for use with Java, Python, and Node.js apps](local-emulator-export-ssl-certificates.md) +* [Use command line parameters and PowerShell commands to control the emulator](emulator-command-line-parameters.md) +* [Debug issues with the emulator](troubleshoot-local-emulator.md) |
cosmos-db | How To Setup Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-managed-identity.md | az cosmosdb identity remove \ ## Next steps > [!div class="nextstepaction"]-> [Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault](access-secrets-from-keyvault.md) +> [Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault](store-credentials-key-vault.md) - Learn more about [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) - Learn more about [customer-managed keys on Azure Cosmos DB](how-to-setup-cmk.md) |
cosmos-db | Import Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/import-data.md | - Title: 'Tutorial: Database migration tool for Azure Cosmos DB' -description: 'Tutorial: Learn how to use the open-source Azure Cosmos DB data migration tools to import data to Azure Cosmos DB from various sources including MongoDB, SQL Server, Table storage, Amazon DynamoDB, CSV, and JSON files. CSV to JSON conversion.' ------- Previously updated : 08/26/2021---# Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB --This tutorial provides instructions on using the Azure Cosmos DB Data Migration tool, which can import data from various sources into Azure Cosmos DB containers and tables. You can import from JSON files, CSV files, SQL, MongoDB, Azure Table storage, Amazon DynamoDB, and even Azure Cosmos DB API for NoSQL collections. You migrate that data to collections and tables for use with Azure Cosmos DB. The Data Migration tool can also be used when migrating from a single partition collection to a multi-partition collection for the API for NoSQL. --> [!NOTE] -> The Azure Cosmos DB Data Migration tool is an open source tool designed for small migrations. For larger migrations, view our [guide for ingesting data](migration-choices.md). --* **[API for NoSQL](./introduction.md)** - You can use any of the source options provided in the Data Migration tool to import data at a small scale. [Learn about migration options for importing data at a large scale](migration-choices.md). -* **[API for Table](table/introduction.md)** - You can use the Data Migration tool to import data. For more information, see [Import data for use with the Azure Cosmos DB API for Table](table/import.md). -* **[Azure Cosmos DB's API for MongoDB](mongodb/introduction.md)** - The Data Migration tool doesn't support Azure Cosmos DB's API for MongoDB either as a source or as a target. 
If you want to migrate the data in or out of collections in Azure Cosmos DB, refer to [How to migrate MongoDB data to an Azure Cosmos DB database with Azure Cosmos DB's API for MongoDB](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json) for instructions. You can still use the Data Migration tool to export data from MongoDB to Azure Cosmos DB API for NoSQL collections for use with the API for NoSQL. -* **[API for Cassandra](cassandra/introduction.md)** - The Data Migration tool isn't a supported import tool for API for Cassandra accounts. [Learn about migration options for importing data into API for Cassandra](migration-choices.md#azure-cosmos-db-api-for-cassandra) -* **[API for Gremlin](gremlin/introduction.md)** - The Data Migration tool isn't a supported import tool for API for Gremlin accounts at this time. [Learn about migration options for importing data into API for Gremlin](migration-choices.md#other-apis) --This tutorial covers the following tasks: --> [!div class="checklist"] -> * Installing the Data Migration tool -> * Importing data from different data sources -> * Exporting from Azure Cosmos DB to JSON --## <a id="Prerequisites"></a>Prerequisites --Before following the instructions in this article, ensure that you do the following steps: --* **Install** [Microsoft .NET Framework 4.5.1](https://www.microsoft.com/download/developer-tools.aspx) or higher. --* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual collection or a set of collections. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs. For more information about increasing throughput in the Azure portal, see [performance levels](performance-levels.md) and [pricing tiers](https://azure.microsoft.com/pricing/details/cosmos-db/) in Azure Cosmos DB. 
--* **Create Azure Cosmos DB resources:** Before you start migrating data, pre-create all your collections from the Azure portal. To migrate to an Azure Cosmos DB account that has database level throughput, provide a partition key when you create the Azure Cosmos DB containers. --> [!IMPORTANT] -> To make sure that the Data migration tool uses Transport Layer Security (TLS) 1.2 when connecting to your Azure Cosmos DB accounts, use the .NET Framework version 4.7 or follow the instructions found in [this article](/dotnet/framework/network-programming/tls). --## <a id="Overviewl"></a>Overview --The Data Migration tool is an open-source solution that imports data to Azure Cosmos DB from a variety of sources, including: --* JSON files -* MongoDB -* SQL Server -* CSV files -* Azure Table storage -* Amazon DynamoDB -* HBase -* Azure Cosmos DB containers --While the import tool includes a graphical user interface (dtui.exe), it can also be driven from the command-line (dt.exe). In fact, there's an option to output the associated command after setting up an import through the UI. You can transform tabular source data, such as SQL Server or CSV files, to create hierarchical relationships (subdocuments) during import. Keep reading to learn more about source options, sample commands to import from each source, target options, and viewing import results. --> [!NOTE] -> We recommend using [container copy jobs](intra-account-container-copy.md) for copying data within the same Azure Cosmos DB account. -> -> You should only use the Azure Cosmos DB migration tool for small migrations. For large migrations, view our [guide for ingesting data](migration-choices.md). 
--## <a id="Install"></a>Installation --### Download executable package -- * Download a zip of the latest signed **dt.exe** and **dtui.exe** Windows binaries [here](https://github.com/Azure/azure-documentdb-datamigrationtool/releases/tag/1.8.3) - * Unzip into any directory on your computer and open the extracted directory to find the binaries --### Build from source -- The migration tool source code is available on GitHub in [this repository](https://github.com/Azure/azure-documentdb-datamigrationtool/tree/archive). You can download and compile the solution locally then run either: -- * **Dtui.exe**: Graphical interface version of the tool - * **Dt.exe**: Command-line version of the tool --## Select data source --Once you've installed the tool, it's time to import your data. What kind of data do you want to import? --* [JSON files](#JSON) -* [MongoDB](#MongoDB) -* [MongoDB Export files](#MongoDBExport) -* [SQL Server](#SQL) -* [CSV files](#CSV) -* [Azure Table storage](#AzureTableSource) -* [Amazon DynamoDB](#DynamoDBSource) -* [Blob](#BlobImport) -* [Azure Cosmos DB containers](#SQLSource) -* [HBase](#HBaseSource) -* [Azure Cosmos DB bulk import](#SQLBulkTarget) -* [Azure Cosmos DB sequential record import](#SQLSeqTarget) --## <a id="JSON"></a>Import JSON files --The JSON file source importer option allows you to import one or more single document JSON files or JSON files that each have an array of JSON documents. When adding folders that have JSON files to import, you have the option of recursively searching for files in subfolders. ---The connection string is in the following format: --`AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>` --* The `<CosmosDB Endpoint>` is the endpoint URI. You can get this value from the Azure portal. Navigate to your Azure Cosmos DB account. Open the **Overview** pane and copy the **URI** value. -* The `<CosmosDB Key>` is the "Password" or **PRIMARY KEY**. 
You can get this value from the Azure portal. Navigate to your Azure Cosmos DB account. Open the **Connection Strings** or **Keys** pane, and copy the "Password" or **PRIMARY KEY** value. -* The `<CosmosDB Database>` is the CosmosDB database name. --Example: -`AccountEndpoint=https://myCosmosDBName.documents.azure.com:443/;AccountKey=wJmFRYna6ttQ79ATmrTMKql8vPri84QBiHTt6oinFkZRvoe7Vv81x9sn6zlVlBY10bEPMgGM982wfYXpWXWB9w==;Database=myDatabaseName` --> [!NOTE] -> Use the Verify command to ensure that the Azure Cosmos DB account specified in the connection string field can be accessed. --Here are some command-line samples to import JSON files: --```console -#Import a single JSON file -dt.exe /s:JsonFile /s.Files:.\Sessions.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:Sessions /t.CollectionThroughput:2500 --#Import a directory of JSON files -dt.exe /s:JsonFile /s.Files:C:\TESessions\*.json /t:DocumentDBBulk /t.ConnectionString:" AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:Sessions /t.CollectionThroughput:2500 --#Import a directory (including sub-directories) of JSON files -dt.exe /s:JsonFile /s.Files:C:\LastFMMusic\**\*.json /t:DocumentDBBulk /t.ConnectionString:" AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:Music /t.CollectionThroughput:2500 --#Import a directory (single), directory (recursive), and individual JSON files -dt.exe /s:JsonFile /s.Files:C:\Tweets\*.*;C:\LargeDocs\**\*.*;C:\TESessions\Session48172.json;C:\TESessions\Session48173.json;C:\TESessions\Session48174.json;C:\TESessions\Session48175.json;C:\TESessions\Session48177.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:subs /t.CollectionThroughput:2500 --#Import a single JSON 
file and partition the data across 4 collections -dt.exe /s:JsonFile /s.Files:D:\\CompanyData\\Companies.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:comp[1-4] /t.PartitionKey:name /t.CollectionThroughput:2500 -``` --## <a id="MongoDB"></a>Import from MongoDB --> [!IMPORTANT] -> If you're importing to an Azure Cosmos DB account configured with Azure Cosmos DB's API for MongoDB, follow these [instructions](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json). --With the MongoDB source importer option, you can import from a single MongoDB collection, optionally filter documents using a query, and modify the document structure by using a projection. ---The connection string is in the standard MongoDB format: --`mongodb://<dbuser>:<dbpassword>@<host>:<port>/<database>` --> [!NOTE] -> Use the Verify command to ensure that the MongoDB instance specified in the connection string field can be accessed. --Enter the name of the collection from which data will be imported. You may optionally specify or provide a file for a query, such as `{pop: {$gt:5000}}`, or a projection, such as `{loc:0}`, to both filter and shape the data that you're importing. 
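The query and projection shown above can be illustrated with a small Python sketch. This is not part of dt.exe; it only mimics what the example query `{pop: {$gt:5000}}` and projection `{loc:0}` do to a single document, and it supports only the operator forms used in this section:

```python
def matches(doc: dict, query: dict) -> bool:
    """Minimal matcher covering the forms used above: equality and $gt."""
    for field, cond in query.items():
        if isinstance(cond, dict):
            for op, operand in cond.items():
                # $gt: the field must exist and exceed the operand
                if op == "$gt" and not (field in doc and doc[field] > operand):
                    return False
        elif doc.get(field) != cond:
            return False
    return True

def project(doc: dict, projection: dict) -> dict:
    """A value of 0 excludes the named field, mirroring {loc:0}."""
    excluded = {f for f, flag in projection.items() if flag == 0}
    return {k: v for k, v in doc.items() if k not in excluded}

# Hypothetical "zips" document, shaped like the samples in this section
zip_doc = {"_id": "10001", "pop": 60000, "loc": [-73.99, 40.75]}
if matches(zip_doc, {"pop": {"$gt": 50000}}):
    shaped = project(zip_doc, {"loc": 0})  # drops the loc field
```

The real filtering and shaping happen server-side in MongoDB; this sketch only shows the effect on one document.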
--Here are some command-line samples to import from MongoDB: --```console -#Import all documents from a MongoDB collection -dt.exe /s:MongoDB /s.ConnectionString:mongodb://<dbuser>:<dbpassword>@<host>:<port>/<database> /s.Collection:zips /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:BulkZips /t.IdField:_id /t.CollectionThroughput:2500 --#Import documents from a MongoDB collection which match the query and exclude the loc field -dt.exe /s:MongoDB /s.ConnectionString:mongodb://<dbuser>:<dbpassword>@<host>:<port>/<database> /s.Collection:zips /s.Query:{pop:{$gt:50000}} /s.Projection:{loc:0} /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:BulkZipsTransform /t.IdField:_id /t.CollectionThroughput:2500 -``` --## <a id="MongoDBExport"></a>Import MongoDB export files --> [!IMPORTANT] -> If you're importing to an Azure Cosmos DB account with support for MongoDB, follow these [instructions](../dms/tutorial-mongodb-cosmos-db.md?toc=%2fazure%2fcosmos-db%2ftoc.json%253ftoc%253d%2fazure%2fcosmos-db%2ftoc.json). --The MongoDB export JSON file source importer option allows you to import one or more JSON files produced from the mongoexport utility. ---When adding folders that have MongoDB export JSON files for import, you have the option of recursively searching for files in subfolders. 
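The recursive folder search described above behaves like a recursive glob. A short Python sketch of the two modes (illustrative only, not how dt.exe is implemented) mirrors the `*.json` versus `**\*.json` patterns used in the `/s.Files` samples:

```python
from pathlib import Path

def find_json_files(root: str, recursive: bool = False) -> list[str]:
    """Collect .json files in a folder; with recursive=True also search
    subfolders, like the ** wildcard in the /s.Files samples."""
    pattern = "**/*.json" if recursive else "*.json"
    return sorted(str(p) for p in Path(root).glob(pattern))
```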
--Here is a command-line sample to import from MongoDB export JSON files: --```console -dt.exe /s:MongoDBExport /s.Files:D:\mongoemployees.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:employees /t.IdField:_id /t.Dates:Epoch /t.CollectionThroughput:2500 -``` --## <a id="SQL"></a>Import from SQL Server --The SQL source importer option allows you to import from an individual SQL Server database and optionally filter the records to be imported using a query. In addition, you can modify the document structure by specifying a nesting separator (more on that in a moment). ---The format of the connection string is the standard SQL connection string format. --> [!NOTE] -> Use the Verify command to ensure that the SQL Server instance specified in the connection string field can be accessed. --The nesting separator property is used to create hierarchical relationships (sub-documents) during import. Consider the following SQL query: --`select CAST(BusinessEntityID AS varchar) as Id, Name, AddressType as [Address.AddressType], AddressLine1 as [Address.AddressLine1], City as [Address.Location.City], StateProvinceName as [Address.Location.StateProvinceName], PostalCode as [Address.PostalCode], CountryRegionName as [Address.CountryRegionName] from Sales.vStoreWithAddresses WHERE AddressType='Main Office'` --Which returns the following (partial) results: ---Note the aliases such as Address.AddressType and Address.Location.StateProvinceName. By specifying a nesting separator of '.', the import tool creates Address and Address.Location subdocuments during the import. 
Here is an example of a resulting document in Azure Cosmos DB: --{ - "id": "956", - "Name": "Finer Sales and Service", - "Address": { - "AddressType": "Main Office", - "AddressLine1": "#500-75 O'Connor Street", - "Location": { - "City": "Ottawa", - "StateProvinceName": "Ontario" - }, - "PostalCode": "K4B 1S2", - "CountryRegionName": "Canada" - } -} --Here are some command-line samples to import from SQL Server: --```console -#Import records from SQL which match a query -dt.exe /s:SQL /s.ConnectionString:"Data Source=<server>;Initial Catalog=AdventureWorks;User Id=advworks;Password=<password>;" /s.Query:"select CAST(BusinessEntityID AS varchar) as Id, * from Sales.vStoreWithAddresses WHERE AddressType='Main Office'" /t:DocumentDBBulk /t.ConnectionString:" AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:Stores /t.IdField:Id /t.CollectionThroughput:2500 --#Import records from sql which match a query and create hierarchical relationships -dt.exe /s:SQL /s.ConnectionString:"Data Source=<server>;Initial Catalog=AdventureWorks;User Id=advworks;Password=<password>;" /s.Query:"select CAST(BusinessEntityID AS varchar) as Id, Name, AddressType as [Address.AddressType], AddressLine1 as [Address.AddressLine1], City as [Address.Location.City], StateProvinceName as [Address.Location.StateProvinceName], PostalCode as [Address.PostalCode], CountryRegionName as [Address.CountryRegionName] from Sales.vStoreWithAddresses WHERE AddressType='Main Office'" /s.NestingSeparator:. /t:DocumentDBBulk /t.ConnectionString:" AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:StoresSub /t.IdField:Id /t.CollectionThroughput:2500 -``` --## <a id="CSV"></a>Import CSV files and convert CSV to JSON --The CSV file source importer option enables you to import one or more CSV files. 
When adding folders that have CSV files for import, you have the option of recursively searching for files in subfolders. ---Similar to the SQL source, the nesting separator property may be used to create hierarchical relationships (sub-documents) during import. Consider the following CSV header row and data rows: ---Note the aliases such as DomainInfo.Domain_Name and RedirectInfo.Redirecting. By specifying a nesting separator of '.', the import tool will create DomainInfo and RedirectInfo subdocuments during the import. Here is an example of a resulting document in Azure Cosmos DB: --{ - "DomainInfo": { - "Domain_Name": "ACUS.GOV", - "Domain_Name_Address": "https:\//www.ACUS.GOV" - }, - "Federal Agency": "Administrative Conference of the United States", - "RedirectInfo": { - "Redirecting": "0", - "Redirect_Destination": "" - }, - "id": "9cc565c5-ebcd-1c03-ebd3-cc3e2ecd814d" -} --The import tool tries to infer type information for unquoted values in CSV files (quoted values are always treated as strings). Types are identified in the following order: number, datetime, boolean. --There are two other things to note about CSV import: --1. By default, unquoted values are always trimmed for tabs and spaces, while quoted values are preserved as-is. This behavior can be overridden with the Trim quoted values checkbox or the /s.TrimQuoted command-line option. -2. By default, an unquoted null is treated as a null value. This behavior can be overridden (that is, treat an unquoted null as a "null" string) with the Treat unquoted NULL as string checkbox or the /s.NoUnquotedNulls command-line option. 
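The subdocument creation that the nesting separator performs for the SQL and CSV sources can be sketched in Python. This is illustrative only (the tool itself does this transformation during import); the sample row below is abridged from the SQL example earlier in this article:

```python
def nest(flat: dict, sep: str = ".") -> dict:
    """Turn dotted column aliases like 'Address.Location.City' into
    nested subdocuments, as the import tool does with a '.' separator."""
    doc: dict = {}
    for key, value in flat.items():
        parts = key.split(sep)
        node = doc
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # create subdocument on demand
        node[parts[-1]] = value
    return doc

row = {
    "Id": "956",
    "Name": "Finer Sales and Service",
    "Address.AddressType": "Main Office",
    "Address.Location.City": "Ottawa",
    "Address.Location.StateProvinceName": "Ontario",
}
result = nest(row)
```

The call produces the same shape as the resulting document shown above: `Address` and `Address.Location` become nested objects.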
--Here is a command-line sample for CSV import: --```console -dt.exe /s:CsvFile /s.Files:.\Employees.csv /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:Employees /t.IdField:EntityID /t.CollectionThroughput:2500 -``` --## <a id="AzureTableSource"></a>Import from Azure Table storage --The Azure Table storage source importer option allows you to import from an individual Azure Table storage table. Optionally, you can filter the table entities to be imported. --You may output data that was imported from Azure Table Storage to Azure Cosmos DB tables and entities for use with the API for Table. Imported data can also be output to collections and documents for use with the API for NoSQL. However, API for Table is only available as a target in the command-line utility. You can't export to API for Table by using the Data Migration tool user interface. For more information, see [Import data for use with the Azure Cosmos DB API for Table](table/import.md). ---The format of the Azure Table storage connection string is: --`DefaultEndpointsProtocol=<protocol>;AccountName=<Account Name>;AccountKey=<Account Key>;` --> [!NOTE] -> Use the Verify command to ensure that the Azure Table storage instance specified in the connection string field can be accessed. --Enter the name of the Azure table to import from. You may optionally specify a [filter](/visualstudio/azure/vs-azure-tools-table-designer-construct-filter-strings). --The Azure Table storage source importer option has the following additional options: --1. Include Internal Fields - 1. All - Include all internal fields (PartitionKey, RowKey, and Timestamp) - 2. None - Exclude all internal fields - 3. RowKey - Only include the RowKey field -2. Select Columns - 1. Azure Table storage filters don't support projections. If you want to only import specific Azure Table entity properties, add them to the Select Columns list. 
All other entity properties are ignored. --Here is a command-line sample to import from Azure Table storage: --```console -dt.exe /s:AzureTable /s.ConnectionString:"DefaultEndpointsProtocol=https;AccountName=<Account Name>;AccountKey=<Account Key>" /s.Table:metrics /s.InternalFields:All /s.Filter:"PartitionKey eq 'Partition1' and RowKey gt '00001'" /s.Projection:ObjectCount;ObjectSize /t:DocumentDBBulk /t.ConnectionString:" AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:metrics /t.CollectionThroughput:2500 -``` --## <a id="DynamoDBSource"></a>Import from Amazon DynamoDB --The Amazon DynamoDB source importer option allows you to import from a single Amazon DynamoDB table. It can optionally filter the entities to be imported. Several templates are provided so that setting up an import is as easy as possible. ----The format of the Amazon DynamoDB connection string is: --`ServiceURL=<Service Address>;AccessKey=<Access Key>;SecretKey=<Secret Key>;` --> [!NOTE] -> Use the Verify command to ensure that the Amazon DynamoDB instance specified in the connection string field can be accessed. --Here is a command-line sample to import from Amazon DynamoDB: --```console -dt.exe /s:DynamoDB /s.ConnectionString:ServiceURL=https://dynamodb.us-east-1.amazonaws.com;AccessKey=<accessKey>;SecretKey=<secretKey> /s.Request:"{ """TableName""": """ProductCatalog""" }" /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<Azure Cosmos DB Endpoint>;AccountKey=<Azure Cosmos DB Key>;Database=<Azure Cosmos DB database>;" /t.Collection:catalogCollection /t.CollectionThroughput:2500 -``` --## <a id="BlobImport"></a>Import from Azure Blob storage --The JSON file, MongoDB export file, and CSV file source importer options allow you to import one or more files from Azure Blob storage. After specifying a Blob container URL and Account Key, provide a regular expression to select the file(s) to import. 
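The regular-expression file selection described above can be sketched as follows. This is illustrative only; in particular, whether the tool anchors the expression against the whole blob name is an assumption here (`re.fullmatch`), which is why the sample pattern `.*` prefix mirrors the `importcontainer/.*` style used in the command-line sample below:

```python
import re

def select_blobs(blob_names: list[str], pattern: str) -> list[str]:
    """Pick blobs whose names match the given regular expression.
    Anchored matching (fullmatch) is an assumption, not documented behavior."""
    rx = re.compile(pattern)
    return [name for name in blob_names if rx.fullmatch(name)]

selected = select_blobs(["a.json", "b.csv", "c.json"], r".*\.json")
```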
---Here is a command-line sample to import JSON files from Azure Blob storage: --```console -dt.exe /s:JsonFile /s.Files:"blobs://<account key>@account.blob.core.windows.net:443/importcontainer/.*" /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:doctest -``` --## <a id="SQLSource"></a>Import from an API for NoSQL collection --The Azure Cosmos DB source importer option allows you to import data from one or more Azure Cosmos DB containers and optionally filter documents using a query. ---The format of the Azure Cosmos DB connection string is: --`AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;` --You can retrieve the Azure Cosmos DB account connection string from the Keys page of the Azure portal, as described in [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md). However, the name of the database needs to be appended to the connection string in the following format: --`Database=<CosmosDB Database>;` --> [!NOTE] -> Use the Verify command to ensure that the Azure Cosmos DB instance specified in the connection string field can be accessed. --To import from a single Azure Cosmos DB container, enter the name of the collection to import data from. To import from more than one Azure Cosmos DB container, provide a regular expression to match one or more collection names (for example, collection01 | collection02 | collection03). You may optionally specify, or provide a file for, a query to both filter and shape the data that you're importing. --> [!NOTE] -> Since the collection field accepts regular expressions, if you're importing from a single collection whose name has regular expression characters, then those characters must be escaped accordingly. --The Azure Cosmos DB source importer option has the following advanced options: --1. 
Include Internal Fields: Specifies whether or not to include Azure Cosmos DB document system properties in the export (for example, _rid, _ts). -2. Number of Retries on Failure: Specifies the number of times to retry the connection to Azure Cosmos DB in case of transient failures (for example, network connectivity interruption). -3. Retry Interval: Specifies how long to wait between retrying the connection to Azure Cosmos DB in case of transient failures (for example, network connectivity interruption). -4. Connection Mode: Specifies the connection mode to use with Azure Cosmos DB. The available choices are DirectTcp, DirectHttps, and Gateway. The direct connection modes are faster, while the gateway mode is more firewall friendly as it only uses port 443. ---> [!TIP] -> The import tool defaults to connection mode DirectTcp. If you experience firewall issues, switch to connection mode Gateway, as it only requires port 443. --Here are some command-line samples to import from Azure Cosmos DB: --```console -#Migrate data from one Azure Cosmos DB container to another Azure Cosmos DB container -dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /s.Collection:TEColl /t:DocumentDBBulk /t.ConnectionString:" AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:TESessions /t.CollectionThroughput:2500 --#Migrate data from more than one Azure Cosmos DB container to a single Azure Cosmos DB container -dt.exe /s:DocumentDB /s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /s.Collection:comp1|comp2|comp3|comp4 /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:singleCollection /t.CollectionThroughput:2500 --#Export an Azure Cosmos DB container to a JSON file -dt.exe /s:DocumentDB 
/s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /s.Collection:StoresSub /t:JsonFile /t.File:StoresExport.json /t.Overwrite -``` --> [!TIP] -> The Azure Cosmos DB Data Import Tool also supports import of data from the [Azure Cosmos DB Emulator](local-emulator.md). When importing data from a local emulator, set the endpoint to `https://localhost:<port>`. --## <a id="HBaseSource"></a>Import from HBase --The HBase source importer option allows you to import data from an HBase table and optionally filter the data. Several templates are provided so that setting up an import is as easy as possible. ----The format of the HBase Stargate connection string is: --`ServiceURL=<server-address>;Username=<username>;Password=<password>` --> [!NOTE] -> Use the Verify command to ensure that the HBase instance specified in the connection string field can be accessed. --Here is a command-line sample to import from HBase: --```console -dt.exe /s:HBase /s.ConnectionString:ServiceURL=<server-address>;Username=<username>;Password=<password> /s.Table:Contacts /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;" /t.Collection:hbaseimport -``` --## <a id="SQLBulkTarget"></a>Import to the API for NoSQL (Bulk Import) --The Azure Cosmos DB Bulk importer allows you to import from any of the available source options, using an Azure Cosmos DB stored procedure for efficiency. The tool supports import to one single-partitioned Azure Cosmos DB container. It also supports sharded import whereby data is partitioned across more than one single-partitioned Azure Cosmos DB container. For more information about partitioning data, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md). The tool creates, executes, and then deletes the stored procedure from the target collection(s). 
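The sharded import mentioned above distributes documents across more than one target collection based on a partition key. The tool's actual hash function is not documented here; the following sketch uses md5 as an illustrative stand-in to show the idea of deterministic, hash-based placement:

```python
import hashlib

def target_collection(doc: dict, partition_key: str, collections: list[str]) -> str:
    """Deterministically pick a target collection from the partition key value.
    md5 is an illustrative stand-in, not the tool's documented hash."""
    value = str(doc[partition_key]).encode("utf-8")
    digest = hashlib.md5(value).hexdigest()
    return collections[int(digest, 16) % len(collections)]

# Hypothetical targets matching the comp[1-4] naming used in the samples
cols = ["comp1", "comp2", "comp3", "comp4"]
chosen = target_collection({"name": "Contoso"}, "name", cols)
```

The same partition key value always lands in the same collection, which is what makes the sharding stable across import runs.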
---The format of the Azure Cosmos DB connection string is: --`AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;` --The Azure Cosmos DB account connection string can be retrieved from the Keys page of the Azure portal, as described in [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md), however the name of the database needs to be appended to the connection string in the following format: --`Database=<CosmosDB Database>;` --> [!NOTE] -> Use the Verify command to ensure that the Azure Cosmos DB instance specified in the connection string field can be accessed. --To import to a single collection, enter the name of the collection to import data from and click the Add button. To import to more than one collection, either enter each collection name individually or use the following syntax to specify more than one collection: *collection_prefix*[start index - end index]. When specifying more than one collection using the aforementioned syntax, keep the following guidelines in mind: --1. Only integer range name patterns are supported. For example, specifying collection[0-3] creates the following collections: collection0, collection1, collection2, collection3. -2. You can use an abbreviated syntax: collection[3] creates the same set of collections mentioned in step 1. -3. More than one substitution can be provided. For example, collection[0-1] [0-9] generates 20 collection names with leading zeros (collection01, ..02, ..03). --Once the collection name(s) have been specified, choose the desired throughput of the collection(s) (400 RUs to 10,000 RUs). For best import performance, choose a higher throughput. For more information about performance levels, see [Performance levels in Azure Cosmos DB](performance-levels.md). --> [!NOTE] -> The performance throughput setting only applies to collection creation. If the specified collection already exists, its throughput won't be modified. 
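The *collection_prefix*[start index - end index] naming syntax described above can be expanded with a small Python sketch (illustrative only; it assumes the bracket groups concatenate left to right, which matches the documented `collection[0-1][0-9]` example producing 20 names):

```python
import re

def expand_collection_names(pattern: str) -> list[str]:
    """Expand prefix[start-end], the abbreviated prefix[n] (== [0-n]),
    and multiple bracket groups, e.g. collection[0-1][0-9] -> 20 names."""
    parts = re.split(r"\[([^\]]*)\]", pattern)
    names = [""]
    for i, part in enumerate(parts):
        if i % 2 == 0:  # literal text between bracket groups
            names = [n + part for n in names]
        else:           # a [start-end] or abbreviated [n] group
            if "-" in part:
                lo, hi = (int(x) for x in part.split("-"))
            else:
                lo, hi = 0, int(part)
            names = [n + str(d) for n in names for d in range(lo, hi + 1)]
    return names
```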
--When you import to more than one collection, the import tool supports hash-based sharding. In this scenario, specify the document property you wish to use as the Partition Key. (If Partition Key is left blank, documents are sharded randomly across the target collections.) --You may optionally specify which field in the import source should be used as the Azure Cosmos DB document ID property during the import. If documents don't have this property, then the import tool generates a GUID as the ID property value. --There are a number of advanced options available during import. First, while the tool includes a default bulk import stored procedure (BulkInsert.js), you may choose to specify your own import stored procedure: -- :::image type="content" source="./media/import-data/bulkinsertsp.png" alt-text="Screenshot of Azure Cosmos DB bulk insert sproc option"::: --Additionally, when importing date types (for example, from SQL Server or MongoDB), you can choose between three import options: -- :::image type="content" source="./media/import-data/datetimeoptions.png" alt-text="Screenshot of Azure Cosmos DB date time import options"::: --* String: Persist as a string value -* Epoch: Persist as an Epoch number value -* Both: Persist both string and Epoch number values. This option creates a subdocument, for example: - "date_joined": { - "Value": "2013-10-21T21:17:25.2410000Z", - "Epoch": 1382390245 - } --The Azure Cosmos DB Bulk importer has the following additional advanced options: --1. Batch Size: The tool defaults to a batch size of 50. If the documents to be imported are large, consider lowering the batch size. Conversely, if the documents to be imported are small, consider raising the batch size. -2. Max Script Size (bytes): The tool defaults to a max script size of 512 KB. -3. Disable Automatic Id Generation: If every document to be imported has an ID field, then selecting this option can increase performance. Documents missing a unique ID field aren't imported. 
-4. Update Existing Documents: The tool defaults to not replacing existing documents with ID conflicts. Selecting this option allows overwriting existing documents with matching IDs. This feature is useful for scheduled data migrations that update existing documents. -5. Number of Retries on Failure: Specifies how often to retry the connection to Azure Cosmos DB during transient failures (for example, network connectivity interruption). -6. Retry Interval: Specifies how long to wait between retrying the connection to Azure Cosmos DB in case of transient failures (for example, network connectivity interruption). -7. Connection Mode: Specifies the connection mode to use with Azure Cosmos DB. The available choices are DirectTcp, DirectHttps, and Gateway. The direct connection modes are faster, while the gateway mode is more firewall friendly as it only uses port 443. ---> [!TIP] -> The import tool defaults to connection mode DirectTcp. If you experience firewall issues, switch to connection mode Gateway, as it only requires port 443. --## <a id="SQLSeqTarget"></a>Import to the API for NoSQL (Sequential Record Import) --The Azure Cosmos DB sequential record importer allows you to import from an available source option on a record-by-record basis. You might choose this option if you're importing to an existing collection that has reached its quota of stored procedures. The tool supports import to a single (both single-partition and multi-partition) Azure Cosmos DB container. It also supports sharded import whereby data is partitioned across more than one single-partition or multi-partition Azure Cosmos DB container. For more information about partitioning data, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md). 
---The format of the Azure Cosmos DB connection string is: --`AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB Database>;` --You can retrieve the connection string for the Azure Cosmos DB account from the Keys page of the Azure portal, as described in [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md). However, the name of the database needs to be appended to the connection string in the following format: --`Database=<Azure Cosmos DB database>;` --> [!NOTE] -> Use the Verify command to ensure that the Azure Cosmos DB instance specified in the connection string field can be accessed. --To import to a single collection, enter the name of the collection to import data into, and then click the Add button. To import to more than one collection, enter each collection name individually. You may also use the following syntax to specify more than one collection: *collection_prefix*[start index - end index]. When specifying more than one collection via this syntax, keep the following guidelines in mind: --1. Only integer range name patterns are supported. For example, specifying collection[0-3] creates the following collections: collection0, collection1, collection2, collection3. -2. You can use an abbreviated syntax: collection[3] creates the same set of collections mentioned in step 1. -3. More than one substitution can be provided. For example, collection[0-1] [0-9] creates 20 collection names with leading zeros (collection01, ..02, ..03). --Once the collection name(s) have been specified, choose the desired throughput of the collection(s) (400 RUs to 250,000 RUs). For best import performance, choose a higher throughput. For more information about performance levels, see [Performance levels in Azure Cosmos DB](performance-levels.md). Any import to collections with throughput >10,000 RUs requires a partition key.
If you choose to have more than 250,000 RUs, you need to file a request in the portal to have your account increased. --> [!NOTE] -> The throughput setting only applies to collection or database creation. If the specified collection already exists, its throughput won't be modified. --When importing to more than one collection, the import tool supports hash-based sharding. In this scenario, specify the document property you wish to use as the Partition Key. (If Partition Key is left blank, documents are sharded randomly across the target collections.) --You may optionally specify which field in the import source should be used as the Azure Cosmos DB document ID property during the import. (If documents don't have this property, then the import tool generates a GUID as the ID property value.) --There are a number of advanced options available during import. First, when importing date types (for example, from SQL Server or MongoDB), you can choose between three import options: -- :::image type="content" source="./media/import-data/datetimeoptions.png" alt-text="Screenshot of Azure Cosmos DB date time import options"::: --* String: Persist as a string value -* Epoch: Persist as an Epoch number value -* Both: Persist both string and Epoch number values. This option creates a subdocument, for example: - "date_joined": { - "Value": "2013-10-21T21:17:25.2410000Z", - "Epoch": 1382390245 - } --The Azure Cosmos DB - Sequential record importer has the following additional advanced options: --1. Number of Parallel Requests: The tool defaults to two parallel requests. If the documents to be imported are small, consider raising the number of parallel requests. If this number is raised too much, the import may experience rate limiting. -2. Disable Automatic Id Generation: If every document to be imported has an ID field, then selecting this option can increase performance. Documents missing a unique ID field aren't imported. -3. 
Update Existing Documents: The tool defaults to not replacing existing documents with ID conflicts. Selecting this option allows overwriting existing documents with matching IDs. This feature is useful for scheduled data migrations that update existing documents. -4. Number of Retries on Failure: Specifies how many times to retry the connection to Azure Cosmos DB during transient failures (for example, network connectivity interruption). -5. Retry Interval: Specifies how long to wait between retries of the connection to Azure Cosmos DB during transient failures (for example, network connectivity interruption). -6. Connection Mode: Specifies the connection mode to use with Azure Cosmos DB. The available choices are DirectTcp, DirectHttps, and Gateway. The direct connection modes are faster, while the gateway mode is more firewall-friendly as it only uses port 443. ---> [!TIP] -> The import tool defaults to connection mode DirectTcp. If you experience firewall issues, switch to connection mode Gateway, as it only requires port 443. --## <a id="IndexingPolicy"></a>Specify an indexing policy --When you allow the migration tool to create Azure Cosmos DB API for NoSQL collections during import, you can specify the indexing policy of the collections. In the advanced options section of the Azure Cosmos DB Bulk import and Azure Cosmos DB Sequential record options, navigate to the Indexing Policy section. ---Using the Indexing Policy advanced option, you can select an indexing policy file, manually enter an indexing policy, or select from a set of default templates (by right-clicking in the indexing policy textbox). --The policy templates the tool provides are: --* Default. This policy is best when you perform equality queries against strings. It also works if you use ORDER BY, range, and equality queries for numbers. This policy has a lower index storage overhead than Range. -* Range.
This policy is best when you use ORDER BY, range, and equality queries on both numbers and strings. This policy has a higher index storage overhead than Default or Hash. ---> [!NOTE] -> If you don't specify an indexing policy, then the default policy is applied. For more information about indexing policies, see [Azure Cosmos DB indexing policies](index-policy.md). --## Export to JSON file --The Azure Cosmos DB JSON exporter allows you to export any of the available source options to a JSON file that has an array of JSON documents. The tool handles the export for you. Alternatively, you can choose to view the resulting migration command and run the command yourself. The resulting JSON file may be stored locally or in Azure Blob storage. ----You may optionally choose to prettify the resulting JSON. This action will increase the size of the resulting document while making the contents more human readable. --* Standard JSON export -- ```JSON - [{"id":"Sample","Title":"About Paris","Language":{"Name":"English"},"Author":{"Name":"Don","Location":{"City":"Paris","Country":"France"}},"Content":"Don's document in Azure Cosmos DB is a valid JSON document as defined by the JSON spec.","PageViews":10000,"Topics":[{"Title":"History of Paris"},{"Title":"Places to see in Paris"}]}] - ``` --* Prettified JSON export -- ```JSON - [ - { - "id": "Sample", - "Title": "About Paris", - "Language": { - "Name": "English" - }, - "Author": { - "Name": "Don", - "Location": { - "City": "Paris", - "Country": "France" - } - }, - "Content": "Don's document in Azure Cosmos DB is a valid JSON document as defined by the JSON spec.", - "PageViews": 10000, - "Topics": [ - { - "Title": "History of Paris" - }, - { - "Title": "Places to see in Paris" - } - ] - }] - ``` --Here is a command-line sample to export the JSON file to Azure Blob storage: --```console -dt.exe /ErrorDetails:All /s:DocumentDB /s.ConnectionString:"AccountEndpoint=<CosmosDB Endpoint>;AccountKey=<CosmosDB Key>;Database=<CosmosDB 
database_name>" /s.Collection:<CosmosDB collection_name> -/t:JsonFile /t.File:"blobs://<Storage account key>@<Storage account name>.blob.core.windows.net:443/<Container_name>/<Blob_name>" -/t.Overwrite -``` --## Advanced configuration --In the Advanced configuration screen, specify the location of the log file to which you would like any errors written. The following rules apply to this page: --1. If a file name isn't provided, then all errors are returned on the Results page. -2. If a file name is provided without a directory, then the file is created (or overwritten) in the current environment directory. -3. If you select an existing file, then the file is overwritten; there's no append option. -4. Choose whether to log all, critical, or no error messages. Finally, decide how frequently the on-screen transfer message is updated with its progress. -- :::image type="content" source="./media/import-data/AdvancedConfiguration.png" alt-text="Screenshot of Advanced configuration screen"::: --## Confirm import settings and view command line --1. After you specify the source information, target information, and advanced configuration, review the migration summary and view or copy the resulting migration command if you want. (Copying the command is useful to automate import operations.) -- :::image type="content" source="./media/import-data/summary.png" alt-text="Screenshot of summary screen."::: -- :::image type="content" source="./media/import-data/summarycommand.png" alt-text="Screenshot of summary screen with Command Line Preview."::: --2. Once you're satisfied with your source and target options, click **Import**. The elapsed time, transferred count, and failure information (if you didn't provide a file name in the Advanced configuration) update while the import is in progress. Once complete, you can export the results (for example, to deal with any import failures).
-- :::image type="content" source="./media/import-data/viewresults.png" alt-text="Screenshot of Azure Cosmos DB JSON export option."::: --3. You may also start a new import by either resetting all values or keeping the existing settings. (For example, you may choose to keep connection string information, source and target choice, and more.) -- :::image type="content" source="./media/import-data/newimport.png" alt-text="Screenshot of Azure Cosmos DB JSON export option with the New Import confirmation dialog box."::: --## Next steps --In this tutorial, you've done the following tasks: --> [!div class="checklist"] -> * Installed the Data Migration tool -> * Imported data from different data sources -> * Exported from Azure Cosmos DB to JSON --You can now proceed to the next tutorial and learn how to query data using Azure Cosmos DB. --Trying to do capacity planning for a migration to Azure Cosmos DB? - * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) - * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) --> [!div class="nextstepaction"] ->[How to query data?](nosql/tutorial-query.md) |
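The collection-naming shorthand described in the sequential-import steps above (*collection_prefix*[start index - end index]) can be sketched as a small expansion routine. This is an illustrative approximation of the three listed rules (integer ranges, the abbreviated `collection[3]` form, and multiple substitutions), not the tool's actual parser:

```python
import re

def expand_collection_names(pattern):
    """Expand a collection_prefix[start-end] pattern into concrete names.

    Illustrative sketch of the importer's naming rules:
      'collection[0-3]' -> collection0 .. collection3
      'collection[3]'   -> abbreviated form, same as [0-3]
      'c[0-1][0-9]'     -> two substitutions, 20 names (c00 .. c19)
    """
    names = [""]
    pos = 0
    for m in re.finditer(r"\[(\d+)(?:\s*-\s*(\d+))?\]", pattern):
        literal = pattern[pos:m.start()]  # text before this bracket group
        if m.group(2) is None:
            start, end = 0, int(m.group(1))  # abbreviated [N] means [0-N]
        else:
            start, end = int(m.group(1)), int(m.group(2))
        # Cross every name built so far with this range of indices.
        names = [n + literal + str(i) for n in names for i in range(start, end + 1)]
        pos = m.end()
    return [n + pattern[pos:] for n in names]
```

For example, `expand_collection_names("collection[0-3]")` yields the four names listed in guideline 1, and two bracket groups multiply as in guideline 3.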
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md | Get started with Azure Cosmos DB with one of our quickstarts: - [Get started with Azure Cosmos DB for Apache Cassandra](cassandr) - [Get started with Azure Cosmos DB for Apache Gremlin](gremlin/quickstart-dotnet.md) - [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)+- [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-app-stacks-python.md) - [A whitepaper on next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/) - Trying to do capacity planning for a migration to Azure Cosmos DB? - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md) |
cosmos-db | Local Emulator Export Ssl Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-export-ssl-certificates.md | Once the "CosmosDBEmulatorCertificate" TLS/SSL certificate is installed, your ap ## Use the certificate with Python apps -When connecting to the emulator from Python apps, TLS verification is disabled. By default the [Python SDK(version 2.0.0 or higher)](nosql/sdk-python.md) for the API for NoSQL will not try to use the TLS/SSL certificate when connecting to the local emulator. If however you want to use TLS validation, you can follow the examples in the [Python socket wrappers](https://docs.python.org/2/library/ssl.html) documentation. +When connecting to the emulator from Python apps, TLS verification is disabled. By default the [Python SDK](nosql/quickstart-python.md) for the API for NoSQL will not try to use the TLS/SSL certificate when connecting to the local emulator. If however you want to use TLS validation, you can follow the examples in the [Python socket wrappers](https://docs.python.org/3/library/ssl.html) documentation. ## How to use the certificate in Node.js |
cosmos-db | Local Emulator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator.md | The Azure Cosmos DB Emulator provides a high-fidelity emulation of the Azure Cos While emulation of the Azure Cosmos DB service is faithful, the emulator's implementation is different than the service. For example, the emulator uses standard OS components such as the local file system for persistence, and the HTTPS protocol stack for connectivity. Functionality that relies on the Azure infrastructure like global replication, single-digit millisecond latency for reads/writes, and tunable consistency levels are not applicable when you use the emulator. -You can migrate data between the Azure Cosmos DB Emulator and the Azure Cosmos DB service by using the [Azure Cosmos DB Data Migration Tool](https://github.com/azure/azure-documentdb-datamigrationtool). - ## Differences between the emulator and the cloud service Because the Azure Cosmos DB Emulator provides an emulated environment that runs on the local developer workstation, there are some differences in functionality between the emulator and an Azure Cosmos DB account in the cloud: |
cosmos-db | Managed Identity Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md | In this step, you'll query the document endpoint for the API for NoSQL account. ## Grant access to your Azure Cosmos DB account -In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity for control-plane access. For data-plane access, you'll create a new custom role with acess to read metadata. +In this step, you'll assign a role to the function app's system-assigned managed identity. Azure Cosmos DB has multiple built-in roles that you can assign to the managed identity for control-plane access. For data-plane access, you'll create a new custom role with access to read metadata. > [!TIP] > For more information about the importance of least privilege access, see the [Lower exposure of privileged accounts](../security/fundamentals/identity-management-best-practices.md#lower-exposure-of-privileged-accounts) article. Once published, the ``DefaultAzureCredential`` class will use credentials from t ## Next steps - [Certificate-based authentication with Azure Cosmos DB and Azure Active Directory](certificate-based-authentication.md)-- [Secure Azure Cosmos DB keys using Azure Key Vault](access-secrets-from-keyvault.md)+- [Secure Azure Cosmos DB keys using Azure Key Vault](store-credentials-key-vault.md) - [Security baseline for Azure Cosmos DB](security-baseline.md) |
cosmos-db | Merge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md | To enroll in the preview, your Azure Cosmos DB account must meet all the followi - Azure Functions - Azure Search - Azure Cosmos DB Spark connector- - Azure Cosmos DB data migration tool - Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET V3 SDK v3.27.0 or higher ### Account resources and configuration If you enroll in the preview, the following connectors will fail. - Azure Functions <sup>1</sup> - Azure Search <sup>1</sup> - Azure Cosmos DB Spark connector <sup>1</sup>-- Azure Cosmos DB data migration tool - Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET V3 SDK v3.27.0 or higher <sup>1</sup> Support for these connectors is planned for the future. |
cosmos-db | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate.md | Before you migrate the entire workload to Azure Cosmos DB, you can migrate a sub ## Tools for data migration -Azure Cosmos DB migration strategies currently differ based on the API choice and the size of the data. To migrate smaller datasets – for validating data modeling, query performance, partition key choice etc. – you can choose the [Data Migration Tool](import-data.md) or [Azure Data Factory's Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md). If you are familiar with Spark, you can also choose to use the [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) to migrate data. +Azure Cosmos DB migration strategies currently differ based on the API choice and the size of the data. To migrate smaller datasets – for validating data modeling, query performance, partition key choice etc. – you can use [Azure Data Factory's Azure Cosmos DB connector](../data-factory/connector-azure-cosmos-db.md). If you are familiar with Spark, you can also choose to use the [Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md) to migrate data. ## Challenges for large-scale migrations |
cosmos-db | Migration Choices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migration-choices.md | If you need help with capacity planning, consider reading our [guide to estimati |Migration type|Solution|Supported sources|Supported targets|Considerations| |||||| |Offline|[Intra-account container copy](intra-account-container-copy.md)|Azure Cosmos DB for NoSQL|Azure Cosmos DB for NoSQL|• CLI-based; No set up needed. <br/>• Supports large datasets.|-|Offline|[Data Migration Tool](import-data.md)| •JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•MongoDB<br/>•SQL Server<br/>•Table Storage<br/>•AWS DynamoDB<br/>•Azure Blob Storage|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB Tables API<br/>•JSON Files |• Easy to set up and supports multiple sources. <br/>• Not suitable for large datasets.| |Offline|[Azure Data Factory](../data-factory/connector-azure-cosmos-db.md)| •JSON/CSV Files<br/>•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•MongoDB <br/>•SQL Server<br/>•Table Storage<br/>•Azure Blob Storage <br/> <br/>See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported sources.|•Azure Cosmos DB for NoSQL<br/>•Azure Cosmos DB for MongoDB<br/>•JSON Files <br/><br/> See the [Azure Data Factory](../data-factory/connector-overview.md) article for other supported targets. |• Easy to set up and supports multiple sources.<br/>• Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Lack of checkpointing - It means that if an issue occurs during the course of migration, you need to restart the whole migration process.<br/>• Lack of a dead letter queue - It means that a few erroneous files can stop the entire migration process.| |Offline|[Azure Cosmos DB Spark connector](./nosql/quickstart-spark.md)|Azure Cosmos DB for NoSQL. <br/><br/>You can use other sources with additional connectors from the Spark ecosystem.| Azure Cosmos DB for NoSQL. 
<br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| • Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Needs a custom Spark setup. <br/>• Spark is sensitive to schema inconsistencies and this can be a problem during migration. | |Online|[Azure Cosmos DB Spark connector + Change Feed](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration)|Azure Cosmos DB for NoSQL. <br/><br/>Uses Azure Cosmos DB Change Feed to stream all historic data as well as live updates.| Azure Cosmos DB for NoSQL. <br/><br/>You can use other targets with additional connectors from the Spark ecosystem.| • Makes use of the Azure Cosmos DB bulk executor library. <br/>• Suitable for large datasets. <br/>• Needs a custom Spark setup. <br/>• Spark is sensitive to schema inconsistencies and this can be a problem during migration. | If you need help with capacity planning, consider reading our [guide to estimati For APIs other than the API for NoSQL, API for MongoDB and the API for Cassandra, there are various tools supported by each of the API's existing ecosystems. -**API for Table** --* [Data Migration Tool](table/import.md#data-migration-tool) - **API for Gremlin** * [Graph bulk executor library](gremlin/bulk-executor-dotnet.md) |
cosmos-db | Integrations Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/integrations-overview.md | Azure Networking features allow you to connect and deliver your hybrid and cloud ### Azure Key Vault Azure Key Vault helps you to securely store and manage application secrets. You can use Azure Key Vault to --* [Secure Azure Cosmos DB for MongoDB account credentials.](../access-secrets-from-keyvault.md) +* [Secure Azure Cosmos DB for MongoDB account credentials.](../store-credentials-key-vault.md) * [Configure customer-managed keys for your account.](../how-to-setup-cmk.md) ### Azure AD |
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md | Trying to do capacity planning for a migration to Azure Cosmos DB? You can use i ## Quickstart -* [Migrate an existing MongoDB Node.js web app](create-mongodb-nodejs.md). -* [Build a web app using Azure Cosmos DB's API for MongoDB and .NET SDK](create-mongodb-dotnet.md) -* [Build a console app using Azure Cosmos DB's API for MongoDB and Java SDK](quickstart-java.md) +* [Build an app using Azure Cosmos DB for MongoDB and Node.js SDK](quickstart-nodejs.md) +* [Build an app using Azure Cosmos DB for MongoDB and Java SDK](quickstart-java.md) * [Estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * [Estimating request units using Azure Cosmos DB capacity planner](../estimate-ru-with-capacity-planner.md) |
cosmos-db | Best Practice Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/best-practice-dotnet.md | Watch the video below to learn more about using the .NET SDK from an Azure Cosmo | <input type="checkbox"/> | Regions | Make sure to run your application in the same [Azure region](../distribute-data-globally.md) as your Azure Cosmos DB account, whenever possible to reduce latency. Enable 2-4 regions and replicate your accounts in multiple regions for [best availability](../distribute-data-globally.md). For production workloads, enable [automatic failover](../how-to-manage-database-account.md#configure-multiple-write-regions). In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover won't succeed due to lack of region connectivity. To learn how to add multiple regions using the .NET SDK visit [here](tutorial-global-distribution.md) | | <input type="checkbox"/> | Availability and Failovers | Set the [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions?view=azure-dotnet&preserve-view=true) or [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion?view=azure-dotnet&preserve-view=true) in the v3 SDK, and the [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations?view=azure-dotnet&preserve-view=true) in the v2 SDK using the [preferred regions list](./tutorial-global-distribution.md?tabs=dotnetv3%2capi-async#preferred-locations). During failovers, write operations are sent to the current write region and all reads are sent to the first region within your preferred regions list. For more information about regional failover mechanics, see the [availability troubleshooting guide](troubleshoot-sdk-availability.md). 
| | <input type="checkbox"/> | CPU | You may run into connectivity/availability issues due to lack of resources on your client machine. Monitor your CPU utilization on nodes running the Azure Cosmos DB client, and scale up/out if usage is high. |-| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips.md#hosting) processing for best performance, whenever possible. | +| <input type="checkbox"/> | Hosting | Use [Windows 64-bit host](performance-tips-query-sdk.md#use-local-query-plan-generation) processing for best performance, whenever possible. | | <input type="checkbox"/> | Connectivity Modes | Use [Direct mode](sdk-connection-modes.md) for the best performance. For instructions on how to do this, see the [V3 SDK documentation](performance-tips-dotnet-sdk-v3.md#networking) or the [V2 SDK documentation](performance-tips.md#networking).| |<input type="checkbox"/> | Networking | If using a virtual machine to run your application, enable [Accelerated Networking](../../virtual-network/create-vm-accelerated-networking-powershell.md) on your VM to help with bottlenecks due to high traffic and reduce latency or CPU jitter. You might also want to consider using a higher end Virtual Machine where the max CPU usage is under 70%. |-|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps which control the time unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. 
The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. | +|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps control the time after which unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. | |<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. | |<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Azure Cosmos DB [visit](troubleshoot-dotnet-sdk-request-timeout.md) | |<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. 
To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. For accounts configured with a single write region, the SDK won't retry on writes for transient failures as writes aren't idempotent. For accounts configured with multiple write regions, there are [some scenarios](troubleshoot-sdk-availability.md#transient-connectivity-issues-on-tcp-protocol) where the SDK will automatically retry writes on other regions. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) | |
cosmos-db | Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md | Similar to the previous section, you can view the Activity log of your Azure Cos ## Next steps -* [Secure Azure Cosmos DB keys using Azure Key Vault](../access-secrets-from-keyvault.md) +* [Secure Azure Cosmos DB keys using Azure Key Vault](../store-credentials-key-vault.md) * [Security baseline for Azure Cosmos DB](../security-baseline.md) |
cosmos-db | Conceptual Resilient Sdk Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/conceptual-resilient-sdk-applications.md | For a video overview of the concepts discussed in this article, see: ## Connectivity modes -Azure Cosmos DB SDKs can connect to the service in two [connectivity modes](sdk-connection-modes.md). The .NET and Java SDKs can connect to the service in both Gateway and Direct mode, while the others can only connect in Gateway mode. Gateway mode uses the HTTP protocol and Direct mode uses the TCP protocol. +Azure Cosmos DB SDKs can connect to the service in two [connectivity modes](sdk-connection-modes.md). The .NET and Java SDKs can connect to the service in both Gateway and Direct mode, while the others can only connect in Gateway mode. Gateway mode uses the HTTP protocol and Direct mode typically uses the TCP protocol. Gateway mode is always used to fetch metadata such as the account, container, and routing information regardless of which mode SDK is configured to use. This information is cached in memory and is used to connect to the [service replicas](../partitioning-overview.md#replica-sets). |
cosmos-db | Couchbase Cosmos Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/couchbase-cosmos-migration.md | There are two ways to migrate data. * **Use Azure Data Factory:** This is the most recommended method to migrate the data. Configure the source as Couchbase and sink as Azure Cosmos DB for NoSQL, see the Azure [Azure Cosmos DB Data Factory connector](../../data-factory/connector-azure-cosmos-db.md) article for detailed steps. -* **Use the Azure Cosmos DB data import tool:** This option is recommended to migrate using VMs with less amount of data. For detailed steps, see the [Data importer](../import-data.md) article. - ## Next Steps * To do performance testing, see [Performance and scale testing with Azure Cosmos DB](./performance-testing.md) article. |
cosmos-db | Distribute Throughput Across Partitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/distribute-throughput-across-partitions.md | To enroll in the preview, your Azure Cosmos DB account must meet all the followi - Azure Functions - Azure Search - Azure Cosmos DB Spark connector- - Azure Cosmos DB data migration tool - Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher ### SDK requirements (API for NoSQL only) If you enroll in the preview, the following connectors will fail. * Azure Functions<sup>1</sup> * Azure Search<sup>1</sup> * Azure Cosmos DB Spark connector<sup>1</sup>-* Azure Cosmos DB data migration tool * Any 3rd party library or tool that has a dependency on an Azure Cosmos DB SDK that is not .NET V3 SDK v3.27.0 or higher <sup>1</sup>Support for these connectors is planned for the future. |
cosmos-db | How To Create Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-create-account.md | Create a single Azure Cosmos DB account using the API for NoSQL. ## Next steps -In this guide, you learned how to create an Azure Cosmos DB for NoSQL account. You can now import additional data to your Azure Cosmos DB account. +In this guide, you learned how to create an Azure Cosmos DB for NoSQL account. You can now create an application with your Azure Cosmos DB account. > [!div class="nextstepaction"]-> [Import data into Azure Cosmos DB for NoSQL](../import-data.md) +> [Create a .NET console application with Azure Cosmos DB for NoSQL](tutorial-dotnet-web-app.md) |
cosmos-db | Migrate Dotnet V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-dotnet-v3.md | Some settings in `ConnectionPolicy` have been renamed or replaced by `CosmosClie |`MediaRequestTimeout`|Removed. Attachments are no longer supported.| |`SetCurrentLocation`|`CosmosClientOptions.ApplicationRegion` can be used to achieve the same effect.| |`PreferredLocations`|`CosmosClientOptions.ApplicationPreferredRegions` can be used to achieve the same effect.|+|`UserAgentSuffix`|`CosmosClientBuilder.ApplicationName` can be used to achieve the same effect.| ### Indexing policy |
cosmos-db | Migrate Hbase To Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-hbase-to-cosmos-db.md | sqlline.py ZOOKEEPER/hbase-unsecure ### Migration options -There are various methods to migrate data offline, but here we will introduce how to use Azure Data Factory and Data Migration Tool. +There are various methods to migrate data offline, but here we will introduce how to use Azure Data Factory. | Solution | Source version | Considerations | | | -- | - | It writes in parallel at high speed, its performance is high. On the other hand, Phoenix is supported as a Data Factory data source. Refer to the following documents for detailed steps. * [Copy data from Phoenix using Azure Data Factory](../../data-factory/connector-phoenix.md)-* [Tutorial: Use Data migration tool to migrate your data to Azure Cosmos DB](../import-data.md) * [Copy data from HBase using Azure Data Factory](../../data-factory/connector-hbase.md) ## Migrate your code |
cosmos-db | Performance Tips | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips.md | A common performance problem in apps using the Azure Cosmos DB SDK is blocking c * Use [Task.Run](/dotnet/api/system.threading.tasks.task.run) to make a synchronous API asynchronous. * Acquire locks in common code paths. Azure Cosmos DB .NET SDK is most performant when architected to run code in parallel. * Call [Task.Run](/dotnet/api/system.threading.tasks.task.run) and immediately await it. ASP.NET Core already runs app code on normal Thread Pool threads, so calling Task.Run only results in extra unnecessary Thread Pool scheduling. Even if the scheduled code would block a thread, Task.Run does not prevent that.-* Do not use ToList() on `DocumentClient.CreateDocumentQuery(...)` which uses blocking calls to synchronously drain the query. Use [AsDocumentQuery()](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/a4348f8cc0750434376b02ae64ca24237da28cd7/samples/code-samples/Queries/Program.cs#L690) to drain the query asynchronously. +* Use ToList() on `DocumentClient.CreateDocumentQuery(...)` which uses blocking calls to synchronously drain the query. Use [AsDocumentQuery()](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/a4348f8cc0750434376b02ae64ca24237da28cd7/samples/code-samples/Queries/Program.cs#L690) to drain the query asynchronously. **Do**: |
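The blocking-call hazard this row describes for the .NET SDK exists in any async runtime. As a rough Python analogy (not the Cosmos DB SDK — `blocking_query` is a hypothetical stand-in), a synchronous call that drains results should be offloaded rather than invoked directly inside a coroutine:

```python
import asyncio
import time

def blocking_query():
    # Stand-in for a synchronous call that drains results and blocks a thread.
    time.sleep(0.05)
    return ["item1", "item2"]

async def main():
    # Calling blocking_query() directly in a coroutine would stall the event
    # loop; offload it to a worker thread instead (Python 3.9+).
    return await asyncio.to_thread(blocking_query)

items = asyncio.run(main())
```

This mirrors the guidance above: prefer the asynchronous draining path over a call that synchronously blocks the calling thread.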
cosmos-db | Working With Dates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/working-with-dates.md | In addition to the basic types, many applications need the DateTime type to repr Azure Cosmos DB supports JSON types such as - string, number, boolean, null, array, object. It does not directly support a DateTime type. Currently, Azure Cosmos DB doesn't support localization of dates. So, you need to store DateTimes as strings. The recommended format for DateTime strings in Azure Cosmos DB is `yyyy-MM-ddTHH:mm:ss.fffffffZ` which follows the ISO 8601 UTC standard. It is recommended to store all dates in Azure Cosmos DB as UTC. Converting the date strings to this format will allow sorting dates lexicographically. If non-UTC dates are stored, the logic must be handled at the client-side. To convert a local DateTime to UTC, the offset must be known/stored as a property in the JSON and the client can use the offset to compute the UTC DateTime value. -Range queries with DateTime strings as filters are only supported if the DateTime strings are all in UTC and the same length. In Azure Cosmos DB, the [GetCurrentDateTime](getcurrentdatetime.md) system function will return the current UTC date and time ISO 8601 string value in the format: `yyyy-MM-ddTHH:mm:ss.fffffffZ`. +Range queries with DateTime strings as filters are only supported if the DateTime strings are all in UTC. In Azure Cosmos DB, the [GetCurrentDateTime](getcurrentdatetime.md) system function will return the current UTC date and time ISO 8601 string value in the format: `yyyy-MM-ddTHH:mm:ss.fffffffZ`. Most applications can use the default string representation for DateTime for the following reasons: |
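The recommended `yyyy-MM-ddTHH:mm:ss.fffffffZ` format and its lexicographic-sorting property can be demonstrated with standard-library Python (a sketch; `to_cosmos_datetime` is our own helper name):

```python
from datetime import datetime, timezone

def to_cosmos_datetime(dt: datetime) -> str:
    """Format a datetime as yyyy-MM-ddTHH:mm:ss.fffffffZ (ISO 8601 UTC).

    Python's %f yields 6 fractional digits, so one trailing zero is
    appended to reach the 7-digit precision of the recommended format.
    """
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f") + "0Z"

a = to_cosmos_datetime(datetime(2022, 11, 7, 9, 30, 0, 500, tzinfo=timezone.utc))
b = to_cosmos_datetime(datetime(2022, 11, 7, 10, 0, 0, 0, tzinfo=timezone.utc))

# All strings are UTC and fixed-length, so lexicographic order matches
# chronological order.
ordered = sorted([b, a])
```

Non-UTC inputs are converted to UTC before formatting, matching the recommendation to store all dates as UTC and handle offsets client-side.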
cosmos-db | Quickstart Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md | To learn more about Azure Cosmos DB, go to [Azure Cosmos DB](../introduction.md) ## Prerequisites -- An Azure Cosmos DB Account. Your options are:- * Within an Azure active subscription: - * [Create an Azure free Account](https://azure.microsoft.com/free) or use your existing subscription - * [Visual Studio Monthly Credits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers) - * [Azure Cosmos DB Free Tier](../optimize-dev-test.md#azure-cosmos-db-free-tier) - * Without an Azure active subscription: - * [Try Azure Cosmos DB for free](https://aka.ms/trycosmosdb), a tests environment that lasts for 30 days. - * [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) +- An Azure account with an active subscription. + - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. - [Go 1.16 or higher](https://golang.org/dl/) - [Azure CLI](/cli/azure/install-azure-cli) In this quickstart, you've learned how to create an Azure Cosmos DB account, cre Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)--> [!div class="nextstepaction"] -> [Import data into Azure Cosmos DB for the API for NoSQL](../import-data.md) |
cosmos-db | Quickstart Java Spring Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java-spring-data.md | In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account f ## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription or credit card. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.+- An Azure account with an active subscription. + - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. - [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. In this quickstart, you've learned how to create an Azure Cosmos DB for NoSQL ac Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. 
* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)--> [!div class="nextstepaction"] -> [Import data into Azure Cosmos DB](../import-data.md) |
cosmos-db | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md | In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account f ## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.+- An Azure account with an active subscription. + - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. - [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. In this quickstart, you've learned how to create an Azure Cosmos DB for NoSQL ac Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. 
* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)--> [!div class="nextstepaction"] -> [Import data into Azure Cosmos DB](../import-data.md) |
cosmos-db | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md | Get started with the Azure Cosmos DB client library for JavaScript to create dat ## Prerequisites +- An Azure account with an active subscription. + - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. +- [Node.js 10 or later](https://nodejs.org/) +- [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/) ++### Prerequisite check + - In a terminal or command window, run ``node --version`` to check that the Node.js version is one of the current long term support (LTS) versions. - Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed. This section walks you through creating an Azure Cosmos account and setting up a ### Create an Azure Cosmos DB account +> [!TIP] +> No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new JavaScript project](#create-a-new-javascript-project) section. + [!INCLUDE [Create resource tabbed conceptual - ARM, Azure CLI, PowerShell, Portal](includes/create-resources.md)] ### Configure environment variables |
cosmos-db | Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-portal.md | In this quickstart, you learned how to create an Azure Cosmos DB account, create Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)--> [!div class="nextstepaction"] -> [Import data into Azure Cosmos DB](../import-data.md) |
cosmos-db | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md | Get started with the Azure Cosmos DB client library for Python to create databas ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://aka.ms/trycosmosdb).+- An Azure account with an active subscription. + - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. - [Python 3.7 or later](https://www.python.org/downloads/) - Ensure the `python` executable is in your `PATH`. - [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/) This section walks you through creating an Azure Cosmos DB account and setting u ### Create an Azure Cosmos DB account > [!TIP]-> Alternatively, you can [try Azure Cosmos DB free](../try-free.md) before you commit. If you create an account using the free trial, you can safely skip this section. +> No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. If you create an account using the free trial, you can safely skip ahead to the [Create a new Python app](#create-a-new-python-app) section. [!INCLUDE [Create resource tabbed conceptual - ARM, Azure CLI, PowerShell, Portal](./includes/create-resources.md)] The output of the app should be similar to this example: ## Next steps In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account, create a database, and create a container using the Python SDK. You can now dive deeper into guidance on how to import your data into the API for NoSQL.--> [!div class="nextstepaction"] -> [Import data into Azure Cosmos DB for NoSQL](../import-data.md) |
cosmos-db | Quickstart Spark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-spark.md | You can use any other Spark (for e.g., spark 3.1.1) offering as well, also you s ## Prerequisites -* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/try/cosmosdb/). Alternatively, you can use the [use Azure Cosmos DB Emulator](../local-emulator.md) for development and testing. +* An Azure account with an active subscription. ++ * No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. * [Azure Databricks](/azure/databricks/release-notes/runtime/10.4) runtime 10.4 with Spark 3.2.1 |
cosmos-db | Sdk Connection Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md | These connectivity modes essentially condition the route that data plane request ## Service port ranges -When you use direct mode, in addition to the gateway ports, you need to ensure the port range between 10000 and 20000 is open because Azure Cosmos DB uses dynamic TCP ports. When using direct mode on [private endpoints](../how-to-configure-private-endpoints.md), the full range of TCP ports - from 0 to 65535 should be open. If these ports aren't open and you try to use the TCP protocol, you might receive a 503 Service Unavailable error. +When you use direct mode, you need to ensure that your client can access ports ranging between 10000 and 20000 because Azure Cosmos DB uses dynamic TCP ports. This is in addition to the gateway ports. When using direct mode on [private endpoints](../how-to-configure-private-endpoints.md), the full range of TCP ports - from 0 to 65535 may be used by Azure Cosmos DB. If these ports aren't open on your client and you try to use the TCP protocol, you might receive a 503 Service Unavailable error. The following table shows a summary of the connectivity modes available for various APIs and the service ports used for each API: |
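The port requirements summarized in this row can be captured in a small helper. This is a sketch of the rules as stated in the text above (gateway over HTTPS, direct mode over dynamic TCP ports 10000-20000, and the full port range for direct mode on private endpoints); the function name is ours:

```python
def required_port_range(mode: str, private_endpoint: bool = False) -> range:
    """Return the TCP port range a client must be able to reach.

    Per the doc text: direct mode uses dynamic ports 10000-20000, direct
    mode over private endpoints may use any port 0-65535, and gateway
    mode uses the standard HTTPS port only.
    """
    if mode == "gateway":
        return range(443, 444)
    if mode == "direct":
        return range(0, 65536) if private_endpoint else range(10000, 20001)
    raise ValueError(f"unknown connectivity mode: {mode}")
```

A client firewall that blocks any port in the returned range can surface as the 503 Service Unavailable error mentioned above.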
cosmos-db | Tutorial Create Notebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-create-notebook.md | This tutorial walks through how to use the Jupyter Notebooks feature of Azure Co ## Prerequisites -- [Azure Cosmos DB for NoSQL account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) (configured with serverless throughput).+- An existing Azure Cosmos DB for NoSQL account. + - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal). + - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. ## Create a new notebook |
cosmos-db | Tutorial Import Notebooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-import-notebooks.md | This tutorial walks through how to import Jupyter notebooks from a GitHub reposi ## Prerequisites -- [Azure Cosmos DB for NoSQL account](create-cosmosdb-resources-portal.md#create-an-azure-cosmos-db-account) (configured with serverless throughput).+- An existing Azure Cosmos DB for NoSQL account. + - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal). + - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. ## Create a copy of a GitHub repository |
cosmos-db | Performance Levels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/performance-levels.md | Assuming you have 10 S1 collections, 1 GB of storage for each, in the US East re <a name="more-storage-needed"></a> -## What if I need more than 20 GB of storage? --Whether you have a collection with S1, S2, or S3 performance level, or have a single partition collection, all of which have 20 GB of storage available, you can use the Azure Cosmos DB Data Migration tool to migrate your data to a partitioned collection with virtually unlimited storage. For information about the benefits of a partitioned collection, see [Partitioning and scaling in Azure Cosmos DB](partitioning-overview.md). - <a name="change-before"></a> ## Can I change between the S1, S2, and S3 performance levels before the planned migration? |
cosmos-db | Quickstart App Stacks Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-app-stacks-java.md | Create a new Java project and a configuration file to connect to Azure Cosmos DB ### Create a new Java project -Using your favorite integrated development environment (IDE), create a new Java project with groupId `test` and artifactId `crud`. In the project's root directory, add a *pom.xml* file with the following contents. This file configures [Apache Maven](https://maven.apache.org) to use Java 18 and a recent PostgreSQL driver for Java. +Using your favorite integrated development environment (IDE), create a new Java project with groupId `test` and artifactId `crud`. In the project's root directory, add a *pom.xml* file with the following contents. This file configures [Apache Maven](https://maven.apache.org) to use Java 8 and a recent PostgreSQL driver for Java. ```xml <?xml version="1.0" encoding="UTF-8"?> |
cosmos-db | Relational Nosql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/relational-nosql.md | - Title: Understanding the differences between Azure Cosmos DB NoSQL and relational databases -description: This article enumerates the differences between NoSQL and relational databases ------ Previously updated : 10/05/2021-adobe-target: true ---# Understanding the differences between NoSQL and relational databases ---This article will enumerate some of the key benefits of NoSQL databases over relational databases. We'll also discuss some of the challenges in working with NoSQL. For an in-depth look at the different data stores that exist, have a look at our article on [choosing the right data store](/azure/architecture/guide/technology-choices/data-store-overview). --## High throughput --One of the most obvious challenges when maintaining a relational database system is that most relational engines apply locks and latches to enforce strict [ACID semantics](https://en.wikipedia.org/wiki/ACID). This approach has benefits in terms of ensuring a consistent data state within the database. However, there are heavy trade-offs with respect to concurrency, latency, and availability. Due to these fundamental architectural restrictions, high transactional volumes can result in the need to manually shard data. Implementing manual sharding can be a time consuming and painful exercise. --In these scenarios, [distributed databases](https://en.wikipedia.org/wiki/Distributed_database) can offer a more scalable solution. However, maintenance can still be a costly and time-consuming exercise. Administrators may have to do extra work to ensure that the distributed nature of the system is transparent. They may also have to account for the "disconnected" nature of the database. --[Azure Cosmos DB](./introduction.md) simplifies these challenges, by being deployed worldwide across all Azure regions. 
Partition ranges are capable of being dynamically subdivided to seamlessly grow the database in line with the application, while simultaneously maintaining high availability. Fine-grained multi-tenancy and tightly controlled, cloud-native resource governance facilitates [astonishing latency guarantees](./consistency-levels.md#consistency-levels-and-latency) and predictable performance. Partitioning is fully managed, so administrators won't need to write code or manage partitions. --If your transactional volumes are reaching extreme levels, such as many thousands of transactions per second, you should consider a distributed NoSQL database. Consider Azure Cosmos DB for maximum efficiency, ease of maintenance, and reduced total cost of ownership. ---## Hierarchical data --There are a significant number of use cases where transactions in the database can contain many parent-child relationships. These relationships can grow significantly over time, and prove difficult to manage. Forms of [hierarchical databases](https://en.wikipedia.org/wiki/Hierarchical_database_model) did emerge during the 1980s, but weren't popular due to inefficiency in storage. They also lost traction as [the relational model (RM)](https://en.wikipedia.org/wiki/Relational_model) became the de facto standard used by virtually all mainstream database management systems. --However, today the popularity of document-style databases has grown significantly. These databases might be considered a reinventing of the hierarchical database paradigm, now uninhibited by concerns around the cost of storing data on disk. As a result, maintaining many complex parent-child entity relationships in a relational database could now be considered an anti-pattern compared to modern document-oriented approaches. 
--The emergence of [object oriented design](https://en.wikipedia.org/wiki/Object-oriented_design), and the [impedance mismatch](https://en.wikipedia.org/wiki/Object-relational_impedance_mismatch) that arises when combining it with relational models, also highlights an anti-pattern in relational databases for certain use cases. Hidden but often significant maintenance costs can arise as a result. Although [ORM approaches](https://en.wikipedia.org/wiki/Object-relational_mapping) have evolved to partly mitigate this, document-oriented databases nonetheless coalesce much better with object-oriented approaches. With this approach, developers won't need ORM drivers, or bespoke language specific [OO Database engines](https://en.wikipedia.org/wiki/Object_database). If your data contains many parent-child relationships and deep levels of hierarchy, you may want to consider using a NoSQL document database such as the [Azure Cosmos DB API for NoSQL](./introduction.md). ---## Complex networks and relationships --Ironically, given their name, relational databases present a less than optimal solution for modeling deep and complex relationships. The reason for this is that relationships between entities don't actually exist in a relational database. They need to be computed at runtime, with complex relationships requiring cartesian joins in order to allow mapping using queries. As a result, operations become exponentially more expensive in terms of computation as relationships increase. In some cases, a relational database attempting to manage such entities will become unusable. --Various forms of "Network" databases did emerge during the time that relational databases emerged, but as with hierarchical databases, these systems struggled to gain popularity. Slow adoption was due to a lack of use cases at the time, and storage inefficiencies. Today, graph database engines could be considered a re-emergence of the network database paradigm. 
The key benefit with these systems is that relationships are stored as "first class citizens" within the database. Thus, traversing relationships can be done in constant time, rather than increasing in time complexity with each new join or cross product. --If you're maintaining a complex network of relationships in your database, you may want to consider a graph database such as the [Azure Cosmos DB for Gremlin](./gremlin/introduction.md). ---Azure Cosmos DB is a multi-model database service, which offers an API projection for all the major NoSQL model types; Column-family, Document, Graph, and Key-Value. The [API for Gremlin (graph)](./gremlin/support.md) and NoSQL are fully interoperable. This interoperability has benefits for switching between different models at the programmability level. Graph stores can be queried in terms of both complex network traversals and transactions modeled as document records in the same store. --## Fluid schema --Another particular characteristic of relational databases is that schemas are required to be defined at design time. Pre-defined schemas have benefits in terms of referential integrity and conformity of data. However, it can also be restrictive as the application grows. Responding to changes in the schema across logically separate models sharing the same table or database definition can become complex over time. Such use cases often benefit from the schema being devolved to the application to manage on a per record basis. These use cases require the database to be "schema agnostic" and allow records to be "self-describing" in terms of the data contained within them. --If you're managing data whose structures are constantly changing at a high rate, consider a more schema-agnostic approach using a managed NoSQL database service like Azure Cosmos DB. Particularly, if transactions can come from external sources where it's difficult to enforce conformity across the database, consider services like Azure Cosmos DB. 
--## Microservices --The [microservices](https://en.wikipedia.org/wiki/Microservices) pattern has grown significantly in recent years. This pattern has its roots in [Service-Oriented Architecture](https://en.wikipedia.org/wiki/Service-oriented_architecture). The de-facto standard for data transmission in these modern microservices architectures is [JSON](https://en.wikipedia.org/wiki/JSON), which also happens to be the storage medium for most document-oriented NoSQL Databases. The JSON data type makes NoSQL document stores a much more seamless fit for both the persistence and synchronization (using [event sourcing patterns](https://en.wikipedia.org/wiki/Event-driven_architecture)) across complex Microservice implementations. More traditional relational databases can be much more complex to maintain in these architectures. This complexity is due to the greater amount of transformation required for both state and synchronization across APIs. Azure Cosmos DB in particular has many features that make it an even more seamless fit for JSON-based Microservices Architectures than many NoSQL databases: --* a choice of pure JSON data types -* a JavaScript engine and [query API](sql/javascript-query-api.md) built into the database. -* a state-of-the-art [change feed](./change-feed.md) which clients can subscribe to in order to get notified of modifications to a container. --## Some challenges with NoSQL databases --Although there are some clear advantages when implementing NoSQL databases, there are also some challenges that you may want to take into consideration. These challenges may not be present to the same degree when working with the relational model: --* transactions with many relations pointing to the same entity. -* transactions requiring strong consistency across the entire dataset. --The first challenge requires a denormalized solution. 
The rule-of-thumb in NoSQL databases is generally denormalization, which as articulated earlier, produces more efficient reads in a distributed system. However, there are some design challenges that come into play with this approach. Let's take an example of a product that's related to one category and multiple tags: ---A best practice approach in a NoSQL document database would be to denormalize the category name and tag names directly in a "product document". In order to keep categories, tags, and products in sync; the design options to facilitate this have added maintenance complexity. This complexity occurs because the data is duplicated across multiple records in the product. In a relational database, the operation would be a straightforward update in a "one-to-many" relationship, and a join to retrieve the data. --The trade-off is that reads are more efficient in the denormalized record, and become increasingly more efficient as the number of conceptually joined entities increases. However, just as the read efficiency increases with increasing numbers of joined entities in a denormalized record, so too does the maintenance complexity of keeping entities in sync. One way of mitigating this trade-off is to create a [hybrid data model](./modeling-data.md#hybrid-data-models). --While there's more flexibility available in NoSQL databases to deal with these trade-offs, increased flexibility can also produce more design decisions. For more information about keeping [denormalized user data in sync](./how-to-model-partition-example.md#denormalizing-usernames), see [how to model and partition data on Azure Cosmos DB using a real-world example](./how-to-model-partition-example.md). This example includes an approach for keeping denormalized data in sync where users not only sit in different partitions, but in different containers. --With respect to strong consistency, it's rare that strong consistency will be required across the entire data set. 
However, cases where this level of consistency is necessary can be a challenge in distributed databases. To ensure strong consistency, data needs to be synchronized across all replicas and regions before allowing clients to read it. This synchronization can increase the latency of reads. --Again, Azure Cosmos DB offers more flexibility than relational databases for the various trade-offs that are relevant here. For small scale implementations, this approach may add more design considerations. For more information, see [consistency, availability, and performance tradeoffs](./consistency-levels.md). --## Next steps --Learn how to manage your Azure Cosmos DB account and other concepts: --* [How-to manage your Azure Cosmos DB account](how-to-manage-database-account.md) -* [Global distribution](distribute-data-globally.md) -* [Consistency levels](consistency-levels.md) -* [Working with Azure Cosmos DB containers and items](account-databases-containers-items.md) -* [VNET service endpoint for your Azure Cosmos DB account](how-to-configure-vnet-service-endpoint.md) -* [IP-firewall for your Azure Cosmos DB account](how-to-configure-firewall.md) -* [How-to add and remove Azure regions to your Azure Cosmos DB account](how-to-manage-database-account.md) -* [Azure Cosmos DB SLAs](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_2/) |
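To make the denormalization trade-off discussed earlier concrete, here's a minimal, self-contained Python sketch using plain dictionaries (not Azure Cosmos DB SDK code; the product and category names are hypothetical). Reads need no join because the category name is embedded, but a category rename fans out across every product document that embeds it:

```python
# Denormalized product documents: each embeds its category name and tag names
# directly, so a read needs no join -- the common NoSQL document-store pattern.
products = [
    {"id": "p1", "name": "Widget", "category": {"id": "c1", "name": "Gadgets"}, "tags": ["new"]},
    {"id": "p2", "name": "Gizmo", "category": {"id": "c1", "name": "Gadgets"}, "tags": ["sale"]},
    {"id": "p3", "name": "Doodad", "category": {"id": "c2", "name": "Tools"}, "tags": ["new"]},
]

def rename_category(docs, category_id, new_name):
    """Renaming a category must touch every product document that embeds it:
    the maintenance cost of denormalization. In a relational model this would
    be a single UPDATE on the categories table plus a join at read time."""
    updated = 0
    for doc in docs:
        if doc["category"]["id"] == category_id:
            doc["category"]["name"] = new_name
            updated += 1
    return updated

# Reads are cheap: the category name is already embedded in the document.
assert products[0]["category"]["name"] == "Gadgets"

# Writes fan out: one logical rename touches two product documents.
print(rename_category(products, "c1", "Gizmos"))  # prints 2
```

A hybrid model keeps frequently read fields denormalized while leaving rarely read, frequently changing fields normalized, which shrinks this write fan-out.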
cosmos-db | Store Credentials Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/store-credentials-key-vault.md | + + Title: | + Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault +description: | + Use Azure Key Vault to store and access Azure Cosmos DB connection string, keys, and endpoints. +++++ms.devlang: csharp ++ Last updated : 11/07/2022+++# Tutorial: Store and use Azure Cosmos DB credentials with Azure Key Vault +++> [!IMPORTANT] +> The recommended way to access Azure Cosmos DB is to use a [system-assigned managed identity](managed-identity-based-authentication.md). If your service can't take advantage of managed identities, use [certificate-based authentication](certificate-based-authentication.md). If neither the managed identity solution nor the certificate-based solution meets your needs, use the Azure Key Vault solution in this article. ++If you're using Azure Cosmos DB as your database, you connect to databases, containers, and items by using an SDK, the API endpoint, and either the primary or secondary key. ++It's not a good practice to store the endpoint URI and sensitive read-write keys directly within application code or a configuration file. Ideally, this data is read from environment variables within the host. In Azure App Service, [app settings](../app-service/configure-common.md#configure-app-settings) allow you to inject runtime credentials for your Azure Cosmos DB account without the need for developers to store these credentials in an insecure clear text manner. ++Azure Key Vault builds on this best practice by allowing you to store these credentials securely while giving services like Azure App Service managed access to the credentials. Azure App Service will securely read your credentials from Azure Key Vault and inject those credentials into your running application.
++With this best practice, developers can store the credentials for tools like the [Azure Cosmos DB emulator](local-emulator.md) or [Try Azure Cosmos DB free](try-free.md) during development. Then, the operations team can ensure that the correct production settings are injected at runtime. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Create an Azure Key Vault instance +> - Add Azure Cosmos DB credentials as secrets to the key vault +> - Create and register an Azure App Service resource and grant "read key" permissions +> - Inject key vault secrets into the App Service resource +> ++> [!NOTE] +> This tutorial and the sample application use an Azure Cosmos DB for NoSQL account. You can perform many of the same steps using other APIs. ++## Prerequisites ++- An existing Azure Cosmos DB for NoSQL account. + - If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). + - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit. +- GitHub account. ++## Before you begin: Get Azure Cosmos DB credentials ++Before you start, you'll get the credentials for your existing account. ++1. Navigate to the [Azure portal](https://portal.azure.com/) page for the existing Azure Cosmos DB for NoSQL account. ++1. From the Azure Cosmos DB for NoSQL account page, select the **Keys** navigation menu option. ++ :::image type="content" source="media/access-secrets-from-keyvault/cosmos-keys-option.png" lightbox="media/access-secrets-from-keyvault/cosmos-keys-option.png" alt-text="Screenshot of an Azure Cosmos DB SQL API account page. The Keys option is highlighted in the navigation menu."::: ++1. Record the values from the **URI** and **PRIMARY KEY** fields. You'll use these values later in this tutorial.
++ :::image type="content" source="media/access-secrets-from-keyvault/cosmos-endpoint-key-credentials.png" lightbox="media/access-secrets-from-keyvault/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos DB SQL API account."::: ++## Create an Azure Key Vault resource ++First, create a new key vault to store your API for NoSQL credentials. ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. Select **Create a resource > Security > Key Vault**. ++1. On the **Create key vault** page, enter the following information: ++ | Setting | Description | + | | | + | **Subscription** | Select the Azure subscription that you wish to use for this key vault. | + | **Resource group** | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. | + | **Key vault name** | Enter a globally unique name for your key vault. | + | **Region** | Select a geographic location to host your key vault. Use the location that is closest to your users to give them the fastest access to the data. | + | **Pricing tier** | Select *Standard*. | ++1. Leave the remaining settings at their default values. + +1. Select **Review + create**. ++1. Review the settings you provided, and then select **Create**. It takes a few minutes to create the key vault. Wait for the portal page to display **Your deployment is complete** before moving on. ++## Add Azure Cosmos DB access keys to the Key Vault ++Now, store your Azure Cosmos DB credentials as secrets in the key vault. ++1. Select **Go to resource** to go to the Azure Key Vault resource page. ++1. From the Azure Key Vault resource page, select the **Secrets** navigation menu option. ++1. Select **Generate/Import** from the menu. ++ :::image type="content" source="media/access-secrets-from-keyvault/create-new-secret.png" alt-text="Screenshot of the Generate/Import option in a key vault menu."::: ++1.
On the **Create a secret** page, enter the following information: ++ | Setting | Description | + | | | + | **Upload options** | *Manual* | + | **Name** | *cosmos-endpoint* | + | **Secret value** | Enter the **URI** you copied earlier in this tutorial. | ++ :::image type="content" source="media/access-secrets-from-keyvault/create-endpoint-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal with details for an URI secret."::: ++1. Select **Create** to create the new **cosmos-endpoint** secret. ++1. Select **Generate/Import** from the menu again. On the **Create a secret** page, enter the following information: ++ | Setting | Description | + | | | + | **Upload options** | *Manual* | + | **Name** | *cosmos-readwrite-key* | + | **Secret value** | Enter the **PRIMARY KEY** you copied earlier in this tutorial. | ++ :::image type="content" source="media/access-secrets-from-keyvault/create-key-secret.png" alt-text="Screenshot of the Create a secret dialog in the Azure portal with details for a PRIMARY KEY secret."::: ++1. Select **Create** to create the new **cosmos-readwrite-key** secret. ++1. After the secrets are created, view them in the list of secrets within the **Secrets** page. ++ :::image type="content" source="media/access-secrets-from-keyvault/view-secrets-list.png" alt-text="Screenshot of the list of secrets for a key vault."::: ++1. Select each key, select the latest version, and then copy the **Secret Identifier**. You'll use the identifier for the **cosmos-endpoint** and **cosmos-readwrite-key** secrets later in this tutorial. ++ > [!TIP] + > The secret identifier will be in this format `https://<key-vault-name>.vault.azure.net/secrets/<secret-name>/<version-id>`. 
For example, if the name of the key vault is **msdocs-key-vault**, the name of the key is **cosmos-readwrite-key**, and the version is **83b995e363d947999ac6cf487ae0e12e**, then the secret identifier would be `https://msdocs-key-vault.vault.azure.net/secrets/cosmos-readwrite-key/83b995e363d947999ac6cf487ae0e12e`. + > + > :::image type="content" source="media/access-secrets-from-keyvault/view-secret-identifier.png" alt-text="Screenshot of a secret identifier for a key vault secret named cosmos-readwrite-key."::: + > ++## Create and register an Azure Web App with Azure Key Vault ++In this section, create a new Azure Web App, deploy a sample application, and then register the Web App's managed identity with Azure Key Vault. ++1. Create a new GitHub repository using the [cosmos-db-nosql-dotnet-sample-web-environment-variables template](https://github.com/azure-samples/cosmos-db-nosql-dotnet-sample-web-environment-variables/generate). ++1. In the Azure portal, select **Create a resource > Web > Web App**. ++1. On the **Create Web App** page and **Basics** tab, enter the following information: ++ | Setting | Description | + | | | + | **Subscription** | Select the Azure subscription that you wish to use for this web app. | + | **Resource group** | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. | + | **Name** | Enter a globally unique name for your web app. | + | **Publish** | Select *Code*. | + | **Runtime stack** | Select *.NET 6 (LTS)*. | + | **Operating System** | Select *Windows*. | + | **Region** | Select a geographic location to host your web app. Use the location that is closest to your users to give them the fastest access to the data. | ++1. Leave the remaining settings at their default values. ++1. Select **Next: Deployment**. ++1. On the **Deployment** tab, enter the following information: ++ | Setting | Description | + | | | + | **Continuous deployment** | Select *Enable*.
| + | **GitHub account** | Select *Authorize*. Follow the GitHub account authorization prompts to grant Azure permission to read your newly created GitHub repository. | + | **Organization** | Select the organization for your new GitHub repository. | + | **Repository** | Select the name of your new GitHub repository. | + | **Branch** | Select *main*. | ++1. Select **Review + create**. ++1. Review the settings you provided, and then select **Create**. It takes a few minutes to create the web app. Wait for the portal page to display **Your deployment is complete** before moving on. ++1. You might need to wait a few extra minutes for the web application to finish its initial deployment to the web app. From the Azure Web App resource page, select **Browse** to see the default state of the app. ++ :::image type="content" source="media/access-secrets-from-keyvault/sample-web-app-empty.png" lightbox="media/access-secrets-from-keyvault/sample-web-app-empty.png" alt-text="Screenshot of the web application in its default state without credentials."::: ++1. Select the **Identity** navigation menu option. ++1. On the **Identity** page, select **On** for **System-assigned** managed identity, and then select **Save**. ++ :::image type="content" source="media/access-secrets-from-keyvault/enable-managed-identity.png" alt-text="Screenshot of system-assigned managed identity being enabled from the Identity page."::: ++## Inject Azure Key Vault secrets as Azure Web App app settings ++Finally, inject the secrets stored in your key vault as app settings within the web app. The app settings will, in turn, inject the credentials into the application at runtime without storing the credentials in clear text. ++1. Return to the key vault page in the Azure portal. Select **Access policies** from the navigation menu. ++1. On the **Access policies** page, select **Create** from the menu.
++ :::image type="content" source="media/access-secrets-from-keyvault/create-access-policy.png" alt-text="Screenshot of the Create option in the Access policies menu."::: ++1. On the **Permissions** tab of the **Create an access policy** page, select the **Get** option in the **Secret permissions** section. Select **Next**. ++ :::image type="content" source="media/access-secrets-from-keyvault/get-secrets-permission.png" alt-text="Screenshot of the Get permission enabled for Secret permissions."::: ++1. On the **Principal** tab, select the name of the web app you created earlier in this tutorial. Select **Next**. ++ :::image type="content" source="media/access-secrets-from-keyvault/assign-principal.png" alt-text="Screenshot of a web app managed identity assigned to a permission."::: ++ > [!NOTE] + > In this example screenshot, the web app is named **msdocs-dotnet-web**. ++1. Select **Next** again to skip the **Application** tab. On the **Review + create** tab, review the settings you provided, and then select **Create**. ++1. Return to the web app page in the Azure portal. Select **Configuration** from the navigation menu. ++1. On the **Configuration** page, select **New application setting**. In the **Add/Edit application setting** dialog, enter the following information: ++ | Setting | Description | + | | | + | **Name** | `CREDENTIALS__ENDPOINT` | + | **Key** | Get the **secret identifier** for the **cosmos-endpoint** secret in your key vault that you created earlier in this tutorial. Enter the identifier in the following format: `@Microsoft.KeyVault(SecretUri=<secret-identifier>)`. | ++ > [!TIP] + > Ensure that the environment variable name uses a double underscore (`__`), not a single underscore. The double underscore is a key delimiter supported by .NET on all platforms. For more information, see [environment variables configuration](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
++ > [!NOTE] + > For example, if the secret identifier is `https://msdocs-key-vault.vault.azure.net/secrets/cosmos-endpoint/69621c59ef5b4b7294b5def118921b07`, then the reference would be `@Microsoft.KeyVault(SecretUri=https://msdocs-key-vault.vault.azure.net/secrets/cosmos-endpoint/69621c59ef5b4b7294b5def118921b07)`. + > + > :::image type="content" source="media/access-secrets-from-keyvault/create-app-setting.png" alt-text="Screenshot of the Add/Edit application setting dialog with a new app setting referencing a key vault secret."::: + > ++1. Select **OK** to persist the new app setting. ++1. Select **New application setting** again. In the **Add/Edit application setting** dialog, enter the following information and then select **OK**: ++ | Setting | Description | + | | | + | **Name** | `CREDENTIALS__KEY` | + | **Key** | Get the **secret identifier** for the **cosmos-readwrite-key** secret in your key vault that you created earlier in this tutorial. Enter the identifier in the following format: `@Microsoft.KeyVault(SecretUri=<secret-identifier>)`. | ++1. Back on the **Configuration** page, select **Save** to update the app settings for the web app. ++ :::image type="content" source="media/access-secrets-from-keyvault/save-app-settings.png" alt-text="Screenshot of the Save option in the Configuration page's menu."::: ++1. Wait a few minutes for the web app to restart with the new app settings. At this point, the new app settings should indicate that they're a **Key vault Reference**. ++ :::image type="content" source="media/access-secrets-from-keyvault/app-settings-reference.png" lightbox="media/access-secrets-from-keyvault/app-settings-reference.png" alt-text="Screenshot of the Key vault Reference designation on two app settings in a web app."::: ++1. Select **Overview** from the navigation menu. Select **Browse** to see the app with populated credentials.
++ :::image type="content" source="media/access-secrets-from-keyvault/sample-web-app-populated.png" lightbox="media/access-secrets-from-keyvault/sample-web-app-populated.png" alt-text="Screenshot of the web application with valid Azure Cosmos DB for NoSQL account credentials."::: ++## Next steps ++- To configure a firewall for Azure Cosmos DB, see the [firewall support](how-to-configure-firewall.md) article. +- To configure a virtual network service endpoint, see the [secure access by using VNet service endpoint](how-to-configure-vnet-service-endpoint.md) article. |
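The Key Vault tutorial's sample app is .NET, but its app-setting and environment-variable conventions are easy to illustrate in any language. The following Python sketch (illustrative only, not part of the sample app) composes a Key Vault reference value for an app setting and reads the injected `CREDENTIALS__ENDPOINT` / `CREDENTIALS__KEY` settings the way a deployed app would; the local values assigned at the end are placeholders for demonstration:

```python
import os

def key_vault_reference(secret_identifier: str) -> str:
    """Build the app-setting value that tells Azure App Service to resolve
    the secret from Key Vault instead of storing it in clear text."""
    return f"@Microsoft.KeyVault(SecretUri={secret_identifier})"

def read_credentials() -> dict:
    """At runtime, App Service has already resolved the Key Vault references,
    so the app reads ordinary environment variables. The double underscore
    maps to the ':' section delimiter in .NET configuration
    (CREDENTIALS__ENDPOINT -> Credentials:Endpoint)."""
    return {
        "endpoint": os.environ["CREDENTIALS__ENDPOINT"],
        "key": os.environ["CREDENTIALS__KEY"],
    }

# Compose the reference for the cosmos-endpoint secret identifier from the tutorial:
print(key_vault_reference(
    "https://msdocs-key-vault.vault.azure.net/secrets/cosmos-endpoint/69621c59ef5b4b7294b5def118921b07"
))

# Simulate the injected app settings for a local run (placeholder values):
os.environ["CREDENTIALS__ENDPOINT"] = "https://localhost:8081/"
os.environ["CREDENTIALS__KEY"] = "<primary-key>"
print(read_credentials()["endpoint"])  # prints https://localhost:8081/
```

Keeping the read path behind a single `read_credentials` function means the same code works unchanged whether the values come from App Service settings, a local `.env` file, or the emulator.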
cosmos-db | How To Create Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-account.md | Create a single Azure Cosmos DB account using the API for Table. ## Next steps -In this guide, you learned how to create an Azure Cosmos DB for Table account. You can now import more data to your Azure Cosmos DB account. +In this guide, you learned how to create an Azure Cosmos DB for Table account. > [!div class="nextstepaction"]-> [Import data into Azure Cosmos DB for Table](../import-data.md) +> [Quickstart: Azure Cosmos DB for Table for .NET](quickstart-dotnet.md) |
cosmos-db | How To Use Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-go.md | az group delete --resource-group myResourceGroup ## Next steps +In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the API for Table. + > [!div class="nextstepaction"]-> [Import table data to the API for Table](import.md) +> [Query Azure Cosmos DB by using the API for Table](tutorial-query.md) |
cosmos-db | Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/import.md | - Title: Migrate existing data to an API for Table account in Azure Cosmos DB -description: Learn how to migrate or import on-premises or cloud data to an API for Table account in Azure Cosmos DB. ------ Previously updated : 03/03/2022----# Migrate your data to an Azure Cosmos DB for Table account --This tutorial provides instructions on importing data for use with the Azure Cosmos DB [API for Table](introduction.md). If you have data stored in Azure Table Storage, you can use the **Data migration tool** to import your data to the Azure Cosmos DB for Table. ---## Prerequisites --* **Increase throughput:** The duration of your data migration depends on the amount of throughput you set up for an individual container or a set of containers. Be sure to increase the throughput for larger data migrations. After you've completed the migration, decrease the throughput to save costs. --* **Create Azure Cosmos DB resources:** Before you start migrating the data, create all your tables from the Azure portal. If you're migrating to an Azure Cosmos DB account that has database-level throughput, make sure to provide a partition key when you create the Azure Cosmos DB tables. --## Data migration tool --> [!IMPORTANT] -> Ownership of the Data Migration Tool has been transferred to a third party, who acts as the maintainer of this open-source tool. The tool is currently being updated to use the latest NuGet packages, so it doesn't currently work on the main branch. There is a fork of this tool which does work. You can learn more [here](https://github.com/Azure/azure-documentdb-datamigrationtool/issues/89). --You can use the command-line data migration tool (dt.exe) in Azure Cosmos DB to import your existing Azure Table Storage data to an API for Table account. --To migrate table data: --1.
Download the migration tool from [GitHub](https://github.com/azure/azure-documentdb-datamigrationtool/tree/archive). -2. Run `dt.exe` by using the command-line arguments for your scenario. `dt.exe` takes a command in the following format: -- ```bash - dt.exe [/<option>:<value>] /s:<source-name> [/s.<source-option>:<value>] /t:<target-name> [/t.<target-option>:<value>] - ``` --The supported options for this command are: --* **/ErrorLog:** Optional. Name of the CSV file to redirect data transfer failures. -* **/OverwriteErrorLog:** Optional. Overwrite the error log file. -* **/ProgressUpdateInterval:** Optional, default is `00:00:01`. The time interval to refresh on-screen data transfer progress. -* **/ErrorDetails:** Optional, default is `None`. Specifies that detailed error information should be displayed for the following errors: `None`, `Critical`, or `All`. -* **/EnableCosmosTableLog:** Optional. Direct the log to an Azure Cosmos DB table account. If set, this defaults to the destination account connection string unless `/CosmosTableLogConnectionString` is also provided. This is useful if multiple instances of the tool are being run simultaneously. -* **/CosmosTableLogConnectionString:** Optional. The connection string to direct the log to a remote Azure Cosmos DB table account. --### Command-line source settings --Use the following source options when you define Azure Table Storage as the source of the migration. --* **/s:AzureTable:** Reads data from Table Storage. -* **/s.ConnectionString:** Connection string for the table endpoint. You can retrieve this from the Azure portal. -* **/s.LocationMode:** Optional, default is `PrimaryOnly`. Specifies which location mode to use when connecting to Table Storage: `PrimaryOnly`, `PrimaryThenSecondary`, `SecondaryOnly`, `SecondaryThenPrimary`. -* **/s.Table:** Name of the Azure table. -* **/s.InternalFields:** Set to `All` for table migration, because `RowKey` and `PartitionKey` are required for import. 
-* **/s.Filter:** Optional. Filter string to apply. -* **/s.Projection:** Optional. List of columns to select. --To retrieve the source connection string when you import from Table Storage, open the Azure portal. Select **Storage accounts** > **Account** > **Access keys**, and copy the **Connection string**. ---### Command-line target settings --Use the following target options when you define the Azure Cosmos DB for Table as the target of the migration. --* **/t:TableAPIBulk:** Uploads data into the Azure Cosmos DB for Table in batches. -* **/t.ConnectionString:** The connection string for the table endpoint. -* **/t.TableName:** Specifies the name of the table to write to. -* **/t.Overwrite:** Optional, default is `false`. Specifies if existing values should be overwritten. -* **/t.MaxInputBufferSize:** Optional, default is `1GB`. Approximate estimate of input bytes to buffer before flushing data to sink. -* **/t.Throughput:** Optional, service defaults if not specified. Specifies throughput to configure for table. -* **/t.MaxBatchSize:** Optional, default is `2MB`. Specifies the batch size in bytes. --### Sample command: Source is Table Storage --Here's a command-line sample showing how to import from Table Storage to the API for Table: --```bash -dt /s:AzureTable /s.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Table storage account name>;AccountKey=<Account Key>;EndpointSuffix=core.windows.net /s.Table:<Table name> /t:TableAPIBulk /t.ConnectionString:DefaultEndpointsProtocol=https;AccountName=<Azure Cosmos DB account name>;AccountKey=<Azure Cosmos DB account key>;TableEndpoint=https://<Account name>.table.cosmos.azure.com:443 /t.TableName:<Table name> /t.Overwrite -``` -## Next steps --Learn how to query data by using the Azure Cosmos DB for Table. --> [!div class="nextstepaction"] ->[How to query data?](tutorial-query.md) |
cosmos-db | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-java.md | Remove-AzResourceGroup -Name $resourceGroupName ## Next steps -In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Tables API. +In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the API for Table. > [!div class="nextstepaction"]-> [Import table data to the Tables API](import.md) +> [Query Azure Cosmos DB by using the API for Table](tutorial-query.md) |
cosmos-db | Quickstart Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-nodejs.md | Remove-AzResourceGroup -Name $resourceGroupName ## Next steps -In this quickstart, you learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run a Node.js app to add table data. Now you can query your data using the API for Table. +In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the API for Table. > [!div class="nextstepaction"]-> [Import table data to the API for Table](import.md) +> [Query Azure Cosmos DB by using the API for Table](tutorial-query.md) |
cosmos-db | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-python.md | Remove-AzResourceGroup -Name $resourceGroupName ## Next steps -In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the API for Table. +In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the API for Table. > [!div class="nextstepaction"]-> [Import table data to the API for Table](import.md) +> [Query Azure Cosmos DB by using the API for Table](tutorial-query.md) |
cosmos-db | Total Cost Ownership | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/total-cost-ownership.md | Last updated 08/26/2021 # Total Cost of Ownership (TCO) with Azure Cosmos DB [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] -Azure Cosmos DB is designed with the fine grained multi-tenancy and resource governance. This design allows Azure Cosmos DB to operate at significantly lower cost and help users save. Currently Azure Cosmos DB supports more than 280 customer workloads on a single machine with the density continuously increasing, and thousands of customer workloads within a cluster. It load balances replicas of customers' workloads across different machines in a cluster and across multiple clusters within a data center. To learn more, see [Azure Cosmos DB: Pushing the frontier of globally distributed databases](https://azure.microsoft.com/blog/azure-cosmos-db-pushing-the-frontier-of-globally-distributed-databases/). Because of resource-governance, multi-tenancy, and native integration with the rest of Azure infrastructure, Azure Cosmos DB is on average 4 to 6 times cheaper than MongoDB, Cassandra, or other OSS NoSQL running on IaaS and up to 10 times cheaper than the database engines running on premises. See the paper on [The total cost of (non) ownership of a NoSQL database cloud service](https://documentdbportalstorage.blob.core.windows.net/papers/11.15.2017/NoSQL%20TCO%20paper.pdf). +Azure Cosmos DB is designed with fine-grained multi-tenancy and resource governance. This design allows Azure Cosmos DB to operate at significantly lower cost and helps users save. Currently Azure Cosmos DB supports more than 280 customer workloads on a single machine with the density continuously increasing, and thousands of customer workloads within a cluster.
It load balances replicas of customers' workloads across different machines in a cluster and across multiple clusters within a data center. To learn more, see [Azure Cosmos DB: Pushing the frontier of globally distributed databases](https://azure.microsoft.com/blog/azure-cosmos-db-pushing-the-frontier-of-globally-distributed-databases/). Because of resource governance, multi-tenancy, and native integration with the rest of Azure infrastructure, Azure Cosmos DB is on average 4 to 6 times cheaper than MongoDB, Cassandra, or other OSS NoSQL databases running on IaaS and up to 10 times cheaper than the database engines running on premises. See the paper on [The total cost of (non) ownership of a NoSQL database cloud service](https://azure.microsoft.com/blog/documentdb-total-cost-of-ownership-for-nosql-databases/). OSS NoSQL database engines such as Apache Cassandra, MongoDB, and HBase were designed for on-premises use. When offered as a managed service, they are equivalent to a Resource Manager template with a tenant database for managing the provisioned clusters and monitoring support. OSS NoSQL architectures require significant operational overhead, and the expertise can be difficult and expensive to find. On the other hand, Azure Cosmos DB is a fully managed cloud service, which allows developers to focus on business innovation rather than on managing and maintaining database infrastructure. As an aid for estimating TCO, it can be helpful to start with capacity planning.
* Learn more about [Optimizing the cost of reads and writes](optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of queries](./optimize-cost-reads-writes.md) * Learn more about [Optimizing the cost of multi-region Azure Cosmos DB accounts](optimize-cost-regions.md)-* Learn more about [The Total Cost of (Non) Ownership of a NoSQL Database Cloud Service](https://documentdbportalstorage.blob.core.windows.net/papers/11.15.2017/NoSQL%20TCO%20paper.pdf) +* Learn more about [The Total Cost of (Non) Ownership of a NoSQL Database Cloud Service](https://azure.microsoft.com/blog/documentdb-total-cost-of-ownership-for-nosql-databases/) |
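As a starting point for the capacity planning mentioned above, a back-of-the-envelope provisioned-throughput estimate can be sketched as below. The hourly rate in this sketch is purely illustrative (an assumption, not an official Azure price); always check the Azure Cosmos DB pricing page for current rates:

```python
def monthly_throughput_cost(ru_per_second: float,
                            price_per_100_ru_per_hour: float = 0.008,
                            hours_per_month: int = 730) -> float:
    """Estimate the monthly cost of provisioned throughput.

    price_per_100_ru_per_hour is an ILLUSTRATIVE placeholder rate,
    not an official Azure price. Throughput is billed per 100 RU/s
    per hour, so divide the provisioned RU/s by 100 first.
    """
    return (ru_per_second / 100) * price_per_100_ru_per_hour * hours_per_month

# Example: a container provisioned at 1,000 RU/s.
print(round(monthly_throughput_cost(1000), 2))  # prints 58.4
```

Multiply the per-container estimate across containers and regions to get a first-order monthly figure, then refine it with actual workload measurements.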
cosmos-db | Visualize Qlik Sense | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/visualize-qlik-sense.md | Before following the instructions in this article, ensure that you have the following: * [Create a database and a collection](nosql/quickstart-java.md#add-a-container) – You can set the collection throughput value to 1000 RU/s. -* Load the sample video game sales data to your Azure Cosmos DB account. You can import the data by using Azure Cosmos DB data migration tool, you can do a [sequential](import-data.md#SQLSeqTarget) or a [bulk import](import-data.md#SQLBulkTarget) of data. It takes around 3-5 minutes for the data to import to the Azure Cosmos DB account. +* Load the sample video game sales data to your Azure Cosmos DB account. * Download, install, and configure the ODBC driver by using the steps in the [connect to Azure Cosmos DB with ODBC driver](odbc-driver.md) article. The video game data is a simple data set and you don't have to edit the schema; just use the default collection-mapping schema. |
cost-management-billing | Cancel Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md | tags: billing Previously updated : 11/11/2022 Last updated : 11/21/2022 A billing account owner uses the following steps to cancel a subscription. A subscription owner can navigate in the Azure portal to **Subscriptions** and then start at step 3. 1. In the Azure portal, navigate to Cost Management + Billing.-1. In the left menu under **Products + services**, select **All billing subscriptions**. If you a support plan, it's shown in the list. +1. In the left menu under **Products + services**, select **All billing subscriptions**. If you have a support plan, it's shown in the list. :::image type="content" source="./media/cancel-azure-subscription/all-billing-subscriptions.png" alt-text="Screenshot showing all billing subscriptions." lightbox="./media/cancel-azure-subscription/all-billing-subscriptions.png" ::: 1. Select the subscription that you want to cancel. 1. At the top of page, select **Cancel**. A subscription owner can navigate in the Azure portal to **Subscriptions** and t :::image type="content" source="./media/cancel-azure-subscription/cancel-subscription.png" alt-text="Screenshot showing the subscription properties where you select Cancel subscription." lightbox="./media/cancel-azure-subscription/cancel-subscription.png" ::: 1. Select a reason for cancellation. 1. If you have a support plan and no other subscriptions use it, select **Turn off auto-renew**. If other subscriptions use the support plan, clear the option.-1. If you have any running resources associated with the subscription, you must select **Turn off resources**. Ensure that you have already backed up any data that you want to keep. +1. If you have any running resources associated with the subscription, you must select **Turn off resources**. Ensure that you've already backed up any data that you want to keep. 
1. Select **Cancel subscription**. :::image type="content" source="./media/cancel-azure-subscription/cancel-subscription-final.png" alt-text="Screenshot showing the Cancel subscription window options." lightbox="./media/cancel-azure-subscription/cancel-subscription-final.png" ::: After your subscription is canceled, Microsoft waits 30 - 90 days before permane The **Delete subscription** option isn't available until at least 15 minutes after you cancel your subscription. +Depending on your subscription type, you may not be able to delete a subscription immediately. + 1. Select your subscription on the [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) page in the Azure portal. 1. Select the subscription that you want to delete.-1. Select **Overview**, and then select **Delete subscription**. +1. At the top of the subscription page, select **Delete**. + - When all required conditions are met, you can delete the subscription. + - If you have required deletion conditions that aren't met, you'll see the following page. + :::image type="content" source="./media/cancel-azure-subscription/manual-delete-subscription.png" alt-text="Screenshot showing the Delete your subscription page." lightbox="./media/cancel-azure-subscription/manual-delete-subscription.png" ::: + - If **Delete resources** doesn't display a green check mark, then you have resources that must be deleted in order to delete the subscription. You can select **View resources** to navigate to the Resources page to manually delete the resources. After resource deletion, you might need to wait 10 minutes for resource deletion status to update in order to delete the subscription. + - If **Manual deletion date** doesn't display a green check mark, you must wait the required period before you can delete the subscription. + >[!NOTE] > 90 days after you cancel a subscription, the subscription is automatically deleted. 
-## Delete other subscriptions --The only subscription types that you can manually delete are free trial and pay-as-you-go subscriptions. All other subscription types are deleted only through the [subscription cancellation](#cancel-a-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately. - ## Reactivate a subscription If you cancel your subscription with Pay-As-You-Go rates accidentally, you can [reactivate it in the Azure portal](subscription-disabled.md). You may not have the permissions required to cancel a subscription. See [Who can * If you have an Azure Active Directory account via your organization, the Azure AD administrator could delete the account. After that, your services are disabled. That means your virtual machines are de-allocated, temporary IP addresses are freed, and storage is read-only. In summary, once you cancel, billing is stopped immediately. -* If you don't have an Azure AD account via your organization, you can cancel then delete your Azure subscriptions and then remove your credit card from the account. While the action doesn't delete the account, it renders it inoperable. You can go a step further and also delete the associated Microsoft account if it's not being used for any other purpose. +* If you don't have an Azure AD account via your organization, you can cancel then delete your Azure subscriptions, and then remove your credit card from the account. While the action doesn't delete the account, it renders it inoperable. You can go a step further and also delete the associated Microsoft account if it's not being used for any other purpose. ## How do I cancel a Visual Studio Professional account? |
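The portal Cancel action described above also has a programmatic counterpart. The sketch below is a hedged illustration only: the `Microsoft.Subscription/cancel` resource-provider path and the API version are assumptions to verify against the current Azure REST API reference before use.

```python
# Hedged sketch: compose the ARM REST endpoint that corresponds to the
# portal's Cancel action. The provider path and api-version below are
# assumptions -- check the current Azure REST API reference.
ARM_ENDPOINT = "https://management.azure.com"
API_VERSION = "2021-10-01"  # assumed Microsoft.Subscription API version

def cancel_subscription_url(subscription_id: str) -> str:
    """Return the URL a POST request would target to cancel a subscription."""
    return (
        f"{ARM_ENDPOINT}/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Subscription/cancel"
        f"?api-version={API_VERSION}"
    )

print(cancel_subscription_url("00000000-0000-0000-0000-000000000000"))
```

An authenticated POST to this endpoint would still require the same billing permissions the portal flow does.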
cost-management-billing | Mca Request Billing Ownership | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md | Title: Transfer Azure product billing ownership to a Microsoft Customer Agreement -description: Learn how to transfer billing ownership of Azure subscriptions and reservations. +description: Learn how to transfer billing ownership of Azure subscriptions, reservations, and savings plans. tags: billing Previously updated : 06/29/2022 Last updated : 11/14/2022 # Transfer Azure product billing ownership to a Microsoft Customer Agreement -This article helps you transfer billing ownership for your Azure products (subscriptions and reservations) to a Microsoft Customer Agreement when: +This article helps you transfer billing ownership for your Azure products (subscriptions, reservations, and savings plans) to a Microsoft Customer Agreement when: - You want to move billing responsibilities for a product to a different billing owner. - You want to transfer your Azure products from one licensing agreement to another. For example, from an Enterprise Agreement or a Microsoft Online Subscription Agreement (MOSA) to a Microsoft Customer Agreement.-- You want to transfer reservation ownership.+- You want to transfer reservation or savings plan ownership. [Check if you have access to a Microsoft Customer Agreement](#check-for-access). This process contains the following primary tasks, which we'll guide you throu 2. Review and approve the transfer request 3. Check transfer request status -There are three options to transfer products: +There are four options to transfer products: - Transfer only subscriptions - Transfer only reservations-- Transfer both subscriptions and reservations+- Transfer only savings plans +- Transfer a combination of products -When you send or accept transfer request, you agree to terms and conditions. 
For more information, see [Transfer terms and conditions](subscription-transfer.md#transfer-terms-and-conditions). +When you send or accept a transfer request, you agree to terms and conditions. For more information, see [Transfer terms and conditions](subscription-transfer.md#transfer-terms-and-conditions). Before you transfer billing products, read [Supplemental information about transfers](subscription-transfer.md#supplemental-information-about-transfers). ## Prerequisites +>[!IMPORTANT] +> When you have a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency, you can't transfer it. Instead you must use it in the original enrollment. However, you can change the scope of the savings plan so that it's used by other subscriptions. For more information, see [Change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope). You can view your billing currency in the Azure portal on the enrollment properties page. For more information, see [To view enrollment properties](direct-ea-administration.md#to-view-enrollment-properties). + Before you begin, make sure that the people involved in the product transfer have the required permissions. > [!NOTE] > To perform a transfer, the destination account must be a paid account with a valid form of payment. For example, if the destination is an Azure free account, you can upgrade it to a pay-as-you-go Azure plan under a Microsoft Customer Agreement. Then you can make the transfer. -You can also go along with the following video that outlines each step of the process for subscription transfer. However, it doesn't cover reservation or savings plan transfers. 
>[!VIDEO https://www.youtube.com/embed/gfiUI2YLsgc] ### Required permission for the transfer requestor -For both subscriptions and reservations, the product transfer requestor must have one of the following permissions: +The product transfer requestor must have one of the following permissions: For a Microsoft Customer Agreement, the person must have an owner or contributor role for the billing account or for the relevant billing profile or invoice section. For more information, see [Billing roles and tasks](understand-mca-roles.md#invoice-section-roles-and-tasks). -### Required permission for the subscription transfer recipient --The subscription product owner (transfer request recipient) must have one of the following permissions: --- For a Microsoft Customer Agreement, the person must have an owner or contributor role for the billing account or for the relevant billing profile or invoice section. For more information, see [Billing roles and tasks](understand-mca-roles.md#invoice-section-roles-and-tasks).-- For an Enterprise Agreement, the person must be an account owner or EA administrator. -- For a Microsoft Online Subscription Agreement, the person must be an Account Administrator.--### Required permission for the reservation transfer recipient +### Required permission for the transfer recipient -The reservation product owner (transfer request recipient) must have one of the following permissions: +The subscription, reservation, or savings plan product owner (transfer request recipient) must have one of the following permissions: - For a Microsoft Customer Agreement, the person must have an owner or contributor role for the billing account or for the relevant billing profile or invoice section. 
For more information, see [Billing roles and tasks](understand-mca-roles.md#invoice-section-roles-and-tasks).-- For an Enterprise Agreement subscription, the person must be an account owner or EA administrator.+- For an Enterprise Agreement savings plan or reservation, the person must be an EA administrator. - For a Microsoft Online Subscription Agreement, the person must be an Account Administrator. ## Create the product transfer request The person creating the transfer request uses the following procedure to create When the request is created, an email is sent to the target recipient. -The following procedure has you navigate to **Transfer requests** by selecting a **Billing scope** > **Billing account** > **Billing profile** > **Invoice sections** to **Add a new request**. If you navigate to **Add a new request** from selecting a billing profile, you'll have to select a billing profile and then select an invoice section. +The following procedure has you navigate to **Transfer requests** by selecting a **Billing scope** > **Billing account** > **Billing profile** > **Invoice sections** to **Add a new request**. If you navigate to **Add a new request** from selecting a billing profile, you'll have to select a billing profile and then select an invoice section. 1. Sign in to the [Azure portal](https://portal.azure.com) as an invoice section owner or contributor for a billing account for Microsoft Customer Agreement. Use the same credentials that you used to accept your Microsoft Customer Agreement. 1. Search for **Cost Management + Billing**. The recipient of the transfer request uses the following procedure to review and - Transfer one or more subscriptions only - Transfer one or more reservations only-- Transfer both subscriptions and reservations+- Transfer one or more savings plans only +- Transfer a combination of products 1. 
The user gets an email with instructions to review your transfer request. Select **Review the request** to open it in the Azure portal. :::image type="content" source="./media/mca-request-billing-ownership/mca-review-transfer-request-email.png" alt-text="Screenshot that shows review transfer request email." lightbox="./media/mca-request-billing-ownership/mca-review-transfer-request-email.png" ::: The recipient of the transfer request uses the following procedure to review and 1. In the Azure portal, the user selects the billing account that they want to transfer Azure products from. Then they select eligible subscriptions on the **Subscriptions** tab. If the owner doesn't want to transfer subscriptions and instead wants to transfer reservations only, make sure that no subscriptions are selected. :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-subscriptions-select.png" alt-text="Screenshot showing the Subscriptions tab." lightbox="./media/mca-request-billing-ownership/review-transfer-request-subscriptions-select.png" ::: *Disabled subscriptions can't be transferred.*-1. If there are reservations available to transfer, select the **Reservations** tab and then select them. If reservations won't be transferred, make sure that no reservations are selected. +1. If there are reservations available to transfer, select the **Reservations** tab and then select them. If reservations won't be transferred, make sure that no reservations are selected. If reservations are transferred, they're applied to the scope that's set in the request. If you want to change the scope of the reservation after it's transferred, see [Change the reservation scope](../reservations/manage-reserved-vm-instance.md#change-the-reservation-scope). :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-reservations-select.png" alt-text="Screenshot showing the Reservations tab." 
lightbox="./media/mca-request-billing-ownership/review-transfer-request-reservations-select.png" :::+1. If there are savings plans available to transfer, select the **Savings plan** tab and then select them. If savings plans won't be transferred, make sure that no savings plans are selected. +If savings plans are transferred, they're applied to the scope that's set in the request. If you want to change the scope of the savings plan after it's transferred, see [Change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope). + :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-savings-plan-select.png" alt-text="Screenshot showing the Savings plan tab." lightbox="./media/mca-request-billing-ownership/review-transfer-request-savings-plan-select.png" ::: + 1. Select the **Review request** tab and verify the information about the products to transfer. If there are Warnings or Failed status messages, see the following information. When you're ready to continue, select **Transfer**. :::image type="content" source="./media/mca-request-billing-ownership/review-transfer-request-complete.png" alt-text="Screenshot showing the Review request tab where you review your transfer selections." lightbox="./media/mca-request-billing-ownership/review-transfer-request-complete.png" ::: 1. You'll briefly see a `Transfer is in progress` message. When the transfer is completed successfully, you'll see the Transfer details page with the `Transfer completed successfully` message. You can request billing ownership of products for the subscription types listed - [Microsoft Azure Plan](https://azure.microsoft.com/offers/ms-azr-0017g/)² - [Microsoft Azure Sponsored Offer](https://azure.microsoft.com/offers/ms-azr-0036p/)¹ - [Microsoft Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/)- - Subscription and reservation transfer are supported for direct EA customers. 
A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer. - - Only subscription transfers are supported for indirect EA customers. Reservation transfers aren't supported. An indirect EA agreement is one where a customer signs an agreement with a Microsoft partner. + - Subscription, reservation, and savings plan transfers are supported for direct EA customers. A direct enterprise agreement is one that's signed between Microsoft and an enterprise agreement customer. + - Only subscription transfers are supported for indirect EA customers. Reservation and savings plan transfers aren't supported. An indirect EA agreement is one where a customer signs an agreement with a Microsoft partner. - [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) - [Microsoft Cloud Partner Program](https://azure.microsoft.com/offers/ms-azr-0025p/)¹ - [MSDN Platforms](https://azure.microsoft.com/offers/ms-azr-0062p/)¹ |
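The direct-versus-indirect EA distinction above can be summarized as a small lookup table. This is an illustrative sketch of the documented rules only, not an official API:

```python
# Illustrative encoding of the EA transfer support rules stated above:
# direct EA customers can transfer subscriptions, reservations, and
# savings plans; indirect EA customers can transfer subscriptions only.
SUPPORTED_EA_TRANSFERS = {
    "direct": {"subscription", "reservation", "savings plan"},
    "indirect": {"subscription"},
}

def can_transfer(ea_type: str, product: str) -> bool:
    """Return True if the product type is transferable for the EA type."""
    return product in SUPPORTED_EA_TRANSFERS.get(ea_type, set())

print(can_transfer("direct", "savings plan"))   # True
print(can_transfer("indirect", "reservation"))  # False
```

A table-driven check like this mirrors the product transfer support matrix and is easy to extend if more agreement types need encoding.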
cost-management-billing | Mca Setup Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md | tags: billing Previously updated : 09/23/2022 Last updated : 11/14/2022 You can use the following options to start the migration experience for your EA If you have both the enterprise administrator and billing account owner roles, you see the following page in the Azure portal. You can continue setting up your EA enrollments and Microsoft Customer Agreement billing account for transition. +Here's an example screenshot showing the Get started experience. We'll cover each of the steps in more detail later in this article. + :::image type="content" source="./media/microsoft-customer-agreement-setup-account/setup-billing-account-page.png" alt-text="Screenshot showing the Set up your billing account page" lightbox="./media/microsoft-customer-agreement-setup-account/setup-billing-account-page.png" ::: If you don't have the enterprise administrator role for the enterprise agreement or the billing account owner role for the Microsoft Customer Agreement, then use the following information to get the access that you need to complete setup. Next, select the source enrollment to transition. Then select the billing accoun If you have the Enterprise Administrator (read-only) role, you'll see the following error that prevents the transition. You must have the Enterprise Administrator role before you can transition your enrollment. -`Select another enrollment. You do not hve Enterprise Administrator write permission to the enrollment.` +`Select another enrollment. You do not have Enterprise Administrator write permission to the enrollment.` -If your enrollment has more than 60 days until its end date, you'll see the following error that prevents the transition. The current date must be within 60 of the enrollment end before you can transition your enrollment. 
+If your enrollment has more than 60 days until its end date, you'll see the following error that prevents the transition. The current date must be within 60 days of the enrollment end date before you can transition your enrollment. `Select another enrollment. This enrollment has more than 60 days before its end date.` The following sections provide additional information about setting up your bill ### No service downtime -Azure services in your subscription keep running without any interruption. We only transition the billing relationship for your Azure subscriptions. There won't be an impact to existing resources, resource groups, or management groups. +Azure services in your subscription keep running without any interruption. We only transition the billing relationship for your Azure subscriptions. There won't be any change to existing resources, resource groups, or management groups. ### User access to Azure resources Access to Azure resources that was set using Azure role-based access control (Azure RBAC) isn't affected during the transition. -### Azure Reservations +### Azure reservations and savings plans ++Any Azure reservations and savings plans in your Enterprise Agreement enrollment are moved to your new billing account. During the transition, there won't be any changes to the reservation discounts that are being applied to your subscriptions. If you have a savings plan that's getting transferred and it was purchased in USD, there won't be any changes to the savings plan discounts that are being applied to your subscriptions. ++### Savings plan transfers with a non-USD billing currency ++You'll see the following image when your Enterprise Agreement enrollment savings plan wasn't purchased in USD. +++>[!NOTE] +> You must cancel any savings plan under the Enterprise Agreement that wasn't purchased in USD. Then you can repurchase it under the terms of the new Microsoft Customer Agreement in USD. 
-Any Azure Reservations in your Enterprise Agreement enrollment is moved to your new billing account. During the transition, there won't be any changes to the reservation discounts that are being applied to your subscriptions. +To move forward, select **View charges** to open the Exchange savings plans page and view the savings plans that must be repurchased. ++The Exchange savings plans page shows you the savings plans that will get canceled and credit that will get returned in the original currency to the source enrollment. It also shows the new savings plans that will get charged in USD for a one-year term for the target billing account. The new offer is a one-year term and matches the previous savings plan commitment per hour. ++Here's an example showing the exchange. Monetary values are for example purposes. +++Close the Exchange savings plan page and then select the **I have viewed and agree to the charges for my new savings plans and understand that my current savings plans will be canceled and refunded to my original payment method** prompt to agree and continue. ### Azure Marketplace products Complete the setup of your billing account before your Enterprise Agreement enro ### Changes to the Enterprise Agreement enrollment after the setup -Azure subscriptions that are created for the Enterprise Agreement enrollment after the transition can be manually moved to the new billing account. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md). To move Azure Reservations that are purchased after the transition, [contact Azure Support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). You can also provide users access to the billing account after the transition. 
For more information, see [manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal) +Azure subscriptions that are created for the Enterprise Agreement enrollment after the transition can be manually moved to the new billing account. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md). To move Azure reservations or savings plans that are purchased after the transition, [contact Azure Support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). You can also provide users access to the billing account after the transition. For more information, see [manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal) ### Revert the transition To complete the setup, you need access to both the new billing account and the E - A billing hierarchy corresponding to your Enterprise Agreement hierarchy is created in the new billing account. For more information, see [understand changes to your billing hierarchy](#understand-changes-to-your-billing-hierarchy). - Administrators from your Enterprise Agreement enrollment are given access to the new billing account so that they continue to manage billing for your organization. - The billing of your Azure subscriptions is transitioned to the new account. **There won't be any impact on your Azure services during this transition. They'll keep running without any disruption**.- - If you have Azure Reservations, they're moved to your new billing account with no change to benefits or term. + - If you have Azure reservations or savings plans, they're moved to your new billing account with no change to benefits or term. If you have savings plans under the Enterprise Agreement that weren't purchased in USD, then the savings plans are canceled. They're repurchased under the terms of the new Microsoft Customer Agreement in USD. -4. 
You can monitor the status of the transition on the **Transition status** page. +4. You can monitor the status of the transition on the **Transition status** page. Any savings plans shown in the Transition details are ones that were canceled. + - If you had a savings plan that was repurchased, select the **new savings plan** link to view its details and to verify that it was created successfully. ![Screenshot that shows the transition status](./media/microsoft-customer-agreement-setup-account/ea-microsoft-customer-agreement-set-up-status.png) To complete the setup, you need access to both the new billing account and the E Azure subscriptions that are transitioned from your Enterprise Agreement enrollment to the new billing account are displayed on the Azure subscriptions page. If you believe any subscription is missing, transition the billing of the subscription manually in the Azure portal. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md) -### Azure reservations --Azure reservations in your Enterprise Agreement enrollment will be moved to your new billing account with no change to benefits or term. Transactions completed before the transition won't appear in your new billing account. However, you can verify that the benefits of your reservations are applied to your subscriptions by visiting the [Azure reservations page](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade). - ### Access of enterprise administrators on the billing account 1. Sign in to the [Azure portal](https://portal.azure.com). |
cost-management-billing | Subscription Transfer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md | Title: Azure subscription and reservation transfer hub -description: This article helps you understand what's needed to transfer Azure subscriptions and provides links to other articles for more detailed information. + Title: Azure product transfer hub +description: This article helps you understand what's needed to transfer Azure subscriptions, reservations, and savings plans and provides links to other articles for more detailed information. tags: billing Previously updated : 08/01/2022 Last updated : 10/25/2022 -# Azure subscription and reservation transfer hub +# Azure product transfer hub -This article describes the types of supported transfers for Azure subscriptions and Azure reservations, referred to as _products_. It also helps you understand what's needed to transfer Azure products between different billing agreements and provides links to other articles for more detailed information about specific transfers. Azure products are created upon different Azure agreement types and a transfer from a source agreement type to another varies depending on the source and destination agreement types. Azure product transfers can be an automatic or a manual process, depending on the source and destination agreement type. If it's a manual process, the agreement types determine how much manual effort is needed. +This article describes the types of supported transfers for Azure subscriptions, reservations, and savings plans referred to as _products_. It also helps you understand what's needed to transfer Azure products between different billing agreements and provides links to other articles for more detailed information about specific transfers. 
Azure products are created upon different Azure agreement types and a transfer from a source agreement type to another varies depending on the source and destination agreement types. Azure product transfers can be an automatic or a manual process, depending on the source and destination agreement type. If it's a manual process, the agreement types determine how much manual effort is needed. > [!NOTE] > There are many types of Azure products, however not every product can be transferred from one type to another. Only supported product transfers are documented in this article. If you need help with a situation that isn't addressed in this article, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) for assistance. For more information about product transfers between different Azure AD tenants, ## Product transfer planning -As you begin to plan your product transfer, consider the information needed to answer to the following questions: +As you begin to plan your product transfer, consider the information needed to answer the following questions: - Why is the product transfer required? - What's the wanted timeline for the product transfer? As you begin to plan your product transfer, consider the information needed to a - Others like MSDN, BizSpark, EOPEN, Azure Pass, and Free Trial - Do you have the required permissions on the product to accomplish a transfer? Specific permission needed for each transfer type is listed in the following product transfer support table. - Only the billing administrator of an account can transfer subscription ownership.- - Only a billing administrator owner can transfer reservation ownership. -- Are there existing subscriptions that benefit from reservations and will the reservations need to be transferred with the subscription?+ - Only a billing administrator owner can transfer reservation or savings plan ownership. 
+- Are there existing subscriptions that benefit from reservations or savings plans and will they need to be transferred with the subscription? You should have an answer for each question before you continue with any transfer. Support plans can't be transferred. If you have a support plan, you should cance Dev/Test products aren't shown in the following table. Transfers for Dev/Test products are handled in the same way as other product types. For example, an EA Dev/Test product transfer is handled in the same way as an EA product transfer. > [!NOTE]-> Reservations transfer with most supported product transfers. However, there are some transfers where reservations won't transfer, as noted in the following table. +> Reservations and savings plans transfer with most supported product transfers. However, there are some transfers where reservations or savings plans won't transfer, as noted in the following table. | Source (current) product agreement type | Destination (future) product agreement type | Notes | | | | |-| EA | MOSP (PAYG) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. | -| EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | -| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Self-service reservation transfers are supported.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). 
| -| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). - Self-service reservation transfers are supported. | +| EA | MOSP (PAYG) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | +| EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers with no currency change are supported. <br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. However, you can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. | +| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Self-service reservation and savings plan transfers are supported.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). 
| +| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products, not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <br><br>• Self-service reservation and savings plan transfers with no currency change are supported.<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. | | EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). 
|-| MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. | -| MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | -| MCA - individual | EA | • For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • Self-service reservation transfers are supported. | -| MCA - individual | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation transfers are supported. | -| MCA - Enterprise | MOSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. | -| MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | -| MCA - Enterprise | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | -| MCA - Enterprise | MPA | • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. 
For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Self-service reservation transfers are supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). | +| MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | +| MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. | +| MCA - individual | EA | • For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • Self-service reservation and savings plan transfers are supported. | +| MCA - individual | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. | +| MCA - Enterprise | MOSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. 
| +| MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. | +| MCA - Enterprise | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. | +| MCA - Enterprise | MPA | • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Self-service reservation and savings plan transfers are supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). | | Previous Azure offer in CSP | Previous Azure offer in CSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. | | Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). |-| MPA | EA | • Automatic transfer isn't supported. 
Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product.<br><br> • Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <br><br> • Reservations don't automatically transfer and transferring them isn't supported. | -| MPA | MPA | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. | +| MPA | EA | • Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product.<br><br> • Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | +| MPA | MPA | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. | | MOSP (PAYG) | MOSP (PAYG) | • If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. | | MOSP (PAYG) | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. 
| | MOSP (PAYG) | EA | • If you're transferring the subscription to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're changing billing ownership, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). | |
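The support matrix above is essentially a lookup table keyed by (source, destination) agreement type. As a quick illustration, a few of its rows can be encoded directly in code. This is a partial, hypothetical sketch: the pairs and outcomes are copied from the table, but the dictionary and the `benefits_move` helper are our own names, not an Azure API.

```python
# Illustrative only: a partial encoding of the product-transfer matrix above.
# Keys are (source, destination) agreement types copied from the table; values
# say whether reservations/savings plans move with the transfer per its Notes.
BENEFITS_TRANSFER = {
    ("EA", "MOSP (PAYG)"): False,         # don't transfer; not supported
    ("EA", "MCA - individual"): True,     # self-service, no currency change
    ("EA", "EA"): True,                   # self-service transfers supported
    ("MCA - Enterprise", "MOSP"): False,  # don't transfer; not supported
    ("MPA", "EA"): False,                 # manual resource moves only
}

def benefits_move(source: str, destination: str) -> bool:
    """Look up whether reservations/savings plans transfer for this pair."""
    try:
        return BENEFITS_TRANSFER[(source, destination)]
    except KeyError:
        raise KeyError(
            f"Pair not in this partial matrix: {source} -> {destination}"
        ) from None
```

For the authoritative and complete matrix, always consult the table itself; this sketch only shows how the notes reduce to a simple yes/no per row.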
cost-management-billing | Buy Savings Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/buy-savings-plan.md | To determine if you're eligible to buy a plan, [check your billing type](../mana Savings plan discounts only apply to resources associated with subscriptions purchased through an Enterprise Agreement, Microsoft Customer Agreement, or Microsoft Partner Agreement (MPA). +## Change agreement type to one supported by savings plan ++If your current agreement type isn't supported by a savings plan, you might be able to transfer or migrate it to one that's supported. For more information, see the following articles. ++- [Transfer Azure products between different billing agreements](../manage/subscription-transfer.md) +- [Product transfer support](../manage/subscription-transfer.md#product-transfer-support) +- [From MOSA to the Microsoft Customer Agreement](https://www.microsoft.com/licensing/news/from-mosa-to-microsoft-customer-agreement) + ### Enterprise Agreement customers - EA admins with write permissions can directly purchase savings plans from **Cost Management + Billing** > **Savings plan**. No specific permission for a subscription is needed. |
cost-management-billing | Manage Savings Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/manage-savings-plan.md | After you buy an Azure savings plan, you may need to apply the savings plan to a _Permission needed to manage a savings plan is separate from subscription permission._ -## Savings plan order and savings plan +## View savings plan order To view a savings plan order, go to **Savings Plan** > select the savings plan, and then select the **Savings plan order ID**. If you're a billing administrator you don't need to be an owner on the subscription. 2. In the left menu, select **Products + services** > **Savings plan**. 3. The complete list of savings plans for your EA enrollment or billing profile is shown. -## Change billing subscription --We don't allow changing Billing subscription after a savings plan is purchased. - ## Cancel, exchange, or refund You can't cancel, exchange, or refund savings plans. +## Transfer savings plan ++Although you can't cancel, exchange, or refund a savings plan, you can transfer it from one supported agreement to another. For more information about supported transfers, see [Azure product transfer hub](../manage/subscription-transfer.md#product-transfer-support). + ## View savings plan usage Billing administrators can view savings plan usage in Cost Management + Billing. |
data-factory | Concepts Data Flow Debug Mode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-debug-mode.md | In most cases, it's a good practice to build your Data Flows in debug mode so th > [!NOTE] > Every debug session that a user starts from their browser UI is a new session with its own Spark cluster. You can use the monitoring view for debug sessions above to view and manage debug sessions. You are charged for every hour that each debug session is executing, including the TTL time. +This video clip talks about tips, tricks, and good practices for data flow debug mode. +> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWQK3m] + ## Cluster status The cluster status indicator at the top of the design surface turns green when the cluster is ready for debug. If your cluster is already warm, then the green indicator will appear almost instantly. If your cluster wasn't already running when you entered debug mode, then the Spark cluster will perform a cold boot. The indicator will spin until the environment is ready for interactive debugging. |
databox-online | Azure Stack Edge Gpu 2210 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2210-release-notes.md | The following release notes identify the critical open issues and the resolved i The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes. -This article applies to the **Azure Stack Edge 2210** release, which maps to software version **2.2.2111.98**. This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2038.5916). +This article applies to the **Azure Stack Edge 2210** release, which maps to software version **2.2.2111.1002**. This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2038.5916). ## What's new The 2210 release has the following features and enhancements: -- **High performance network VMs** - In this release, when you deploy high performance network (HPN) VMs, vCPUs are automatically reserved using a default SkuPolicy. If a vCPU reservation was defined in an earlier version, and if you update the device to 2210, then that existing reservation is carried forth to 2210. For more information, see how to [Deploy HPN VMs on your Azure Stack Edge](azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md).+- **High performance network VM enhancements:** + - When you deploy high performance network (HPN) VMs, vCPUs are automatically reserved using a default SkuPolicy. If a vCPU reservation was defined in an earlier version, and if you update the device to 2210, then that existing reservation is carried forth to 2210. For more information, see how to [Deploy HPN VMs on your Azure Stack Edge](azure-stack-edge-gpu-deploy-virtual-machine-high-performance-network.md). 
+ - Added support for bulk network configuration changes. For example, you now can edit multiple virtual switches and multiple virtual networks in the local UI. This improvement will reduce network configuration time. + - Added HPN GPU VM sizes for T4 and A2 GPUs and Standard_F4s_v1 VM size. See the updated article at [VM sizes and types for Azure Stack Edge Pro](azure-stack-edge-gpu-virtual-machine-sizes.md). - **Kubernetes security updates** - This release includes security updates and security hardening improvements for Kubernetes VMs. If you have questions or concerns, [open a support case through the Azure portal](azure-stack-edge-contact-microsoft-support.md). +## Issues fixed in this release ++| No. | Feature | Issue | +| | | | +|**1.**|Networking |Open vSwitch Database Management Protocol (OVSDB) connection failure caused by stale node IP after user changed physical net adapter link status. | ++## Known issues in this release ++| No. | Feature | Issue | Workaround/comments | +| | | | | +|**1.**|Preview features |For this release, the following features are available in preview: <br> - Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. <br> - VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only. <br> - Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R. |These features will be generally available in later releases. | + ## Known issues from previous releases The following table provides a summary of known issues carried over from the previous releases. |
databox-online | Azure Stack Edge Gpu Install Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md | The procedure described in this article was performed using a different version The current update is Update 2210. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are: -- Device software version: Azure Stack Edge 2210 (2.2.2111.98)-- Device Kubernetes version: Azure Stack Kubernetes Edge 2210 (2.2.2111.98)+- Device software version: Azure Stack Edge 2210 (2.2.2111.1002) +- Device Kubernetes version: Azure Stack Kubernetes Edge 2210 (2.2.2111.1002) - Kubernetes server version: v1.23.8 - IoT Edge version: 0.1.0-beta15 - Azure Arc version: 1.7.18 |
defender-for-cloud | Attack Path Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md | To learn about how to respond to these attack paths, see [Identify and remediate ### Azure VMs -Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentless.md). +Prerequisites: For a list of prerequisites, see the [Availability](how-to-manage-attack-path.md#availability) table for attack paths. | Attack Path Display Name | Attack Path Description | |--|--| |
defender-for-cloud | Concept Agentless Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md | Agentless scanning for VMs provides vulnerability assessment and software invent | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts | | Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux | | Instance types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |-| Encryption: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted – Azure Disk Encryption - platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – Azure Disk Encryption - customer-managed keys (CMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – Disk Encryption Set - platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – Disk 
Encryption Set - customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted | +| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted | ## How agentless scanning for VMs works Agentless scanning protects disk snapshots according to Microsoft's highest se - All operations are audited. ### Does agentless scanning support encrypted disks?-Agentless scanning doesn't yet support encrypted disks, except for Azure Disk Encryption. +Agentless scanning doesn't currently support encrypted disks, except for Azure managed disks using [Azure Storage encryption](../virtual-machines/disk-encryption.md) with platform-managed keys (PMK). ## Next steps |
defender-for-cloud | Defender For Cloud Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md | Title: Defender for Cloud glossary for device builder + Title: Defender for Cloud glossary description: The glossary provides a brief description of important Defender for Cloud platform terms and concepts. Last updated 10/30/2022 -# Defender for Cloud glossary for device builder +# Defender for Cloud glossary This glossary provides a brief description of important terms and concepts for the Microsoft Defender for Cloud platform. Select the **Learn more** links to go to related terms in the glossary. This will help you to learn and use the product tools quickly and effectively. |
defender-for-cloud | How To Manage Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md | You can check out the full list of [Attack path names and descriptions](attack-p | Aspect | Details | |--|--| | Release state | Preview |-| Prerequisite | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md) <br> - [Enable Defender for CSPM](enable-enhanced-security.md) <br> - [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. | +| Prerequisites | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md), or [Enable Defender for Server P1 (which includes MDVM)](defender-for-servers-introduction.md) or [Defender for Server P2 (which includes MDVM and Qualys)](defender-for-servers-introduction.md). <br> - [Enable Defender for CSPM](enable-enhanced-security.md) <br> - [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. 
| | Required plans | - Defender Cloud Security Posture Management (CSPM) enabled | | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) | |
defender-for-cloud | Quickstart Onboard Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-machines.md | You can connect your non-Azure computers in any of the following ways: Each of these is described on this page. > [!TIP]-> If you're connecting machines from other cloud providers, see [Connect your AWS accounts](quickstart-onboard-aws.md) or [Connect your GCP projects](quickstart-onboard-gcp.md). +> If you're connecting machines from other cloud providers, see [Connect your AWS accounts](quickstart-onboard-aws.md) or [Connect your GCP projects](quickstart-onboard-gcp.md). Defender for Cloud's multicloud connectors for AWS and GCP transparently handle the Azure Arc deployment for you. ::: zone pivot="azure-arc" Learn more about [Azure Arc-enabled servers](../azure-arc/servers/overview.md). - For one machine, follow the instructions in [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md). - To connect multiple machines at scale to Azure Arc-enabled servers, see [Connect hybrid machines to Azure at scale](../azure-arc/servers/onboard-service-principal.md) -> [!TIP] -> If you're onboarding machines running on Amazon Web Services (AWS), Defender for Cloud's connector for AWS transparently handles the Azure Arc deployment for you. Learn more in [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md). - ::: zone-end ::: zone pivot="azure-portal" |
defender-for-iot | Onboard Sensors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/onboard-sensors.md | This procedure describes how to use the Azure portal to contact vendors for pre- - **To buy a pre-configured appliance**, select **Contact** under **Buy preconfigured appliance**. - This link opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). + This link opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances&body=Dear%20Arrow%20Representative,%0D%0DOur%20organization%20is%20interested%20in%20receiving%20quotes%20for%20Microsoft%20Defender%20for%20IoT%20appliances%20as%20well%20as%20fulfillment%20options.%0D%0DThe%20purpose%20of%20this%20communication%20is%20to%20inform%20you%20of%20a%20project%20which%20involves%20[NUMBER]%20sites%20and%20[NUMBER]%20sensors%20for%20[ORGANIZATION%20NAME].%20Having%20reviewed%20potential%20appliances%20suitable%20for%20our%20project,%20we%20would%20like%20to%20obtain%20more%20information%20about:%20___________%0D%0D%0DI%20would%20appreciate%20being%20contacted%20by%20an%20Arrow%20representative%20to%20receive%20a%20quote%20for%20the%20items%20mentioned%20above.%0DI%20understand%20the%20quote%20and%20appliance%20delivery%20shall%20be%20in%20accordance%20with%20the%20relevant%20Arrow%20terms%20and%20conditions%20for%20Microsoft%20Defender%20for%20IoT%20pre-configured%20appliances.%0D%0D%0DBest%20Regards,%0D%0D%0D%0D%0D%0D//////////////////////////////%20%0D/////////%20Replace%20[NUMBER]%20with%20appropriate%20values%20related%20to%20your%20request.%0D/////////%20Replace%20[ORGANIZATION%20NAME]%20with%20the%20name%20of%20the%20organization%
20you%20represent.%0D//////////////////////////////%0D%0D) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). - **To install software on your own appliances**, do the following: |
governance | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md | Title: Understand the machine configuration feature of Azure Policy description: Learn how Azure Policy uses the machine configuration feature to audit or configure settings inside virtual machines. Previously updated : 07/25/2022 Last updated : 11/16/2022 definitions as long as they're one of the operating systems in the table above. ## Network requirements -Virtual machines in Azure can use either their local network adapter or a -private link to communicate with the machine configuration service. +Azure virtual machines can use either their local virtual network adapter (vNIC) +or Azure Private Link to communicate with the machine configuration service. -Azure Arc machines connect using the on-premises network infrastructure to reach +Azure Arc-enabled machines connect using the on-premises network infrastructure to reach Azure services and report compliance status. 
+Following is a list of the Azure Storage endpoints required for Azure and Azure Arc-enabled +virtual machines to communicate with the machine configuration resource provider in Azure: ++- oaasguestconfigac2s1.blob.core.windows.net +- oaasguestconfigacs1.blob.core.windows.net +- oaasguestconfigaes1.blob.core.windows.net +- oaasguestconfigases1.blob.core.windows.net +- oaasguestconfigbrses1.blob.core.windows.net +- oaasguestconfigbrss1.blob.core.windows.net +- oaasguestconfigccs1.blob.core.windows.net +- oaasguestconfigces1.blob.core.windows.net +- oaasguestconfigcids1.blob.core.windows.net +- oaasguestconfigcuss1.blob.core.windows.net +- oaasguestconfigeaps1.blob.core.windows.net +- oaasguestconfigeas1.blob.core.windows.net +- oaasguestconfigeus2s1.blob.core.windows.net +- oaasguestconfigeuss1.blob.core.windows.net +- oaasguestconfigfcs1.blob.core.windows.net +- oaasguestconfigfss1.blob.core.windows.net +- oaasguestconfiggewcs1.blob.core.windows.net +- oaasguestconfiggns1.blob.core.windows.net +- oaasguestconfiggwcs1.blob.core.windows.net +- oaasguestconfigjiws1.blob.core.windows.net +- oaasguestconfigjpes1.blob.core.windows.net +- oaasguestconfigjpws1.blob.core.windows.net +- oaasguestconfigkcs1.blob.core.windows.net +- oaasguestconfigkss1.blob.core.windows.net +- oaasguestconfigncuss1.blob.core.windows.net +- oaasguestconfignes1.blob.core.windows.net +- oaasguestconfignres1.blob.core.windows.net +- oaasguestconfignrws1.blob.core.windows.net +- oaasguestconfigqacs1.blob.core.windows.net +- oaasguestconfigsans1.blob.core.windows.net +- oaasguestconfigscuss1.blob.core.windows.net +- oaasguestconfigseas1.blob.core.windows.net +- oaasguestconfigsecs1.blob.core.windows.net +- oaasguestconfigsfns1.blob.core.windows.net +- oaasguestconfigsfws1.blob.core.windows.net +- oaasguestconfigsids1.blob.core.windows.net +- oaasguestconfigstzns1.blob.core.windows.net +- oaasguestconfigswcs1.blob.core.windows.net +- oaasguestconfigswns1.blob.core.windows.net +- 
oaasguestconfigswss1.blob.core.windows.net +- oaasguestconfigswws1.blob.core.windows.net +- oaasguestconfiguaecs1.blob.core.windows.net +- oaasguestconfiguaens1.blob.core.windows.net +- oaasguestconfigukss1.blob.core.windows.net +- oaasguestconfigukws1.blob.core.windows.net +- oaasguestconfigwcuss1.blob.core.windows.net +- oaasguestconfigwes1.blob.core.windows.net +- oaasguestconfigwids1.blob.core.windows.net +- oaasguestconfigwus2s1.blob.core.windows.net +- oaasguestconfigwus3s1.blob.core.windows.net +- oaasguestconfigwuss1.blob.core.windows.net + ### Communicate over virtual networks in Azure To communicate with the machine configuration resource provider in Azure, machines For more information about troubleshooting machine configuration, see Guest Configuration policy definitions now support assigning the same guest assignment to more than once per machine when the policy assignment uses different-parameters. +parameters. ### Assignments to Azure Management Groups |
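When a machine fails to report compliance, a first troubleshooting step is to confirm basic outbound reachability to the Azure Storage endpoints listed above before digging into firewall or Private Link configuration. Below is a minimal sketch, assuming Python 3 is available on the machine; `tcp_reachable` is our own helper name, not an Azure tool, and a successful TCP connect on port 443 only proves network reachability, not authorization or TLS validity.

```python
import socket

def tcp_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the name and attempts the connect in one call.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts.
        return False

# Example: check one of the endpoints from the list above.
# print(tcp_reachable("oaasguestconfigeuss1.blob.core.windows.net"))
```

Running the check for each endpoint in the list quickly narrows the problem to DNS, routing, or egress filtering rather than the machine configuration agent itself.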
hdinsight | Hdinsight Migrate Granular Access Cluster Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-migrate-granular-access-cluster-configurations.md | Update to [version 1.0.0](https://search.maven.org/artifact/com.microsoft.azure. ### SDK For Go -Update to [version 27.1.0](https://github.com/Azure/azure-sdk-for-go/tree/master/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight) or later of the HDInsight SDK for Go. Minimal code modifications may be required if you are using a method affected by these changes: +Update to [version 27.1.0](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/hdinsight) or later of the HDInsight SDK for Go. Minimal code modifications may be required if you are using a method affected by these changes: - [`ConfigurationsClient.get`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.Get) will **no longer return sensitive parameters** like storage keys (core-site) or HTTP credentials (gateway). - To retrieve all configurations, including sensitive parameters, use [`ConfigurationsClient.list`](https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2015-03-01-preview/hdinsight#ConfigurationsClient.List) going forward. Note that users with the 'Reader' role will not be able to use this method. This allows for granular control over which users can access sensitive information for a cluster. |
healthcare-apis | Events Consume Logic Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md | Continue specifying your Logic App by clicking "Next: Monitoring". - Application Insights - Region -Enable Azure Monitor application insights to automatically monitor your application. If you enable insights, you must create a new insight and specify the region. +Enable Azure Monitor Application Insights to automatically monitor your application. If you enable insights, you must create a new insight and specify the region. ### Tags - Tab 4 |
healthcare-apis | Deploy 02 New Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-02-new-button.md | Title: Deploy the MedTech service with a QuickStart template - Azure Health Data Services -description: In this article, you'll learn how to deploy the MedTech service in the Azure portal using a Quickstart template. + Title: Deploy MedTech service with an Azure Resource Manager (ARM) template - Azure Health Data Services +description: In this article, you'll learn how to deploy MedTech service in the Azure portal using an Azure Resource Manager (ARM) template. Previously updated : 11/01/2022 Last updated : 11/18/2022 -# Deploy the MedTech service with an Azure Resource Manager Quickstart template +# Quickstart: Deploy MedTech service with an Azure Resource Manager template -In this article, you'll learn how to deploy the MedTech service in the Azure portal using an Azure Resource Manager (ARM) Quickstart template. This template will be used with the **Deploy to Azure** button to make it easy to provide the information you need to automatically create the infrastructure and configuration of your deployment. For more information about Azure Resource Manager (ARM) templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md). +In this article, you'll learn how to deploy MedTech service in the Azure portal using an Azure Resource Manager (ARM) template. This ARM template will be used with the **Deploy to Azure** button to make it easy to provide the information you need to automatically create the infrastructure and configuration of your deployment. For more information about Azure Resource Manager (ARM) templates, see [What are ARM templates?](../../azure-resource-manager/templates/overview.md). 
++The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/). If you need to see a diagram with information on the MedTech service deployment, there's an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a Fast Healthcare Interoperability Resources (FHIR®) Observation. Next, you need to select the ARM template **Deploy to Azure** button here: [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors%2Fazuredeploy.json). -This button will call a template from the Azure Resource Manager (ARM) Quickstart template library to get information from your Azure subscription environment and begin deploying the MedTech service. +This button will call an ARM template from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site to get information from your Azure subscription environment and begin deploying the MedTech service. After you select the **Deploy to Azure** button, it may take a few minutes to implement the following resources and roles: Now that the MedTech service is successfully deployed, there are three post-depl ## Next steps -In this article, you learned how to deploy the MedTech service in the Azure portal using a Quickstart ARM template with a **Deploy to Azure** button. 
To learn more about other methods of deployment, see +In this article, you learned how to deploy the MedTech service in the Azure portal using an ARM template with a **Deploy to Azure** button. To learn more about other methods of deployment, see > [!div class="nextstepaction"] > [Choosing a method of deployment for MedTech service in Azure](deploy-iot-connector-in-azure.md) |
healthcare-apis | Deploy 03 New Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-03-new-manual.md | -# How to manually deploy MedTech service using the Azure portal +# Quickstart: How to manually deploy MedTech service using the Azure portal You may prefer to manually deploy MedTech service if you need to track every step of the developmental process. Manual deployment might be necessary if you have to customize or troubleshoot your deployment. Manual deployment will help you by providing all the details for implementing each task. |
healthcare-apis | Deploy 05 New Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-05-new-config.md | -# Part 2: Configure the MedTech service for manual deployment using the Azure portal +# Quickstart: Part 2: Configure the MedTech service for manual deployment using the Azure portal Before you can manually deploy the MedTech service, you must complete the following configuration tasks: |
healthcare-apis | Deploy 06 New Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-06-new-deploy.md | -# Part 3: Manual deployment and post-deployment of MedTech service +# Quickstart: Part 3: Manual deployment and post-deployment of MedTech service When you're satisfied with your configuration and it has been successfully validated, you can complete the deployment and post-deployment process. Now that you have granted access to the device message event hub and the FHIR se ## Next steps -In this article, you learned how to perform the manual deployment and post-deployment steps to implement your MedTech service. To learn more about other methods of deployment, see +In this article, you learned how to perform the manual deployment and post-deployment steps to implement your MedTech service. To learn more about other methods of deployment, see > [!div class="nextstepaction"] > [Choosing a method of deployment for MedTech service in Azure](deploy-iot-connector-in-azure.md) > [!div class="nextstepaction"]-> [Deploy the MedTech service with a QuickStart template](deploy-02-new-button.md) +> [Deploy the MedTech service with an Azure Resource Manager template](deploy-02-new-button.md) > [!div class="nextstepaction"] > [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](deploy-08-new-ps-cli.md) |
healthcare-apis | Deploy 08 New Ps Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-08-new-ps-cli.md | -# Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates +# Quickstart: Using Azure PowerShell and Azure CLI to deploy the MedTech service with Azure Resource Manager templates -In this Quickstart article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. When you call the template from PowerShell or CLI, it provides automation that enables you to distribute your deployment to large numbers of developers. Using PowerShell or CLI allows for modifiable automation capabilities that will speed up your deployment configuration in enterprise environments. For more information about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md). +In this article, you'll learn how to use Azure PowerShell and Azure CLI to deploy the MedTech service using an Azure Resource Manager (ARM) template. When you call the template from PowerShell or CLI, it provides automation that enables you to distribute your deployment to large numbers of developers. Using PowerShell or CLI allows for modifiable automation capabilities that will speed up your deployment configuration in enterprise environments. For more information about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md). ++The ARM template used in this article is available from the [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/). 
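A deployment of that template from the Azure CLI might look like the following. This is an untested sketch: the resource group name is a placeholder, and any required template parameters (which the CLI prompts for if omitted) depend on the contents of **azuredeploy.json**.

```azurecli
# Hypothetical example; replace the resource group with your own.
az deployment group create \
  --resource-group my-medtech-rg \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json
```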
## Resources provided by the ARM template The ARM template will help you automatically configure and deploy the following - system-assigned managed identity access roles needed to read and write to the FHIR service (named **FHIR Data Writer**) - An output file containing the ARM template deployment results (named **medtech_service_ARM_template_deployment_results.txt**). The file is located in the directory from which you ran the script. -The ARM template used in this article is available from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/iotconnectors/) site using the **azuredeploy.json** file located on [GitHub](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors/azuredeploy.json). - If you need to see a diagram with information on the MedTech service deployment, there's an architecture overview at [Choose a deployment method](deploy-iot-connector-in-azure.md#deployment-architecture-overview). This diagram shows the data flow steps of deployment and how MedTech service processes data into a FHIR Observation. ## Azure PowerShell prerequisites |
healthcare-apis | Deploy Iot Connector In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md | -# Choose a deployment method +# Quickstart: Choose a deployment method -MedTech service provides multiple methods for deploying it into an Azure Platform as a service (PaaS) configuration. Each method has different advantages that will allow you to customize your development environment to suit your needs. +MedTech service provides multiple methods for deploying it into an Azure Platform as a Service (PaaS) configuration. Each method has different advantages that will allow you to customize your development environment to suit your needs. The different deployment methods are: -- Azure Resource Manager (ARM) Quickstart template with Deploy to Azure button+- Azure Resource Manager (ARM) template with Deploy to Azure button - Azure PowerShell and Azure CLI automation-- Manual deployment+- Azure portal manual deployment -## Azure ARM Quickstart template with Deploy to Azure button +## Azure Resource Manager template with Deploy to Azure button -Using a Quickstart template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. This button automatically generates the following configurations and resources: managed identity RBAC roles, a provisioned workspace and namespace, an Event Hubs instance, a Fast Healthcare Interoperability Resources (FHIR®) service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but doesn't allow for much customization. +Using an ARM template with Azure portal is the easiest and fastest deployment method because it automates most of your configuration with the touch of a **Deploy to Azure** button. 
This button automatically generates the following configurations and resources: managed identity Azure role-based access control (RBAC) roles, a provisioned workspace and namespace, an Event Hubs instance, a Fast Healthcare Interoperability Resources (FHIR®) service instance, and a MedTech service instance. All you need to add are post-deployment device mapping, destination mapping, and a shared access policy key. This method simplifies your deployment, but doesn't allow for much customization. -For more information about the Quickstart template and the Deploy to Azure button, see [Deploy the MedTech service with a Quickstart template](deploy-02-new-button.md). +For more information about the ARM template and the Deploy to Azure button, see [Quickstart: Deploy MedTech service with an Azure Resource Manager template](deploy-02-new-button.md). ## Azure PowerShell and Azure CLI automation Azure provides Azure PowerShell and Azure CLI to speed up your configurations when used in enterprise environments. Deploying MedTech service with Azure PowerShell or Azure CLI can be useful for adding automation so that you can scale your deployment for a large number of developers. This method is more detailed but provides extra speed and efficiency because it allows you to automate your deployment. -For more information about Using an ARM template with Azure PowerShell and Azure CLI, see [Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](deploy-08-new-ps-cli.md). +For more information about using an ARM template with Azure PowerShell and Azure CLI, see [Quickstart: Using Azure PowerShell and Azure CLI to deploy the MedTech service using Azure Resource Manager templates](deploy-08-new-ps-cli.md). -## Manual deployment +## Azure portal manual deployment The manual deployment method uses the Azure portal to implement each deployment task individually. 
Using the manual deployment method will allow you to see all the details of how to complete the sequence of each deployment task. The manual deployment method can be beneficial if you need to customize or troubleshoot your deployment process. The manual deployment is the most complex method, but it provides valuable technical information and developmental options that will enable you to fine-tune your deployment precisely. There are six different steps of the MedTech service PaaS. Only the first four a ### Step 1: Prerequisites - Have an Azure subscription-- Create RBAC roles contributor and user access administrator or owner. This feature is automatically done in the Quickstart template method with the Deploy to Azure button. It isn't included in the manual or PowerShell/CLI methods and needs to be implemented individually.+- Create RBAC roles contributor and user access administrator or owner. This feature is automatically done in the ARM template method with the Deploy to Azure button. It isn't included in the manual or PowerShell/CLI methods and needs to be implemented individually. ### Step 2: Provision -The QuickStart template method with the Deploy to Azure button automatically provides all these steps, but they aren't included in the manual or the PowerShell/CLI method and must be completed individually. +The ARM template method with the Deploy to Azure button automatically provides all these steps, but they aren't included in the manual or the PowerShell/CLI method and must be completed individually. - Create a resource group and workspace for Event Hubs, FHIR, and MedTech services. - Provision an Event Hubs instance to a namespace. Each method must add **all** these post-deployment tasks: - Connect to services using device and destination mapping. - Use managed identity to grant access to the device message event hub. 
- Use managed identity to grant access to the FHIR service, enabling FHIR to receive data from the MedTech service.-- Note: only the ARM Quickstart method requires a shared access key for post-deployment.+- Note: only the ARM template method requires a shared access key for post-deployment. ### Granting access to the device message event hub |
industrial-iot | Industrial Iot Platform Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/industrial-iot-platform-versions.md | Title: Azure Industrial IoT platform versions description: This article provides an overview of the existing versions of the Industrial IoT platform and their support.--++ Last updated 11/10/2021 # Azure Industrial IoT Platform Release 2.8.2 -We are pleased to announce the release of version 2.8.2 of our Industrial IoT Platform components as a second patch update of the 2.8 Long-Term Support (LTS) release. This release contains important backward compatibility fixes including Direct Methods API support with version 2.5.x, performance optimizations as well as security updates and bugfixes. +We're pleased to announce the release of version 2.8.2 of our Industrial IoT Platform components as a second patch update of the 2.8 Long-Term Support (LTS) release. This release contains important backward compatibility fixes including Direct Methods API support with version 2.5.x, performance optimizations, security updates, and bug fixes. ## Azure Industrial IoT Platform Release 2.8.1-We are pleased to announce the release of version 2.8.1 of our Industrial IoT Platform components. This is the first patch update of the 2.8 Long-Term Support (LTS) release. It contains important security updates, bug fixes, and performance optimizations. +We're pleased to announce the release of version 2.8.1 of our Industrial IoT Platform components. This is the first patch update of the 2.8 Long-Term Support (LTS) release. It contains important security updates, bug fixes, and performance optimizations. ## Azure Industrial IoT Platform Release 2.8 -We are pleased to announce the declaration of Long-Term Support (LTS) for version 2.8. While we continue to develop and release updates to our ongoing projects on GitHub, we now also offer a branch that will only get critical bug fixes and security updates starting in July 2021. 
Customers can rely upon a longer-term support lifecycle for these LTS builds, providing stability and assurance for the planning on longer time horizons our customers require. The LTS branch offers customers a guarantee that they will benefit from any necessary security or critical bug fixes with minimal impact to their deployments and module interactions. At the same time, customers can access the latest updates in the [main branch](https://github.com/Azure/Industrial-IoT) to keep pace with the latest developments and fastest cycle time for product updates. +We're pleased to announce the declaration of Long-Term Support (LTS) for version 2.8. While we continue to develop and release updates to our ongoing projects on GitHub, we now also offer a branch that will only get critical bug fixes and security updates starting in July 2021. Customers can rely upon a longer-term support lifecycle for these LTS builds, providing stability and assurance for the planning on longer time horizons our customers require. The LTS branch offers customers a guarantee that they'll benefit from any necessary security or critical bug fixes with minimal impact to their deployments and module interactions. At the same time, customers can access the latest updates in the [main branch](https://github.com/Azure/Industrial-IoT) to keep pace with the latest developments and fastest cycle time for product updates. 
## Version history |Version |Type |Date |Highlights | |-|--|-||-|2.5.4 |Stable |March 2020 |IoT Hub Direct Method Interface, control from cloud without any additional microservices (standalone mode), OPC UA Server Interface, uses OPC Foundation's OPC stack - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.5.4)| -|[2.7.206](https://github.com/Azure/Industrial-IoT/tree/release/2.7.206) |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format as well as PubSub - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)| +|2.5.4 |Stable |March 2020 |IoT Hub Direct Method Interface, control from cloud without any microservices (standalone mode), OPC UA Server interface, uses OPC Foundation's OPC stack - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.5.4)| +|[2.7.206](https://github.com/Azure/Industrial-IoT/tree/release/2.7.206) |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format and PubSub format - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)| |[2.8](https://github.com/Azure/Industrial-IoT/tree/2.8.0) |Long-term support (LTS)|July 2021 |IoT Edge update to 1.1 LTS, OPC stack logging and tracing for better OPC Publisher diagnostics, Security fixes - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.8.0)| |[2.8.1](https://github.com/Azure/Industrial-IoT/tree/2.8.1) |Patch release for LTS 2.8|November 2021 |Critical bug fixes, security updates, performance optimizations for LTS v2.8| |[2.8.2](https://github.com/Azure/Industrial-IoT/tree/2.8.2) |Patch release for LTS 2.8|March 2022 |Backwards compatibility with 2.5.x, bug fixes, security updates, performance optimizations for LTS v2.8| |
industrial-iot | Overview What Is Industrial Iot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/overview-what-is-industrial-iot.md | Title: Azure Industrial IoT Overview description: This article provides an overview of Industrial IoT. It explains the shop floor connectivity and security components in IIoT.--++ Last updated 3/22/2021 |
industrial-iot | Overview What Is Opc Publisher | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/overview-what-is-opc-publisher.md | Title: Microsoft OPC Publisher description: This article provides an overview of the OPC Publisher Edge module.--++ Last updated 3/22/2021 Last updated 3/22/2021 # What is the OPC Publisher? -OPC Publisher is a fully-supported Microsoft product that bridges the gap between industrial assets and the Microsoft Azure cloud. It does so by connecting OPC UA-enabled assets or industrial connectivity software to your Microsoft Azure cloud. It publishes the telemetry data it gathers to Azure IoT Hub in various formats, including IEC62541 OPC UA PubSub standard format (from version 2.6 onwards). OPC Publisher runs on Azure IoT Edge as a Module or on plain Docker as a container. Because it leverages the .NET cross-platform runtime, it runs natively on both Linux and Windows 10. +OPC Publisher is a fully supported Microsoft product that bridges the gap between industrial assets and the Microsoft Azure cloud. It does so by connecting OPC UA-enabled assets or industrial connectivity software to your Microsoft Azure cloud. It publishes the telemetry data it gathers to Azure IoT Hub in various formats, including IEC62541 OPC UA PubSub standard format (from version 2.6 onwards). OPC Publisher runs on Azure IoT Edge as a Module or on plain Docker as a container. Because it uses the .NET cross-platform runtime, it runs natively on both Linux and Windows 10. OPC Publisher is a reference implementation that demonstrates how to: OPC Publisher supports batching of the data sent to IoT Hub to reduce network lo This application uses the OPC Foundation OPC UA reference stack as NuGet packages. See [https://opcfoundation.org/license/redistributables/1.3/](https://opcfoundation.org/license/redistributables/1.3/) for the licensing terms. 
## Next steps-Now that you have learned what the OPC Publisher is, you can get started by deploying it: +Now that you've learned what the OPC Publisher is, you can get started by deploying it: > [!div class="nextstepaction"] > [Deploy OPC Publisher in standalone mode](tutorial-publisher-deploy-opc-publisher-standalone.md) |
industrial-iot | Reference Command Line Arguments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/reference-command-line-arguments.md | Title: Microsoft OPC Publisher command-line arguments description: This article provides an overview of the OPC Publisher Command-line Arguments.--++ Last updated 3/22/2021 |
industrial-iot | Reference Opc Publisher Telemetry Format | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/reference-opc-publisher-telemetry-format.md | Title: Microsoft OPC Publisher Telemetry Format description: This article provides an overview of the configuration settings file--++ Last updated 3/22/2021 |
industrial-iot | Tutorial Configure Industrial Iot Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-configure-industrial-iot-components.md | Title: Configure Azure Industrial IoT components description: In this tutorial, you learn how to change the default values of the Azure Industrial IoT configuration.--++ Last updated 3/22/2021 |
industrial-iot | Tutorial Deploy Industrial Iot Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md | Title: Deploy the Azure Industrial IoT Platform description: In this tutorial, you learn how to deploy the IIoT Platform.--++ |
industrial-iot | Tutorial Industrial Iot Azure Data Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-industrial-iot-azure-data-explorer.md | Title: Pull Azure Industrial IoT data into ADX description: In this tutorial, you learn how to pull IIoT data into ADX.--++ Last updated 3/22/2021 Last updated 3/22/2021 # Tutorial: Pull Azure Industrial IoT data into ADX -The Azure Industrial IoT (IIoT) Platform combines edge modules and cloud microservices with a number of Azure PaaS services to provide capabilities for industrial asset discovery and to collect data from these assets using OPC UA. [Azure Data Explorer (ADX)](/azure/data-explorer) is a natural destination for IIoT data with data analytics features that enables running flexible queries on the ingested data from the OPC UA servers connected to the IoT Hub through the OPC Publisher. Although an ADX cluster can ingest data directly from the IoT Hub, the IIoT platform does further processing of the data to make it more useful before putting it on the Event Hub provided when a full deployment of the microservices is used (refer to the IIoT platform architecture). +The Azure Industrial IoT (IIoT) Platform combines edge modules and cloud microservices with many Azure PaaS services to provide capabilities for industrial asset discovery and to collect data from these assets using OPC UA. [Azure Data Explorer (ADX)](/azure/data-explorer) is a natural destination for IIoT data with data analytics features that enable running flexible queries on the ingested data from the OPC UA servers connected to the IoT Hub through the OPC Publisher. Although an ADX cluster can ingest data directly from the IoT Hub, the IIoT platform does further processing of the data to make it more useful before putting it on the Event Hubs instance provided when a full deployment of the microservices is used (refer to the IIoT platform architecture). 
In this tutorial, you learn how to: In this tutorial, you learn how to: ## How to make the data available in the ADX cluster to query it effectively -If we look at the message format from the Event Hub (as defined by the class Microsoft.Azure.IIoT.OpcUa.Subscriber.Models.MonitoredItemMessageModel), we can see a hint to the structure that we need for the ADX table schema. +If we look at the message format from the Event Hubs instance (as defined by the class Microsoft.Azure.IIoT.OpcUa.Subscriber.Models.MonitoredItemMessageModel), we can see a hint to the structure that we need for the ADX table schema. ![Structure](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-1.png) We also need to add a JSON ingestion mapping to instruct the cluster to put the .create table ['iiot_stage'] ingestion json mapping 'iiot_stage_mapping' '[{"column":"payload","path":"$","datatype":"dynamic"}]' ``` -5. Our table is now ready to receive data from the Event Hub. -6. Use the instructions [here](/azure/data-explorer/ingest-data-event-hub#connect-to-the-event-hub) to connect the Event Hub to the ADX cluster and start ingesting the data into our staging table. We only need to create the connection as we already have an Event Hub provisioned by the IIoT platform. +5. Our table is now ready to receive data from the Event Hubs instance. +6. Use the instructions [here](/azure/data-explorer/ingest-data-event-hub#connect-to-the-event-hub) to connect the Event Hubs instance to the ADX cluster and start ingesting the data into our staging table. We only need to create the connection as we already have an Event Hubs instance provisioned by the IIoT platform. 7. Once the connection is verified, data will start flowing to our table and after a short delay we can start examining the data in our table. Use the following query in the ADX web interface to look at a data sample of 10 rows. We can see here how the data in the payload resembles the MonitoredItemMessageModel class mentioned earlier. 
![Query](media/tutorial-iiot-data-adx/industrial-iot-in-azure-data-explorer-pic-2.png) Since our 'payload' column contains a dynamic data type, we need to carry ou As we mentioned earlier, ingesting the OPC UA data into a staging table with one 'Dynamic' column gives us flexibility. However, having to run data type conversions at query time can result in delays in executing the queries particularly if the data volume is large and if there are many concurrent queries. At this stage, we can create another table with the data types already determined, so that we avoid the query-time data type conversions. -9. Create a new table for the parsed data that consists of a limited selection from the content of the dynamic 'payload' in the staging table. Note that we've created a value column for each of the expected data types expected in our telemetry. +9. Create a new table for the parsed data that consists of a limited selection from the content of the dynamic 'payload' in the staging table. We've created a value column for each of the data types expected in our telemetry. ``` .create table ['iiot_parsed'] We can see that the query that uses the parsed table is roughly twice as fast as > The Update Policy only works on the data that is ingested into the staging table after the policy was set up and doesn't apply to any pre-existing data. This needs to be taken into consideration when, for example, we need to change the update policy. Full details can be found in the ADX documentation. ## Next steps-Now that you have learned how to change the default values of the configuration, you can +Now that you've learned how to change the default values of the configuration, you can > [!div class="nextstepaction"] > [Configure Industrial IoT components](tutorial-configure-industrial-iot-components.md) |
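The update policy that moves rows from the staging table into the parsed table can be sketched in Kusto as follows. This is an illustrative example, not the article's exact code: the function body and the column names (`NodeId`, `Timestamp`, `NumValue`) are assumptions based on the `iiot_stage` and `iiot_parsed` tables described above.

```kusto
// Hypothetical parsing function: project the dynamic payload into typed columns.
.create-or-alter function ParseIiotStage() {
    iiot_stage
    | extend NodeId    = tostring(payload.NodeId),
             Timestamp = todatetime(payload.Timestamp),
             NumValue  = todouble(payload.Value)
    | project NodeId, Timestamp, NumValue
}

// Attach the function as an update policy so iiot_parsed fills automatically
// as new rows are ingested into iiot_stage.
.alter table iiot_parsed policy update
@'[{"IsEnabled": true, "Source": "iiot_stage", "Query": "ParseIiotStage()", "IsTransactional": false}]'
```

As the article notes, the policy only applies to data ingested after it is set up; pre-existing rows in the staging table are not back-filled.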
industrial-iot | Tutorial Publisher Configure Opc Publisher | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md | Title: Configure the Microsoft OPC Publisher description: In this tutorial, you learn how to configure the OPC Publisher in standalone mode.--++ Last updated 3/22/2021 |
industrial-iot | Tutorial Publisher Deploy Opc Publisher Standalone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md | Title: Deploy the Microsoft OPC Publisher -description: In this tutorial you learn how to deploy the OPC Publisher in standalone mode. --+description: In this tutorial, you learn how to deploy the OPC Publisher in standalone mode. ++ Last updated 3/22/2021 A connection to an OPC UA server using its hostname without a DNS server configu ``` ## Next steps -Now that you have deployed the OPC Publisher IoT Edge module, the next step is to configure it: +Now that you've deployed the OPC Publisher IoT Edge module, the next step is to configure it: > [!div class="nextstepaction"] > [Configure the OPC Publisher](tutorial-publisher-configure-opc-publisher.md) |
industrial-iot | Tutorial Visualize Data Time Series Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-visualize-data-time-series-insights.md | Title: Visualize OPC UA data in Azure Time Series Insights description: In this tutorial, you learn how to visualize data with Time Series Insights.--++ Last updated 3/22/2021 In this tutorial, you learn how to: ## Time Series Insights explorer -The Time Series Insights explorer is a web app you can use to visualize your telemetry. To retrieve the url of the application open the `.env` file saved as a result of the deployment. Open a browser to the Url in the `PCS_TSI_URL` variable. +The Time Series Insights explorer is a web app you can use to visualize your telemetry. To retrieve the url of the application, open the `.env` file saved as a result of the deployment. Open a browser to the Url in the `PCS_TSI_URL` variable. -Before using the Time Series Insights explorer, you must grant access to the TSI data to the users entitled to visualize the data. Note that on a fresh deployment no data access policies are set by default, therefore nobody can see the data. The data access policies need to be set in the Azure portal, in the Time Series Insights Environment deployed in the IIoT's platform deployed resource group, as follows: +Before using the Time Series Insights explorer, you must grant access to the TSI data to the users entitled to visualize the data. On a new deployment no data access policies are set by default, therefore nobody can see the data. 
The data access policies need to be set in the Azure portal, in the Time Series Insights Environment deployed in the IIoT's platform deployed resource group, as follows: ![Time Series Insights Explorer 1](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-data-access-1.png) Assign the required users: ![Time Series Insights Explorer 3](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-data-access-3.png) -In the TSI Explorer, please note the Unassigned Time Series Instances. A TSI Instance corresponds to the time/value series for a specific data-point originated from a published node in an OPC server. The TSI Instance, respectively the OPC UA Data point, is uniquely identified by the EndpointId, SubscriptionId, and NodeId. The TSI instances models are automatically detected and display in the explorer based on the telemetry data ingested from the IIoT platform telemetry processor's event hub. +In the TSI Explorer, note the Unassigned Time Series Instances. A TSI Instance corresponds to the time/value series for a specific data-point originated from a published node in an OPC server. The TSI Instance (that is, the OPC UA data point) is uniquely identified by the EndpointId, SubscriptionId, and NodeId. The TSI instance models are automatically detected and displayed in the explorer based on the telemetry data ingested from the IIoT platform telemetry processor's event hub.
![Time Series Insights Explorer 4](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-0.png) For more information, see [Quickstart: Explore the Azure Time Series Insights Pr Since the telemetry instances are now just in raw format, they need to be contextualized with the appropriate -For detailed information on TSI models see [Time Series Model in Azure Time Series Insights Preview](../time-series-insights/concepts-model-overview.md) +For detailed information on TSI models, see [Time Series Model in Azure Time Series Insights Preview](../time-series-insights/concepts-model-overview.md) 1. Step 1 - In the model tab of the Explorer, define a new hierarchy for the telemetry data ingested. A hierarchy is the logical tree structure meant to enable the user to insert the meta-information required for a more intuitive navigation through the TSI instances. A user can create, delete, or modify hierarchy templates that can later be instantiated for the various TSI instances. For detailed information on TSI models see [Time Series Model in Azure Time Seri ![Step 3](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-3.png) -4. Step 4 - fill in the instances properties - name, description, data value, as well as the hierarchy fields in order to match the logical structure +4. Step 4 - fill in the instances properties - name, description, data value, and the hierarchy fields in order to match the logical structure ![Step 4](media/tutorial-iiot-visualize-data-tsi/tutorial-time-series-insights-step-4.png) You can also connect the Time Series Insights environment to Power BI. For more ## Next steps-Now that you have learned how to visualize data in TSI, you can check out the Industrial IoT GitHub repository: +Now that you've learned how to visualize data in TSI, you can check out the Industrial IoT GitHub repository: > [!div class="nextstepaction"] > [IIoT Platform GitHub repository](https://github.com/Azure/iot-edge-opc-publisher) |
iot-central | Howto Integrate With Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md | To get started, fork the IoT Central CI/CD GitHub repository and then clone your 1. Clone your fork of the repository to your local machine by opening a console or bash window and running the following command. - ```cmd\bash + ```cmd/sh git clone https://github.com/{your GitHub username}/iot-central-CICD-sample ``` Now that you have a configuration file that represents the settings for your dev 1. To upload the *Configuration* folder to your GitHub repository, run the following commands from the *IoTC-CICD-howto* folder. - ```cmd/bash + ```cmd/sh git add Config git commit -m "Adding config directories and files" git push |
iot-central | Howto Migrate To Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-migrate-to-iot-hub.md | If your devices use X.509 certificates to authenticate to your IoT Central appli Download or clone a copy of the migrator tool to your local machine: -```cmd/bash +```cmd/sh git clone https://github.com/Azure/iotc-migrator.git ``` Open the *config.ts* file in a text editor. Update the `AADClientID` and `AADDIr In your command-line environment, navigate to the root of the `iotc-migrator` repository. Then run the following commands to install the required node.js packages and then run the tool: -```cmd/bash +```cmd/sh npm install npm start ``` |
iot-develop | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-architecture.md | Title: IoT Plug and Play architecture | Microsoft Docs -description: As a solution builder, understand key architectural elements of IoT Plug and Play. +description: Understand the key architectural elements of an IoT Plug and Play solution. Previously updated : 09/15/2020 Last updated : 11/17/2022 Every model and interface has a unique ID. The following diagram shows the key elements of an IoT Plug and Play solution: ## Model repository The [model repository](./concepts-model-repository.md) is a store for model and The web UI lets you manage the models and interfaces. -The model repository has built-in role-based access controls that let you limit access to interface definitions. +The model repository has built-in role-based access controls that let you manage access to interface definitions. ## Devices The device SDKs help a module builder to: An IoT hub: - Makes the model ID implemented by a device available to a backend solution.-- Maintains the digital twin associated with each Plug and Play device connected to the hub.+- Maintains the digital twin associated with each IoT Plug and Play device connected to the hub. - Forwards telemetry streams to other services for processing or storage. - Routes digital twin change events to other services to enable device monitoring. ## Backend solution -A backend solution monitors and controls connected devices by interacting with digital twins in the IoT hub. Use one of the service SDKs to implement your backend solution. To understand the capabilities of a connected device, the solution backend: +A backend solution monitors and controls connected devices by interacting with digital twins in the IoT hub. Use one of the Azure IoT service SDKs to implement your backend solution. To understand the capabilities of a connected device, the solution backend: 1. 
Retrieves the model ID the device registered with the IoT hub. 1. Uses the model ID to retrieve the interface definitions from any model repository. |
iot-develop | Concepts Convention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md | Title: IoT Plug and Play conventions | Microsoft Docs description: Description of the conventions IoT Plug and Play expects devices to use when they send telemetry and properties, and handle commands and property updates. Previously updated : 05/11/2022 Last updated : 11/17/2022 -IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub, AMQP is supported by IoT Hub and available in some device SDKs. +IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub; AMQP is also supported by IoT Hub and available in some device SDKs.
A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with additional telemetry, properties, and commands.+- **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level elements in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. +- **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with more telemetry, properties, and commands. For more information, see [IoT Plug and Play modeling guide](concepts-modeling-guide.md). To identify the model that a device or module implements, a service can get the ## Telemetry - Telemetry sent from a no component device doesn't require any extra metadata. The system adds the `dt-dataschema` property.-- Telemetry sent from a device using components must add the component name to the telemetry message. -- When using MQTT add the `$.sub` property with the component name to the telemetry topic, the system adds the `dt-subject` property. +- Telemetry sent from a device using components must add the component name to the telemetry message. +- When using MQTT add the `$.sub` property with the component name to the telemetry topic, the system adds the `dt-subject` property. - When using AMQP add the `dt-subject` property with the component name as a message annotation. -> [!Note] +> [!NOTE] > Telemetry from components requires one message per component. ## Read-only properties +A read-only property is set by the device and reported to the back-end application. + ### Sample no component read-only property -A device or module can send any valid JSON that follows the DTDL v2 rules. +A device or module can send any valid JSON that follows the DTDL V2 rules. 
-DTDL: +DTDL that defines a property on an interface: ```json { Sample reported property payload: ## Writable properties +A writable property can be set by the back-end application and sent to the device. + The device or module should confirm that it received the property by sending a reported property. The reported property should include: - `value` - the actual value of the property (typically the received value, but the device may decide to report a different value). The device or module should confirm that it received the property by sending a r ### Acknowledgment responses -When reporting writable properties the device should compose the acknowledgment message, using the four fields described above, to indicate the actual device state, as described in this table -+When reporting writable properties the device should compose the acknowledgment message, using the four fields described above, to indicate the actual device state, as described in the following table: |Status(ac)|Version(av)|Value(value)|Description(ad)| |:|:|:|:| When reporting writable properties the device should compose the acknowledgment When a device starts up, it should request the device twin, and check for any writable property updates. If the version of a writable property increased while the device was offline, the device should send a reported property response to confirm that it received the update. -When a device starts up for the first time, it can send an initial value for a reported property if it doesn't receive an initial desired property from the hub. In this case, the device can send the default value with `av` to `0` and `ac` to `203`. For example: +When a device starts up for the first time, it can send an initial value for a reported property if it doesn't receive an initial desired property from the IoT hub. In this case, the device can send the default value with `av` to `0` and `ac` to `203`.
For example: ```json "reported": { DTDL: } ``` -To update this writable property, send a complete object from the service that looks like the following: +To update this writable property, send a complete object from the service that looks like the following example: ```json { To update this writable property, send a complete object from the service that l } ``` -The device responds with an acknowledgment that looks like the following: +The device responds with an acknowledgment that looks like the following example: ```json { The device responds with an acknowledgment that looks like the following: When a device receives multiple desired properties in a single payload, it can send the reported property responses across multiple payloads or combine the responses into a single payload. -A device or module can send any valid JSON that follows the DTDL v2 rules: +A device or module can send any valid JSON that follows the DTDL V2 rules. DTDL: Sample reported property second payload: The device or module must add the `{"__t": "c"}` marker to indicate that the element refers to a component. -The marker is sent only for updates to properties defined in a component. Updates to properties defined in the default component don't include the marker, see [Sample no component writable property](#sample-no-component-writable-property) +The marker is sent only for updates to properties defined in a component. Updates to properties defined in the default component don't include the marker, see [Sample no component writable property](#sample-no-component-writable-property). When a device receives multiple reported properties in a single payload, it can send the reported property responses across multiple payloads or combine the responses into a single payload. 
On a device or module, multiple component interfaces use command names with the ## Next steps -Now that you've learned about IoT Plug and Play conventions, here are some additional resources: +Now that you've learned about IoT Plug and Play conventions, here are some other resources: - [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) |
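The acknowledgment fields the conventions article describes (`ac` status code, `av` acknowledged version, optional `ad` description, and `value`) together with the component `{"__t": "c"}` marker can be sketched as a small helper. The component name `thermostat1` and property name `targetTemperature` below are hypothetical examples, not names from the article's diff:

```python
def make_ack(value, status_code, version, description=None):
    """Build a writable-property acknowledgment object per the conventions:
    ac = status code, av = acknowledged version, ad = optional description."""
    ack = {"value": value, "ac": status_code, "av": version}
    if description is not None:
        ack["ad"] = description
    return ack

# Properties defined in a component carry the {"__t": "c"} marker;
# properties in the default component omit it.
reported = {
    "thermostat1": {
        "__t": "c",
        "targetTemperature": make_ack(21.5, 200, 3, "complete"),
    }
}
```

A device would send `reported` as the reported-property patch after applying the desired value; omitting `description` drops the optional `ad` field entirely.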
iot-develop | Concepts Developer Guide Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md | Title: Device developer guide (C) - IoT Plug and Play | Microsoft Docs -description: Description of IoT Plug and Play for C device developers + Title: Device developer guide - IoT Plug and Play | Microsoft Docs +description: "Description of IoT Plug and Play for device developers. Includes examples in the following languages: C, C#, Java, JavaScript, Python, and Embedded C." Previously updated : 11/19/2020 Last updated : 11/17/2022 zone_pivot_groups: programming-languages-set-twenty-seven IoT Plug and Play lets you build IoT devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications. -A IoT device might be implemented directly, use [modules](../iot-hub/iot-hub-devguide-module-twins.md), or use [IoT Edge modules](../iot-edge/about-iot-edge.md). +You can implement an IoT device directly by using [modules](../iot-hub/iot-hub-devguide-module-twins.md), or by using [IoT Edge modules](../iot-edge/about-iot-edge.md). This guide describes the basic steps required to create a device, module, or IoT Edge module that follows the [IoT Plug and Play conventions](../iot-develop/concepts-convention.md). To build an IoT Plug and Play device, module, or IoT Edge module, follow these s 1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub. 1. Create a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md). 1. Update your device or module to announce the `model-id` as part of the device connection.-1. 
Implement telemetry, properties, and commands using the [IoT Plug and Play conventions](concepts-convention.md) +1. Implement telemetry, properties, and commands that follow the [IoT Plug and Play conventions](concepts-convention.md) Once your device or module implementation is ready, use the [Azure IoT explorer](../iot-fundamentals/howto-use-iot-explorer.md) to validate that the device follows the IoT Plug and Play conventions. Once your device or module implementation is ready, use the [Azure IoT explorer] ## Next steps -Now that you've learned about IoT Plug and Play device development, here are some additional resources: +Now that you've learned about IoT Plug and Play device development, here are some other resources: - [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) |
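The "announce the `model-id` as part of the device connection" step above happens at the protocol level: over MQTT, the model ID is passed as a `model-id` query parameter in the CONNECT username. The sketch below is illustrative only (hub name, device ID, model ID, and API version are assumptions); in practice the Azure IoT device SDKs compose this for you.

```python
from urllib.parse import quote

def iothub_mqtt_username(hostname, device_id, model_id, api_version="2021-04-12"):
    """Compose an IoT Hub MQTT CONNECT username that announces a DTDL model ID.
    The model ID must be URL-encoded because DTMIs contain ':' and ';'."""
    return (
        f"{hostname}/{device_id}/"
        f"?api-version={api_version}&model-id={quote(model_id, safe='')}"
    )

username = iothub_mqtt_username(
    "myhub.azure-devices.net", "dev01", "dtmi:com:example:Thermostat;1"
)
```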
iot-develop | Concepts Developer Guide Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md | Title: Service developer guide - IoT Plug and Play | Microsoft Docs description: Description of IoT Plug and Play for service developers Previously updated : 10/01/2020 Last updated : 11/17/2022 IoT Plug and Play lets you build IoT devices that advertise their capabilities t IoT Plug and Play lets you use devices that have announced their model ID with your IoT hub. For example, you can access the properties and commands of a device directly. -To use an IoT Plug and Play device that's connected to your IoT hub, one of the IoT service SDKs: +To use an IoT Plug and Play device that's connected to your IoT hub, use one of the Azure IoT service SDKs: ## Service SDKs -Use the Azure IoT Service SDKs in your solution to interact with devices and modules. For example, you can use the service SDKs to read and update twin properties and invoke commands. Supported languages include C#, Java, Node.js, and Python. +Use the Azure IoT service SDKs in your solution to interact with devices and modules. For example, you can use the service SDKs to read and update twin properties and invoke commands. Supported languages include C#, Java, Node.js, and Python. -The service SDKs let you access device information from a solution, such as a desktop or web application. The service SDKs include two namespaces and object models that you can use to retrieve the model ID: +The service SDKs let you access device information from a solution component such as a desktop or web application. The service SDKs include two namespaces and object models that you can use to retrieve the model ID: - IoT Hub service client. This service exposes the model ID as a device twin property.
The service SDKs let you access device information from a solution, such as a de ## Next steps -Now that you've learned about device modeling, here are some additional resources: +Now that you've learned about device modeling, here are some more resources: - [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) |
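The two ways of retrieving the model ID that the service guide mentions can be sketched as plain dictionary lookups over the returned twin documents: the IoT Hub service client surfaces it as the device twin's `modelId` property, while the digital twins client surfaces it as `$metadata.$model`. The twin payloads here are hypothetical examples, not SDK calls:

```python
def model_id_from_device_twin(twin: dict):
    # The IoT Hub service client exposes the model ID as a device twin property.
    return twin.get("modelId")

def model_id_from_digital_twin(digital_twin: dict):
    # The digital twins client exposes it in the $metadata.$model field.
    return digital_twin.get("$metadata", {}).get("$model")

device_twin = {"deviceId": "dev01", "modelId": "dtmi:com:example:Thermostat;1"}
digital_twin = {"$dtId": "dev01", "$metadata": {"$model": "dtmi:com:example:Thermostat;1"}}
```

Both helpers return `None` for a device that never announced a model ID, which is how a backend can distinguish IoT Plug and Play devices from plain IoT devices.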
iot-develop | Concepts Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md | Title: Understand IoT Plug and Play digital twins description: Understand how IoT Plug and Play uses digital twins-- Previously updated : 12/14/2020++ Last updated : 11/17/2022 The following table describes the fields in the digital twin JSON object: | Field name | Description | | | |-| `$dtId` | A user-provided string representing the ID of the device digital twin | -| `{propertyName}` | The value of a property in JSON | -| `$metadata.$model` | [Optional] The ID of the model interface that characterizes this digital twin | -| `$metadata.{propertyName}.desiredValue` | [Only for writable properties] The desired value of the specified property | -| `$metadata.{propertyName}.desiredVersion` | [Only for writable properties] The version of the desired value maintained by IoT Hub| -| `$metadata.{propertyName}.ackVersion` | [Required, only for writable properties] The version acknowledged by the device implementing the digital twin, it must by greater or equal to desired version | -| `$metadata.{propertyName}.ackCode` | [Required, only for writable properties] The `ack` code returned by the device app implementing the digital twin | -| `$metadata.{propertyName}.ackDescription` | [Optional, only for writable properties] The `ack` description returned by the device app implementing the digital twin | -| `$metadata.{propertyName}.lastUpdateTime` | IoT Hub maintains the timestamp of the last update of the property by the device. The timestamps are in UTC and encoded in the ISO8601 format YYYY-MM-DDTHH:MM:SS.mmmZ | +| `$dtId` | A user-provided string representing the ID of the device digital twin. | +| `{propertyName}` | The value of a property in JSON. | +| `$metadata.$model` | [Optional] The ID of the model interface that characterizes this digital twin. 
| +| `$metadata.{propertyName}.desiredValue` | [Only for writable properties] The desired value of the specified property. | +| `$metadata.{propertyName}.desiredVersion` | [Only for writable properties] The version of the desired value maintained by IoT Hub.| +| `$metadata.{propertyName}.ackVersion` | [Required, only for writable properties] The version acknowledged by the device implementing the digital twin, it must be greater than or equal to the desired version. | +| `$metadata.{propertyName}.ackCode` | [Required, only for writable properties] The `ack` code returned by the device app implementing the digital twin. | +| `$metadata.{propertyName}.ackDescription` | [Optional, only for writable properties] The `ack` description returned by the device app implementing the digital twin. | +| `$metadata.{propertyName}.lastUpdateTime` | IoT Hub maintains the timestamp of the last update of the property by the device. The timestamps are in UTC and encoded in the ISO8601 format YYYY-MM-DDTHH:MM:SS.mmmZ. | | `{componentName}` | A JSON object containing the component's property values and metadata. |-| `{componentName}.{propertyName}` | The value of the component's property in JSON | +| `{componentName}.{propertyName}` | The value of the component's property in JSON. | | `{componentName}.$metadata` | The metadata information for the component. | ### Properties -Properties are data fields that represent the state of an entity (like the properties in many object-oriented programming languages). +Properties are data fields that represent the state of an entity, just like the properties in many object-oriented programming languages.
-#### Read-only Property +#### Read-only property DTDL schema: DTDL schema: ``` In this example, `alwinexlepaho8329` is the current value of the `serialNumber` read-only property reported by the device.+ The following snippets show the side-by-side JSON representation of the `serialNumber` property: :::row::: The following snippets show the side-by-side JSON representation of the `serialN :::column-end::: :::row-end::: -#### Writable Property +#### Writable property The following examples show a writable property in the default component. In this example, `3.0` is the current value of the `fanSpeed` property reported ### Components -Components allow building model interface as an assembly of other interfaces. -For example, the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) interface can be incorporated as components `thermostat1` and `thermostat2` in the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model. +Components let you build a model interface as an assembly of other interfaces. For example, the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) interface can be incorporated as components `thermostat1` and `thermostat2` in the [Temperature Controller model](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model. In a device twin, a component is identified by the `{ "__t": "c"}` marker. In a digital twin, the presence of `$metadata` marks a component. content-encoding:utf-8 ## Next steps -Now that you've learned about digital twins, here are some additional resources: +Now that you've learned about digital twins, here are some more resources: - [How to use IoT Plug and Play digital twin APIs](howto-manage-digital-twin.md) - [Interact with a device from your solution](tutorial-service.md) |
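The writable-property metadata fields in the digital twin (`desiredVersion`, `ackVersion`, and friends) lend themselves to a simple backend check for whether a device has caught up with the latest desired value. The twin document below is a hypothetical example shaped like the field table, not data from a real hub:

```python
def ack_pending(twin: dict, prop: str) -> bool:
    """True if the device's acknowledged version lags the desired version
    for a writable property (ackVersion must catch up to desiredVersion)."""
    meta = twin["$metadata"][prop]
    return meta["ackVersion"] < meta["desiredVersion"]

twin = {
    "$dtId": "dev01",
    "fanSpeed": 3.0,
    "$metadata": {
        "fanSpeed": {
            "desiredValue": 5.0,
            "desiredVersion": 4,
            "ackVersion": 3,
            "ackCode": 202,  # update received but not yet applied
        }
    },
}
```

A monitoring service could poll this condition (or subscribe to digital twin change events) to flag devices that never acknowledge an update.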
iot-develop | Concepts Iot Pnp Bridge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-iot-pnp-bridge.md | Title: IoT Plug and Play bridge | Microsoft Docs description: Understand the IoT Plug and Play bridge and how to use it to connect existing devices attached to a Windows or Linux gateway as IoT Plug and Play devices.-- Previously updated : 1/20/2021++ Last updated : 11/17/2022 -#Customer intent: As a solution or device builder, I want to understand what IoT Plug and Play bridge is and how I can connect existing sensors attached to a Windows or Linux PC as IoT Plug and Play devices. +#Customer intent: As a solution or device builder, I want to understand what the IoT Plug and Play bridge is, and how I can use it to connect existing sensors attached to a Windows or Linux PC as IoT Plug and Play devices. # IoT Plug and Play bridge -The IoT Plug and Play bridge is an open-source application for connecting existing devices attached to Windows or Linux gateway as IoT Plug and Play devices. After installing and configuring the application on your Windows or Linux machine, you can use it to connect attached devices to an IoT hub. You can use the bridge to map IoT Plug and Play interfaces to the telemetry the attached devices are sending, work with device properties, and invoke commands. +The IoT Plug and Play bridge connects existing devices attached to a Windows or Linux gateway to an IoT hub as IoT Plug and Play devices. Use the bridge to map IoT Plug and Play interfaces to the telemetry the attached devices are sending, work with device properties, and invoke commands. -IoT Plug and Play bridge can be deployed as a standalone executable on any IoT device, industrial PC, server, or gateway running Windows 10 or Linux. It can also be compiled into your application code. A simple configuration JSON file tells the IoT Plug and Play bridge which attached devices/peripherals should be exposed up to Azure. 
+IoT Plug and Play bridge is an open-source application. You can deploy the application as a standalone executable on any IoT device, industrial PC, server, or gateway that runs Windows 10 or Linux. It can also be compiled into your application code. The IoT Plug and Play bridge uses a simple configuration JSON file to identify the attached devices/peripherals that should be exposed up to Azure. ## Supported protocols and sensors -IoT Plug and Play bridge supports the following types of peripherals by default, with links to the adapter documentation: +IoT Plug and Play bridge supports the following types of peripherals by default. The table includes links to the adapter documentation: |Peripheral|Windows|Linux| |||| IoT Plug and Play bridge supports the following types of peripherals by default, |[SerialPnP adapter](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/serialpnp/Readme.md) connects devices that communicate over a serial connection. |Yes|Yes| |[Windows USB peripherals](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/coredevicehealth_adapter.md) uses a list of adapter-supported device interface classes to connect devices that have a specific hardware ID. |Yes|Not Applicable| -To learn how to extend the IoT Plug and Play bridge to support additional device protocols, see [Extend the IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/author_adapter.md). To learn how to build and deploy the IoT Plug and Play bridge, see [Build and deploy the IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/build_deploy.md). +To learn how to extend the IoT Plug and Play bridge to support other device protocols, see [Extend the IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/author_adapter.md). 
To learn how to build and deploy the IoT Plug and Play bridge, see [Build and deploy the IoT Plug and Play bridge](https://github.com/Azure/iot-plug-and-play-bridge/blob/master/pnpbridge/docs/build_deploy.md). ## IoT Plug and Play bridge architecture ### IoT Plug and Play bridge adapters The bridge adapter manager uses the manifest to identify and call adapter functi A bridge adapter creates and acquires a digital twin interface handle. The adapter uses this handle to bind the device functionality to the digital twin. -Using information in the configuration file, the bridge adapter uses of the following techniques to enable full device to digital twin communication through the bridge: +The bridge adapter uses information in the configuration file to configure full device to digital twin communication through the bridge: - Establishes a communication channel directly. - Creates a device watcher to wait for a communication channel to become available. Using information in the configuration file, the bridge adapter uses of the foll The IoT Plug and Play bridge uses a JSON-based configuration file that specifies: -- How to connect to an IoT hub or IoT Central application: Options include connection strings, authentication parameters, or Device Provisioning Service (DPS).+- How to connect to an IoT hub or IoT Central application. Options include connection strings, authentication parameters, or the Device Provisioning Service (DPS). - The location of the IoT Plug and Play capability models that the bridge uses. The model defines the capabilities of an IoT Plug and Play device, and is static and immutable. - A list of IoT Plug and Play interface components and the following information for each component:-- The interface ID and component name.-- The bridge adapter required to interact with the component.-- Device information that the bridge adapter needs to establish communication with the device. 
For example hardware ID, or specific information for an adapter, interface, or protocol.-- An optional bridge adapter subtype or interface configuration if the adapter supports multiple communication types with similar devices. The example shows how a bluetooth sensor component could be configured:-- ```json - { - "_comment": "Component BLE sensor", - "pnp_bridge_component_name": "blesensor1", - "pnp_bridge_adapter_id": "bluetooth-sensor-pnp-adapter", - "pnp_bridge_adapter_config": { - "bluetooth_address": "267541100483311", - "blesensor_identity" : "Blesensor1" + - The interface ID and component name. + - The bridge adapter required to interact with the component. + - Device information that the bridge adapter needs to establish communication with the device. For example hardware ID, or specific information for an adapter, interface, or protocol. + - An optional bridge adapter subtype or interface configuration if the adapter supports multiple communication types with similar devices. The example shows how a bluetooth sensor component could be configured: + + ```json + { + "_comment": "Component BLE sensor", + "pnp_bridge_component_name": "blesensor1", + "pnp_bridge_adapter_id": "bluetooth-sensor-pnp-adapter", + "pnp_bridge_adapter_config": { + "bluetooth_address": "267541100483311", + "blesensor_identity" : "Blesensor1" + } }- } - ``` -+ ``` + - An optional list of global bridge adapter parameters. For example, the bluetooth sensor bridge adapter has a dictionary of supported configurations. 
An interface component that requires the bluetooth sensor adapter can pick one of these configurations as its `blesensor_identity`: ```json The IoT Plug and Play bridge uses a JSON-based configuration file that specifies ## Download IoT Plug and Play bridge -You can download a pre-built version of the bridge with supported adapters in [IoT Plug and Play bridge releases](https://github.com/Azure/iot-plug-and-play-bridge/releases) and expand the list of assets for the most recent release. Download the most recent version of the application for your operating system. +You can download a pre-built version of the bridge with supported adapters from [IoT Plug and Play bridge releases](https://github.com/Azure/iot-plug-and-play-bridge/releases) and expand the list of assets for the most recent release. Download the most recent version of the application for your operating system. You can also download and view the source code of [IoT Plug and Play bridge on GitHub](https://github.com/Azure/iot-plug-and-play-bridge). |
iot-develop | Concepts Model Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-discovery.md | Title: Use IoT Plug and Play models in a solution | Microsoft Docs description: As a solution builder, learn about how you can use IoT Plug and Play models in your IoT solution.-- Previously updated : 07/23/2020++ Last updated : 11/17/2022 -There are two broad categories of an IoT solution: +There are two broad categories of IoT solution: -- A *purpose-built solution* works with a known set of models for the IoT Plug and Play devices that will connect to the solution. You use these models when you develop the solution.+- A *purpose-built solution* works with a known set of models for the IoT Plug and Play devices that connect to the solution. You use these models when you develop the solution. -- A *model-driven* solution can work with the model of any IoT Plug and Play device. Building a model-driven solution is more complex, but the benefit is that your solution works with any devices that may be added in the future. A model-driven IoT solution retrieves a model and uses it to determine the telemetry, properties, and commands the device implements.+- A *model-driven solution* works with the model of any IoT Plug and Play device. Building a model-driven solution is more complex, but the benefit is that your solution works with any devices that are added in the future. A model-driven IoT solution retrieves a model and uses it to determine the telemetry, properties, and commands the device implements. To use an IoT Plug and Play model, an IoT solution: Solutions can use the [model repository](concepts-model-repository.md) to retrie After you identify the model ID for a new device connection, follow these steps: -1. Retrieve the model definition using the model ID from the model repository. For more information, see [Device model Repository](concepts-model-repository.md). +1. 
Retrieve the model definition using the model ID from the model repository. For more information, see [Device model repository](concepts-model-repository.md). 1. Using the model definition of the connected device, you can enumerate the capabilities of the device. -1. Using the enumerated capabilities of the device, you can allow users to [interact with the device](tutorial-service.md). +1. Using the enumerated capabilities of the device, you can enable users to [interact with the device](tutorial-service.md). ### Custom store After you identify the model ID for a new device connection, follow these steps: 1. Retrieve the model definition using the model ID from your custom store. -1. Using the model definition of the connected device, you can enumerate the capabilities of the device. +1. Using the model definition of the connected device, you can enumerate the capabilities of the device. -1. Using the enumerated capabilities of the device, you can allow users to [interact with the device](tutorial-service.md). +1. Using the enumerated capabilities of the device, you can enable users to [interact with the device](tutorial-service.md). ## Next steps |
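The "enumerate the capabilities" step described above amounts to walking the interface's `contents` array and grouping entries by `@type`. The following sketch assumes the DTDL V2 interface shape (`contents`, `@type`, `name`); the thermostat-style interface is invented for illustration, and Python is used only as a sketch language:

```python
def enumerate_capabilities(interface: dict) -> dict:
    """Group the entries of a DTDL interface's 'contents' array by capability type."""
    caps = {"Telemetry": [], "Property": [], "Command": []}
    for item in interface.get("contents", []):
        # '@type' is a string, or a list when semantic types are used,
        # for example ["Telemetry", "Temperature"].
        types = item["@type"] if isinstance(item["@type"], list) else [item["@type"]]
        for cap in caps:
            if cap in types:
                caps[cap].append(item["name"])
    return caps

# Hypothetical thermostat-style interface, for illustration only.
thermostat = {
    "@id": "dtmi:com:example:Thermostat;1",
    "@type": "Interface",
    "contents": [
        {"@type": ["Telemetry", "Temperature"], "name": "temperature", "schema": "double"},
        {"@type": "Property", "name": "targetTemperature", "schema": "double", "writable": True},
        {"@type": "Command", "name": "reboot"},
    ],
}

print(enumerate_capabilities(thermostat))
```

A model-driven solution could feed the grouped names into its UI to let users interact with the device's telemetry, properties, and commands.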
iot-develop | Concepts Model Parser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-parser.md | Title: Understand the Digital Twins model parser | Microsoft Docs + Title: Understand the Azure Digital Twins model parser | Microsoft Docs description: As a developer, learn how to use the DTDL parser to validate models. Previously updated : 10/21/2020 Last updated : 11/17/2022 -The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a model defined in multiple files. +The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files. ## Install the DTDL model parser dotnet add package Microsoft.Azure.DigitalTwins.Parser ## Use the parser to validate a model -A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files in a given folder and use the parser to validate all the files as a whole, including any references between the files: +A model can be composed of one or more interfaces described in JSON files. You can use the parser to load all the files in a given folder and then validate all the files as a whole, including any references between the files: 1. Create an `IEnumerable<string>` with a list of all model contents: A model can be composed of one or more interfaces described in JSON files. You c } ``` -1. Inspect the `Model`. If the validation succeeds, you can use the model parser API to inspect the model. The following code snippet shows how to iterate over all the models parsed and displays the existing properties: +1. 
Inspect the `Model`. If the validation succeeds, you can use the model parser API to inspect the model. The following code snippet shows how to iterate over all the models parsed and display the existing properties: ```csharp foreach (var item in parseResult) |
iot-develop | Concepts Model Repository | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-repository.md | Title: Understand concepts of the device models repository | Microsoft Docs -description: As a solution developer or an IT professional, learn about the basic concepts of the device models repository. + Title: Understand the IoT Plug and Play device models repository | Microsoft Docs +description: As a solution developer or an IT professional, learn about the basic concepts of the device models repository for IoT Plug and Play devices. Previously updated : 01/20/2022 Last updated : 11/17/2022 The DMR defines a pattern to store DTDL interfaces in a folder structure based o ## Index, expanded and metadata -The DMR conventions include additional artifacts for simplifying consumption of hosted models. These features are _optional_ for custom or private repositories. +The DMR conventions include other artifacts for simplifying consumption of hosted models. These features are _optional_ for custom or private repositories. - _Index_. All available DTMIs are exposed through an *index* composed by a sequence of json files, for example: [https://devicemodels.azure.com/index.page.2.json](https://devicemodels.azure.com/index.page.2.json) - _Expanded_. A file with all the dependencies is available for each interface, for example: [https://devicemodels.azure.com/dtmi/com/example/temperaturecontroller-1.expanded.json](https://devicemodels.azure.com/dtmi/com/example/temperaturecontroller-1.expanded.json) Microsoft hosts a public DMR with these characteristics: ## Custom device models repository -Use the same DMR pattern to create a custom DMR in any storage medium, such as local file system or custom HTTP web servers. You can retrieve device models from the custom DMR in the same way as from the public DMR by changing the base URL used to access the DMR. 
+Use the same DMR pattern to create a custom DMR in any storage medium, such as local file system or custom HTTP web server. You can retrieve device models from the custom DMR in the same way as from the public DMR by changing the base URL used to access the DMR. > [!NOTE] > Microsoft provides tools to validate device models in the public DMR. You can reuse these tools in custom repositories. ## Public models -The public device models stored in the models repository are available for everyone to consume and integrate in their applications. Public device models enable an open eco-system for device builders and solution developers to share and reuse their IoT Plug and Play device models. +The public device models stored in the DMR are available for everyone to consume and integrate in their applications. Public device models enable an open eco-system for device builders and solution developers to share and reuse their IoT Plug and Play device models. -Refer to the [Publish a model](#publish-a-model) section for instructions on how to publish a model in the models repository to make it public. +See the [Publish a model](#publish-a-model) section to learn how to publish a model in the DMR and make it public. Users can browse, search, and view public interfaces from the official [GitHub repository](https://github.com/Azure/iot-plugandplay-models). 
ModelResult models = client.GetModel("dtmi:com:example:TemperatureController;1") models.Content.Keys.ToList().ForEach(k => Console.WriteLine(k)); ``` -The expected output should display the `DTMI` of the three interfaces found in the dependency chain: +The expected output displays the `DTMI` of the three interfaces found in the dependency chain: ```txt dtmi:com:example:TemperatureController;1 dtmi:com:example:Thermostat;1 dtmi:azure:DeviceManagement:DeviceInformation;1 ``` -The `ModelsRepositoryClient` can be configured to query a custom model repository
-available through http(s)- and specify the dependency resolution using any of the available `ModelDependencyResolution`: +The `ModelsRepositoryClient` can be configured to query a custom DMR -available through http(s)- and to specify the dependency resolution by using the `ModelDependencyResolution` flag: - Disabled. Returns the specified interface only, without any dependency. - Enabled. Returns all the interfaces in the dependency chain. -> [!Tip] -> Custom repositories might not expose the `.expanded.json` file, when not available the client will fallback to process each dependency locally. +> [!TIP] +> Custom repositories might not expose the `.expanded.json` file. When this file isn't available, the client will fall back to processing each dependency locally. -The next sample code shows how to initialize the `ModelsRepositoryClient` by using a custom repository base URL, in this case using the `raw` URLs from the GitHub API without using the `expanded` form -since it's not available in the `raw` endpoint. The `AzureEventSourceListener` is initialized to inspect the HTTP request performed by the client: +The following sample code shows how to initialize the `ModelsRepositoryClient` by using a custom repository base URL, in this case using the `raw` URLs from the GitHub API without using the `expanded` form since it's not available in the `raw` endpoint.
The `AzureEventSourceListener` is initialized to inspect the HTTP request performed by the client: ```cs using AzureEventSourceListener listener = AzureEventSourceListener.CreateConsoleLogger(); ModelResult model = await client.GetModelAsync( model.Content.Keys.ToList().ForEach(k => Console.WriteLine(k)); ``` -There are more samples available within the source code in the Azure SDK GitHub repository: [Azure.Iot.ModelsRepository/samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/modelsrepository/Azure.IoT.ModelsRepository/samples) +There are more samples available in the Azure SDK GitHub repository: [Azure.Iot.ModelsRepository/samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/modelsrepository/Azure.IoT.ModelsRepository/samples). ## Publish a model -> [!Important] +> [!IMPORTANT] > You must have a GitHub account to be able to submit models to the public DMR. 1. Fork the public GitHub repository: [https://github.com/Azure/iot-plugandplay-models](https://github.com/Azure/iot-plugandplay-models). The tools used to validate the models during the PR checks can also be used to a ### Install `dmr-client` -```bash +```cmd/sh dotnet tool install --global Microsoft.IoT.ModelsRepository.CommandLine --version 1.0.0-beta.6 ``` dotnet tool install --global Microsoft.IoT.ModelsRepository.CommandLine --versio If you have your model already stored in json files, you can use the `dmr-client import` command to add them to the `dtmi/` folder with the correct file names: -```bash +```cmd/sh # from the local repo root folder dmr-client import --model-file "MyThermostat.json" ``` dmr-client import --model-file "MyThermostat.json" You can validate your models with the `dmr-client validate` command: -```bash +```cmd/sh dmr-client validate --model-file ./my/model/file.json ``` dmr-client validate --model-file ./my/model/file.json To validate external dependencies, they must exist in the local repository. 
To validate models, use the `--repo` option to specify a `local` or `remote` folder to resolve dependencies: -```bash +```cmd/sh # from the repo root folder dmr-client validate --model-file ./my/model/file.json --repo . ``` ### Strict validation -The DMR includes additional [requirements](https://github.com/Azure/iot-plugandplay-models/blob/main/pr-reqs.md), use the `stict` flag to validate your model against them: +The DMR includes extra [requirements](https://github.com/Azure/iot-plugandplay-models/blob/main/pr-reqs.md). Use the `strict` flag to validate your model against them: -```bash +```cmd/sh dmr-client validate --model-file ./my/model/file.json --repo . --strict true ``` Check the console output for any error messages. Models can be exported from a given repository (local or remote) to a single file using a JSON Array: -```bash +```cmd/sh dmr-client export --dtmi "dtmi:com:example:TemperatureController;1" -o TemperatureController.expanded.json ``` The DMR can include an *index* with a list of all the DTMIs available at the tim To generate the index in a custom or private DMR, use the index command: -```bash +```cmd/sh dmr-client index -r . -o index.json ``` dmr-client index -r . -o index.json Expanded files can be generated using the command: -```bash +```cmd/sh dmr-client expand -r . ``` |
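The DMR file layout that the `dmr-client` commands operate on maps a DTMI to a path deterministically, as the `temperaturecontroller-1.expanded.json` URL earlier in this entry illustrates: lowercase the DTMI, turn `:` into folder separators, and turn `;` into `-`. A small sketch of that mapping (Python, for illustration only):

```python
def dtmi_to_path(dtmi: str, expanded: bool = False) -> str:
    """Map a DTMI to its DMR file path, for example
    'dtmi:com:example:Thermostat;1' -> 'dtmi/com/example/thermostat-1.json'."""
    if not dtmi.lower().startswith("dtmi:") or ";" not in dtmi:
        raise ValueError(f"not a DTMI: {dtmi!r}")
    base, _, version = dtmi.lower().rpartition(";")
    suffix = ".expanded.json" if expanded else ".json"
    return base.replace(":", "/") + "-" + version + suffix

print(dtmi_to_path("dtmi:com:example:TemperatureController;1", expanded=True))
# dtmi/com/example/temperaturecontroller-1.expanded.json
```

Prefixing the result with a repository base URL (public or custom) yields the address a client fetches the interface from.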
iot-develop | Concepts Modeling Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-modeling-guide.md | Title: Understand IoT Plug and Play device models | Microsoft Docs description: Understand the Digital Twins Definition Language (DTDL) modeling language for IoT Plug and Play devices. The article describes primitive and complex datatypes, reuse patterns that use components and inheritance, and semantic types. The article provides guidance on the choice of device twin model identifier and tooling support for model authoring. Previously updated : 03/09/2021 Last updated : 11/17/2022 The thermostat model has a single interface. Later examples in this article show This article describes how to design and author your own models and covers topics such as data types, model structure, and tools. -To learn more, see the [Digital Twins Definition Language v2](https://github.com/Azure/opendigitaltwins-dtdl) specification. +To learn more, see the [Digital Twins Definition Language V2](https://github.com/Azure/opendigitaltwins-dtdl) specification. ## Model structure The following example shows part of a simple model that doesn't use components: ... ``` -Tools such as Azure IoT Explorer and the IoT Central device template designer label a standalone interface like the the thermostat as a _default component_. +Tools such as Azure IoT Explorer and the IoT Central device template designer label a standalone interface like the thermostat as a _default component_. -The following screenshot shows how the model displays in the Azure IoT explorer tool: +The following screenshot shows how the model displays in the Azure IoT Explorer tool: The following screenshot shows how the model displays as the default component in the IoT Central device template designer. 
Select **View identity** to see the DTMI of the model: The model ID is stored in a device twin property as the following screenshot shows: A DTDL model without components is a useful simplification for a device or an IoT Edge module with a single set of telemetry, properties, and commands. A model that doesn't use components makes it easy to migrate an existing device or module to be an IoT Plug and Play device or module - you create a DTDL model that describes your actual device or module without the need to define any components. A DTDL model without components is a useful simplification for a device or an Io ### Reuse -There are two ways to reuse interface definitions. Use multiple components in a model to reference other interface definitions. Use inheritance to extend existing interface definitions. +There are two ways to reuse interface definitions. ++- Use multiple components in a model to reference other interface definitions. +- Use inheritance to extend existing interface definitions. ### Multiple components The following screenshots show how this model appears in IoT Central. The proper To learn how to write device code that interacts with components, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md). -To learn how to write service code that intercats with components on a device, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md). +To learn how to write service code that interacts with components on a device, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md). ### Inheritance Inheritance lets you reuse capabilities in a base interface to extend the capabilities of an interface.
For example, several device models can share common capabilities such as a serial number: The following snippet shows a DTML model that uses the `extends` keyword to define the inheritance relationship shown in the previous diagram: The following snippet shows a DTML model that uses the `extends` keyword to defi The following screenshot shows this model in the IoT Central device template environment: When you write device or service-side code, your code doesn't need to do anything special to handle inherited interfaces. In the example shown in this section, your device code reports the serial number as if it's part of the thermostat interface. You can combine components and inheritance when you create a model. The followin :::image type="content" source="media/concepts-modeling-guide/inheritance-components.png" alt-text="Diagram showing a model that uses both components and inheritance." border="false"::: -The following snippet shows a DTML model that uses the `extends` and `component` keywords to define the inheritance relationship and component usage shown in the previous diagram: +The following snippet shows a DTML model that uses the `extends` and `component` keywords to define the inheritance relationship and component usage shown in the previous diagram: ```json [ The following snippet shows a DTML model that uses the `extends` and `component` ## Data types -Use data types to define telemetry, properties, and command parameters. Data types can be primitive or complex. Complex datatypes use primitives or other complex types. The maximum depth for complex types is five levels. +Use data types to define telemetry, properties, and command parameters. Data types can be primitive or complex. Complex data types use primitives or other complex types. The maximum depth for complex types is five levels. 
### Primitive types The following snippet shows an example telemetry definition that uses the `doubl } ``` -### Complex datatypes +### Complex data types -Complex datatypes are one of *array*, *enumeration*, *map*, *object*, or one of the geospatial types. +Complex data types are one of *array*, *enumeration*, *map*, *object*, or one of the geospatial types. #### Arrays Because the geospatial types are array-based, they can't currently be used in pr ## Semantic types -The datatype of a property or telemetry definition specifies the format of the data that a device exchanges with a service. The semantic type provides information about telemetry and properties that an application can use to determine how to process or display a value. Each semantic type has one or more associated units. For example, celsius and fahrenheit are units for the temperature semantic type. IoT Central dashboards and analytics can use the semantic type information to determine how to plot telemetry or property values and display units. To learn how you can use the model parser to read the semantic types, see [Understand the digital twins model parser](concepts-model-parser.md). +The data type of a property or telemetry definition specifies the format of the data that a device exchanges with a service. The semantic type provides information about telemetry and properties that an application can use to determine how to process or display a value. Each semantic type has one or more associated units. For example, celsius and fahrenheit are units for the temperature semantic type. IoT Central dashboards and analytics can use the semantic type information to determine how to plot telemetry or property values and display units. To learn how you can use the model parser to read the semantic types, see [Understand the digital twins model parser](concepts-model-parser.md). The following snippet shows an example telemetry definition that includes semantic type information. 
The semantic type `Temperature` is added to the `@type` array, and the `unit` value, `degreeCelsius` is one of the valid units for the semantic type: DTML device models are JSON documents that you can create in a text editor. Howe To learn more, see [Define a new IoT device type in your Azure IoT Central application](../iot-central/core/howto-set-up-template.md). -There are DTDL authoring extensions for both VS Code and Visual Studio 2019. +There's a DTDL authoring extension for VS Code. To install the DTDL extension for VS Code, go to [DTDL editor for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl). You can also search for **DTDL** in the **Extensions** view in VS Code. When you've installed the extension, use it to help you author DTDL model files - The extension provides syntax validation in DTDL model files, highlighting errors as shown on the following screenshot: - :::image type="content" source="media/concepts-modeling-guide/model-validation.png" alt-text="Model validation in VS Code"::: + :::image type="content" source="media/concepts-modeling-guide/model-validation.png" alt-text="Screenshot that shows DTDL model validation in VS Code."::: - Use intellisense and autocomplete when you're editing DTDL models: - :::image type="content" source="media/concepts-modeling-guide/model-intellisense.png" alt-text="Use intellisense for DTDL models in VS Code"::: + :::image type="content" source="media/concepts-modeling-guide/model-intellisense.png" alt-text="Screenshot that shows intellisense for DTDL models in VS Code."::: - Create a new DTDL interface. The **DTDL: Create Interface** command creates a JSON file with a new interface. The interface includes example telemetry, property, and command definitions. -To install the DTDL extension for Visual Studio 2019, go to [DTDL Language Support for VS 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16dtdllanguagesupport). 
You can also search for **DTDL** in **Manage Extensions** in Visual Studio. --When you've installed the extension, use it to help you author DTDL model files in Visual Studio: --- The extension provides syntax validation in DTDL model files, highlighting errors as shown on the following screenshot:-- :::image type="content" source="media/concepts-modeling-guide/model-validation-2.png" alt-text="Model validation in Visual Studio"::: --- Use intellisense and autocomplete when you're editing DTDL models:-- :::image type="content" source="media/concepts-modeling-guide/model-intellisense-2.png" alt-text="Use intellisense for DTDL models in Visual Studio"::: - ### Publish To make your DTML models shareable and discoverable, you publish them in a device models repository. IoT Central implements more versioning rules for device models. If you version a The following list summarizes some key constraints and limits on models: -- Currently, the maximum depth for arrays, maps, and objects is five levels of depth.+- Currently, the maximum depth for arrays, maps, and objects is five levels. - You can't use arrays in property definitions. - You can extend interfaces to a depth of 10 levels. - An interface can extend at most two other interfaces. The following list summarizes some key constraints and limits on models: ## Next steps -Now that you've learned about device modeling, here are some additional resources: +Now that you've learned about device modeling, here are some more resources: -- [Digital Twins Definition Language v2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl)+- [Digital Twins Definition Language V2 (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) - [Model repositories](./concepts-model-repository.md) |
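The five-level depth limit for complex types described in this entry can be checked mechanically. The sketch below assumes the DTDL V2 complex-schema keys (`fields` for Object, `elementSchema` for Array, `mapValue` for Map); the accelerometer-style schema is invented for illustration:

```python
def schema_depth(schema) -> int:
    """Nesting depth of a DTDL complex schema: primitives are depth 0,
    and each Object/Array/Map wrapper adds one level."""
    if not isinstance(schema, dict):
        return 0  # primitive schema name such as "double"
    t = schema.get("@type")
    if t == "Object":
        return 1 + max((schema_depth(f["schema"]) for f in schema.get("fields", [])), default=0)
    if t == "Array":
        return 1 + schema_depth(schema.get("elementSchema"))
    if t == "Map":
        return 1 + schema_depth(schema.get("mapValue", {}).get("schema"))
    return 0

# Hypothetical object-of-objects schema: depth 2, within the five-level limit.
accelerometer = {
    "@type": "Object",
    "fields": [
        {"name": "box", "schema": {
            "@type": "Object",
            "fields": [{"name": "x", "schema": "double"}],
        }},
    ],
}

print(schema_depth(accelerometer))  # 2
```

A model-authoring pipeline could reject any telemetry or property whose `schema_depth` exceeds five before publishing the model.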
iot-develop | Howto Convert To Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-convert-to-pnp.md | Title: Convert an existing device to use IoT Plug and Play | Microsoft Docs description: This article describes how to convert your existing device code to work with IoT Plug and Play by creating a device model and then sending the model ID when the device connects. Previously updated : 05/14/2021 Last updated : 11/17/2022 Before you create a model for your device, you need to understand the existing c - The read-only and writable properties the device synchronizes with your service. - The commands invoked from the service that the device responds to. -For example, review the following device code snippets that implement various device capabilities. These examples are based on the sample in the [PnPMQTTWin32-Before](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Windows/PnPMQTTWin32-Before) before: +For example, review the following device code snippets that implement various device capabilities. These examples are based on the sample in [PnPMQTTWin32-Before](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Windows/PnPMQTTWin32-Before). The following snippet shows the device sending temperature telemetry: |
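When converting an existing device, the key change is announcing the model ID at connection time. For a device that connects over MQTT directly, as the PnPMQTTWin32 samples do, the model ID travels in the CONNECT username. The sketch below is an assumption-laden illustration: the username layout and the `api-version` value follow the IoT Hub MQTT convention but should be verified against current documentation before use:

```python
from urllib.parse import quote

def mqtt_username(hub: str, device_id: str, model_id: str) -> str:
    """Build an IoT Hub MQTT CONNECT username that announces a model ID.
    The api-version value here is an assumption for illustration."""
    return f"{hub}/{device_id}/?api-version=2021-04-12&model-id={quote(model_id, safe='')}"

print(mqtt_username("myhub.azure-devices.net", "dev1", "dtmi:com:example:Thermostat;1"))
```

Note that the DTMI must be URL-encoded, since it contains `:` and `;` characters that aren't valid in a query string.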
iot-develop | Howto Manage Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md | Title: How to manage IoT Plug and Play digital twins -description: How to manage IoT Plug and Play device using digital twin APIs -- Previously updated : 12/17/2020+description: How to manage an IoT Plug and Play device by using the digital twin APIs ++ Last updated : 11/17/2022 -At the time of writing, the digital twin API version is `2020-09-30`. - ## Update a digital twin An IoT Plug and Play device implements a model described by [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl). Solution developers can use the **Update Digital Twin API** to update the state of a component and the properties of the digital twin. The following JSON Patch sample shows how to add, replace, or remove a property **Name** -The name of a component or property must be valid DTDL v2 name. +The name of a component or property must be a valid DTDL V2 name. Allowed characters are a-z, A-Z, 0-9 (not as the first character), and underscore (not as the first or last character). A name can be 1-64 characters long. **Property value** -The value must be a valid [DTDL v2 Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). +The value must be a valid [DTDL V2 Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). -All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL v2 Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#schemas). +All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL V2 Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#schemas). Properties don't support array or any complex schema with an array.
A maximum depth of five levels is supported for a complex object. -All field names within complex object should be valid DTDL v2 names. +All field names within a complex object should be valid DTDL V2 names. -All map keys should be valid DTDL v2 names. +All map keys should be valid DTDL V2 names. ## Troubleshoot update digital twin API errors The digital twin API throws the following generic error message: `ErrorCode:ArgumentInvalid;'{propertyName}' exists within the device twin and is not digital twin conformant property. Please refer to aka.ms/dtpatch to update this to be conformant.` -If you see this error, make sure the update patch follows the [rules for setting desired value of a digital twin property](#rules-for-setting-the-desired-value-of-a-digital-twin-property) +If you see this error, make sure the update patch follows the [rules for setting desired value of a digital twin property](#rules-for-setting-the-desired-value-of-a-digital-twin-property). When you update a component, make sure that the [empty object $metadata marker](#add-replace-or-remove-a-component) is set. Updates can fail if a device's reported values don't conform to the [IoT plug an ## Next steps -Now that you've learned about digital twins, here are some additional resources: +Now that you've learned about digital twins, here are some more resources: - [Interact with a device from your solution](tutorial-service.md) - [IoT Digital Twin REST API](/rest/api/iothub/service/digitaltwin) |
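The naming rules above (letters and digits, no leading digit, no leading or trailing underscore, 1-64 characters) can be captured in a single regular expression. A sketch of a validator for those rules — useful for checking patch payloads before sending them:

```python
import re

# DTDL V2 name rules from the article: allowed characters are a-z, A-Z,
# 0-9 (not as the first character), and underscore (not as the first or
# last character); length 1-64.
DTDL_NAME = re.compile(r"^[A-Za-z](?:[A-Za-z0-9_]{0,62}[A-Za-z0-9])?$")

def is_valid_dtdl_name(name: str) -> bool:
    """Return True if name is a valid DTDL V2 component/property name."""
    return bool(DTDL_NAME.fullmatch(name))
```

For example, `maxTempSinceLastReboot` passes, while `_temp`, `temp_`, and `9temp` are all rejected.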
iot-develop | Overview Iot Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md | Title: Introduction to IoT Plug and Play | Microsoft Docs -description: Learn about IoT Plug and Play. IoT Plug and Play is based on an open modeling language that enables smart IoT devices to declare their capabilities. IoT devices present that declaration, called a device model, when they connect to cloud solutions. The cloud solution can then automatically understand the device and start interacting with it, all without writing any code. +description: Learn about IoT Plug and Play. IoT Plug and Play is based on an open modeling language that enables smart IoT devices to declare their capabilities. IoT devices present that declaration, called a device model, when they connect to cloud solutions. The cloud solution can then automatically understand the device and start interacting with it, all without writing any code. Previously updated : 05/10/2022 Last updated : 11/17/2022 - #Customer intent: As a device builder, I need to know what is IoT Plug and Play, so I can understand how it can help me build and market my IoT devices. This article outlines: ## User roles -IoT Plug and Play is useful for two types of developers: +IoT Plug and Play is used by two types of developer: -- A _solution builder_ is responsible for developing an IoT solution using Azure IoT Hub and other Azure resources, and for identifying IoT devices to integrate. To learn more, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md).-- A _device builder_ creates the code that runs on a device connected to your solution. To learn more, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md).+- A _solution builder_ who is responsible for developing an IoT solution using Azure IoT Hub and other Azure resources, and for identifying IoT devices to integrate. 
To learn more, see [IoT Plug and Play service developer guide](concepts-developer-guide-service.md). +- A _device builder_ who creates the code that runs on a device connected to your solution. To learn more, see [IoT Plug and Play device developer guide](concepts-developer-guide-device.md). ## Use IoT Plug and Play devices To learn more, see [IoT Plug and Play architecture](concepts-architecture.md) As a device builder, you can develop an IoT hardware product that supports IoT Plug and Play. The process includes three key steps: -1. Define the device model. You author a set of JSON files that define your device's capabilities using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl). A model describes a complete entity such as a physical product, and defines the set of interfaces implemented by that entity. Interfaces are shared contracts that uniquely identify the telemetry, properties, and commands supported by a device. Interfaces can be reused across different models. +1. Define the device model. You author a set of JSON files that define your device's capabilities using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl). A model describes a complete entity such as a physical product, and defines the set of interfaces implemented by that entity. Interfaces are shared contracts that uniquely identify the telemetry, properties, and commands supported by a device. You can reuse interfaces across different models. -1. You should create device software or firmware in a way that your telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md). If you're connecting existing sensors attached to a Windows or Linux gateway, the [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md) can simplify this step. +1. Implement your device software or firmware such that your telemetry, properties, and commands follow the [IoT Plug and Play conventions](concepts-convention.md). 
If you're connecting existing sensors attached to a Windows or Linux gateway, the [IoT Plug and Play bridge](./concepts-iot-pnp-bridge.md) can simplify this step. -1. The device announces the model ID as part of the MQTT connection. The Azure IoT SDK includes new constructs to provide the model ID at connection time. +1. Ensure the device announces the model ID as part of the MQTT connection. The Azure IoT SDKs include constructs to provide the model ID at connection time. ## Device certification -The [IoT Plug and Play device certification program](../certification/program-requirements-pnp.md) verifies that a device meets the IoT Plug and Play certification requirements. You can add a certified device to the public [Certified for Azure IoT device catalog](https://aka.ms/devicecatalog). +The [IoT Plug and Play device certification program](../certification/program-requirements-pnp.md) verifies that a device meets the IoT Plug and Play certification requirements. You can add a certified device to the public [Certified for Azure IoT device catalog](https://aka.ms/devicecatalog) where it's discoverable by other solution builders. ## Next steps |
iot-develop | Set Up Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/set-up-environment.md | Title: Tutorial - Set up the IoT resources you need for IoT Plug and Play | Micr description: Tutorial - Create an IoT Hub and Device Provisioning Service instance to use with the IoT Plug and Play quickstarts and tutorials. Previously updated : 08/11/2020- Last updated : 11/17/2022+ |
iot-develop | Tutorial Connect Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-connect-device.md | Title: Tutorial - Connect IoT Plug and Play sample device code to Azure IoT Hub | Microsoft Docs description: Tutorial - Build and run IoT Plug and Play sample device code (C, C#, Java, JavaScript, or Python) on Linux or Windows that connects to an IoT hub. Use the Azure IoT explorer tool to view the information sent by the device to the hub.-- Previously updated : 07/14/2020++ Last updated : 11/17/2022 |
iot-develop | Tutorial Migrate Device To Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-migrate-device-to-module.md | Title: Tutorial - Connect a generic Azure IoT Plug and Play module | Microsoft Docs description: Tutorial - Use sample C# IoT Plug and Play device code in a generic module.-- Previously updated : 9/22/2020++ Last updated : 11/17/2022 -A device is an IoT Plug and Play device if it publishes its model ID when it connects to an IoT hub and implements the properties and methods described in the Digital Twins Definition Language (DTDL) V2 model identified by the model ID. To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way. +A device is an IoT Plug and Play device if it: ++* Publishes its model ID when it connects to an IoT hub. +* Implements the properties and methods described in the Digital Twins Definition Language (DTDL) V2 model identified by the model ID. ++To learn more about how devices use a DTDL and model ID, see [IoT Plug and Play developer guide](./concepts-developer-guide-device.md). Modules use model IDs and DTDL models in the same way. To demonstrate how to implement an IoT Plug and Play module, this tutorial shows you how to: To demonstrate how to implement an IoT Plug and Play module, this tutorial shows [!INCLUDE [iot-pnp-prerequisites](../../includes/iot-pnp-prerequisites.md)] -To complete this tutorial on Windows, install the following software on your local Windows environment: +To complete this tutorial, install the following software in your local development environment: -* [Visual Studio (Community, Professional, or Enterprise)](https://visualstudio.microsoft.com/downloads/). +* Install the latest .NET for your operating system from [https://dot.net](https://dot.net). * [Git](https://git-scm.com/download/). 
Use the Azure IoT explorer tool to add a new device called **my-module-device** to your IoT hub. If you haven't already done so, clone the Azure IoT Hub Device C# SDK GitHub rep Open a command prompt in a folder of your choice. Use the following command to clone the [Azure IoT C# SDK](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository into this location: -```cmd +```cmd/sh git clone https://github.com/Azure/azure-iot-sdk-csharp.git ``` git clone https://github.com/Azure/azure-iot-sdk-csharp.git To open and prepare the sample project: -1. Open the *azure-iot-sdk-csharp\iothub\device\samples\solutions\PnpDeviceSamples\Thermostat\Thermostat.csproj* project file in Visual Studio 2019. +1. Navigate to the *azure-iot-sdk-csharp/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat* folder. -1. In Visual Studio, navigate to **Project > Thermostat Properties > Debug**. Then add the following environment variables to the project: +1. Add the following environment variables to your shell environment: | Name | Value | | - | -- | To open and prepare the sample project: To modify the code to work as a module instead of a device: -1. In Visual Studio, open *Parameter.cs* and modify the line that sets the **PrimaryConnectionString** variable as follows: +1. In your text editor or IDE, open *Parameter.cs* and modify the line that sets the **PrimaryConnectionString** variable as follows: ```csharp public string PrimaryConnectionString { get; set; } = Environment.GetEnvironmentVariable("IOTHUB_MODULE_CONNECTION_STRING"); ``` -1. In Visual Studio, open *Program.cs* and replace the seven instances of the `DeviceClient` class with the `ModuleClient` class. +1. In your text editor or IDE, open *Program.cs* and replace the nine instances of the `DeviceClient` class with the `ModuleClient` class. > [!TIP] > Use the Visual Studio search and replace feature with **Match case** and **Match whole word** enabled to replace `DeviceClient` with `ModuleClient`. -1. 
In Visual Studio, open *Thermostat.cs* and replace both instances of the `DeviceClient` class with the `ModuleClient` class as follows. +1. In your text editor or IDE, open *Thermostat.cs* and replace both instances of the `DeviceClient` class with the `ModuleClient` class. 1. Save the changes to the files you modified. +1. To run the sample in your shell environment, make sure you're in the *azure-iot-sdk-csharp/iothub/device/samples/solutions/PnpDeviceSamples/Thermostat* folder and that the environment variables are set. Then run: ++ ```cmd/sh + dotnet build + dotnet run + ``` + If you run the code and then use the Azure IoT explorer to view the updated module twin, you see the updated device twin with the model ID and module reported property: ```json { "deviceId": "my-module-device",- "moduleId": "my-mod", + "moduleId": "my-module", "etag": "AAAAAAAAAAE=",- "deviceEtag": "NjgzMzQ1MzQ1", + "deviceEtag": "MTk0ODMyMjI4", "status": "enabled", "statusUpdateTime": "0001-01-01T00:00:00Z", "connectionState": "Connected",- "lastActivityTime": "0001-01-01T00:00:00Z", + "lastActivityTime": "2022-11-16T13:56:43.1711257Z", "cloudToDeviceMessageCount": 0, "authenticationType": "sas", "x509Thumbprint": { If you run the code and then use the Azure IoT explorer to view the updated modu "secondaryThumbprint": null }, "modelId": "dtmi:com:example:Thermostat;1",- "version": 3, + "version": 5, "properties": { "desired": { "$metadata": { If you run the code and then use the Azure IoT explorer to view the updated modu "$version": 1 }, "reported": {- "maxTempSinceLastReboot": 5, + "targetTemperature": { + "value": 0, + "ac": 203, + "av": 0, + "ad": "Initialized with default value" + }, + "maxTempSinceLastReboot": 23.4, "$metadata": {- "$lastUpdated": "2020-09-28T08:53:45.9956637Z", + "$lastUpdated": "2022-11-16T14:06:59.4376422Z", + "targetTemperature": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z", + "value": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z" + }, + "ac": { + 
"$lastUpdated": "2022-11-16T13:55:55.6688872Z" + }, + "av": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z" + }, + "ad": { + "$lastUpdated": "2022-11-16T13:55:55.6688872Z" + } + }, "maxTempSinceLastReboot": {- "$lastUpdated": "2020-09-28T08:53:45.9956637Z" + "$lastUpdated": "2022-11-16T14:06:59.4376422Z" } }, "$version": 2 If you run the code and then use the Azure IoT explorer to view the updated modu The service SDKs let you retrieve the model ID of connected IoT Plug and Play devices and modules. You can use the service SDKs to set writable properties and call commands: -1. In another instance of Visual Studio, open the *azure-iot-sdk-csharp\iot-hub\Samples\service\PnpServiceSamples\Thermostat\Thermostat.csproj* project. +1. In another shell environment, navigate to the *azure-iot-sdk-csharp\iothub\service\samples\solutions\PnpServiceSamples\Thermostat* folder. -1. In Visual Studio, navigate to **Project > Thermostat Properties > Debug**. Then add the following environment variables to the project: +1. Add the following environment variables to your shell environment: | Name | Value | | - | -- | The service SDKs let you retrieve the model ID of connected IoT Plug and Play de > [!TIP] > You can also find your IoT hub connection string in the Azure IoT explorer tool. -1. Open the *Program.cs* file and modify the line that calls a command as follows: +1. In your text editor or IDE, open the *ThermostatSample.cs* file and modify the line that calls a command as follows: ```csharp- CloudToDeviceMethodResult result = await s_serviceClient.InvokeDeviceMethodAsync(s_deviceId, "my-module", commandInvocation); + CloudToDeviceMethodResult result = await _serviceClient.InvokeDeviceMethodAsync(_deviceId, "my-module", commandInvocation); ``` -1. In the *Program.cs* file, modify the line that retrieves the device twin as follows: +1. 
In the *ThermostatSample.cs* file, modify the line that retrieves the device twin as follows: ```csharp Twin twin = await s_registryManager.GetTwinAsync(s_deviceId, "my-module"); ``` -1. Make sure the module client sample is still running, and then run this service sample. The output from the service sample shows the model ID from the device twin and the command call: -- ```cmd - [09/28/2020 10:52:55]dbug: Thermostat.Program[0] - Initialize the service client. - [09/28/2020 10:52:55]dbug: Thermostat.Program[0] - Get Twin model Id and Update Twin - [09/28/2020 10:52:59]dbug: Thermostat.Program[0] - Model Id of this Twin is: dtmi:com:example:Thermostat;1 - [09/28/2020 10:52:59]dbug: Thermostat.Program[0] - Invoke a command - [09/28/2020 10:53:00]dbug: Thermostat.Program[0] - Command getMaxMinReport invocation result status is: 200 - ``` +1. Save your changes. - The output from the module client shows the command handler's response: +1. Make sure the module client sample is still running, and then run this service sample: - ```cmd - [09/28/2020 10:53:00]dbug: Thermostat.ThermostatSample[0] - Command: Received - Generating max, min and avg temperature report since 28/09/2020 10:52:55. - [09/28/2020 10:53:00]dbug: Thermostat.ThermostatSample[0] - Command: MaxMinReport since 28/09/2020 10:52:55: maxTemp=25.4, minTemp=25.4, avgTemp=25.4, startTime=28/09/2020 10:52:56, endTime=28/09/2020 10:52:56 + ```cmd/sh + dotnet build + dotnet run ``` +The output from the service sample shows the model ID from the device twin and the command call: ++```cmd +[11/16/2022 14:27:56]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Get the my-module-device device twin. +... +[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + The my-module-device device twin has a model with ID dtmi:com:example:Thermostat;1. 
+[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Update the targetTemperature property on the my-module-device device twin to 44. +[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Get the my-module-device device twin. +... +[11/16/2022 14:27:58]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Invoke the getMaxMinReport command on my-module-device device twin. +[11/16/2022 14:27:59]dbug: Microsoft.Azure.Devices.Samples.ThermostatSample[0] + Command getMaxMinReport was invoked on device twin my-module-device. +Device returned status: 200. +Report: {"maxTemp":23.4,"minTemp":23.4,"avgTemp":23.39999999999999,"startTime":"2022-11-16T14:26:00.7446533+00:00","endTime":"2022-11-16T14:27:54.3102604+00:00"} +``` ++The output from the module client shows the command handler's response: ++```cmd +[11/16/2022 14:27:59]Microsoft.Azure.Devices.Client.Samples.ThermostatSample[0] Command: Received - Generating max, min and avg temperature report since 16/11/2022 14:25:58. +[11/16/2022 14:27:59]Microsoft.Azure.Devices.Client.Samples.ThermostatSample[0] Command: MaxMinReport since 16/11/2022 14:25:58: maxTemp=23.4, minTemp=23.4, avgTemp=23.39999999999999, startTime=16/11/2022 14:26:00, endTime=16/11/2022 14:27:54 +``` + ## Convert to an IoT Edge module To convert this sample to work as an IoT Plug and Play IoT Edge module, you must containerize the application. You don't need to make any further code changes. The connection string environment variable is injected by the IoT Edge runtime at startup. To learn more, see [Use Visual Studio 2019 to develop and debug modules for Azure IoT Edge](../iot-edge/how-to-visual-studio-develop-module.md). You can use the Azure IoT Explorer tool to see: * The model ID of your IoT Edge device in the module twin. * Telemetry from the IoT Edge device. 
* IoT Edge module twin property updates triggering IoT Plug and Play notifications.-* The IoT Edge module react to your IoT Plug and Play commands. +* The IoT Edge module reacts to your IoT Plug and Play commands. ## Clean up resources |
iot-develop | Tutorial Multiple Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-multiple-components.md | Title: Tutorial - Connect an IoT Plug and Play multiple component device applications to IoT Hub | Microsoft Docs -description: Tutorial - Build and run IoT Plug and Play sample device code (C, C#, Java, JavaScript, or Python) that uses multiple components and connects to an IoT hub. Use the Azure IoT explorer tool to view the information sent by the device to the hub. -- Previously updated : 07/22/2020+description: Tutorial - Build and run IoT Plug and Play sample device code that uses multiple components and connects to an IoT hub. The tutorial shows you how to use C, C#, Java, JavaScript, or Python. Use the Azure IoT explorer tool to view the information sent by the device to the hub. ++ Last updated : 11/17/2022 |
iot-develop | Tutorial Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-service.md | Title: Tutorial - Interact with an IoT Plug and Play device connected to your Azure IoT solution | Microsoft Docs description: Tutorial - Use C#, JavaScript, Java, or Python to connect to and interact with an IoT Plug and Play device that's connected to your Azure IoT solution.-- Previously updated : 09/21/2020++ Last updated : 11/17/2022 |
iot-develop | Tutorial Use Mqtt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-use-mqtt.md | Title: Tutorial - Use MQTT to create an Azure IoT Plug and Play device client | Microsoft Docs + Title: Tutorial - Use MQTT to create an IoT Plug and Play device client | Microsoft Docs description: Tutorial - Use the MQTT protocol directly to create an IoT Plug and Play device client without using the Azure IoT Device SDKs Previously updated : 05/13/2020 Last updated : 11/17/2022 Verify the code is working correctly, by starting Azure IoT explorer, start list Run the application (Ctrl+F5); after a couple of seconds you see output that looks like: -In Azure IoT explorer, you can see that the device isn't an IoT Plug and Play device: +In Azure IoT explorer, you can see that the device isn't an IoT Plug and Play device because there's no model ID: ### Make the device an IoT Plug and Play device Rebuild and run the sample. The device twin now includes the model ID: You can now navigate the IoT Plug and Play component: You can now modify your device code to implement the telemetry, properties, and commands defined in your model. To see an example implementation of the thermostat device using the Mosquitto library, see [Using MQTT PnP with Azure IoTHub without the IoT SDK on Windows](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Windows/PnPMQTTWin32) on GitHub. |
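The "Make the device an IoT Plug and Play device" step works by announcing the model ID in the MQTT CONNECT username. A minimal sketch of how that username is assembled when connecting without the SDK — the hub name is illustrative, and the `api-version` value is an assumption (use the one current for your hub):

```python
from urllib.parse import quote

def mqtt_username(hub_hostname: str, device_id: str, model_id: str) -> str:
    """Build the MQTT CONNECT username that announces the device model ID.

    IoT Hub reads the model ID from the 'model-id' query parameter. The
    DTMI must be URL-encoded because it contains ':' and ';' characters.
    """
    return (f"{hub_hostname}/{device_id}/"
            f"?api-version=2021-04-12&model-id={quote(model_id, safe='')}")

username = mqtt_username("myhub.azure-devices.net", "my-device",
                         "dtmi:com:example:Thermostat;1")
```

Once a device connects with a username of this shape, the model ID shows up in its device twin, which is what makes Azure IoT explorer recognize it as an IoT Plug and Play device.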
iot-dps | How To Verify Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-verify-certificates.md | Now, you need to sign the *Verification Code* with the private key associated wi Microsoft provides tools and samples that can help you create a signed verification certificate: - The **Azure IoT Hub C SDK** provides PowerShell (Windows) and Bash (Linux) scripts to help you create CA and leaf certificates for development and to perform proof-of-possession using a verification code. You can download the [files](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) relevant to your system to a working folder and follow the instructions in the [Managing CA certificates readme](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) to perform proof-of-possession on a CA certificate. -- The **Azure IoT Hub C# SDK** contains the [Group Certificate Verification Sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/How%20To/GroupCertificateVerificationSample), which you can use to do proof-of-possession.+- The **Azure IoT Hub C# SDK** contains the [Group Certificate Verification Sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/how%20to%20guides/GroupCertificateVerificationSample), which you can use to do proof-of-possession. > [!IMPORTANT] > In addition to performing proof-of-possession, the PowerShell and Bash scripts cited previously also allow you to create root certificates, intermediate certificates, and leaf certificates that can be used to authenticate and provision devices. These certificates should be used for development only. They should never be used in a production environment. |
iot-dps | Quick Create Simulated Device Symm Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md | To update and run the provisioning sample with your device information: | :-- | :- | :-- | | `--i` or `--IdScope` | True | The ID Scope of the DPS instance | | `--r` or `--RegistrationId` | True | The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). |- | `--p` or `--PrimaryKey` | True | The primary key of the individual enrollment or the derived device key of the group enrollment. See the [ComputeDerivedSymmetricKeySample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples/Getting%20Started/ComputeDerivedSymmetricKeySample) for how to generate the derived key. | + | `--p` or `--PrimaryKey` | True | The primary key of the individual enrollment or the derived device key of the group enrollment. See the [ComputeDerivedSymmetricKeySample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/device/samples/getting%20started/ComputeDerivedSymmetricKeySample) for how to generate the derived key. | | `--g` or `--GlobalDeviceEndpoint` | False | The global endpoint for devices to connect to. Defaults to `global.azure-devices-provisioning.net` | | `--t` or `--TransportType` | False | The transport to use to communicate with the device provisioning instance. Defaults to `Mqtt`. Possible values include `Mqtt`, `Mqtt_WebSocket_Only`, `Mqtt_Tcp_Only`, `Amqp`, `Amqp_WebSocket_Only`, `Amqp_Tcp_only`, and `Http1`.| |
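The `--p` parameter table above points to the ComputeDerivedSymmetricKeySample for generating a group enrollment's derived device key. The scheme DPS documents for this is an HMAC-SHA256 of the registration ID, keyed with the decoded group key; a sketch of that derivation:

```python
import base64
import hashlib
import hmac

def derive_device_key(group_key_b64: str, registration_id: str) -> str:
    """Derive a per-device key from a DPS enrollment-group symmetric key.

    The device key is the base64-encoded HMAC-SHA256 of the registration
    ID, keyed with the base64-decoded enrollment-group primary key.
    """
    key = base64.b64decode(group_key_b64)
    mac = hmac.new(key, registration_id.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(mac.digest()).decode("ascii")
```

The resulting string is what you would pass as `--p` for a group enrollment, alongside the matching `--r` registration ID.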
iot-dps | Quick Enroll Device X509 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md | If you plan to explore the Azure IoT Hub Device Provisioning Service tutorials, The [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) has scripts that can help you create root CA, intermediate CA, and device certificates, and do proof-of-possession with the service to verify root and intermediate CA certificates. To learn more, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md). -The [Group certificate verification sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/How%20To/GroupCertificateVerificationSample) in the [Azure IoT SDK for C# (.NET)](https://github.com/Azure/azure-iot-sdk-csharp) shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate. +The [Group certificate verification sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/how%20to%20guides/GroupCertificateVerificationSample) in the [Azure IoT SDK for C# (.NET)](https://github.com/Azure/azure-iot-sdk-csharp) shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate. :::zone-end |
iot-edge | Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md | For more information about IoT Edge releases, see [Azure IoT Edge supported syst Azure IoT Edge for Linux on Windows (EFLOW) supports the following versions: * **EFLOW Continuous Release (CR)** based on the latest non-LTS Azure IoT Edge version, it contains new features and capabilities that are in the latest stable release. For more information, see the [EFLOW release notes](https://github.com/Azure/iotedge-eflow/releases). * **EFLOW 1.1 (LTS)** based on Azure IoT Edge 1.1, it's the Long-term support version. This version will be stable through the supported lifetime of this version and won't include new features released in later versions. This version will be supported until Dec 2022 to match the IoT Edge 1.1 LTS release lifecycle.  -* **EFLOW 1.4 (LTS)** based on Azure IoT Edge 1.4, it's the latest Long-term support version. This version will be stable through the supported lifetime of this version and won't include new features released in later versions. This version will be supported until Nov 2024 to match the IoT Edge 1.3 LTS release lifecycle.  +* **EFLOW 1.4 (LTS)** based on Azure IoT Edge 1.4, it's the latest Long-term support version. This version will be stable through the supported lifetime of this version and won't include new features released in later versions. This version will be supported until Nov 2024 to match the IoT Edge 1.4 LTS release lifecycle.  All new releases are made available in the [Azure IoT Edge for Linux on Windows project](https://github.com/Azure/iotedge-eflow). 
This table provides recent version history for IoT Edge package releases, and hi | IoT Edge release | Available in EFLOW branch | Release date | End of Support Date | Highlights | | - | - | | - | - |-| 1.4 | Long-term support (LTS) | TBA | November 12, 2024 | [Azure IoT Edge 1.4.0](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) | +| 1.4 | Long-term support (LTS) | November 2022 | November 12, 2024 | [Azure IoT Edge 1.4.0](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) | | 1.3 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.3.1.02092) | September 2022 | In support | [Azure IoT Edge 1.3.0](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0)<br/> [CBL-Mariner 2.0](https://microsoft.github.io/CBL-Mariner/announcing-mariner-2.0/)<br/> [USB passthrough using USB-Over-IP](https://aka.ms/AzEFLOW-USBIP)<br/>[File/Folder sharing between Windows OS and the EFLOW VM](https://aka.ms/AzEFLOW-FolderSharing) | | 1.2 | [Continuous release (CR)](https://github.com/Azure/iotedge-eflow/releases/tag/1.2.7.07022) | January 2022 | September 2022 | [Public Preview](https://techcommunity.microsoft.com/t5/internet-of-things-blog/azure-iot-edge-for-linux-on-windows-eflow-continuous-release/ba-p/3169590) | | 1.1 | [Long-term support (LTS)](https://github.com/Azure/iotedge-eflow/releases/tag/1.1.2106.0) | June 2021 | December 13, 2022 | IoT Edge 1.1 LTS is supported through December 13, 2022 to match the [.NET Core 3.1 
release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). <br> [Long-term support plan and supported systems updates](support.md) | |
iot-hub-device-update | Components Enumerator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/components-enumerator.md | Here are the responsibilities of each part of the proxy update flow: - **Child steps handler** - - Iterate through a list of component instances that are compatible with the child update content. For more information, see [Steps handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler). + - Iterate through a list of component instances that are compatible with the child update content. For more information, see [Steps handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/step_handlers). -In production, device builders can use [existing handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or implement a custom handler that invokes any installer needed for an over-the-air update. For more information, see [Implement a custom update content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md). +In production, device builders can use [existing handlers](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/inc/aduc/content_handler.hpp) or implement a custom handler that invokes any installer needed for an over-the-air update. For more information, see [Implement a custom update content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md). ## Virtual Vacuum components For example, for *hostfw*, the value of the property `properties.version` will b The example in this article used C. 
To explore C++ example source codes, see: -- [CMakeLists.txt](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/CMakeLists.txt)-- [contoso-component-enumerator.cpp](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/contoso-component-enumerator.cpp)+- [CMakeLists.txt](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component_enumerators/examples/contoso_component_enumerator/CMakeLists.txt) +- [contoso-component-enumerator.cpp](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component_enumerators/examples/contoso_component_enumerator/contoso_component_enumerator.cpp) - [inc/aduc/component_enumerator_extension.hpp](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/inc/aduc/component_enumerator_extension.hpp) -For various sample updates for components connected to the Contoso Virtual Vacuum device, see [Proxy update demo](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo/README.md). +For various sample updates for components connected to the Contoso Virtual Vacuum device, see [Proxy update demo](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component_enumerators/examples/contoso_component_enumerator/demo/README.md). |
iot-hub-device-update | Device Update Howto Proxy Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-howto-proxy-updates.md | For testing and demonstration purposes, we'll create the following mock componen > [!IMPORTANT] > The preceding component configuration is based on the implementation of an example component enumerator extension called *libcontoso-component-enumerator.so*. It also requires this mock component inventory data file: */usr/local/contoso-devices/components-inventory.json*. -1. Copy the [demo](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo) folder to your home directory on the test VM. Then, run the following command to copy required files to the right locations: +1. Copy the [demo](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component_enumerators/examples/contoso_component_enumerator/demo) folder to your home directory on the test VM. Then, run the following command to copy required files to the right locations: ```sh `~/demo/tools/reset-demo-components.sh` For testing and demonstration purposes, we'll create the following mock componen The `reset-demo-components.sh` command takes the following steps on your behalf: - * It copies [components-inventory.json](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/demo/demo-devices/contoso-devices/components-inventory.json) and adds it to the */usr/local/contoso-devices* folder. + * It copies [components-inventory.json](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component_enumerators/examples/contoso_component_enumerator/demo/demo-devices/contoso-devices/components-inventory.json) and adds it to the */usr/local/contoso-devices* folder. 
* It copies the Contoso component enumerator extension (*libcontoso-component-enumerator.so*) from the [Assets folder](https://github.com/Azure/iot-hub-device-update/releases) and adds it to the */var/lib/adu/extensions/sources* folder. |
iot-hub-device-update | Device Update Multi Step Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-multi-step-updates.md | The steps content handler applies **IsInstalled** validation logic for each step To report an update result, the result of a step handler execution must be written to ADUC_Result struct in a desired result file as specified in --result-file option. Then based on results of the execution, for success return 0, for any fatal errors return -1 or 0xFF. -For more information, see [Steps content handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler/README.md) and [Implementing a custom component-aware content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md). +For more information, see [Steps content handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/extensions/step_handlers) and [Implementing a custom component-aware content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md). ### Reference steps in a parent update |
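The step-handler result contract described in the Device Update entry above (write an ADUC_Result payload to the path passed via `--result-file`, then return 0 on success or -1/0xFF on fatal error) can be sketched as follows. This is an illustrative sketch only: the JSON field names `resultCode` and `extendedResultCode` are assumptions for demonstration, not the authoritative ADUC_Result schema.

```python
import json


def write_step_result(result_file: str, succeeded: bool, extended_code: int = 0) -> int:
    """Write an ADUC_Result-style payload to the path given via --result-file.

    The field names below are illustrative assumptions, not the
    authoritative Device Update schema.
    """
    payload = {
        "resultCode": 1 if succeeded else 0,
        "extendedResultCode": extended_code,
    }
    with open(result_file, "w") as f:
        json.dump(payload, f)
    # Per the doc: return 0 for success, -1 or 0xFF for any fatal error.
    return 0 if succeeded else 0xFF
```

A step handler would call something like this once per step, letting the steps content handler read the result file to decide whether to continue.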
iot-hub | Iot Hub Devguide Messages Read Builtin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md | You can use the Event Hubs SDKs to read from the built-in endpoint in environmen | Language | Sample | | -- | |-| .NET | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/getting%20started/ReadD2cMessages) | +| .NET | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/getting%20started/ReadD2cMessages) | | Java | [read-d2c-messages Java](https://github.com/Azure-Samples/azure-iot-samples-java/tree/master/iot-hub/Quickstarts/read-d2c-messages) | | Node.js | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) | | Python | [read-d2c-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) | |
key-vault | Overview Vnet Service Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md | description: Learn how virtual network service endpoints for Azure Key Vault all Previously updated : 09/06/2022 Last updated : 11/20/2022 Here's a list of trusted services that are allowed to access a key vault if the | Azure Disk Encryption volume encryption service|Allow access to BitLocker Key (Windows VM) or DM Passphrase (Linux VM), and Key Encryption Key, during virtual machine deployment. This enables [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md).| | Azure Disk Storage | When configured with a Disk Encryption Set (DES). For more information, see [Server-side encryption of Azure Disk Storage using customer-managed keys](../../virtual-machines/disk-encryption.md#customer-managed-keys).| | Azure Event Hubs|[Allow access to a key vault for customer-managed keys scenario](../../event-hubs/configure-customer-managed-key.md)|+| Azure Firewall Premium| [Azure Firewall Premium certificates](../../firewall/premium-certificates.md)| | Azure Front Door Classic|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-key-vault-and-certificate) | Azure Front Door Standard/Premium|[Using Key Vault certificates for HTTPS](../../frontdoor/standard-premium/how-to-configure-https-custom-domain.md#prepare-your-key-vault-and-certificate) | Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) |
key-vault | Hsm Protected Keys Ncipher | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md | To validate the downloaded package: ``` > [!TIP]- > The nCipher nShield software includes python at %NFAST_HOME%\python\bin + > The nCipher nShield software includes Python at %NFAST_HOME%\python\bin > > 2. Confirm that you see the following, which indicates successful validation: **Result: SUCCESS** |
logic-apps | Create Single Tenant Workflows Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md | For more information, review the [Azurite documentation](https://github.com/Azur * [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app. - * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4868) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code. + * [Azure Functions Core Tools - 4.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/4.0.4865) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code. * If you have an installation that's earlier than these versions, uninstall that version first, or make sure that the PATH environment variable points at the version that you download and install. The authoring capability is currently available only in Visual Studio Code, but ## Open the workflow definition file in the designer -1. Check the versions that are installed on your computer by running this command: -- `..\Users\{yourUserName}\dotnet --list-sdks` -- If you have .NET Core SDK 5.x, this version might prevent you from opening the logic app's underlying workflow definition in the designer. 
Rather than uninstall this version, at your project's root folder, create a **global.json** file that references the .NET Core runtime 3.x version that you have that's later than 3.1.201, for example: -- ```json - { - "sdk": { - "version": "3.1.8", - "rollForward": "disable" - } - } - ``` -- > [!IMPORTANT] - > Make sure that you explicitly add the **global.json** file in your project's - > root folder from inside Visual Studio Code. Otherwise, the designer won't open. - 1. Expand the project folder for your workflow. Open the **workflow.json** file's shortcut menu, and select **Open in Designer**. ![Screenshot that shows Explorer pane and shortcut window for the workflow.json file with "Open in Designer" selected.](./media/create-single-tenant-workflows-visual-studio-code/open-definition-file-in-designer.png) |
machine-learning | How To Access Data Interactive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md | SOURCE=https://<account_name>.blob.core.windows.net/<container>/<path> DEST=/home/azureuser/data azcopy cp $SOURCE $DEST ```++## Next steps ++- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](interactive-data-wrangling-with-apache-spark-azure-ml.md) +- [Access data in a job](how-to-read-write-data-v2.md) |
machine-learning | How To Debug Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md | |
machine-learning | How To Interactive Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md | Last updated 03/15/2022 # Debug jobs and monitor training progress (preview)++> [!IMPORTANT] +> Items marked (preview) in this article are currently in public preview. +> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. +> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). + Machine learning model training is usually an iterative process and requires significant experimentation. With the Azure Machine Learning interactive job experience, data scientists can use the Azure Machine Learning Python SDK v2, the Azure Machine Learning CLI v2, or the Azure Machine Learning studio to access the container where their job is running. Once they access the job container, users can iterate on training scripts, monitor training progress, or debug the job remotely as they typically do on their local machines. Users can interact with jobs through different training applications, including **JupyterLab, TensorBoard, and VS Code**, or by connecting to the job container directly via **SSH**. Interactive training is supported on **Azure Machine Learning compute clusters** and **Azure Arc-enabled Kubernetes clusters**. |
machine-learning | How To Manage Environments V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md | Get the details of a specific environment: # [Azure CLI](#tab/cli) ```cli-az ml environment list --name docker-image-example --version 1 +az ml environment show --name docker-image-example --version 1 ``` # [Python SDK](#tab/python) |
machine-learning | How To Manage Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md | Use the following tabs to select where your model is located. # [Local model](#tab/use-local) -```YAML -$schema: https://azuremlschemas.azureedge.net/latest/model.schema.json -name: local-file-example -path: mlflow-model/model.pkl -description: Model created from local file. -``` ```bash az ml model create -f <file-name>.yml Use the following tabs to select where your model is located. # [Local model](#tab/use-local) -```python -from azure.ai.ml.entities import Model -from azure.ai.ml.constants import ModelType --file_model = Model( - path="mlflow-model/model.pkl", - type=ModelType.CUSTOM, - name="local-file-example", - description="Model created from local file." -) -ml_client.models.create_or_update(file_model) -``` +[!notebook-python[] (~/azureml-examples-main/sdk/python/assets/model/model.ipynb?name=file_model)] # [Datastore](#tab/use-datastore) You can create a model from a cloud path by using any one of the following supported URI formats. -```python -from azure.ai.ml.entities import Model -from azure.ai.ml.constants import ModelType +[!notebook-python[] (~/azureml-examples-main/sdk/python/assets/model/model.ipynb?name=cloud_model)] -cloud_model = Model( - path= "azureml://datastores/workspaceblobstore/paths/model.pkl" - name="cloud-path-example", - type=ModelType.CUSTOM, - description="Model created from cloud path." -) -ml_client.models.create_or_update(cloud_model) -``` The examples use the shorthand `azureml` scheme for pointing to a path on the `datastore` by using the syntax `azureml://datastores/${{datastore-name}}/paths/${{path_on_datastore}}`. 
Examples: Saving model from a named output: -```python -from azure.ai.ml.entities import Model -from azure.ai.ml.constants import ModelType --run_model = Model( - path="azureml://jobs/$RUN_ID/outputs/artifacts/paths/model/" - name="run-model-example", - description="Model created from run.", - type=ModelType.CUSTOM -) +[!notebook-python[] (~/azureml-examples-main/sdk/python/assets/model/model.ipynb?name=run_model)] -ml_client.models.create_or_update(run_model) -``` -For a complete example, see the [model notebook](https://github.com/Azure/azureml-examples/tree/march-sdk-preview/sdk/assets/model). +For a complete example, see the [model notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/assets/model/model.ipynb). To create a model in Machine Learning, from the UI, open the **Models** page. Se +## Manage models ++The SDK and CLI (v2) also allow you to manage the lifecycle of your Azure ML model assets. ++### List ++List all the models in your workspace: ++# [Azure CLI](#tab/cli) ++```cli +az ml model list +``` ++# [Python SDK](#tab/python) ++```python +models = ml_client.models.list() +for model in models: + print(model.name) +``` ++++List all the model versions under a given name: ++# [Azure CLI](#tab/cli) ++```cli +az ml model list --name run-model-example +``` ++# [Python SDK](#tab/python) ++```python +models = ml_client.models.list(name="run-model-example") +for model in models: + print(model.version) +``` ++++### Show ++Get the details of a specific model: ++# [Azure CLI](#tab/cli) ++```cli +az ml model show --name run-model-example --version 1 +``` ++# [Python SDK](#tab/python) ++```python +model_example = ml_client.models.get(name="run-model-example", version="1") +print(model_example) +``` +++### Update ++Update mutable properties of a specific model: ++# [Azure CLI](#tab/cli) ++```cli +az ml model update --name run-model-example --version 1 --set description="This is an updated description." 
--set tags.stage="Prod" +``` ++# [Python SDK](#tab/python) ++```python +model_example.description="This is an updated description." +model_example.tags={"stage":"Prod"} +ml_client.models.create_or_update(model=model_example) +``` +++> [!IMPORTANT] +> For model, only `description` and `tags` can be updated. All other properties are immutable; if you need to change any of those properties you should create a new version of the model. ++### Archive ++Archiving a model will hide it by default from list queries (`az ml model list`). You can still continue to reference and use an archived model in your workflows. You can archive either all versions of a model or only a specific version. ++If you don't specify a version, all versions of the model under that given name will be archived. If you create a new model version under an archived model container, that new version will automatically be set as archived as well. ++Archive all versions of a model: ++# [Azure CLI](#tab/cli) ++```cli +az ml model archive --name run-model-example +``` ++# [Python SDK](#tab/python) ++```python +ml_client.models.archive(name="run-model-example") +``` +++ +Archive a specific model version: ++# [Azure CLI](#tab/cli) ++```cli +az ml model archive --name run-model-example --version 1 +``` ++# [Python SDK](#tab/python) ++```python +ml_client.models.archive(name="run-model-example", version="1") +``` ++++## Use model for training ++The SDK and CLI (v2) also allow you to use a model in a training job as an input or output. + ## Use model as input in a job # [Azure CLI](#tab/cli) Create a job specification YAML file (`<file-name>.yml`). Specify in the `inputs 1. The `type`; whether the model is a `mlflow_model`,`custom_model` or `triton_model`. 1. The `path` of where your data is located; can be any of the paths outlined in the [Supported Paths](#supported-paths) section. 
-```yaml -$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json --# Possible Paths for models: -# AzureML Datastore: azureml://datastores/<datastore-name>/paths/<path_on_datastore> -# MLflow run: runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location> -# Job: azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location> -# Model Asset: azureml:<my_model>:<version> --command: | - ls ${{inputs.my_model}} -code: <folder where code is located> -inputs: - my_model: - type: <type> # mlflow_model,custom_model, triton_model - path: <path> -environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest -compute: azureml:cpu-cluster -``` Next, run in the CLI Next, run in the CLI az ml job create -f <file-name>.yml ``` +For a complete example, see the [model GitHub repo](https://github.com/Azure/azureml-examples/tree/main/cli/assets/model). ++ # [Python SDK](#tab/python) The `Input` class allows you to define: In your job you can write model to your cloud-based storage using *outputs*. 
Create a job specification YAML file (`<file-name>.yml`), with the `outputs` section populated with the type and path of where you would like to write your data to: -```yaml -$schema: https://azuremlschemas.azureedge.net/latest/CommandJob.schema.json --# Possible Paths for Model: -# Local path: mlflow-model/model.pkl -# AzureML Datastore: azureml://datastores/<datastore-name>/paths/<path_on_datastore> -# MLflow run: runs:/<run-id>/<path-to-model-relative-to-the-root-of-the-artifact-location> -# Job: azureml://jobs/<job-name>/outputs/<output-name>/paths/<path-to-model-relative-to-the-named-output-location> -# Model Asset: azureml:<my_model>:<version> --code: src -command: >- - python load_write_model.py - --input_model ${{inputs.input_model}} - --custom_model_output ${{outputs.output_folder}} -inputs: - input_model: - type: <type> # mlflow_model,custom_model, triton_model - path: <path> -outputs: - output_folder: - type: <type> # mlflow_model,custom_model, triton_model -environment: azureml:AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9 -compute: azureml:cpu-cluster -``` Next create a job using the CLI: ```azurecli az ml job create --file <file-name>.yml ```+For a complete example, see the [model GitHub repo](https://github.com/Azure/azureml-examples/tree/main/cli/assets/model). # [Python SDK](#tab/python) |
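The shorthand `azureml://datastores/${{datastore-name}}/paths/${{path_on_datastore}}` syntax covered in the model-management entry above follows a fixed template, which a small helper can make concrete. The helper below is illustrative only; it is not part of the azure-ai-ml SDK.

```python
def azureml_datastore_uri(datastore_name: str, path_on_datastore: str) -> str:
    """Build a shorthand azureml datastore URI of the form:
    azureml://datastores/<datastore-name>/paths/<path_on_datastore>

    Illustrative helper, not an SDK function.
    """
    # Strip a leading slash so the path segment joins cleanly.
    return f"azureml://datastores/{datastore_name}/paths/{path_on_datastore.lstrip('/')}"
```

For example, `azureml_datastore_uri("workspaceblobstore", "model.pkl")` produces the same cloud path used in the datastore example above.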
machine-learning | How To Manage Optimize Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md | Title: Manage and optimize costs description: Learn tips to optimize your cost when building machine learning models in Azure Machine Learning -+ |
machine-learning | How To Manage Resources Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-resources-vscode.md | |
machine-learning | How To Monitor Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md | In this article you learn how to: ## Metrics -Use the following steps to view metrics for an online endpoint or deployment: +You can view metrics pages for online endpoints or deployments in the Azure portal. An easy way to access these metrics pages is through links available in the Azure Machine Learning studio user interface, specifically in the **Details** tab of an endpoint's page. Following these links will take you to the exact metrics page in the Azure portal for the endpoint or deployment. Alternatively, you can go to the Azure portal and search for the metrics page for the endpoint or deployment. ++To access the metrics pages through links available in the studio: ++1. Go to the [Azure Machine Learning studio](https://ml.azure.com). +1. In the left navigation bar, select the **Endpoints** page. +1. Select an endpoint by clicking its name. +1. Select **View metrics** in the **Attributes** section of the endpoint to open up the endpoint's metrics page in the Azure portal. +1. Select **View metrics** in the section for each available deployment to open up the deployment's metrics page in the Azure portal. ++ :::image type="content" source="media/how-to-monitor-online-endpoints/online-endpoints-access-metrics-from-studio.png" alt-text="A screenshot showing how to access the metrics of an endpoint and deployment from the studio UI." lightbox="media/how-to-monitor-online-endpoints/online-endpoints-access-metrics-from-studio.png"::: ++To access metrics directly from the Azure portal: + 1. Go to the [Azure portal](https://portal.azure.com). 1. Navigate to the online endpoint or deployment resource. - online endpoints and deployments are Azure Resource Manager (ARM) resources that can be found by going to their owning resource group.
Look for the resource types **Machine Learning online endpoint** and **Machine Learning online deployment**. + Online endpoints and deployments are Azure Resource Manager (ARM) resources that can be found by going to their owning resource group. Look for the resource types **Machine Learning online endpoint** and **Machine Learning online deployment**. 1. In the left-hand column, select **Metrics**. |
machine-learning | How To Set Up Vs Code Remote | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-vs-code-remote.md | |
machine-learning | How To Setup Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-vs-code.md | |
machine-learning | How To Use Automl Onnx Model Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md | Title: Make predictions with AutoML ONNX Model in .NET description: Learn how to make predictions using an AutoML ONNX model in .NET with ML.NET -+ Last updated 10/21/2021 |
machine-learning | Resource Curated Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md | |
machine-learning | Tutorial Pipeline Python Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md | If you're not going to use the endpoint, delete it to stop using the resource. ## Next steps > [!div class="nextstepaction"]-> Learn more about [Azure ML logging](./how-to-use-mlflow-cli-runs.md). +> Learn more about [Azure ML logging](./how-to-use-mlflow-cli-runs.md). |
machine-learning | Tutorial Train Deploy Image Classification Model Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md | |
machine-learning | How To Deploy Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-local.md | description: 'This article describes how to use your local computer as a target -+ Last updated 08/15/2022 |
machine-learning | How To Deploy Package Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-package-models.md | |
machine-learning | How To Troubleshoot Deployment Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment-local.md | description: Try a local model deployment as a first step in troubleshooting mod -+ Last updated 08/15/2022 |
machine-learning | How To Tune Hyperparameters V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters-v1.md | Title: Hyperparameter tuning a model (v1) description: Automate hyperparameter tuning for deep learning and machine learning models using Azure Machine Learning.(v1)-+ |
migrate | Add Server Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md | |
migrate | Agent Based Migration Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/agent-based-migration-architecture.md | |
migrate | Best Practices Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/best-practices-assessment.md | |
migrate | Common Questions Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md | |
migrate | Common Questions Discovery Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-assessment.md | |
migrate | Common Questions Server Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md | |
migrate | Concepts Assessment Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md | |
migrate | Concepts Azure Sql Assessment Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md | description: Learn about Azure SQL assessments in Azure Migrate Discovery and as Previously updated : 05/05/2022- Last updated : 08/05/2022+ # Assessment Overview (migrate to Azure SQL) |
migrate | Concepts Azure Vmware Solution Assessment Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-vmware-solution-assessment-calculation.md | |
migrate | Concepts Azure Webapps Assessment Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-webapps-assessment-calculation.md | description: Learn about Azure App Service assessments in Azure Migrate Discover Previously updated : 07/27/2021- Last updated : 06/14/2022+ # Assessment overview (migrate to Azure App Service) |
migrate | Concepts Dependency Visualization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md | |
migrate | Concepts Migration Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-planning.md | |
migrate | Concepts Migration Webapps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md | |
migrate | Concepts Vmware Agentless Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-vmware-agentless-migration.md | |
migrate | Create Manage Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/create-manage-projects.md | |
migrate | Deploy Appliance Script Government | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script-government.md | |
migrate | Deploy Appliance Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/deploy-appliance-script.md | |
migrate | Discover And Assess Using Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discover-and-assess-using-private-endpoints.md | |
migrate | Discovered Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discovered-metadata.md | |
migrate | How To Assess | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-assess.md | |
migrate | How To Automate Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md | |
migrate | How To Create A Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-a-group.md | |
migrate | How To Create Azure App Service Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-app-service-assessment.md | description: Learn how to assess web apps for migration to Azure App Service Previously updated : 07/28/2021- Last updated : 09/08/2021+ # Create an Azure App Service assessment |
migrate | How To Create Azure Sql Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md | description: Learn how to assess SQL instances for migration to Azure SQL Manage Previously updated : 06/27/2022- Last updated : 08/05/2022+ # Create an Azure SQL assessment |
migrate | How To Create Group Machine Dependencies Agentless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-group-machine-dependencies-agentless.md | |
migrate | How To Create Group Machine Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-group-machine-dependencies.md | |
migrate | How To Delete Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-delete-project.md | |
migrate | How To Discover Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-applications.md | |
migrate | How To Discover Sql Existing Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-sql-existing-project.md | |
migrate | How To Migrate Vmware Vms With Cmk Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-migrate-vmware-vms-with-cmk-disks.md | |
migrate | How To Modify Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-modify-assessment.md | |
migrate | How To Scale Out For Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md | |
migrate | How To Set Up Appliance Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-hyper-v.md | |
migrate | How To Set Up Appliance Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-physical.md | |
migrate | How To Set Up Appliance Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-vmware.md | |
migrate | How To Use Azure Migrate With Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md | description: Use Azure Migrate private link support to discover, assess, and mig ms.-+ Previously updated : 4/5/2022 Last updated : 9/20/2022 # Support requirements and considerations for Private endpoint connectivity (Preview) |
migrate | Hyper V Migration Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/hyper-v-migration-architecture.md | |
migrate | Migrate Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md | |
migrate | Migrate Replication Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md | |
migrate | Migrate Servers To Azure Using Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md | |
migrate | Migrate Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-services-overview.md | |
migrate | Migrate Support Matrix Hyper V Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md | |
migrate | Migrate Support Matrix Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md | |
migrate | Migrate Support Matrix Vmware Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md | |
migrate | Migrate Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md | |
migrate | Migrate V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-v1.md | |
migrate | Onboard To Azure Arc With Azure Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/onboard-to-azure-arc-with-azure-migrate.md | Once the vCenter Server discovery has been completed, software inventory (discov If you receive an error when onboarding to Azure Arc using the Azure Migrate appliance, the following section can help identify the probable cause and suggested steps to resolve your problem. -If you don't see the error code listed below or if the error code starts with **_AZCM_**, refer to [this guide for troubleshooting Azure Arc](../azure-arc/servers/troubleshoot-agent-onboard.md) +If you don't see the error code listed below or if the error code starts with **_AZCM_**, refer to [this guide for troubleshooting Azure Arc](../azure-arc/servers/troubleshoot-agent-onboard.md). ### Error 60001 - UnableToConnectToPhysicalServer |
migrate | Prepare For Agentless Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-agentless-migration.md | |
migrate | Prepare For Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md | |
migrate | Prepare Isv Movere | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-isv-movere.md | |
migrate | Replicate Using Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/replicate-using-expressroute.md | |
migrate | Server Migrate Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/server-migrate-overview.md | |
migrate | Troubleshoot Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-appliance.md | |
migrate | Troubleshoot Changed Block Tracking Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-changed-block-tracking-replication.md | |
migrate | Troubleshoot Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-dependencies.md | |
migrate | Troubleshoot Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-discovery.md | |
migrate | Troubleshoot General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-general.md | |
migrate | Troubleshoot Network Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-network-connectivity.md | Title: Troubleshoot network connectivity issues | Microsoft Docs description: Provides troubleshooting tips for common errors in using Azure Migrate with private endpoints.-- -ms. ++ Previously updated : 11/03/2021- Last updated : 06/27/2022+ # Troubleshoot network connectivity |
migrate | Troubleshoot Project | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-project.md | |
migrate | Troubleshoot Webapps Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md | description: Troubleshoot web apps migration issues Previously updated : 6/22/2022 Last updated : 7/14/2022 + # Troubleshooting web apps migration issues |
migrate | Tutorial App Containerization Azure Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-azure-pipeline.md | |
migrate | Tutorial Assess Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-aws.md | |
migrate | Tutorial Assess Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-hyper-v.md | |
migrate | Tutorial Assess Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-physical.md | |
migrate | Tutorial Assess Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md | description: Learn how to create assessment for Azure SQL in Azure Migrate Previously updated : 05/05/2022- Last updated : 08/05/2022+ |
migrate | Tutorial Assess Vmware Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vm.md | |
migrate | Tutorial Assess Webapps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md | description: Learn how to create assessment for Azure App Service in Azure Migra Previously updated : 07/28/2021- Last updated : 06/27/2022+ |
migrate | Tutorial Discover Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md | |
migrate | Tutorial Migrate Aws Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md | |
migrate | Tutorial Migrate Gcp Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md | |
migrate | Tutorial Migrate Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md | |
migrate | Tutorial Migrate Physical Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md | |
migrate | Tutorial Migrate Vmware Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md | |
migrate | Tutorial Migrate Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware.md | |
migrate | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md | +- Support for selecting VNet and Subnet during test migration using PowerShell for agentless VMware scenario. +- Support for OS disk swap using the Azure portal and PowerShell for agentless VMware scenario. +- Support for pausing and resuming replications using PowerShell for agentless VMware scenario. ## Update (October 2022) |
mysql | Azure Pipelines Deploy Database Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/azure-pipelines-deploy-database-task.md | You can see the full list of all the task inputs when using Azure CLI task with | :- | :-| | azureSubscription| (Required) Provide the Azure Resource Manager subscription for the deployment. This parameter is shown only when the selected task version is 0.* as Azure CLI task v1.0 supports only Azure Resource Manager subscriptions. | |scriptType| (Required) Provide the type of script. Supported scripts are PowerShell, PowerShell Core, Bat, Shell, and script. When running on a **Linux agent**, select one of the following: ```bash``` or ```pscore``` . When running **Windows agent**, select one of the following: ```batch```,```ps``` and ```pscore```. |-|sriptLocation| (Required) Provide the path to script, for example real file path or use ```Inline script``` when providing the scripts inline. The default value is ```scriptPath```. | +|scriptLocation| (Required) Provide the path to script, for example real file path or use ```Inline script``` when providing the scripts inline. The default value is ```scriptPath```. | |scriptPath| (Required) Fully qualified path of the script(.ps1 or .bat or .cmd when using Windows-based agent else <code>.ps1 </code> or <code>.sh </code> when using linux-based agent) or a path relative to the default working directory. | |inlineScript|(Required) You can write your scripts inline here. When using Windows agent, use PowerShell or PowerShell Core or batch scripting whereas use PowerShell Core or shell scripting when using Linux-based agents. For batch files use the prefix \"call\" before every Azure command. You can also pass predefined and custom variables to this script using arguments. <br/>Example for PowerShell/PowerShellCore/shell:``` az --version az account show``` <br/>Example for batch: ``` call az --version call az account show```. 
| | arguments| (Optional) Provide all the arguments passed to the script. For example, ```-SERVERNAME mydemoserver```. | |
mysql | Concepts Customer Managed Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md | description: Learn how data encryption with customer-managed keys for Azure Data Previously updated : 09/15/2022 Last updated : 11/21/2022 -With data encryption with customer-managed keys for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for and in a full control of key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys. +With data encryption with customer-managed keys for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for and ultimately controls the key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys. 
## Benefits -Data encryption with customer-managed keys for Azure Database for MySQL Flexible server provides the following benefits: +Data encryption with customer-managed keys for Azure Database for MySQL Flexible server provides the following benefits: -- Data-access is fully controlled by you by the ability to remove the key and making the database inaccessible -- Full control over the key-lifecycle, including rotation of the key to align with corporate policies -- Central management and organization of keys in Azure Key Vault -- Ability to implement separation of duties between security officers, and DBA and system administrators -- +- You fully control data access by the ability to remove the key and make the database inaccessible +- Full control over the key lifecycle, including rotation of the key to aligning with corporate policies +- Central management and organization of keys in Azure Key Vault +- Ability to implement separation of duties between security officers, DBA, and system administrators +- ## How does data encryption with a customer-managed key work? Managed identities in Azure Active Directory (Azure AD) provide Azure services an alternative to storing credentials in the code by provisioning an automatically assigned identity that can be used to authenticate to any service supporting Azure AD authentication, such as Azure Key Vault (AKV). Azure Database for MySQL Flexible server currently supports only User-assigned Managed Identity (UMI). For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure. -To configure the CMK for an Azure Database for MySQL flexible server, you need to link the UMI to the server and specify the Azure Key vault, and key to use. +To configure the CMK for an Azure Database for MySQL flexible server, you need to link the UMI to the server and specify the Azure Key vault and key to use. 
-The UMI must have the following access to the key vault: +The UMI must have the following access to the key vault: -- **Get**: For retrieving the public part and properties of the key in the key vault. +- **Get**: For retrieving the public part and properties of the key in the key vault. - **List**: List the versions of the key stored in a Key Vault.-- **Wrap Key**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL Flexible server. -- **Unwrap Key**: To be able to decrypt the DEK. Azure Database for MySQL Flexible server needs the decrypted DEK to encrypt/decrypt the data +- **Wrap Key**: To be able to encrypt the DEK. The encrypted DEK is stored in the Azure Database for MySQL Flexible server. +- **Unwrap Key**: To be able to decrypt the DEK. Azure Database for MySQL Flexible server needs the decrypted DEK to encrypt/decrypt the data ### Terminology and description -**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that is encrypting and decrypting a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key. +**Data encryption key (DEK)**: A symmetric AES256 key used to encrypt a partition or block of data. Encrypting each block of data with a different key makes crypto analysis attacks more difficult. Access to DEKs is needed by the resource provider or application instance that encrypts and decrypts a specific block. When you replace a DEK with a new key, only the data in its associated block must be re-encrypted with the new key. -**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. 
The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deletion of the KEK. The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption rest](../../security/fundamentals/encryption-atrest.md). +**Key encryption key (KEK)**: An encryption key used to encrypt the DEKs. A KEK that never leaves Key Vault allows the DEKs themselves to be encrypted and controlled. The entity that has access to the KEK might be different than the entity that requires the DEK. Since the KEK is required to decrypt the DEKs, the KEK is effectively a single point by which DEKs can be effectively deleted by deleting the KEK. The DEKs, encrypted with the KEKs, are stored separately. Only an entity with access to the KEK can decrypt these DEKs. For more information, see [Security in encryption rest](../../security/fundamentals/encryption-atrest.md). ### How it works -Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the data encryption key (DEK) used by the service. The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault instance](../../key-vault/general/security-features.md). Key Vault is highly available and scalable secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). Key Vault doesn't allow direct access to a stored key, but instead provides encryption/decryption services using the key to the authorized entities. The key can be generated by the key vault, imported, or [transferred to the key vault from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md). 
+Data encryption with CMKs is set at the server level. For a given server, a CMK, called the key encryption key (KEK), is used to encrypt the service's data encryption key (DEK). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault instance](../../key-vault/general/security-features.md). Key Vault is highly available and scalable secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). Key Vault doesn't allow direct access to a stored key, but instead provides encryption/decryption services using the key to the authorized entities. The key can be generated by the key vault, imported, or [transferred to the key vault from an on-premises HSM device](../../key-vault/keys/hsm-protected-keys.md). -When you configure a flexible server to use a CMK stored in the key vault, the server sends the DEK to the key vault for encryptions. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the flexible server will send the protected DEK to the key vault for decryption. +When you configure a flexible server to use a CMK stored in the key vault, the server sends the DEK to the key vault for encryption. Key Vault returns the encrypted DEK stored in the user database. Similarly, the flexible server will send the protected DEK to the key vault for decryption when needed. :::image type="content" source="media/concepts-customer-managed-key/mysql-customer-managed-key.jpg" alt-text="Diagram of how data encryption with a customer-managed key work."::: After logging is enabled, auditors can use Azure Monitor to review Key Vault audit event logs. To enable logging of [Key Vault auditing events](../../key-vault/key-vault-insights-overview.md), see Monitoring your key vault service with Key Vault insights. -> [!Note] -> Permission changes can take up to 10 minutes to impact the key vault. 
This includes revoking access permissions to the TDE protector in AKV, and users within this time frame may still have access permissions. +> [!NOTE] +> Permission changes can take up to 10 minutes to impact the key vault. This includes revoking access permissions to the TDE protector in AKV, and users within this time frame may still have access permissions. ## Requirements for configuring data encryption for Azure Database for MySQL Flexible server -Before you attempt to configure Key Vault, be sure to address the following requirements. +Before you attempt to configure Key Vault, be sure to address the following requirements. -- The Key Vault and Azure Database for MySQL flexible server must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and flexible server interactions aren't supported. If you move Key Vault resources after performing the configuration, you'll need to reconfigure data encryption. -- The Key Vault and Azure Database for MySQL flexible server must reside in the same region. -- Enable the [soft-delete](../../key-vault/general/soft-delete-overview.md) feature on the key vault with retention period set to 90 days to protect from data loss should an accidental key (or Key Vault) deletion occur. The recover and purge actions have their own permissions associated in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through the Azure portal or by using PowerShell or the Azure CLI. -- Enable the [Purge Protection](../../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault and set the retention period to 90 days. When purge protection is on, a vault or an object in the deleted state can't be purged until the retention period has passed. You can enable this feature by using PowerShell or the Azure CLI, and only after you've enabled soft-delete. 
+- The Key Vault and Azure Database for MySQL flexible server must belong to the same Azure Active Directory (Azure AD) tenant. Cross-tenant Key Vault and flexible server interactions aren't supported. You'll need to reconfigure data encryption if you move Key Vault resources after performing the configuration. +- The Key Vault and Azure Database for MySQL flexible server must reside in the same region. +- Enable the [soft-delete](../../key-vault/general/soft-delete-overview.md) feature on the key vault with a retention period set to 90 days to protect from data loss should an accidental key (or Key Vault) deletion occur. The recover and purge actions have their own permissions in a Key Vault access policy. The soft-delete feature is off by default, but you can enable it through the Azure portal or by using PowerShell or the Azure CLI. +- Enable the [Purge Protection](../../key-vault/general/soft-delete-overview.md#purge-protection) feature on the key vault and set the retention period to 90 days. When purge protection is on, a vault or an object in the deleted state can't be purged until the retention period has passed. You can enable this feature using PowerShell or the Azure CLI, and only after you've enabled soft-delete. -Before you attempt to configure the CMK, be sure to address the following requirements. - -- The customer-managed key to be used for encrypting the DEK can be only asymmetric, RSA 2048. -- The key activation date (if set) must be a date and time in the past. The expiration date not set. -- The key must be in the **Enabled** state. -- The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) with retention period set to 90 days. This implicitly sets the required key attribute recoveryLevel: "Recoverable". -- The key must have [purge protection enabled](../../key-vault/general/soft-delete-overview.md#purge-protection). +Before you attempt to configure the CMK, be sure to address the following requirements. 
++- The customer-managed key to encrypt the DEK can be only asymmetric, RSA 2048. +- The key activation date (if set) must be a date and time in the past. The expiration date not set. +- The key must be in the **Enabled** state. +- The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) with retention period set to 90 days. This implicitly sets the required key attribute recoveryLevel: "Recoverable." +- The key must have [purge protection enabled](../../key-vault/general/soft-delete-overview.md#purge-protection). - If you're [importing an existing key](/rest/api/keyvault/keys/import-key/import-key?tabs=HTTP) into the key vault, make sure to provide it in the supported file formats (.pfx, .byok, .backup). -> [!Note] +> [!NOTE] > For detailed, step-by-step instructions about how to configure data encryption for an Azure Database for MySQL flexible server via the Azure portal, see [Configure data encryption for MySQL Flexible server](how-to-data-encryption-portal.md). -## Recommendations for configuring data encryption +## Recommendations for configuring data encryption -As you configure Key Vault to use data encryption by using a customer-managed key, keep in mind the following recommendations. +As you configure Key Vault to use data encryption using a customer-managed key, keep in mind the following recommendations. -- Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion. -- Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated. -- Keep a copy of the customer-managed key in a secure place or escrow it to the escrow service. -- If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. 
For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey?view=azps-8.3.0).+- Set a resource lock on Key Vault to control who can delete this critical resource and prevent accidental or unauthorized deletion. +- Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated. +- Keep a copy of the customer-managed key in a secure place or escrow it to the escrow service. +- If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey). ## Inaccessible customer-managed key condition -When you configure data encryption with a CMK in Key Vault, continuous access to this key is required for the server to stay online. If the flexible server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The flexible server issues a corresponding error message and changes the server state to Inaccessible. The server can reach this state for various reasons. +When you configure data encryption with a CMK in Key Vault, continuous access to this key is required for the server to stay online. If the flexible server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The flexible server issues a corresponding error message and changes the server state to Inaccessible. The server can reach this state for various reasons. -- If you delete the KeyVault, the Azure Database for MySQL Flexible server will be unable to access the key, and will move to _Inaccessible_ state. 
Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_. -- If we delete the key from the KeyVault, the Azure Database for MySQL Flexible server will be unable to access the key, and will move to _Inaccessible_ state. Recover the [Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_. +- If you delete the KeyVault, the Azure Database for MySQL Flexible server will be unable to access the key and will move to _Inaccessible_ state. Recover the [Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_. +- If we delete the key from the KeyVault, the Azure Database for MySQL Flexible server will be unable to access the key and will move to _Inaccessible_ state. Recover the [Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the Flexible server _Available_. - If the key stored in the Azure KeyVault expires, the key will become invalid, and the Azure Database for MySQL Flexible server will transition into _Inaccessible_ state. Extend the key expiry date using [CLI](/cli/azure/keyvault/key?view=azure-cli-latest#az-keyvault-key-set-attributes) and then revalidate the data encryption to make the Flexible server _Available_. ## Accidental key access revocation from Key Vault It might happen that someone with sufficient access rights to Key Vault accident To monitor the database state, and to enable alerting for the loss of transparent data encryption protector access, configure the following Azure features: - [Activity log](../../service-health/alerts-activity-log-service-notifications-portal.md): When access to the Customer Key in the customer-managed Key Vault fails, entries are added to the activity log. 
You can reinstate access as soon as possible if you create alerts for these events.-- [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send you notifications and alerts based on your preferences.+- [Action groups](../../azure-monitor/alerts/action-groups.md): Define these groups to send notifications and alerts based on your preferences. ## Replica with a customer managed key in Key Vault -Once Azure Database for MySQL flexible server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. When trying to encrypt Azure Database for MySQL flexible server with a customer managed key that already has a replica(s), we recommend configuring the replica(s) as well by adding the managed identity and key. If the flexible server is configured with geo-redundancy backup, the replica must be configured with the managed identity and key to which the identity has access and which resides in the server's geo-paired region. +Once Azure Database for MySQL flexible server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. When trying to encrypt Azure Database for MySQL flexible server with a customer managed key that already has a replica(s), we recommend configuring the replica(s) by adding the managed identity and key. Suppose the flexible server is configured with geo-redundancy backup. In that case, the replica must be configured with the managed identity and key to which the identity has access and which resides in the server's geo-paired region. ## Restore with a customer managed key in Key Vault -When attempting to restore an Azure Database for MySQL flexible server, you're given the option to select the User managed identity, and Key to encrypt the restore server. 
If the flexible server is configured with geo-redundancy backup, the restore server must be configured with the managed identity and key to which the identity has access and which resides in the server's geo-paired region. +When attempting to restore an Azure Database for MySQL flexible server, you can select the user-managed identity and key to encrypt the restore server. Suppose the flexible server is configured with geo-redundancy backup. In that case, you must configure the restore server with the managed identity and key to which the identity has access and which resides in the server's geo-paired region. -To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers: +To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's essential to follow these steps on the source and restored/replica servers: - Initiate the restore or read replica creation process from the source Azure Database for MySQL Flexible server. - On the restored/replica server, revalidate the customer-managed key in the data encryption settings to ensure that the User managed identity is given _Get, List, Wrap key_ and _Unwrap key_ permissions to the key stored in Key Vault. -> [!Note] +> [!NOTE] > Using the same identity and key as on the source server is not mandatory when performing a restore. ## Next steps+ - [Data encryption with Azure CLI (Preview)](how-to-data-encryption-cli.md) - [Data encryption with Azure portal (Preview)](how-to-data-encryption-portal.md) - [Security in encryption rest](../../security/fundamentals/encryption-atrest.md) |
mysql | Concepts Migrate Dump Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-migrate-dump-restore.md | To optimize performance, take notice of these considerations when dumping large - Use partitioned tables when appropriate. - Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources using the metrics available in the Azure portal. - Use the `defer-table-indexes` option in mysqldump when dumping databases, so that index creation happens after table data is loaded.-- Use the `skip-definer` option in mysqldump to omit definer and SQL SECURITY clauses from the create statements for views and stored procedures. When you reload the dump file, it creates objects that use the default DEFINER and SQL SECURITY values. - Copy the backup files to an Azure blob/store and perform the restore from there, which should be a lot faster than performing the restore across the Internet. ## Create a database on the target Azure Database for MySQL server |
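The "dump, stage in Blob storage, restore from inside Azure" flow above can be sketched as follows. All host, storage account, container, user, and database names are assumed placeholders; the download and restore steps are meant to run on a VM in the same region as the target server:

```shell
# Dump with a consistent snapshot (placeholder host/user/database names).
mysqldump --single-transaction -h onprem-host -u dumpuser -p sourcedb > sourcedb.sql

# Stage the dump in Azure Blob storage so the restore runs inside Azure,
# not across the Internet.
az storage blob upload --account-name contosostorage \
  --container-name dumps --name sourcedb.sql --file sourcedb.sql

# On an Azure VM near the target server: download and restore.
az storage blob download --account-name contosostorage \
  --container-name dumps --name sourcedb.sql --file sourcedb.sql
mysql -h contoso-mysql.mysql.database.azure.com -u admin@contoso-mysql -p targetdb < sourcedb.sql
```

The `admin@contoso-mysql` login format reflects the single-server username convention; adjust it for your server.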
mysql | Select Right Deployment Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/select-right-deployment-type.md | Title: Selecting the right deployment type - Azure Database for MySQL description: This article describes what factors to consider before you deploy Azure Database for MySQL as either infrastructure as a service (IaaS) or platform as a service (PaaS). -+ Last updated 06/20/2022 [!INCLUDE[azure-database-for-mysql-single-server-deprecation](../includes/azure-database-for-mysql-single-server-deprecation.md)] -With Azure, your MySQL server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has two deployment options, and there are service tiers within each deployment option. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, backups, security, monitoring, scaling or if you want to delegate these operations to Azure. +With Azure, your MySQL server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has two deployment options, and there are service tiers within each deployment option. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, backups, security, monitoring, and scaling, or if you're going to delegate these operations to Azure. When making your decision, consider the following two options: -- **Azure Database for MySQL**. This option is a fully managed MySQL database engine based on the stable version of MySQL community edition. This relational database as a service (DBaaS), hosted on the Azure cloud platform, falls into the industry category of PaaS. 
With a managed instance of MySQL on Azure, you can use built-in features viz automated patching, high availability, automated backups, elastic scaling, enterprise grade security, compliance and governance, monitoring and alerting that otherwise require extensive configuration when MySQL Server is either on-premises or in an Azure VM. When using MySQL as a service, you pay-as-you-go, with options to scale up or scale out for greater control with no interruption. [Azure Database for MySQL](overview.md), powered by the MySQL community edition is available in two deployment modes:+- **Azure Database for MySQL**. This option is a fully managed MySQL database engine based on the stable version of MySQL community edition. This relational database as a service (DBaaS), hosted on the Azure cloud platform, falls into the industry category of PaaS. With a managed instance of MySQL on Azure, you can use built-in features viz automated patching, high availability, automated backups, elastic scaling, enterprise grade security, compliance and governance, monitoring and alerting that require extensive configuration when MySQL Server is either on-premises or in an Azure VM. When using MySQL as a service, you pay-as-you-go, with options to scale up or out for greater control without interruption. [Azure Database for MySQL](overview.md), powered by the MySQL community edition, is available in two deployment modes: - - [Flexible Server](../flexible-server/overview.md) - Azure Database for MySQL Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within single availability zone and across multiple availability zones. 
Flexible servers provides better cost optimization controls with the ability to stop/start server and burstable compute tier, ideal for workloads that do not need full compute capacity continuously. Flexible Server also supports reserved instances allowing you to save up to 63% cost, ideal for production workloads with predictable compute capacity requirements. The service supports community version of MySQL 5.7 and 8.0. The service is generally available today in wide variety of [Azure regions](../flexible-server/overview.md#azure-regions). Flexible servers are best suited for all new developments and migration of production workloads to Azure Database for MySQL service. + - [Flexible Server](../flexible-server/overview.md) - Azure Database for MySQL Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers provide better cost optimization controls with the ability to stop/start the server and burstable compute tier, ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances allowing you to save up to 63% cost, ideal for production workloads with predictable compute capacity requirements. The service supports the community version of MySQL 5.7 and 8.0. The service is generally available today in various [Azure regions](../flexible-server/overview.md#azure-regions). Flexible servers are best suited for all new developments and migration of production workloads to Azure Database for MySQL service. - - [Single Server](single-server-overview.md) is a fully managed database service designed for minimal customization. 
The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability on single availability zone. It supports community version of MySQL 5.6 (retired), 5.7 and 8.0. The service is generally available today in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single servers are best suited **only for existing applications already leveraging single server**. For all new developments or migrations, Flexible Server would be the recommended deployment option. + - [Single Server](single-server-overview.md) is a fully managed database service designed for minimal customization. The single server platform is designed to handle most database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community version of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in various [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single servers are best-suited **only for existing applications already leveraging single server**. A Flexible Server would be the recommended deployment option for all new developments or migrations. - **MySQL on Azure VMs**. This option falls into the industry category of IaaS. With this service, you can run MySQL Server inside a managed virtual machine on the Azure cloud platform. All recent versions and editions of MySQL can be installed in the virtual machine. 
When making your decision, consider the following two options: The main differences between these options are listed in the following table: -| Attribute | Azure Database for MySQL<br/>Single Server |Azure Database for MySQL<br/>Flexible Server |MySQL on Azure VMs | -|:-|:-|:|:| -| [**General**](../flexible-server/overview.md) | | | | -| General availability | Generally Available | Generally Available | Generally Available | +| Attribute | Azure Database for MySQL<br/>Single Server |Azure Database for MySQL<br/>Flexible Server | MySQL on Azure VMs | +|:-|:-|:|:-| +| [**General**](../flexible-server/overview.md) | | | | +| General availability | Generally available | Generally available | Generally available | | Service-level agreement (SLA) | 99.99% availability SLA |99.99% using Availability Zones| 99.99% using Availability Zones| | Underlying O/S | Windows | Linux | User Managed | | MySQL Edition | Community Edition | Community Edition | Community or Enterprise Edition | The main differences between these options are listed in the following table: | IOPs scaling | Not Supported| Supported| Not Supported| | [**Cost Optimization**](https://azure.microsoft.com/pricing/details/mysql/flexible-server/) | | | | | Reserved Instance Pricing | Supported | Supported | Supported |-| Stop/Start Server for development | Server can be stopped up to 7 days | Server can be stopped up to 30 days | Supported | +| Stop/Start Server for development | Server can be stopped up to seven days | Server can be stopped up to 30 days | Supported | | Low cost Burstable SKU | Not Supported | Supported | Supported | | [**Networking/Security**](concepts-security.md) | | | | | Network Connectivity | - Public endpoints with server firewall.<br/> - Private access with Private Link support.|- Public endpoints with server firewall.<br/> - Private access with Virtual Network integration.| - Public endpoints with server firewall.<br/> - Private access with Private Link support.| | SSL/TLS | Enabled by 
default with support for TLS v1.2, 1.1 and 1.0 | Enabled by default with support for TLS v1.2, 1.1 and 1.0| Supported with TLS v1.2, 1.1 and 1.0 | | Data Encryption at rest | Supported with customer managed keys (BYOK) | Supported with service managed keys | Not Supported|-| Azure AD Authentication | Supported | Not Supported | Not Supported| +| Azure AD Authentication | Supported | Supported | Not Supported| | Microsoft Defender for Cloud support | Yes | No | No | | Server Audit | Supported | Supported | User Managed | | [**Patching & Maintenance**](../flexible-server/concepts-maintenance.md) | | | The main differences between these options are listed in the following table: | MySQL minor version upgrade | Automatic | Automatic | User managed | | MySQL in-place major version upgrade | Supported from 5.6 to 5.7 | Not Supported | User Managed | | Maintenance control | System managed | Customer managed | User managed |-| Maintenance window | Anytime within 15 hrs window | 1hr window | User managed | -| Planned maintenance notification | 3 days | 5 days | User managed | +| Maintenance window | Anytime within 15-hrs window | 1 hr window | User managed | +| Planned maintenance notification | Three days | Five days | User managed | | [**High Availability**](../flexible-server/concepts-high-availability.md) | | | | | High availability | Built-in HA (without hot standby)| Built-in HA (without hot standby), Same-zone and zone-redundant HA with hot standby | User managed | | Zone redundancy | Not supported | Supported | Supported| The main differences between these options are listed in the following table: ## Business motivations for choosing PaaS or IaaS -There are several factors that can influence your decision to choose PaaS or IaaS to host your MySQL databases. +Several factors can influence whether you choose PaaS or IaaS to host your MySQL databases. 
### Cost -Cost reduction is often the primary consideration that determines the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for MySQL and MySQL on Azure VMs. +Cost reduction is often the primary consideration in determining the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for MySQL and MySQL on Azure VMs. #### Billing Azure Database for MySQL is currently available as a service in several tiers with different prices for resources. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see [pricing page](https://azure.microsoft.com/pricing/details/mysql/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/). -With Azure Database for MySQL, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Also, Azure Database for MySQL has [automated backups](./concepts-backup.md) capabilities. These capabilities help you achieve significant cost savings, especially when you have a large number of databases. In contrast, with MySQL on Azure VMs you can choose and run any MySQL version. 
No matter what MySQL version you use, you pay for the provisioned VM, storage cost associated with the data, backup, monitoring data and log storage and the costs for the specific MySQL license type used (if any). +With Azure Database for MySQL, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Also, Azure Database for MySQL has [automated backups](./concepts-backup.md) capabilities. These capabilities help you achieve significant cost savings, especially when you have many databases. In contrast, with MySQL on Azure VMs, you can choose and run any MySQL version. No matter what MySQL version you use, you pay for the provisioned VM, storage cost associated with the data, backup, monitoring data, and log storage, and the costs for the specific MySQL license type used (if any). -Azure Database for MySQL provides built-in high availability for any kind of node-level interruption while still maintaining the 99.99% SLA guarantee for the service. However, for database high availability within VMs, you use the high availability options like [MySQL replication](https://dev.mysql.com/doc/refman/8.0/en/replication.html) that are available on a MySQL database. Using a supported high availability option doesn't provide an additional SLA. But it does let you achieve greater than 99.99% database availability at additional cost and administrative overhead. +Azure Database for MySQL provides built-in high availability for any node-level interruption while maintaining the service's 99.99% SLA guarantee. However, for database high availability within VMs, you use the high availability options like [MySQL replication](https://dev.mysql.com/doc/refman/8.0/en/replication.html) that are available on a MySQL database. Using a supported high availability option doesn't provide an additional SLA. 
But it lets you achieve more than 99.99% database availability at additional cost and administrative overhead. For more information on pricing, see the following articles: For more information on pricing, see the following articles: ### Administration -For many businesses, the decision to transition to a cloud service is as much about offloading complexity of administration as it is about cost. +For many businesses, the decision to transition to a cloud service is as much about offloading the complexity of administration as it is about cost. With IaaS, Microsoft: With PaaS, Microsoft: The following list describes administrative considerations for each option: -- With Azure Database for MySQL, you can continue to administer your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include:+- With Azure Database for MySQL, you can continue administering your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include: - Databases - Sign-in The following list describes administrative considerations for each option: Additionally, configuring high availability to another data center requires minimal to no configuration or administration. -- With MySQL on Azure VMs, you have full control over the operating system and the MySQL server instance configuration. With a VM, you decide when to update or upgrade the operating system and database software and what patches to apply. You also decide when to install any additional software such as an antivirus application. Some automated features are provided to greatly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. 
For more information, see [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md).+- With MySQL on Azure VMs, you can control the operating system and the MySQL server instance configuration. You decide when to update or upgrade the operating system and database software with a VM and what patches to apply. You also choose when to install any additional software such as an antivirus application. Some automated features are provided to significantly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. For more information, see [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md). ### Time to move to Azure -- Azure Database for MySQL is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With programmatic functionality that is like DBA, the service is suitable for cloud architects and developers because it lowers the need for managing the underlying operating system and database.+- Azure Database for MySQL is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With programmatic functionality similar to that of a DBA, the service is suitable for cloud architects and developers because it lowers the need for managing the underlying operating system and database. -- When you want to avoid the time and expense of acquiring new on-premises hardware, MySQL on Azure VMs is the right solution for applications that require a granular control and customization of MySQL engine not supported by the service or requiring access of the underlying OS.
This solution is also suitable for migrating existing on-premises applications and databases to Azure intact, for cases where Azure Database for MySQL is a poor fit.+- When you want to avoid the time and expense of acquiring new on-premises hardware, MySQL on Azure VMs is the right solution for applications that require granular control and customization of MySQL engine not supported by the service or requiring access to the underlying OS. This solution is also suitable for migrating existing on-premises applications and databases to Azure intact for cases where Azure Database for MySQL is a poor fit. Because there's no need to change the presentation, application, and data layers, you save time and budget on rearchitecting your existing solution. Instead, you can focus on migrating all your solutions to Azure and addressing some performance optimizations that the Azure platform might require. |
openshift | Howto Gpu Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-gpu-workloads.md | All GPU quotas in Azure are 0 by default. You will need to sign in to the Azure ARO supports the following GPU workers: * NC4as T4 v3+* NC6s v3 * NC8as T4 v3+* NC12s v3 * NC16as T4 v3+* NC24s v3 +* NC24rs v3 * NC64as T4 v3 > [!NOTE] |
openshift | Howto Secure Openshift With Front Door | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-secure-openshift-with-front-door.md | This section explains how to register a domain in Azure DNS. 3. Note the four nameservers that are present in Azure DNS for apps.example.com. -4. Create a new **NS** record set in the example.com zone that points to **app** and specify the four nameservers that were present when the **apps** zone was created. +4. Create a new **NS** record set in the example.com zone that points to **apps** and specify the four nameservers that were present when the **apps** zone was created. ## Create a new Azure Front Door Premium service |
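The delegation in steps 3-4 (noting the four name servers of the `apps` child zone, then creating the **NS** record set in the parent zone) can also be done with the Azure CLI. A sketch with assumed names (`dns-rg`, `example.com`, `apps.example.com`) and one example name server value:

```shell
# Create the child zone and read back its four Azure DNS name servers.
az network dns zone create --resource-group dns-rg --name apps.example.com
az network dns zone show --resource-group dns-rg --name apps.example.com \
  --query nameServers --output tsv

# Delegate: add an NS record set named "apps" in the parent zone.
# Repeat --nsdname for each of the four name servers returned above
# (ns1-01.azure-dns.com. is only an illustrative value).
az network dns record-set ns add-record --resource-group dns-rg \
  --zone-name example.com --record-set-name apps \
  --nsdname ns1-01.azure-dns.com.
```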
openshift | Support Policies V4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md | Azure Red Hat OpenShift 4 supports node instances on the following virtual machi |Series|Size|vCPU|Memory: GiB| |-|-|-|-| |NC4asT4v3|Standard_NC4as_T4_v3|4|28|+|NC6sV3|Standard_NC6s_v3|6|112| |NC8asT4v3|Standard_NC8as_T4_v3|8|56|+|NC12sV3|Standard_NC12s_v3|12|224| |NC16asT4v3|Standard_NC16as_T4_v3|16|110|+|NC24sV3|Standard_NC24s_v3|24|448| +|NC24rsV3|Standard_NC24rs_v3|24|448| |NC64asT4v3|Standard_NC64as_T4_v3|64|440| ### Memory and storage optimized |
postgresql | Concepts Data Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md | The DEKs, encrypted with the KEKs, are stored separately. Only an entity with ac :::image type="content" source="./media/concepts-data-encryption/postgresql-data-encryption-overview.png" alt-text ="Diagram that shows an overview of Bring Your Own Key." ::: +Azure Active Directory [user- assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) will be used to connect and retrieve customer-managed key. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity. + For a PostgreSQL server to use customer-managed keys stored in Key Vault for encryption of the DEK, a Key Vault administrator gives the following access rights to the server: - **get**: For retrieving, the public part and properties of the key in the key Vault. |
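Creating the user-assigned managed identity mentioned above, and capturing the principal ID that the Key Vault access rights are granted to, can be sketched with the Azure CLI; the resource group and identity names (`contoso-rg`, `contoso-pg-umi`) are hypothetical:

```shell
# Create the user-assigned managed identity the PostgreSQL server will use
# to reach Key Vault (placeholder names).
az identity create --resource-group contoso-rg --name contoso-pg-umi

# Capture the principal ID, which the Key Vault administrator grants the
# key access rights (get, etc.) to.
PRINCIPAL_ID=$(az identity show --resource-group contoso-rg \
  --name contoso-pg-umi --query principalId --output tsv)
```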
private-5g-core | Azure Private 5G Core Release Notes 2209 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2209.md | The following release notes identify the new features, critical open issues, and This article applies to the Azure Private 5G Core 2209 release (PMN-4-17-2). This release is compatible with the Azure Stack Edge Pro GPU running the 2209 release and is supported by the 2022-04-01-preview [Microsoft.MobileNetwork API version](/rest/api/mobilenetwork). +## What's new ++- **Updated template for Log Analytics** - There is a new version of the Log Analytics Dashboard Quickstart template. This is required to view metrics on Packet Core versions 4.17 and above. To continue using your Log Analytics Dashboard, you must redeploy it with the new template. See [Create an overview Log Analytics dashboard using an ARM template](/azure/private-5g-core/create-overview-dashboard). + ## Issues fixed in the 2209 release The following table provides a summary of issues fixed in this release. |
purview | Troubleshoot Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-connections.md | There are specific instructions for each [source type](azure-purview-connector-o ### Registering single Azure data source -To register a single data source in Microsoft Purview, such as an Azure Blog Storage or an Azure SQL Database, you must be granted at least **Reader** role on the resource or inherited from higher scope such as resource group or subscription. Some Azure RBAC roles, such as Security Admin, don't have read access to view Azure resources in control plane. +To register a single data source in Microsoft Purview, such as an Azure Blob Storage or an Azure SQL Database, you must be granted at least **Reader** role on the resource or inherited from higher scope such as resource group or subscription. Some Azure RBAC roles, such as Security Admin, don't have read access to view Azure resources in control plane. Verify this by following the steps below: |
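One quick way to check the **Reader** requirement above, including roles inherited from a resource group or subscription, is the Azure CLI; the assignee and scope below are placeholders, not values from the article:

```shell
# Placeholder values -- use the identity registering the source and the
# resource ID of the data source (e.g., a storage account or SQL server).
ASSIGNEE="user@contoso.com"
SCOPE="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

# Lists role names held at the scope; "Reader" (or a broader role that
# includes read) should appear. Roles like Security Admin will not suffice.
az role assignment list --assignee "$ASSIGNEE" --scope "$SCOPE" \
  --include-inherited --query "[].roleDefinitionName" --output tsv
```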
purview | Troubleshoot Policy Distribution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-policy-distribution.md | Title: Troubleshoot distribution of Microsoft Purview access policies -description: Learn how to troubleshoot the communication of access policies that were created in Microsoft Purview and need to be enforced in data sources +description: Learn how to troubleshoot the communication of access policies that were created in Microsoft Purview and need to be enforced in data sources. -# Tutorial: troubleshoot distribution of Microsoft Purview access policies (preview) +# Tutorial: Troubleshoot distribution of Microsoft Purview access policies (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)] -In this tutorial, learn how to programmatically fetch access policies that were created in Microsoft Purview. With this you can troubleshoot the communication of policies between Microsoft Purview, where policies are created and updated, and the data sources, where these policies need to be enforced. +In this tutorial, you learn how to programmatically fetch access policies that were created in Microsoft Purview. By doing so, you can troubleshoot the communication of policies between Microsoft Purview, where policies are created and updated, and the data sources, where these policies need to be enforced. -To get the necessary context about Microsoft Purview policies, see concept guides listed in [next-steps](#next-steps). +For more information about Microsoft Purview policies, see the concept guides listed in the [Next steps](#next-steps) section. -This guide will use examples for Azure SQL Server as data source. +This guide uses examples from SQL Server as data sources. ## Prerequisites -* If you don't have an Azure subscription, [create a free one](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. 
-* You must have an existing Microsoft Purview account. If you don't have one, see the [quickstart for creating a Microsoft Purview account](create-catalog-portal.md). -* Register a data source, enable *Data use management*, and create a policy. To do so, follow one of the Microsoft Purview policies guides. To follow along the examples in this tutorial you can [create a DevOps policy for Azure SQL Database](how-to-policies-devops-azure-sql-db.md) -* To establish a bearer token and to call any data plane APIs, see [the documentation about how to call REST APIs for Microsoft Purview data planes](tutorial-using-rest-apis.md). In order to be authorized to fetch policies, you need to be Policy Author, Data Source Admin or Data Curator at root-collection level in Microsoft Purview. You can assign those roles by following this guide: [managing Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users). +* An Azure subscription. If you don't already have one, [create a free subscription](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). +* A Microsoft Purview account. If you don't have one, see the [quickstart for creating a Microsoft Purview account](create-catalog-portal.md). +* Register a data source, enable *Data use management*, and create a policy. To do so, use one of the Microsoft Purview policy guides. To follow along with the examples in this tutorial, you can [create a DevOps policy for Azure SQL Database](how-to-policies-devops-azure-sql-db.md). +* Establish a bearer token and call data plane APIs. To learn how, see [how to call REST APIs for Microsoft Purview data planes](tutorial-using-rest-apis.md). To be authorized to fetch policies, you need to be a Policy Author, Data Source Admin, or Data Curator at the root-collection level in Microsoft Purview. 
To assign those roles, see [Manage Microsoft Purview role assignments](catalog-permissions.md#assign-permissions-to-your-users). ## Overview -There are two ways to fetch access policies from Microsoft Purview -- Full pull: Provides a complete set of policies for a particular data resource scope.-- Delta pull: Provides an incremental view of policies, that is, what changed since the last pull request, regardless of whether the last pull was a full or a delta one. A full pull is required prior to issuing the first delta pull. -Microsoft Purview policy model is described using [JSON syntax](https://datatracker.ietf.org/doc/html/rfc8259) +You can fetch access policies from Microsoft Purview via either a *full pull* or a *delta pull*, as described in the following sections. -The policy distribution endpoint can be constructed from the Microsoft Purview account name as: -`{endpoint} = https://<account-name>.purview.azure.com/pds` +The Microsoft Purview policy model is written in [JSON syntax](https://datatracker.ietf.org/doc/html/rfc8259). ++You can construct the policy distribution endpoint from the Microsoft Purview account name as +`{endpoint} = https://<account-name>.purview.azure.com/pds`. ## Full pull +Full pull provides a complete set of policies for a particular data resource scope. + ### Request-To fetch policies for a data source via full pull, send a `GET` request to /policyElements as follows: ++To fetch policies for a data source via full pull, send a `GET` request to `/policyElements`, as follows: ``` GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyelements?api-version={apiVersion} ``` -where the path /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName} matches the resource ID for the data source.
+where the path `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}` matches the resource ID for the data source. >[!Tip]-> The resource ID can be found under the properties for the data source in Azure portal. +> The resource ID can be found under the properties for the data source in the Azure portal. ### Response status codes -|Http Code|Http Code Description|Type|Description|Response| +|HTTP code|HTTP code description|Type|Description|Response| |||-|--|--|-|200|Success|Success|Request processed successfully|Policy data| -|401|Unauthenticated|Error|No bearer token passed in request or invalid token|Error data| +|200|Success|Success|The request was processed successfully|Policy data| +|401|Unauthenticated|Error|No bearer token was passed in the request, or invalid token|Error data| |403|Forbidden|Error|Other authentication errors|Error data| |404|Not found|Error|The request path is invalid or not registered|Error data|-|500|Internal server error|Error|Backend service unavailable|Error data| -|503|Backend service unavailable|Error|Backend service unavailable|Error data| +|500|Internal server error|Error|The back-end service is unavailable|Error data| +|503|Backend service unavailable|Error|The back-end service is unavailable|Error data| -### Example for Azure SQL Server (Azure SQL Database) +### Example for SQL Server (Azure SQL Database) -##### Example parameters: +**Example parameters**: - Microsoft Purview account: relecloud-pv - Data source Resource ID: /subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1 -##### Example request: +**Example request**: + ``` GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyElements?api-version=2021-01-01-preview ```--##### Example response: 
+**Example response**: `200 OK` GET https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD- ## Delta pull +A delta pull provides an incremental view of policies (that is, the changes since the last pull request), regardless of whether the last pull was a full or a delta pull. A full pull is required prior to issuing the first delta pull. + ### Request-To fetch policies via delta pull, send a `GET` request to /policyEvents as follows: ++To fetch policies via delta pull, send a `GET` request to `/policyEvents`, as follows: ``` GET {{endpoint}}/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProvider}/{resourceType}/{resourceName}/policyEvents?api-version={apiVersion}&syncToken={syncToken} Provide the syncToken you got from the prior pull in any successive delta pulls. ### Response status codes -|Http Code|Http Code Description|Type|Description|Response| +|HTTP code|HTTP code description|Type|Description|Response| |||-|--|--|-|200|Success|Success|Request processed successfully|Policy data| -|304|Not modified|Success|No events received since last delta pull call|None| -|401|Unauthenticated|Error|No bearer token passed in request or invalid token|Error data| +|200|Success|Success|The request was processed successfully|Policy data| +|304|Not modified|Success|No events were received since the last delta pull call|None| +|401|Unauthenticated|Error|No bearer token was passed in the request, or invalid token|Error data| |403|Forbidden|Error|Other authentication errors|Error data| |404|Not found|Error|The request path is invalid or not registered|Error data|-|500|Internal server error|Error|Backend service unavailable|Error data| -|503|Backend service unavailable|Error|Backend service unavailable|Error data| +|500|Internal server error|Error| The back-end service is unavailable|Error data| +|503|Backend service unavailable|Error| The back-end service is unavailable|Error data| -### Example for Azure SQL Server (Azure 
SQL Database) +### Examples for SQL Server (Azure SQL Database) -##### Example parameters: -- Microsoft Purview account: relecloud-pv-- Data source Resource ID: /subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1+**Example parameters**: +- Microsoft Purview account: `relecloud-pv` +- Data source resource ID: `/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1` - syncToken: 820:0 -##### Example request: +**Example request**: ``` https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg/providers/Microsoft.Sql/servers/relecloud-sql-srv1/policyEvents?api-version=2021-01-01-preview&syncToken=820:0 ``` -##### Example response: +**Example response**: `200 OK` https://relecloud-pv.purview.azure.com/pds/subscriptions/BB345678-abcd-ABCD-0000 } ``` -In this example, the delta pull communicates the event that the policy on the resource group marketing-rg, which had the scope ```"scopes": ["/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"]``` was deleted, per the ```"eventType": "Microsoft.Purview/PolicyElements/Delete"```. +In this example, the delta pull communicates the event that the policy on the resource group *marketing-rg*, which has the scope ```"scopes": ["/subscriptions/BB345678-abcd-ABCD-0000-bbbbffff9012/resourceGroups/marketing-rg"]``` was deleted, per the ```"eventType": "Microsoft.Purview/PolicyElements/Delete"```. ## Policy constructs-There are 3 top-level policy constructs used within the responses to the full pull (/policyElements) and delta pull (/policyEvents) requests: Policy, PolicySet and AttributeRule. +Three top-level policy constructs are used within the responses to the full pull (`/policyElements`) and delta pull (`/policyEvents`) requests: `Policy`, `PolicySet`, and `AttributeRule`. 
### Policy -Policy specifies the decision the data source must enforce (permit vs. deny) when an Azure AD principal attempts an access via a client, provided request context attributes satisfy attribute predicates specified in the policy (for example scope, requested action, etc.). Evaluation of the Policy triggers evaluation of AttributeRules referenced in the Policy. +`Policy` specifies the decision that the data source must enforce (*permit* or *deny*) when an Azure AD principal attempts access via a client, provided that the request context attributes satisfy the attribute predicates, as specified in the policy (for example: *scope*, *requested action*, and so on). An evaluation of the policy triggers an evaluation of `AttributeRules`, as referenced in the policy. -|member|value|type|cardinality|description| +|Member|Value|Type|Cardinality|Description| ||--|-|--|--| |ID| |string|1|| |name| |string|1|| |kind| |string|1|| |version|1|number|1||-|updatedAt| |string|1|String representation of time in yyyy-MM-ddTHH:mm:ss.fffffffZ Ex: "2022-01-11T09:55:52.6472858Z"| +|updatedAt| |string|1| A string representation of time, in the format yyyy-MM-ddTHH:mm:ss.fffffffZ (for example: "2022-01-11T09:55:52.6472858Z")| |preconditionRules| |array[Object:Rule]|0..1|All the rules are 'anded'| |decisionRules| |array[Object:DecisionRule]|1|| ### PolicySet -PolicySet associates an array of Policy IDs to a resource scope where they need to be enforced. +`PolicySet` associates an array of policy IDs with a resource scope, where they need to be enforced. 
-|member|value|type|cardinality|description| +|Member|Value|Type|Cardinality|Description| ||--|-|--|--| |ID| |string|1|| |name| |string|1|| |kind| |string|1|| |version|1|number|1||-|updatedAt| |string|1|String representation of time in yyyy-MM-ddTHH:mm:ss.fffffffZ Ex: "2022-01-11T09:55:52.6472858Z"| +|updatedAt| |string|1| A string representation of time in the format yyyy-MM-ddTHH:mm:ss.fffffffZ (for example: "2022-01-11T09:55:52.6472858Z")| |preconditionRules| |array[Object:Rule]|0..1||-|policyRefs| |array[string]|1|List of policy IDs| +|policyRefs| |array[string]|1|A list of policy IDs| ### AttributeRule -AttributeRule produces derived attributes and add them to request context attributes. Evaluation of AttributeRule triggers evaluation of additional AttributeRules referenced in the AttributeRule. +`AttributeRule` produces derived attributes and adds them to the request context attributes. An evaluation of `AttributeRule` triggers an evaluation of additional `AttributeRules`, as referenced in `AttributeRule`. -|member|value|type|cardinality|description| +|Member|Value|Type|Cardinality|Description| ||--|-|--|--| |ID| |string|1|| |name| |string|1|| AttributeRule produces derived attributes and add them to request context attrib |condition| |Object: Condition|0..1|| |derivedAttributes| |array[Object:DerivedAttribute]|1|| -## Common sub-constructs used in PolicySet, Policy, AttributeRule --#### AttributePredicate -AttributePredicate checks whether predicate specified on an attribute is satisfied. AttributePredicate can specify the following properties: -- attributeName: specifies attribute name on which attribute predicate needs to be evaluated.-- matcherId: ID of matcher function that is used to compare the attribute value looked up in request context by the attribute name to the attribute value literal specified in the predicate. At present we support 2 matcherId(s): ExactMatcher, GlobMatcher. 
If matcherId isn't specified, it defaults to GlobMatcher.-- fromRule: optional property specifying the ID of an AttributeRule that needs to be evaluated to populate the request context with attribute values that would be compared in this predicate.-- attributeValueIncludes: scalar literal value that should match the request context attribute values.-- attributeValueIncludedIn: array of literal values that should match the request context attribute values.-- attributeValueExcluded: scalar literal value that should not match the request context attribute values.-- attributeValueExcludedIn: array of literal values that should not match the request context attribute values.--#### CNFCondition -Array of array of AttributePredicates that have to be satisfied with the semantic of ANDofORs. --#### DNFCondition -Array of array of AttributePredicates that have to be satisfied with the semantic of ORofANDs. --#### PreConditionRule -- A PreConditionRule can specify at most one each of CNFCondition, DNFConition, Condition.-- All of the specified CNFCondition, DNFCondition, Condition should evaluate to “true” for the PreConditionRule to be satisfied for the current request.-- If any of the precondition rules is not satisfied, containing PolicySet or Policy is considered not applicable for the current request and skipped.--#### Condition -- A Condition allows specifying a complex condition of predicates that can nest functions from library of functions.-- At decision compute time the Condition evaluates to “true” or “false” and also could emit optional Obligation(s).-- If the Condition evaluates to “false” the containing DecisionRule is considered Not Applicable to the current request.+## Common subconstructs used in PolicySet, Policy, and AttributeRule ++### AttributePredicate +`AttributePredicate` checks to see whether the predicate that's specified on an attribute is satisfied.
`AttributePredicate` can specify the following properties: +- `attributeName`: Specifies the attribute name on which an attribute predicate needs to be evaluated. +- `matcherId`: The ID of a matcher function that's used to compare the attribute value that's looked up in the request context by attribute name to the attribute value literal that's specified in the predicate. At present, we support two `matcherId` values: `ExactMatcher` and `GlobMatcher`. If `matcherId` isn't specified, it defaults to `GlobMatcher`. +- `fromRule`: An optional property that specifies the ID of `AttributeRule` that needs to be evaluated to populate the request context with attribute values that would be compared in this predicate. +- `attributeValueIncludes`: A scalar literal value that should match the request context attribute values. +- `attributeValueIncludedIn`: An array of literal values that should match the request context attribute values. +- `attributeValueExcluded`: A scalar literal value that should *not* match the request context attribute values. +- `attributeValueExcludedIn`: An array of literal values that should *not* match the request context attribute values. ++### CNFCondition +An array of `AttributePredicates` that have to be satisfied with the semantics of ANDofORs. ++### DNFCondition +An array of `AttributePredicates` that have to be satisfied with the semantics of ORofANDs. ++### PreConditionRule +- A `PreConditionRule` can specify at most one each of `CNFCondition`, `DNFCondition`, or `Condition`. +- All of the specified `CNFCondition`, `DNFCondition`, and `Condition` should evaluate to `true` for `PreConditionRule` to be satisfied for the current request. +- If any of the precondition rules isn't satisfied, `PolicySet` or `Policy` is considered not applicable for the current request and skipped. ++### Condition +- `condition` allows you to specify a complex condition of predicates that can nest functions from a library of functions. 
+- At `decision compute time`, `condition` evaluates to `true` or `false` and also could emit optional obligations. +- If `condition` evaluates to `false`, the containing `DecisionRule` is considered not applicable to the current request. ## Next steps |
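The `AttributePredicate` and `CNFCondition` semantics described above can be sketched in Python. This is an illustrative sketch only: the function names are hypothetical, and the assumption that `GlobMatcher` performs glob-style wildcard matching is inferred from its name rather than stated in the source.

```python
from fnmatch import fnmatch

def predicate_satisfied(predicate, context):
    """Evaluate a single AttributePredicate against request-context attributes.

    `predicate` uses the member names from the article (attributeName,
    matcherId, attributeValueIncludes, ...); `context` maps attribute names
    to lists of values. Hypothetical helper, not Purview code.
    """
    values = context.get(predicate["attributeName"], [])
    matcher = predicate.get("matcherId", "GlobMatcher")  # defaults to GlobMatcher

    def matches(literal, value):
        if matcher == "ExactMatcher":
            return value == literal
        # Assumed: GlobMatcher does glob-style wildcard matching.
        return fnmatch(value, literal)

    if "attributeValueIncludes" in predicate:
        return any(matches(predicate["attributeValueIncludes"], v) for v in values)
    if "attributeValueIncludedIn" in predicate:
        return any(matches(lit, v)
                   for lit in predicate["attributeValueIncludedIn"] for v in values)
    if "attributeValueExcluded" in predicate:
        return not any(matches(predicate["attributeValueExcluded"], v) for v in values)
    if "attributeValueExcludedIn" in predicate:
        return not any(matches(lit, v)
                       for lit in predicate["attributeValueExcludedIn"] for v in values)
    return False

def cnf_satisfied(cnf, context):
    """CNFCondition (AND of ORs): every inner clause needs one satisfied predicate."""
    return all(any(predicate_satisfied(p, context) for p in clause) for clause in cnf)
```

For example, under the assumed glob semantics, a context value `"marketing"` satisfies a predicate with `attributeValueIncludes` set to `"market*"`.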
search | Search Howto Index Mysql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md | When configured to include a high water mark and soft deletion, the indexer take - [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter. -- [Azure Database for MySQL single server](../mysql/single-server-overview.md).+- [Azure Database for MySQL flexible server](../mysql/flexible-server/overview.md). - A table or view that provides the content. A primary key is required. If you're using a view, it must have a [high water mark column](#DataChangeDetectionPolicy). When configured to include a high water mark and soft deletion, the indexer take You can also use the [Azure SDK for .NET](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql). You can't use the portal for indexer creation, but you can manage indexers and data sources once they're created. -For more information, see [Azure Database for MySQL](../mysql/overview.md). +For more information, see [Azure Database for MySQL](../mysql/flexible-server/overview.md). ## Preview limitations |
sentinel | Connect Cef Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md | Select the machines on which you want to install the AMA. These machines are VMs > [!NOTE] > **Using the same machine to forward both plain Syslog *and* CEF messages** >-> If you plan to use this log forwarder machine to forward Syslog messages as well as CEF, then in order to avoid the duplication of events to the Syslog and CommonSecurityLog tables: +> If you plan to use this log forwarder machine to forward Syslog messages as well as CEF, in order to avoid the duplication of events to the Syslog and CommonSecurityLog tables: >-> 1. On each source machine that sends logs to the forwarder in CEF format, you must edit the Syslog configuration file to remove the facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also be sent in Syslog. See [Configure Syslog on Linux agent](../azure-monitor/agents/data-sources-syslog.md#configure-syslog-on-linux-agent) for detailed instructions on how to do this. -> -> 1. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in Microsoft Sentinel. This ensures that the configuration change you made in the previous step does not get overwritten.<br> -> `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'` +> On each source machine that sends logs to the forwarder in CEF format, you must edit the Syslog configuration file to remove the facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also be sent in Syslog. 1. Select the **Collect** tab and select **Linux syslog** as the data source type. 1. Configure the minimum log level for each facility. When you select a log level, Microsoft Sentinel collects logs for the selected level and other levels with lower severity. 
For example, if you select **LOG_ERR**, Microsoft Sentinel collects logs for the **LOG_ERR**, **LOG_WARNING**, **LOG_NOTICE**, **LOG_INFO**, and **LOG_DEBUG** levels. |
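The log-level behavior described for the Linux syslog data source (collecting the selected level plus all levels of lower severity) can be sketched as follows, using the standard syslog severity ordering. This is an illustration, not Microsoft Sentinel code:

```python
# Standard syslog severities, ordered from highest to lowest severity.
SYSLOG_LEVELS = ["LOG_EMERG", "LOG_ALERT", "LOG_CRIT", "LOG_ERR",
                 "LOG_WARNING", "LOG_NOTICE", "LOG_INFO", "LOG_DEBUG"]

def collected_levels(selected):
    """Return the levels collected when `selected` is the chosen minimum:
    the selected level plus every level of lower severity."""
    return SYSLOG_LEVELS[SYSLOG_LEVELS.index(selected):]
```

With `"LOG_ERR"` selected, this yields LOG_ERR, LOG_WARNING, LOG_NOTICE, LOG_INFO, and LOG_DEBUG, matching the example above.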
sentinel | Extend Sentinel Across Workspaces Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/extend-sentinel-across-workspaces-tenants.md | You can then write a query across both workspaces by beginning with `unionSecuri <!-- Bookmark added for backward compatibility with old heading --> You can now include cross-workspace queries in scheduled analytics rules. You can use cross-workspace analytics rules in a central SOC, and across tenants (using Azure Lighthouse), suitable for MSSPs. Note these limitations: -- You can include **up to 20 workspaces** in a single query.+- You can include **up to 100 workspaces** in a single query. - You must deploy Microsoft Sentinel **on every workspace** referenced in the query. - Alerts generated by a cross-workspace analytics rule, and the incidents created from them, exist **only in the workspace where the rule was defined**. The alerts won't be displayed in any of the other workspaces referenced in the query. |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | The listed features were released in the last three months. For information abou ## November 2022 +- [Common Event Format (CEF) via AMA (Preview)](#common-event-format-cef-via-ama-preview) - [Monitor the health of automation rules and playbooks](#monitor-the-health-of-automation-rules-and-playbooks) - [Updated Microsoft Sentinel Logstash plugin](#updated-microsoft-sentinel-logstash-plugin) +### Common Event Format (CEF) via AMA (Preview) ++The [Common Event Format (CEF) via AMA](connect-cef-ama.md) connector allows you to quickly filter and upload logs over CEF from multiple on-premises appliances to Microsoft Sentinel via the Azure Monitor Agent (AMA). ++The AMA supports Data Collection Rules (DCRs), which you can use to filter the logs before ingestion, for quicker upload, efficient analysis, and querying. + ### Monitor the health of automation rules and playbooks To ensure proper functioning and performance of your security orchestration, automation, and response operations in your Microsoft Sentinel service, keep track of the health of your automation rules and playbooks by monitoring their execution logs. |
site-recovery | Azure To Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md | Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft **Operating system** | **Details** | Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b)-CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5, 8.6 +CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 (4.18.0-348.5.1.el8_5.x86_64 or higher), 8.6 Ubuntu 14.04 LTS Server | Includes support for 
all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines); Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloudinit configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed-over VM in the Azure portal. Ubuntu 18.04 LTS Server | Includes support for all 18.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloudinit configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed-over VM in the Azure portal. |
storage | Archive Cost Estimation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md | The following sections show you how to calculate each component. This article uses fictitious prices in all calculations. You can find these sample prices in the [Sample prices](#sample-prices) section at the end of this article. These prices are meant only as examples, and shouldn't be used to calculate your costs. -For official prices, see [Azure Blob Storage pricing](/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](/pricing/details/storage/data-lake/). For more information about how to choose the correct pricing page, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md). +For official prices, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). For more information about how to choose the correct pricing page, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md). #### The cost to write This article uses the following fictitious prices. | Price of data retrieval (per GB) | $0.02 | $0.01 | | Price of high priority data retrieval (per GB) | $0.10 | N/A | -For official prices, see [Azure Blob Storage pricing](/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](/pricing/details/storage/data-lake/). +For official prices, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/). For more information about how to choose the correct pricing page, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md). |
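As an illustration of the cost components the article walks through, the data-retrieval portion of a rehydration cost is simply size multiplied by the per-GB price. The sketch below uses the article's fictitious sample prices ($0.02 per GB for standard retrieval, $0.10 per GB for high-priority retrieval); the function name is hypothetical and these aren't real Azure prices:

```python
# Fictitious sample prices from the article's table (per GB) -- not real Azure prices.
PRICE_DATA_RETRIEVAL_PER_GB = 0.02
PRICE_HIGH_PRIORITY_RETRIEVAL_PER_GB = 0.10

def retrieval_cost(gigabytes, high_priority=False):
    """Data-retrieval component of rehydrating archived blobs: size x per-GB price."""
    price = (PRICE_HIGH_PRIORITY_RETRIEVAL_PER_GB if high_priority
             else PRICE_DATA_RETRIEVAL_PER_GB)
    return gigabytes * price
```

For instance, rehydrating 1,000 GB at the standard sample price would cost about $20, or about $100 with high priority.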
storage | Blobfuse2 Commands Unmount | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount.md | Title: How to use the 'blobfuse2 unmount' command to unmount an existing mount point (preview)| Microsoft Docs + Title: How to use the 'blobfuse2 unmount' command to unmount an existing mount point (preview) description: How to use the 'blobfuse2 unmount' command to unmount an existing mount point. (preview) |
storage | Data Lake Storage Use Distcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-use-distcp.md | Title: Copy data into Azure Data Lake Storage Gen2 using DistCp| Microsoft Docs + Title: Copy data into Azure Data Lake Storage Gen2 using DistCp description: Copy data to and from Azure Data Lake Storage Gen2 using the Apache Hadoop distributed copy tool (DistCp). |
storage | Network File System Protocol Support Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-performance.md | Title: NFS 3.0 performance considerations in Azure Blob storage| Microsoft Docs + Title: NFS 3.0 performance considerations in Azure Blob storage description: Optimize the performance of your Network File System (NFS) 3.0 storage requests by using the recommendations in this article. |
storage | Secure File Transfer Protocol Host Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md | Title: Host keys for SFTP support for Azure Blob Storage| Microsoft Docs + Title: Host keys for SFTP support for Azure Blob Storage description: Find a list of valid host keys when using an SFTP client to connect with Azure Blob Storage. |
storage | Secure File Transfer Protocol Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md | Title: Limitations & known issues with SFTP in Azure Blob Storage| Microsoft Docs + Title: Limitations & known issues with SFTP in Azure Blob Storage description: Learn about limitations and known issues of SSH File Transfer Protocol (SFTP) support for Azure Blob Storage. |
storage | Static Website Content Delivery Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/static-website-content-delivery-network.md | Title: Integrate a static website with Azure CDN - Azure Storage + Title: Integrate a static website with Azure CDN + description: Learn how to cache static website content from an Azure Storage account by using Azure Content Delivery Network (CDN). |
storage | Storage Blob Append | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md | Title: Append data to a blob with .NET - Azure Storage + Title: Append data to a blob with .NET + description: Learn how to append data to a blob in Azure Storage by using the .NET client library. |
storage | Storage Blob Change Feed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md | Enable change feed on your storage account by using Azure portal: :::image type="content" source="media/storage-blob-change-feed/change-feed-enable-portal.png" alt-text="Screenshot showing how to enable change feed in Azure portal"::: +### [Azure CLI](#tab/azure-cli) ++Enable change feed on a storage account by calling the [az storage account blob-service-properties update](/cli/azure/storage/account/blob-service-properties#az-storage-account-blob-service-properties-update) command with the `--enable-change-feed` parameter: ++```azurecli +az storage account blob-service-properties update \ + --resource-group <resource-group> \ + --account-name <source-storage-account> \ + --enable-change-feed +``` + ### [PowerShell](#tab/azure-powershell) Enable change feed by using PowerShell: |
storage | Storage Blob Container Create Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md | Title: Create a blob container with JavaScript - Azure Storage + Title: Create a blob container with JavaScript + description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library. |
storage | Storage Blob Container Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md | Title: Create a blob container with .NET - Azure Storage + Title: Create a blob container with .NET + description: Learn how to create a blob container in your Azure Storage account using the .NET client library. |
storage | Storage Blob Container Delete Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md | Title: Delete and restore a blob container with JavaScript - Azure Storage + Title: Delete and restore a blob container with JavaScript + description: Learn how to delete and restore a blob container in your Azure Storage account using the JavaScript client library. |
storage | Storage Blob Container Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md | Title: Delete and restore a blob container with .NET - Azure Storage + Title: Delete and restore a blob container with .NET + description: Learn how to delete and restore a blob container in your Azure Storage account using the .NET client library. |
storage | Storage Blob Container Lease | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md | Title: Create and manage blob or container leases with .NET - Azure Storage + Title: Create and manage blob or container leases with .NET + description: Learn how to manage a lock on a blob or container in your Azure Storage account using the .NET client library. |
storage | Storage Blob Containers List Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md | Title: List blob containers with JavaScript - Azure Storage + Title: List blob containers with JavaScript + description: Learn how to list blob containers in your Azure Storage account using the JavaScript client library. |
storage | Storage Blob Containers List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md | Title: List blob containers with .NET - Azure Storage + Title: List blob containers with .NET + description: Learn how to list blob containers in your Azure Storage account using the .NET client library. |
storage | Storage Blob Copy Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md | Title: Copy a blob with JavaScript - Azure Storage + Title: Copy a blob with JavaScript + description: Learn how to copy a blob in Azure Storage by using the JavaScript client library. |
storage | Storage Blob Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md | Title: Copy a blob with .NET - Azure Storage + Title: Copy a blob with .NET + description: Learn how to copy a blob in Azure Storage by using the .NET client library. |
storage | Storage Blob Customer Provided Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-customer-provided-key.md | Title: Specify a customer-provided key on a request to Blob storage with .NET - Azure Storage + Title: Specify a customer-provided key on a request to Blob storage with .NET + description: Learn how to specify a customer-provided key on a request to Blob storage using .NET. |
storage | Storage Blob Delete Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md | Title: Delete and restore a blob with JavaScript - Azure Storage + Title: Delete and restore a blob with JavaScript + description: Learn how to delete and restore a blob in your Azure Storage account using the JavaScript client library |
storage | Storage Blob Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md | Title: Delete and restore a blob with .NET - Azure Storage + Title: Delete and restore a blob with .NET + description: Learn how to delete and restore a blob in your Azure Storage account using the .NET client library |
storage | Storage Blob Download Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md | Title: Download a blob with JavaScript - Azure Storage + Title: Download a blob with JavaScript + description: Learn how to download a blob in Azure Storage by using the JavaScript client library. |
storage | Storage Blob Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md | Title: Download a blob with .NET - Azure Storage + Title: Download a blob with .NET + description: Learn how to download a blob in Azure Storage by using the .NET client library. |
storage | Storage Blob Encryption Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-encryption-status.md | Title: Check the encryption status of a blob - Azure Storage + Title: Check the encryption status of a blob + description: Learn how to use Azure portal, PowerShell, or Azure CLI to check whether a given blob is encrypted. If a blob is not encrypted, learn how to use AzCopy to force encryption by downloading and re-uploading the blob. |
storage | Storage Blob Get Url Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md | Title: Get container and blob URL with JavaScript - Azure Storage + Title: Get container and blob URL with JavaScript + description: Learn how to get a container or blob URL in Azure Storage by using the JavaScript client library. |
storage | Storage Blob Properties Metadata Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md | Title: Manage properties and metadata for a blob with JavaScript - Azure Storage + Title: Manage properties and metadata for a blob with JavaScript + description: Learn how to set and retrieve system properties and store custom metadata on blobs in your Azure Storage account using the JavaScript client library. |
storage | Storage Blob Properties Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md | Title: Manage properties and metadata for a blob with .NET - Azure Storage + Title: Manage properties and metadata for a blob with .NET + description: Learn how to set and retrieve system properties and store custom metadata on blobs in your Azure Storage account using the .NET client library. |
storage | Storage Blob Static Website Host | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website-host.md | Title: 'Tutorial: Host a static website on Blob storage - Azure Storage' + Title: 'Tutorial: Host a static website on Blob storage' + description: Learn how to configure a storage account for static website hosting, and deploy a static website to Azure Storage. |
storage | Storage Blob Upload Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md | Title: Upload a blob using JavaScript - Azure Storage + Title: Upload a blob using JavaScript + description: Learn how to upload a blob to your Azure Storage account using the JavaScript client library. |
storage | Storage Blob Upload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md | Title: Upload a blob using .NET - Azure Storage + Title: Upload a blob using .NET + description: Learn how to upload a blob to your Azure Storage account using the .NET client library. |
storage | Storage Blobs Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-latency.md | Title: Latency in Blob storage - Azure Storage + Title: Latency in Blob storage + description: Understand and measure latency for Blob storage operations, and learn how to design your Blob storage applications for low latency. |
storage | Storage Blobs List Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md | Title: List blobs with JavaScript - Azure Storage + Title: List blobs with JavaScript + description: Learn how to list blobs in your storage account using the Azure Storage client library for JavaScript. Code examples show how to list blobs in a flat listing, or how to list blobs hierarchically, as though they were organized into directories or folders. |
storage | Storage Blobs List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md | Title: List blobs with .NET - Azure Storage + Title: List blobs with .NET + description: Learn how to list blobs in your storage account using the Azure Storage client library for .NET. Code examples show how to list blobs in a flat listing, or how to list blobs hierarchically, as though they were organized into directories or folders. |
storage | Storage Blobs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-overview.md | Title: About Blob (object) storage - Azure Storage + Title: About Blob (object) storage + description: Azure Blob storage stores massive amounts of unstructured object data, such as text or binary data. Blob storage also supports Azure Data Lake Storage Gen2 for big data analytics. |
storage | Storage Performance Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-performance-checklist.md | Title: Performance and scalability checklist for Blob storage - Azure Storage + Title: Performance and scalability checklist for Blob storage + description: A checklist of proven practices for use with Blob storage in developing high-performance applications. |
storage | Storage Choose Data Transfer Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-choose-data-transfer-solution.md | Title: Choose an Azure solution for data transfer| Microsoft Docs + Title: Choose an Azure solution for data transfer description: Learn how to choose an Azure solution for data transfer based on data sizes and available network bandwidth in your environment. |
storage | Storage Network Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md | When planning for disaster recovery during a regional outage, you should create To enable access from a virtual network that is located in another region over service endpoints, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. All subnets in a subscription that has the `AllowGlobalTagsForStorage` feature enabled will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from these subnets to storage accounts will use a private IP address as a source IP. As a result, any IP network rules on storage accounts that permit traffic from those subnets will no longer have an effect. > [!NOTE]-> For updating the existing service endpoints to access a storage account in another region, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update&preserve-view=true) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update&preserve-view=true) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature. +> For updating the existing service endpoints to access a storage account in another region, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update&preserve-view=true) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update&preserve-view=true) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature. #### [Portal](#tab/azure-portal) |
storage | Storage Ref Azcopy Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md | Title: azcopy copy| Microsoft Docs + Title: azcopy copy description: This article provides reference information for the azcopy copy command. |
storage | Storage Solution Large Dataset Low Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-large-dataset-low-network.md | Title: Azure data transfer options for large datasets with low or no network bandwidth| Microsoft Docs + Title: Azure data transfer options for large datasets with low or no network bandwidth description: Learn how to choose an Azure solution for data transfer when you have limited to no network bandwidth in your environment and you are planning to transfer large data sets. |
storage | Storage Solution Large Dataset Moderate High Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-large-dataset-moderate-high-network.md | Title: Azure data transfer options for large datasets, moderate to high network bandwidth| Microsoft Docs + Title: Azure data transfer options for large datasets, moderate to high network bandwidth description: Learn how to choose an Azure solution for data transfer when you have moderate to high network bandwidth in your environment and you are planning to transfer large datasets. |
storage | Storage Solution Periodic Data Transfer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-periodic-data-transfer.md | Title: Choose an Azure solution for periodic data transfer| Microsoft Docs + Title: Choose an Azure solution for periodic data transfer description: Learn how to choose an Azure solution for data transfer when you are transferring data periodically. |
storage | Storage Solution Small Dataset Low Moderate Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-solution-small-dataset-low-moderate-network.md | Title: Azure data transfer options for small datasets with low to moderate network bandwidth| Microsoft Docs + Title: Azure data transfer options for small datasets with low to moderate network bandwidth description: Learn how to choose an Azure solution for data transfer when you have low to moderate network bandwidth in your environment and you are planning to transfer small datasets. The options recommended in this scenario are: - **Graphical interface tools** such as Azure Storage Explorer and Azure Storage in Azure portal. These provide an easy way to view your data and quickly transfer a few files. - **Azure Storage Explorer** - This cross-platform tool lets you manage the contents of your Azure storage accounts. It allows you to upload, download, and manage blobs, files, queues, tables, and Azure Cosmos DB entities. Use it with Blob storage to manage blobs and folders, as well as upload and download blobs between your local file system and Blob storage, or between storage accounts.- - **Azure portal** - Azure Storage in Azure portal provides a web-based interface to explore files and upload new files one at a time. This is a good option if you do not want to install any tools or issue commands to quickly explore your files, or to simply upload a handful of new ones. + - **Azure portal** + - **Scripting/programmatic tools** such as AzCopy/PowerShell/Azure CLI and Azure Storage REST APIs. |
storage | Storage Use Azcopy Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md | Title: Transfer data to or from Azure Files by using AzCopy v10 | Microsoft Docs + Title: Transfer data to or from Azure Files by using AzCopy v10 description: Transfer data with AzCopy and file storage. AzCopy is a command-line tool for copying blobs or files to or from a storage account. Use AzCopy with Azure Files. |
storage | Storage Use Azcopy Google Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-google-cloud.md | Title: "Copy from Google Cloud Storage to Azure Storage with AzCopy | Microsoft Docs" + Title: "Copy from Google Cloud Storage to Azure Storage with AzCopy" description: Use AzCopy to copy data from Google Cloud Storage to Azure Storage. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. |
storage | Storage Use Azcopy Migrate On Premises Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-migrate-on-premises-data.md | Title: 'Tutorial: Migrate on-premises data to Azure Storage with AzCopy| Microsoft Docs' + Title: 'Tutorial: Migrate on-premises data to Azure Storage with AzCopy' description: In this tutorial, you use AzCopy to migrate data or copy data to or from blob, table, and file content. Easily migrate data from your local storage to Azure Storage. |
storage | Storage Use Azcopy Optimize | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-optimize.md | Title: Optimize the performance of AzCopy v10 with Azure Storage | Microsoft Docs + Title: Optimize the performance of AzCopy v10 with Azure Storage description: This article helps you to optimize the performance of AzCopy v10 with Azure Storage. |
storage | Storage Use Azcopy S3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-s3.md | Title: Copy data from Amazon S3 to Azure Storage by using AzCopy | Microsoft Docs + Title: Copy data from Amazon S3 to Azure Storage by using AzCopy description: Use AzCopy to copy data from Amazon S3 to Azure Storage. AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. |
storage | Storage Use Azcopy Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-troubleshoot.md | Title: Troubleshoot problems with AzCopy (Azure Storage) | Microsoft Docs + Title: Troubleshoot problems with AzCopy (Azure Storage) description: Find workarounds to common issues with AzCopy v10. |
storage | Storage Use Azcopy V10 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md | Title: Copy or move data to Azure Storage by using AzCopy v10 | Microsoft Docs + Title: Copy or move data to Azure Storage by using AzCopy v10 description: AzCopy is a command-line utility that you can use to copy data to, from, or between storage accounts. This article helps you download AzCopy, connect to your storage account, and then transfer data. |
storage | File Sync Choose Cloud Tiering Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-choose-cloud-tiering-policies.md | Title: Choose Azure File Sync cloud tiering policies | Microsoft Docs + Title: Choose Azure File Sync cloud tiering policies description: Details on what to keep in mind when choosing Azure File Sync cloud tiering policies. |
storage | File Sync Cloud Tiering Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md | Title: Understand Azure File Sync cloud tiering | Microsoft Docs + Title: Understand Azure File Sync cloud tiering description: Understand cloud tiering, an optional Azure File Sync feature. Frequently accessed files are cached locally on the server; others are tiered to Azure Files. |
storage | File Sync Cloud Tiering Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-policy.md | Title: Azure File Sync cloud tiering policies | Microsoft Docs + Title: Azure File Sync cloud tiering policies description: Details on how the date and volume free space policies work together for different scenarios. |
storage | File Sync Deployment Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md | Title: Deploy Azure File Sync | Microsoft Docs + Title: Deploy Azure File Sync description: Learn how to deploy Azure File Sync from start to finish using the Azure portal, PowerShell, or the Azure CLI. |
storage | File Sync Extend Servers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-extend-servers.md | Title: Tutorial - Extend Windows file servers with Azure File Sync | Microsoft Docs + Title: Tutorial - Extend Windows file servers with Azure File Sync description: Learn how to extend Windows file servers with Azure File Sync, from start to finish. |
storage | File Sync How To Manage Tiered Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-how-to-manage-tiered-files.md | Title: How to manage Azure File Sync tiered files | Microsoft Docs + Title: How to manage Azure File Sync tiered files description: Tips and PowerShell commandlets to help you manage tiered files |
storage | File Sync Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-introduction.md | Title: Introduction to Azure File Sync | Microsoft Docs + Title: Introduction to Azure File Sync description: An overview of Azure File Sync, a service that enables you to create and use network file shares in the cloud using the industry standard SMB protocol. |
storage | File Sync Modify Sync Topology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-modify-sync-topology.md | Title: Modify your Azure File Sync topology | Microsoft Docs + Title: Modify your Azure File Sync topology description: Guidance on how to modify your Azure File Sync sync topology |
storage | File Sync Monitor Cloud Tiering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitor-cloud-tiering.md | Title: Monitor Azure File Sync cloud tiering | Microsoft Docs + Title: Monitor Azure File Sync cloud tiering description: Details on metrics to use to monitor your cloud tiering policies. |
storage | File Sync Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-monitoring.md | Title: Monitor Azure File Sync | Microsoft Docs + Title: Monitor Azure File Sync description: Review how to monitor your Azure File Sync deployment by using Azure Monitor, Storage Sync Service, and Windows Server. |
storage | File Sync Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-overview.md | Title: Azure File Sync networking considerations | Microsoft Docs + Title: Azure File Sync networking considerations description: Learn how to configure networking to use Azure File Sync to cache files on-premises. |
storage | File Sync Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md | Title: Planning for an Azure File Sync deployment | Microsoft Docs + Title: Planning for an Azure File Sync deployment description: Plan for a deployment with Azure File Sync, a service that allows you to cache several Azure file shares on an on-premises Windows Server or cloud VM. |
storage | File Sync Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md | The following Azure File Sync agent versions are supported: | Milestone | Agent version number | Release date | Status | |-|-|--||+| V15.2 Release - [KB5013875](https://support.microsoft.com/topic/9159eee2-3d16-4523-ade4-1bac78469280)| 15.2.0.0 | November 21, 2022 | Supported | | V15.1 Release - [KB5003883](https://support.microsoft.com/topic/45761295-d49a-431e-98ec-4fb3329b0544)| 15.1.0.0 | September 19, 2022 | Supported | | V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported | | V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported | The following Azure File Sync agent versions have expired and are no longer supp ### Azure File Sync agent update policy [!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)] +## Agent version 15.2.0.0 +The following release notes are for version 15.2.0.0 of the Azure File Sync agent released November 21, 2022. These notes are in addition to the release notes listed for version 15.0.0.0. ++### Improvements and issues that are fixed ++- Fixed a cloud tiering issue in the v15.1 agent that caused the following symptoms: + - Memory usage is higher after upgrading to v15.1. + - Storage Sync Agent (FileSyncSvc) service intermittently crashes. + - Files are failing to recall with error ERROR_INVALID_HANDLE (0x00000006). +- Fixed a health reporting issue with servers configured to use a non-Gregorian calendar. + ## Agent version 15.1.0.0 The following release notes are for version 15.1.0.0 of the Azure File Sync agent released September 19, 2022. These notes are in addition to the release notes listed for version 15.0.0.0. |
storage | File Sync Server Endpoint Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-delete.md | Title: Deprovision your Azure File Sync server endpoint | Microsoft Docs + Title: Deprovision your Azure File Sync server endpoint description: Guidance on how to deprovision your Azure File Sync server endpoint based on your use case |
storage | File Sync Server Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-registration.md | Title: Manage registered servers with Azure File Sync | Microsoft Docs + Title: Manage registered servers with Azure File Sync description: Learn how to register and unregister a Windows Server with an Azure File Sync Storage Sync Service. |
storage | File Sync Storsimple Cost Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-storsimple-cost-comparison.md | Title: Comparing the costs of StorSimple to Azure File Sync | Microsoft Docs + Title: Comparing the costs of StorSimple to Azure File Sync description: Learn how you can save money and modernize your storage infrastructure by migrating from StorSimple to Azure File Sync. |
storage | File Sync Troubleshoot Cloud Tiering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md | Title: Troubleshoot Azure File Sync cloud tiering | Microsoft Docs + Title: Troubleshoot Azure File Sync cloud tiering description: Troubleshoot common issues with cloud tiering in an Azure File Sync deployment. |
storage | File Sync Troubleshoot Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-installation.md | Title: Troubleshoot Azure File Sync agent installation and server registration | Microsoft Docs + Title: Troubleshoot Azure File Sync agent installation and server registration description: Troubleshoot common issues with installing the Azure File Sync agent and registering Windows Server with the Storage Sync Service. |
storage | File Sync Troubleshoot Sync Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md | Title: Troubleshoot sync health and errors in Azure File Sync | Microsoft Docs + Title: Troubleshoot sync health and errors in Azure File Sync description: Troubleshoot common issues with monitoring sync health and resolving sync errors in an Azure File Sync deployment. |
storage | File Sync Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md | Title: Troubleshoot Azure File Sync | Microsoft Docs + Title: Troubleshoot Azure File Sync description: Troubleshoot common issues that you might encounter with Azure File Sync, which you can use to transform Windows Server into a quick cache of your Azure file share. |
storage | Files Remove Smb1 Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-remove-smb1-linux.md | Title: Secure your Azure and on-premises environments by removing SMB 1 on Linux | Microsoft Docs + Title: Secure your Azure and on-premises environments by removing SMB 1 on Linux description: Azure Files supports SMB 3.x and SMB 2.1, not insecure legacy versions of SMB such as SMB 1. Before connecting to an Azure file share, you may wish to disable older versions of SMB such as SMB 1. |
storage | Files Reserve Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-reserve-capacity.md | Azure Files reserved capacity is available for a single subscription, multiple s A capacity reservation for Azure Files covers only the amount of data that is stored in a subscription or shared resource group. Transaction, bandwidth, data transfer, and metadata storage charges are not included in the reservation. As soon as you buy a reservation, the capacity charges that match the reservation attributes are charged at the discount rates instead of the pay-as-you-go rates. For more information on Azure reservations, see [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md). +### Reserved capacity and snapshots +If you're taking snapshots of Azure file shares, there are differences in how capacity reservations work for standard versus premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the reserved capacity and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the capacity reservation. For more information, see [Snapshots](understanding-billing.md#snapshots). + ### Supported tiers and redundancy options-Azure Files reserved capacity is available for premium, hot, and cool file shares. Reserved capacity is not available for Azure file shares in the transaction optimized tier. All storage redundancies support reservations. For more information about redundancy options, see [Azure Files redundancy](storage-files-planning.md#redundancy). +Azure Files reserved capacity is available for premium, hot, and cool file shares. Reserved capacity isn't available for Azure file shares in the transaction optimized tier. All storage redundancies support reservations. For more information about redundancy options, see [Azure Files redundancy](storage-files-planning.md#redundancy). ### Security requirements for purchase To purchase reserved capacity: |
storage | Storage Dotnet How To Use Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-dotnet-how-to-use-files.md | Title: Develop for Azure Files with .NET | Microsoft Docs + Title: Develop for Azure Files with .NET description: Learn how to develop .NET applications and services that use Azure Files to store data. |
storage | Storage Files Configure S2s Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-s2s-vpn.md | Title: Configure a Site-to-Site (S2S) VPN for use with Azure Files | Microsoft Docs + Title: Configure a Site-to-Site (S2S) VPN for use with Azure Files description: How to configure a Site-to-Site (S2S) VPN for use with Azure Files |
storage | Storage Files Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md | Title: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs + Title: Frequently asked questions (FAQ) for Azure Files description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. |
storage | Storage Files Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md | Title: Introduction to Azure Files | Microsoft Docs + Title: Introduction to Azure Files description: An overview of Azure Files, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols. |
storage | Storage Files Monitoring Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring-reference.md | Title: Azure Files monitoring data reference | Microsoft Docs + Title: Azure Files monitoring data reference description: Log and metrics reference for monitoring data from Azure Files. |
storage | Storage Files Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md | Title: Monitoring Azure Files | Microsoft Docs + Title: Monitoring Azure Files description: Learn how to monitor the performance and availability of Azure Files. Monitor Azure Files data, learn about configuration, and analyze metric and log data. |
storage | Storage Files Netapp Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md | Title: Azure Files and Azure NetApp Files Comparison | Microsoft Docs + Title: Azure Files and Azure NetApp Files Comparison description: Comparison of Azure Files and Azure NetApp Files. |
storage | Storage Files Networking Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-dns.md | Title: Configuring DNS forwarding for Azure Files | Microsoft Docs + Title: Configuring DNS forwarding for Azure Files description: Learn how to configure DNS forwarding for Azure Files. |
storage | Storage Files Networking Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-endpoints.md | Title: Configuring Azure Files network endpoints | Microsoft Docs + Title: Configuring Azure Files network endpoints description: Learn how to configure Azure File network endpoints. |
storage | Storage Files Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md | Title: Azure Files networking considerations | Microsoft Docs + Title: Azure Files networking considerations description: An overview of networking options for Azure Files. |
storage | Storage How To Use Files Mac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-mac.md | Title: Mount SMB Azure file share on macOS | Microsoft Docs + Title: Mount SMB Azure file share on macOS description: Learn how to mount an Azure file share over SMB with macOS using Finder or Terminal. Azure Files is Microsoft's easy-to-use cloud file system. |
storage | Storage How To Use Files Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md | Title: Mount SMB Azure file share on Windows | Microsoft Docs + Title: Mount SMB Azure file share on Windows description: Learn to use Azure file shares with Windows and Windows Server. Use Azure file shares with SMB 3.x on Windows installations running on-premises or on Azure VMs. For Azure Government Cloud, simply change the servername to: \\storageaccountname.file.core.usgovcloudapi.net\myfileshare ### Accessing share snapshots from Windows-If you've taken a share snapshot, either manually or automatically through a script or service like Azure Backup, you can view previous versions of a share, a directory, or a particular file from a file share on Windows. You can take a share snapshot using the [Azure portal](storage-files-quick-create-use-windows.md#create-a-share-snapshot), [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare?view=azps-8.0.0), or [Azure CLI](/cli/azure/storage/share?view=azure-cli-latest#az-storage-share-snapshot). +If you've taken a share snapshot, either manually or automatically through a script or service like Azure Backup, you can view previous versions of a share, a directory, or a particular file from a file share on Windows. You can take a share snapshot using the [Azure portal](storage-files-quick-create-use-windows.md#create-a-share-snapshot), [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare), or [Azure CLI](/cli/azure/storage/share#az-storage-share-snapshot). #### List previous versions Browse to the item or parent item that needs to be restored. Double-click to go to the desired directory. Right-click and select **Properties** from the menu. |
storage | Storage Java How To Use File Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-java-how-to-use-file-storage.md | Title: Develop for Azure Files with Java | Microsoft Docs + Title: Develop for Azure Files with Java description: Learn how to develop Java applications and services that use Azure Files to store file data. |
storage | Storage Python How To Use File Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-python-how-to-use-file-storage.md | Title: Develop for Azure Files with Python | Microsoft Docs + Title: Develop for Azure Files with Python description: Learn how to develop Python applications and services that use Azure Files to store file data. |
storage | Storage Snapshots Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md | Title: Overview of share snapshots for Azure Files | Microsoft Docs + Title: Overview of share snapshots for Azure Files description: A share snapshot is a read-only version of an Azure Files share that's taken at a point in time, as a way to back up the share. Imagine that you're working on a text file in a file share. After the text file ### General backup purposes -After you create a file share, you can periodically create a share snapshot of the file share to use it for data backup. A share snapshot, when taken periodically, helps maintain previous versions of data that can be used for future audit requirements or disaster recovery. We recommend using [Azure file share backup](../../backup/azure-file-share-backup-overview.md) as a backup solution for taking and managing snapshots. You may also take and manage snapshots yourself, using the [Azure portal](storage-files-quick-create-use-windows.md#create-a-share-snapshot), [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare?view=azps-8.0.0), or [Azure CLI](/cli/azure/storage/share?view=azure-cli-latest#az-storage-share-snapshot). +After you create a file share, you can periodically create a share snapshot of the file share to use it for data backup. A share snapshot, when taken periodically, helps maintain previous versions of data that can be used for future audit requirements or disaster recovery. We recommend using [Azure file share backup](../../backup/azure-file-share-backup-overview.md) as a backup solution for taking and managing snapshots. You may also take and manage snapshots yourself, using the [Azure portal](storage-files-quick-create-use-windows.md#create-a-share-snapshot), [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare), or [Azure CLI](/cli/azure/storage/share#az-storage-share-snapshot). 
## Capabilities Share snapshots provide only file-level protection. Share snapshots don't preven ## Next steps - Working with share snapshots in: - [Azure file share backup](../../backup/azure-file-share-backup-overview.md)- - [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare?view=azps-8.0.0) - - [Azure CLI](/cli/azure/storage/share?view=azure-cli-latest#az-storage-share-snapshot) + - [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare) + - [Azure CLI](/cli/azure/storage/share#az-storage-share-snapshot) - [Windows](storage-how-to-use-files-windows.md#accessing-share-snapshots-from-windows) - [Share snapshot FAQ](storage-files-faq.md#share-snapshots) |
storage | Storage Troubleshoot Linux File Connection Problems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-linux-file-connection-problems.md | Title: Troubleshoot Azure Files problems in Linux (SMB) | Microsoft Docs + Title: Troubleshoot Azure Files problems in Linux (SMB) description: Troubleshooting Azure Files problems in Linux. See common issues related to SMB Azure file shares when you connect from Linux clients, and see possible resolutions. |
storage | Understanding Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md | Title: Understand Azure Files billing | Microsoft Docs + Title: Understand Azure Files billing description: Learn how to interpret the provisioned and pay-as-you-go billing models for Azure file shares. Azure Files supports storage capacity reservations, which enable you to achieve Once you purchase a capacity reservation, it will automatically be consumed by your existing storage utilization. If you use more storage than you have reserved, you'll pay list price for the balance not covered by the capacity reservation. Transaction, bandwidth, data transfer, and metadata storage charges aren't included in the reservation. +There are differences in how capacity reservations work with Azure file share snapshots for standard and premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the reserved capacity and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the capacity reservation. For more information, see [Snapshots](#snapshots). + For more information on how to purchase storage reservations, see [Optimize costs for Azure Files with reserved capacity](files-reserve-capacity.md). ## Provisioned model |
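The billing article above says that usage up to the reserved amount is covered by the capacity reservation, and any balance beyond it is billed at list price. A minimal sketch of that overage arithmetic (the prices here are made-up placeholders, not real Azure rates):

```python
# Sketch of the capacity-reservation overage arithmetic described above:
# usage up to the reserved amount is covered by the reservation; anything
# beyond it is billed at list price. Prices are illustrative placeholders.
def monthly_storage_cost(used_gib: float, reserved_gib: float,
                         reservation_cost: float, list_price_per_gib: float) -> float:
    overage_gib = max(0.0, used_gib - reserved_gib)
    return reservation_cost + overage_gib * list_price_per_gib

# 120 GiB used against a 100 GiB reservation: 20 GiB billed at list price.
print(monthly_storage_cost(120, 100, reservation_cost=50.0, list_price_per_gib=0.6))  # 62.0
```

Note this covers storage capacity only; as the article states, transaction, bandwidth, data transfer, and metadata charges aren't included in the reservation.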
storage | Queues Auth Abac Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac-attributes.md | Title: Actions and attributes for Azure role assignment conditions for Azure queues | Microsoft Docs + Title: Actions and attributes for Azure role assignment conditions for Azure queues description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure queues. |
storage | Queues Auth Abac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac.md | Title: Authorize access to queues using Azure role assignment conditions | Microsoft Docs + Title: Authorize access to queues using Azure role assignment conditions description: Authorize access to Azure queues using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes. |
storage | Storage C Plus Plus How To Use Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-c-plus-plus-how-to-use-queues.md | Title: How to use Queue Storage (C++) - Azure Storage + Title: How to use Queue Storage (C++) + description: Learn how to use the Queue Storage service in Azure. Samples are written in C++. |
storage | Storage Dotnet How To Use Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-dotnet-how-to-use-queues.md | Title: Get started with Azure Queue Storage using .NET - Azure Storage + Title: Get started with Azure Queue Storage using .NET + description: Azure Queue Storage provides reliable, asynchronous messaging between application components. Cloud messaging enables your application components to scale independently. |
storage | Storage Java How To Use Queue Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-java-how-to-use-queue-storage.md | Title: How to use Queue Storage from Java - Azure Storage + Title: How to use Queue Storage from Java + description: Learn how to use Queue Storage to create and delete queues. Learn to insert, peek, get, and delete messages with the Azure Storage client library for Java. |
storage | Storage Nodejs How To Use Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-nodejs-how-to-use-queues.md | Title: How to use Azure Queue Storage from Node.js - Azure Storage + Title: How to use Azure Queue Storage from Node.js + description: Learn to use Azure Queue Storage to create and delete queues. Learn to insert, get, and delete messages using Node.js. |
storage | Storage Performance Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-performance-checklist.md | Title: Performance and scalability checklist for Queue Storage - Azure Storage + Title: Performance and scalability checklist for Queue Storage + description: A checklist of proven practices for use with Queue Storage in developing high-performance applications. |
storage | Storage Php How To Use Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-php-how-to-use-queues.md | Title: How to use Queue Storage from PHP - Azure Storage + Title: How to use Queue Storage from PHP + description: Learn how to use the Azure Queue Storage service to create and delete queues, and insert, get, and delete messages. Samples are written in PHP. |
storage | Storage Powershell How To Use Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-powershell-how-to-use-queues.md | Title: How to use Azure Queue Storage from PowerShell - Azure Storage + Title: How to use Azure Queue Storage from PowerShell + description: Perform operations on Azure Queue Storage via PowerShell. With Azure Queue Storage, you can store large numbers of messages that are accessible by HTTP/HTTPS. |
storage | Storage Queues Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-queues-introduction.md | Title: Introduction to Azure Queue Storage - Azure Storage + Title: Introduction to Azure Queue Storage + description: See an introduction to Azure Queue Storage, a service for storing large numbers of messages. A Queue Storage service contains a URL format, storage account, queue, and message. |
storage | Storage Ruby How To Use Queue Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-ruby-how-to-use-queue-storage.md | Title: How to use Queue Storage from Ruby - Azure Storage + Title: How to use Queue Storage from Ruby + description: Learn how to use Azure Queue Storage to create and delete queues, and insert, get, and delete messages. Samples are written in Ruby. |
storage | Storage Blobs Container Calculate Billing Size Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md | Title: Azure PowerShell script sample - Calculate the total billing size of a blob container | Microsoft Docs + Title: Azure PowerShell script sample - Calculate the total billing size of a blob container description: Calculate the total size of a container in Azure Blob storage for billing purposes. |
storage | Storage Blobs Container Calculate Size Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md | Title: Azure CLI Script Sample - Calculate blob container size | Microsoft Docs + Title: Azure CLI Script Sample - Calculate blob container size description: Calculate the size of a container in Azure Blob storage by totaling the size of the blobs in the container. This script uses the following commands to calculate the size of the Blob storag ||| | [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. | | [az storage blob upload](/cli/azure/storage/account) | Uploads local files to an Azure Blob storage container. |-| [az storage blob list](/cli/azure/storage/blob?view=azure-cli-latest#az-storage-blob-list) | Lists the blobs in an Azure Blob storage container. | +| [az storage blob list](/cli/azure/storage/blob#az-storage-blob-list) | Lists the blobs in an Azure Blob storage container. | ## Next steps |
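The script above calculates a container's size by totaling the sizes of the blobs that `az storage blob list` returns. As an illustration, a small Python sketch that sums blob lengths from the CLI's JSON output; the `properties.contentLength` field name is assumed to match the CLI's JSON shape, and the sample data is made up.

```python
import json

# Sketch: total a container's size the way the script does, by summing blob
# lengths from the JSON output of `az storage blob list --output json`.
# The properties.contentLength field name is an assumption about the CLI's
# output shape; the sample data below is fabricated for illustration.
def container_size_bytes(blob_list_json: str) -> int:
    blobs = json.loads(blob_list_json)
    return sum(b["properties"]["contentLength"] for b in blobs)

sample = json.dumps([
    {"name": "a.txt", "properties": {"contentLength": 1024}},
    {"name": "b.bin", "properties": {"contentLength": 4096}},
])
print(container_size_bytes(sample))  # 5120
```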
storage | Storage Blobs Container Delete By Prefix Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md | Title: Azure CLI Script Sample - Delete containers by prefix | Microsoft Docs + Title: Azure CLI Script Sample - Delete containers by prefix description: Delete Azure Storage blob containers based on a container name prefix, then clean up the deployment. See help links for commands used in the script sample. |
storage | Storage Blobs Container Delete By Prefix Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md | Title: Azure PowerShell Script Sample - Delete containers by prefix | Microsoft Docs + Title: Azure PowerShell Script Sample - Delete containers by prefix description: Read an example that shows how to delete Azure Blob storage based on a prefix in the container name, using Azure PowerShell. |
storage | Storage Common Rotate Account Keys Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-cli.md | Title: Azure CLI Script Sample - Rotate storage account access keys | Microsoft Docs + Title: Azure CLI Script Sample - Rotate storage account access keys description: Create an Azure Storage account, then retrieve and rotate its account access keys. |
storage | Monitor Table Storage Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage-reference.md | Title: Azure Table storage monitoring data reference | Microsoft Docs + Title: Azure Table storage monitoring data reference description: Log and metrics reference for monitoring data from Azure Table storage. |
storage | Monitor Table Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md | Title: Monitoring Azure Table storage | Microsoft Docs + Title: Monitoring Azure Table storage description: Learn how to monitor the performance and availability of Azure Table storage. Monitor Azure Table storage data, learn about configuration, and analyze metric and log data. |
storage | Storage Performance Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/storage-performance-checklist.md | Title: Performance and scalability checklist for Table storage - Azure Storage + Title: Performance and scalability checklist for Table storage + description: A checklist of proven practices for use with Table storage in developing high-performance applications. |
storage | Table Storage Design Encrypt Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-encrypt-data.md | Title: Encrypt Azure storage table data | Microsoft Docs + Title: Encrypt Azure storage table data description: Learn about table data encryption in Azure storage. The .NET Azure Storage Client Library lets you encrypt string entities for insert and replace operations. |
storage | Table Storage Design For Modification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-for-modification.md | Title: Design Azure Table storage for data modification | Microsoft Docs + Title: Design Azure Table storage for data modification description: Design tables for data modification in Azure Table storage. Optimize insert, update, and delete operations. Ensure consistency in your stored entities. |
storage | Table Storage Design For Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-for-query.md | Title: Design Azure Table storage for queries | Microsoft Docs + Title: Design Azure Table storage for queries description: Design tables for queries in Azure Table storage. Choose an appropriate partition key, optimize queries, and sort data for the Table service. |
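A detail behind the partition-key guidance in the article above: PartitionKey and RowKey are strings, and the Table service sorts and range-scans them lexicographically. A common consequence is that numeric key values should be fixed-width (zero-padded) so range queries return rows in numeric order. A small sketch (the 19-digit default is an illustrative choice, not a service requirement):

```python
# Sketch: because PartitionKey and RowKey are strings sorted lexicographically,
# numeric values should be zero-padded to a fixed width so that string order
# matches numeric order. The default width here is illustrative only.
def row_key_from_number(n: int, width: int = 19) -> str:
    return str(n).zfill(width)

keys = [row_key_from_number(n, width=4) for n in (2, 10, 100)]
print(sorted(keys))                 # ['0002', '0010', '0100'] -- numeric order preserved
print(sorted(["2", "10", "100"]))   # ['10', '100', '2'] -- unpadded keys sort incorrectly
```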
storage | Table Storage Design Guidelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-guidelines.md | Title: Guidelines for Azure storage table design | Microsoft Docs + Title: Guidelines for Azure storage table design description: Understand guidelines for designing your Azure storage table service to support read and write operations efficiently. |
storage | Table Storage Design Modeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-modeling.md | Title: Modeling relationships in Azure Table storage design | Microsoft Docs + Title: Modeling relationships in Azure Table storage design description: Understand the modeling process when designing your Azure Table storage solution. Read about one-to-many, one-to-one, and inheritance relationships. |
storage | Table Storage Design Patterns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-patterns.md | Title: Azure storage table design patterns | Microsoft Docs + Title: Azure storage table design patterns description: Review design patterns that are appropriate for use with Table service solutions in Azure. Address issues and trade-offs that are discussed in other articles. |
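One of the patterns the design-patterns article covers is the log tail pattern: giving each entity a RowKey that sorts in reverse chronological order, so the most recently added entities in a partition come back first. A hedged sketch of the key construction (`MAX_TICKS` mirrors .NET's `DateTime.MaxValue.Ticks`; any sufficiently large constant works):

```python
# Sketch of the "log tail" pattern: subtract the event timestamp from a fixed
# maximum and zero-pad the result, so newer events get lexicographically
# smaller RowKeys and sort first. MAX_TICKS mirrors DateTime.MaxValue.Ticks.
MAX_TICKS = 3_155_378_975_999_999_999

def inverted_row_key(ticks: int) -> str:
    return str(MAX_TICKS - ticks).zfill(19)

older, newer = inverted_row_key(100), inverted_row_key(200)
print(newer < older)  # True -- the newer event sorts ahead of the older one
```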
storage | Table Storage Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design.md | Title: Design scalable and performant tables in Azure Table storage. | Microsoft Docs + Title: Design scalable and performant tables in Azure Table storage. description: Learn to design scalable and performant tables in Azure Table storage. Review table partitions, Entity Group Transactions, and capacity and cost considerations. |
storage | Table Storage How To Use Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-how-to-use-powershell.md | Title: Perform Azure Table storage operations with PowerShell | Microsoft Docs + Title: Perform Azure Table storage operations with PowerShell description: Learn how to run common tasks such as creating, querying, deleting data from Azure Table storage account by using PowerShell. |
storage | Table Storage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-overview.md | Title: Introduction to Table storage - Object storage in Azure | Microsoft Docs + Title: Introduction to Table storage - Object storage in Azure description: Store structured data in the cloud using Azure Table storage, a NoSQL data store. |
synapse-analytics | How To Monitor Synapse Link Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-database.md | -> [!IMPORTANT] -> Azure Synapse Link for SQL is currently in preview. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Monitor the status of an Azure Synapse Link for Azure SQL Database connection in Synapse Studio You can monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (*snapshotting*), and see which tables are in continuous replication mode (*replicating*) directly in Synapse Studio. In this section, we'll take a deep dive into link-level and table-level monitoring: |
synapse-analytics | How To Monitor Synapse Link Sql Server 2022 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-server-2022.md | -> [!IMPORTANT] -> Azure Synapse Link for SQL is currently in preview. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Monitor the status of an Azure Synapse Link for SQL Server 2022 connection in Synapse Studio You can monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (*snapshotting*), and see which tables are in continuous replication mode (*replicating*) directly in Synapse Studio. In this section, we'll take a deep dive into link-level and table-level monitoring: |
virtual-desktop | Apply Windows License | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/apply-windows-license.md | You can apply an Azure Virtual Desktop license to your VMs with the following me - You can create a host pool and its session host virtual machines using the [GitHub Azure Resource Manager template](https://github.com/Azure/RDS-Templates/tree/master/ARM-wvd-templates). Creating VMs with this method automatically applies the license. - You can manually apply a license to an existing session host virtual machine. To apply the license this way, first follow the instructions in [Create a host pool with PowerShell or the Azure CLI](./create-host-pools-powershell.md) to create a host pool and associated VMs, then return to this article to learn how to apply the license. -## Apply a Windows license to a Windows client session host VM +## Manually apply a Windows license to a Windows client session host VM >[!NOTE] >The directions in this section apply to Windows client VMs, not Windows Server VMs. $vms = Get-AzVM $vms | Where-Object {$_.LicenseType -like "Windows_Client"} | Select-Object ResourceGroupName, Name, LicenseType ``` -## Requirements for deploying Windows Server Remote Desktop Services +## Using Windows Server as session hosts -If you deploy Windows Server as Azure Virtual Desktop hosts in your deployment, a Remote Desktop Services license server must be accessible from those virtual machines. The Remote Desktop Services license server can be located on-premises or in Azure. For more information, see [Activate the Remote Desktop Services license server](/windows-server/remote/remote-desktop-services/rds-activate-license-server). +If you deploy Windows Server as session hosts in Azure Virtual Desktop, a Remote Desktop Services license server must be accessible from those virtual machines. 
The Remote Desktop Services license server can be located on-premises or in Azure, as long as there is network connectivity between the session hosts and license server. For more information, see [Activate the Remote Desktop Services license server](/windows-server/remote/remote-desktop-services/rds-activate-license-server). ## Known limitations -If you create a Windows Server VM using the Azure Virtual Desktop host pool creation process, the process might automatically assign it an incorrect license type. To change the license type using PowerShell, follow the instructions in [Convert an existing VM using Azure Hybrid Benefit for Windows Server](../virtual-machines/windows/hybrid-use-benefit-licensing.md#powershell-1). +If you create a Windows Server session host using the Azure Virtual Desktop host pool creation process, the process might automatically assign it an incorrect license type. To change the license type using PowerShell, follow the instructions in [Convert an existing VM using Azure Hybrid Benefit for Windows Server](../virtual-machines/windows/hybrid-use-benefit-licensing.md#powershell-1). |
virtual-desktop | Create Host Pools Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md | You can create a virtual machine in multiple ways: >[!NOTE] >If you're deploying a virtual machine using Windows 7 as the host OS, the creation and deployment process will be a little different. For more details, see [Deploy a Windows 7 virtual machine on Azure Virtual Desktop](./virtual-desktop-fall-2019/deploy-windows-7-virtual-machine.md). -After you've created your session host virtual machines, [apply a Windows license to a session host VM](./apply-windows-license.md#apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license. +After you've created your session host virtual machines, [apply a Windows license to a session host VM](apply-windows-license.md#manually-apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license. ## Prepare the virtual machines for Azure Virtual Desktop agent installations |
virtual-desktop | Troubleshoot Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md | To resolve this issue, verify that your firewall and/or DNS settings are not blo ## Error: 3019 -On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3019, this means the agent can't reach the web socket transport URLs. To successfully connect to your session host and allow network traffic to bypass these restrictions, you must unblock the URLs listed in the the [Required URL list](safe-url-list.md). Work with your networking team to make sure your firewall, proxy, and DNS settings aren't blocking these URLs. You can also check your network trace logs to identify where the Azure Virtual Desktop service is being blocked. If you open a Microsoft Support case for this particular issue, make sure to attach your network trace logs to the request. +On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3019, this means the agent can't reach the web socket transport URLs. To successfully connect to your session host and allow network traffic to bypass these restrictions, you must unblock the URLs listed in the [Required URL list](safe-url-list.md). Work with your networking team to make sure your firewall, proxy, and DNS settings aren't blocking these URLs. You can also check your network trace logs to identify where the Azure Virtual Desktop service is being blocked. If you open a Microsoft Support case for this particular issue, make sure to attach your network trace logs to the request. ## Error: InstallationHealthCheckFailedException By reinstalling the most updated version of the agent and boot loader, the side- > For each of the agent and boot loader installers you downloaded, you may need to unblock them.
Right-click each file and select **Properties**, then select **Unblock**, and finally select **OK**. 1. Run the agent installer-1. When the installer asks you for the registration token, paste the registration key from the from your clipboard. +1. When the installer asks you for the registration token, paste the registration key from your clipboard. > [!div class="mx-imgBorder"] > ![Screenshot of pasted registration token](media/pasted-agent-token.png) If the issue continues, create a support case and include detailed information a - To troubleshoot issues while creating a host pool in an Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md). - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md). - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md). - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup.md). - To go through a troubleshooting tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Troubleshoot Azure Ad Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-azure-ad-connections.md | Title: Connections to Azure AD-joined VMs Azure Virtual Desktop - Azure -description: How to resolve issues while connecting to Azure AD-joined VMs in Azure Virtual Desktop. -+ Title: Troubleshoot connections to Azure AD-joined VMs - Azure Virtual Desktop +description: How to resolve issues when connecting to Azure AD-joined VMs in Azure Virtual Desktop. -- Last updated 08/24/2022 -# Connections to Azure AD-joined VMs +# Troubleshoot connections to Azure AD-joined VMs >[!IMPORTANT] >This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. -Use this article to resolve issues with connections to Azure Active Directory (Azure AD)-joined VMs in Azure Virtual Desktop. +Use this article to resolve issues with connections to Azure Active Directory (Azure AD)-joined session host VMs in Azure Virtual Desktop. ## All clients -### Your account is configured to prevent you from using this device --If you come across an error saying **Your account is configured to prevent you from using this device. For more information, contact your system administrator**, ensure the user account was given the [Virtual Machine User Login role](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#azure-role-not-assigned) on the VMs. --### The user name or password is incorrect --If you can't sign in and keep receiving an error message that says your credentials are incorrect, first make sure you're using the right credentials. 
If you keep seeing error messages, check to make sure you've fulfilled the following requirements: --- Have you assigned the **Virtual Machine User Login** role-based access control (RBAC) permission to the virtual machine (VM) or resource group for each user?-- Does your Conditional Access policy exclude multi-factor authentication requirements for the **Azure Windows VM sign-in** cloud application?--If you've answered "no" to either of those questions, you'll need to reconfigure your multi-factor authentication. To reconfigure your multi-factor authentication, follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms). --> [!WARNING] -> VM sign-ins don't support per-user enabled or enforced Azure AD Multi-Factor Authentication. If you try to sign in with multi-factor authentication on a VM, you won't be able to sign in and will receive an error message. --If you can access your Azure AD sign-in logs through Log Analytics, you can see if you've enabled multi-factor authentication and which Conditional Access policy is triggering the event. The events shown are non-interactive user login events for the VM, which means the IP address will appear to come from the external IP address that your VM accesses Azure AD from. 
--You can access your sign-in logs by running the following Kusto query: --```kusto -let UPN = "userupn"; -AADNonInteractiveUserSignInLogs -| where UserPrincipalName == UPN -| where AppId == "38aa3b87-a06d-4817-b275-7a316988d93b" -| project ['Time']=(TimeGenerated), UserPrincipalName, AuthenticationRequirement, ['MFA Result']=ResultDescription, Status, ConditionalAccessPolicies, DeviceDetail, ['Virtual Machine IP']=IPAddress, ['Cloud App']=ResourceDisplayName -| order by ['Time'] desc -``` ## Windows Desktop client -### The logon attempt failed --If you come across an error saying **The logon attempt failed** on the Windows Security credential prompt, verify the following: --- You are on a device that is Azure AD-joined or hybrid Azure AD-joined to the same Azure AD tenant as the session host OR-- You are on a device running Windows 10 2004 or later that is Azure AD registered to the same Azure AD tenant as the session host-- The [PKU2U protocol is enabled](/windows/security/threat-protection/security-policy-settings/network-security-allow-pku2u-authentication-requests-to-this-computer-to-use-online-identities) on both the local PC and the session host-- [Per-user multi-factor authentication is disabled](set-up-mfa.md#azure-ad-joined-session-host-vms) for the user account as it's not supported for Azure AD-joined VMs.--### The sign-in method you're trying to use isn't allowed --If you come across an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have Conditional Access policies restricting access. Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms) to enforce Azure Active Directory Multi-Factor Authentication for your Azure AD-joined VMs. --### A specified logon session does not exist. It may already have been terminated. 
--If you come across an error that says, **An authentication error occurred. A specified logon session does not exist. It may already have been terminated**, verify that you properly created and configured the Kerberos server object when [configuring single sign-on](configure-single-sign-on.md). ## Web client -### Sign in failed. Please check your username and password and try again --If you come across an error saying **Oops, we couldn't connect to NAME. Sign in failed. Please check your username and password and try again.** when using the web client, ensure that you [enabled connections from other clients](deploy-azure-ad-joined-vm.md#connect-using-the-other-clients). --### We couldn't connect to the remote PC because of a security error --If you come across an error saying **Oops, we couldn't connect to NAME. We couldn't connect to the remote PC because of a security error. If this keeps happening, ask your admin or tech support for help.**, you have Conditional Access policies restricting access. Follow the instructions in [Enforce Azure Active Directory Multi-Factor Authentication for Azure Virtual Desktop using Conditional Access](set-up-mfa.md#azure-ad-joined-session-host-vms) to enforce Azure Active Directory Multi-Factor Authentication for your Azure AD-joined VMs. --## Android client -### Error code 2607 - We couldn't connect to the remote PC because your credentials did not work +## Android and Chrome OS client -If you come across an error saying **We couldn't connect to the remote PC because your credentials did not work. The remote machine is AADJ joined.** with error code 2607 when using the Android client, ensure that you [enabled connections from other clients](deploy-azure-ad-joined-vm.md#connect-using-the-other-clients). ## Provide feedback |
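The Kusto query above can also be run programmatically rather than through the Log Analytics portal. The following is a minimal, hypothetical PowerShell sketch (not part of the original article): it assumes the `Az.OperationalInsights` module is installed, you've signed in with `Connect-AzAccount`, and `<workspace-id>` is replaced with your Log Analytics workspace ID.

```powershell
# Hypothetical sketch: run the non-interactive sign-in log query shown above
# against a Log Analytics workspace. Assumes Az.OperationalInsights is
# installed and you're signed in via Connect-AzAccount; <workspace-id> is a
# placeholder for your workspace ID.
$query = @'
let UPN = "userupn";
AADNonInteractiveUserSignInLogs
| where UserPrincipalName == UPN
| where AppId == "38aa3b87-a06d-4817-b275-7a316988d93b"
| order by TimeGenerated desc
'@
$results = Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query
$results.Results | Format-Table TimeGenerated, UserPrincipalName, ResultDescription
```

Replace `userupn` with the user principal name you're investigating, as in the portal version of the query.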
virtual-desktop | Troubleshoot Client Android Chrome Os | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-android-chrome-os.md | + + Title: Troubleshoot the Remote Desktop client for Android and Chrome OS - Azure Virtual Desktop +description: Troubleshoot issues you may experience with the Remote Desktop client for Android and Chrome OS when connecting to Azure Virtual Desktop. ++ Last updated : 11/01/2022++++# Troubleshoot the Remote Desktop client for Android and Chrome OS when connecting to Azure Virtual Desktop ++This article describes issues you may experience with the [Remote Desktop client for Android and Chrome OS](users/connect-android-chrome-os.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them. ++## General ++In this section you'll find troubleshooting guidance for general issues with the Remote Desktop client. ++++## Authentication and identity ++In this section you'll find troubleshooting guidance for authentication and identity issues with the Remote Desktop client. +++## Issue isn't listed here ++If your issue isn't listed here, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md) for information about how to open an Azure support case for Azure Virtual Desktop. |
virtual-desktop | Troubleshoot Client Ios Ipados | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-ios-ipados.md | + + Title: Troubleshoot the Remote Desktop client for iOS and iPadOS - Azure Virtual Desktop +description: Troubleshoot issues you may experience with the Remote Desktop client for iOS and iPadOS when connecting to Azure Virtual Desktop. ++ Last updated : 11/01/2022++++# Troubleshoot the Remote Desktop client for iOS and iPadOS when connecting to Azure Virtual Desktop ++This article describes issues you may experience with the [Remote Desktop client for iOS and iPadOS](users/connect-ios-ipados.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them. ++## General ++In this section you'll find troubleshooting guidance for general issues with the Remote Desktop client. ++++## Authentication and identity ++In this section you'll find troubleshooting guidance for authentication and identity issues with the Remote Desktop client. ++### Delete existing security tokens ++If you're having issues signing in due to a cached token that has expired, do the following: ++1. Open the **Settings** app for iOS or iPadOS. ++1. From the list of apps, select **RD Client**. ++1. Under **AVD Security Tokens**, toggle **Delete on App Launch** to **On**. ++1. Try to subscribe to a workspace again. For more information, see [Connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS](users/connect-ios-ipados.md). ++1. Toggle **Delete on App Launch** to **Off** once you can connect again. ++## Issue isn't listed here ++If your issue isn't listed here, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md) for information about how to open an Azure support case for Azure Virtual Desktop. |
virtual-desktop | Troubleshoot Client Macos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-macos.md | + + Title: Troubleshoot the Remote Desktop client for macOS - Azure Virtual Desktop +description: Troubleshoot issues you may experience with the Remote Desktop client for macOS when connecting to Azure Virtual Desktop. ++ Last updated : 11/01/2022++++# Troubleshoot the Remote Desktop client for macOS when connecting to Azure Virtual Desktop ++This article describes issues you may experience with the [Remote Desktop client for macOS](users/connect-macos.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them. ++## General ++In this section you'll find troubleshooting guidance for general issues with the Remote Desktop client. ++++## Authentication and identity ++In this section you'll find troubleshooting guidance for authentication and identity issues with the Remote Desktop client. ++### Account switch detected ++If you see the error **Account switch detected**, you need to refresh the Azure AD token. To refresh the Azure AD token, do the following: ++1. Delete any workspaces from the Remote Desktop client. For more information, see [Edit, refresh, or delete a workspace](users/client-features-macos.md#edit-refresh-or-delete-a-workspace). ++1. Open the **Keychain Access** app on your device. ++1. Under **Default Keychains**, select **login**, then select **All Items**. ++1. In the search box, enter `https://www.wvd.microsoft.com`. ++1. Double-click to open an entry with the name **accesstoken**. ++1. Copy the first part of the value for **Account**, up to the first hyphen, for example **70f0a61f**. ++1. Enter the value you copied into the search box. ++1. Right-click and delete each entry containing this value. ++1. If you have multiple entries when searching for `https://www.wvd.microsoft.com`, repeat these steps for each entry. ++1. 
Try to subscribe to a workspace again. For more information, see [Connect to Azure Virtual Desktop with the Remote Desktop client for macOS](users/connect-macos.md). ++## Display ++In this section you'll find troubleshooting guidance for display issues with the Remote Desktop client. ++### Blank screen or cursor skipping when using multiple monitors ++Using multiple monitors in certain topologies can cause issues such as blank screens or the cursor skipping. Often this is a result of customized display configurations that create edge cases for the client's graphics algorithm when Retina optimizations are turned on. We're aware of these issues and plan to resolve them in future updates. For now, if you encounter display issues such as these, use a different configuration or disable Retina optimization. To disable Retina optimization, see [Display settings for each remote desktop](users/client-features-macos.md#display-settings-for-each-remote-desktop). ++## Issue isn't listed here ++If your issue isn't listed here, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md) for information about how to open an Azure support case for Azure Virtual Desktop. |
virtual-desktop | Troubleshoot Client Microsoft Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-microsoft-store.md | + + Title: Troubleshoot the Remote Desktop client for Windows (Microsoft Store) - Azure Virtual Desktop +description: Troubleshoot issues you may experience with the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop. ++ Last updated : 11/01/2022++++# Troubleshoot the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop ++This article describes issues you may experience with the [Remote Desktop client for Windows (Microsoft Store)](users/connect-microsoft-store.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them. ++## General ++In this section you'll find troubleshooting guidance for general issues with the Remote Desktop client. ++++## Issue isn't listed here ++If your issue isn't listed here, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md) for information about how to open an Azure support case for Azure Virtual Desktop. |
virtual-desktop | Troubleshoot Client Web | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-web.md | + + Title: Troubleshoot the Remote Desktop Web client - Azure Virtual Desktop +description: Troubleshoot issues you may experience with the Remote Desktop Web client when connecting to Azure Virtual Desktop. ++ Last updated : 11/01/2022++++# Troubleshoot the Remote Desktop Web client when connecting to Azure Virtual Desktop ++This article describes issues you may experience with the [Remote Desktop Web client](users/connect-web.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them. ++## General ++In this section you'll find troubleshooting guidance for general issues with the Remote Desktop client. ++++### Web client stops responding or disconnects ++If the Remote Desktop Web client stops responding or keeps disconnecting, try closing and reopening the browser. If the issue continues, try connecting using another browser or one of the other [Remote Desktop clients](users/remote-desktop-clients-overview.md). You can also try clearing your browsing data. For Microsoft Edge, see [Microsoft Edge, browsing data, and privacy](https://support.microsoft.com/windows/microsoft-edge-browsing-data-and-privacy-bb8174ba-9d73-dcf2-9b4a-c582b4e640dd). ++### Web client out of memory ++If you see the error message "*Oops, we couldn't connect to 'SessionDesktop'*" (where SessionDesktop is the name of the resource you're connecting to), then the web client has run out of memory. ++To resolve this issue, you'll need to either reduce the size of the browser window so a smaller resolution will be used, or disconnect all existing connections and try connecting again. If you still encounter this issue after doing these things, contact your admin for help. ++## Network ++In this section you'll find troubleshooting guidance for network issues with the Remote Desktop client.
++### Web client won't open ++The URL for the Remote Desktop Web client is [https://client.wvd.microsoft.com/arm/webclient/](https://client.wvd.microsoft.com/arm/webclient/). If this page doesn't open, try the following: ++1. Test your internet connection by opening another website in your browser, for example [https://www.bing.com](https://www.bing.com). ++2. From PowerShell or Command Prompt on Windows, or Terminal on macOS, you can test if your DNS server can resolve the fully qualified domain name (FQDN) by running the following command: ++ ```powershell + nslookup client.wvd.microsoft.com + ``` ++If neither of these works, you most likely have a problem with your network connection. Contact your network admin for help. ++> [!TIP] +> For the URLs of other Azure environments, such as Azure US Gov and Azure China 21Vianet, see [Connect to Azure Virtual Desktop with the Remote Desktop Web client](users/connect-web.md#access-your-resources). ++## Authentication and identity ++In this section you'll find troubleshooting guidance for authentication and identity issues with the Remote Desktop client. +++## Issue isn't listed here ++If your issue isn't listed here, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md) for information about how to open an Azure support case for Azure Virtual Desktop. |
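As an alternative to `nslookup`, the same connectivity checks can be sketched with built-in Windows PowerShell cmdlets. This sketch is not part of the original guidance; `Resolve-DnsName` and `Test-NetConnection` ship with Windows 8/Windows Server 2012 and later.

```powershell
# Sketch: check DNS resolution and HTTPS reachability for the web client
# endpoint using built-in Windows cmdlets (Windows 8/Server 2012 and later).
Resolve-DnsName -Name 'client.wvd.microsoft.com'
Test-NetConnection -ComputerName 'client.wvd.microsoft.com' -Port 443
```

If name resolution succeeds but the port 443 test fails, the problem is more likely a firewall or proxy than DNS.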
virtual-desktop | Troubleshoot Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client-windows.md | + + Title: Troubleshoot the Remote Desktop client for Windows - Azure Virtual Desktop +description: Troubleshoot issues you may experience with the Remote Desktop client for Windows when connecting to Azure Virtual Desktop. ++ Last updated : 11/01/2022++++# Troubleshoot the Remote Desktop client for Windows when connecting to Azure Virtual Desktop ++This article describes issues you may experience with the [Remote Desktop client for Windows](users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) when connecting to Azure Virtual Desktop and how to fix them. ++## General ++In this section you'll find troubleshooting guidance for general issues with the Remote Desktop client. ++++### Retrieve and open client logs ++You might need the client logs when investigating a problem. ++To retrieve the client logs: ++1. Ensure no sessions are active and the client process isn't running in the background by right-clicking on the **Remote Desktop** icon in the system tray and selecting **Disconnect all sessions**. +1. Open **File Explorer**. +1. Navigate to the **%temp%\DiagOutputDir\RdClientAutoTrace** folder. ++The logs are in the .ETL file format. You can convert these to .CSV or .XML to make them easily readable by using the `tracerpt` command. Find the name of the file you want to convert and make a note of it. ++- To convert the .ETL file to .CSV, open PowerShell and run the following, replacing the value for `$filename` with the name of the file you want to convert (without the extension) and `$outputFolder` with the directory in which to create the .CSV file. 
++ ```powershell + $filename = "<filename>" + $outputFolder = "C:\Temp" + cd $env:TEMP\DiagOutputDir\RdClientAutoTrace + tracerpt "$filename.etl" -o "$outputFolder\$filename.csv" -of csv + ``` ++- To convert the .ETL file to .XML, open Command Prompt or PowerShell and run the following, replacing `<filename>` with the name of the file you want to convert and `$outputFolder` with the directory in which to create the .XML file. ++ ```powershell + $filename = "<filename>" + $outputFolder = "C:\Temp" + cd $env:TEMP\DiagOutputDir\RdClientAutoTrace + tracerpt "$filename.etl" -o "$outputFolder\$filename.xml" + ``` ++### Client stops responding or can't be opened ++If the Remote Desktop client for Windows stops responding or can't be opened, you may need to reset user data. If you can open the client, you can reset user data from the **About** menu; if you can't open the client, you can reset user data from the command line. The default settings for the client will be restored and you'll be unsubscribed from all workspaces. ++To reset user data from the client: ++1. Open the **Remote Desktop** app on your device. ++1. Select the three dots at the top right-hand corner to show the menu, then select **About**. ++1. In the section **Reset user data**, select **Reset**. To confirm you want to reset your user data, select **Continue**. ++To reset user data from the command line: ++1. Open PowerShell. ++1. Change the directory to where the Remote Desktop client is installed. By default, this is `C:\Program Files\Remote Desktop`. ++1. Run the following command to reset user data. You'll be prompted to confirm you want to reset your user data.
++ ```powershell + .\msrdcw.exe /reset + ``` ++ You can also add the `/f` option, where your user data will be reset without confirmation: ++ ```powershell + .\msrdcw.exe /reset /f + ``` ++## Authentication and identity ++In this section you'll find troubleshooting guidance for authentication and identity issues with the Remote Desktop client. +++### Authentication issues while using an N SKU of Windows ++Authentication issues can happen because you're using an *N* SKU of Windows on your local device without the *Media Feature Pack*. For more information and to learn how to install the Media Feature Pack, see [Media Feature Pack list for Windows N editions](https://support.microsoft.com/topic/media-feature-pack-list-for-windows-n-editions-c1c6fffa-d052-8338-7a79-a4bb980a700a). ++### Authentication issues when TLS 1.2 not enabled ++Authentication issues can happen when your local Windows device doesn't have TLS 1.2 enabled. This is most likely with Windows 7 where TLS 1.2 isn't enabled by default. 
To enable TLS 1.2 on Windows 7, you need to set the following registry values: ++- **Key**: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client` ++ | Value Name | Type | Value Data | + |--|--|--| + | DisabledByDefault | DWORD | 0 | + | Enabled | DWORD | 1 | ++- **Key**: `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server` ++ | Value Name | Type | Value Data | + |--|--|--| + | DisabledByDefault | DWORD | 0 | + | Enabled | DWORD | 1 | ++- **Key**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319` ++ | Value Name | Type | Value Data | + |--|--|--| + | SystemDefaultTlsVersions | DWORD | 1 | + | SchUseStrongCrypto | DWORD | 1 | ++You can configure these registry values by opening PowerShell as an administrator and running the following commands: ++```powershell +New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force +New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force +New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force ++New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force +New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force +New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force ++New-Item 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Force +New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SystemDefaultTlsVersions' 
-Value '1' -PropertyType 'DWORD' -Force +New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -PropertyType 'DWORD' -Force +``` ++## Issue isn't listed here ++If your issue isn't listed here, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md) for information about how to open an Azure support case for Azure Virtual Desktop. |
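After running the configuration commands above, you can read the values back to confirm they were written. This is a sketch using the built-in `Get-ItemProperty` cmdlet, not part of the original article; the value names match the tables shown earlier.

```powershell
# Sketch: read back the TLS 1.2 registry values configured above to confirm
# they were written. Run from an elevated PowerShell session on the local
# Windows device.
$schannelPaths = @(
    'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client',
    'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server'
)
foreach ($path in $schannelPaths) {
    Get-ItemProperty -Path $path -Name 'Enabled', 'DisabledByDefault' -ErrorAction SilentlyContinue
}
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SystemDefaultTlsVersions', 'SchUseStrongCrypto' -ErrorAction SilentlyContinue
```

For the SCHANNEL keys, `Enabled` should read `1` and `DisabledByDefault` should read `0`; both .NET Framework values should read `1`.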
virtual-desktop | Troubleshoot Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-client.md | - Title: Troubleshoot Remote Desktop client Azure Virtual Desktop - Azure -description: How to resolve issues with the Remote Desktop client when connecting to Azure Virtual Desktop. -- Previously updated : 09/20/2022----# Troubleshoot the Remote Desktop client --This article describes common issues with the Remote Desktop client and how to fix them. --## All clients --In this section you'll find troubleshooting guidance for all Remote Desktop clients. --### Remote Desktop Client doesn't show my resources --First, check the Azure Active Directory account you're using. If you've already signed in with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window. --If you're using Azure Virtual Desktop (classic), use the web client link in [this article](./virtual-desktop-fall-2019/connect-web-2019.md) to connect to your resources. --If that doesn't work, make sure your app group is associated with a workspace. --## Windows client --In this section you'll find troubleshooting guidance for the Remote Desktop client for Windows. --### Access client logs --You might need the client logs when investigating an issue. --To retrieve the client logs: --1. Ensure no sessions are active and the client process isn't running in the background by right-clicking on the **Remote Desktop** icon in the system tray and selecting **Disconnect all sessions**. -1. Open **File Explorer**. -1. Navigate to the **%temp%\DiagOutputDir\RdClientAutoTrace** folder. --Below you will find different methods used to read the client logs. --#### Event Viewer --1. Navigate to the Start menu, Control Panel, System and Security, and select **view event logs** under "Windows Tools". -1. 
Once the **Event Viewer** is open, click the Action tab at the top and select **Open Saved Log...**. -1. Navigate to the **%temp%\DiagOutputDir\RdClientAutoTrace** folder and select the log file you want to view. -1. The **Event Viewer** dialog box will open asking whether to convert the .etl file to .evtx format. Select **Yes**. -1. In the **Open Saved Log** dialog box, you have the options to rename the log file and add a description. Select **OK**. -1. The **Event Viewer** dialog box will open asking to overwrite the log file. Select **Yes**. This will not overwrite your original .etl log file but create a copy in .evtx format. --#### Command-line --This method will enable you to convert the log file from etl format to either _csv_ or _xml_ format using the `tracerpt` command. Open the Command Prompt or PowerShell and run the following: --``` -tracerpt "<FilePath>.etl" -o "<OutputFilePath>.extension" -``` --**CSV example:** --``` -tracerpt "C:\Users\admin\AppData\Local\Temp\DiagOutputDir\RdClientAutoTrace\msrdcw_09-07-2022-15-48-44.etl" -o "C:\Users\admin\Desktop\LogFile.csv" -of csv -``` --If the `-of csv` parameter is omitted from the command above, it won't properly convert the file. --**XML example:** --``` -tracerpt "C:\Users\admin\AppData\Local\Temp\DiagOutputDir\RdClientAutoTrace\msrdcw_09-07-2022-15-48-44.etl" -o "C:\Users\admin\Desktop\LogFile.xml" -``` --The `-of xml` parameter is not necessary in this instance as the default output for the conversion is in _xml_ format. --### Remote Desktop client for Windows stops responding or cannot be opened --If the Remote Desktop client for Windows stops responding or cannot be opened, you may need to reset your client. Starting with version 1.2.790, you can reset the user data from the About page or using a command. --You can also use the following command to remove your user data, restore default settings and unsubscribe from all Workspaces.
From a Command Prompt or PowerShell session, run the following command: --```cmd -msrdcw.exe /reset [/f] -``` --If you're using an earlier version of the Remote Desktop client, we recommend you uninstall and reinstall the client. --### Authentication issues while using an N SKU --Authentication issues can happen because you're using an *N* SKU of Windows without the media features pack. To resolve this issue, [install the media features pack](https://support.microsoft.com/topic/media-feature-pack-list-for-windows-n-editions-c1c6fffa-d052-8338-7a79-a4bb980a700a). --### Authentication issues when TLS 1.2 not enabled --Authentication issues can happen when your client doesn't have TLS 1.2 enabled. This is most likely with Windows 7 where TLS 1.2 is not enabled by default. To enable TLS 1.2 on Windows 7, you need to set the following registry values: --- `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client`- - "DisabledByDefault": **00000000** - - "Enabled": **00000001** -- `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server`- - "DisabledByDefault": **00000000** - - "Enabled": **00000001** -- `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319`- - "SchUseStrongCrypto": **00000001** --You can configure these registry values by running the following commands from an elevated PowerShell session: --```powershell -New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force -New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force -New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force --New-Item 
'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force -New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'Enabled' -Value '1' -PropertyType 'DWORD' -Force -New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'DisabledByDefault' -Value '0' -PropertyType 'DWORD' -Force --New-Item 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Force -New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SystemDefaultTlsVersions' -Value '1' -PropertyType 'DWORD' -Force -New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -PropertyType 'DWORD' -Force -``` --### Windows client blocks Azure Virtual Desktop (classic) feed --If the Windows client feed won't show Azure Virtual Desktop (classic) apps, follow these instructions as an admin of Azure Virtual Desktop in Azure: --1. Check if the Conditional Access policy includes the app IDs associated with Azure Virtual Desktop (classic). -2. Check if the Conditional Access policy blocks all access except Azure Virtual Desktop (classic) app IDs. If so, you'll need to add the app ID.**9cdead84-a844-4324-93f2-b2e6bb768d07** to the policy to allow the client to discover the feeds. --If you can't find the app ID 9cdead84-a844-4324-93f2-b2e6bb768d07 in the list, you'll need to re-register the Azure Virtual Desktop resource provider. To re-register the resource provider: --1. Sign in to the Azure portal. -2. Go to **Subscription**, then select your subscription. -3. In the menu on the left side of the page, select **Resource provider**. -4. Find and select **Microsoft.DesktopVirtualization**, then select **Re-register**. --## Web client --In this section you'll find troubleshooting guidance for the Remote Desktop Web client. 
--### Web client stops responding or disconnects --Try connecting using another browser or client. --### Web client won't open --First, test your internet connection by opening another website in your browser, for example [www.bing.com](https://www.bing.com). --Next, open a Command Prompt or PowerShell session and use **nslookup** to confirm DNS can resolve the FQDN by running the following command: --```cmd -nslookup rdweb.wvd.microsoft.com -``` --If one or neither of these work, you most likely have a problem with your network connection. We recommend you contact your network admin for help. --### Your client can't connect but other clients on your network can connect --If your browser starts acting up or stops working while you're using the web client, try these actions to resolve it: --1. Restart the browser. -2. Clear browser cookies. See [How to delete cookie files in Internet Explorer](https://support.microsoft.com/help/278835/how-to-delete-cookie-files-in-internet-explorer). -3. Clear browser cache. See [clear browser cache for your browser](https://binged.it/2RKyfdU). -4. Open browser InPrivate mode. --If issues continue even after you've switched browsers, the problem may not be with your browser, but with your network. We recommend you contact your network admin for help. --### Web client keeps prompting for credentials --If the Web client keeps prompting for credentials, follow these instructions: --1. Confirm the web client URL is correct. -2. Confirm that the credentials you're using are for the Azure Virtual Desktop environment tied to the URL. -3. Clear browser cookies. For more information, see [How to delete cookie files in Internet Explorer](https://support.microsoft.com/help/278835/how-to-delete-cookie-files-in-internet-explorer). -4. Clear browser cache. For more information, see [Clear browser cache for your browser](https://binged.it/2RKyfdU). -5. Open your browser in Private mode. 
--### Web client out of memory --When using the web client, if you see the error message "Oops, we couldn't connect to 'SessionDesktop,'" (where *SessionDesktop* is the name of the resource you're connecting to), then the web client has run out of memory. --To resolve this issue, you'll need to either reduce the size of the browser window or disconnect all existing connections and try connecting again. If you still encounter this issue after doing these things, ask your local admin or tech support for help. --## Next steps --- For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).-- To troubleshoot issues while creating an Azure Virtual Desktop environment and host pool in an Azure Virtual Desktop environment, see [Environment and host pool creation](troubleshoot-set-up-issues.md).-- To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md).-- To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md).-- To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md).-- To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Troubleshoot Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-powershell.md | New-AzWvdApplicationGroup_CreateExpanded: ActivityId: e5fe6c1d-5f2c-4db9-817d-e4 - To troubleshoot issues while setting up your Azure Virtual Desktop environment and host pools, see [Environment and host pool creation](troubleshoot-set-up-issues.md). - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client-windows.md) - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup.md). - To learn about auditing actions, see [Audit operations with Resource Manager](../azure-monitor/essentials/activity-log.md). - To learn about actions to determine the errors during deployment, see [View deployment operations](../azure-resource-manager/templates/deployment-history.md). |
virtual-desktop | Troubleshoot Set Up Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-issues.md | the VM.\\\" - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). - To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client-windows.md) - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md). - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Troubleshoot Set Up Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-overview.md | Use the following table to identify and resolve issues you may encounter when se | Managing Azure Virtual Desktop session host environment from the Azure portal | [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/). <br> <br> For management issues when using Remote Desktop Services/Azure Virtual Desktop PowerShell, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md) or [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **Configuration and management** for the problem type, then select **Issues configuring environment using PowerShell** for the problem subtype. | | Managing Azure Virtual Desktop configuration tied to host pools and application groups (app groups) | See [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md), or [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select the appropriate problem type.| | Deploying and manage FSLogix Profile Containers | See [Troubleshooting guide for FSLogix products](/fslogix/fslogix-trouble-shooting-ht/) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **FSLogix** for the problem type, then select the appropriate problem subtype. |-| Remote desktop clients malfunction on start | See [Troubleshoot the Remote Desktop client](troubleshoot-client.md) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop clients** for the problem type. 
<br> <br> If it's a network issue, your users need to contact their network administrator. | +| Remote desktop clients malfunction on start | See [Troubleshoot the Remote Desktop client](troubleshoot-client-windows.md) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop clients** for the problem type. <br> <br> If it's a network issue, your users need to contact their network administrator. | | Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection.md). <br> <br> If your users have been assigned to an app group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. | | Feed discovery problems due to the network | Your users need to contact their network administrator. | | Connecting clients | See [Azure Virtual Desktop service connections](troubleshoot-service-connection.md) and if that doesn't solve your issue, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). | Use the following table to identify and resolve issues you may encounter when se - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). - To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md). 
- To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client-windows.md) - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md). - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Troubleshoot Vm Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-vm-configuration.md | Golden images must not include the Azure Virtual Desktop agent. You can install - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration.md). - To troubleshoot issues related to the Azure Virtual Desktop agent or session connectivity, see [Troubleshoot common Azure Virtual Desktop Agent issues](troubleshoot-agent.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](troubleshoot-client-windows.md) - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell.md). - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Client Features Android Chrome Os | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-android-chrome-os.md | If you want to provide feedback to us on the Remote Desktop client for Android a ## Next steps -If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md). +If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-android-chrome-os.md). |
virtual-desktop | Client Features Ios Ipados | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-ios-ipados.md | To set the orientation: You can choose the resolution for your remote session from a predefined list. This setting applies to all workspaces. +> [!NOTE] +> Changes to the display resolution only take effect for new connections. For current connections, you'll need to disconnect from and reconnect to the remote session. + To set the resolution: 1. Open the **RD Client** application on your device. To set the resolution: 1. Tap a resolution from the list. -1. You can also set **Use Home Indicator Area**. Toggling this on will show graphics from the remote session in the area at the bottom of the screen occupied by the Home indicator. This setting only applies in landscape orientation. For more information about display orientation, see [Set orientation](#set-orientation). +1. Tap the back arrow (**<**), then tap the **X** mark. ++### Use full display or home indicator area ++On iPadOS, you can set **Use Full Display**. Toggling this on will use the full display of your device, but will result in some content from the remote session being obscured, such as graphics in the rounded corners of the screen. ++1. Open the **RD Client** application on your device. ++1. In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**. ++1. Tap **Display**. ++1. Toggle **Use Full Display**. ++1. Tap the back arrow (**<**), then tap the **X** mark. ++On iOS, you can set **Use Home Indicator Area**. Toggling this on will show graphics from the remote session in the area at the bottom of the screen occupied by the Home indicator. This setting only applies in landscape orientation. For more information about display orientation, see [Set orientation](#set-orientation). To set **Use Home Indicator Area**: ++1. Open the **RD Client** application on your device. ++1. 
In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**. ++1. Tap **Display**. ++1. Toggle **Use Home Indicator Area**. 1. Tap the back arrow (**<**), then tap the **X** mark. If you want to provide feedback to us on the Remote Desktop client for iOS and i ## Next steps -If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md). +If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-ios-ipados.md). |
virtual-desktop | Client Features Macos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-macos.md | If you want to provide feedback to us on the Remote Desktop client for macOS, yo ## Next steps -If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md). +If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-macos.md). |
virtual-desktop | Client Features Microsoft Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-microsoft-store.md | To best help you, we need you to give us as detailed information as possible. Al ## Next steps -If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md). +If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-microsoft-store.md). |
virtual-desktop | Client Features Web | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md | If you want to provide feedback to us on the Remote Desktop Web client, you can ## Next steps -If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md). +If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-web.md). |
virtual-desktop | Client Features Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows.md | To best help you, we need you to give us as detailed information as possible. Al ## Next steps -If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md). +If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md). |
virtual-desktop | Create Host Pools Powershell 2019 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019.md | You can create a virtual machine in multiple ways: > > Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](../prerequisites.md#operating-systems-and-licenses). -After you've created your session host virtual machines, [apply a Windows license to a session host VM](../apply-windows-license.md#apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license. +After you've created your session host virtual machines, [apply a Windows license to a session host VM](../apply-windows-license.md#manually-apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license. ## Prepare the virtual machines for Azure Virtual Desktop agent installations |
virtual-desktop | Troubleshoot Powershell 2019 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-powershell-2019.md | Using the force command will let you delete the session host even if it has assi - To troubleshoot issues while creating a tenant and host pool in an Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md). - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md) - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup-2019.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../../azure-resource-manager/templates/template-tutorial-troubleshoot.md). - To learn about auditing actions, see [Audit operations with Resource Manager](../../azure-monitor/essentials/activity-log.md). |
virtual-desktop | Troubleshoot Set Up Issues 2019 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-issues-2019.md | Example of raw error: To fix this, do the following things: -1. Open the Azure Portal and go to the **Virtual networks** tab. +1. Open the Azure portal and go to the **Virtual networks** tab. 2. Find your VNET, then select **DNS servers**. 3. The DNS servers menu should appear on the right side of your screen. On that menu, select **Custom**. 4. Make sure the DNS servers listed under Custom match your domain controller or Active Directory domain. If you don't see your DNS server, you can add it by entering its value into the **Add DNS server** field. If you're running the GitHub Azure Resource Manager template, provide values for - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview-2019.md). - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md) - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell-2019.md). - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup-2019.md). 
- To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Troubleshoot Set Up Overview 2019 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-overview-2019.md | Use the following table to identify and resolve issues you may encounter when se | Managing Azure Virtual Desktop session host environment from the Azure portal | [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/). <br> <br> For management issues when using Remote Desktop Services/Azure Virtual Desktop PowerShell, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell-2019.md) or [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **Configuration and management** for the problem type, then select **Issues configuring tenant using PowerShell** for the problem subtype. | | Managing Azure Virtual Desktop configuration tied to host pools and application groups (app groups) | See [Azure Virtual Desktop PowerShell](troubleshoot-powershell-2019.md), or [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select the appropriate problem type.| | Deploying and manage FSLogix Profile Containers | See [Troubleshooting guide for FSLogix products](/fslogix/fslogix-trouble-shooting-ht/) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, select **FSLogix** for the problem type, then select the appropriate problem subtype. 
|-| Remote desktop clients malfunction on start | See [Troubleshoot the Remote Desktop client](../troubleshoot-client.md) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop clients** for the problem type. <br> <br> If it's a network issue, your users need to contact their network administrator. | +| Remote desktop clients malfunction on start | See [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md) and if that doesn't resolve the issue, [Open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop clients** for the problem type. <br> <br> If it's a network issue, your users need to contact their network administrator. | | Connected but no feed | Troubleshoot using the [User connects but nothing is displayed (no feed)](troubleshoot-service-connection-2019.md#user-connects-but-nothing-is-displayed-no-feed) section of [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md). <br> <br> If your users have been assigned to an app group, [open an Azure support request](https://azure.microsoft.com/support/create-ticket/), select **Azure Virtual Desktop** for the service, then select **Remote Desktop Clients** for the problem type. | | Feed discovery problems due to the network | Your users need to contact their network administrator. | | Connecting clients | See [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md) and if that doesn't solve your issue, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). 
| Use the following table to identify and resolve issues you may encounter when se - To troubleshoot issues while creating a tenant and host pool in a Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md). - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md) - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell-2019.md). - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup-2019.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Troubleshoot Vm Configuration 2019 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-vm-configuration-2019.md | To learn more about this policy, see [Allow log on through Remote Desktop Servic - To troubleshoot issues while creating a tenant and host pool in a Azure Virtual Desktop environment, see [Tenant and host pool creation](troubleshoot-set-up-issues-2019.md). - To troubleshoot issues while configuring a virtual machine (VM) in Azure Virtual Desktop, see [Session host virtual machine configuration](troubleshoot-vm-configuration-2019.md). - To troubleshoot issues with Azure Virtual Desktop client connections, see [Azure Virtual Desktop service connections](troubleshoot-service-connection-2019.md).-- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md)+- To troubleshoot issues with Remote Desktop clients, see [Troubleshoot the Remote Desktop client](../troubleshoot-client-windows.md) - To troubleshoot issues when using PowerShell with Azure Virtual Desktop, see [Azure Virtual Desktop PowerShell](troubleshoot-powershell-2019.md). - To learn more about the service, see [Azure Virtual Desktop environment](environment-setup-2019.md). - To go through a troubleshoot tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../../azure-resource-manager/templates/template-tutorial-troubleshoot.md). |
virtual-desktop | Whats New Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md | Title: What's new in the Azure Virtual Desktop Agent? - Azure description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 11/07/2022 Last updated : 11/21/2022 The Azure Virtual Desktop Agent updates regularly. This article is where you'll Make sure to check back here often to keep up with new updates. +## Latest agent versions ++New versions of the Azure Virtual Desktop Agent are installed automatically. When new versions are released, they are rolled out progressively to all session hosts. This process is called *flighting* and it enables Microsoft to monitor the rollout. The following table lists the version that is in-flight and the version that is generally available. ++| Release | Latest version | +||| +| Generally available | 1.0.5555.1008 | +| In-flight | N/A | + ## Version 1.0.5555.1008 This update was released in November 2022 and includes the following changes: |
virtual-network | Nat Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-metrics.md | To create the alert, use the following steps: 8. From the When to evaluate section, select **1 minute** under the **Check every** drop-down menu. +9. For the lookback period, select **5 minutes** from the drop-down menu options. + 9. Create an **Action** for your alert by providing a name, notification type, and type of action that is performed when the alert is triggered. 10. Before deploying your action, **test the action group**. For more information on what each metric is showing you and how to analyze these * Learn about [Virtual Network NAT](nat-overview.md) * Learn about [NAT gateway resource](nat-gateway-resource.md) * Learn about [Azure Monitor](../../azure-monitor/overview.md)-* Learn about [troubleshooting NAT gateway resources](troubleshoot-nat.md). +* Learn about [troubleshooting NAT gateway resources](troubleshoot-nat.md). |
virtual-network | Troubleshoot Nat Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat-connectivity.md | -This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway resource, as well as best practices on how to design applications to use outbound connections efficiently. +This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway resource. This article also provides best practices on how to design applications to use outbound connections efficiently. ## SNAT exhaustion due to NAT gateway configuration -Common SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway, such as: +SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway, such as: -* Outbound connectivity on NAT gateway not scaled out with enough public IP addresses. +* NAT gateway not scaled out with enough public IP addresses. * NAT gateway's configurable TCP idle timeout timer is set higher than the default value of 4 minutes. -### Outbound connectivity not scaled out enough +### NAT gateway not scaled out enough -Each public IP address provides 64,512 SNAT ports for connecting outbound with NAT gateway. From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. Additional Public IP addresses on NAT gateway may be required in order to provide more SNAT ports for outbound connectivity. +Each public IP address provides 64,512 SNAT ports for connecting outbound with NAT gateway. 
From those available SNAT ports, NAT gateway can support up to 50,000 concurrent connections to the same destination endpoint. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses on NAT gateway may be required in order to provide more SNAT ports for outbound connectivity. -The table below describes two common outbound connectivity failure scenarios due to scalability issues as well as how to validate and mitigate these issues: +The table below describes two common outbound connectivity failure scenarios due to scalability issues and how to validate and mitigate these issues: | Scenario | Evidence |Mitigation | |||| -| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | Add more public IP addresses or public IP prefixes as need (assign up to 16 IP addresses in total to your NAT gateway). This addition will provide more SNAT port inventory and allow you to scale your scenario further. | +| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection Count**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. 
| Add more public IP addresses or public IP prefixes as needed (assign up to 16 IP addresses in total to your NAT gateway). This addition will provide more SNAT port inventory and allow you to scale your scenario further. | | You've already assigned 16 IP addresses to your NAT gateway and still are experiencing SNAT port exhaustion. | Attempt to add more IP addresses fails. Total number of IP addresses from public IP address or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. | >[!NOTE] ->It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT portsby adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns. See [best practices for efficient use of outbound connections](#best-practices-for-efficient-use-of-outbound-connections) for additional guidance. +>It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns. See [best practices for efficient use of outbound connections](#outbound-connectivity-best-practices) for additional guidance. 
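The capacity figures quoted in the NAT gateway troubleshooting changes above (64,512 SNAT ports per public IP, at most 16 IP addresses per gateway) lend themselves to a quick back-of-the-envelope check before you add addresses. The sketch below is illustrative only, not an official Azure sizing tool; the helper names are our own.

```python
# Rough SNAT capacity estimate for a NAT gateway, using the figures from
# the article above: 64,512 SNAT ports per public IP, at most 16 IPs per
# gateway. Illustrative only -- not an official Azure sizing tool.

SNAT_PORTS_PER_IP = 64_512
MAX_IPS_PER_GATEWAY = 16

def total_snat_ports(public_ip_count: int) -> int:
    """Total SNAT port inventory for a gateway with the given IP count."""
    if not 1 <= public_ip_count <= MAX_IPS_PER_GATEWAY:
        raise ValueError(f"NAT gateway supports 1-{MAX_IPS_PER_GATEWAY} public IPs")
    return public_ip_count * SNAT_PORTS_PER_IP

def ports_per_vm(public_ip_count: int, vm_count: int) -> int:
    """Average SNAT ports per VM if the inventory is shared evenly."""
    return total_snat_ports(public_ip_count) // vm_count

print(total_snat_ports(1))    # 64512 ports with a single IP
print(total_snat_ports(16))   # 1032192 ports fully scaled out
print(ports_per_vm(2, 100))   # 1290 ports per VM for 100 VMs sharing 2 IPs
```

As the note above stresses, run this kind of estimate to understand demand before adding IPs; more inventory only delays exhaustion if the workload itself is leaking connections.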
### TCP idle timeout timers set higher than the default value -The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If the timer is setting is set to a higher value than the default, NAT gateway will hold on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers). The table below describes a scenario where a long TCP idle timeout timer is causing SNAT exhaustion and provides possible mitigation steps to take: +The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If the timer is set to a higher value than the default, NAT gateway will hold on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers). The table below describes a scenario where a long TCP idle timeout timer is causing SNAT exhaustion and provides mitigation steps to take: | Scenario | Evidence | Mitigation | |||| -| You want to ensure that TCP connections stay active for long periods of time without going idle and timing out. You increase the TCP idle timeout timer setting. After a period of time, you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | You have a few possible mitigation steps that you can take to resolve SNAT port exhaustion: </br></br> **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. 
The TCP idle timeout timer can't be set lower than 4 minutes. </br></br> Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. </br></br> **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). </br></br> For connections to Azure PaaS services, use **[Private Link](../../private-link/private-link-overview.md)**. Private Link eliminates the need to use public IPs of your NAT gateway, which frees up more SNAT ports for outbound connections to the internet. | +| You want to ensure that TCP connections stay active for long periods of time without going idle and timing out. You increase the TCP idle timeout timer setting. After a period of time, you start to notice that connection failures occur more often. You suspect that you may be exhausting your inventory of SNAT ports since connections are holding on to them longer. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening: **Total SNAT Connection Count**: "Sum" aggregation shows high connection volume. For **SNAT Connection Count**, "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | Some possible steps you can take to resolve SNAT port exhaustion include: </br></br> **Reduce the TCP idle timeout** to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes. </br></br> Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. 
</br></br> **Use TCP keepalives or application layer keepalives** to avoid intermediate systems timing out. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). </br></br> Make connections to Azure PaaS services over the Azure backbone using **[Private Link](../../private-link/private-link-overview.md)**. This frees up SNAT ports for outbound connections to the internet. | ## Connection failures due to idle timeouts UDP idle timeout timers are set to 4 minutes. Unlike TCP idle timeout timers for | Scenario | Evidence | Mitigation | |||| -| You notice that UDP traffic is dropping connections that need to be maintained for long periods of time. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor, **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | A few possible mitigation steps that can be taken: - **Enable UDP keepalives**. Keep in mind that when a UDP keepalive is enabled, it's only active for one direction in a connection, so the connection can still time out from going idle on the other side of a connection. To prevent a UDP connection from idle time-out, UDP keepalives should be enabled for both directions in a connection flow. - **Application layer keepalives** can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives. | +| You notice that UDP traffic is dropping connections that need to be maintained for long periods of time. | You check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor, **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume and connection failures. | A few possible mitigation steps that can be taken: - **Enable UDP keepalives**. Keep in mind that when a UDP keepalive is enabled, it's only active for one direction in a connection. 
The connection can still go idle and time out on the other side of a connection. To prevent a UDP connection from idle time-out, UDP keepalives should be enabled for both directions in a connection flow. - **Application layer keepalives** can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives. | ## NAT gateway public IP not being used for outbound traffic ### VMs hold on to prior SNAT IP with active connection after NAT gateway added to a virtual network -[Virtual Network NAT gateway](nat-overview.md) supersedes outbound connectivity for a subnet. Migrations from default SNAT or load balancer outbound SNAT to NAT gateway results in new connections immediately using the IP address(es) associated with the NAT gateway resource. If a virtual machine has an established connection during the migration, the connection will continue to use the old SNAT IP address that was assigned when the connection was established. +[NAT gateway](nat-overview.md) becomes the default route to the internet when configured to a subnet. Migration from default outbound access or load balancer to NAT gateway results in new connections immediately using the IP address(es) associated with the NAT gateway resource. If a virtual machine has an established connection during the migration, the connection will continue to use the old SNAT IP address that was assigned when the connection was established. Test and resolve issues with VMs holding on to old SNAT IP addresses by: -- Ensure you've established a new connection and that existing connections aren't being reused in the OS or because the browser is caching the connections. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you're using a browser, connections may also be pooled. 
+- Ensure you've established a new connection and that existing connections aren't being reused in the OS or cached by the browser. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you're using a browser, connections may also be pooled. - It isn't necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state has been flushed, all connections will begin using the NAT gateway resource's IP address(es). This behavior is a side effect of the virtual machine reboot and not an indicator that a reboot is required. If you're still having trouble, open a support case for further troubleshooting. When forced tunneling with a custom UDR is enabled to direct traffic to a virtual appliance or VPN through ExpressRoute, the UDR or ExpressRoute takes precedence over NAT gateway for directing internet bound traffic. To learn more, see [custom UDRs](../virtual-networks-udr-overview.md#custom-routes). The order of precedence for internet routing configurations is as follows: -Virtual appliance UDR / ExpressRoute >> NAT gateway >> instance level public IP addresses >> outbound rules on Load balancer >> default system +Virtual appliance UDR / ExpressRoute >> NAT gateway >> instance level public IP addresses >> outbound rules on Load balancer >> default outbound access Test and resolve issues with a virtual appliance UDR or VPN ExpressRoute overriding your NAT gateway by: What else to check for: * If changing the rate impacts the rate of failures, check if API rate limits or other constraints on the destination side might have been reached. +### Other transient outbound connectivity issues ++Outbound Passive FTP may not work for NAT gateway with multiple public IP addresses, depending on your FTP server configuration.
++Passive FTP establishes different connections for control and data channels. When a NAT gateway with multiple public IP addresses sends traffic outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration. ++To prevent possible passive FTP connection failures, make sure to do the following: +1. Check that your NAT gateway is attached to a single public IP address rather than multiple IP addresses or a prefix. +2. Make sure that the passive port range from your NAT gateway is allowed to pass any firewalls that may be at the destination endpoint. + ### Extra network captures If your investigation is inconclusive, open a support case for further troubleshooting and collect the following information for a quicker resolution. Choose a single virtual machine in your NAT gateway configured subnet to perform the following tests: If your investigation is inconclusive, open a support case for further troublesh * If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM, and the virtual network test VM while you run PsPing then stop the Netsh trace. -## Best practices for efficient use of outbound connections +## Outbound connectivity best practices -Azure monitors and operates its infrastructure with great care. However, transient failures can still occur from deployed applications, there's no guarantee that transmissions are lossless. AT gateway is the preferred option to connect outbound from Azure deployments in order to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance below for extra steps that can be taken to ensure that applications are using connections efficiently. +Azure monitors and operates its infrastructure with great care. 
However, transient failures can still occur from deployed applications, and there's no guarantee that transmissions are lossless. NAT gateway is the preferred option to connect outbound from Azure deployments in order to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance below to ensure that applications are using connections efficiently. ### Modify the application to use connection pooling |
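The TCP keepalive guidance above can be sketched in code. A minimal Python example, assuming a Linux host where the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` socket options are available; the interval values are illustrative, chosen so probes fire well before the NAT gateway 4-minute idle timeout:

```python
import socket

# Create a TCP socket and enable OS-level keepalives so an otherwise idle
# flow keeps generating traffic before NAT gateway's 4-minute idle timeout.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning (guarded, since these constants aren't on every
# platform): first probe after 120 s idle, then every 30 s, give up after
# 4 missed probes. 120 s is an illustrative value under the 4-minute timeout.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)

# Confirm keepalives are enabled on the socket.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)
```

Application-layer keepalives (for example, periodic lightweight requests) achieve the same effect when you can't tune socket options directly.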
virtual-network | Troubleshoot Nat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md | You may experience outbound connectivity failure if your NAT gateway resource is ### Can't delete NAT gateway -NAT gateway must be detached from all subnets within a virtual network before the resource can be removed or deleted. Follow these steps to remove subnets from your NAT gateway before you delete it: --**Recommended Steps** --1. In the portal, navigate to your NAT gateway resource Overview page --2. Under Settings on the left-hand navigation pane, select Subnets --3. Uncheck all boxes next to subnets that are associated to your NAT gateway --4. Save your Subnet configuration changes +NAT gateway must be detached from all subnets within a virtual network before the resource can be removed or deleted. See [Remove NAT gateway from an existing subnet and delete the resource](/azure/virtual-network/nat-gateway/manage-nat-gateway?tabs=manage-nat-portal#remove-a-nat-gateway-from-an-existing-subnet-and-delete-the-resource) for step-by-step guidance. ## Add or remove subnet NAT gateway is a standard SKU resource and can't be used with basic SKU resource ### Can't mismatch zones of public IP addresses and NAT gateway -NAT gateway is a zonal resource and can either be designated to a specific zone or to 'no zone'. When NAT gateway is placed in 'no zone', Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located in. +NAT gateway is a [zonal resource](/azure/virtual-network/nat-gateway/nat-availability-zones) and can either be designated to a specific zone or to 'no zone'. When NAT gateway is placed in 'no zone', Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located in.
NAT gateway can be used with public IP addresses designated to a specific zone, no zone, or all zones (zone-redundant), depending on its own availability zone configuration. Follow guidance below: To learn more about NAT gateway, see: * [NAT gateway resource](nat-gateway-resource.md) +* [Manage NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway) + * [Metrics and alerts for NAT gateway resources](nat-metrics.md). |
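The detach-then-delete flow linked above can also be performed from the Azure CLI. A hedged sketch, assuming hypothetical resource names (`myResourceGroup`, `myVNet`, `mySubnet`, `myNATGateway`); verify the exact parameters against your installed `az` version:

```shell
# Detach the NAT gateway from each subnet that references it
# (names here are hypothetical placeholders)
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --remove natGateway

# Once no subnets reference it, the NAT gateway resource can be deleted
az network nat gateway delete \
  --resource-group myResourceGroup \
  --name myNATGateway
```

Repeat the subnet update for every associated subnet before the delete will succeed.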
vpn-gateway | Vpn Gateway Troubleshoot Site To Site Disconnected Intermittently | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-disconnected-intermittently.md | A user-defined route on the gateway subnet may be restricting some traffic and a Make sure that the on-premises VPN device is set to have **one VPN tunnel per subnet pair** for policy-based virtual network gateways. -### Step 5 Check for Security Association Limitation (for policy-based virtual network gateways) +### Step 5 Check for Security Association Limitations -The Policy-based virtual network gateway has limit of 200 subnet Security Association pairs. If the number of Azure virtual network subnets multiplied times by the number of local subnets is greater than 200, you see sporadic subnets disconnecting. +The virtual network gateway has a limit of 200 subnet Security Association pairs. If the number of Azure virtual network subnets multiplied by the number of local subnets is greater than 200, you may see sporadic subnets disconnecting. ### Step 6 Check on-premises VPN device external interface address |
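The 200 Security Association pair limit above is simple arithmetic: each Azure subnet paired with each local subnet consumes one SA pair. A minimal sketch (the subnet counts are hypothetical):

```python
def exceeds_sa_pair_limit(azure_subnets: int, local_subnets: int, limit: int = 200) -> bool:
    """Return True when the subnet SA pair count exceeds the gateway limit.

    Each Azure virtual network subnet paired with each local (on-premises)
    subnet consumes one Security Association pair.
    """
    return azure_subnets * local_subnets > limit

# Hypothetical topology: 15 Azure subnets x 14 on-premises subnets = 210 pairs,
# which exceeds the 200 pair limit and can cause sporadic subnet disconnects.
print(exceeds_sa_pair_limit(15, 14))   # True
print(exceeds_sa_pair_limit(10, 10))   # False: 100 pairs is within the limit
```

Consolidating subnets on either side, or switching to a route-based gateway, reduces the pair count when the product exceeds the limit.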