Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
platform | How To Extend Copilot | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/archive/how-to-extend-copilot.md | To ensure your plugin works as intended, it's important to include good descript ## Upgrading your plugin to a rich conversational Teams app -When you connect your API to Teams, you've built a simple, powerful Microsoft 365 Copilot plugin. Teams makes it easier and helps you enhance this experience by adding rich conversational components. In addition to your plugin, you can use Teams Toolkit to add a bot to your manifest file. Developing a bot has never been easier with the release of the new [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md). With a bot, your plugin becomes a full conversational Teams app, allowing you to develop link unfurling experiences, message extensions, message actions, search bar actions, and end-to-end conversational bots. +When you connect your API to Teams, you've built a simple, powerful Microsoft 365 Copilot plugin. Teams makes it easier and helps you enhance this experience by adding rich conversational components. In addition to your plugin, you can use Teams Toolkit to add a bot to your manifest file. Developing a bot has never been easier with the release of the new [Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md). With a bot, your plugin becomes a full conversational Teams app, allowing you to develop link unfurling experiences, message extensions, message actions, search bar actions, and end-to-end conversational bots. ## Connecting external data sources |
platform | Bot Messages Ai Generated Content | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/bot-messages-ai-generated-content.md | For more information about `PredictedSayCommand`, see [PredictedSayCommand inter * [Bot activity handlers](../bot-basics.md) * [Format your bot messages](format-your-bot-messages.md)-* [Get started with Teams AI library](Teams%20conversational%20AI/how-conversation-ai-get-started.md) +* [Get started with Teams AI library](teams-conversational-ai/how-conversation-ai-get-started.md) |
platform | Assistants Api Quick Start | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/teams-conversational-ai/assistants-api-quick-start.md | + + Title: Assistants API quick start guide ++description: In this module, learn how to quickly try the Assistants API with Teams AI library in Math tutor assistant sample using OpenAI Code Interpreter tool. ++ms.localizationpriority: high +zone_pivot_groups: assistant-ai-library-quick-start + Last updated : 05/20/2024+++# Quick start guide for using Assistants API with Teams AI library ++Get started using OpenAI or Azure OpenAI Assistants API with Teams AI library in Math tutor assistant sample. This guide uses the OpenAI Code Interpreter tool to help you create an assistant that specializes in mathematics. The bot uses the gpt-3.5-turbo model to chat with Microsoft Teams users and respond in a polite and respectful manner, staying within the scope of the conversation. ++## Prerequisites ++To get started, ensure that you have the following tools: ++| Install | For using... | +| | | +| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, or C Sharp build environments. Use the latest version. | +| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.| +|[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. | +| [Node.js](https://nodejs.org/en/download/) | Back-end JavaScript runtime environment. 
For more information, see [Node.js version compatibility table for project type](~/toolkit/build-environments.md#nodejs-version-compatibility-table-for-project-type).| +| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.| +| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Azure, you must create an Azure OpenAI service.| +| [Microsoft Edge](https://www.microsoft.com/edge) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. | +| [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to a Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). | ++<br/> +If you've already run the samples before or encountered a runtime error, follow these steps to start fresh: ++* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you. +* If you don't want Teams Toolkit to generate the app ID and password, update the `MicrosoftAppId` and `MicrosoftAppPassword` in the `.env` file with your own values. +* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts. ++Teams Toolkit automatically provisions `MicrosoftAppId` and `MicrosoftAppPassword` resources. If you want to use your own resources, you need to manually add them to the `.env` file. 
Teams Toolkit doesn't auto-generate the following resources: ++* An Azure OpenAI or OpenAI key +* A database or similar storage options ++## Build and run the sample app ++Get started with Teams AI library using the **Math tutor assistant** sample. It lets you quickly run a Teams AI library-based sample on your computer's localhost. ++1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/js/samples). ++1. Run the following command to clone the repository: ++ ```cmd + git clone https://github.com/microsoft/teams-ai.git + ``` ++1. Go to **Visual Studio Code**. ++1. Select **File** > **Open Folder**. ++1. Go to the location where you cloned the teams-ai repo and select the **teams-ai** folder. ++1. Select **Select Folder**. ++ :::image type="content" source="../../../assets/images/bots/ai-library-dot-net-select-folder.png" alt-text="Screenshot shows the teams-ai folder and the Select Folder option."::: ++1. Select **View** > **Terminal**. A terminal window opens. ++1. In the terminal window, run the following command to go to the **js** folder: ++ ``` + cd .\js\ + ``` ++1. Run the following command to install dependencies: ++ ```terminal + yarn install + ``` ++1. Run the following command to build dependencies: ++ ```terminal + yarn build + ``` ++1. After the dependencies are installed, select **File** > **Open Folder**. ++1. Go to **teams-ai > js > samples > 04.ai-apps > d.assistants-mathBot** and select **Select Folder**. All the files for the Math tutor assistant sample are listed under the **EXPLORER** section in Visual Studio Code. ++1. Under **EXPLORER**, duplicate the `sample.env` file and rename the duplicate file to `.env`. ++1. Update the following steps based on the AI services you select. ++ # [OpenAI key](#tab/OpenAI-key) ++ 1. Go to the `env` folder and update the following code in the `./env/.env.local.user` file: ++ ```text + SECRET_OPENAI_KEY=<your OpenAI key> + ASSISTANT_ID=<your Assistant ID> + ``` + 1. 
Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out: ++ ```bicep + // { + // name: 'AZURE_OPENAI_KEY' + // value: azureOpenAIKey + // } + // { + // name: 'AZURE_OPENAI_ENDPOINT' + // value: azureOpenAIEndpoint + // } + ``` ++ # [Azure OpenAI](#tab/Azure-OpenAI) ++ 1. Go to `env` folder and update the following code in `./env/.env.local.user` file: ++ ```text + SECRET_AZURE_OPENAI_KEY=<your Azure OpenAI key> + SECRET_AZURE_OPENAI_ENDPOINT=<your Azure OpenAI Endpoint> + ``` ++ 1. Go to `teamsapp.local.yml` file and modify the last step to use Azure OpenAI variables: ++ ```yaml + - uses: file/createOrUpdateEnvironmentFile + with: + target: ./.env + envs: + BOT_ID: ${{BOT_ID}} + BOT_PASSWORD: ${{SECRET_BOT_PASSWORD}} + #OPENAI_KEY: ${{SECRET_OPENAI_KEY}} + AZURE_OPENAI_KEY: ${{SECRET_AZURE_OPENAI_KEY}} + AZURE_OPENAI_ENDPOINT: ${{SECRET_AZURE_OPENAI_ENDPOINT}} + ``` ++ 1. Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out: ++ ```bicep + // { + // name: 'OPENAI_KEY' + // value: openAIKey + // } + ``` ++ 1. Go to `infra` > `azure.parameters.json` and replace the lines from [20 to 25](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai-apps/d.assistants-mathBot/infra/azure.parameters.json#L20-L25) with the following code: ++ ```json + "azureOpenAIKey": { + "value": "${{SECRET_AZURE_OPENAI_KEY}}" + }, + "azureOpenAIEndpoint": { + "value": "${{SECRET_AZURE_OPENAI_ENDPOINT}}" + } + ``` + ++1. Copy the sample to a new directory that isn't a subdirectory of `teams-ai`. ++1. From the left pane, select **Teams Toolkit**. ++1. Under **ACCOUNTS**, sign in to the following: ++ * **Microsoft 365 account** + * **Azure account** ++1. To debug your app, select the **F5** key. ++ A browser tab opens a Teams web client requesting to add the bot to your tenant. ++1. Select **Add**. 
++ :::image type="content" source="../../../assets/images/bots/math-bot-sample-app-add.png" alt-text="Screenshot shows the option to add the app in Teams web client."::: ++ A chat window opens. ++1. In the message compose area, send a message to invoke the bot. ++ :::image type="content" source="../../../assets/images/bots/mathbot-output.png" alt-text="Screenshot shows an example of the mathbot output." lightbox="../../../assets/images/bots/mathbot-output.png"::: +++> [!NOTE] +> If you're building a bot for the first time, we recommend that you use the Teams Toolkit extension for Visual Studio Code to build your bot. For more information, see [build your first bot app using JavaScript](../../../sbs-gs-bot.yml). ++## Additional tools ++You can also use the following tools to run and set up a sample: ++1. **Teams Toolkit CLI**: You can use the Teams Toolkit CLI to create and manage Microsoft Teams apps from the command line. For more information, see [Teams Toolkit CLI set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/TEAMS-TOOLKIT-CLI.md). ++1. **Bot Framework Emulator**: The [Bot Framework Emulator](https://github.com/microsoft/BotFramework-Emulator) is a desktop application that allows you to test and debug your bot locally. You can connect to your bot by entering the bot's endpoint URL and Microsoft app ID and password. You can then send messages to your bot and see its responses in real time. For more information, see [Bot Framework Emulator set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/BOTFRAMEWORK-EMULATOR.md). ++1. **Manual setup**: If you prefer to set up your resources manually, you can do so by following the instructions provided by the respective services. For more information, see [manual set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/MANUAL-RESOURCE-SETUP.md). 
++## Next step ++> [!div class="nextstepaction"] +> [Assistants API](teams-conversation-ai-overview.md#assistants-api) |
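The cleanup steps in the quick start above amount to parsing each `.env` file and blanking out the values that Teams Toolkit auto-populates while keeping your own secrets. As a rough, stdlib-only illustration (the `parse_env` and `clear_generated` helpers are hypothetical, not part of Teams Toolkit or the sample):

```python
# Sketch of the manual .env cleanup described above (hypothetical helpers).
# Keep user-supplied secrets; blank auto-populated values so Teams Toolkit
# provisions fresh resources on the next run.

GENERATED_KEYS = {"BOT_ID", "SECRET_BOT_PASSWORD", "TEAMS_APP_UPDATE_TIME"}

def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, ignoring blank lines and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

def clear_generated(values: dict) -> dict:
    """Blank out auto-populated values so the Toolkit regenerates them."""
    return {k: ("" if k in GENERATED_KEYS else v) for k, v in values.items()}

env = parse_env("SECRET_OPENAI_KEY=sk-test\nBOT_ID=1234\n# comment\n")
cleaned = clear_generated(env)
```

Running this leaves `SECRET_OPENAI_KEY` untouched and empties `BOT_ID`, mirroring the "remove values or leave the values blank" guidance.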
platform | Conversation Ai Quick Start | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/teams-conversational-ai/conversation-ai-quick-start.md | + + Title: Teams AI Library - Lightbot Sample ++description: In this module, learn how to quickly try the Teams AI library using the LightBot sample, which creates apps that control lights. ++ms.localizationpriority: high +zone_pivot_groups: ai-library-quick-start + Last updated : 12/06/2022+++# Teams AI library quick start guide ++Get started with Teams AI library using the LightBot sample, which is designed to help you through the process of creating apps that can control lights, such as turning them on and off using Teams AI library. The bot uses the gpt-3.5-turbo model to chat with Microsoft Teams users and respond in a polite and respectful manner, staying within the scope of the conversation. +++## Prerequisites ++To get started, ensure that you have the following tools: ++| Install | For using... | +| | | +| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, and Python build environments. Use the latest version. | +| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.| +|[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. | +| [Node.js](https://nodejs.org/en/download/) | Back-end JavaScript runtime environment. 
For more information, see [Node.js version compatibility table for project type](~/toolkit/build-environments.md#nodejs-version-compatibility-table-for-project-type).| +| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.| +| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Microsoft Azure, you must create an Azure OpenAI service.| +| [Microsoft Edge](https://www.microsoft.com/edge/) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. | +| [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to a Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). | ++<br/> +If you've already run the samples before or encountered a runtime error, follow these steps to start fresh: ++* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you. +* If you don't want Teams Toolkit to generate the app ID and password, update the `BOT_ID` and `BOT_PASSWORD` in the `.env` file with your own values. +* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts. ++Teams Toolkit automatically provisions `BOT_ID` and `BOT_PASSWORD` resources. If you want to use your own resources, you need to manually add them to the `.env` file. 
Teams Toolkit doesn't auto-generate the following resources: ++* An Azure OpenAI or OpenAI key +* A database or similar storage options ++## Build and run the sample app ++Get started with Teams AI library using the LightBot sample. It lets you quickly run a Teams AI library-based sample on your computer's localhost. ++1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/js/samples). ++1. Run the following command to clone the repository: ++ ```cmd + git clone https://github.com/microsoft/teams-ai.git + ``` ++1. Go to **Visual Studio Code**. ++1. Select **File** > **Open Folder**. ++1. Go to the location where you cloned the teams-ai repo and select the **teams-ai** folder. ++1. Select **Select Folder**. ++ :::image type="content" source="../../../assets/images/bots/ai-library-dot-net-select-folder.png" alt-text="Screenshot shows the teams-ai folder and the Select Folder option."::: ++1. Select **View** > **Terminal**. A terminal window opens. ++1. In the terminal window, run the following command to go to the **js** folder: ++ ``` + cd .\js\ + ``` ++1. Run the following command to install dependencies: ++ ```terminal + yarn install + ``` ++1. Run the following command to build dependencies: ++ ```terminal + yarn build + ``` ++1. After the dependencies are installed, select **File** > **Open Folder**. ++1. Go to **teams-ai > js > samples > 03.ai-concepts > c.actionMapping-lightBot** and select **Select Folder**. All the files for the LightBot sample are listed under the **EXPLORER** section in Visual Studio Code. ++1. Update the following steps based on the AI services you select. ++ # [OpenAI key](#tab/OpenAI-key) ++ 1. Go to the `env` folder and update the following code in the `./env/.env.local.user` file: ++ ```text + SECRET_OPENAI_KEY=<your OpenAI key> + ``` + 1. 
Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out: ++ ```bicep + // { + // name: 'AZURE_OPENAI_KEY' + // value: azureOpenAIKey + // } + // { + // name: 'AZURE_OPENAI_ENDPOINT' + // value: azureOpenAIEndpoint + // } + ``` ++ # [Azure OpenAI](#tab/Azure-OpenAI) ++ 1. Go to the `env` folder and update the following code in the `./env/.env.local.user` file: ++ ```text + SECRET_AZURE_OPENAI_KEY=<your Azure OpenAI key> + SECRET_AZURE_OPENAI_ENDPOINT=<your Azure OpenAI Endpoint> + ``` ++ 1. Go to the `teamsapp.local.yml` file and modify the last step to use Azure OpenAI variables: ++ ```yaml + - uses: file/createOrUpdateEnvironmentFile + with: + target: ./.env + envs: + BOT_ID: ${{BOT_ID}} + BOT_PASSWORD: ${{SECRET_BOT_PASSWORD}} + #OPENAI_KEY: ${{SECRET_OPENAI_KEY}} + AZURE_OPENAI_KEY: ${{SECRET_AZURE_OPENAI_KEY}} + AZURE_OPENAI_ENDPOINT: ${{SECRET_AZURE_OPENAI_ENDPOINT}} + ``` ++ 1. Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out: ++ ```bicep + // { + // name: 'OPENAI_KEY' + // value: openAIKey + // } + ``` ++ 1. Go to `infra` > `azure.parameters.json` and replace the lines from [20 to 22](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/c.actionMapping-lightBot/infra/azure.parameters.json#L20-L22) with the following code: ++ ```json + "azureOpenAIKey": { + "value": "${{SECRET_AZURE_OPENAI_KEY}}" + }, + "azureOpenAIEndpoint": { + "value": "${{SECRET_AZURE_OPENAI_ENDPOINT}}" + } + ``` + ++1. From the left pane, select **Teams Toolkit**. ++1. Under **ACCOUNTS**, sign in to the following: ++ * **Microsoft 365 account** + * **Azure account** ++1. To debug your app, select the **F5** key. ++ A browser tab opens a Teams web client requesting to add the bot to your tenant. ++1. Select **Add**. 
++ :::image type="content" source="../../../assets/images/bots/lightbot-add.png" alt-text="Screenshot shows adding the LightBot app."::: ++ A chat window opens. ++1. In the message compose area, send a message to invoke the bot. ++ :::image type="content" source="../../../assets/images/bots/lightbot-output.png" alt-text="Screenshot shows an example of the LightBot output." lightbox="../../../assets/images/bots/lightbot-output.png"::: ++> [!NOTE] +> If you're building a bot for the first time, we recommend that you use the Teams Toolkit extension for Visual Studio Code to build your bot. For more information, see [build your first bot app using JavaScript](../../../sbs-gs-bot.yml). ++++## Prerequisites ++To get started, ensure that you have the following tools: ++| Install | For using... | +| | | +| [Visual Studio](https://visualstudio.microsoft.com/downloads/) | C Sharp build environments. Use the latest version. | +| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.| +|[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. | +| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.| +| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Microsoft Azure, you must create an Azure OpenAI service.| +| [Microsoft Edge](https://www.microsoft.com/edge/) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. 
| +| [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to a Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). | ++<br/> +If you've already run the samples before or encountered a runtime error, follow these steps to start fresh: ++* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you. +* If you don't want Teams Toolkit to generate the app ID and password, update the `MicrosoftAppId` and `MicrosoftAppPassword` in the `.env` file with your own values. +* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts. ++Teams Toolkit automatically provisions `MicrosoftAppId` and `MicrosoftAppPassword` resources. If you want to use your own resources, you need to manually add them to the `.env` file. Teams Toolkit doesn't auto-generate the following resources: ++* An Azure OpenAI or OpenAI key +* A database or similar storage options ++## Build and run the sample app ++1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples). ++1. Clone the repository to test the sample app. ++ ``` + git clone https://github.com/microsoft/teams-ai.git + ``` ++1. Go to the **dotnet** folder. ++ ``` + cd teams-ai/dotnet + ``` ++1. Go to the folder where you cloned the repository and select **04.ai.c.actionMapping.lightBot**. ++1. Select **LightBot.sln**. The solution opens in Visual Studio. ++1. In Visual Studio, update your OpenAI-related settings in the `appsettings.Development.json` file. 
++ ```json + "Azure": { + "OpenAIApiKey": "<your-azure-openai-api-key>", + "OpenAIEndpoint": "<your-azure-openai-endpoint>" + }, + ``` ++1. Go to `Prompts/sequence/skprompt.txt` and update the following code in the `skprompt.txt` file: ++ ```skprompt.txt + The following is a conversation with an AI assistant. + The assistant can turn a light on or off. + The assistant must return the following JSON structure: + + {"type":"plan","commands":[{"type":"DO","action":"<name>","entities":{"<name>":<value>}},{"type":"SAY","response":"<response>"}]} + + The following actions are supported: + + - LightsOn + - LightsOff + - Pause time=<duration in ms> + - LightStatus + + The lights are currently {{getLightStatus}}. + + Always respond in the form of a JSON based plan. Stick with DO/SAY. + ``` ++1. In the debug dropdown menu, select **Dev Tunnels** > **Create a Tunnel...**. ++ :::image type="content" source="../../../assets/images/bots/dotnet-ai-library-dev-tunnel.png" alt-text="Screenshot shows an example of the Dev Tunnel and Create a Tunnel option in Visual Studio."::: ++1. Select the **Account** to use to create the tunnel. Azure, Microsoft Account (MSA), and GitHub accounts are supported. Update the following options: + 1. **Name**: Enter a name for the tunnel. + 1. **Tunnel Type**: Select **Persistent** or **Temporary**. + 1. **Access**: Select **Public**. + 1. Select **OK**. Visual Studio displays a confirmation message that a tunnel is created. ++ The tunnel you created is listed under **Dev Tunnels > (name of the tunnel)**. ++1. Go to **Solution Explorer** and select your project. ++1. Right-click and select **Teams Toolkit** > **Prepare Teams App Dependencies**. ++ :::image type="content" source="../../../assets/images/bots/dotnet-ai-library-prepare-teams.png" alt-text="Screenshot shows an example of the Prepared Teams app Dependencies option under Teams Toolkit section in Visual Studio."::: ++ If prompted, sign in to your Microsoft 365 account. 
You receive a message that Teams app dependencies are successfully prepared. ++1. Select **OK**. ++1. Select **F5** or select **Debug** > **Start**. ++1. Select **Add**. The app is added to Teams and a chat window opens. ++ :::image type="content" source="../../../assets/images/bots/lightbot-add.png" alt-text="Screenshot shows adding the LightBot app."::: ++1. In the message compose area, send a message to invoke the bot. ++ :::image type="content" source="../../../assets/images/bots/lightbot-output.png" alt-text="Screenshot shows an example of the LightBot output."::: ++You can also deploy the samples to Azure using Teams Toolkit. To deploy, follow these steps: ++1. In Visual Studio, go to **Solution Explorer** and select your project. +1. Right-click and select **Teams Toolkit** > **Provision in the Cloud**. Teams Toolkit provisions your sample to Azure. +1. Right-click and select **Teams Toolkit** > **Deploy to the Cloud**. ++++## Prerequisites ++To get started, ensure that you have the following tools: ++| Install | For using... | +| | | +| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, and Python build environments. Use the latest version. | +| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.| +| [Python](https://www.python.org/) | Python is an interpreted and object-oriented programming language with dynamic semantics. Use a version between 3.8 and 4.0. | +| [Poetry](https://python-poetry.org/docs/#installing-with-pipx) | Dependency management and packaging tool for Python.| +| [Python VSCode Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) | Provides rich support for Python on VSCode. 
| +|[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. | +| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.| +| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Microsoft Azure, you must create an Azure OpenAI service.| +| [Microsoft Edge](https://www.microsoft.com/edge/) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. | +| [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to a Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). | ++<br/> +If you've already run the samples before or encountered a runtime error, follow these steps to start fresh: ++* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you. +* If you don't want Teams Toolkit to generate the app ID and password, update the `BOT_ID` and `BOT_PASSWORD` in the `.env` file with your own values. +* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts. ++Teams Toolkit automatically provisions `BOT_ID` and `BOT_PASSWORD` resources. If you want to use your own resources, you need to manually add them to the `.env` file. 
Teams Toolkit doesn't auto-generate the following resources: ++* An Azure OpenAI or OpenAI key +* A database or similar storage options ++## Build and run the sample app ++1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/python/samples). ++1. Clone the repository to test the sample app. ++ ``` + git clone https://github.com/microsoft/teams-ai.git + ``` ++1. Go to the **python** folder. ++ ``` + cd teams-ai/python + ``` ++1. Go to the folder where you cloned the repository and select **04.ai.c.actionMapping.lightBot**. All the files for the LightBot sample are listed under the **EXPLORER** section in Visual Studio Code. ++1. Under **EXPLORER**, duplicate the **sample.env** file and rename the duplicate file to **.env**. ++ # [OpenAI key](#tab/OpenAI-key2) ++ Go to the `env` folder and update the following code in the `./env/.env.local.user` file: ++ ```text + SECRET_OPENAI_KEY=<your OpenAI key> ++ ``` ++ # [Azure OpenAI](#tab/Azure-OpenAI2) ++ Go to the `env` folder and update the following code in the `./env/.env.local.user` file: ++ ```text + SECRET_AZURE_OPENAI_KEY=<your Azure OpenAI key> + SECRET_AZURE_OPENAI_ENDPOINT=<your Azure OpenAI Endpoint> ++ ``` ++ ++1. To install the following dependencies, go to **View** > **Terminal** and run the following commands: ++ |Dependencies |Command | + | | | + | python-dotenv | pip install python-dotenv | + | load-dotenv | pip install load-dotenv | + | teams-ai | pip install teams-ai | + | botbuilder-core | pip install botbuilder-core | ++1. Update `config.json` and `bot.py` with your model deployment name. ++1. Go to **View** > **Command Palette...** or select **Ctrl+Shift+P**. ++1. Enter **Python: Create Environment** to create a virtual environment. ++1. To debug your app, select the **F5** key. ++ A browser tab opens a Teams web client requesting to add the bot to your tenant. ++1. Select **Add**. 
++ :::image type="content" source="../../../assets/images/bots/lightbot-add.png" alt-text="Screenshot shows adding the LightBot app."::: ++ A chat window opens. ++1. In the message compose area, send a message to invoke the bot. ++ :::image type="content" source="../../../assets/images/bots/lightbot-output.png" alt-text="Screenshot shows an example of the LightBot output."::: +++## Additional tools ++You can also use the following tools to run and set up a sample: ++1. **Teams Toolkit CLI**: You can use the Teams Toolkit CLI to create and manage Teams apps from the command line. For more information, see [Teams Toolkit CLI set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/TEAMS-TOOLKIT-CLI.md). ++1. **Bot Framework Emulator**: The [Bot Framework Emulator](https://github.com/microsoft/BotFramework-Emulator) is a desktop application that allows you to test and debug your bot locally. You can connect to your bot by entering the bot's endpoint URL and Microsoft app ID and password. You can then send messages to your bot and see its responses in real time. For more information, see [Bot Framework Emulator set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/BOTFRAMEWORK-EMULATOR.md). ++1. **Manual setup**: If you prefer to set up your resources manually, you can do so by following the instructions provided by the respective services. For more information, see [manual set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/MANUAL-RESOURCE-SETUP.md). ++## Next step ++> [!div class="nextstepaction"] +> [Teams AI library FAQ](coversational-ai-faq.md) |
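The LightBot `skprompt.txt` shown earlier in this row instructs the model to answer with a JSON plan of `DO` and `SAY` commands. A small, library-free sketch of how such a response could be checked (the `validate_plan` helper is illustrative and is not part of the Teams AI library, which does its own plan parsing):

```python
import json

# Illustrative validator for the plan shape requested by skprompt.txt:
# {"type":"plan","commands":[{"type":"DO",...},{"type":"SAY",...}]}

SUPPORTED_ACTIONS = {"LightsOn", "LightsOff", "Pause", "LightStatus"}

def validate_plan(raw: str) -> list:
    """Parse a model response and return its commands, or raise ValueError."""
    plan = json.loads(raw)
    if plan.get("type") != "plan":
        raise ValueError("response is not a plan")
    commands = plan.get("commands", [])
    for cmd in commands:
        if cmd["type"] == "DO":
            # DO commands must name one of the actions the prompt supports.
            if cmd["action"] not in SUPPORTED_ACTIONS:
                raise ValueError(f"unsupported action: {cmd['action']}")
        elif cmd["type"] != "SAY":
            raise ValueError(f"unknown command type: {cmd['type']}")
    return commands

raw = ('{"type":"plan","commands":['
       '{"type":"DO","action":"LightsOn","entities":{}},'
       '{"type":"SAY","response":"Lights are on."}]}')
commands = validate_plan(raw)
```

Sticking with "DO/SAY", as the prompt says, keeps responses machine-parsable; anything else is rejected here as malformed.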
platform | Coversational Ai Faq | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/teams-conversational-ai/coversational-ai-faq.md | + + Title: Teams AI library FAQs ++description: In this module, learn more about Teams AI library frequently asked questions related to OpenAI models, Large Language Models (LLMs), and ODSL. ++ms.localizationpriority: high + Last updated : 04/07/2022+++# Teams AI library FAQ ++<br> +<details> +<summary>What does the Teams AI library do?</summary> ++Teams AI library provides abstractions for you to build robust applications that utilize OpenAI Large Language Models (LLMs). +<br> +</details> +</br> ++<details> +<summary>Does Microsoft provide a hosted version of OpenAI models that are used by the AI library?</summary> ++No, you need to have your Large Language Models (LLMs) hosted in Azure OpenAI or elsewhere. +<br> +</details> +</br> ++<details> +<summary>Can we use the AI library with other large language models apart from OpenAI?</summary> ++Yes, it's possible to use Teams AI library with other Large Language Models (LLMs). +<br> +</details> +</br> ++<details> +<summary>Does a developer need to do anything to benefit from LLMs? If yes, why?</summary> ++Yes, Teams AI library provides abstractions to simplify utilization of Large Language Models (LLMs) in conversational applications. However, you, the developer, must tweak the prompts, topic filters, and actions depending upon your scenarios. +<br> +</details> +</br> ++<details> +<summary>How does Teams AI library integrate with ODSL?</summary> ++The two are independent and can't be integrated. +<br> +</details> +</br> ++<details> +<summary>How does Teams AI library coexist with the hero story of developers building for the skills ecosystem in Microsoft 365?</summary> +</br> ++The Teams AI library story targets pro developers and is separate from the hero story around the skills ecosystem in Microsoft 365. 
+<br> +</details> +</br> ++<details> +<summary>How should information about the existing Bot Framework SDK be communicated after announcing a new version?</summary> ++Teams AI library works alongside the existing Bot Framework SDK and isn't a replacement. +<br> +</details> ++## See also ++[Teams AI library](teams-conversation-ai-overview.md) |
platform | How Conversation Ai Core Capabilities | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/teams-conversational-ai/how-conversation-ai-core-capabilities.md | + + Title: Core Capabilities of Teams AI Library +description: In this article, learn more about Teams AI library capabilities, bot logic, Adaptive Cards capabilities, and message extension query. +ms.localizationpriority: medium ++ Last updated : 05/24/2023+++# Teams AI library capabilities ++Teams AI library supports JavaScript and is designed to simplify the process of building bots that can interact with Microsoft Teams, and facilitates the migration of existing bots. The AI library supports the migration of messaging capabilities, Message extension (ME) capabilities, and Adaptive Cards capabilities to the new format. It's also possible to upgrade existing Teams apps with these features. ++Earlier, you used the BotBuilder SDK directly to create bots for Microsoft Teams. Teams AI library is designed to facilitate the construction of bots that can interact with Microsoft Teams. While one of the key features of Teams AI library is its AI support, your initial objective might be to upgrade your current bot without AI. After you upgrade, the bot can connect to AI or Large Language Models (LLMs) available in the AI library. ++Teams AI library supports the following capabilities: ++* [Send or receive message](#send-or-receive-message) ++* [Message extension (ME) capabilities](#message-extensions) ++* [Adaptive Cards capabilities](#adaptive-cards-capabilities) ++ You need to use the AI library to scaffold bot and Adaptive Card handlers to the source file. ++In the following sections, we've used the samples from the [AI library](https://github.com/microsoft/teams-ai/tree/main) to explain each capability and the path to migration: ++## Send or receive message ++You can send and receive messages using the Bot Framework. 
The app listens for the user to send a message, and when it receives this message, it deletes the conversation state and sends a message back to the user. The app also keeps track of the number of messages received in a conversation and echoes back the user’s message with a count of messages received so far. ++# [.NET](#tab/dotnet6) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/01.messaging.echoBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/01.messaging.echoBot/Program.cs#L49) ++```csharp + // Listen for user to say "/reset" and then delete conversation state + app.OnMessage("/reset", ActivityHandlers.ResetMessageHandler); ++ // Listen for ANY message to be received. MUST BE AFTER ANY OTHER MESSAGE HANDLERS + app.OnActivity(ActivityTypes.Message, ActivityHandlers.MessageHandler); ++ return app; +``` ++# [JavaScript](#tab/javascript6) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/01.getting-started/a.echoBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/01.getting-started/a.echoBot/src/index.ts#L74) ++```typescript +// Listen for user to say '/reset' and then delete conversation state +app.message('/reset', async (context: TurnContext, state: ApplicationTurnState) => { + state.deleteConversationState(); + await context.sendActivity(`Ok I've deleted the current conversation state.`); +}); ++// Listen for ANY message to be received. MUST BE AFTER ANY OTHER MESSAGE HANDLERS +app.activity(ActivityTypes.Message, async (context: TurnContext, state: ApplicationTurnState) => { + // Increment count state + let count = state.conversation.count ?? 
0; + state.conversation.count = ++count; ++ // Echo back users request + await context.sendActivity(`[${count}] you said: ${context.activity.text}`); +}); +``` ++# [Python](#tab/python6) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/01.messaging.a.echoBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/01.messaging.a.echoBot/src/bot.py#L25) ++```python +@app.activity("message") +async def on_message(context: TurnContext, _state: TurnState): + await context.send_activity(f"you said: {context.activity.text}") + return True +``` ++++## Message extensions ++In the Bot Framework SDK's `TeamsActivityHandler`, you needed to set up the Message extensions query handler by extending handler methods. The app listens for search actions and item taps, and formats the search results as a list of HeroCards displaying package information. The result is used to display the search results in the messaging extension. ++# [.NET](#tab/dotnet5) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/Program.cs#L47) ++* [Search results reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/ActivityHandlers.cs#L39) ++```csharp +// Listen for search actions + app.MessageExtensions.OnQuery("searchCmd", activityHandlers.QueryHandler); + // Listen for item tap + app.MessageExtensions.OnSelectItem(activityHandlers.SelectItemHandler); ++ return app; ++ // Format search results in ActivityHandlers.cs ++ List<MessagingExtensionAttachment> attachments = packages.Select(package => new MessagingExtensionAttachment + { + ContentType = HeroCard.ContentType, + Content = new HeroCard + { + Title = package.Id, + Text = package.Description + }, + Preview = new HeroCard + { + Title = 
package.Id, + Text = package.Description, + Tap = new CardAction + { + Type = "invoke", + Value = package + } + }.ToAttachment() + }).ToList(); ++ // Return results as a list ++ return new MessagingExtensionResult + { + Type = "result", + AttachmentLayout = "list", + Attachments = attachments + }; ++``` ++# [JavaScript](#tab/javascript5) ++Now, the app class has `messageExtensions` features to simplify creating the handlers: ++* `context`: `TurnContext` +* `state`: `TurnState` +* `query`: The data passed from message extension interaction ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/a.messageExtensions.searchCommand) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/02.teams-features/a.messageExtensions.searchCommand/src/index.ts#L79) ++```javascript +import { MessagingExtensionAttachment } from "botbuilder"; +import axios from 'axios'; +import { Application } from '@microsoft/teams-ai'; ++// Listen for search actions +app.messageExtensions.query('searchCmd', async (context: TurnContext, state: TurnState, query) => { + const searchQuery = query.parameters.queryText ?? ''; + const count = query.count ?? 
10; + const response = await axios.get( + `http://registry.npmjs.com/-/v1/search?${new URLSearchParams({ + size: count.toString(), + text: searchQuery + }).toString()}` + ); ++ // Format search results + const results: MessagingExtensionAttachment[] = []; + response?.data?.objects?.forEach((obj: any) => results.push(createNpmSearchResultCard(obj.package))); ++ // Return results as a list + return { + attachmentLayout: 'list', + attachments: results, + type: 'result' + }; +}); +``` ++Similarly, `selectItem` listener would be set up as: ++```typescript +app.messageExtensions.selectItem(async (context: TurnContext, state: TurnState, item) => { + // Generate detailed result + const card = createNpmPackageCard(item); ++ // Return results + return { + attachmentLayout: 'list', + attachments: [card], + type: 'result' + }; +}); +``` ++# [Python](#tab/python5) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/02.messageExtensions.a.searchCommand) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/02.messageExtensions.a.searchCommand/src/bot.py#L44) ++```python +@app.message_extensions.query("searchCmd") +async def search_command( + _context: TurnContext, _state: AppTurnState, query: MessagingExtensionQuery +) -> MessagingExtensionResult: + query_dict = query.as_dict() + search_query = "" + if query_dict["parameters"] is not None and len(query_dict["parameters"]) > 0: + for parameter in query_dict["parameters"]: + if parameter["name"] == "queryText": + search_query = parameter["value"] + break + count = query_dict["query_options"]["count"] if query_dict["query_options"]["count"] else 10 + url = "http://registry.npmjs.com/-/v1/search?" 
+ params = {"size": count, "text": search_query} ++ async with aiohttp.ClientSession() as session: + async with session.get(url, params=params) as response: + res = await response.json() ++ results: List[MessagingExtensionAttachment] = [] ++ for obj in res["objects"]: + results.append(create_npm_search_result_card(result=obj["package"])) ++ return MessagingExtensionResult( + attachment_layout="list", attachments=results, type="result" + ) +``` ++++## Adaptive Cards capabilities ++You can register Adaptive Card action handlers using the `app.adaptiveCards` property. The app listens for messages containing the keywords `static` or `dynamic` and returns an Adaptive Card using the `StaticMessageHandler` or `DynamicMessageHandler` methods. The app also listens for queries from a dynamic search card and for submit buttons on the Adaptive Cards. ++# [.NET](#tab/dotnet4) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot/Program.cs#L52) ++```csharp +// Listen for messages that trigger returning an adaptive card + app.OnMessage(new Regex(@"static", RegexOptions.IgnoreCase), activityHandlers.StaticMessageHandler); + app.OnMessage(new Regex(@"dynamic", RegexOptions.IgnoreCase), activityHandlers.DynamicMessageHandler); ++ // Listen for query from dynamic search card + app.AdaptiveCards.OnSearch("nugetpackages", activityHandlers.SearchHandler); + // Listen for submit buttons + app.AdaptiveCards.OnActionSubmit("StaticSubmit", activityHandlers.StaticSubmitHandler); + app.AdaptiveCards.OnActionSubmit("DynamicSubmit", activityHandlers.DynamicSubmitHandler); ++ // Listen for ANY message to be received. 
MUST BE AFTER ANY OTHER HANDLERS + app.OnActivity(ActivityTypes.Message, activityHandlers.MessageHandler); ++ return app; +``` ++# [JavaScript](#tab/javascript4) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/b.adaptiveCards.typeAheadBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/02.teams-features/b.adaptiveCards.typeAheadBot/src/index.ts#L86) ++```javascript +// Listen for messages that trigger returning an adaptive card +app.message(/dynamic/i, async (context, _state) => { + const attachment = createDynamicSearchCard(); + await context.sendActivity({ attachments: [attachment] }); +}); ++app.message(/static/i, async (context, _state) => { + const attachment = createStaticSearchCard(); + await context.sendActivity({ attachments: [attachment] }); +}); ++// Listener for action.submit on cards from the user ++interface SubmitData { + choiceSelect?: string; +} ++// Listen for submit buttons +app.adaptiveCards.actionSubmit('DynamicSubmit', async (context, _state, data: SubmitData) => { + await context.sendActivity(`Dynamically selected option is: ${data.choiceSelect}`); +}); ++app.adaptiveCards.actionSubmit('StaticSubmit', async (context, _state, data: SubmitData) => { + await context.sendActivity(`Statically selected option is: ${data.choiceSelect}`); +}); +``` ++# [Python](#tab/python4) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/03.adaptiveCards.a.typeAheadBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/03.adaptiveCards.a.typeAheadBot/src/bot.py#L39C1-L78C1) ++```python +@app.message(re.compile(r"static", re.IGNORECASE)) +async def static_card(context: TurnContext, _state: AppTurnState) -> bool: + attachment = create_static_search_card() + await context.send_activity(Activity(attachments=[attachment])) + return True ++@app.adaptive_cards.action_submit("StaticSubmit") +async def 
on_static_submit(context: TurnContext, _state: AppTurnState, data) -> None: + await context.send_activity(f'Statically selected option is: {data["choiceSelect"]}') ++@app.adaptive_cards.action_submit("DynamicSubmit") +async def on_dynamic_submit(context: TurnContext, _state: AppTurnState, data) -> None: + await context.send_activity(f'Dynamically selected option is: {data["choiceSelect"]}') ++@app.message(re.compile(r"dynamic", re.IGNORECASE)) +async def dynamic_card(context: TurnContext, _state: AppTurnState) -> bool: + attachment = create_dynamic_search_card() + await context.send_activity(Activity(attachments=[attachment])) + return True +``` ++++## Core capabilities ++### Bot logic for handling an action ++The bot responds to the user's input with the action `LightsOn` to turn the lights on. ++The following example illustrates how Teams AI library makes it possible to manage the bot logic for handling an action `LightsOn` or `LightsOff` and connect it to the prompt used with OpenAI: ++# [.NET](#tab/dotnet3) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.c.actionMapping.lightBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/Program.cs#L33) ++* [Actions sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/LightBotActions.cs#L10) ++```csharp +// Create AI Model +if (!string.IsNullOrEmpty(config.OpenAI?.ApiKey)) +{ + builder.Services.AddSingleton<OpenAIModel>(sp => new( + new OpenAIModelOptions(config.OpenAI.ApiKey, "gpt-3.5-turbo") + { + LogRequests = true + }, + sp.GetService<ILoggerFactory>() + )); +} +else if (!string.IsNullOrEmpty(config.Azure?.OpenAIApiKey) && !string.IsNullOrEmpty(config.Azure.OpenAIEndpoint)) +{ + builder.Services.AddSingleton<OpenAIModel>(sp => new( + new AzureOpenAIModelOptions( + config.Azure.OpenAIApiKey, + "gpt-35-turbo", + config.Azure.OpenAIEndpoint + ) + { 
+ LogRequests = true + }, + sp.GetService<ILoggerFactory>() + )); +} +else +{ + throw new Exception("please configure settings for either OpenAI or Azure"); +} ++// Create the bot as transient. In this case the ASP Controller is expecting an IBot. +builder.Services.AddTransient<IBot>(sp => +{ + // Create loggers + ILoggerFactory loggerFactory = sp.GetService<ILoggerFactory>()!; ++ // Create Prompt Manager + PromptManager prompts = new(new() + { + PromptFolder = "./Prompts" + }); ++ // Adds function to be referenced in the prompt template + prompts.AddFunction("getLightStatus", async (context, memory, functions, tokenizer, args) => + { + bool lightsOn = (bool)(memory.GetValue("conversation.lightsOn") ?? false); + return await Task.FromResult(lightsOn ? "on" : "off"); + }); ++ // Create ActionPlanner + ActionPlanner<AppState> planner = new( + options: new( + model: sp.GetService<OpenAIModel>()!, + prompts: prompts, + defaultPrompt: async (context, state, planner) => + { + PromptTemplate template = prompts.GetPrompt("sequence"); + return await Task.FromResult(template); + } + ) + { LogRepairs = true }, + loggerFactory: loggerFactory + ); ++ return new TeamsLightBot(new() + { + Storage = sp.GetService<IStorage>(), + AI = new(planner), + LoggerFactory = loggerFactory, + TurnStateFactory = () => + { + return new AppState(); + } + }); +}); ++// LightBotActions defined in LightBotActions.cs + +[Action("LightsOn")] + public async Task<string> LightsOn([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState) + { + turnState.Conversation.LightsOn = true; + await turnContext.SendActivityAsync(MessageFactory.Text("[lights on]")); + return "the lights are now on"; + } ++ [Action("LightsOff")] + public async Task<string> LightsOff([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState) + { + turnState.Conversation.LightsOn = false; + await turnContext.SendActivityAsync(MessageFactory.Text("[lights off]")); + return "the 
lights are now off"; + } ++ [Action("Pause")] + public async Task<string> Pause([ActionTurnContext] ITurnContext turnContext, [ActionParameters] Dictionary<string, object> args) + { + // Try to parse entities returned by the model. + // Expecting "time" to be a number of milliseconds to pause. + if (args.TryGetValue("time", out object? time)) + { + if (time != null && time is string timeString) + { + if (int.TryParse(timeString, out int timeInt)) + { + await turnContext.SendActivityAsync(MessageFactory.Text($"[pausing for {timeInt / 1000} seconds]")); + await Task.Delay(timeInt); + } + } + } ++ return "done pausing"; + } ++``` ++# [JavaScript](#tab/javascript3) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/c.actionMapping-lightBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/c.actionMapping-lightBot/src/index.ts#L87) ++```typescript ++// Create AI components +const model = new OpenAIModel({ + // OpenAI Support + apiKey: process.env.OPENAI_KEY!, + defaultModel: 'gpt-3.5-turbo', ++ // Azure OpenAI Support + azureApiKey: process.env.AZURE_OPENAI_KEY!, + azureDefaultDeployment: 'gpt-3.5-turbo', + azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!, + azureApiVersion: '2023-03-15-preview', ++ // Request logging + logRequests: true +}); ++const prompts = new PromptManager({ + promptsFolder: path.join(__dirname, '../src/prompts') +}); ++const planner = new ActionPlanner({ + model, + prompts, + defaultPrompt: 'sequence', +}); ++// Define storage and application +const storage = new MemoryStorage(); +const app = new Application<ApplicationTurnState>({ + storage, + ai: { + planner + } +}); ++// Define a prompt function for getting the current status of the lights +planner.prompts.addFunction('getLightStatus', async (context: TurnContext, memory: Memory) => { + return memory.getValue('conversation.lightsOn') ? 
'on' : 'off'; +}); ++// Register action handlers +app.ai.action('LightsOn', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = true; + await context.sendActivity(`[lights on]`); + return `the lights are now on`; +}); ++app.ai.action('LightsOff', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = false; + await context.sendActivity(`[lights off]`); + return `the lights are now off`; +}); ++interface PauseParameters { + time: number; +} ++app.ai.action('Pause', async (context: TurnContext, state: ApplicationTurnState, parameters: PauseParameters) => { + await context.sendActivity(`[pausing for ${parameters.time / 1000} seconds]`); + await new Promise((resolve) => setTimeout(resolve, parameters.time)); + return `done pausing`; +}); ++``` ++# [Python](#tab/python3) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.c.actionMapping.lightBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L35) ++```python +# Create AI components +model: OpenAIModel ++if config.OPENAI_KEY: + model = OpenAIModel( + OpenAIModelOptions(api_key=config.OPENAI_KEY, default_model="gpt-3.5-turbo") + ) +elif config.AZURE_OPENAI_KEY and config.AZURE_OPENAI_ENDPOINT: + model = OpenAIModel( + AzureOpenAIModelOptions( + api_key=config.AZURE_OPENAI_KEY, + default_model="gpt-35-turbo", + api_version="2023-03-15-preview", + endpoint=config.AZURE_OPENAI_ENDPOINT, + ) + ) +``` ++++### Message extension query ++The Teams AI library offers a more intuitive approach to creating handlers for various message-extension query commands compared to previous iterations of the Bot Framework SDK. The new SDK works alongside the existing Teams Bot Framework SDK. ++The following is an example of how you can structure your code to handle a message-extension query for the `searchCmd` command. 
++# [.NET](#tab/dotnet2) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) ++* [Search actions sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/Program.cs#L47) ++* [Search results sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/ActivityHandlers.cs#L39) ++```csharp +// Listen for search actions + app.MessageExtensions.OnQuery("searchCmd", activityHandlers.QueryHandler); + // Listen for item tap + app.MessageExtensions.OnSelectItem(activityHandlers.SelectItemHandler); ++ return app; ++ // Format search results + List<MessagingExtensionAttachment> attachments = packages.Select(package => new MessagingExtensionAttachment + { + ContentType = HeroCard.ContentType, + Content = new HeroCard + { + Title = package.Id, + Text = package.Description + }, + Preview = new HeroCard + { + Title = package.Id, + Text = package.Description, + Tap = new CardAction + { + Type = "invoke", + Value = package + } + }.ToAttachment() + }).ToList(); ++ return new MessagingExtensionResult + { + Type = "result", + AttachmentLayout = "list", + Attachments = attachments + }; +``` ++# [JavaScript](#tab/javascript2) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/a.messageExtensions.searchCommand) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/02.teams-features/a.messageExtensions.searchCommand/src/index.ts#L78) ++```typescript ++// Listen for search actions +app.messageExtensions.query('searchCmd', async (context, state, query) => { + const searchQuery = query.parameters.queryText ?? ''; + const count = query.count ?? 
10; + const response = await axios.get( + `http://registry.npmjs.com/-/v1/search?${new URLSearchParams({ + size: count.toString(), + text: searchQuery + }).toString()}` + ); +++ // Format search results + const results: MessagingExtensionAttachment[] = []; + response?.data?.objects?.forEach((obj: any) => results.push(createNpmSearchResultCard(obj.package))); +++ // Return results as a list + return { + attachmentLayout: 'list', + attachments: results, + type: 'result' + }; +}); +``` ++Here's how you can return a card when a message-extension result is selected: ++```typescript +// Listen for item tap +app.messageExtensions.selectItem(async (context, state, item) => { + // Generate detailed result + const card = createNpmPackageCard(item); +++ // Return results + return { + attachmentLayout: 'list', + attachments: [card], + type: 'result' + }; +}); ++``` ++# [Python](#tab/python2) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/02.messageExtensions.a.searchCommand) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/02.messageExtensions.a.searchCommand/src/bot.py#L44) ++```python +@app.message_extensions.query("searchCmd") +async def search_command( + _context: TurnContext, _state: AppTurnState, query: MessagingExtensionQuery +) -> MessagingExtensionResult: + query_dict = query.as_dict() + search_query = "" + if query_dict["parameters"] is not None and len(query_dict["parameters"]) > 0: + for parameter in query_dict["parameters"]: + if parameter["name"] == "queryText": + search_query = parameter["value"] + break + count = query_dict["query_options"]["count"] if query_dict["query_options"]["count"] else 10 + url = "http://registry.npmjs.com/-/v1/search?" 
+ params = {"size": count, "text": search_query} ++ async with aiohttp.ClientSession() as session: + async with session.get(url, params=params) as response: + res = await response.json() ++ results: List[MessagingExtensionAttachment] = [] ++ for obj in res["objects"]: + results.append(create_npm_search_result_card(result=obj["package"])) ++ return MessagingExtensionResult( + attachment_layout="list", attachments=results, type="result" + ) +++# Listen for item tap +@app.message_extensions.select_item() +async def select_item(_context: TurnContext, _state: AppTurnState, item: Any): + card = create_npm_package_card(item) ++ return MessagingExtensionResult(attachment_layout="list", attachments=[card], type="result") +``` ++++## Intents to actions ++A simple interface for actions and predictions allows bots to react when they have high confidence for taking action. Ambient presence lets bots learn intent, use prompts based on business logic, and generate responses. ++Thanks to our AI library, the prompt needs only to outline the actions supported by the bot, and supply a few-shot example of how to employ those actions. Conversation history helps with a natural dialogue between the user and bot, such as *add cereal to groceries list*, followed by *also add coffee*, which should indicate that coffee is to be added to the groceries list. ++The following is a conversation with an AI assistant. The AI assistant is capable of managing lists and recognizes the following commands: ++* DO `<action> <optional entities>` +* SAY `<response>` ++The following actions are supported: ++* `addItem list="<list name>" item="<text>"` +* `removeItem list="<list name>" item="<text>"` +* `summarizeLists` ++All entities are required parameters to actions. 
++* Current list names: ++ ``` + {{conversation.listNames}} + ``` ++ ```text ++ Examples: ++ Human: remind me to buy milk + AI: DO addItem list="groceries" item="milk" THEN SAY Ok I added milk to your groceries list + Human: we already have milk + AI: DO removeItem list="groceries" item="milk" THEN SAY Ok I removed milk from your groceries list + Human: buy ingredients to make margaritas + AI: DO addItem list="groceries" item="tequila" THEN DO addItem list="groceries" item="orange liqueur" THEN DO addItem list="groceries" item="lime juice" THEN SAY Ok I added tequila, orange liqueur, and lime juice to your groceries list + Human: do we have milk + AI: DO findItem list="groceries" item="milk" + Human: what's in my grocery list + AI: DO summarizeLists + Human: what's the contents of all my lists? + AI: DO summarizeLists + Human: show me all lists but change the title to Beach Party + AI: DO summarizeLists + Human: show me all lists as a card and sort the lists alphabetically + AI: DO summarizeLists ++ ``` ++* Conversation history: ++ ``` + {{conversation.history}} + ``` ++* Current query: ++ ``` + Human: {{activity.text}} + ``` ++* Current list names: ++ ```javascript + {{conversation.listNames}} + ``` ++* AI: The bot logic is streamlined to include handlers for actions such as `addItem` and `removeItem`. This distinct separation between the actions and the prompts that guide the AI on how to execute them serves as a powerful tool. 
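The DO/SAY plan format above is plain text that the model returns, and the Teams AI library's planner parses it internally. As a rough illustration only (the `parse_plan` helper below is hypothetical, not a library API, and ignores edge cases such as `THEN` appearing inside a SAY response), a plan line from the few-shot examples could be split into steps like this:

```python
import re

def parse_plan(response: str):
    """Naively split a plan such as
    'DO addItem list="groceries" item="milk" THEN SAY Ok I added milk ...'
    into (command, payload) steps."""
    steps = []
    for part in response.split(" THEN "):
        part = part.strip()
        if part.startswith("DO "):
            body = part[3:]
            action = body.split(" ", 1)[0]
            # Entities appear after the action name as key="value" pairs.
            entities = dict(re.findall(r'(\w+)="([^"]*)"', body))
            steps.append(("DO", {"action": action, "entities": entities}))
        elif part.startswith("SAY "):
            steps.append(("SAY", part[4:]))
    return steps
```

This is only meant to show why a few-shot prompt plus conversation history is enough: the model emits a constrained, machine-readable plan, and the library maps each DO step onto the registered action handlers shown in the samples that follow.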
++# [.NET](#tab/dotnet1) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.d.chainedActions.listBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.d.chainedActions.listBot/ListBotActions.cs#L40) ++```csharp + [Action("AddItem")] + public string AddItem([ActionTurnState] ListState turnState, [ActionParameters] Dictionary<string, object> parameters) + { + ArgumentNullException.ThrowIfNull(turnState); + ArgumentNullException.ThrowIfNull(parameters); ++ string listName = GetParameterString(parameters, "list"); + string item = GetParameterString(parameters, "item"); ++ IList<string> items = GetItems(turnState, listName); + items.Add(item); + SetItems(turnState, listName, items); ++ return "item added. think about your next action"; + } ++ [Action("RemoveItem")] + public async Task<string> RemoveItem([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] ListState turnState, [ActionParameters] Dictionary<string, object> parameters) + { + ArgumentNullException.ThrowIfNull(turnContext); + ArgumentNullException.ThrowIfNull(turnState); + ArgumentNullException.ThrowIfNull(parameters); ++ string listName = GetParameterString(parameters, "list"); + string item = GetParameterString(parameters, "item"); ++ IList<string> items = GetItems(turnState, listName); ++ if (!items.Contains(item)) + { + await turnContext.SendActivityAsync(ResponseBuilder.ItemNotFound(listName, item)).ConfigureAwait(false); + return "item not found. think about your next action"; + } ++ items.Remove(item); + SetItems(turnState, listName, items); + return "item removed. 
think about your next action"; + } +``` ++# [JavaScript](#tab/javascript1) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/d.chainedActions-listBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/d.chainedActions-listBot/src/index.ts#L161) ++```typescript + app.ai.action('addItems', async (context: TurnContext, state: ApplicationTurnState, parameters: ListAndItems) => { + const items = getItems(state, parameters.list); + items.push(...(parameters.items ?? [])); + setItems(state, parameters.list, items); + return `items added. think about your next action`; + }); ++ app.ai.action('removeItems', async (context: TurnContext, state: ApplicationTurnState, parameters: ListAndItems) => { + const items = getItems(state, parameters.list); + (parameters.items ?? []).forEach((item: string) => { + const index = items.indexOf(item); + if (index >= 0) { + items.splice(index, 1); + } + }); + setItems(state, parameters.list, items); + return `items removed. think about your next action`; + }); +``` ++# [Python](#tab/python1) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.d.chainedActions.listBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.d.chainedActions.listBot/src/bot.py#L96C1-L123C57) ++```python +@app.ai.action("addItems") +async def on_add_items( + context: ActionTurnContext[Dict[str, Any]], + state: AppTurnState, +): + parameters = ListAndItems.from_dict(context.data, infer_missing=True) + state.ensure_list_exists(parameters.list) + items = state.conversation.lists[parameters.list] + if parameters.items is not None: + for item in parameters.items: + items.append(item) + state.conversation.lists[parameters.list] = items + return "items added. 
think about your next action" ++@app.ai.action("removeItems") +async def on_remove_items( + context: ActionTurnContext[Dict[str, Any]], + state: AppTurnState, +): + parameters = ListAndItems.from_dict(context.data, infer_missing=True) + state.ensure_list_exists(parameters.list) + items = state.conversation.lists[parameters.list] + if parameters.items is not None and len(parameters.items) > 0: + for item in parameters.items: + if item in items: + items.remove(item) + state.conversation.lists[parameters.list] = items + return "items removed. think about your next action" +``` ++++## Next step ++> [!div class="nextstepaction"] +> [Get started with Teams AI library](how-conversation-ai-get-started.md) |
platform | How Conversation Ai Get Started | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/teams-conversational-ai/how-conversation-ai-get-started.md | + + Title: Use Teams AI Library to Build Apps/Bots +description: In this article, learn how to create an app using Teams AI library with AI component, storage, register data source, prompts, and actions. +ms.localizationpriority: medium ++ Last updated : 11/27/2023+++# Get started with Teams AI library ++Teams AI library streamlines the process of building intelligent Microsoft Teams applications by using AI components. It provides APIs to access and manipulate data, as well as a range of controls and components to create custom user interfaces. ++You can easily integrate Teams AI library, prompt management, and safety moderation into your apps and enhance the user experience. It also facilitates the creation of bots that use an OpenAI API key or Azure OpenAI to provide an AI-driven conversational experience. ++## Initial setup ++Teams AI library is built on top of the Bot Framework SDK and uses its fundamentals to offer an extension to the Bot Framework SDK capabilities. As part of the initial setup, it's important to import the Bot Framework SDK functionalities. ++> [!NOTE] +> The adapter class that handles connectivity with the channels is imported from [Bot Framework SDK](/azure/bot-service/bot-builder-basics?view=azure-bot-service-4.0&preserve-view=true#the-bot-adapter). 
++# [.NET](#tab/dotnet1) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.a.teamsChefBot/Program.cs) ++```csharp +using Microsoft.Teams.AI; +using Microsoft.Bot.Builder; +using Microsoft.Bot.Builder.Integration.AspNet.Core; +using Microsoft.Bot.Connector.Authentication; +using Microsoft.TeamsFx.Conversation; ++var builder = WebApplication.CreateBuilder(args); ++builder.Services.AddControllers(); +builder.Services.AddHttpClient("WebClient", client => client.Timeout = TimeSpan.FromSeconds(600)); +builder.Services.AddHttpContextAccessor(); ++// Prepare Configuration for ConfigurationBotFrameworkAuthentication +var config = builder.Configuration.Get<ConfigOptions>(); +builder.Configuration["MicrosoftAppType"] = "MultiTenant"; +builder.Configuration["MicrosoftAppId"] = config.BOT_ID; +builder.Configuration["MicrosoftAppPassword"] = config.BOT_PASSWORD; ++// Create the Bot Framework Authentication to be used with the Bot Adapter. +builder.Services.AddSingleton<BotFrameworkAuthentication, ConfigurationBotFrameworkAuthentication>(); ++// Create the Cloud Adapter with error handling enabled. +// Note: some classes expect a BotAdapter and some expect a BotFrameworkHttpAdapter, so +// register the same adapter instance for all types. +builder.Services.AddSingleton<CloudAdapter, AdapterWithErrorHandler>(); +builder.Services.AddSingleton<IBotFrameworkHttpAdapter>(sp => sp.GetService<CloudAdapter>()); +builder.Services.AddSingleton<BotAdapter>(sp => sp.GetService<CloudAdapter>()); +``` ++# [JavaScript](#tab/javascript4) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai-apps/a.teamsChefBot/src/index.ts#L9) ++```javascript +// Import required bot services. +// See https://aka.ms/bot-services to learn more about the different parts of a bot. 
+import { + CloudAdapter, + ConfigurationBotFrameworkAuthentication, + ConfigurationServiceClientCredentialFactory, + MemoryStorage, + TurnContext +} from 'botbuilder'; ++// Import path and dotenv's config helper; both are used below. +import * as path from 'path'; +import { config } from 'dotenv'; ++// Read botFilePath and botFileSecret from .env file. +const ENV_FILE = path.join(__dirname, '..', '.env'); +config({ path: ENV_FILE }); ++const botFrameworkAuthentication = new ConfigurationBotFrameworkAuthentication( + {}, + new ConfigurationServiceClientCredentialFactory({ + MicrosoftAppId: process.env.BOT_ID, + MicrosoftAppPassword: process.env.BOT_PASSWORD, + MicrosoftAppType: 'MultiTenant' + }) +); ++// Create adapter. +// See https://aka.ms/about-bot-adapter to learn more about how bots work. +const adapter = new CloudAdapter(botFrameworkAuthentication); ++``` ++# [Python](#tab/python4) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/01.messaging.a.echoBot/src/bot.py#L8C1-L23C2) ++```python +import sys +import traceback ++from botbuilder.core import TurnContext +from teams import Application, ApplicationOptions, TeamsAdapter +from teams.state import TurnState ++from config import Config ++config = Config() +app = Application[TurnState]( + ApplicationOptions( + bot_app_id=config.APP_ID, + adapter=TeamsAdapter(config), + ) +) +``` ++++### Import Teams AI library ++Import all the classes from `@microsoft/teams-ai` to build your bot and use the Teams AI library capabilities. ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai-apps/a.teamsChefBot/src/index.ts#L13) ++```javascript +// import Teams AI library +import { + AI, + Application, + ActionPlanner, + OpenAIModerator, + OpenAIModel, + PromptManager, + TurnState +} from '@microsoft/teams-ai'; +import { addResponseFormatter } from './responseFormatter'; +import { VectraDataSource } from './VectraDataSource'; +``` ++## Create AI components ++Add AI capabilities to your existing app or a new Bot Framework app. 
++**OpenAIModel**: The OpenAIModel class provides a way to access the OpenAI API or any other service that adheres to the OpenAI REST format. It's compatible with both OpenAI and Azure OpenAI language models. ++**Prompt manager**: The prompt manager manages prompt creation. It calls functions from your code and injects their values into the prompt. It copies the conversation state and the user state into the prompt for you automatically. ++**ActionPlanner**: The ActionPlanner is the main component calling your Large Language Model (LLM) and includes several features to enhance and customize your model. It's responsible for generating and executing plans based on the user's input and the available actions. ++# [.NET](#tab/dotnet2) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/Program.cs#L33). ++```csharp + // Create model + + OpenAIModel? model = null; + + if (!string.IsNullOrEmpty(config.OpenAI?.ApiKey)) + { + model = new(new OpenAIModelOptions(config.OpenAI.ApiKey, "gpt-3.5-turbo")); + } + else if (!string.IsNullOrEmpty(config.Azure?.OpenAIApiKey) && !string.IsNullOrEmpty(config.Azure.OpenAIEndpoint)) + { + model = new(new AzureOpenAIModelOptions( + config.Azure.OpenAIApiKey, + "gpt-35-turbo", + config.Azure.OpenAIEndpoint + )); + } + + if (model == null) + { + throw new Exception("please configure settings for either OpenAI or Azure"); + } ++ // Create prompt manager + PromptManager prompts = new(new() + { + PromptFolder = "./Prompts", + }); ++ // Add function to be referenced in the prompt template ++ prompts.AddFunction("getLightStatus", async (context, memory, functions, tokenizer, args) => + { + bool lightsOn = (bool)(memory.GetValue("conversation.lightsOn") ?? false); + return await Task.FromResult(lightsOn ? 
"on" : "off"); + }); ++ // Create ActionPlanner + ActionPlanner<AppState> planner = new( + options: new( + model: model, + prompts: prompts, + defaultPrompt: async (context, state, planner) => + { + PromptTemplate template = prompts.GetPrompt("sequence"); + return await Task.FromResult(template); + } + ) + { LogRepairs = true }, + loggerFactory: loggerFactory + ); ++``` ++# [JavaScript](#tab/javascript1) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/c.actionMapping-lightBot/src/index.ts#L86) ++```javascript +/// Create AI components +const model = new OpenAIModel({ + // OpenAI Support + apiKey: process.env.OPENAI_KEY!, + defaultModel: 'gpt-3.5-turbo', ++ // Azure OpenAI Support + azureApiKey: process.env.AZURE_OPENAI_KEY!, + azureDefaultDeployment: 'gpt-3.5-turbo', + azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!, + azureApiVersion: '2023-03-15-preview', ++ // Request logging + logRequests: true +}); ++const prompts = new PromptManager({ + promptsFolder: path.join(__dirname, '../src/prompts') +}); ++const planner = new ActionPlanner({ + model, + prompts, + defaultPrompt: 'chat', +}); ++``` ++# [Python](#tab/python1) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L35) ++```python +# Create AI components +model: OpenAIModel ++if config.OPENAI_KEY: + model = OpenAIModel( + OpenAIModelOptions(api_key=config.OPENAI_KEY, default_model="gpt-3.5-turbo") + ) +elif config.AZURE_OPENAI_KEY and config.AZURE_OPENAI_ENDPOINT: + model = OpenAIModel( + AzureOpenAIModelOptions( + api_key=config.AZURE_OPENAI_KEY, + default_model="gpt-35-turbo", + api_version="2023-03-15-preview", + endpoint=config.AZURE_OPENAI_ENDPOINT, + ) + ) +``` ++++## Define storage and application ++The application object automatically manages the conversation and user state of your bot. 
++* **Storage**: Create a storage provider to store the conversation and the user state for your bot. ++* **Application**: The application class has all the information and bot logic required for an app. You can register actions or activity handlers for the app in this class. ++# [.NET](#tab/dotnet3) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/Program.cs#L99) ++```csharp + return new TeamsLightBot(new() + { + Storage = sp.GetService<IStorage>(), + AI = new(planner), + LoggerFactory = loggerFactory, + TurnStateFactory = () => + { + return new AppState(); + } + }); +``` ++`TurnStateFactory` allows you to create a custom state class for your application. You can use it to store additional information or logic that you need for your bot. You can also override some of the default properties of the turn state, such as the user input, the bot output, or the conversation history. To use `TurnStateFactory`, you need to create a class that extends the default turn state and pass a function that creates an instance of your class to the application constructor. ++# [JavaScript](#tab/javascript3) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/c.actionMapping-lightBot/src/index.ts#L112) ++```javascript +// Define storage and application +const storage = new MemoryStorage(); +const app = new Application<ApplicationTurnState>({ + storage, + ai: { + planner, + // moderator + } +}); +``` ++The `MemoryStorage()` function stores all the state for your bot. The `Application` class replaces the Teams Activity Handler class. You can configure your `ai` by adding the planner, moderator, prompt manager, default prompt and history. The `ai` object is passed into the `Application`, which receives the AI components and the default prompt defined earlier. 
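Conceptually, a storage provider only reads and writes named state objects keyed by conversation or user. The following is a minimal, self-contained sketch of a MemoryStorage-style provider to illustrate that contract; it's not the library's implementation (the real one is botbuilder's `MemoryStorage`):

```typescript
// Minimal sketch of a MemoryStorage-style provider (illustrative only).
// State items are plain objects stored under a conversation/user key.
class SimpleMemoryStorage {
  private items: Record<string, Record<string, unknown>> = {};

  // Return the stored item for each requested key, skipping unknown keys.
  async read(keys: string[]): Promise<Record<string, Record<string, unknown>>> {
    const result: Record<string, Record<string, unknown>> = {};
    for (const key of keys) {
      if (this.items[key]) {
        result[key] = this.items[key];
      }
    }
    return result;
  }

  // Merge the changed properties into the stored item for each key.
  async write(changes: Record<string, Record<string, unknown>>): Promise<void> {
    for (const key of Object.keys(changes)) {
      this.items[key] = { ...this.items[key], ...changes[key] };
    }
  }
}
```

Because this state lives in process memory, it's lost on restart; for production, swap in a durable provider (for example, one backed by blob storage or a database) that exposes the same read/write shape.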
++# [Python](#tab/python3) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L52C1-L62C2) ++```python +storage = MemoryStorage() +app = Application[AppTurnState]( + ApplicationOptions( + bot_app_id=config.APP_ID, + storage=storage, + adapter=TeamsAdapter(config), + ai=AIOptions(planner=ActionPlanner( + ActionPlannerOptions(model=model, prompts=prompts, default_prompt="sequence") + )), + ) +) +``` ++++## Register data sources ++A vector data source makes it easy to add retrieval-augmented generation (RAG) to any prompt. Data sources allow the AI to inject relevant information from external sources, such as vector databases or cognitive search, into the prompt. You can register named data sources with the planner and then specify the names of the data sources that augment the prompt in the prompt's `config.json` file. ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai-apps/a.teamsChefBot/src/index.ts#L118) ++```typescript +// Register your data source with planner +planner.prompts.addDataSource(new VectraDataSource({ + name: 'teams-ai', + apiKey: process.env.OPENAI_API_KEY!, + indexFolder: path.join(__dirname, '../index'), +})); +``` ++### Embeddings ++An embedding is a vector generated by an LLM that represents a piece of text. The text could be a word, a sentence, or an entire document. Because the model understands the syntax and semantics of language, the embedding can capture the semantic meaning of the text in a compact form. Embeddings are often used in natural language processing tasks, such as text classification or sentiment analysis, but can also be used for search. ++The model for generating embeddings is different from the foundational LLMs. 
For example, OpenAI provides an embedding model called **text-embedding-ada-002**, which returns a list of 1536 numbers that represents the input text. The system creates embeddings for text within the documents and stores them in a Vector Database. Now from our Chat application we can implement the RAG pattern by first retrieving relevant data about the documents from the Vector Database, and then augmenting the Prompt with this retrieved information. ++<br/> +<details> <summary> The following is an example of a VectraDataSource and OpenAIEmbeddings:</summary> ++```typescript +import { DataSource, Memory, RenderedPromptSection, Tokenizer } from '@microsoft/teams-ai'; +import { OpenAIEmbeddings, LocalDocumentIndex } from 'vectra'; +import * as path from 'path'; +import { TurnContext } from 'botbuilder'; ++/** + * Options for creating a `VectraDataSource`. + */ +export interface VectraDataSourceOptions { + /** + * Name of the data source and local index. + */ + name: string; ++ /** + * OpenAI API key to use for generating embeddings. + */ + apiKey: string; ++ /** + * Path to the folder containing the local index. + * @remarks + * This should be the root folder for all local indexes and the index itself + * needs to be in a subfolder under this folder. + */ + indexFolder: string; ++ /** + * Optional. Maximum number of documents to return. + * @remarks + * Defaults to `5`. + */ + maxDocuments?: number; ++ /** + * Optional. Maximum number of chunks to return per document. + * @remarks + * Defaults to `50`. + */ + maxChunks?: number; ++ /** + * Optional. Maximum number of tokens to return per document. + * @remarks + * Defaults to `600`. + */ + maxTokensPerDocument?: number; +} ++/** + * A data source that uses a local Vectra index to inject text snippets into a prompt. + */ +export class VectraDataSource implements DataSource { + private readonly _options: VectraDataSourceOptions; + private readonly _index: LocalDocumentIndex; ++ /** + * Name of the data source. 
+ * @remarks + * This is also the name of the local Vectra index. + */ + public readonly name: string; ++ /** + * Creates a new `VectraDataSource` instance. + * @param options Options for creating the data source. + */ + public constructor(options: VectraDataSourceOptions) { + this._options = options; + this.name = options.name; ++ // Create embeddings model + const embeddings = new OpenAIEmbeddings({ + model: 'text-embedding-ada-002', + apiKey: options.apiKey, + }); ++ // Create local index + this._index = new LocalDocumentIndex({ + embeddings, + folderPath: path.join(options.indexFolder, options.name), + }); + } ++ /** + * Renders the data source as a string of text. + * @param context Turn context for the current turn of conversation with the user. + * @param memory An interface for accessing state values. + * @param tokenizer Tokenizer to use when rendering the data source. + * @param maxTokens Maximum number of tokens allowed to be rendered. + */ + public async renderData(context: TurnContext, memory: Memory, tokenizer: Tokenizer, maxTokens: number): Promise<RenderedPromptSection<string>> { + // Query index + const query = memory.getValue('temp.input') as string; + const results = await this._index.queryDocuments(query, { + maxDocuments: this._options.maxDocuments ?? 5, + maxChunks: this._options.maxChunks ?? 50, + }); ++ // Add documents until you run out of tokens + let length = 0; + let output = ''; + let connector = ''; + for (const result of results) { + // Start a new doc + let doc = `${connector}url: ${result.uri}\n`; + let docLength = tokenizer.encode(doc).length; + const remainingTokens = maxTokens - (length + docLength); + if (remainingTokens <= 0) { + break; + } ++ // Render document section + const sections = await result.renderSections(Math.min(remainingTokens, this._options.maxTokensPerDocument ?? 
600), 1); + docLength += sections[0].tokenCount; + doc += sections[0].text; ++ // Append doc to output + output += doc; + length += docLength; + connector = '\n\n'; + } ++ return { output, length, tooLong: length > maxTokens }; + } ++} +``` ++</details> ++## Prompt ++Prompts are pieces of text that can be used to create conversational experiences. Prompts are used to start conversations, ask questions, and generate responses. The use of prompts helps reduce the complexity of creating conversational experiences and makes them more engaging for the user. ++A new object-based prompt system breaks a prompt into sections, and each section can be given a token budget that's either a fixed set of tokens or proportional to the overall remaining tokens. You can generate prompts for both Text Completion and Chat Completion style APIs. ++The following are a few guidelines to create prompts: ++* Provide instructions, examples, or both. +* Provide quality data. Ensure that there are enough examples and proofread your examples. The model is smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the input is intentional, and it might affect the response. +* Check your prompt settings. The temperature and top_p settings control how deterministic the model is in generating a response. A higher value, such as 0.8, makes the output random, while a lower value, such as 0.2, makes the output focused and deterministic. ++Create a folder called `prompts` and define your prompts in the folder. When the user interacts with the bot by entering a text prompt, the bot responds with a text completion. ++* `skprompt.txt`: Contains the prompt text and supports template variables and functions. Define all your text prompts in the `skprompt.txt` file. + +* `config.json`: Contains the prompt model settings. Provide the right configuration to ensure bot responses are aligned with your requirements. 
++ [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/c.actionMapping-lightBot/src/prompts/sequence/config.json) ++ ```json + { + "schema": 1.1, + "description": "A bot that can turn the lights on and off", + "type": "completion", + "completion": { + "model": "gpt-3.5-turbo", + "completion_type": "chat", + "include_history": true, + "include_input": true, + "max_input_tokens": 2800, + "max_tokens": 1000, + "temperature": 0.2, + "top_p": 0.0, + "presence_penalty": 0.6, + "frequency_penalty": 0.0, + "stop_sequences": [] + }, + "augmentation": { + "augmentation_type": "sequence", + "data_sources": { + "teams-ai": 1200 + } + } + } + ``` ++### Query parameters ++The following table includes the query parameters: ++|**Value** |**Description** | +||| +|`model`|ID of the model to use.| +|`completion_type`|The type of completion you would like to use for your model. Given a prompt, the model returns one or more predicted completions along with the probabilities of alternative tokens at each position. Supported options are `chat` and `text`. Default is `chat`.| +|`include_history`|Boolean value. Indicates whether to include the conversation history. Each prompt gets its own separate conversation history to make sure that the model doesn't get confused.| +|`include_input`|Boolean value. Indicates whether to include the user's input in the prompt.| +|`max_input_tokens`|The maximum number of tokens for input. Max tokens supported is 4000.| +|`max_tokens` | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. | +|`temperature` | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. 
| +|`top_p` |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. Therefore, 0.1 means only the tokens comprising the top 10% probability mass are considered. | +|`presence_penalty` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | +|`frequency_penalty` |Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | +|`stop_sequences` | Up to four sequences where the API stops generating further tokens. The returned text won't contain the stop sequence. | +|`augmentation_type`| The type of augmentation. Supported values are `sequence`, `monologue` and `tools`.| ++### Prompt management ++Prompt management helps adjust the size and content of the prompt sent to the language model, considering the available token budget and the data sources or augmentations. ++If a bot has a maximum of 4,000 tokens where 2,800 tokens are for input and 1,000 tokens are for output, the model can manage the overall context window and ensure that it never processes more than 3,800 tokens. The model starts with a text of about 100 tokens, adds in the data source of another 1,200 tokens, and then looks at the remaining budget of 1,500 tokens. The system allocates the remaining 1,500 tokens to the conversation history and input. The conversation history is then condensed to fit the remaining space, ensuring the model never surpasses 2,800 tokens. ++### Prompt actions ++Plans let the model perform actions or respond to the user. You can create a schema of the plan and add a list of actions that you support to perform an action and pass arguments. 
The OpenAI endpoint figures out the actions required to be used, extracts all the entities, and passes those as arguments to the action call. ++```text +The following is a conversation with an AI assistant. +The assistant can turn a light on or off. ++context: +The lights are currently {{getLightStatus}}. + ``` ++### Prompt template ++A prompt template is a simple and powerful way to define and compose AI functions using plain text. You can use prompt templates to create natural language prompts, generate responses, extract information, invoke other prompts, or perform any other task that can be expressed with text. ++The language supports features that allow you to include variables, call external functions, and pass parameters to functions. You don't need to write any code or import any external libraries; just use the curly braces {{...}} to embed expressions in your prompts. Teams parses your template and executes the logic behind it. This way, you can easily integrate AI into your apps with minimal effort and maximum flexibility. ++* ``{{function}}``: Calls a registered function and inserts its return value string. ++* ``{{$input}}``: Inserts the message text. It gets its value from `state.temp.input`. ++* ``{{$state.[property]}}``: Inserts state properties. ++## Actions ++Actions handle events triggered by AI components. ++`FlaggedInputAction` and `FlaggedOutputAction` are the built-in action handlers to handle the moderator flags. If the moderator flags an incoming message input, the moderator redirects to the `FlaggedInputAction` handler and the `context.sendActivity` sends a message to the user about the flag. If you want to stop the action, you must add `AI.StopCommandName`. 
++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai-apps/a.teamsChefBot/src/index.ts#L132) ++```typescript +// Register other AI actions +app.ai.action( + AI.FlaggedInputActionName, + async (context: TurnContext, state: ApplicationTurnState, data: Record<string, any>) => { + await context.sendActivity(`I'm sorry your message was flagged: ${JSON.stringify(data)}`); + return AI.StopCommandName; + } +); ++app.ai.action(AI.FlaggedOutputActionName, async (context: TurnContext, state: ApplicationTurnState, data: any) => { + await context.sendActivity(`I'm not allowed to talk about such things.`); + return AI.StopCommandName; +}); +``` ++### Register Action Handlers ++Action handlers help users achieve their goals, which are shared in the user intents. ++One of the key aspects of action handlers is that you must first register the actions in the prompts and then help the user achieve the goal. ++You must register a handler for each action listed in the prompt and also add a handler to deal with unknown actions. ++In the following example of a light bot, we have the `LightsOn`, `LightsOff`, and `Pause` actions. Every time an action is called, you return a `string`. If you require the bot to return the time, you don't need to parse the time and convert it to a number. The `PauseParameters` property ensures that the time is returned in number format without pausing the prompt. 
++# [.NET](#tab/dotnet4) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/LightBotActions.cs) ++```csharp +public class LightBotActions + { + [Action("LightsOn")] + public async Task<string> LightsOn([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState) + { + turnState.Conversation!.LightsOn = true; + await turnContext.SendActivityAsync(MessageFactory.Text("[lights on]")); + return "the lights are now on"; + } ++ [Action("LightsOff")] + public async Task<string> LightsOff([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState) + { + turnState.Conversation!.LightsOn = false; + await turnContext.SendActivityAsync(MessageFactory.Text("[lights off]")); + return "the lights are now off"; + } ++ [Action("Pause")] + public async Task<string> Pause([ActionTurnContext] ITurnContext turnContext, [ActionParameters] Dictionary<string, object> args) + { + // Try to parse entities returned by the model. + // Expecting "time" to be a number of milliseconds to pause. + if (args.TryGetValue("time", out object? time)) + { + if (time != null && time is string timeString) + { + if (int.TryParse(timeString, out int timeInt)) + { + await turnContext.SendActivityAsync(MessageFactory.Text($"[pausing for {timeInt / 1000} seconds]")); + await Task.Delay(timeInt); + } + } + } ++ return "done pausing"; + } ++ [Action("LightStatus")] + public async Task<string> LightStatus([ActionTurnContext] ITurnContext turnContext, [ActionTurnState] AppState turnState) + { + await turnContext.SendActivityAsync(ResponseGenerator.LightStatus(turnState.Conversation!.LightsOn)); + return turnState.Conversation!.LightsOn ? 
"the lights are on" : "the lights are off"; + } ++ [Action(AIConstants.UnknownActionName)] + public async Task<string> UnknownAction([ActionTurnContext] TurnContext turnContext, [ActionName] string action) + { + await turnContext.SendActivityAsync(ResponseGenerator.UnknownAction(action ?? "Unknown")); + return "unknown action"; + } + } +} ++``` ++# [JavaScript](#tab/javascript2) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/c.actionMapping-lightBot/src/index.ts#L126) ++```javascript +// Register action handlers +app.ai.action('LightsOn', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = true; + await context.sendActivity(`[lights on]`); + return `the lights are now on`; +}); ++app.ai.action('LightsOff', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = false; + await context.sendActivity(`[lights off]`); + return `the lights are now off`; +}); ++interface PauseParameters { + time: number; +} ++app.ai.action('Pause', async (context: TurnContext, state: ApplicationTurnState, parameters: PauseParameters) => { + await context.sendActivity(`[pausing for ${parameters.time / 1000} seconds]`); + await new Promise((resolve) => setTimeout(resolve, parameters.time)); + return `done pausing`; +}); +``` ++# [Python](#tab/python2) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L85C1-L113C26) ++```python +@app.ai.action("LightsOn") +async def on_lights_on( + context: ActionTurnContext[Dict[str, Any]], + state: AppTurnState, +): + state.conversation.lights_on = True + await context.send_activity("[lights on]") + return "the lights are now on" +++@app.ai.action("LightsOff") +async def on_lights_off( + context: ActionTurnContext[Dict[str, Any]], + state: AppTurnState, +): + state.conversation.lights_on = False + await context.send_activity("[lights off]") + 
return "the lights are now off" +++@app.ai.action("Pause") +async def on_pause( + context: ActionTurnContext[Dict[str, Any]], + _state: AppTurnState, +): + time_ms = int(context.data["time"]) if context.data["time"] else 1000 + await context.send_activity(f"[pausing for {time_ms / 1000} seconds]") + time.sleep(time_ms / 1000) + return "done pausing" +``` ++++If you use either `sequence`, `monologue` or `tools` augmentation, it's impossible for the model to hallucinate an invalid function name, action name, or the correct parameters. You must create a new actions file and define all the actions you want the prompt to support for augmentation. You must define the actions to tell the model when to perform the action. Sequence augmentation is suitable for tasks that require multiple steps or complex logic. +Monologue augmentation is suitable for tasks that require natural language understanding and generation, and more flexibility and creativity. ++In the following example of a light bot, the `actions.json` file has a list of all the actions the bot can perform: ++```json +[ + { + "name": "LightsOn", + "description": "Turns on the lights" + }, + { + "name": "LightsOff", + "description": "Turns off the lights" + }, + { + "name": "Pause", + "description": "Delays for a period of time", + "parameters": { + "type": "object", + "properties": { + "time": { + "type": "number", + "description": "The amount of time to delay in milliseconds" + } + }, + "required": [ + "time" + ] + } + } +] +``` ++* `name`: Name of the action. Required. +* `description`: Description of the action. Optional. +* `parameters`: Add a JSON schema object of the required parameters. ++ Feedback loop is a model's response to validate, correct, or refine the answer to your question. If you're using a `sequence` augmentation, you can disable looping to guard against any accidental looping in the following ways: ++* You can set `allow_looping?` to `false` in the `AIOptions` definition. 
+* You can set `max_repair_attempts` to `0` in the `index.ts` file. ++#### Manage history ++You can use the `MaxHistoryMessages` and `MaxConversationHistoryTokens` arguments to allow the AI library to automatically manage your history. ++### Feedback loop ++A feedback loop allows you to monitor and improve the bot's interactions over time, leading to more effective and user-friendly applications. The feedback received can be used to make adjustments and improvements, ensuring that the bot consistently meets user needs and expectations. ++A feedback loop consists of the following: ++**Repair Loop**: If the model's response falls short of expectations, it triggers a repair loop. The conversation history forks, enabling the system to try various solutions without impacting the main conversation. ++**Validation**: Validation verifies the corrected response. If it successfully passes validation, the system unforks the conversation and reinserts the repaired structure into the main conversation. ++**Learn from Mistakes**: Once the model sees an example of correct behavior, it learns to avoid making similar mistakes in the future. ++**Handle Complex Commands**: Once the model has learned from its mistakes, it becomes capable of handling more complex commands and returning the desired plan. ++## Next step ++> [!div class="nextstepaction"] +> [Teams AI library quick start guide](conversation-ai-quick-start.md) |
platform | Teams Conversation Ai Overview | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md | + + Title: Introduction to Teams AI Library +description: Learn about Teams AI library, Teams-centric component scaffolding, natural language modeling, prompt engineering, LLM, action planner, assistants API, augmentation. +ms.localizationpriority: medium +++ Last updated : 02/12/2024+++# Teams AI library ++Teams AI library is a Teams-centric interface to GPT-based common language models and user intent engines, which moderates the need for you to take on complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs). +++The AI library provides a simple capabilities-driven approach and helps you to create intelligent apps quickly and easily with prebuilt, reusable code snippets so that you can focus on building the business logic rather than learning the semantics of Teams conversational applications. ++## Why use Teams AI library? ++The AI Library is a Teams-centric interface to Large Language Models. Your apps can use LLMs to facilitate more natural conversational interactions with users, guiding that conversation into your app's skills. ++You can focus on writing your business logic, and allow Teams to handle the complexities of conversational bots so that you can easily extract and utilize user intent within your apps. +++* The AI Library is a Teams-centric interface to Large Language Models. Use prebuilt templates to add Teams app capabilities. ++* Use techniques like prompt engineering to add ChatGPT-like conversational experiences to your bot. Built-in safety features, like moderation, help ensure that your bot always responds in an appropriate manner. ++* The library includes a planning engine that lets the model identify the user's intent and then maps that intent to actions that you implement. 
++* You can easily add support for any LLM of your choice without changing the bot logic. ++The Teams AI Library is available in JavaScript and C#, allowing you to harness the power of AI and create intelligent, user-friendly applications for Microsoft Teams using the programming language you're most comfortable with. We're committed to a mindset where you build AI products with the tools and languages you want in order to make the best experiences possible for your customers on Teams. ++The following are some of the main features available through Teams AI library: ++## Simple Teams-centric component scaffolding ++The Teams AI library simplifies the Teams app model to focus on the extension needed versus the protocol required. You can use prebuilt templates and add your business logic to this scaffold to add modules such as bots, message extensions, Adaptive Cards, or link unfurling. ++## Natural language modeling ++The Teams AI library is built with GPT-powered language models, so that you don't need to spend time writing conversational logic and identifying user intents. Building AI-powered Teams apps is easier, more compliant, and consistently usable than ever before. ++Bots can run in-context and assist when the bot recognizes a user intent that maps to one of the bot actions. This boosts the conversation without requiring users to explicitly talk to the bot using a small set of registered actions. ++## Prompt engineering ++Prompt engineering helps you design prompts that consider the user's intent, the context of the conversation, and the bot's personality. Bots can be personalized and customized to meet user needs. ++## Conversational session history ++Teams AI library remembers context across messages and helps improve the bot performance by analyzing patterns in user behavior. ++## Localization ++Since Teams AI library uses OpenAI's GPT model, localization is available. 
When a user inputs in any language, the input is consistently translated to intents, entities, and resultant actions that the app understands without the need to build and maintain localization records. ++## LLM modularity ++A large language model (LLM) is an advanced language model that uses latent variables to generate coherent and diverse natural language text and style. ++Although Teams AI library is built to use OpenAI's GPT model, you have the flexibility to swap with any LLM of your choice without changing the bot logic. This means you can choose to keep your app's content outside the public domain and confined to your preferred LLM model. ++## Responsible AI ++Teams AI library allows you to create ethical and responsible conversational apps by: ++* Moderation hooks: To regulate bot responses against any moderation API. +* Conversation sweeping: To monitor conversations and intervene when the conversation goes astray through proactive detection and remediation. +* Feedback loops: To evaluate the performance of the bot for high-quality conversations and enhance user experience. ++Teams AI library offers support from low code to complex scenarios. The library extends capabilities with AI constructs to build natural language modeling, scenario-specific user intent, personalization, and automated context-aware conversations. ++## Predictive engine for mapping intents to actions ++A simple interface for actions and predictions allows bots to react when they have enough confidence to take action. Ambient presence lets bots learn intent, use prompts based on business logic, and generate responses. For example, if a user who was out of office needs to quickly summarize a thread, the library: ++1. Understands the intent as summarization. +1. Allows prompts to make summarizations over a period of time focused on the user’s manager. +1. Provides actions to summarize chat content for users to consume. 
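The intent-to-action mapping described above can be illustrated with a small self-contained sketch. The registry, keyword matching, and action names below are simplified assumptions for illustration only; the actual library recognizes intent with an LLM-backed planner rather than keyword matching.

```typescript
// Simplified sketch of mapping a recognized user intent to a registered
// action. Keyword matching stands in for GPT-based intent recognition;
// the action names are hypothetical.
type ActionHandler = (input: string) => string;

const actions = new Map<string, ActionHandler>();

// Register a small set of actions, as the planner would.
actions.set("summarize", (input) => `Summary of: ${input}`);
actions.set("createTask", (input) => `Created task: ${input}`);

function recognizeIntent(message: string): string | null {
  // Stand-in for the library's LLM-backed intent recognition.
  const lower = message.toLowerCase();
  if (lower.includes("summarize")) return "summarize";
  if (lower.includes("create a task")) return "createTask";
  return null;
}

function handleMessage(message: string): string {
  const intent = recognizeIntent(message);
  const handler = intent ? actions.get(intent) : undefined;
  return handler ? handler(message) : "Sorry, I didn't understand that.";
}
```

The bot only acts when an intent maps to a registered action; everything else falls through to a default reply, which is the "react only with confidence" behavior described above.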
++<!-- ## Bots Architecture overview ++The bot framework using Teams AI library requires the following: ++* Support to OAuth S2S +* Adherence to Activity schema for reading and writing JSON documents +* Invoking Rest APIs to determine additional context required to handle a user's message, such as Azure Active Directory (Azure AD) ID and UPN of the user the bot is interacting with. --> ++## Action Planner ++Action Planner is the main component calling your Large Language Model (LLM) and includes several features to enhance and customize your model. Model plugins simplify configuring your selected LLM for the planner, and the planner ships with an OpenAIModel that supports both OpenAI and Azure OpenAI LLMs. Additional plugins for other models like Llama-2 can easily be added, giving you the flexibility to choose what model is best for your use case. An internal feedback loop increases reliability by fixing subpar responses from the LLM. ++## Assistants API ++> [!NOTE] +> Teams AI library supports both OpenAI and Azure OpenAI Assistants API in [public developer preview](~/resources/dev-preview/developer-preview-intro.md) for you to get started with building intelligent assistants. ++Assistants API allows you to create powerful AI assistants capable of performing a variety of tasks that are difficult to code using traditional methods. It provides programmatic access to OpenAI’s GPT system for tasks ranging from chat to image processing, audio processing, and building custom assistants. The API supports natural language interaction, enabling the development of assistants that can understand and respond in a conversational manner. ++Follow the [quick start guide](assistants-api-quick-start.md) to create an assistant that specializes in mathematics. 
++## Prompt management ++Dynamic prompt management is a feature of the AI system that allows it to adjust the size and content of the prompt that is sent to the language model, based on the available token budget and the data sources or augmentations. It can improve the efficiency and accuracy of the model by ensuring that the prompt doesn't exceed the context window or include irrelevant information. ++## Augmentation ++Efficiently enhance and direct your AI model’s responses with Augmentation. Using different augmentation modes, you can tailor your model to your needs, increasing its accuracy and desired outcomes. ++* **Retrieval Augmented Generation (RAG)**: Automatically incorporates real-time, dynamic, and specified external data sources into your model’s responses, enabling up-to-date and contextually accurate results without fine-tuning or re-training your model. Answer questions about today’s sales numbers or customize to a specific user’s data; with RAG your model is no longer stuck in the past. ++* **Monologue**: Create AutoGPT-style agents capable of performing multi-step actions independently and reliably, with full schema validation and automatic repair included. ++* **Sequence**: Enable your AI assistant to return a sequence of actions for execution, with schema validation increasing reliability. ++* **Functions**: Produce structured responses from your model by employing user-defined Functions. These functions are customizable using JSON schemas to define the parameters and their format. The ActionPlanner assesses model responses against the schema, making repairs as needed, increasing response reliability and consistency. ++### Vector data sources ++Vector databases are a new type of database designed to store vectors and enable efficient search over them. They return the most relevant results for a user's query. 
The vector search feature in a vector database allows retrieval-augmented generation to use LLMs and custom data or domain-specific information. This involves extracting relevant information from a custom data source and integrating it into the model request through prompt engineering. Before sending a request to the LLM, the user input, query, or request is transformed into an embedding, and vector search techniques are used to find the most similar embeddings in the database. ++## Enhanced reasoning ++Teams AI Library offers an integrated fact-checking system to tackle bot hallucinations. When a user interacts with your AI assistant, the system prompts the bot to engage in a process of self-reflection, critically evaluating its potential responses before sending them. The introspection allows the bot to identify inaccuracies and correct its answers, which improves accuracy, quality, and contextual relevance. Advanced reasoning ensures that your AI assistant becomes a dependable source of information and judgment, building trust in your product and drawing users back every day. ++## Feedback loop ++The feedback loop allows the bot to validate and correct the output of the language model. It checks the structure and parameters of the plan or monologue that the model returns and provides feedback on errors or missing information. The model then tries to fix its mistakes and returns a valid output. The feedback loop can improve the reliability and accuracy of the AI system and reduce the chances of hallucination or invalid actions. ++The following table lists the updates to the Teams AI library: ++|Type |Description |.NET|JavaScript|Python| +|---|---|---|---|---| +|OpenAIModel |The OpenAIModel class lets you call both OAI and Azure OAI with a single component. New models can be defined for other model types like LLaMA2. | ✔️ |✔️|✔️| +|Embeddings | The OpenAIEmbeddings class lets you generate embeddings using either OAI or Azure OAI. 
New embeddings can be defined for things like OSS Embeddings. | ❌ |✔️|✔️| +|Prompts | A new object-based prompt system enables better token management and reduces the likelihood of overflowing the model's context window. | ✔️ |✔️|✔️| +| Augmentation | Augmentations simplify prompt engineering tasks by letting the developer add named augmentations to their prompt. Only `functions`, `sequence`, and `monologue` style augmentations are supported. | ✔️ |✔️|✔️| +|Data Sources | A new DataSource plugin makes it easy to add RAG to any prompt. You can register a named data source with the planner and then specify the names of the data sources you wish to use to augment the prompt. | ❌ |✔️|✔️| ++## Code samples ++| Sample name | Description | .NET | Node.js | Python | +| -- | -- | -- | -- | -- | +| Echo bot | This sample shows how to incorporate a basic conversational flow into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/01.messaging.echoBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/01.getting-started/a.echoBot) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/01.messaging.a.echoBot) | +| Search command message extension | This sample shows how to incorporate a basic Message Extension app into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/a.messageExtensions.searchCommand) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/02.messageExtensions.a.searchCommand)| +| Typeahead bot | This sample shows how to incorporate the typeahead search functionality in Adaptive Cards into a Microsoft Teams application using Bot Framework and the Teams AI library. 
| [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/b.adaptiveCards.typeAheadBot) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/03.adaptiveCards.a.typeAheadBot)| +| Conversational bot with AI: Teams chef | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot is built to allow GPT to facilitate the conversation on its behalf, using only a natural language prompt file to guide it. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.a.teamsChefBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/a.teamsChefBot) | +| Message extensions: GPT-ME | This sample is a message extension (ME) for Microsoft Teams that uses the text-davinci-003 model to help users generate and update posts. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.b.messageExtensions.gptME) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/b.AI-messageExtensions) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.b.messageExtensions.AI-ME) | +| Light bot | This sample illustrates more complex conversational bot behavior in Microsoft Teams. The bot is built to allow GPT to facilitate the conversation on its behalf, using manually defined responses, and maps user intents to user-defined actions. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.c.actionMapping.lightBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/c.actionMapping-lightBot) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.c.actionMapping.lightBot) | +| List bot | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. 
The bot harnesses the power of AI to simplify your workflow and bring order to your daily tasks, and showcases action chaining capabilities. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.d.chainedActions.listBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/d.chainedActions-listBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.d.chainedActions.listBot)| +| DevOps bot | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot uses the gpt-3.5-turbo model to chat with Teams users and perform DevOps actions such as creating, updating, triaging, and summarizing work items. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.e.chainedActions.devOpsBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/b.devOpsBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.e.chainedActions.devOpsBot)| +| Twenty questions | This sample showcases the capabilities of language models and the concept of user intent. Challenge your skills as the human player and try to guess a secret within 20 questions, while the AI-powered bot answers your queries about the secret. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.e.twentyQuestions) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/a.twentyQuestions) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.a.twentyQuestions)| +| Math tutor assistant | This example shows how to create a basic conversational experience using OpenAI's Assistants APIs. It uses OpenAI's Code Interpreter tool to create an assistant that's an expert on math. 
| [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/06.assistants.a.mathBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/d.assistants-mathBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/06.assistants.a.mathBot)| +| Food ordering assistant | This example shows how to create a conversational assistant that uses tools to call actions in your bot's code. It's a food ordering assistant for a fictional restaurant called The Pub and is capable of complex interactions with the user as it takes their order. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/06.assistants.b.orderBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/e.assistants-orderBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/06.assistants.b.orderBot)| ++## Next step ++> [!div class="nextstepaction"] +> [Teams AI library capabilities](how-conversation-ai-core-capabilities.md) |
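The embedding-and-vector-search flow described under *Vector data sources* above can be sketched in a few lines. This is a toy, self-contained illustration using cosine similarity over hand-written vectors; a real RAG setup would generate embeddings with a model and query a vector database instead.

```typescript
// Toy vector search: find the stored document whose embedding is most
// similar to the query embedding, using cosine similarity.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface Doc { text: string; embedding: number[]; }

function vectorSearch(query: number[], docs: Doc[]): Doc {
  // Keep the document with the highest similarity to the query.
  return docs.reduce((best, doc) =>
    cosineSimilarity(query, doc.embedding) > cosineSimilarity(query, best.embedding)
      ? doc : best);
}

// Hand-written 3-dimensional "embeddings" for illustration only.
const docs: Doc[] = [
  { text: "Q3 sales figures", embedding: [0.9, 0.1, 0.0] },
  { text: "Vacation policy", embedding: [0.0, 0.2, 0.9] },
];
```

The most similar document is then folded into the prompt through prompt engineering before the request is sent to the LLM, as the section above describes.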
platform | Tool Sdk Overview | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/concepts/build-and-test/tool-sdk-overview.md | The following flow diagram explains the different SDKs, libraries, and its relat | -- | -- | -- | | [Bot Framework SDK](/azure/bot-service/bot-service-overview) | Microsoft Bot Framework and Azure AI Bot Service are a collection of libraries, tools, and services that enable you to build, test, deploy, and manage intelligent bots. The Bot Framework includes a modular and extensible SDK for building bots and connecting to AI services. | :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Azure Bot Service**. | | [Microsoft Graph SDKs](/graph/sdks/sdks-overview) | The Microsoft Graph SDKs are designed to simplify the creation of high-quality, efficient, and resilient applications that access Microsoft Graph. The SDKs include two components such as service library and core library. | :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Microsoft Graph**. |-| [Teams AI library](../../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) | Teams AI library is a Teams-centric interface to GPT-based common language models and user intent engines. This reduces the requirement for you to handle on complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs). | :::image type="icon" source="../../assets/icons/blue-dot.png" border="false"::: Depends on **Bot Framework SDK**. </br> :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Azure OpenAI**. | +| [Teams AI library](../../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md) | Teams AI library is a Teams-centric interface to GPT-based common language models and user intent engines. 
This reduces the need for you to take on the complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs). | :::image type="icon" source="../../assets/icons/blue-dot.png" border="false"::: Depends on **Bot Framework SDK**. </br> :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Azure OpenAI**. | ### Additional libraries and UI utilities to build Teams apps |
platform | Teams Store Validation Guidelines | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/concepts/deploy-and-publish/appsource/prepare/teams-store-validation-guidelines.md | Explore resources designed to help you with responsible Artificial Intelligence * App must not generate, contain, or provide access to inappropriate, harmful, or offensive AI generated content consistent with existing commercial marketplace policies outlined in [100.10](/legal/marketplace/certification-policies#10010-inappropriate-content). [*Mandatory Fix*] * Consider using any of the following:- * Use [Teams AI library](~/bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md), Teams-centric interface to GPT-based common language models and user intent engines. [*Suggested Fix*] + * Use [Teams AI library](~/bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md), a Teams-centric interface to GPT-based common language models and user intent engines. [*Suggested Fix*] * Use of moderation hooks, which can be used to regulate bot responses through moderation API. [*Suggested Fix*] * Add conversation sweeping capability, which helps you monitor conversations and intervene when conversations go astray. [*Suggested Fix*] |
platform | Choose What Suits You | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/get-started/choose-what-suits-you.md | To start building your Teams app, you can select the tools and SDKs based on you | App capabilities | User interactions | Recommended tools | SDKs | Languages | |--|-|--|--|--| | **Tabs** | A full-screen embedded web experience. | VS Code or Visual Studio with Teams Toolkit extension, or [TeamsFx CLI](~/toolkit/teams-toolkit-cli.md) if you prefer using CLI | [Teams JavaScript client library](/javascript/api/overview/msteams-client#microsoft-teams-javascript-client-library) for UI functionalities, SharePoint Framework (SPFx), and Microsoft Graph SDK | C#, TypeScript, and JavaScript (including React) |-| **Bots** | A chat bot that converses with members. |VS Code or Visual Studio with Teams Toolkit extension, or [TeamsFx CLI](~/toolkit/teams-toolkit-cli.md) if you prefer using CLI | [TeamsFx SDK](/javascript/api/@microsoft/teamsfx), [Bot Framework SDK](https://dev.botframework.com/), [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md), and Microsoft Graph SDK | C#, TypeScript, and JavaScript | -| **Message extensions** | Shortcuts for inserting external content into a conversation or taking action on messages. | VS Code or Visual Studio with Teams Toolkit extension, or [TeamsFx CLI](~/toolkit/teams-toolkit-cli.md) if you prefer using CLI | [TeamsFx SDK](/javascript/api/@microsoft/teamsfx), [Bot Framework SDK](https://dev.botframework.com/), [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md), and Microsoft Graph SDK | C#, TypeScript, and JavaScript | +| **Bots** | A chat bot that converses with members. 
|VS Code or Visual Studio with Teams Toolkit extension, or [TeamsFx CLI](~/toolkit/teams-toolkit-cli.md) if you prefer using CLI | [TeamsFx SDK](/javascript/api/@microsoft/teamsfx), [Bot Framework SDK](https://dev.botframework.com/), [Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md), and Microsoft Graph SDK | C#, TypeScript, and JavaScript | +| **Message extensions** | Shortcuts for inserting external content into a conversation or taking action on messages. | VS Code or Visual Studio with Teams Toolkit extension, or [TeamsFx CLI](~/toolkit/teams-toolkit-cli.md) if you prefer using CLI | [TeamsFx SDK](/javascript/api/@microsoft/teamsfx), [Bot Framework SDK](https://dev.botframework.com/), [Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md), and Microsoft Graph SDK | C#, TypeScript, and JavaScript | > [!NOTE] > |
platform | Glossary | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/get-started/glossary.md | Common terms and definitions used in Microsoft Teams developer documentation. | [Task info](../task-modules-and-cards/task-modules/invoking-task-modules.md#dialoginfo-object) | The `TaskInfo` object contains the metadata for a dialogs (referred as task modules in TeamsJS v.1.0).| | [Thread discussion](../tabs/design/tabs.md#thread-discussion) | A conversation posted on a channel or chat between users. <br>**See also** [Conversation](#c); [Channel](#c) | | [Teams](../overview.md) | Microsoft Teams is the ultimate message app for your organization. It's a workspace for real-time collaboration and communication, meetings, file and app sharing. |-| [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) | A Teams-centric interface to GPT-based common language models and user intent engines. You can take on complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs).| +| [Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md) | A Teams-centric interface to GPT-based common language models and user intent engines. It reduces the need for you to take on the complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs).| | [Teams identity](../tabs/how-to/authentication/tab-sso-overview.md) | The Microsoft account or Microsoft 365 account of an app user that is used to log in to Teams client, web, or mobile app. | | [Teams identity](../tabs/how-to/authentication/tab-sso-overview.md) | The Microsoft account or Microsoft 365 account of an app user that is used to sign in to Teams client, web, or mobile app. 
| | [Teams Toolkit](../toolkit/teams-toolkit-fundamentals.md) | The Microsoft Teams Toolkit enables you to create custom Teams apps directly within the VS Code environment. | |
platform | Bots Create | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/resources/bot-v3/bots-create.md | - Title: Bot app in Teams -description: Learn how to create bots using Microsoft Bot Framework, and use Developer Portal for Teams to register or update app, and bot information in Teams. - Previously updated : 04/02/2023---# Build a bot --> [!IMPORTANT] -> -> This article is based on the v3 Bot Framework SDK. -> -> * If you want to create an AI bot, see [create an AI bot](~/bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md). -> * If you want to create a basic bot, see [get started](~/get-started/get-started-overview.md), and if you want to look for current documentation version 4.6 or later of the SDK, see [conversational bots](~/bots/what-are-bots.md). --All bots created using the Microsoft Bot Framework are configured and ready to work in Microsoft Teams. --For more information, see [Bot Framework Documentation](/azure/bot-service/?view=azure-bot-service-3.0&preserve-view=true) for general information on bots. --## Build a bot for Microsoft Teams --**Teams Developer Portal for Teams** is a tool that can help build your bot, and an app package that references your bot. It also contains a React control library and configurable samples for cards. For more information, see [Getting started with Teams Developer Portal for Teams](~/concepts/build-and-test/teams-developer-portal.md). The steps that follow assume that you are hand configuring your bot and not using **Teams Developer Portal for Teams**: --1. Create the bot using [Bot Framework](https://dev.botframework.com/bots/new). **Be sure to add Microsoft Teams as a channel from the featured channels list after creating your bot.** Feel free to reuse any Microsoft App ID you generated if you've already created your app package/manifest. 
-- ![Bot Framework registration page](~/assets/images/bots/bfregister.png) --> [!NOTE] -> If you do not wish to create your bot in Azure, you **must** use this link to create a new bot: [Bot Framework](https://dev.botframework.com/bots/new). If you click on the **Create a bot** in the Bot Framework portal instead, you will [create your bot in Microsoft Azure](#bots-and-microsoft-azure) instead. --2. Build the bot using the [Microsoft.Bot.Connector.Teams](https://www.nuget.org/packages/Microsoft.Bot.Connector.Teams) NuGet package, the [Bot Framework SDK](https://github.com/microsoft/botframework-sdk), or the [Bot Connector API](/bot-framework/rest-api/bot-framework-rest-connector-api-reference). --3. Test the bot using the [Bot Framework Emulator](/bot-framework/debug-bots-emulator). --4. Deploy the bot to a cloud service, such as [Microsoft Azure](https://azure.microsoft.com/). Alternatively, run your app locally and use a tunneling service such [ngrok](https://ngrok.com) to expose an https:// endpoint for your bot, such as `https://45az0eb1.ngrok-free.app/api/messages`. --> [!NOTE] -> -> ## Bots and Microsoft Azure -> -> As of December, 2017, the Bot Framework portal is optimized for registering bots in Microsoft Azure. Here are some things to know: -> -> * The Microsoft Teams channel for bots registered on Azure is free. Messages sent over the Teams channel will not count towards the consumed messages for the bot. -> * While it's possible to [create a new Bot Framework bot](https://dev.botframework.com/bots/new) without using Azure, you must use [create a new Bot Framework bot](https://dev.botframework.com/bots/new), which is no longer exposed in the Bot Framework portal. 
-> * When you edit the properties of an existing bot in the [list of your bots in Bot Framework](https://dev.botframework.com/bots) such as its "messaging endpoint," which is common when first developing a bot, especially if you use [ngrok](https://ngrok.com), you will see "Migration status" column and a blue "Migrate" button that will take you into the Microsoft Azure portal. Don't click on the "Migrate" button unless that's what you want to do; instead, click on the name of the bot and you can edit its properties:</br> - ![Edit Bot Properties](~/assets/images/bots/bf-migrate-bot-to-azure.png) -> * If you register your bot using Microsoft Azure, your bot code does not need to be *hosted* on Microsoft Azure. -> * If you do register a bot using Azure portal, you must have a Microsoft Azure account. You can [create one for free](https://azure.microsoft.com/free/). To verify your identity when you create one, you must provide a credit card, but it won't be charged; it's always free to create and use bots with Teams. -> * You can now use Developer Portal for Teams to register/update app and bot information directly within Teams. You'll only have to use the Azure portal for adding or configuring other Bot Framework channels such as Direct Line, Web Chat, Skype, and Facebook Messenger. --## See also --[Bot Framework samples](https://github.com/OfficeDev/Microsoft-Teams-Samples/blob/main/README.md). |
platform | Bots Overview | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/resources/bot-v3/bots-overview.md | Last updated 04/02/2023 # Add bots to Microsoft Teams apps +> [!IMPORTANT] +> +> This article is based on the v3 Bot Framework SDK. +> +> * If you want to create an AI bot, see [create an AI bot](~/bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md). +> * If you want to create a basic bot, see [get started](~/get-started/get-started-overview.md), and if you want to look for current documentation version 4.6 or later of the SDK, see [conversational bots](~/bots/what-are-bots.md). Build and connect intelligent bots to interact with Microsoft Teams users naturally through chat. Or provide a simple commands-based bot, to be used as your "command-line" interface for your broader Teams app experience. You can make a notification-only bot, which can push information relevant to your users directly to them in a channel or direct message. You can even bring your existing Bot Framework-based bot and add Teams-specific support to make your experience shine. The following articles will guide you through the process of creating a great bo * [Using tabs with bots](~/resources/bot-v3/bots-with-tabs.md): Making tabs and bots work together. * [Test your bot](~/resources/bot-v3/bots-test.md): Add your bot for personal or team conversations to see it in action. +<details> ++<summary><b>Bots SDK V3</b></summary> ++> [!IMPORTANT] +> +> This article is based on the v3 Bot Framework SDK. +> +> * If you want to create an AI bot, see [create an AI bot](~/bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md). +> * If you want to create a basic bot, see [get started](~/get-started/get-started-overview.md), and if you want to look for current documentation version 4.6 or later of the SDK, see [conversational bots](~/bots/what-are-bots.md). 
++**Teams Developer Portal for Teams** is a tool that can help build your bot, and an app package that references your bot. It also contains a React control library and configurable samples for cards. For more information, see [Getting started with Teams Developer Portal for Teams](~/concepts/build-and-test/teams-developer-portal.md). The steps that follow assume that you are hand-configuring your bot and not using **Teams Developer Portal for Teams**: ++1. Create the bot using [Bot Framework](https://dev.botframework.com/bots/new). **Be sure to add Microsoft Teams as a channel from the featured channels list after creating your bot.** Feel free to reuse any Microsoft App ID you generated if you've already created your app package/manifest. ++ ![Bot Framework registration page](~/assets/images/bots/bfregister.png) ++> [!NOTE] +> If you do not wish to create your bot in Azure, you **must** use this link to create a new bot: [Bot Framework](https://dev.botframework.com/bots/new). If you select **Create a bot** in the Bot Framework portal instead, you will [create your bot in Microsoft Azure](#bots-and-microsoft-azure). ++2. Build the bot using the [Microsoft.Bot.Connector.Teams](https://www.nuget.org/packages/Microsoft.Bot.Connector.Teams) NuGet package, the [Bot Framework SDK](https://github.com/microsoft/botframework-sdk), or the [Bot Connector API](/bot-framework/rest-api/bot-framework-rest-connector-api-reference). ++3. Test the bot using the [Bot Framework Emulator](/bot-framework/debug-bots-emulator). ++4. Deploy the bot to a cloud service, such as [Microsoft Azure](https://azure.microsoft.com/). Alternatively, run your app locally and use a tunneling service such as [ngrok](https://ngrok.com) to expose an https:// endpoint for your bot, such as `https://45az0eb1.ngrok-free.app/api/messages`. ++> [!NOTE] +> +> ### Bots and Microsoft Azure +> +> As of December 2017, the Bot Framework portal is optimized for registering bots in Microsoft Azure. 
Here are some things to know: +> +> * The Microsoft Teams channel for bots registered on Azure is free. Messages sent over the Teams channel will not count towards the consumed messages for the bot. +> * While it's still possible to [create a new Bot Framework bot](https://dev.botframework.com/bots/new) without using Azure, you must use that direct link; the option is no longer exposed in the Bot Framework portal. +> * When you edit the properties of an existing bot in the [list of your bots in Bot Framework](https://dev.botframework.com/bots), such as its "messaging endpoint" (common when first developing a bot, especially if you use [ngrok](https://ngrok.com)), you will see a "Migration status" column and a blue "Migrate" button that takes you into the Microsoft Azure portal. Don't select "Migrate" unless that's what you want to do; instead, select the name of the bot to edit its properties:<br/> + ![Edit Bot Properties](~/assets/images/bots/bf-migrate-bot-to-azure.png) +> * If you register your bot using Microsoft Azure, your bot code does not need to be *hosted* on Microsoft Azure. +> * If you do register a bot using the Azure portal, you must have a Microsoft Azure account. You can [create one for free](https://azure.microsoft.com/free/). To verify your identity when you create one, you must provide a credit card, but it won't be charged; it's always free to create and use bots with Teams. +> * You can now use Developer Portal for Teams to register or update app and bot information directly within Teams. You only have to use the Azure portal to add or configure other Bot Framework channels such as Direct Line, Web Chat, Skype, and Facebook Messenger. ++</details> + ## See also [Bot Framework samples](https://github.com/OfficeDev/Microsoft-Teams-Samples/blob/main/README.md). |
platform | Build A RAG Bot In Teams | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/toolkit/build-a-RAG-bot-in-teams.md | export class GraphApiSearchDataSource implements DataSource { ## See also -[Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) +[Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md) |
platform | Build A Basic AI Chatbot In Teams | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/toolkit/build-a-basic-AI-chatbot-in-teams.md | Last updated 05/21/2024 # Build a basic AI chatbot -The AI chatbot template showcases a bot app, similar to ChatGPT, that responds to user questions and allows users to interact with the AI bot in Microsoft Teams. [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) is used to build the app template, providing the capabilities to create AI-based Teams applications. +The AI chatbot template showcases a bot app, similar to ChatGPT, that responds to user questions and allows users to interact with the AI bot in Microsoft Teams. [Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md) is used to build the app template, providing the capabilities to create AI-based Teams applications. ## Prerequisites You can add customizations on top of the basic app to build complex scenarios as ## See also -[Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) +[Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md) |
platform | Build An AI Agent In Teams | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/toolkit/build-an-AI-agent-in-Teams.md | When the assistant provides a function and its arguments for execution, the SDK ## See also -[Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) +[Teams AI library](../bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md) |
platform | Whats New | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/whats-new.md | Discover Microsoft Teams platform features that are generally available (GA). Yo ## Microsoft Build 2024 :::image type="icon" source="assets/images/bullhorn.png" border="false" --> +-> ## Generally available Teams platform features that are available to all app developers. |12/04/2024|Share code snippets as richly formatted Adaptive Cards in Teams chats, channels, and meetings with the CodeBlock element.|Build cards and dialogs > [CodeBlock in Adaptive Cards](task-modules-and-cards/cards/cards-format.md#codeblock-in-adaptive-cards)| |12/04/2024|Introduced bot configuration experience that helps you to enable the bot settings for users to configure their bot during installation and reconfigure the bot.|Build bots > [Bot configuration experience](bots/how-to/bot-configuration-experience.md)| |10/04/2024|Define and deploy Outlook Add-ins in version 1.17 and later of the app manifest schema.|Extend your app across Microsoft 365 > [Outlook Add-ins](m365-apps/overview.md#outlook-add-ins)|-|04/04/2024|Added support for Python in Teams AI library.|Build bots > Teams AI library > [Teams AI library](bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)| +|04/04/2024|Added support for Python in Teams AI library.|Build bots > Teams AI library > [Teams AI library](bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md)| |04/04/2024|Stageview API with the openmode property allows you to open your app content in different Stageview experiences.|Build tabs > [Open content in Stageview](tabs/open-content-in-stageview.md)| |03/04/2024|Updated the common reasons for app validation failure to help your app pass the Teams Store submission process.|Distribute your app > Publish to the Teams Store > [Common reasons for app validation failure](concepts/deploy-and-publish/appsource/common-reasons-for-app-validation-failure.md)| 
|27/03/2024|Configure Teams deep links using the msteams:// and https:// protocol handlers.|Integrate with Teams > Create deep links > Overview > [Protocol handlers in deep links](concepts/build-and-test/deep-links.md#protocol-handlers-in-deep-links)| Teams platform features that are available to all app developers. |20/12/2023|Introduced RSC permissions for users to access different resources.| Utilize Teams data with Microsoft Graph > [Resource-specific consent for your Teams app](graph-api/rsc/resource-specific-consent.md#rsc-permissions-for-user-access) | |18/12/2023|App caching in chat, channel, and meeting tab scopes is available for iOS.| Build tabs > [App caching for your tab app](tabs/how-to/app-caching.md) | |15/12/2023|Bots can mention tags in text messages and Adaptive Cards posted in Teams channels.| Build bots > Bot conversation > [Channel and group chat conversations with a bot](bots/how-to/conversations/channel-and-group-conversations.md#tag-mention) |-|12/12/2023|Use Teams AI library to build apps that can leverage LLMs to facilitate more natural conversational interactions with users, guiding that conversation into your app's skills.|Build bots > [Teams AI library](bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)| +|12/12/2023|Use Teams AI library to build apps that can leverage LLMs to facilitate more natural conversational interactions with users, guiding that conversation into your app's skills.|Build bots > [Teams AI library](bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md)| |21/11/2023|Terminology update. LOB apps are referred to as custom apps built for your org (LOB apps).|| |20/11/2023|Use captureImage API to capture an image or select media from the gallery for mobile clients.|Integrate device capabilities > [Integrate media capabilities](concepts/device-capabilities/media-capabilities.md)| |17/11/2023|Terminology update. 
Sideload is referred to as upload a custom app.|| Developer preview is a public program that provides early access to unreleased Teams platform features. |31/07/2023| Bots can mention tags in Adaptive Cards. |Build bots > Bot conversations > Message in bot conversations > Channel and group chat conversation with a bot > [Bots can mention tags in Adaptive Cards](bots/how-to/conversations/channel-and-group-conversations.md#tag-mention)| |13/07/2023| Extend static tabs to group chat or meetings with a customizable experience. |Build tabs > [Overview](tabs/what-are-tabs.md)| | 25/05/2023 | Use a deep link to open a tab app in meeting side panel in Teams mobile client. | Build apps for Teams meetings and calls > Enable and configure apps for Teams meeting > [Build tabs for meeting](apps-in-teams-meetings/build-tabs-for-meeting.md#deep-link-to-meeting-side-panel) |-|23/05/2023 | Teams AI library helps you build AI-powered Teams apps. | Build bots > [Teams AI library](bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)| +|23/05/2023 | Teams AI library helps you build AI-powered Teams apps. | Build bots > [Teams AI library](bots/how-to/teams-conversational-ai/teams-conversation-ai-overview.md)| |23/05/2023| Extend Microsoft 365 Copilot to integrate with Microsoft Teams apps to turn your app into the most powerful productivity tool. | [Extend Microsoft 365 Copilot](messaging-extensions/how-to-extend-copilot.md)| |31/01/2023| Send notifications to specific participants on a meeting stage with targeted in-meeting notification. 
|Build apps for Teams meetings and calls > Enable and configure apps for meetings > Build in-meeting notification for Teams meeting > Build tabs for meeting > [Targeted in-meeting notification](apps-in-teams-meetings/in-meeting-notification-for-meeting.md#targeted-in-meeting-notification)| |30/01/2023| Enable app caching to improve subsequent launch time of the apps to the meeting side panel.|Build tabs > [App caching for your tab app](tabs/how-to/app-caching.md) | |