Updates from: 05/22/2024 04:19:08
Service Microsoft Docs article Related commit history on GitHub Change details
platform Teams Live Share Canvas https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/apps-in-teams-meetings/teams-live-share-canvas.md
yarn add @microsoft/teams-js
## Setting up the package
-Live Share canvas has two primary classes that enable turn-key collaboration: `InkingManager` and `LiveCanvas`. `InkingManager` is responsible for attaching a fully-featured `<canvas>` element to your app, while `LiveCanvas` manages the remote synchronization with other meeting participants. Used together, your app can have complete whiteboard-like functionality in just a few lines of code.
+Live Share canvas has two primary classes that enable turn-key collaboration: `InkingManager` and `LiveCanvas`. `InkingManager` is responsible for attaching a fully-featured `<canvas>` element to your app, while `LiveCanvas` manages the remote synchronization with other connected participants. Used together, your app can have complete whiteboard-like functionality in just a few lines of code.
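To make this concrete, here's a minimal setup sketch for the two classes; the `canvas-host` element id is a hypothetical placeholder, and the schema follows the package's quick-start pattern:

```javascript
import { LiveShareClient } from "@microsoft/live-share";
import { InkingManager, LiveCanvas } from "@microsoft/live-share-canvas";
import { LiveShareHost } from "@microsoft/teams-js";

// Join the Fluid container with a LiveCanvas in the schema
const host = LiveShareHost.create();
const liveShare = new LiveShareClient(host);
const schema = {
  initialObjects: { liveCanvas: LiveCanvas },
};
const { container } = await liveShare.joinContainer(schema);
const liveCanvas = container.initialObjects.liveCanvas;

// Attach a fully featured <canvas> element to a host <div> in your page
// ("canvas-host" is a hypothetical element id)
const canvasHostElement = document.getElementById("canvas-host");
const inkingManager = new InkingManager(canvasHostElement);

// Begin synchronizing strokes with other connected participants
await liveCanvas.initialize(inkingManager);
inkingManager.activate();
```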
| Classes | Description |
| --- | --- |
document.getElementById("point-eraser").onclick = () => {
:::image type="content" source="../assets/images/teams-live-share/canvas-laser-tool.gif" alt-text="GIF shows an example of drawing strokes on the canvas using the laser pointer tool.":::
-The laser pointer is unique as the tip of the laser has a trailing effect as you move your mouse. When you draw strokes, the trailing effect renders for a short period before it fades out completely. This tool is perfect to point out information on the screen during a meeting, as the presenter doesn't have to switch between tools to erase strokes.
+The laser pointer is unique as the tip of the laser has a trailing effect as you move your mouse. When you draw strokes, the trailing effect renders for a short period before it fades out completely. This tool is perfect to point out information on the screen during collaboration, as the presenter doesn't have to switch between tools to erase strokes.
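As a rough sketch, switching the active tool to the laser pointer might look like this, assuming the `inkingManager` set up earlier and a hypothetical `laser-pointer` toolbar button:

```javascript
import { InkingTool } from "@microsoft/live-share-canvas";

// Hypothetical toolbar button that activates the laser pointer tool
document.getElementById("laser-pointer").onclick = () => {
  inkingManager.tool = InkingTool.laserPointer;
};
```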
```html
<div>
You can customize this behavior in the following ways:
- Change the scale level of the viewport.

> [!NOTE]
-> Reference points, offsets, and scale levels are local to the client and aren't synchronized across meeting participants.
+> Reference points, offsets, and scale levels are local to the client and aren't synchronized across connected participants.
Example:
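A sketch of these viewport settings, assuming an existing `inkingManager` (the values are arbitrary):

```javascript
// All of these values are local to the client and aren't synchronized
inkingManager.referencePoint = "center";
inkingManager.offset = { x: 10, y: 50 };
inkingManager.scale = 1.5;
```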
platform Teams Live Share Capabilities https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/apps-in-teams-meetings/teams-live-share-capabilities.md
Last updated 04/07/2022
:::image type="content" source="../assets/images/teams-live-share/Teams-live-share-core-capabilities-hero.png" alt-text="Screenshot shows an example of users playing agile poker game in a Teams meeting, which showcases the Live share capability.":::
-The Live Share SDK can be added to your meeting extension's `sidePanel` and `meetingStage` contexts with minimal effort.
+The Live Share SDK can be added to your meeting extension's `sidePanel` and `meetingStage` contexts with minimal effort. You can also use the SDK in chat and channel `content` contexts like configurable tabs, static tabs, and collaborative stageview.
+
+> [!NOTE]
+> Live Share `content` contexts in chat and channels is only supported on Teams desktop and web clients.
This article focuses on how to integrate the Live Share SDK into your app and key capabilities of the SDK.
### Install the JavaScript SDK
-The [Live Share SDK](https://github.com/microsoft/live-share-sdk) is a JavaScript package published on [npm](https://www.npmjs.com/package/@microsoft/live-share), and you can download through npm or yarn. You must also install Live Share peer dependencies, which include `fluid-framework` and `@fluidframework/azure-client`. If you're using Live Share in your tab application, you must also install `@microsoft/teams-js` version `2.11.0` or later. If you want to use the `TestLiveShareHost` class for local browser development, you must install `@fluidframework/test-client-utils` and `start-server-and-test` packages in your `devDependencies`.
+The [Live Share SDK](https://github.com/microsoft/live-share-sdk) is a JavaScript package published on [npm](https://www.npmjs.com/package/@microsoft/live-share), and you can download through npm or yarn. You must also install Live Share peer dependencies, which include `fluid-framework` and `@fluidframework/azure-client`. If you're using Live Share in your tab application, you must also install `@microsoft/teams-js` version `2.23.0` or later. If you want to use the `TestLiveShareHost` class for local browser development, you must install `@fluidframework/test-client-utils` and `start-server-and-test` packages in your `devDependencies`.
#### npm
yarn add @fluidframework/test-client-utils --dev
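For reference, the equivalent npm installs for the packages listed above might look like this:

```bash
npm install @microsoft/live-share fluid-framework @fluidframework/azure-client --save
npm install @microsoft/teams-js --save
npm install @fluidframework/test-client-utils start-server-and-test --save-dev
```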
### Register RSC permissions
-To enable the Live Share SDK for your meeting extension, you must first add the following RSC permissions into your app manifest:
+To enable the Live Share SDK for your tab extension, you must first add the following RSC permissions into your app manifest:
```json
{
To enable the Live Share SDK for your meeting extension, you must first add the
"team" ], "context": [
+ // meeting contexts
"meetingSidePanel",
- "meetingStage"
+ "meetingStage",
+ // content contexts
+ "privateChatTab",
+ "channelTab",
+ "meetingChatTab"
]
}
],
To enable the Live Share SDK for your meeting extension, you must first add the
}
```
-## Join a meeting session
+## Join a session
-Follow the steps to join a session that's associated with a user's meeting:
+Follow the steps to join a session that's associated with a user's meeting, chat, or channel:
1. Initialize `LiveShareClient`.
2. Define the data structures you want to synchronize. For example, `LiveState` or `SharedMap`.
3. Join the Fluid container.
const { container } = await liveShare.joinContainer(schema);
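Put together, a minimal vanilla JavaScript sketch of these steps might look like this (the `appState`/`playlistMap` schema is illustrative):

```javascript
import { LiveShareClient, LiveState } from "@microsoft/live-share";
import { LiveShareHost } from "@microsoft/teams-js";
import { SharedMap } from "fluid-framework";

// 1. Initialize LiveShareClient with the Teams host
const host = LiveShareHost.create();
const liveShare = new LiveShareClient(host);
// 2. Define the data structures you want to synchronize
const schema = {
  initialObjects: {
    appState: LiveState,
    playlistMap: SharedMap,
  },
};
// 3. Join the Fluid container for the meeting, chat, or channel session
const { container } = await liveShare.joinContainer(schema);
```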
```jsx
import { LiveShareHost } from "@microsoft/teams-js";
-import { LiveShareProvider, useLiveShareContext } from "@microsoft/live-share-react";
+import {
+ LiveShareProvider,
+ useLiveShareContext,
+} from "@microsoft/live-share-react";
import { useState } from "react"; export const App = () => {
- // Create the host as React state so that it doesn't get reset on mount
- const [host] = useState(LiveShareHost.create());
+ // Create the host as React state so that it doesn't get reset on mount
+ const [host] = useState(LiveShareHost.create());
- // Live Share for React does not require that you define a custom Fluid schema
- return (
- <LiveShareProvider host={host} joinOnLoad>
- <LiveShareLoading />
- </LiveShareProvider>
- );
-}
+ // Live Share for React does not require that you define a custom Fluid schema
+ return (
+ <LiveShareProvider host={host} joinOnLoad>
+ <LiveShareLoading />
+ </LiveShareProvider>
+ );
+};
const LiveShareLoading = () => {
- // Any live-share-react hook (e.g., useLiveShareContext, useLiveState, etc.) must be a child of <LiveShareProvider>
- const { joined } = useLiveShareContext();
- if (joined) {
- return <p>{"Loading..."}</p>;
- }
- return <p>{"Your app here..."}</p>;
-}
+ // Any live-share-react hook (e.g., useLiveShareContext, useLiveState, etc.) must be a child of <LiveShareProvider>
+ const { joined } = useLiveShareContext();
+ if (joined) {
+ return <p>{"Loading..."}</p>;
+ }
+ return <p>{"Your app here..."}</p>;
+};
```
-That's all it took to setup your container and join the meeting's session. Now, let's review the different types of _distributed data structures_ that you can use with the Live Share SDK.
+That's all it took to set up your container and join the session mapped to the meeting, chat, or channel. Now, let's review the different types of _distributed data structures_ that you can use with the Live Share SDK.
> [!TIP]
> Ensure that the Teams Client SDK is initialized before calling `LiveShareHost.create()`.

## Live Share data structures
-The Live Share SDK includes a set of new distributed-data structures that extend Fluid's `DataObject` class, providing new types of stateful and stateless objects. Unlike Fluid data structures, Live Share's `LiveDataObject` classes don't write changes to the Fluid container, enabling faster synchronization. Further, these classes were designed from the ground up for common meeting scenarios in Teams meetings. Common scenarios include synchronizing what content the presenter is viewing, displaying metadata for each user in the meeting, or displaying a countdown timer.
+The Live Share SDK includes a set of new distributed-data structures that extend Fluid's `DataObject` class, providing new types of stateful and stateless objects. Unlike Fluid data structures, Live Share's `LiveDataObject` classes don't write changes to the Fluid container, enabling faster synchronization. Further, these classes were designed from the ground up for common scenarios in Teams meetings, chats, and channels. Common scenarios include synchronizing what content the presenter is viewing, displaying metadata for each user in the session, or displaying a countdown timer.
-| Live Object | Description |
-| | |
-| [LivePresence](/javascript/api/@microsoft/live-share/livepresence) | See which users are online, set custom properties for each user, and broadcast changes to their presence. |
-| [LiveState](/javascript/api/@microsoft/live-share/livestate) | Synchronize any JSON serializable `state` value. |
-| [LiveTimer](/javascript/api/@microsoft/live-share/livetimer) | Synchronize a countdown timer for a given interval. |
-| [LiveEvent](/javascript/api/@microsoft/live-share/liveevent) | Broadcast individual events with any custom data attributes in the payload. |
-| [LiveFollowMode](/javascript/api/@microsoft/live-share/livefollowmode) | Follow specific users, present to everyone in the session, and start or end suspensions. |
+| Live Object | Description |
+| - | |
+| [LivePresence](/javascript/api/@microsoft/live-share/livepresence) | See which users are online, set custom properties for each user, and broadcast changes to their presence. |
+| [LiveState](/javascript/api/@microsoft/live-share/livestate) | Synchronize any JSON serializable `state` value. |
+| [LiveTimer](/javascript/api/@microsoft/live-share/livetimer) | Synchronize a countdown timer for a given interval. |
+| [LiveEvent](/javascript/api/@microsoft/live-share/liveevent) | Broadcast individual events with any custom data attributes in the payload. |
+| [LiveFollowMode](/javascript/api/@microsoft/live-share/livefollowmode) | Follow specific users, present to everyone in the session, and start or end suspensions. |
### LivePresence example
const presence = container.initialObjects.presence;
// Register listener for changes to each user's presence.
// This should be done before calling `.initialize()`.
presence.on("presenceChanged", (user, local) => {
- console.log("A user presence changed:")
+ console.log("A user presence changed:");
console.log("- display name:", user.displayName); console.log("- state:", user.state); console.log("- custom data:", user.data);
export const MyCustomPresence = () => {
Users joining a session from a single account have a single `LivePresenceUser` record that is shared across all their devices. To access the latest `data` and `state` for each of their active connections, you can use the `getConnections()` API from the `LivePresenceUser` class. This returns a list of `LivePresenceConnection` objects. You can see if a given `LivePresenceConnection` instance is from the local device using the `isLocalConnection` property.
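For example, a small sketch that inspects each connection, assuming `user` is a `LivePresenceUser` received from a `presenceChanged` event:

```javascript
// Inspect every active connection for this user
user.getConnections().forEach((connection) => {
  console.log("- connection state:", connection.state);
  console.log("- connection data:", connection.data);
  console.log("- is local device:", connection.isLocalConnection);
});
```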
-Each `LivePresenceUser` and `LivePresenceConnection` instance has a `state` property, which can be either `online`, `offline`, or `away`. An `presenceChanged` event is emitted when a user's state changes. For example, if a user leaves a meeting, their state changes to `offline`.
+Each `LivePresenceUser` and `LivePresenceConnection` instance has a `state` property, which can be either `online`, `offline`, or `away`. A `presenceChanged` event is emitted when a user's state changes. For example, if a user disconnects from the session or closes the application, their state changes to `offline`.
> [!NOTE]
-> It can take up to 20 seconds for an `LivePresenceUser`'s `state` to update to `offline` after leaving a meeting.
+> It can take up to 20 seconds for a `LivePresenceUser`'s `state` to update to `offline` after a user disconnects from the session.
### LiveState example

:::image type="content" source="../assets/images/teams-live-share/live-share-state.png" alt-text="Screenshot shows an example of Live Share state to synchronize what planet in the solar system is actively presented to the meeting.":::
-The `LiveState` class enables synchronizing simple application state for everyone in a meeting. `LiveState` synchronizes a single `state` value, allowing you to synchronize any JSON serializable value, such as a `string`, `number`, or `object`.
+The `LiveState` class enables synchronizing simple application state for connected participants. `LiveState` synchronizes a single `state` value, allowing you to synchronize any JSON serializable value, such as a `string`, `number`, or `object`.
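As a minimal vanilla JavaScript sketch, assuming a `LiveState` named `appState` in your container schema:

```javascript
const appState = container.initialObjects.appState;

// Register listener for changes to the state value.
// This should be done before calling `.initialize()`.
appState.on("stateChanged", (planet, local) => {
  console.log("Selected planet:", planet);
});
await appState.initialize("Mercury");

// Broadcast a new state value to everyone in the session
await appState.set("Earth");
```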
The following are a few examples in which `LiveState` can be used in your application:
const MY_UNIQUE_KEY = "selected-planet-key";
// Example component for using useLiveState
export const MyCustomState = () => {
- const [planet, setPlanet] = useLiveState(MY_UNIQUE_KEY, planets[0]);
+ const [planet, setPlanet] = useLiveState(MY_UNIQUE_KEY, planets[0]);
- // Render UI
- return (
- <div>
- {`Current planet: ${planet}`}
- {'Select a planet below:'}
- {planets.map((planet) => (
- <button key={planet} onClick={() => {
- setPlanet(planet);
- }}>
- {planet}
- </button>
- ))}
- </div>
- );
-}
+ // Render UI
+ return (
+ <div>
+ {`Current planet: ${planet}`}
+ {"Select a planet below:"}
+ {planets.map((planet) => (
+ <button
+ key={planet}
+ onClick={() => {
+ setPlanet(planet);
+ }}
+ >
+ {planet}
+ </button>
+ ))}
+ </div>
+ );
+};
```
export const MyCustomState = () => {
:::image type="content" source="../assets/images/teams-live-share/live-share-event.png" alt-text="Screenshot shows an example of Teams client displaying notification when there's a change in the event.":::
-`LiveEvent` is a great way to send simple events to other clients in a meeting that are only needed at the time of delivery. It's useful for scenarios like sending session notifications or implementing custom reactions.
+`LiveEvent` is a great way to send simple events to other connected clients that are only needed at the time of delivery. It's useful for scenarios like sending session notifications or implementing custom reactions.
# [JavaScript](#tab/javascript)
await customReactionEvent.send(kudosReaction);
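A fuller sketch of this flow, assuming a `LiveEvent` named `customReactionEvent` in your container schema:

```javascript
const customReactionEvent = container.initialObjects.customReactionEvent;

// Register listener for incoming events.
// This should be done before calling `.initialize()`.
customReactionEvent.on("received", (reaction, local, clientId) => {
  console.log(`Received reaction: ${reaction}, from local user: ${local}`);
});
await customReactionEvent.initialize();

// Broadcast a reaction to everyone else in the session
await customReactionEvent.send("❤️");
```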
```jsx
import { useLiveEvent } from "@microsoft/live-share-react";
-const emojis = [
- "❤️",
- "😂",
- "👍",
- "👎",
-];
+const emojis = ["❤️", "😂", "👍", "👎"];
// Define a unique key that differentiates this usage of `useLiveEvent` from others in your app
const MY_UNIQUE_KEY = "event-key";

// Example component for using useLiveEvent
export const MyCustomEvent = () => {
- const {
- latestEvent,
- sendEvent,
- } = useLiveEvent(MY_UNIQUE_KEY);
+ const { latestEvent, sendEvent } = useLiveEvent(MY_UNIQUE_KEY);
- // Render UI
- return (
- <div>
- {`Latest event: ${latestEvent?.value}, from local user: ${latestEvent?.local}`}
- {'Select a planet below:'}
- {emojis.map((emoji) => (
- <button key={emoji} onClick={() => {
- sendEvent(emoji);
- }}>
- {emoji}
- </button>
- ))}
- </div>
- );
-}
+ // Render UI
+ return (
+ <div>
+ {`Latest event: ${latestEvent?.value}, from local user: ${latestEvent?.local}`}
+ {"Select a planet below:"}
+ {emojis.map((emoji) => (
+ <button
+ key={emoji}
+ onClick={() => {
+ sendEvent(emoji);
+ }}
+ >
+ {emoji}
+ </button>
+ ))}
+ </div>
+ );
+};
```
export const MyCustomEvent = () => {
:::image type="content" source="../assets/images/teams-live-share/live-share-timer.png" alt-text="Screenshot shows an example of a count down timer with 9 seconds remaining.":::
-`LiveTimer` provides a simple countdown timer that is synchronized for all participants in a meeting. It's useful for scenarios that have a time limit, such as a group meditation timer or a round timer for a game. You can also use it to schedule tasks for everyone in the session, such as displaying a reminder prompt.
+`LiveTimer` provides a simple countdown timer that is synchronized for all connected participants. It's useful for scenarios that have a time limit, such as a group meditation timer or a round timer for a game. You can also use it to schedule tasks for everyone in the session, such as displaying a reminder prompt.
# [JavaScript](#tab/javascript)
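A minimal vanilla JavaScript sketch, assuming a `LiveTimer` named `timer` in your container schema:

```javascript
const timer = container.initialObjects.timer;

// Register listeners before calling `.initialize()`
timer.on("started", (config, local) => {
  console.log("Timer started with duration:", config.duration);
});
timer.on("finished", (config) => {
  console.log("Timer finished!");
});
await timer.initialize();

// Start a 60-second countdown for everyone in the session
await timer.start(60 * 1000);
```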
const MY_UNIQUE_KEY = "timer-key";
// Example component for using useLiveTimer
export function CountdownTimer() {
- const { milliRemaining, timerConfig, start, pause, play } = useLiveTimer("TIMER-ID");
+  const { milliRemaining, timerConfig, start, pause, play } =
+    useLiveTimer(MY_UNIQUE_KEY);
  return (
    <div>
export function CountdownTimer() {
            start(60 * 1000);
          }}
        >
- { timerConfig === undefined ? "Start" : "Reset" }
+ {timerConfig === undefined ? "Start" : "Reset"}
</button>
- { timerConfig !== undefined && (
+ {timerConfig !== undefined && (
        <button
          onClick={() => {
            if (timerConfig.running) {
export function CountdownTimer() {
            }
          }}
        >
- {timerConfig.running ? "Pause" : "Play" }
+ {timerConfig.running ? "Pause" : "Play"}
        </button>
      )}
- { milliRemaining !== undefined && (
+ {milliRemaining !== undefined && (
<p>
- { `${Math.round(milliRemaining / 1000)} / ${Math.round(timerConfig.duration) / 1000}` }
+      {`${Math.round(milliRemaining / 1000)} / ${Math.round(
+        timerConfig.duration / 1000
+      )}`}
        </p>
      )}
    </div>
export function CountdownTimer() {
:::image type="content" source="../assets/images/teams-live-share/live-share-follow-mode.png" alt-text="Image shows three clients with three separate views: a presenter, a user who follows the presenter, and a user with their own private view with the option to sync back to the presenter.":::
-> [!NOTE]
-> `LiveFollowMode` is in Beta and provided as a preview only. Don't use this API in a production environment.
- The `LiveFollowMode` class combines `LivePresence` and `LiveState` into a single class, enabling you to easily implement follower and presenter modes into your application. This allows you to implement familiar patterns from popular collaborative apps such as PowerPoint Live, Excel Live, and Whiteboard. Unlike screen sharing, `LiveFollowMode` allows you to render content with high quality, improved accessibility, and enhanced performance.

Users can easily switch between their private views and follow other users. You can use the `startPresenting()` function to **take control** of the application for all other users in the session. Alternatively, you can allow users to individually select specific users they want to follow using the `followUser()` function. In both scenarios, users can temporarily enter a private view with the `beginSuspension()` function or synchronize back to the presenter with the `endSuspension()` function.

Meanwhile, the `update()` function allows the local user to inform other clients in the session of their own personal `stateValue`. Similar to `LivePresence`, you can listen to changes to each user's `stateValue` through a `presenceChanged` event listener.
followMode.on("stateChanged", (state, local, clientId) => {
console.log("- state value:", state.value); console.log("- follow mode type:", state.type); console.log("- following user id:", state.followingUserId);
- console.log("- count of other users also following user", state.otherUsersCount);
- console.log("- state.value references local user's stateValue", state.isLocalValue);
+ console.log(
+ "- count of other users also following user",
+ state.otherUsersCount
+ );
+ console.log(
+ "- state.value references local user's stateValue",
+ state.isLocalValue
+ );
  // Can optionally get the relevant user's presence object
  const followingUser = followMode.getUserForClient(clientId);
  switch (state.type) {
    case FollowModeType.local: {
- // Update app to reflect that the user is not currently following anyone and there is no presenter.
- infoText.innerHTML = "";
- // Show a "Start presenting" button in your app.
- button.innerHTML = "Start presenting";
- button.onclick = followMode.startPresenting;
- // Note: state.isLocalValue will be true.
- break;
+ // Update app to reflect that the user is not currently following anyone and there is no presenter.
+ infoText.innerHTML = "";
+ // Show a "Start presenting" button in your app.
+ button.innerHTML = "Start presenting";
+ button.onclick = followMode.startPresenting;
+ // Note: state.isLocalValue will be true.
+ break;
    }
    case FollowModeType.activeFollowers: {
- // Update app to reflect that the local user is being followed by other users.
- infoText.innerHTML = `${state.otherUsersCount} users are following you`;
- // Does not mean that the local user is presenting to everyone, so you can still show the "Start presenting" button.
- button.innerHTML = "Present to all";
- button.onclick = followMode.startPresenting;
- // Note: state.isLocalValue will be true.
- break;
+ // Update app to reflect that the local user is being followed by other users.
+ infoText.innerHTML = `${state.otherUsersCount} users are following you`;
+ // Does not mean that the local user is presenting to everyone, so you can still show the "Start presenting" button.
+ button.innerHTML = "Present to all";
+ button.onclick = followMode.startPresenting;
+ // Note: state.isLocalValue will be true.
+ break;
    }
    case FollowModeType.activePresenter: {
- // Update app to reflect that the local user is actively presenting to everyone.
- infoText.innerHTML = `You are actively presenting to everyone`;
- // Show a "Stop presenting" button in your app.
- button.innerHTML = "Stop presenting";
- button.onclick = followMode.stopPresenting;
- // Note: state.isLocalValue will be true.
- break;
+ // Update app to reflect that the local user is actively presenting to everyone.
+ infoText.innerHTML = `You are actively presenting to everyone`;
+ // Show a "Stop presenting" button in your app.
+ button.innerHTML = "Stop presenting";
+ button.onclick = followMode.stopPresenting;
+ // Note: state.isLocalValue will be true.
+ break;
    }
    case FollowModeType.followPresenter: {
- // The local user is following a remote presenter.
- infoText.innerHTML = `${followingUser?.displayName} is presenting to everyone`;
- // Show a "Take control" button in your app.
- button.innerHTML = "Take control";
- button.onclick = followMode.startPresenting;
- // Note: state.isLocalValue will be false.
- break;
+ // The local user is following a remote presenter.
+ infoText.innerHTML = `${followingUser?.displayName} is presenting to everyone`;
+ // Show a "Take control" button in your app.
+ button.innerHTML = "Take control";
+ button.onclick = followMode.startPresenting;
+ // Note: state.isLocalValue will be false.
+ break;
    }
    case FollowModeType.suspendFollowPresenter: {
- // The local user is following a remote presenter but has an active suspension.
- infoText.innerHTML = `${followingUser?.displayName} is presenting to everyone`;
- // Show a "Sync to presenter" button in your app.
- button.innerHTML = "Sync to presenter";
- button.onclick = followMode.endSuspension;
- // Note: state.isLocalValue will be true.
- break;
+ // The local user is following a remote presenter but has an active suspension.
+ infoText.innerHTML = `${followingUser?.displayName} is presenting to everyone`;
+ // Show a "Sync to presenter" button in your app.
+ button.innerHTML = "Sync to presenter";
+ button.onclick = followMode.endSuspension;
+ // Note: state.isLocalValue will be true.
+ break;
    }
    case FollowModeType.followUser: {
- // The local user is following a specific remote user.
- infoText.innerHTML = `You are following ${followingUser?.displayName}`;
- // Show a "Stop following" button in your app.
- button.innerHTML = "Stop following";
- button.onclick = followMode.stopFollowing;
- // Note: state.isLocalValue will be false.
- break;
+ // The local user is following a specific remote user.
+ infoText.innerHTML = `You are following ${followingUser?.displayName}`;
+ // Show a "Stop following" button in your app.
+ button.innerHTML = "Stop following";
+ button.onclick = followMode.stopFollowing;
+ // Note: state.isLocalValue will be false.
+ break;
    }
    case FollowModeType.suspendFollowUser: {
- // The local user is following a specific remote user but has an active suspension.
- infoText.innerHTML = `You were following ${followingUser?.displayName}`;
- // Show a "Resume following" button in your app.
- button.innerHTML = "Resume following";
- button.onclick = followMode.endSuspension;
- // Note: state.isLocalValue will be true.
- break;
+ // The local user is following a specific remote user but has an active suspension.
+ infoText.innerHTML = `You were following ${followingUser?.displayName}`;
+ // Show a "Resume following" button in your app.
+ button.innerHTML = "Resume following";
+ button.onclick = followMode.endSuspension;
+ // Note: state.isLocalValue will be true.
+ break;
    }
    default: {
- break;
+ break;
    }
  }
  const newCameraPosition = state.value;
await followMode.initialize(startingCameraPosition);
// For something like a camera change event, you should use a debounce function to prevent sending updates too frequently.
// Note: it helps to distinguish changes initiated by the local user (e.g., drag mouse) separately from other change events.
function onCameraPositionChanged(position, isUserAction) {
- // Broadcast change to other users so that they have their latest camera position
- followMode.update(position);
- // If the local user changed the position while following another user, we want to suspend.
- // Note: helps to distinguish changes initiated by the local user (e.g., drag mouse) separately from other change events.
- if (!isUserAction) return;
- switch (state.type) {
- case FollowModeType.followPresenter:
- case FollowModeType.followUser: {
- // This will trigger a "stateChanged" event update for the local user only.
- followMode.beginSuspension();
- break;
- }
- default: {
- // No need to suspend for other types
- break;
- }
+ // Broadcast change to other users so that they have their latest camera position
+ followMode.update(position);
+ // If the local user changed the position while following another user, we want to suspend.
+ // Note: helps to distinguish changes initiated by the local user (e.g., drag mouse) separately from other change events.
+ if (!isUserAction) return;
+ switch (state.type) {
+ case FollowModeType.followPresenter:
+ case FollowModeType.followUser: {
+ // This will trigger a "stateChanged" event update for the local user only.
+ followMode.beginSuspension();
+ break;
}
+ default: {
+ // No need to suspend for other types
+ break;
+ }
+ }
}
```
const startingCameraPosition = {
// Example component for using useLiveFollowMode
export const MyLiveFollowMode = () => {
- const {
- state,
- localUser,
- otherUsers,
- allUsers,
- liveFollowMode,
- update,
- startPresenting,
- stopPresenting,
- beginSuspension,
- endSuspension,
- followUser,
- stopFollowing,
- } = useLiveFollowMode(MY_UNIQUE_KEY, startingCameraPosition);
-
- // Example of an event listener for a camera position changed event.
- // For something like a camera change event, you should use a debounce function to prevent sending updates too frequently.
- // Note: it helps to distinguish changes initiated by the local user (e.g., drag mouse) separately from other change events.
- function onCameraPositionChanged(position, isUserAction) {
- // Broadcast change to other users so that they have their latest camera position
- update(position);
- // If the local user changed the position while following another user, we want to suspend.
- // Note: helps to distinguish changes initiated by the local user (e.g., drag mouse) separately from other change events.
- if (!isUserAction) return;
- switch (state.type) {
- case FollowModeType.followPresenter:
- case FollowModeType.followUser: {
- // This will trigger a "stateChanged" event update for the local user only.
- followMode.beginSuspension();
- break;
- }
- default: {
- // No need to suspend for other types
- break;
- }
- }
+ const {
+ state,
+ localUser,
+ otherUsers,
+ allUsers,
+ liveFollowMode,
+ update,
+ startPresenting,
+ stopPresenting,
+ beginSuspension,
+ endSuspension,
+ followUser,
+ stopFollowing,
+ } = useLiveFollowMode(MY_UNIQUE_KEY, startingCameraPosition);
+
+ // Example of an event listener for a camera position changed event.
+ // For something like a camera change event, you should use a debounce function to prevent sending updates too frequently.
+ // Note: it helps to distinguish changes initiated by the local user (e.g., drag mouse) separately from other change events.
+ function onCameraPositionChanged(position, isUserAction) {
+ // Broadcast change to other users so that they have their latest camera position
+ update(position);
+ // If the local user changed the position while following another user, we want to suspend.
+ // Note: helps to distinguish changes initiated by the local user (e.g., drag mouse) separately from other change events.
+ if (!isUserAction) return;
+ switch (state.type) {
+ case FollowModeType.followPresenter:
+ case FollowModeType.followUser: {
+ // This will trigger a "stateChanged" event update for the local user only.
+ followMode.beginSuspension();
+ break;
+ }
+ default: {
+ // No need to suspend for other types
+ break;
+ }
}
+ }
- // Can optionally get the relevant user's presence object
- const followingUser = liveFollowMode?.getUser(state.followingUserId);
+ // Can optionally get the relevant user's presence object
+ const followingUser = liveFollowMode?.getUser(state.followingUserId);
- // Render UI
- return (
+ // Render UI
+ return (
+ <div>
+ {state.type === FollowModeType.local && (
<div>
- {state.type === FollowModeType.local && (
- <div>
- <p>{""}</p>
- <button onClick={startPresenting}>
- {"Start presenting"}
- </button>
- </div>
- )}
- {state.type === FollowModeType.activeFollowers && (
- <div>
- <p>{`${state.otherUsersCount} users are following you`}</p>
- <button onClick={startPresenting}>
- {"Present to all"}
- </button>
- </div>
- )}
- {state.type === FollowModeType.activePresenter && (
- <div>
- <p>{`You are actively presenting to everyone`}</p>
- <button onClick={stopPresenting}>
- {"Stop presenting"}
- </button>
- </div>
- )}
- {state.type === FollowModeType.followPresenter && (
- <div>
- <p>{`${followingUser?.displayName} is presenting to everyone`}</p>
- <button onClick={startPresenting}>
- {"Take control"}
- </button>
- </div>
- )}
- {state.type === FollowModeType.suspendFollowPresenter && (
- <div>
- <p>{`${followingUser?.displayName} is presenting to everyone`}</p>
- <button onClick={endSuspension}>
- {"Sync to presenter"}
- </button>
- </div>
- )}
- {state.type === FollowModeType.followUser && (
- <div>
- <p>{`You are following ${followingUser?.displayName}`}</p>
- <button onClick={stopFollowing}>
- {"Stop following"}
- </button>
- </div>
- )}
- {state.type === FollowModeType.suspendFollowUser && (
- <div>
- <p>{`You were following ${followingUser?.displayName}`}</p>
- <button onClick={stopFollowing}>
- {"Resume following"}
- </button>
- </div>
- )}
- <div>
- <p>{"Follow a specific user:"}</p>
- {otherUsers.map((user) => (
- <button onClick={() => {
- followUser(user.userId);
- }} key={user.userId}>
- {user.displayName}
- </button>
- ))}
- </div>
- <Example3DModelViewer
- cameraPosition={state.value}
- onCameraPositionChanged={onCameraPositionChanged}
- />
+ <p>{""}</p>
+ <button onClick={startPresenting}>{"Start presenting"}</button>
</div>
- );
+ )}
+ {state.type === FollowModeType.activeFollowers && (
+ <div>
+ <p>{`${state.otherUsersCount} users are following you`}</p>
+ <button onClick={startPresenting}>{"Present to all"}</button>
+ </div>
+ )}
+ {state.type === FollowModeType.activePresenter && (
+ <div>
+ <p>{`You are actively presenting to everyone`}</p>
+ <button onClick={stopPresenting}>{"Stop presenting"}</button>
+ </div>
+ )}
+ {state.type === FollowModeType.followPresenter && (
+ <div>
+ <p>{`${followingUser?.displayName} is presenting to everyone`}</p>
+ <button onClick={startPresenting}>{"Take control"}</button>
+ </div>
+ )}
+ {state.type === FollowModeType.suspendFollowPresenter && (
+ <div>
+ <p>{`${followingUser?.displayName} is presenting to everyone`}</p>
+ <button onClick={endSuspension}>{"Sync to presenter"}</button>
+ </div>
+ )}
+ {state.type === FollowModeType.followUser && (
+ <div>
+ <p>{`You are following ${followingUser?.displayName}`}</p>
+ <button onClick={stopFollowing}>{"Stop following"}</button>
+ </div>
+ )}
+ {state.type === FollowModeType.suspendFollowUser && (
+ <div>
+ <p>{`You were following ${followingUser?.displayName}`}</p>
+ <button onClick={stopFollowing}>{"Resume following"}</button>
+ </div>
+ )}
+ <div>
+ <p>{"Follow a specific user:"}</p>
+ {otherUsers.map((user) => (
+ <button
+ onClick={() => {
+ followUser(user.userId);
+ }}
+ key={user.userId}
+ >
+ {user.displayName}
+ </button>
+ ))}
+ </div>
+ <Example3DModelViewer
+ cameraPosition={state.value}
+ onCameraPositionChanged={onCameraPositionChanged}
+ />
+ </div>
+ );
+};
+```
+++
+In `meetingStage` contexts, your users are collaborating and presenting synchronously to facilitate more productive discussions. When a user presents content to the meeting stage, you should call the `startPresenting()` API for the initial presenter. In `content` contexts like collaborative stageview, content is most commonly consumed asynchronously. In this case, it's best to let users opt into real-time collaboration, such as through a "Follow" button. Using the `teamsJs.app.getContext()` API in the Teams JavaScript SDK, you can easily adjust your functionality accordingly.
+
+Example:
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+import {
+ LiveShareClient,
+ LiveFollowMode,
+ FollowModeType,
+} from "@microsoft/live-share";
+import {
+ app,
+ meeting,
+ FrameContexts,
+ LiveShareHost,
+} from "@microsoft/teams-js";
+
+// Join the Fluid container
+const host = LiveShareHost.create();
+const liveShare = new LiveShareClient(host);
+const schema = {
+ initialObjects: {
+ followMode: LiveFollowMode,
+ },
+};
+const { container } = await liveShare.joinContainer(schema);
+const followMode = container.initialObjects.followMode;
+
+// Get teamsJs context
+const context = await app.getContext();
+// Take control if in meetingStage context and local user is initial presenter
+if (context.page?.frameContext === FrameContexts.meetingStage) {
+  // Check if the local user is the initial presenter
+  meeting.getAppContentStageSharingState(async (error, state) => {
+    const isShareInitiator = state?.isShareInitiator;
+    if (!isShareInitiator) return;
+    // The user is the initial presenter, so we "take control"
+    await followMode.startPresenting();
+ });
+}
+// TODO: rest of app logic
+```
+
+# [TypeScript](#tab/typescript)
+
+```typescript
+import {
+ LiveShareClient,
+ LiveFollowMode,
+ FollowModeType,
+ IFollowModeState,
+ FollowModePresenceUser,
+} from "@microsoft/live-share";
+import {
+ app,
+ meeting,
+ FrameContexts,
+ LiveShareHost,
+} from "@microsoft/teams-js";
+
+// Join the Fluid container
+const host = LiveShareHost.create();
+const liveShare = new LiveShareClient(host);
+const schema = {
+ initialObjects: {
+ followMode: LiveFollowMode,
+ },
+};
+const { container } = await liveShare.joinContainer(schema);
+// Force casting is necessary because Fluid does not maintain type recognition for `container.initialObjects`.
+// Casting here is always safe, as the `initialObjects` is constructed based on the schema you provide to `.joinContainer`.
+const followMode = container.initialObjects.followMode as unknown as LiveFollowMode;
+
+// Get teamsJs context
+const context: app.Context = await app.getContext();
+// Take control if in meetingStage context and local user is initial presenter
+if (context.page?.frameContext === FrameContexts.meetingStage) {
+ // Check if user is initial presenter
+  meeting.getAppContentStageSharingState(async (error, state) => {
+ // isShareInitiator is not currently declared in the typedocs in the SDK, so we cast as any
+ const isShareInitiator = (state as any)?.isShareInitiator;
+ if (!isShareInitiator) return;
+ // The user is the initial presenter, so we "take control"
+ await followMode.startPresenting();
+ });
}
+// TODO: rest of app logic
+```
+
+# [React](#tab/react)
+
+```jsx
+import { LiveDataObjectInitializeState } from "@microsoft/live-share";
+import { useLiveFollowMode } from "@microsoft/live-share-react";
+import { app, meeting, FrameContexts } from "@microsoft/teams-js";
+import { useEffect, useState } from "react";
+// As an example, we will use a fake component to denote what a 3D viewer might look like in an app
+import { Example3DModelViewer } from "./components";
+
+// Define a unique key that differentiates this usage of `useLiveFollowMode` from others in your app
+const MY_UNIQUE_KEY = "follow-mode-key";
+
+// Example component for using useLiveFollowMode
+export const MyLiveFollowMode = () => {
+ const {
+ liveFollowMode,
+ startPresenting,
+ } = useLiveFollowMode(MY_UNIQUE_KEY, undefined);
+
+ const [isShareInitiator, setIsShareInitiator] = useState(false);
+
+ // Check if user is using app in meeting stage and is the initial presenter
+  useEffect(() => {
+    // Get teamsJs context
+    app.getContext().then((context) => {
+      if (context.page?.frameContext !== FrameContexts.meetingStage) return;
+      meeting.getAppContentStageSharingState((error, state) => {
+        // isShareInitiator is not currently declared in the typedocs in the SDK
+        if (!state?.isShareInitiator) return;
+        setIsShareInitiator(true);
+      });
+    });
+  }, []);
+
+ // Take control if in meetingStage context and local user is initial presenter
+ useEffect(() => {
+ // Wait for liveFollowMode to be initialized
+ if (liveFollowMode?.initializeState !== LiveDataObjectInitializeState.succeeded) return;
+ // Skip if user is not the initial presenter
+ if (!isShareInitiator) return;
+ startPresenting();
+  }, [liveFollowMode, startPresenting, isShareInitiator]);
+
+ // TODO: proceed with rest of app setup
+ return (
+ <></>
+ );
+};
```
export const MyLiveFollowMode = () => {
Meetings in Teams include calls, all-hands meetings, and online classrooms. Meeting participants might span across organizations, have different privileges, or have different goals. Hence, it's important to respect the privileges of different user roles during meetings. Live objects are designed to support role verification, allowing you to define the roles that are allowed to send messages for each individual live object. For example, you might permit only meeting presenters and organizers to control video playback, while still allowing guests and attendees to request the next videos to watch.
+> [!NOTE]
+> When accessing Live Share from a chat or channel `content` context, all users have the `Organizer` and `Presenter` roles.
+ In the following example where only presenters and organizers can take control, `LiveState` is used to synchronize which user is the active presenter:

# [JavaScript](#tab/javascript)
const ALLOWED_ROLES = [UserMeetingRole.organizer, UserMeetingRole.presenter];
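In vanilla JavaScript, the same restriction is applied by passing the allowed roles when initializing the object; a sketch, assuming a `LiveState` named `presenterState` in your schema:

```javascript
const presenterState = container.initialObjects.presenterState;

// Only organizers and presenters can broadcast state changes
await presenterState.initialize({ presentingUserId: undefined }, ALLOWED_ROLES);

try {
  await presenterState.set({ presentingUserId: "<LOCAL_USER_ID>" });
} catch (error) {
  // Rejected if the local user doesn't have an allowed role
  console.error(error);
}
```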
// Example component for using useLiveState
export const MyCustomState = () => {
- const [state, setState] = useLiveState(MY_UNIQUE_KEY, INITIAL_STATE, ALLOWED_ROLES);
-
- const onTakeControl = async () => {
- try {
- await setState({
- ...state,
- presentingUserId: "<LOCAL_USER_ID>",
- });
- } catch (error) {
- console.error(error);
- }
- };
+ const [state, setState] = useLiveState(
+ MY_UNIQUE_KEY,
+ INITIAL_STATE,
+ ALLOWED_ROLES
+ );
- // Render UI
- return (
- <div>
- {`Current document: ${state.documentId}`}
- {`Current presenter: ${state.presentingUserId}`}
- <button onClick={onTakeControl}>
- Take control
- </button>
- </div>
- );
-}
+ const onTakeControl = async () => {
+ try {
+ await setState({
+ ...state,
+ presentingUserId: "<LOCAL_USER_ID>",
+ });
+ } catch (error) {
+ console.error(error);
+ }
+ };
+
+ // Render UI
+ return (
+ <div>
+ {`Current document: ${state.documentId}`}
+ {`Current presenter: ${state.presentingUserId}`}
+ <button onClick={onTakeControl}>Take control</button>
+ </div>
+ );
+};
```
In some cases, a user might have multiple roles. For example, an **Organizer** i
The Live Share SDK supports any [distributed data structure](https://fluidframework.com/docs/data-structures/overview/) included in Fluid Framework. These features serve as a set of primitives you can use to build robust collaborative scenarios, such as real-time updates of a task list or co-authoring text within an HTML `<textarea>`.
-Unlike the `LiveDataObject` classes mentioned in this article, Fluid data structures don't reset after your application is closed. This is ideal for scenarios such as the meeting side panel, where users frequently close and reopen your app while using other tabs in the meeting, such as chat.
+Unlike the `LiveDataObject` classes mentioned in this article, Fluid data structures don't reset after your application is closed. This is ideal for scenarios such as the meeting `sidePanel` and `content` contexts, where users frequently close and reopen your app.
Fluid Framework officially supports the following types of distributed data structures:
Fluid Framework officially supports the following types of distributed data stru
| --- | --- |
| [SharedMap](https://fluidframework.com/docs/data-structures/map/) | A distributed key-value store. Set any JSON-serializable object for a given key to synchronize that object for everyone in the session. |
| [SharedSegmentSequence](https://fluidframework.com/docs/data-structures/sequences/) | A list-like data structure for storing a set of items (called segments) at set positions. |
-| [SharedString](https://fluidframework.com/docs/data-structures/string/) | A distributed-string sequence optimized for editing the text of documents or text areas. |
+| [SharedString](https://fluidframework.com/docs/data-structures/string/) | A distributed-string sequence optimized for editing the text of documents or text areas. |
Let's see how `SharedMap` works. In this example, we've used `SharedMap` to build a playlist feature.
import { useSharedMap } from "@microsoft/live-share-react";
import { v4 as uuid } from "uuid"; // Unique key that distinguishes this useSharedMap from others in your app
-const UNIQUE_KEY = "CUSTOM-MAP-ID"
+const UNIQUE_KEY = "CUSTOM-MAP-ID";
export function PlaylistMapExample() {
  const { map, setEntry, deleteEntry } = useSharedMap(UNIQUE_KEY);
export function PlaylistMapExample() {
      </button>
      <div>
        {[...map.values()].map((video) => (
- <div key={video.id}>
- {video.title}
- </div>
+ <div key={video.id}>{video.title}</div>
        ))}
      </div>
    </div>
Example:
# [JavaScript](#tab/javascript)

```javascript
-import { LiveShareClient, TestLiveShareHost, LiveState } from "@microsoft/live-share";
+import {
+ LiveShareClient,
+ TestLiveShareHost,
+ LiveState,
+} from "@microsoft/live-share";
import { LiveShareHost } from "@microsoft/teams-js"; import { SharedMap } from "fluid-framework";
import { SharedMap } from "fluid-framework";
 */
const inTeams = process.env.IN_TEAMS;

// Join the Fluid container
-const host = inTeams
- ? LiveShareHost.create()
- : TestLiveShareHost.create();
+const host = inTeams ? LiveShareHost.create() : TestLiveShareHost.create();
const liveShare = new LiveShareClient(host);
const schema = {
  initialObjects: {
const { container } = await liveShare.joinContainer(schema);
```jsx
import { TestLiveShareHost } from "@microsoft/live-share";
import { LiveShareHost } from "@microsoft/teams-js";
-import { LiveShareProvider, useLiveShareContext } from "@microsoft/live-share-react";
+import {
+ LiveShareProvider,
+ useLiveShareContext,
+} from "@microsoft/live-share-react";
import { useState } from "react"; /**
import { useState } from "react";
const inTeams = process.env.IN_TEAMS;

export const App = () => {
- // Create the host as React state so that it doesn't get reset on mount
- const [host] = useState(
- inTeams ? LiveShareHost.create() : TestLiveShareHost.create()
- );
+ // Create the host as React state so that it doesn't get reset on mount
+ const [host] = useState(
+ inTeams ? LiveShareHost.create() : TestLiveShareHost.create()
+ );
- // Live Share for React does not require that you define a custom Fluid schema
- return (
- <LiveShareProvider host={host} joinOnLoad>
- <LiveShareLoading />
- </LiveShareProvider>
- );
-}
+ // Live Share for React does not require that you define a custom Fluid schema
+ return (
+ <LiveShareProvider host={host} joinOnLoad>
+ <LiveShareLoading />
+ </LiveShareProvider>
+ );
+};
const LiveShareLoading = () => {
- // Any live-share-react hook (e.g., useLiveShareContext, useLiveState, etc.) must be a child of <LiveShareProvider>
- const { joined } = useLiveShareContext();
- if (joined) {
- return <p>{"Loading..."}</p>;
- }
- return <p>{"Your app here..."}</p>;
-}
+ // Any live-share-react hook (e.g., useLiveShareContext, useLiveState, etc.) must be a child of <LiveShareProvider>
+ const { joined } = useLiveShareContext();
+ if (joined) {
+ return <p>{"Loading..."}</p>;
+ }
+ return <p>{"Your app here..."}</p>;
+};
```
The `TestLiveShareHost` class utilizes `tinylicious` test server from Fluid Fram
```json
{
- "scripts": {
- "start": "start-server-and-test start:server 7070 start:client",
- "start:client": "{YOUR START CLIENT COMMAND HERE}",
- "start:server": "npx tinylicious@latest"
- },
- "devDependencies": {
- "@fluidframework/test-client-utils": "^1.3.6",
- "start-server-and-test": "^2.0.0"
- }
+ "scripts": {
+ "start": "start-server-and-test start:server 7070 start:client",
+ "start:client": "{YOUR START CLIENT COMMAND HERE}",
+ "start:server": "npx tinylicious@latest"
+ },
+ "devDependencies": {
+ "@fluidframework/test-client-utils": "^1.3.6",
+ "start-server-and-test": "^2.0.0"
+ }
}
```
When you start your application this way, the `LiveShareClient` adds `#{containe
## Code samples
-| Sample name | Description | JavaScript | TypeScript |
-| -- | | - | - |
-| Dice Roller | Enable all connected clients to roll a die and view the result. | [View](https://aka.ms/liveshare-diceroller) | [View](https://aka.ms/liveshare-diceroller-ts) |
-| Agile Poker | Enable all connected clients to play Agile Poker. | [View](https://aka.ms/liveshare-agilepoker) | NA |
-| 3D Model | Enable all connected clients to view a 3D model together. | NA | [View](https://aka.ms/liveshare-3dviewer-ts) |
-| Timer | Enable all connected clients to view a countdown timer. | NA | [View](https://aka.ms/liveshare-timer-ts) |
-| Presence avatars | Display presence avatars for all connected clients. | NA | [View](https://aka.ms/liveshare-presence-ts) |
+| Sample name | Description | JavaScript | TypeScript |
+| - | | - | - |
+| Dice Roller | Enable all connected clients to roll a die and view the result. | [View](https://aka.ms/liveshare-diceroller) | [View](https://aka.ms/liveshare-diceroller-ts) |
+| Agile Poker | Enable all connected clients to play Agile Poker. | [View](https://aka.ms/liveshare-agilepoker) | NA |
+| 3D Model | Enable all connected clients to view a 3D model together. | NA | [View](https://aka.ms/liveshare-3dviewer-ts) |
+| Timer | Enable all connected clients to view a countdown timer. | NA | [View](https://aka.ms/liveshare-timer-ts) |
+| Presence avatars | Display presence avatars for all connected clients. | NA | [View](https://aka.ms/liveshare-presence-ts) |
## Next step
platform Teams Live Share Faq https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/apps-in-teams-meetings/teams-live-share-faq.md
Get answers to common questions when using Live Share.<br>
<summary><b>Can I use my own Azure Fluid Relay service?</b></summary>
-Yes! When initializing Live Share, you can define your own `AzureConnectionConfig`. Live Share associates containers you create with meetings, but you need to implement the `ITokenProvider` interface to sign tokens for your containers. For example, you can use a provided `AzureFunctionTokenProvider`, which uses an Azure cloud function to request an access token from a server.
+Yes! When initializing Live Share, you can define your own `AzureConnectionConfig`. Live Share associates containers you create with meetings, chats, or channels, but you need to implement the `ITokenProvider` interface to sign tokens for your containers. For example, you can use a provided `AzureFunctionTokenProvider`, which uses an Azure cloud function to request an access token from a server.
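For example, a rough sketch of a custom connection, where the tenant ID, endpoint, and Azure Function URL are all placeholders you'd replace with your own:

```javascript
import { LiveShareClient } from "@microsoft/live-share";
import { AzureFunctionTokenProvider } from "@fluidframework/azure-client";
import { LiveShareHost } from "@microsoft/teams-js";

const host = LiveShareHost.create();
const liveShare = new LiveShareClient(host, {
  connection: {
    type: "remote",
    tenantId: "YOUR-AZURE-FLUID-RELAY-TENANT-ID",
    endpoint: "https://us.fluidrelay.azure.com",
    // Hypothetical Azure Function that signs tokens for your containers
    tokenProvider: new AzureFunctionTokenProvider(
      "https://myapp.azurewebsites.net/api/GetFluidToken",
      { userId: "userId", userName: "Test User" }
    ),
  },
});
```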
While most apps benefit from using our free hosted service, there may still be times when it's beneficial to use your own Azure Fluid Relay service for your Live Share app. Consider using a custom AFR service connection if you:
-* Require storage of data in Fluid containers beyond the lifetime of a meeting.
-* Transmit sensitive data through the service that requires a custom security policy.
-* Develop features through Fluid Framework, for example, `SharedMap`, for your application outside of Teams.
+- Require storage of data in Fluid containers beyond six hours after the container is first created.
+- Transmit sensitive data through the service that requires a custom security policy.
+- Develop features through Fluid Framework, for example, `SharedMap`, for your application outside of Teams.
For more information, see [how to guide](./teams-live-share-how-to/how-to-custom-azure-fluid-relay.md) or visit the [Azure Fluid Relay documentation](/azure/azure-fluid-relay/).
Scheduled meetings, one-on-one calls, group calls, meet now, and channel meeting
<details>
+<summary><b>Can I use Live Share for my tab outside of meetings?</b></summary>
+
+Yes! Live Share supports chat and channel content contexts, including configurable tabs, static tabs, and Collaborative Stageview for Microsoft Teams desktop and web clients. Personal apps aren't supported.
+
+> [!NOTE]
+> Microsoft Teams iOS and Android clients don't support Live Share sessions outside of meeting contexts.
+
+<br>
+
+</details>
+
+<details>
+ <summary><b>Will Live Share's media package work with DRM content?</b></summary>
-No. Teams currently doesn't support encrypted media for tab applications on desktop. Chrome, Edge, and mobile clients are supported. For more information, you can [track the issue here](https://github.com/microsoft/live-share-sdk/issues/14).
+Yes, DRM is supported in the new Teams desktop, web, iOS, and Android clients. It's not supported in Teams classic. To enable DRM encryption for Teams desktop, enable the `media` device permission in your app manifest.
<br>
To fix errors resulting from changes to `initialObjects` when testing locally in
If you plan to update your app with new `SharedObject`, `DataObject`, or `LiveDataObject` instances, you must consider how you deploy new schema changes to production. While the actual risk is relatively low and short lasting, there might be active sessions at the time you roll out the change. Existing users in the session shouldn't be impacted, but users joining that session after you deploy a breaking change might have issues connecting to the session. To mitigate this, consider some of the following solutions:
-* Use our experimental [Live Share Turbo](https://aka.ms/liveshareturbo) or [Live Share for React](https://aka.ms/livesharereact) packages.
-* Deploy schema changes for your web application outside of normal business hours.
-* Use `dynamicObjectTypes` for any changes made to your schema, rather than changing `initialObjects`.
+- Use our experimental [Live Share Turbo](https://aka.ms/liveshareturbo) or [Live Share for React](https://aka.ms/livesharereact) packages.
+- Deploy schema changes for your web application outside of normal business hours.
+- Use `dynamicObjectTypes` for any changes made to your schema, rather than changing `initialObjects`.
> [!NOTE]
-> Live Share does not currently support versioning your `ContainerSchema`, nor does it have any APIs dedicated to migrations.
+> Live Share doesn't support versioning your `ContainerSchema` and doesn't have any APIs dedicated to migrations.
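For instance, a sketch of the `dynamicObjectTypes` approach; the object names are illustrative:

```javascript
import { LivePresence, LiveState } from "@microsoft/live-share";
import { SharedMap } from "fluid-framework";

const schema = {
  // Keep the original initialObjects stable across releases
  initialObjects: {
    presence: LivePresence,
  },
  // Add newly introduced types here instead of changing initialObjects,
  // so clients running the old schema can still load the container
  dynamicObjectTypes: [SharedMap, LiveState],
};
```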
<br>
While there aren't any enforced limits, you must be mindful of how many messages
<details>
<summary><b>Is Live Share supported for Government Community Cloud (GCC), Government Community Cloud High (GCC-High), and Department of Defense (DOD) tenants?</b></summary>
-Live Share isn't supported for GCC, GCC-High, and DOD tenants.
+Live Share is only supported in Government Community Cloud (GCC) tenants.
<br>
No, Live Share doesn't support Teams Rooms devices.
</details>
+<details>
+<summary><b>Does Live Share support the Fluid Framework version 2 beta?</b></summary>
+
+Yes, Live Share supports Fluid Framework version `^2.0.0-rc` and later in preview. If you're interested in using these preview versions, update your Live Share packages to version `2.0.0-preview.0` or later.
+
+<br>
+
+</details>
## Have more questions or feedback?

Submit issues and feature requests to the SDK repository for [Live Share SDK](https://github.com/microsoft/live-share-sdk). Use the `live-share` and `microsoft-teams` tags to post how-to questions about the SDK at [Stack Overflow](https://stackoverflow.com/questions/tagged/live-share+microsoft-teams).

## See also
-* [Apps for Teams meetings](teams-apps-in-meetings.md)
-* [GitHub repository](https://github.com/microsoft/live-share-sdk)
-* [Live Share SDK reference docs](/javascript/api/@microsoft/live-share/)
-* [Live Share Media SDK reference docs](/javascript/api/@microsoft/live-share-media/)
-* [Use Fluid with Teams](../tabs/how-to/using-fluid-msteam.md)
+- [Apps for Teams meetings](teams-apps-in-meetings.md)
+- [GitHub repository](https://github.com/microsoft/live-share-sdk)
+- [Live Share SDK reference docs](/javascript/api/@microsoft/live-share/)
+- [Live Share Media SDK reference docs](/javascript/api/@microsoft/live-share-media/)
+- [Use Fluid with Teams](../tabs/how-to/using-fluid-msteam.md)
platform Teams Live Share Overview https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/apps-in-teams-meetings/teams-live-share-overview.md
ms.localizationpriority: high
Last updated 04/07/2022
+
# Live Share SDK
-Live Share is an SDK designed to transform Teams apps into collaborative multi-user experiences without writing any dedicated back-end code. With Live Share, your users can co-watch, co-create, and co-edit during meetings.
+
+Live Share is an SDK designed to transform Teams apps into collaborative multi-user experiences without writing any dedicated back-end code. With Live Share, your users can co-watch, co-create, and co-edit together in Microsoft Teams. Whether your users are presenting during a meeting or viewing content shared to a chat, Live Share securely connects them into a shared session with just a few lines of code.
Sometimes screen sharing just isn't enough, which is why Microsoft built tools like PowerPoint Live and Whiteboard directly into Teams. By bringing your web application directly to center stage in the meeting interface, your users can seamlessly collaborate during meetings and calls.
+Collaboration doesn't need to stop after meetings end, either. Live Share sessions work in chat and channel contexts, allowing your users to see who is viewing what content, follow one another, and more.
+ > [!div class="nextstepaction"] > [Get started](teams-live-share-quick-start.md)
Live Share seamlessly integrates meetings with [Fluid Framework](https://fluidfr
### Live Share core
-Live Share enables connecting to a special Fluid Container associated with each meeting in a few lines of code. In addition to the data structures provided by Fluid Framework, Live Share also supports a new set of DDS classes to simplify synchronizing app state in meetings.
+Live Share enables connecting to a special Fluid Container associated with each meeting, chat, or channel in a few lines of code. In addition to the data structures provided by Fluid Framework, Live Share also supports a new set of DDS classes to simplify synchronizing app state.
Features supported by the Live Share core package include:

-- Join a meeting's Live Share session with `LiveShareClient`.
-- Track meeting presence and synchronize user metadata with `LivePresence`.
+- Join a Live Share session with `LiveShareClient` for meetings, chats, or channels.
+- Track presence and synchronize user metadata with `LivePresence`.
- Coordinate app state that disappears when users leave the session with `LiveState`.
- Synchronize a countdown timer with `LiveTimer`.
- Send real-time events to other clients in the session with `LiveEvent`.
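To make this concrete, here's a minimal sketch of joining a session and tracking presence. It assumes the `LiveShareClient.joinContainer()` and `LivePresence` APIs listed above behave as described; the variable names and metadata payload are illustrative.

```javascript
import { LiveShareClient, LivePresence } from "@microsoft/live-share";
import { LiveShareHost } from "@microsoft/teams-js";

// Join the session's Fluid container with a LivePresence object in the schema
const host = LiveShareHost.create();
const client = new LiveShareClient(host);
const { container } = await client.joinContainer({
  initialObjects: { presence: LivePresence },
});
const presence = container.initialObjects.presence;

// React whenever a user joins, leaves, or updates their metadata
presence.on("presenceChanged", (user, isLocal) => {
  console.log(`${user.displayName ?? user.userId}: ${user.state}`);
});

// Start sending this client's presence, with optional custom metadata
await presence.initialize({ favoriteColor: "blue" });
```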
You can find more information about this package on the [Live Share media page](
:::image type="content" source="../assets/images/teams-live-share/Teams-live-share-schematics.png" alt-text="Screenshots shows an example of multiple users drawing on a canvas during a meeting.":::
-When collaborating in meetings, it's essential for users to be able to point out and emphasize content on the screen. Live Share canvas makes it easy to add inking, laser pointers, and cursors to your app for seamless collaboration.
+When collaborating in real time, it's essential for users to be able to point out and emphasize content on the screen. Live Share canvas makes it easy to add inking, laser pointers, and cursors to your app for seamless collaboration.
Features supported by Live Share canvas include:
Like other Azure services, Azure Fluid Relay is designed to tailor to your indiv
### Live Share hosted service
-Live Share provides a turn-key Azure Fluid Relay service backed by the security of Microsoft Teams meetings. Live Share containers are restricted to meeting participants, maintain tenant residency requirements, and can be accessed in a few lines of client code.
+Live Share provides a turn-key Azure Fluid Relay service backed by the security of Microsoft Teams. All sessions adhere to tenant data residency requirements, global regulations, and security commitments. In just a few lines of code, you can connect to Live Share containers that are accessible only to members of a meeting, chat, or channel.
# [JavaScript](#tab/javascript)
While most of you find it preferable to use our free hosted service, there are s
Consider using a custom service if you: -- Require storage of data in Fluid containers beyond the lifetime of a meeting.
+- Require long-term storage of data in Fluid containers.
- Transmit sensitive data through the service that requires a custom security policy. - Develop features through Fluid Framework, for example, `SharedMap`, for your application outside of Teams. For more information, see the custom Azure Fluid Relay service [how-to guide](./teams-live-share-how-to/how-to-custom-azure-fluid-relay.md).
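As a rough sketch of the custom-service option, assuming `LiveShareClient` accepts an Azure Fluid Relay connection config in its client options (the endpoint, tenant ID, and token provider below are placeholders; `InsecureTokenProvider` is suitable for local testing only):

```javascript
import { LiveShareClient } from "@microsoft/live-share";
import { LiveShareHost } from "@microsoft/teams-js";
import { InsecureTokenProvider } from "@fluidframework/test-client-utils";

// Placeholder values; use your own Azure Fluid Relay resource and a secure
// token provider in production.
const options = {
  connection: {
    type: "remote",
    tenantId: "<your-tenant-id>",
    endpoint: "https://us.fluidrelay.azure.com",
    tokenProvider: new InsecureTokenProvider("<your-tenant-key>", { id: "user" }),
  },
};
const client = new LiveShareClient(LiveShareHost.create(), options);
```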
+## Live Share collaborative contexts
+
+Live Share sessions enable seamless collaboration in meetings, chats, and channels. When you connect to a session through the `joinContainer()` API, Teams connects your user to the appropriate Fluid container. While you don't need to write any context-specific code, you should understand the differences in user scenarios for each tab surface.
+
+> [!NOTE]
+> Live Share sessions used across different contexts should connect to the same Fluid container. If you want to synchronize data differently across different contexts, you can create different distributed-data objects (DDS) for each context and only listen to changes for those that are relevant to your scenario.
+
+The Teams JavaScript SDK's `getContext()` API tells you which frame context your app is running in. You can use this to conditionally enable different features and UX patterns in your application for each context. Live Share supports the following `FrameContexts` values:
+
+- `meetingStage`
+- `sidePanel`
+- `content`
+
+The following example shows how you can add context-specific functionality to your application:
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+import { LiveShareClient, LiveFollowMode } from "@microsoft/live-share";
+import {
+ app,
+ liveShare,
+ LiveShareHost,
+ FrameContexts,
+} from "@microsoft/teams-js";
+
+// Check if Live Share is supported in the current host / context
+if (!liveShare.isSupported()) return;
+
+// Join the Fluid container for the current scope
+const host = LiveShareHost.create();
+// Named "client" to avoid clashing with the imported "liveShare" capability
+const client = new LiveShareClient(host);
+const schema = {
+ initialObjects: { followMode: LiveFollowMode },
+};
+const { container } = await client.joinContainer(schema);
+
+// Get teamsJs context
+const context = await app.getContext();
+switch (context.page?.frameContext) {
+ case FrameContexts.meetingStage: {
+ // Optimize app for meeting stage
+ // e.g., followMode.startPresenting()
+ break;
+ }
+ case FrameContexts.sidePanel: {
+ // Optimize app for meeting side panel
+ // e.g., provide simplified UX for selecting content
+ break;
+ }
+ case FrameContexts.content: {
+ // Optimize app for content
+ // e.g., hide presenter settings not appropriate for async contexts
+ break;
+ }
+ default: {
+ throw new Error("Received unexpected frameContext");
+ }
+}
+
+// ... ready to start app sync logic
+```
+
+# [TypeScript](#tab/typescript)
+
+```TypeScript
+import { LiveShareClient, LiveFollowMode } from "@microsoft/live-share";
+import { app, liveShare, LiveShareHost, FrameContexts } from "@microsoft/teams-js";
+import { ContainerSchema } from "fluid-framework";
+
+// Check if Live Share is supported in the current host / context
+if (!liveShare.isSupported()) return;
+
+// Join the Fluid container for the current scope
+const host = LiveShareHost.create();
+// Named "client" to avoid clashing with the imported "liveShare" capability
+const client = new LiveShareClient(host);
+const schema: ContainerSchema = {
+ initialObjects: { followMode: LiveFollowMode },
+};
+const { container } = await client.joinContainer(schema);
+
+// Get teamsJs context
+const context: app.Context = await app.getContext();
+switch(context.page?.frameContext) {
+ case FrameContexts.meetingStage: {
+ // Optimize app for meeting stage
+ // e.g., followMode.startPresenting()
+ break;
+ }
+ case FrameContexts.sidePanel: {
+ // Optimize app for meeting side panel
+ // e.g., provide simplified UX for selecting content
+ break;
+ }
+ case FrameContexts.content: {
+ // Optimize app for content
+ // e.g., hide presenter settings not appropriate for async contexts
+ break;
+ }
+ default: {
+ throw new Error("Received unexpected frameContext");
+ }
+}
+
+// ... ready to start app sync logic
+```
+++
+### Meeting contexts
++
+As mentioned earlier, there are two meeting contexts: `meetingStage` and `sidePanel`. The following sections explore how to optimize each of these contexts to enhance the user experience.
+
+#### Meeting stage
+
+The `meetingStage` context allows users to share your app content to the meeting stage for participants in the meeting. In this context, users typically expect to collaborate in realtime. Unlike when loading a collaborative app like Microsoft Loop or Word in a web browser, presenters usually expect to have more control of the experience. For example, in PowerPoint Live, presenters expect to have control over which PowerPoint slide is visible to attendees by default, even if attendees can choose to stop following them temporarily.
++
+Consider making the following optimizations for your `meetingStage` app:
+
+- Put the active presenter in control of the app, such as by controlling the camera position for all users viewing a 3D model.
+- Allow eligible users to take control of the app, such as taking control of media playback while co-watching a video.
+- Let users temporarily stop following the presenter, such as showing a "Sync to presenter" button when an attendee selects a different image in a slideshow.
+- Provide settings that give the presenter control, such as disabling the ability for other users to stop following them.
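As a sketch of the presenter-control pattern, reusing the `followMode` object from the earlier `joinContainer` example: `startPresenting()` appears in that example's comments, while `update()` and the `stateChanged` event are assumptions based on the preview `LiveFollowMode` API, and the element IDs are illustrative.

```javascript
const followMode = container.initialObjects.followMode;

// Presenter takes control; attendees follow the presenter's view by default
document.getElementById("present-button").onclick = async () => {
  await followMode.startPresenting();
};

// Broadcast the local user's view (e.g., the visible slide) to followers
async function onViewChanged(viewState) {
  await followMode.update(viewState);
}

// Render whichever view the current follow state points at
followMode.on("stateChanged", (state, local) => {
  // e.g., scroll to state.value while following the presenter
});
```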
+
+#### Meeting side panel
+
+The meeting `sidePanel` context allows users to pin your app as a tab in a meeting, alongside default tabs like chat. While any meeting participant may have the option to open a `sidePanel` tab, each user must open it individually. This makes it ideal for asynchronous scenarios during a meeting, such as searching for content to share to the meeting stage. While your users won't want to co-watch, co-create, or co-edit rich content from this surface, Live Share can still improve your `sidePanel` app.
++
+Consider making the following optimizations for your `sidePanel` app:
+
+- Companion experiences to the meeting stage, such as collaborative video or audio playlists.
+- Configuration settings prior to sharing content to the meeting stage, such as disabling the "take control" feature.
+- Performance optimizations, such as broadcasting new content once while sharing has already started, rather than reloading the application.
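For instance, a side panel can let users share the app to the meeting stage with the Teams JavaScript SDK's `meeting.shareAppContentToStage()` API. The URL below is a placeholder; it must be listed in your app manifest's `validDomains`.

```javascript
import { meeting } from "@microsoft/teams-js";

// Share the app's stage view URL to the meeting stage from the side panel
document.getElementById("share-button").onclick = () => {
  meeting.shareAppContentToStage((error, result) => {
    if (error) {
      console.error("Failed to share to stage:", error);
    }
  }, "https://contoso.example/stage");
};
```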
+
+### Content contexts
+
+The `content` context is designed for asynchronous consumption of your app's content. There are a couple of different surfaces for `content` contexts in chat and channels, including:
+
+- Chat and channel tabs
+- Collaborative stageview
+
+> [!NOTE]
+> The `content` context is also used for personal apps, which Live Share doesn't support. Live Share only supports `content` contexts on Teams desktop and web clients.
+
+#### Chat and channel tabs
++
+Chat and channel tabs allow users to pin your application to a chat or channel. A tab that supports both `sidePanel` and `content` uses the same pinned URL, but the use cases are usually quite different. For one, chat and channel tabs generally have more horizontal space to work with. As a best practice, allow users to search for content to "pin" to the tab. For example, teachers using a note app might pin notes containing educational resources for their students.
+
+While chat and channel tabs are most commonly used for asynchronous consumption, it's possible for your users to view the same content at the same time. When this happens, it's useful to keep content in sync to prevent data conflicts or duplication of work. Live Share allows you to show what content each user is viewing, what they're doing, and more. This can provide social incentives that draw users into app content, increasing engagement and collaboration. We call this "coincidental collaboration."
++
+Consider making the following optimizations for your `content` chat and channel tab:
+
+- Show which users are currently viewing content pinned to the tab, such as users actively viewing each whiteboard.
+- Nudge users to join an ongoing collaboration session, such as displaying a toast to join an ongoing standup for a task app.
+- Allow users to follow a specific user or group of users, such as by clicking on the avatar of another connected user they'd like to follow.
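A short sketch of the follow pattern, assuming the `followMode` object from the earlier example and its `followUser()` method (the click handler and `userId` source are illustrative):

```javascript
// userId comes from LivePresence data for the clicked avatar
async function onAvatarClicked(userId) {
  // Follow the selected user's view until the local user opts out
  await followMode.followUser(userId);
}
```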
+
+#### Collaborative stageview
++
+When users share your app's content with their colleagues in Teams, we recommend using [collaborative stageview](../tabs/open-content-in-stageview.md#collaborative-stageview). In this scenario, the shared content opens in a pop-out window with chat on the side, allowing users to engage with your content while continuing the conversation. Similar to chat and channel tabs, this content is primarily consumed asynchronously. However, when content is shared through an Adaptive Card, users are more likely to view it and chat with one another at the same time, increasing the need for collaborative features.
++
+Consider making the following optimizations for your collaborative stageview apps:
+
+- Show which users are currently viewing the content and what they are doing, such as displaying a user's avatar at the position they are at in a video.
+- Allow users to follow a specific user or group of users, such as by clicking on the avatar of another connected user they'd like to follow.
+- Facilitate ad-hoc communication, such as by enabling inking tools and laser pointers while in follow mode.
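For example, inking and laser pointers come from Live Share canvas. A minimal sketch, assuming a container whose schema includes a `LiveCanvas` named `liveCanvas` and a host `<div id="canvas-host">` element in the page:

```javascript
import { InkingManager, InkingTool } from "@microsoft/live-share-canvas";

// Attach a synchronized canvas over the shared content
const inkingManager = new InkingManager(document.getElementById("canvas-host"));
const liveCanvas = container.initialObjects.liveCanvas;
await liveCanvas.initialize(inkingManager);
inkingManager.activate();

// Switch to the laser pointer for ad-hoc pointing; strokes fade automatically
inkingManager.tool = InkingTool.laserPointer;
```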
+
## React integration

Live Share has a dedicated React integration, making Live Share features even easier to integrate into React apps. Rather than using `LiveShareClient` directly, you can use the `LiveShareProvider` component to join a Live Share session when the component first mounts. Each `LiveDataObject` has a corresponding React hook, designed to make using Live Share incredibly easy. For more information, see the Live Share for React [GitHub page](https://aka.ms/livesharereact).
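A minimal sketch of the React integration, assuming the `LiveShareProvider` component and `useLiveState` hook exported by `@microsoft/live-share-react` (the key string and counter UI are illustrative):

```javascript
import { LiveShareHost } from "@microsoft/teams-js";
import { LiveShareProvider, useLiveState } from "@microsoft/live-share-react";

const host = LiveShareHost.create();

export function App() {
  // Joins the Live Share session when the component first mounts
  return (
    <LiveShareProvider host={host} joinOnLoad>
      <SharedCounter />
    </LiveShareProvider>
  );
}

function SharedCounter() {
  // State synchronized across every client in the session
  const [count, setCount] = useLiveState("counter-key", 0);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}
```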
Live Share has a dedicated React integration, making Live Share features even ea
| :-- | :- | | During a marketing review, a user wants to collect feedback on their latest video edit. | User shares the video to the meeting stage and starts the video. As needed, the user pauses the video to discuss the scene and participants draw over parts of the screen to emphasize key points. | | A project manager plays Agile Poker with their team during planning. | Manager shares an Agile Poker app to the meeting stage that enables playing the planning game until the team has consensus. |
-| A financial advisor reviews PDF documents with clients before signing. | The financial advisor shares the PDF contract to the meeting stage. All attendees can see each other's cursors and highlighted text in the PDF, after which both parties sign the agreement. |
+| A financial advisor reviews PDF documents with clients before signing. | The financial advisor shares the PDF contract to the meeting stage. All attendees can see each other's cursors and highlighted text in the PDF, after which both parties sign the agreement. |
+| Engineers view a 3D model together. | An engineering team views a 3D model that was shared in chat. They can see each other's camera positions, edit the model, and follow each other. |
> [!IMPORTANT] > Live Share is licensed under the [Microsoft Live Share SDK License](https://github.com/microsoft/live-share-sdk/blob/main/LICENSE). To use these capabilities in your app, you must first read and agree to these terms.
platform How To Extend Copilot https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/archive/how-to-extend-copilot.md
Last updated 05/23/2023
> [!IMPORTANT] > This article is deprecated. For more information, see [Extend Microsoft Copilot for Microsoft 365](/microsoft-365-copilot/extensibility/).
-Microsoft 365 Copilot is powered by an advanced processing and orchestration engine that seamlessly integrates Microsoft 365 apps, Microsoft Graph, and large language models (LLMs) to turn your words into the most powerful productivity tool. While Copilot is already able to use the apps and data within the Microsoft 365 ecosystem, many users still depend on various external tools and services for work management and collaboration. You can address this gap by extending Copilot to enable users to work with their third-party tools and services, unlocking the full potential of Microsoft 365 Copilot.
+Microsoft 365 Copilot is powered by an advanced processing and orchestration engine that seamlessly integrates Microsoft 365 apps, Microsoft Graph, and Large Language Models (LLMs) to turn your words into the most powerful productivity tool. While Copilot is already able to use the apps and data within the Microsoft 365 ecosystem, many users still depend on various external tools and services for work management and collaboration. You can address this gap by extending Copilot to enable users to work with their third-party tools and services, unlocking the full potential of Microsoft 365 Copilot.
You can extend Microsoft 365 Copilot by building a plugin or by connecting to an external data source.
platform Assistants Api Quick Start https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/assistants-api-quick-start.md
+
+ Title: Assistants API quick start guide
+
+description: In this module, learn how to quickly try the Assistants API.
+
+ms.localizationpriority: high
+zone_pivot_groups: assistant-ai-library-quick-start
+ Last updated : 05/20/2024++
+# Quick start guide for using Assistants API with Teams AI library
+
+Get started using the OpenAI or Azure OpenAI Assistants API with the Teams AI library through the Math tutor assistant sample. This guide uses the OpenAI Code Interpreter tool to help you create an assistant that specializes in mathematics. The bot uses the gpt-3.5-turbo model to chat with Microsoft Teams users and respond in a polite and respectful manner, staying within the scope of the conversation.
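The sample expects an existing assistant ID in its environment file. As a hedged sketch, you could create a Code Interpreter-enabled math assistant once with the OpenAI Node SDK (v4 `beta.assistants.create`; the name and instructions are illustrative) and save the resulting ID:

```javascript
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });

// Create the assistant once, then store its ID (for example, ASSISTANT_ID in .env)
const assistant = await openai.beta.assistants.create({
  name: "Math Tutor",
  instructions:
    "You are a personal math tutor. Write and run code to answer math questions.",
  tools: [{ type: "code_interpreter" }],
  model: "gpt-3.5-turbo",
});
console.log(assistant.id);
```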
+
+## Prerequisites
+
+To get started, ensure that you have the following tools:
+
+| Install | For using... |
+| | |
+| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, or C# build environments. Use the latest version. |
+| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.|
+|[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. |
+| [Node.js](https://nodejs.org/en/download/) | Back-end JavaScript runtime environment. For more information, see [Node.js version compatibility table for project type](~/toolkit/build-environments.md#nodejs-version-compatibility-table-for-project-type).|
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.|
+| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Azure, you must create an Azure OpenAI service.|
+| [Microsoft&nbsp;Edge](https://www.microsoft.com/edge) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. |
+| [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). |
+
+<br/>
+If you've run the samples before or encountered a runtime error, follow these steps to start fresh:
+
+* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you.
+* If you don't want Teams Toolkit to generate the app ID and password, update the `MicrosoftAppId` and `MicrosoftAppPassword` in the `.env` file with your own values.
+* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts.
+
+Teams Toolkit automatically provisions `MicrosoftAppId` and `MicrosoftAppPassword` resources. If you want to use your own resources, you need to manually add them to the `.env` file. Teams Toolkit doesn't auto-generate the following resources:
+
+* An Azure OpenAI or OpenAI key
+* A database or similar storage options
+
+## Build and run the sample app
+
+Get started with the Teams AI library using the **Math tutor assistant** sample. It enables your computer's localhost to quickly execute a Teams AI library-based sample.
+
+1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/js/samples).
+
+1. Run the following command to clone the repository:
+
+ ```cmd
+ git clone https://github.com/microsoft/teams-ai.git
+ ```
+
+1. Go to **Visual Studio Code**.
+
+1. Select **File** > **Open Folder**.
+
+1. Go to the location where you cloned teams-ai repo and select the **teams-ai** folder.
+
+1. Select **Select Folder**.
+
+ :::image type="content" source="../../../assets/images/bots/ai-library-dot-net-select-folder.png" alt-text="Screenshot shows the teams-ai folder and the Select Folder option.":::
+
+1. Select **View** > **Terminal**. A terminal window opens.
+
+1. In the terminal window, run the following command to go to the **js** folder:
+
+ ```
+ cd .\js\
+ ```
+
+1. Run the following command to install dependencies:
+
+ ```terminal
+ yarn install
+ ```
+
+1. Run the following command to build dependencies:
+
+ ```terminal
+ yarn build
+ ```
+
+1. After the dependencies are installed, select **File** > **Open Folder**.
+
+1. Go to **teams-ai > js > samples > 04.ai-apps > d.assistants-mathBot** and select **Select Folder**. All the files for the Math tutor assistant sample are listed under the **EXPLORER** section in Visual Studio Code.
+
+1. Under **EXPLORER**, duplicate the `sample.env` file and rename the duplicate file to `.env`.
+
+1. Complete the following steps based on the AI service you select.
+
+ # [OpenAI key](#tab/OpenAI-key)
+
+    1. Go to the `env` folder and update the following code in the `./env/.env.local.user` file:
+
+ ```text
+ SECRET_OPENAI_KEY=<your OpenAI key>
+ ASSISTANT_ID=<your Assistant ID>
+ ```
+ 1. Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out:
+
+ ```bicep
+ // {
+ // name: 'AZURE_OPENAI_KEY'
+ // value: azureOpenAIKey
+ // }
+ // {
+ // name: 'AZURE_OPENAI_ENDPOINT'
+ // value: azureOpenAIEndpoint
+ // }
+ ```
+
+ # [Azure OpenAI](#tab/Azure-OpenAI)
+
+    1. Go to the `env` folder and update the following code in the `./env/.env.local.user` file:
+
+ ```text
+ SECRET_AZURE_OPENAI_KEY=<your Azure OpenAI key>
+ SECRET_AZURE_OPENAI_ENDPOINT=<your Azure OpenAI Endpoint>
+ ```
+
+    1. Go to the `teamsapp.local.yml` file and modify the last step to use Azure OpenAI variables:
+
+ ```yaml
+ - uses: file/createOrUpdateEnvironmentFile
+ with:
+ target: ./.env
+ envs:
+ BOT_ID: ${{BOT_ID}}
+ BOT_PASSWORD: ${{SECRET_BOT_PASSWORD}}
+ #OPENAI_KEY: ${{SECRET_OPENAI_KEY}}
+ AZURE_OPENAI_KEY: ${{SECRET_AZURE_OPENAI_KEY}}
+ AZURE_OPENAI_ENDPOINT: ${{SECRET_AZURE_OPENAI_ENDPOINT}}
+ ```
+
+ 1. Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out:
+
+ ```bicep
+ // {
+ // name: 'OPENAI_KEY'
+ // value: openAIKey
+ // }
+ ```
+
+ 1. Go to `infra` > `azure.parameters.json` and replace the lines from [20 to 25](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai-apps/d.assistants-mathBot/infra/azure.parameters.json#L20-L25) with the following code:
+
+ ```json
+ "azureOpenAIKey": {
+ "value": "${{SECRET_AZURE_OPENAI_KEY}}"
+ },
+ "azureOpenAIEndpoint": {
+ "value": "${{SECRET_AZURE_OPENAI_ENDPOINT}}"
+ }
+ ```
+
+
+1. Copy the sample to a new directory that isn't a subdirectory of `teams-ai`.
+
+1. From the left pane, select **Teams Toolkit**.
+
+1. Under **ACCOUNTS**, sign in to the following:
+
+ * **Microsoft 365 account**
+ * **Azure account**
+
+1. To debug your app, select the **F5** key.
+
+ A browser tab opens a Teams web client requesting to add the bot to your tenant.
+
+1. Select **Add**.
+
+ :::image type="content" source="../../../assets/images/bots/math-bot-sample-app-add.png" alt-text="Screenshot shows the option to add the app in Teams web client.":::
+
+ A chat window opens.
+
+1. In the message compose area, send a message to invoke the bot.
+
+ :::image type="content" source="../../../assets/images/bots/mathbot-output.png" alt-text="Screenshot shows an example of the mathbot output." lightbox="../../../assets/images/bots/mathbot-output.png":::
++
+> [!NOTE]
+> If you're building a bot for the first time, it's recommended to use the Teams Toolkit extension for Visual Studio Code. For more information, see [build your first bot app using JavaScript](../../../sbs-gs-bot.yml).
+
+## Additional tools
+
+You can also use the following tools to run and set up a sample:
+
+1. **Teams Toolkit CLI**: You can use the Teams Toolkit CLI to create and manage Microsoft Teams apps from the command line. For more information, see [Teams Toolkit CLI set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/TEAMS-TOOLKIT-CLI.md).
+
+1. **Bot Framework Emulator**: The [Bot Framework Emulator](https://github.com/microsoft/BotFramework-Emulator) is a desktop application that allows you to test and debug your bot locally. You can connect to your bot by entering the bot's endpoint URL and Microsoft app ID and password. You can then send messages to your bot and see its responses in real-time. For more information, see [Bot Framework Emulator set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/BOTFRAMEWORK-EMULATOR.md).
+
+1. **Manual setup**: If you prefer to set up your resources manually, you can do so by following the instructions provided by the respective services. For more information, see [manual set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/MANUAL-RESOURCE-SETUP.md).
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Assistants API](teams-conversation-ai-overview.md#assistants-api)
platform Conversation Ai Quick Start https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/conversation-ai-quick-start.md
Last updated 12/06/2022
# Teams AI library quick start guide
-Get started with Teams AI library using the Chef bot sample, which is designed to to help you cook apps using the Teams AI Library. The bot uses the gpt-3.5-turbo model to chat with Teams users and respond in a polite and respectful manner, staying within the scope of the conversation.
+Get started with the Teams AI library using the LightBot sample, which is designed to help you create apps that control lights, such as turning them on and off. The bot uses the gpt-3.5-turbo model to chat with Microsoft Teams users and respond in a polite and respectful manner, staying within the scope of the conversation.
+ ## Prerequisites
To get started, ensure that you have the following tools:
| Install | For using... | | | |
-| &nbsp; | &nbsp; |
-| [Visual Studio Code](https://code.visualstudio.com/download) or [Visual Studio](https://visualstudio.microsoft.com/downloads/) | JavaScript, TypeScript, or CSharp build environments. Use the latest version. |
+| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, and Python build environments. Use the latest version. |
| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.| |[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. | | [Node.js](https://nodejs.org/en/download/) | Back-end JavaScript runtime environment. For more information, see [Node.js version compatibility table for project type](~/toolkit/build-environments.md#nodejs-version-compatibility-table-for-project-type).|
-| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | Microsoft Teams to collaborate with everyone you work with through apps for chat, meetings, and call-all in one place.|
-| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Azure, you must create an Azure OpenAI service.|
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.|
+| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Microsoft Azure, you must create an Azure OpenAI service.|
| [Microsoft&nbsp;Edge](https://www.microsoft.com/edge) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. | | [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). | <br/>
-If you ran the samples before or encounter a runtime error, follow these steps to start fresh:
+If you've run the samples before or encountered a runtime error, follow these steps to start fresh:
* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you.
-* If you don't want Teams Toolkit to generate the appId and password, update the `MicrosoftAppId` and `MicrosoftAppPassword` in the `.env` file with your own values.
+* If you don't want Teams Toolkit to generate the app ID and password, update the `BOT_ID` and `BOT_PASSWORD` in the `.env` file with your own values.
* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts.
-Teams Toolkit automatically provisions`MicrosoftAppId` and `MicrosoftAppPassword` resources. If you want to use your own resources, you need to manually add them to the `.env` file. Teams Toolkit doesn't auto-generate the following resources:
+Teams Toolkit automatically provisions `BOT_ID` and `BOT_PASSWORD` resources. If you want to use your own resources, you need to manually add them to the `.env` file. Teams Toolkit doesn't auto-generate the following resources:
* An Azure OpenAI or OpenAI key * A database or similar storage options - ## Build and run the sample app
-Get started with Teams AI library using the **ChefBot** sample. It enables your computer's localhost to quickly execute a Teams AI library-based sample.
+Get started with Teams AI library using the LightBot sample. It enables your computer's localhost to quickly execute a Teams AI library-based sample.
1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/js/samples).
-1. Run the following command to clone the repository.
+1. Run the following command to clone the repository:
```cmd git clone https://github.com/microsoft/teams-ai.git
Get started with Teams AI library using the **ChefBot** sample. It enables your
1. Select **View** > **Terminal**. A terminal window opens.
-1. In the terminal window, run the following command to go to the **JS** folder:
+1. In the terminal window, run the following command to go to the **js** folder:
``` cd .\js\
Get started with Teams AI library using the **ChefBot** sample. It enables your
1. After the dependencies are installed, select **File** > **Open Folder**.
-1. Go to **teams-ai > js > samples> 04.ai.a.teamsChefBot** and select **Select Folder**. All the files for the chef bot sample are listed under the **EXPLORER** section in Visual Studio Code.
+1. Go to **teams-ai > js > samples> 03.ai-concepts> c.actionMapping-lightBot** and select **Select Folder**. All the files for the LightBot sample are listed under the **EXPLORER** section in Visual Studio Code.
-1. Under **EXPLORER**, duplicate the `sample.env` file and update the duplicate file to `.env`.
+1. Update the following steps based on the AI services you select.
-1. In the sample folder, update the following code in the `.env` configuration file:
+ # [OpenAI key](#tab/OpenAI-key)
- ```text
- OPENAI_KEY=<your OpenAI key>
+    1. Go to the `env` folder and update the following code in the `./env/.env.local.user` file:
- ```
+ ```text
+ SECRET_OPENAI_KEY=<your OpenAI key>
+ ```
+ 1. Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out:
+
+ ```bicep
+ // {
+ // name: 'AZURE_OPENAI_KEY'
+ // value: azureOpenAIKey
+ // }
+ // {
+ // name: 'AZURE_OPENAI_ENDPOINT'
+ // value: azureOpenAIEndpoint
+ // }
+ ```
+
+ # [Azure OpenAI](#tab/Azure-OpenAI)
+
+    1. Go to the `env` folder and update the following code in the `./env/.env.local.user` file:
+
+ ```text
+ SECRET_AZURE_OPENAI_KEY=<your Azure OpenAI key>
+ SECRET_AZURE_OPENAI_ENDPOINT=<your Azure OpenAI Endpoint>
+ ```
+
+ 1. Go to the `teamsapp.local.yml` file and modify the last step to use Azure OpenAI variables:
+
+ ```yaml
+ - uses: file/createOrUpdateEnvironmentFile
+ with:
+ target: ./.env
+ envs:
+ BOT_ID: ${{BOT_ID}}
+ BOT_PASSWORD: ${{SECRET_BOT_PASSWORD}}
+ #OPENAI_KEY: ${{SECRET_OPENAI_KEY}}
+ AZURE_OPENAI_KEY: ${{SECRET_AZURE_OPENAI_KEY}}
+ AZURE_OPENAI_ENDPOINT: ${{SECRET_AZURE_OPENAI_ENDPOINT}}
+ ```
+
+ 1. Go to the `infra` folder and ensure that the following lines in the `azure.bicep` file are commented out:
+
+ ```bicep
+ // {
+ // name: 'OPENAI_KEY'
+ // value: openAIKey
+ // }
+ ```
+
+ 1. Go to `infra` > `azure.parameters.json` and replace the lines from [20 to 22](https://github.com/microsoft/teams-ai/blob/main/js/samples/03.ai-concepts/c.actionMapping-lightBot/infra/azure.parameters.json#L20-L22) with the following code:
+
+ ```json
+ "azureOpenAIKey": {
+ "value": "${{SECRET_AZURE_OPENAI_KEY}}"
+ },
+ "azureOpenAIEndpoint": {
+ "value": "${{SECRET_AZURE_OPENAI_ENDPOINT}}"
+ }
+ ```
+
1. From the left pane, select **Teams Toolkit**.
-1. Under **ACCOUNTS**, sign in to the following:
+1. Under **ACCOUNTS**, sign in to the following:
* **Microsoft 365 account** * **Azure account**
Get started with Teams AI library using the **ChefBot** sample. It enables your
1. Select **Add**.
- :::image type="content" source="../../../assets/images/bots/Conversation-AI-sample-app-add.png" alt-text="Screenshot shows the option to add the app in Teams web client.":::
+ :::image type="content" source="../../../assets/images/bots/lightbot-add.png" alt-text="Screenshot shows adding the LightBot app.":::
A chat window opens. 1. In the message compose area, send a message to invoke the bot.
- :::image type="content" source="../../../assets/images/bots/conversation-AI-quick-start-final.png" alt-text="Screenshot shows an example of conversation with Teams chef bot in Teams.":::
-
-The bot uses the GPT turbo 3.5 model to chat with Teams users and respond in a polite and respectful manner, staying within the scope of the conversation.
+ :::image type="content" source="../../../assets/images/bots/lightbot-output.png" alt-text="Screenshot shows an example of the LightBot output." lightbox="../../../assets/images/bots/lightbot-output.png":::
> [!NOTE]
-> If you're building a bot for the first time, it's recommended to use Teams Toolkit extension for Visual Studio code to build a bot, see [Build your first bot app using JavaScript](../../../sbs-gs-bot.yml).
+> If you're building a bot for the first time, it's recommended to use the Teams Toolkit extension for Visual Studio Code. For more information, see [build your first bot app using JavaScript](../../../sbs-gs-bot.yml).
::: zone-end ::: zone pivot="qs-csharp"
+## Prerequisites
+
+To get started, ensure that you have the following tools:
+
+| Install | For using... |
+| | |
+| [Visual Studio](https://visualstudio.microsoft.com/downloads/) | C# build environments. Use the latest version. |
+| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.|
+|[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. |
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.|
+| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Microsoft Azure, you must create an Azure OpenAI service.|
+| [Microsoft&nbsp;Edge](https://www.microsoft.com/edge) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. |
+| [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). |
+
+<br/>
+If you've run the samples before or encountered a runtime error, follow these steps to start fresh:
+
+* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you.
+* If you don't want Teams Toolkit to generate the app ID and password, update the `MicrosoftAppId` and `MicrosoftAppPassword` in the `.env` file with your own values.
+* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts.
+
+Teams Toolkit automatically provisions `MicrosoftAppId` and `MicrosoftAppPassword` resources. If you want to use your own resources, you need to manually add them to the `.env` file. Teams Toolkit doesn't auto-generate the following resources:
+
+* An Azure OpenAI or OpenAI key
+* A database or similar storage options
+ ## Build and run the sample app
-1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/js/samples).
+1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples).
1. Clone the repository to test the sample app.
The bot uses the GPT turbo 3.5 model to chat with Teams users and respond in a p
cd teams-ai/dotnet ```
-1. Go to the folder where you cloned the repository and select **04.ai.a.teamsChefBot**.
-1. Select **TeamsChefBot.sln**. The solution opens in Visual Studio.
+1. Go to the folder where you cloned the repository and select **04.ai.c.actionMapping.lightBot**.
+
+1. Select **LightBot.sln**. The solution opens in Visual Studio.
1. In Visual Studio, update your OpenAI related settings in the `appsettings.Development.json` file. ```json
- "OpenAI": {
- "ApiKey": "<your-openai-api-key>"
+ "Azure": {
+ "OpenAIApiKey": "<your-azure-openai-api-key>",
+ "OpenAIEndpoint": "<your-azure-openai-endpoint>"
}, ```
+1. Go to `Prompts/sequence/skprompt.txt` and update the following code in the `skprompt.txt` file:
+
+ ```skprompt.txt
+ The following is a conversation with an AI assistant.
+ The assistant can turn a light on or off.
+ The assistant must return the following JSON structure:
+
+ {"type":"plan","commands":[{"type":"DO","action":"<name>","entities":{"<name>":<value>}},{"type":"SAY","response":"<response>"}]}
+
+ The following actions are supported:
+
+ - LightsOn
+ - LightsOff
+ - Pause time=<duration in ms>
+ - LightStatus
+
+ The lights are currently {{getLightStatus}}.
+
+ Always respond in the form of a JSON based plan. Stick with DO/SAY.
+ ```
+ 1. In the debug dropdown menu, select **Dev Tunnels** > **Create a Tunnel..**. :::image type="content" source="../../../assets/images/bots/dotnet-ai-library-dev-tunnel.png" alt-text="Screenshot shows an example of the Dev Tunnel and Create a Tunnel option in Visual Studio.":::
-1. Select the Account to use to create the tunnel. Azure, Microsoft Account (MSA), and GitHub are the account types that are supported. Update the following options:
+1. Select the **Account** to use to create the tunnel. Azure, Microsoft Account (MSA), and GitHub accounts are supported. Update the following options:
1. **Name**: Enter a name for the tunnel. 1. **Tunnel Type**: Select **Persistent** or **Temporary**. 1. **Access**: Select **Public**.
The bot uses the GPT turbo 3.5 model to chat with Teams users and respond in a p
The tunnel you created is listed under **Dev Tunnels > (name of the tunnel)**. 1. Go to **Solution Explorer** and select your project.
-1. Right-click the menu and select **Teams Toolkit** > **Prepare Teams App Dependencies**.
- :::image type="content" source="../../../assets/images/bots/dotnet-ai-library-prepare-teams-app.png" alt-text="Screenshot shows an example of the Prepare Teams app Dependencies option under Teams Toolkit section in Visual Studio.":::
+1. Right-click your project and select **Teams Toolkit** > **Prepare Teams App Dependencies**.
+
+ :::image type="content" source="../../../assets/images/bots/dotnet-ai-library-prepare-teams.png" alt-text="Screenshot shows an example of the Prepared Teams app Dependencies option under Teams Toolkit section in Visual Studio.":::
- If prompted, sign in to your Microsoft 365 account. You'll receive a message that Teams app is successfully prepared.
+ If prompted, sign in to your Microsoft 365 account. You receive a message that Teams app dependencies are successfully prepared.
1. Select **OK**. 1. Select **F5** or select **Debug** > **Start**.+ 1. Select **Add**. The app is added to Teams and a chat window opens.
- :::image type="content" source="../../../assets/images/bots/dotnet-ai-library-add-app.png" alt-text="Screenshot shows the add option to add the app to Microsoft Teams.":::
+ :::image type="content" source="../../../assets/images/bots/lightbot-add.png" alt-text="Screenshot shows adding the LightBot app.":::
1. In the message compose area, send a message to invoke the bot.
- :::image type="content" source="../../../assets/images/bots/dotnet-ai-library-invoke-chef-bot.png" alt-text="Screenshot shows an example of a chat window and a message from the chef bot as a reply to users message.":::
+ :::image type="content" source="../../../assets/images/bots/lightbot-output.png" alt-text="Screenshot shows an example of the LightBot output.":::
You can also deploy the samples to Azure using Teams Toolkit. To deploy, follow these steps: 1. In Visual Studio, go to **Solution Explorer** and select your project.
-1. Right-click the menu and select **Teams Toolkit** > **Provision in the Cloud**. Toolkit provisions your sample to Azure.
-1. Right-click the menu and select **Teams Toolkit** > **Deploy to the Cloud**.
+1. Right-click your project and select **Teams Toolkit** > **Provision in the Cloud**. Teams Toolkit provisions your sample to Azure.
+1. Right-click your project and select **Teams Toolkit** > **Deploy to the Cloud**.
+++
+## Prerequisites
+
+To get started, ensure that you have the following tools:
+
+| Install | For using... |
+| | |
+| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, and Python build environments. Use the latest version. |
+| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.|
+| [Python](https://www.python.org/) | Python is an interpreted and object-oriented programming language with dynamic semantics. Use a version between 3.8 and 4.0. |
+| [Poetry](https://python-poetry.org/docs/#installing-with-pipx) | Dependency management and packaging tool for Python.|
+| [Python VSCode Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) | Provides rich support for Python on VSCode. |
+|[Git](https://git-scm.com/downloads)|Git is a version control system that helps you manage different versions of code within a repository. |
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | To collaborate with everyone you work with through apps for chat, meetings, and calls, all in one place.|
+| [OpenAI](https://openai.com/api/) or [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's GPT. If you want to host your app or access resources in Microsoft Azure, you must create an Azure OpenAI service.|
+| [Microsoft&nbsp;Edge](https://www.microsoft.com/edge) (recommended) or [Google Chrome](https://www.google.com/chrome/) | A browser with developer tools. |
+| [Microsoft 365 developer account](/microsoftteams/platform/concepts/build-and-test/prepare-your-o365-tenant) | Access to Teams account with the appropriate permissions to install an app and [enable custom Teams apps and turn on custom app uploading](../../../concepts/build-and-test/prepare-your-o365-tenant.md#enable-custom-teams-apps-and-turn-on-custom-app-uploading). |
+
+<br/>
+If you've run the samples before or encountered a runtime error, follow these steps to start fresh:
+
+* Check all the `.env` and `env/.env.*.*` files in the sample and delete any automatically populated values to ensure that Teams Toolkit generates new resources for you.
+* If you don't want Teams Toolkit to generate the app ID and password, update the `BOT_ID` and `BOT_PASSWORD` in the `.env` file with your own values.
+* Remove values or leave the values blank for **SECRET_BOT_PASSWORD** and **TEAMS_APP_UPDATE_TIME** in the `.env` file to avoid conflicts.
+
+Teams Toolkit automatically provisions `BOT_ID` and `BOT_PASSWORD` resources. If you want to use your own resources, you need to manually add them to the `.env` file. Teams Toolkit doesn't auto-generate the following resources:
+
+* An Azure OpenAI or OpenAI key
+* A database or similar storage options
+
+## Build and run the sample app
+
+1. Go to the [sample](https://github.com/microsoft/teams-ai/tree/main/python/samples).
+
+1. Clone the repository to test the sample app.
+
+ ```
+ git clone https://github.com/microsoft/teams-ai.git
+ ```
+
+1. Go to the **python** folder.
+
+ ```
+ cd teams-ai/python
+ ```
+
+1. Go to the folder where you cloned the repository and select **04.ai.c.actionMapping.lightBot**. All the files for the LightBot sample are listed under the **EXPLORER** section in Visual Studio Code.
+
+1. Under **EXPLORER**, duplicate the **sample.env** file and rename the duplicate file to **.env**.
+
+ # [OpenAI key](#tab/OpenAI-key2)
+
+    Go to the `env` folder and update the following code in the `./env/.env.local.user` file:
+
+ ```text
+ SECRET_OPENAI_KEY=<your OpenAI key>
+
+ ```
+
+ # [Azure OpenAI](#tab/Azure-OpenAI2)
+
+    Go to the `env` folder and update the following code in the `./env/.env.local.user` file:
+
+ ```text
+ SECRET_AZURE_OPENAI_KEY=<your Azure OpenAI key>
+ SECRET_AZURE_OPENAI_ENDPOINT=<your Azure OpenAI Endpoint>
+
+ ```
+
+
+
+1. To install the following dependencies, go to **View** > **Terminal** and run the following commands:
+
+    |Dependency |Command |
+    | | |
+    | python-dotenv | `pip install python-dotenv` |
+    | load-dotenv | `pip install load-dotenv` |
+    | teams-ai | `pip install teams-ai` |
+    | botbuilder-core | `pip install botbuilder-core` |
+
+1. Update `config.json` and `bot.py` with your model deployment name.
+
+1. Go to **View** > **Command Palette...** or select **Ctrl+Shift+P**.
+
+1. Enter **Python: Create Environment** to create a virtual environment.
+
+1. To debug your app, select the **F5** key.
+
+ A browser tab opens a Teams web client requesting to add the bot to your tenant.
+
+1. Select **Add**.
+
+ :::image type="content" source="../../../assets/images/bots/lightbot-add.png" alt-text="Screenshot shows adding the LightBot app.":::
+
+ A chat window opens.
+
+1. In the message compose area, send a message to invoke the bot.
+
+ :::image type="content" source="../../../assets/images/bots/lightbot-output.png" alt-text="Screenshot shows an example of the LightBot output.":::
::: zone-end
You can also deploy the samples to Azure using Teams Toolkit. To deploy, follow
You can also use the following tools to run and set up a sample:
-1. **Teams Toolkit CLI**: You can use the Teams Toolkit CLI to create and manage Microsoft Teams apps from the command line. For more information, see [Teams Toolkit CLI set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/TEAMS-TOOLKIT-CLI.md).
+1. **Teams Toolkit CLI**: You can use the Teams Toolkit CLI to create and manage Teams apps from the command line. For more information, see [Teams Toolkit CLI set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/TEAMS-TOOLKIT-CLI.md).
-1. **Bot Framework Emulator**: The [Bot Framework Emulator](https://github.com/microsoft/BotFramework-Emulator) is a desktop application that allows you to test and debug your bot locally. You can connect to your bot by entering the bot's endpoint URL and Microsoft App ID and password. You can then send messages to your bot and see its responses in real-time. For more information, see [Bot Framework Emulator set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/BOTFRAMEWORK-EMULATOR.md).
+1. **Bot Framework Emulator**: The [Bot Framework Emulator](https://github.com/microsoft/BotFramework-Emulator) is a desktop application that allows you to test and debug your bot locally. You can connect to your bot by entering the bot's endpoint URL and Microsoft app ID and password. You can then send messages to your bot and see its responses in real-time. For more information, see [Bot Framework Emulator set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/BOTFRAMEWORK-EMULATOR.md).
1. **Manual setup**: If you prefer to set up your resources manually, you can do so by following the instructions provided by the respective services. For more information, see [manual set up instructions](https://github.com/microsoft/teams-ai/blob/main/getting-started/OTHER/MANUAL-RESOURCE-SETUP.md).
platform Coversational Ai Faq https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/coversational-ai-faq.md
Last updated 04/07/2022
<details> <summary>What does the Teams AI library do?</summary>
-Teams AI library provides abstractions for you to build robust applications that utilize OpenAI large language model (LLM)s.
+Teams AI library provides abstractions for you to build robust applications that utilize OpenAI Large Language Models (LLMs).
<br> </details> </br>
Teams AI library provides abstractions for you to build robust applications that
<details> <summary>Does Microsoft provide a hosted version of OpenAI models that are used by the AI library?</summary>
-No, you need to have your large language model (LLM)s, hosted in Azure OpenAI or elsewhere.
+No, you need to have your own Large Language Models (LLMs) hosted in Azure OpenAI or elsewhere.
<br> </details> </br>
No, you need to have your large language model (LLM)s, hosted in Azure OpenAI or
<details> <summary>Can we use the AI library with other large language models apart from OpenAI?</summary>
-Yes, it's possible to use Teams AI library with other large language model (LLM)s.
+Yes, it's possible to use Teams AI library with other Large Language Models (LLMs).
<br> </details> </br>
Yes, it's possible to use Teams AI library with other large language model (LLM)
<details> <summary>Does a developer need to do anything to benefit from LLMs? If yes, why?</summary>
-Yes, Teams AI library provides abstractions to simplify utilization of large language model (LLM)s in conversational applications. However, you (developer) must tweak the prompts, topic filters, and actions depending upon your scenarios.
+Yes, Teams AI library provides abstractions to simplify the utilization of Large Language Models (LLMs) in conversational applications. However, you, the developer, must tweak the prompts, topic filters, and actions depending on your scenarios.
<br> </details> </br>
platform How Conversation Ai Core Capabilities https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/how-conversation-ai-core-capabilities.md
Last updated 05/24/2023
Teams AI library supports JavaScript and is designed to simplify the process of building bots that can interact with Microsoft Teams, and facilitates the migration of existing bots. The AI library supports the migration of messaging capabilities, Message extension (ME) capabilities, and Adaptive Cards capabilities to the new format. It's also possible to upgrade existing Teams apps with these features.
-Earlier, you were using BotBuilder SDK directly to create bots for Microsoft Teams. Teams AI library is designed to facilitate the construction of bots that can interact with Microsoft Teams. While one of the key features of Teams AI library is the AI support that customers can utilize, the initial objective might be to upgrade their current bot without AI. After you upgrade, the bot can connect to AI or large language model (LLM) available in the AI library.
+Earlier, you were using BotBuilder SDK directly to create bots for Microsoft Teams. Teams AI library is designed to facilitate the construction of bots that can interact with Microsoft Teams. While one of the key features of Teams AI library is the AI support that customers can utilize, the initial objective might be to upgrade their current bot without AI. After you upgrade, the bot can connect to AI or Large Language Models (LLMs) available in the AI library.
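As a sketch of that wiring in JavaScript, assuming the library's v1 `Application`, `ActionPlanner`, `OpenAIModel`, and `PromptManager` APIs and a local `prompts` folder containing a `chat` prompt:

```javascript
import { Application, ActionPlanner, OpenAIModel, PromptManager } from "@microsoft/teams-ai";
import { MemoryStorage } from "botbuilder";
import path from "path";

// Connect the upgraded bot to an LLM through the AI library
const model = new OpenAIModel({
  apiKey: process.env.OPENAI_KEY,
  defaultModel: "gpt-3.5-turbo",
});
const prompts = new PromptManager({
  promptsFolder: path.join(__dirname, "./prompts"),
});
const planner = new ActionPlanner({ model, prompts, defaultPrompt: "chat" });

// The bot's turn logic now routes through the planner's LLM responses
const app = new Application({
  storage: new MemoryStorage(),
  ai: { planner },
});
```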
Teams AI library supports the following capabilities:
app.messageExtensions.selectItem(async (context: TurnContext, state: TurnState,
# [Python](#tab/python5)
-* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.b.messageExtensions.AI-ME)
+* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/02.messageExtensions.a.searchCommand)
-* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.b.messageExtensions.AI-ME/src/bot.py#L75)
+* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/02.messageExtensions.a.searchCommand/src/bot.py#L44)
```python
-# Implement Message Extension logic
-@app.message_extensions.fetch_task("CreatePost")
-async def create_post(context: TurnContext, _state: AppTurnState) -> TaskModuleTaskInfo:
- # Return card as a TaskInfo object
- card = create_initial_view()
- return create_task_info(card)
+@app.message_extensions.query("searchCmd")
+async def search_command(
+ _context: TurnContext, _state: AppTurnState, query: MessagingExtensionQuery
+) -> MessagingExtensionResult:
+ query_dict = query.as_dict()
+ search_query = ""
+ if query_dict["parameters"] is not None and len(query_dict["parameters"]) > 0:
+ for parameter in query_dict["parameters"]:
+ if parameter["name"] == "queryText":
+ search_query = parameter["value"]
+ break
+ count = query_dict["query_options"]["count"] if query_dict["query_options"]["count"] else 10
+ url = "http://registry.npmjs.com/-/v1/search?"
+ params = {"size": count, "text": search_query}
+
+ async with aiohttp.ClientSession() as session:
+ async with session.get(url, params=params) as response:
+ res = await response.json()
+
+ results: List[MessagingExtensionAttachment] = []
+
+ for obj in res["objects"]:
+ results.append(create_npm_search_result_card(result=obj["package"]))
+
+ return MessagingExtensionResult(
+ attachment_layout="list", attachments=results, type="result"
+ )
```
app.adaptiveCards.actionSubmit('StaticSubmit', async (context, _state, data: Sub
# [Python](#tab/python4)
-[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/packages/ai/teams/adaptive_cards/adaptive_cards.py#L129C1-L136C67)
+* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/03.adaptiveCards.a.typeAheadBot)
+
+* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/03.adaptiveCards.a.typeAheadBot/src/bot.py#L39C1-L78C1)
```python
-# Use this method as a decorator
-@app.adaptive_cards.action_submit("submit")
-async def execute_submit(context: TurnContext, state: TurnState, data: Any):
- print(f"Execute with data: {data}")
+@app.message(re.compile(r"static", re.IGNORECASE))
+async def static_card(context: TurnContext, _state: AppTurnState) -> bool:
+ attachment = create_static_search_card()
+ await context.send_activity(Activity(attachments=[attachment]))
return True
-# Pass a function to this method
-app.adaptive_cards.action_submit("submit")(execute_submit)
+@app.adaptive_cards.action_submit("StaticSubmit")
+async def on_static_submit(context: TurnContext, _state: AppTurnState, data) -> None:
+ await context.send_activity(f'Statically selected option is: {data["choiceSelect"]}')
+
+@app.adaptive_cards.action_submit("DynamicSubmit")
+async def on_dynamic_submit(context: TurnContext, _state: AppTurnState, data) -> None:
+ await context.send_activity(f'Dynamically selected option is: {data["choiceSelect"]}')
+
+@app.message(re.compile(r"dynamic", re.IGNORECASE))
+async def dynamic_card(context: TurnContext, _state: AppTurnState) -> bool:
+ attachment = create_dynamic_search_card()
+ await context.send_activity(Activity(attachments=[attachment]))
+ return True
```
app.messageExtensions.selectItem(async (context, state, item) => {
# [Python](#tab/python2)
-[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/packages/ai/teams/message_extensions/message_extensions.py#L68C1-L75C55)
+* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/02.messageExtensions.a.searchCommand)
-```python
-# Use this method as a decorator
-@app.message_extensions.query("test")
-async def on_query(context: TurnContext, state: TurnState, url: str):
- return MessagingExtensionResult()
+* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/02.messageExtensions.a.searchCommand/src/bot.py#L44)
-# Pass a function to this method
-app.message_extensions.query("test")(on_query)
+```python
+@app.message_extensions.query("searchCmd")
+async def search_command(
+ _context: TurnContext, _state: AppTurnState, query: MessagingExtensionQuery
+) -> MessagingExtensionResult:
+ query_dict = query.as_dict()
+ search_query = ""
+ if query_dict["parameters"] is not None and len(query_dict["parameters"]) > 0:
+ for parameter in query_dict["parameters"]:
+ if parameter["name"] == "queryText":
+ search_query = parameter["value"]
+ break
+ count = query_dict["query_options"]["count"] if query_dict["query_options"]["count"] else 10
+ url = "http://registry.npmjs.com/-/v1/search?"
+ params = {"size": count, "text": search_query}
+
+ async with aiohttp.ClientSession() as session:
+ async with session.get(url, params=params) as response:
+ res = await response.json()
+
+ results: List[MessagingExtensionAttachment] = []
+
+ for obj in res["objects"]:
+ results.append(create_npm_search_result_card(result=obj["package"]))
+
+ return MessagingExtensionResult(
+ attachment_layout="list", attachments=results, type="result"
+ )
+
+# Listen for item tap
+@app.message_extensions.select_item()
+async def select_item(_context: TurnContext, _state: AppTurnState, item: Any):
+ card = create_npm_package_card(item)
+
+ return MessagingExtensionResult(attachment_layout="list", attachments=[card], type="result")
```
All entities are required parameters for actions.
return `items removed. think about your next action`; }); ```+
+# [Python](#tab/python1)
+
+* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.d.chainedActions.listBot)
+
+* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.d.chainedActions.listBot/src/bot.py#L96C1-L123C57)
+
+```python
+@app.ai.action("addItems")
+async def on_add_items(
+ context: ActionTurnContext[Dict[str, Any]],
+ state: AppTurnState,
+):
+ parameters = ListAndItems.from_dict(context.data, infer_missing=True)
+ state.ensure_list_exists(parameters.list)
+ items = state.conversation.lists[parameters.list]
+ if parameters.items is not None:
+ for item in parameters.items:
+ items.append(item)
+ state.conversation.lists[parameters.list] = items
+ return "items added. think about your next action"
+
+@app.ai.action("removeItems")
+async def on_remove_items(
+ context: ActionTurnContext[Dict[str, Any]],
+ state: AppTurnState,
+):
+ parameters = ListAndItems.from_dict(context.data, infer_missing=True)
+ state.ensure_list_exists(parameters.list)
+ items = state.conversation.lists[parameters.list]
+ if parameters.items is not None and len(parameters.items) > 0:
+ for item in parameters.items:
+ if item in items:
+ items.remove(item)
+ state.conversation.lists[parameters.list] = items
+ return "items removed. think about your next action"
+```
+ ## Next step
platform Teams Conversation Ai Overview https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/teams-conversation-ai-overview.md
Last updated 02/12/2024
# Teams AI library
-Teams AI library is a Teams-centric interface to GPT-based common language models and user intent engines which, moderates the need for you to take on complex and expensive tasks of writing and maintaining conversational bot logic to integrate with large language models (LLMs).
+Teams AI library is a Teams-centric interface to GPT-based common language models and user intent engines, which moderates the need for you to take on complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs).
:::image type="content" border="false" source="../../../assets/images/bots/teams-ai-library.png" alt-text="Visual representation of a user input and a bot response."lightbox="../../../assets/images/bots/teams-ai-library.png":::
Although Teams AI library is built to use OpenAI's GPT model, you have the f
Teams AI library allows you to create ethical and responsible conversational apps by:
-- Moderation hooks: To regulate bot responses against any moderation API.
-- Conversation sweeping: To monitor conversations and intervene when the conversation goes astray through proactive detection and remediation.
-- Feedback loops: To evaluate the performance of the bot for high quality conversations and enhance user experience.
+* Moderation hooks: To regulate bot responses against any moderation API.
+* Conversation sweeping: To monitor conversations and intervene when the conversation goes astray through proactive detection and remediation.
+* Feedback loops: To evaluate the performance of the bot for high-quality conversations and enhance the user experience.
Teams AI library offers support for scenarios ranging from low-code to complex. The library extends capabilities with AI constructs to build natural language modeling, scenario-specific user intent, personalization, and automated context-aware conversations.
The bot framework using Teams AI library requires the following:
Action Planner is the main component calling your Large Language Model (LLM) and includes several features to enhance and customize your model. Model plugins simplify configuring your selected LLM for the planner, and the library ships with an OpenAIModel that supports both OpenAI and Azure OpenAI LLMs. Additional plugins for other models like Llama-2 can easily be added, giving you the flexibility to choose the model that's best for your use case. An internal feedback loop increases reliability by fixing subpar responses from the LLM.
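To make this concrete, here's a minimal sketch of wiring a model and prompts into the Action Planner, following the pattern used in the library's Python samples; the exact option names can vary by library version, so treat it as illustrative rather than canonical:

```python
# A sketch of Action Planner setup; the key, model name, and prompts folder are placeholders.
from teams.ai.models import OpenAIModel, OpenAIModelOptions
from teams.ai.planners import ActionPlanner, ActionPlannerOptions
from teams.ai.prompts import PromptManager, PromptManagerOptions

model = OpenAIModel(
    OpenAIModelOptions(api_key="<your-openai-key>", default_model="gpt-3.5-turbo")
)
prompts = PromptManager(PromptManagerOptions(prompts_folder="./prompts"))
planner = ActionPlanner(
    ActionPlannerOptions(model=model, prompts=prompts, default_prompt="chat")
)
```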
+## Assistants API
+
+> [!NOTE]
+> Teams AI library supports both OpenAI and Azure OpenAI Assistants API in [public developer preview](~/resources/dev-preview/developer-preview-intro.md) for you to get started with building intelligent assistants.
+
+Assistants API allows you to create powerful AI assistants capable of performing a variety of tasks that are difficult to code using traditional methods. It provides programmatic access to OpenAI's GPT system for tasks ranging from chat to image processing, audio processing, and building custom assistants. The API supports natural language interaction, enabling the development of assistants that can understand and respond in a conversational manner.
+
+Follow the [quick start guide](assistants-api-quick-start.md) to create an assistant that specializes in mathematics.
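+
+A minimal sketch of creating an Assistants API planner in Python, following the math tutor sample's pattern, might look like the following; the class and option names are assumptions to verify against the quickstart:
+
+```python
+# Hedged sketch: an Assistants API planner; the key and assistant ID are placeholders.
+from teams.ai.planners import AssistantsPlanner, OpenAIAssistantsOptions
+
+planner = AssistantsPlanner(
+    OpenAIAssistantsOptions(
+        api_key="<your-openai-key>",
+        assistant_id="<your-assistant-id>",
+    )
+)
+```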
+ ## Prompt management

Dynamic prompt management is a feature of the AI system that allows it to adjust the size and content of the prompt that is sent to the language model, based on the available token budget and the data sources or augmentations. It can improve the efficiency and accuracy of the model by ensuring that the prompt doesn't exceed the context window or include irrelevant information.
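The idea can be illustrated with a simple token-budget trim. This is a hypothetical sketch of the concept rather than the library's implementation, and the whitespace word count stands in for a real tokenizer:

```python
# Illustrative only: keep the most recent turns that still fit the token budget.
def count_tokens(text: str) -> int:
    return len(text.split())  # stand-in for a real tokenizer

def fit_to_budget(system_prompt: str, history: list[str], budget: int) -> str:
    used = count_tokens(system_prompt)
    kept: list[str] = []
    for turn in reversed(history):  # walk from the newest turn backwards
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.insert(0, turn)
        used += cost
    return "\n".join([system_prompt, *kept])
```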
The following table lists the updates to the Teams AI library:
| Sample name | Description | .NET | Node.js | Python |
| --- | --- | --- | --- | --- |
| Echo bot | This sample shows how to incorporate a basic conversational flow into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/01.messaging.echoBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/01.getting-started/a.echoBot) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/01.messaging.a.echoBot) |
-| Search command message extension | This sample shows how to incorporate a basic Message Extension app into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/a.messageExtensions.searchCommand) |
-| Typeahead bot | This sample shows how to incorporate the typeahead search functionality in Adaptive Cards into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/b.adaptiveCards.typeAheadBot) |
+| Search command message extension | This sample shows how to incorporate a basic Message Extension app into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/a.messageExtensions.searchCommand) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/02.messageExtensions.a.searchCommand)|
+| Typeahead bot | This sample shows how to incorporate the typeahead search functionality in Adaptive Cards into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.teams-features/b.adaptiveCards.typeAheadBot) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/03.adaptiveCards.a.typeAheadBot)|
| Conversational bot with AI: Teams chef | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot is built to allow GPT to facilitate the conversation on its behalf, using only a natural language prompt file to guide it. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.a.teamsChefBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/a.teamsChefBot) |
| Message extensions: GPT-ME | This sample is a message extension (ME) for Microsoft Teams that uses the text-davinci-003 model to help users generate and update posts. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.b.messageExtensions.gptME) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/b.AI-messageExtensions) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.b.messageExtensions.AI-ME) |
| Light bot | This sample illustrates more complex conversational bot behavior in Microsoft Teams. The bot is built to allow GPT to facilitate the conversation on its behalf and manually defined responses, and maps user intents to user defined actions. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.c.actionMapping.lightBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/c.actionMapping-lightBot) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.c.actionMapping.lightBot) |
-| List bot | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot harnesses the power of AI to simplify your workflow and bring order to your daily tasks and showcases the action chaining capabilities. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.d.chainedActions.listBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/d.chainedActions-listBot) |
-| DevOps bot | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot uses the gpt-3.5-turbo model to chat with Teams users and perform DevOps action such as create, update, triage and summarize work items. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.e.chainedActions.devOpsBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/b.devOpsBot) |
-| Twenty questions | This sample shows showcases the incredible capabilities of language models and the concept of user intent. Challenge your skills as the human player and try to guess a secret within 20 questions, while the AI-powered bot answers your queries about the secret. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.e.twentyQuestions) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/a.twentyQuestions) |
-| Math tutor assistant | This example shows how to create a basic conversational experience using OpenAI's Assistants APIs. It uses OpenAI's Code Interpreter tool to create an assistant that's an expert on math. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/06.assistants.a.mathBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/d.assistants-mathBot) |
-| Food ordering assistant | This example shows how to create a conversational assistant that uses tools to call actions in your bots code. It's a food ordering assistant for a fictional restaurant called The Pub and is capable of complex interactions with the user as it takes their order. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/06.assistants.b.orderBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/e.assistants-orderBot) |
+| List bot | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot harnesses the power of AI to simplify your workflow, bring order to your daily tasks, and showcase the action chaining capabilities. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.d.chainedActions.listBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/d.chainedActions-listBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.d.chainedActions.listBot)|
+| DevOps bot | This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot uses the gpt-3.5-turbo model to chat with Teams users and perform DevOps actions such as creating, updating, triaging, and summarizing work items. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.e.chainedActions.devOpsBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/b.devOpsBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.e.chainedActions.devOpsBot)|
+| Twenty questions | This sample showcases the incredible capabilities of language models and the concept of user intent. Challenge your skills as the human player and try to guess a secret within 20 questions, while the AI-powered bot answers your queries about the secret. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.e.twentyQuestions) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.ai-concepts/a.twentyQuestions) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.a.twentyQuestions)|
+| Math tutor assistant | This example shows how to create a basic conversational experience using OpenAI's Assistants APIs. It uses OpenAI's Code Interpreter tool to create an assistant that's an expert on math. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/06.assistants.a.mathBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/d.assistants-mathBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/06.assistants.a.mathBot)|
+| Food ordering assistant | This example shows how to create a conversational assistant that uses tools to call actions in your bot's code. It's a food ordering assistant for a fictional restaurant called The Pub and is capable of complex interactions with the user as it takes their order. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/06.assistants.b.orderBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai-apps/e.assistants-orderBot) |[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/06.assistants.b.orderBot)|
## Next step
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Teams AI library capabilities](how-conversation-ai-core-capabilities.md)-
platform Locally With An Ide https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/debug/locally-with-an-ide.md
ngrok http <port> --host-header=localhost:<port>
Use the https endpoint provided by ngrok in your [app manifest](../../../resources/schem).

> [!NOTE]
-> If you close your command window and restart, a new URL is generated and you need to update your bot endpoint address to use it.
+>
+> * If you close your command window and restart, a new URL is generated and you need to update your bot endpoint address to use it.
+> * Bots built through Microsoft Bot Framework must be accessible through the https endpoint; however, the endpoint isn't exposed. The endpoint is linked only between Bot Framework and your internal address.
## Test your bot without uploading to Teams
platform Apps Package https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/concepts/build-and-test/apps-package.md
To distribute your Microsoft Teams app, you need to zip the files in the app pac
## Teams doesn't host your app
-When a user installs your app in Teams, they install an app package that contains only a configuration file (also known as an app manifest) and your app's icons. The app's logic and data storage are hosted elsewhere, such as on localhost during development and Azure Web Services. Teams accesses these resources via HTTPS.
+When a user installs your app in Teams, they install an app package that contains only a configuration file (also known as an app manifest) and your app's icons. The app's logic and data storage are hosted elsewhere, such as on localhost during development and Microsoft Azure for production. Teams accesses these resources via HTTPS.
:::image type="content" source="../../assets/images/teams-app-host.png" alt-text="Illustration showing app hosting for Teams app":::
platform Tool Sdk Overview https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/concepts/build-and-test/tool-sdk-overview.md
The following flow diagram explains the different SDKs, libraries, and its relat
| -- | -- | -- | | [Bot Framework SDK](/azure/bot-service/bot-service-overview) | Microsoft Bot Framework and Azure AI Bot Service are a collection of libraries, tools, and services that enable you to build, test, deploy, and manage intelligent bots. The Bot Framework includes a modular and extensible SDK for building bots and connecting to AI services. | :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Azure Bot Service**. | | [Microsoft Graph SDKs](/graph/sdks/sdks-overview) | The Microsoft Graph SDKs are designed to simplify the creation of high-quality, efficient, and resilient applications that access Microsoft Graph. The SDKs include two components such as service library and core library. | :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Microsoft Graph**. |
-| [Teams AI library](../../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) | Teams AI library is a Teams-centric interface to GPT-based common language models and user intent engines. This reduces the requirement for you to handle on complex and expensive tasks of writing and maintaining conversational bot logic to integrate with large language models (LLMs). | :::image type="icon" source="../../assets/icons/blue-dot.png" border="false"::: Depends on **Bot Framework SDK**. </br> :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Azure OpenAI**. |
+| [Teams AI library](../../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) | Teams AI library is a Teams-centric interface to GPT-based common language models and user intent engines. This reduces the need for you to handle the complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs). | :::image type="icon" source="../../assets/icons/blue-dot.png" border="false"::: Depends on **Bot Framework SDK**. </br> :::image type="icon" source="../../assets/icons/grey-dot.png" border="false"::: Based on **Azure OpenAI**. |
### Additional libraries and UI utilities to build Teams apps
platform Feedback https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/feedback.md
Use **Azure Admin Center** or **Microsoft 365 Admin Center** for any **business-
Teams community of developers use Stack Overflow and Microsoft Q&A to connect with other developers to ideate, get clarifications, and submit queries. In addition, you can also use other contacts or sites, depending on the type of community help required to raise issues, limitations, and general questions.
+📢 Read our latest [announcements](https://aka.ms/M365PlatformAnnouncement) and join the conversation with community members and platform engineers!
+ ### Developer community forums

Post your questions and help other community members by sharing and responding to Teams App Development questions.
platform Glossary https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/get-started/glossary.md
Common terms and definitions used in Microsoft Teams developer documentation.
| [Content URL](../resources/schem#statictabs)| An app manifest property (`contentUrl`) where the HTTPS URL points to the entity UI to be displayed in the Teams canvas. <br> **See also**: [App manifest](#a)| | Conversation | A series of messages sent between your Microsoft Teams app (tab or bot) and one or more users. A conversation can have three scopes: channel, personal, and group chat. <br>**See also**: [One-on-one chat](#o); [Group chat](#g); [Channel](#c) | | [Conversational bot](../bots/how-to/conversations/conversation-messages.md) | It lets a user interact with your web service using text, interactive cards, and dialogs. <br>**See also** [Chat bot](#c); [Standalone app](#s) |
-| [Copilot](../messaging-extensions/build-bot-based-plugin.md)|Microsoft 365 Copilot is powered by an advanced processing and orchestration engine that seamlessly integrates Microsoft 365 apps, Microsoft Graph, and large language models (LLMs) to turn your words into the most powerful productivity tool. |
+| [Copilot](../messaging-extensions/build-bot-based-plugin.md)|Microsoft 365 Copilot is powered by an advanced processing and orchestration engine that seamlessly integrates Microsoft 365 apps, Microsoft Graph, and Large Language Models (LLMs) to turn your words into the most powerful productivity tool. |
| Customer-owned apps | An app created by you or your organization that is meant for use by other Teams app users outside the organization. It can be made available on Microsoft Teams Store. <br> **See also**: [Teams Store validation guidelines](#s); [Microsoft Store](#s); [LOB apps](#l); [Personal tab](#p); [Shared apps](#s) | | Custom app built for your org (LOB app) | An app created only for Teams by you or your organization. | | [Custom app upload](../toolkit/publish.md#publish-to-individual-scope-or-custom-app-upload-permission) | A process where a Teams app is loaded to the Teams client to test it in the Teams environment before distributing it. |
Common terms and definitions used in Microsoft Teams developer documentation.
| [Task info](../task-modules-and-cards/task-modules/invoking-task-modules.md#dialoginfo-object) | The `TaskInfo` object contains the metadata for a dialogs (referred as task modules in TeamsJS v.1.0).| | [Thread discussion](../tabs/design/tabs.md#thread-discussion) | A conversation posted on a channel or chat between users. <br>**See also** [Conversation](#c); [Channel](#c) | | [Teams](../overview.md) | Microsoft Teams is the ultimate message app for your organization. It's a workspace for real-time collaboration and communication, meetings, file and app sharing. |
-| [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) | A Teams-centric interface to GPT-based common language models and user intent engines. You can take on complex and expensive tasks of writing and maintaining conversational bot logic to integrate with large language models (LLMs).|
+| [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) | A Teams-centric interface to GPT-based common language models and user intent engines. It moderates the need for you to take on the complex and expensive tasks of writing and maintaining conversational bot logic to integrate with Large Language Models (LLMs).|
| [Teams identity](../tabs/how-to/authentication/tab-sso-overview.md) | The Microsoft account or Microsoft 365 account of an app user that is used to sign in to Teams client, web, or mobile app. |
| [Teams Toolkit](../toolkit/teams-toolkit-fundamentals.md) | The Microsoft Teams Toolkit enables you to create custom Teams apps directly within the VS Code environment. |
platform Build Bot Based Message Extension https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/messaging-extensions/build-bot-based-message-extension.md
You must add the following parameters to your `composeExtensions.commands` array
| `id` | Unique ID that you assign to search command. The user request includes this ID. | Yes | 1.0 | | `title` |Command name. This value appears in the user interface (UI). | Yes | 1.0 | | `description` | Help text indicating what this command does. This value appears in the UI. | Yes | 1.0 |
-|`semanticDescription`|Semantic description of the command for consumption by the large language model.|No|1.17|
+|`semanticDescription`|Semantic description of the command for consumption by Large Language Models (LLMs).|No|1.17|
| `type` | Type of command. Default is `query`. | No | 1.4 | |`initialRun` | If this property is set to **true**, it indicates this command should be executed as soon as the user selects this command in the UI. | No | 1.0 | | `context` | Optional array of values that defines the context the search action is available in. The possible values are `message`, `compose`, or `commandBox`. The default is `compose`,`commandBox`. | No | 1.5 |
You must add the following search parameter details that define the text visible
| `parameters` | Defines a static list of parameters for the command. | No | 1.0 | | `parameter.name` | Describes the name of the parameter. The `parameter.name` is sent to your service in the user request. | Yes | 1.0 | | `parameter.description` | Describes the parameter's purposes or example of the value that must be provided. This value appears in the UI. | Yes | 1.0 |
-|`parameter.semanticDescription`|Semantic description of the parameter for consumption by the large language model.|No|1.17|
+|`parameter.semanticDescription`|Semantic description of the parameter for consumption by Large Language Models (LLMs).|No|1.17|
| `parameter.title` | Short user-friendly parameter title or label. | Yes | 1.0 | | `parameter.inputType` | Set to the type of the input required. Possible values include `text`, `textarea`, `number`, `date`, `time`, `toggle`. Default is set to `text`. | No | 1.4 | | `parameters.value` | Initial value for the parameter. Currently the value isn't supported | No | 1.5 |
platform Build Bot Based Plugin https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/messaging-extensions/build-bot-based-plugin.md
Last updated 11/14/2023
> * Bot-based search message extension plugin is available in [**public developer preview**](../resources/dev-preview/developer-preview-intro.md). > * Only *bot-based* message extensions with *search* commands can be extended as plugins for Copilot for Microsoft 365.
-Microsoft Copilot for Microsoft 365, powered by an advanced processing and orchestration engine, integrates Microsoft 365 apps, Microsoft Graph, and large language models (LLMs) to transform your words into a potent productivity tool. Although Copilot for Microsoft 365 can utilize apps and data within the Microsoft 365 ecosystem, many users rely on various external tools and services for work management and collaboration. By extending your message extension as a plugin in Copilot for Microsoft 365, you can enable users to interact with third-party tools and services, therefore empowering them to achieve more with Copilot for Microsoft 365. You can achieve this extension by developing a plugin or connecting to an external data source.
+Microsoft Copilot for Microsoft 365, powered by an advanced processing and orchestration engine, integrates Microsoft 365 apps, Microsoft Graph, and Large Language Models (LLMs) to transform your words into a potent productivity tool. Although Copilot for Microsoft 365 can utilize apps and data within the Microsoft 365 ecosystem, many users rely on various external tools and services for work management and collaboration. By extending your message extension as a plugin in Copilot for Microsoft 365, you can enable users to interact with third-party tools and services, therefore empowering them to achieve more with Copilot for Microsoft 365. You can achieve this extension by developing a plugin or connecting to an external data source.
:::image type="content" source="../assets/images/Copilot/ailib-copilot-diff.png" alt-text="Graphic shows the user interaction flow between the user, Microsoft Teams and M365 Chat." lightbox="../assets/images/Copilot/ailib-copilot-diff.png":::
platform Manifest Schema Dev Preview https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/resources/schema/manifest-schema-dev-preview.md
The app manifest describes how the app integrates into the Microsoft Teams platf
"id": "AAD App ID", "resource": "Resource URL for acquiring auth token for SSO" },
+ "showLoadingIndicator": false,
+ "isFullScreen": false,
+ "defaultBlockUntilAdminAction": false,
+ "publisherDocsUrl": "https://contoso.com/teamtabapp/admin-doc",
+ "scopeConstraints": {
+ "teams": [
+ { "id": "%TEAMS-THREAD-ID" }
+ ],
+ "groupChats": [
+ { "id": "%GROUP-CHATS-THREAD-ID" }
+ ]
+ },
"authorization": { "permissions": { "resourceSpecific": [
The app manifest describes how the app integrates into the Microsoft Teams platf
] } },
+"actions": [
+ {
+ "id": "addTodoTask",
+ "displayName": "Add ToDo task",
+ "intent": "addTo",
+ "description": "Add this file with a short note to my to do list",
+ "handlers": [
+ {
+ "type": "openPage",
+ "supportedObjects": {
+ "file": {
+ "extensions": [
+ "doc",
+ "pdf"
+ ]
+ }
+ },
+ "pageInfo": {
+ "pageId": "newTaskPage",
+ "subPageId": ""
+ }
+ }
+ ]
+  }
+ ],
"configurableProperties": [ "name", "shortDescription",
The app manifest describes how the app integrates into the Microsoft Teams platf
"privacyUrl", "termsOfUseUrl" ],
+ "supportedChannelTypes": [
+ "sharedChannels",
+ "privateChannels"
+ ],
"defaultInstallScope": "meetings", "defaultGroupCapability": { "meetings": "tab",
Specifies information about the developer and their business. For Teams Store ap
**Optional** &ndash; Object
-Your contact information that is used by customers to contact you through Teams chat or email. Customers may need extra information when evaluating your app or if they have any queries about your app when it doesn't work. Customers can contact you using Teams chat, so request your IT admins to [enable external communications](/microsoftteams/communicate-with-users-from-other-organizations) in your organization. For more information, see [developer provided app and contact information](/MicrosoftTeams/manage-apps#developer-provided-app-information-support-and-documentation).
+Your contact information that is used by customers to contact you through Teams chat or email. Customers may need extra information when evaluating your app or if they have any queries about your app when it doesn't work. Customers can contact you using Teams chat, so request your IT admins to [enable external communications](/microsoftteams/communicate-with-users-from-other-organizations) in your organization. For more information, see [developer provided app and contact information](/MicrosoftTeams/manage-apps#developer-provided-app-information-support-and-documentation).
> [!Note] > You must provide only one contact email address.
-We recommend triaging your customer queries in a timely manner and route those internally within your organization based on the queries shared by the customers. It helps improve app adoption, developer trust, and revenue if you monetize your app.
+We recommend triaging your customer queries in a timely manner and routing them internally within your organization, say to other functions, to get the answers. It helps improve app adoption, builds developer trust, and increases revenue if you monetize the app.
-| Name | Type | Maximum size | Required | Description |
-|||--|-|-|
-|`defaultsupport`|Object||✔️| The default contact information for your app.|
-|`defaultsupport.userEmailsForChatSupport`|Array|10|✔️|Email address to receive customer queries using Teams chat. While the app manifest allows up to 10 email addresses, only the first email is considered for routing. The object is an array with all elements of the type string. The maximum length of email is 80 characters. |
+| Name | Type | Maximum size | Required | Description |
+|-|--|--|-|-|
+|`defaultsupport`|Object| |✔️|The default contact information for your app.|
+|`defaultsupport.userEmailsForChatSupport`|Array|10|✔️|Email address to receive customer queries using Teams chat. While the app manifest allows up to 10 email addresses, Teams uses only the first email address to let IT admins communicate with you. The object is an array with all elements of the type string. The maximum length of email is 80 characters.|
|`defaultsupport.emailsForEmailSupport`|Array|1|✔️|Contact email for customer inquiry (Minimum: 1; maximum: 1). The object is an array with all elements of the type string. The maximum length of email is 80 characters.|

## localizationInfo
Each command item is an object with the following structure:
|`context`|Array of Strings|3 characters||Defines where the message extension can be invoked from. Any combination of `compose`, `commandBox`, `message`. <br>Default values: `compose, commandBox`| |`title`|String|32 characters|✔️|The user-friendly command name.| |`description`|String|128 characters||The description that appears to users to indicate the purpose of this command.|
-|`semanticDescription`|String|5000 characters||Semantic description of the command for consumption by Copilot using large language model (LLM).|
+|`semanticDescription`|String|5000 characters||Semantic description of the command for consumption by Copilot using Large Language Models (LLMs).|
|`initialRun`|Boolean|||A Boolean value that indicates whether the command runs initially with no parameters. <br>Default value: `false`| |`fetchTask`|Boolean|||A Boolean value that indicates if it must fetch the dialog dynamically.| |`taskInfo`|Object|||Specify the dialog to preload when using a message extension command.|
Each command item is an object with the following structure:
|`parameter.name`|String|64 characters|✔️|The name of the parameter as it appears in the client. This is included in the user request. </br> For Api-based message extension, The name must map to the `parameters.name` in the OpenAPI Description. If you're referencing a property in the request body schema, then the name must map to `properties.name` or query parameters. | |`parameter.title`|String|32 characters|✔️|User-friendly title for the parameter.| |`parameter.description`|String|128 characters||User-friendly string that describes this parameter’s purpose.|
-|`parameter.semanticDescription`|String|2000 characters||Semantic description of the parameter for consumption by the large language model.|
+|`parameter.semanticDescription`|String|2000 characters||Semantic description of the parameter for consumption by Large Language Models (LLMs).|
|`parameter.inputType`|String|||Defines the type of control displayed on a dialog for `fetchTask: false`. One of `text`, `textarea`, `number`, `date`, `time`, `toggle`, `choiceset`.| |`parameter.value`|String|512 characters||Initial value for the parameter.| |`parameter.choices`|Array of objects|10||The choice options for the `choiceset`. Use only when `parameter.inputType` is `choiceset`.|
platform Manifest Schema https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/resources/schema/manifest-schema.md
Each command item is an object with the following structure:
|`context`|Array of Strings|3 characters||Defines where the message extension can be invoked from. Any combination of `compose`, `commandBox`, `message`. <br>Default values: `compose, commandBox`| |`title`|String|32 characters|✔️|The user-friendly command name.| |`description`|String|128 characters||The description that appears to users to indicate the purpose of this command.|
-|`semanticDescription`|String|5000 characters||Semantic description of the command for consumption by Copilot using large language model (LLM).|
+|`semanticDescription`|String|5000 characters||Semantic description of the command for consumption by Copilot using Large Language Models (LLMs).|
|`initialRun`|Boolean|||A Boolean value indicates whether the command runs initially with no parameters. Default is **false**.| |`fetchTask`|Boolean|||A Boolean value that indicates if it must fetch the dialog (referred as task module in TeamsJS v1.x) dynamically. Default is **false**.| |`taskInfo`|Object|||Specify the dialog to pre-load when using a message extension command.|
Each command item is an object with the following structure:
|`parameter.name`|String|64 characters|✔️|The name of the parameter as it appears in the client. The parameter name is included in the user request.| |`parameter.title`|String|32 characters|✔️|User-friendly title for the parameter.| |`parameter.description`|String|128 characters||User-friendly string that describes this parameter’s purpose.|
-|`parameter.semanticDescription`|String|2000 characters||Semantic description of the parameter for consumption by Copilot using large language model (LLM).|
+|`parameter.semanticDescription`|String|2000 characters||Semantic description of the parameter for consumption by Copilot using Large Language Models (LLMs).|
|`parameter.value`|String|512 characters||Initial value for the parameter. Currently the value isn't supported| |`parameter.inputType`|String|||Defines the type of control displayed on a dialog for`fetchTask: false` . Input value can only be one of `text, textarea, number, date, time, toggle, choiceset` .| |`parameter.choices`|Array of objects|10 items||The choice options for the`choiceset`. Use only when`parameter.inputType` is `choiceset`.|
platform Tabs In Sharepoint https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/tabs/how-to/tabs-in-sharepoint.md
Download the [sample app manifest](https://github.com/MicrosoftDocs/msteams-docs
## Use Teams tab in SharePoint
-1. Upload and deploy your Teams app package to your SharePoint App Catalog by visiting `https://YOUR_TENANT_NAME.sharepoint.com/sites/apps/AppCatalog/Forms/AllItems.aspx`. For example, `https://contoso.sharepoint.com/sites/apps/AppCatalog/Forms/AllItems.aspx`.
+1. Upload and deploy your Teams app package to your SharePoint App Catalog by visiting `https://YOUR_TENANT_NAME.sharepoint.com/sites/appcatalog/AppCatalog/Forms/AllItems.aspx`. For example, `https://contoso.sharepoint.com/sites/appcatalog/AppCatalog/Forms/AllItems.aspx`.
1. When prompted, enable **Make this solution available to all sites in the organization**. The following image displays the corresponding screen:
platform Teams Faq https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/teams-faq.md
Pre-existing pinned configurable tab instances of your app continue to work the
<details> <summary>What does the Teams AI library do?</summary>
-Teams AI library provides abstractions for you to build robust applications that utilize OpenAI large language model (LLM)s.
+Teams AI library provides abstractions for you to build robust applications that utilize OpenAI Large Language Models (LLMs).
<br> </details> <details> <summary>Does Microsoft provide a hosted version of OpenAI models that are used by the AI library?</summary>
-No, you need to have your large language model (LLM)s, hosted in Azure OpenAI or elsewhere.
+No, you need to have your Large Language Models (LLMs) hosted in Azure OpenAI or elsewhere.
<br> </details> <details>
-<summary>Can we use the AI library with other large language models apart from OpenAI?</summary>
+<summary>Can we use the AI library with other Large Language Models (LLMs) apart from OpenAI?</summary>
-Yes, it's possible to use Teams AI library with other large language model (LLM)s.
+Yes, it's possible to use Teams AI library with other Large Language Models (LLMs).
<br> </details> <details> <summary>Does a developer need to do anything to benefit from LLMs? If yes, why?</summary>
-Yes, Teams AI library provides abstractions to simplify utilization of large language model (LLM)s in conversational applications. However, you (developer) must tweak the prompts, topic filters, and actions depending upon your scenarios.
+Yes, Teams AI library provides abstractions to simplify the utilization of Large Language Models (LLMs) in conversational applications. However, you (the developer) must tweak the prompts, topic filters, and actions depending on your scenarios.
<br> </details>
platform Build A RAG Bot In Teams https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/toolkit/build-a-RAG-bot-in-teams.md
+
+ Title: Build a RAG bot in Teams
+
+description: In this module, learn how to build RAG bot using Teams AI library.
+
+ms.localizationpriority: high
+ Last updated: 05/21/2024
+# Build a RAG bot in Teams
+
+Advanced Q&A chatbots are powerful apps built with the help of Large Language Models (LLMs). These chatbots answer questions by pulling information from specific sources using a method called Retrieval Augmented Generation (RAG). The RAG architecture has two main flows:
+
+* **Data ingestion**: A pipeline for ingesting data from a source and indexing it. This usually happens offline.
+
+* **Retrieval and generation**: The RAG chain, which takes the user query at run time and retrieves the relevant data from the index, then passes it to the model.
+
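+The two flows can be pictured with a small, self-contained sketch; every name here is hypothetical, and the keyword match stands in for a real vector search:
+
+```python
+# Illustrative RAG flows: ingestion builds an index; retrieval and generation use it.
+index: dict[str, str] = {}
+
+def ingest(documents: list[str]) -> None:
+    """Data ingestion (offline): index the source documents."""
+    for i, doc in enumerate(documents):
+        index[f"doc-{i}"] = doc
+
+def retrieve(query: str) -> list[str]:
+    """Retrieval (run time): find documents relevant to the query."""
+    words = query.lower().split()
+    return [doc for doc in index.values() if any(w in doc.lower() for w in words)]
+
+def build_prompt(query: str) -> str:
+    """Generation: combine the retrieved data with the user query for the model."""
+    context = "\n".join(retrieve(query))
+    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
+```
+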
+Microsoft Teams enables you to build a conversational bot with RAG to create an enhanced experience and maximize productivity. Teams Toolkit provides a series of ready-to-use app templates in the **Chat With Your Data** category that combine the functionalities of Azure AI Search, Microsoft 365 SharePoint, and custom APIs as different data sources with LLMs to create a conversational search experience in Teams.
+
+## Prerequisites
+
+| Install | For using... |
+| --- | --- |
+| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, or Python build environments. Use the latest version. |
+| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.|
+| [Node.js](https://nodejs.org/en/download/) | Back-end JavaScript runtime environment. For more information, see [Node.js version compatibility table for project type](~/toolkit/build-environments.md#nodejs-version-compatibility-table-for-project-type).|
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | Microsoft Teams to collaborate with everyone you work with through apps for chat, meetings, and calls all in one place.|
+| [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's Generative Pretrained Transformer (GPT). If you want to host your app or access resources in Azure, you must create an Azure OpenAI service.|
+
+## Create a new basic AI chatbot project
+
+1. Open **Visual Studio Code**.
+
+1. Select the Teams Toolkit :::image type="icon" source="~/assets/images/teams-toolkit-v2/teams-toolkit-sidebar-icon.PNG" border="false"::: icon in the Visual Studio Code **Activity Bar**.
+
+1. Select **Create a New App**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/create-new-app.png" alt-text="Screenshot shows the location of the Create New Project link in the Teams Toolkit sidebar.":::
+
+1. Select **Custom Copilot**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/custom-copilot.png" alt-text="Screenshot shows the option to select custom Copilot as the new project to create.":::
+
+1. Select **Chat With Your Data**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/chat-with-your-data.png" alt-text="Screenshot shows the option to select app features using AI library list.":::
+
+1. Select **Customize**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/chat-with-data-customize.png" alt-text="Screenshot shows the option to select the data customization for loading.":::
+
+1. Select **JavaScript**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/language-javascript.png" alt-text="Screenshot shows the option to select the programming language.":::
+
+1. Select **Azure OpenAI** or **OpenAI**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/azure-openai.png" alt-text="Screenshot shows the option to select the LLM.":::
+
+1. Enter your **Azure OpenAI** or **OpenAI** credentials based on the service you select. Select **Enter**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/azure-open-api-key-optional.png" alt-text="Screenshot shows the location to enter Azure open API key.":::
+
+1. Select **Default folder**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/default-folder.png" alt-text="Screenshot shows the location app folder to save.":::
+
+ To change the default location, follow these steps:
+
+ 1. Select **Browse**.
+ 1. Select the location for the project workspace.
+ 1. Select **Select Folder**.
+
+1. Enter an app name for your app and then select the **Enter** key.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/application-name.png" alt-text="Screenshot shows the option to enter the suitable name.":::
+
+ You've successfully created your **Chat With Your Data** project workspace.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/rag-project-output.png" alt-text="Screenshot shows the ai chatbot created and readme file is available.":::
+
+1. Under **EXPLORER**, go to the **env** > **.env.testtool.user** file.
+
+1. Update the following values:
+
+ * `SECRET_AZURE_OPENAI_API_KEY=<your-key>`
+ * `AZURE_OPENAI_ENDPOINT=<your-endpoint>`
+ * `AZURE_OPENAI_DEPLOYMENT_NAME=<your-deployment>`
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/env-testtool-user.png" alt-text="Screenshot shows the details updated in the env file.":::
+
+1. To debug your app, select the **F5** key or from the left pane, select **Run and Debug (Ctrl+Shift+D)** and then select **Debug in Test Tool (Preview)** from the dropdown list.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/debug-test-tool.png" alt-text="Screenshot shows the selection of debugging option from the list of options.":::
+
+Test Tool opens the bot in a webpage.
+
 :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/rag-final-output.png" alt-text="Screenshot shows the chat with your data final output." lightbox="../assets/images/teams-toolkit-v2/custom-copilot/rag-output.png":::
+
+## Take a tour of the bot app source code
+
+| Folder | Contents |
+| - | - |
+| `.vscode` | Visual Studio Code files for debugging. |
+| `appPackage` | Templates for the Teams app manifest. |
+| `env` | Environment files. |
+| `infra` | Templates for provisioning Azure resources. |
+| `src` | The source code for the app. |
+|`src/index.js`| Sets up the bot app server.|
+|`src/adapter.js`| Sets up the bot adapter.|
+|`src/config.js`| Defines the environment variables.|
+|`src/prompts/chat/skprompt.txt`| Defines the prompt.|
+|`src/prompts/chat/config.json`| Configures the prompt.|
|`src/app/app.js`| Handles the business logic for the RAG bot.|
+|`src/app/myDataSource.js`| Defines the data source.|
+|`src/dat`| Raw text data sources.|
+|`teamsapp.yml`|This is the main Teams Toolkit project file. The project file defines the properties and configuration stage definitions. |
+|`teamsapp.local.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging.|
+|`teamsapp.testtool.yml`| This overrides `teamsapp.yml` with actions that enable local execution and debugging in Teams App Test Tool.|
+
+## RAG scenarios for Teams AI
+
+In an AI context, vector databases are widely used as RAG storage: they store embedding data and provide vector-similarity search. Teams AI library provides utilities to help create embeddings for the given inputs.
+
+> [!Tip]
+> Teams AI library doesn't provide the vector database implementation, so you need to add your own logic to process the created embeddings.
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+// create OpenAIEmbeddings instance
+const model = new OpenAIEmbeddings({ ... endpoint, apikey, model, ... });
+
+// create embeddings for the given inputs
+const embeddings = await model.createEmbeddings(model, inputs);
+
+// your own logic to process embeddings
+```
+
+# [Python](#tab/python)
+
+```python
+# create OpenAIEmbeddings instance
+model = OpenAIEmbeddings(OpenAIEmbeddingsOptions(api_key, model))
+
+# create embeddings for the given inputs
+embeddings = await model.create_embeddings(inputs)
+
+# your own logic to process embeddings
+```
+++
+The following diagram shows how Teams AI library provides functionalities to ease each step of the retrieval and generation process:
+
+1. **Handle input**: The most straightforward way is to pass the user's input to the retrieval without any change. However, if you'd like to customize the input before retrieval, you can add an [activity handler](https://github.com/OfficeDev/TeamsFx/wiki/) to certain incoming activities.
+
+1. **Retrieve DataSource**: Teams AI library provides the `DataSource` interface to let you add your own retrieval logic. You need to create your own `DataSource` instance, and the Teams AI library calls it on demand.
+
+ # [JavaScript](#tab/javascript1)
+
+ ```javascript
+ class MyDataSource implements DataSource {
+ /**
+ * Name of the data source.
+ */
+ public readonly name = "my-datasource";
+
+ /**
+ * Renders the data source as a string of text.
+ * @param context Turn context for the current turn of conversation with the user.
+ * @param memory An interface for accessing state values.
+ * @param tokenizer Tokenizer to use when rendering the data source.
+ * @param maxTokens Maximum number of tokens allowed to be rendered.
+ * @returns The text to inject into the prompt as a `RenderedPromptSection` object.
+ */
+ renderData(
+ context: TurnContext,
+ memory: Memory,
+ tokenizer: Tokenizer,
+ maxTokens: number
+ ): Promise<RenderedPromptSection<string>> {
+ ...
+ }
+ }
+ ```
+
+ # [Python](#tab/python1)
+
+ ```python
+ class MyDataSource(DataSource):
+ def __init__(self):
+ self.name = "my_datasource_name"
+
+ def name(self):
+ return self.name
+
+ async def render_data(self, _context: TurnContext, memory: Memory, tokenizer: Tokenizer, maxTokens: int):
+ # your render data logic
+ ```
+
+
+
+1. **Call AI with prompt**: In the Teams AI prompt system, you can easily inject a `DataSource` by adjusting the `augmentation.data_sources` configuration section. This connects the prompt with the `DataSource`, and the library orchestrator injects the `DataSource` text into the final prompt. For more information, see [authorprompt](https://github.com/OfficeDev/TeamsFx/wiki/). For example, in the prompt's `config.json` file:
+
+ ```json
+ {
+ "schema": 1.1,
+ ...
+ "augmentation": {
+ "data_sources": {
+ "my-datasource": 1200
+ }
+ }
+ }
+ ```
+
+1. **Build response**: By default, Teams AI library sends the AI-generated response as a text message to the user. If you want to customize the response, you can override the default [SAY actions](https://github.com/OfficeDev/TeamsFx/wiki/) or explicitly call the [AI Model](https://github.com/OfficeDev/TeamsFx/wiki/) to build your replies, for example, with Adaptive Cards, as sketched after this list.
+
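+As a sketch of that customization, the handler below overrides the default SAY behavior to reply with a card instead of plain text. `ActionTypes.SAY_COMMAND` follows the Python library's action-registration pattern, the shape of `context.data.response` varies by version, and `build_card` is a hypothetical helper you'd supply:
+
+```python
+# Hedged sketch: replace the default SAY reply with an Adaptive Card attachment.
+from botbuilder.schema import Activity
+from teams.ai.actions import ActionTurnContext, ActionTypes
+
+@app.ai.action(ActionTypes.SAY_COMMAND)
+async def on_say(context: ActionTurnContext, state) -> str:
+    response = context.data.response  # the model's SAY payload; shape varies by version
+    text = getattr(response, "content", response) or ""
+    card = build_card(text)  # hypothetical helper that returns an Attachment
+    await context.send_activity(Activity(attachments=[card]))
+    return ""
+```
+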
+Here's a minimal set of implementations to add RAG to your app. In general, it implements `DataSource` to inject your `knowledge` into the prompt, so that the AI can generate a response based on that knowledge.
+
+* Create a `myDataSource.ts` file to implement the `DataSource` interface:
+
+ ```typescript
+ export class MyDataSource implements DataSource {
+ public readonly name = "my-datasource";
+ public async renderData(
+ context: TurnContext,
+ memory: Memory,
+ tokenizer: Tokenizer,
+ maxTokens: number
+ ): Promise<RenderedPromptSection<string>> {
+ const input = memory.getValue('temp.input') as string;
+ let knowledge = "There's no knowledge found.";
+
+ // hard-code knowledge
+ if (input?.includes("shuttle bus")) {
+ knowledge = "Company's shuttle bus may be 15 minutes late on rainy days.";
+ } else if (input?.includes("cafe")) {
+ knowledge = "The Cafe's available time is 9:00 to 17:00 on working days and 10:00 to 16:00 on weekends and holidays."
+ }
+
+ return {
+ output: knowledge,
+ length: knowledge.length,
+ tooLong: false
+ }
+ }
+ }
+ ```
+
+* Register the `DataSource` in the `app.ts` file:
+
+ # [JavaScript](#tab/javascript2)
+
+ ```javascript
+ // Register your data source to prompt manager
+ planner.prompts.addDataSource(new MyDataSource());
+ ```
+
+ # [Python](#tab/python2)
+
+ ```python
+ planner.prompts.add_data_source(MyDataSource())
+ ```
+
+
+* Create the `prompts/qa/skprompt.txt` file and add the following text:
+
+ ```
+ The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly to answer user's question.
+
+ Base your answer off the text below:
+ ```
+
+* Create the `prompts/qa/config.json` file and add the following code to connect with the data source:
+
+ ```json
+ {
+ "schema": 1.1,
+ "description": "Chat with QA Assistant",
+ "type": "completion",
+ "completion": {
+ "model": "gpt-35-turbo",
+ "completion_type": "chat",
+ "include_history": true,
+ "include_input": true,
+ "max_input_tokens": 2800,
+ "max_tokens": 1000,
+ "temperature": 0.9,
+ "top_p": 0.0,
+ "presence_penalty": 0.6,
+ "frequency_penalty": 0.0,
+ "stop_sequences": []
+ },
+ "augmentation": {
+ "data_sources": {
+ "my-datasource": 1200
+ }
+ }
+ }
+ ```
+
+## Select data sources
+
+In the **Chat With Your Data** or RAG scenarios, Teams Toolkit provides the following types of data sources:
+
+* **Customize**: Allows you to fully control the data ingestion to build your own vector index and use it as data source. For more information, see [build your own data ingestion](#build-your-own-data-ingestion).
+
+  You can also implement your own data source instance to connect with your own data, for example, using the Azure Cosmos DB vector database extension or the Azure PostgreSQL Server vector extension as the vector database, or the Bing Web Search API to get the latest web content.
+
+* **Azure AI Search**: Provides a sample to add your documents to Azure AI Search Service, then use the search index as data source.
+
+* **Custom API**: Allows your chatbot to invoke the API defined in the OpenAPI description document to retrieve domain data from the API service.
+
+* **Microsoft Graph and SharePoint**: Provides a sample to use Microsoft 365 content from the Microsoft Graph Search API as a data source.
+
+## Build your own data ingestion
+
+To build your data ingestion, follow these steps:
+
+1. **Load your source documents**: Ensure that your document has meaningful text, as the embedding model takes only text as input.
+
+1. **Split into chunks**: Split the document into chunks to avoid API call failures, as the embedding model has an input token limit.
+
+1. **Call embedding model**: Call the embedding model APIs to create embeddings for the given inputs.
+
+1. **Store embeddings**: Store the created embeddings in a vector database. Also include useful metadata and the raw content for further reference.
+
+## Sample code
+
+# [JavaScript](#tab/javascript4)
+
+* `loader.ts`: Plain text as source input.
+
+ ```javascript
+ import * as fs from "node:fs";
+
+ export function loadTextFile(path: string): string {
+ return fs.readFileSync(path, "utf-8");
+ }
+ ```
+
+* `splitter.ts`: Split text into chunks, with an overlap.
+
+ ```javascript
+  // Split `content` into chunks of roughly `length` characters, where
+  // consecutive chunks overlap by about `overlap` characters. Chunks break
+  // on the delimiters below, so words aren't split mid-word.
+  const delimiters = [" ", "\t", "\r", "\n"];
+
+  export function split(content: string, length: number, overlap: number): Array<string> {
+ const results = new Array<string>();
+ let cursor = 0, curChunk = 0;
+ results.push("");
+ while(cursor < content.length) {
+ const curChar = content[cursor];
+ if (delimiters.includes(curChar)) {
+ // check chunk length
+ while (curChunk < results.length && results[curChunk].length >= length) {
+ curChunk ++;
+ }
+ for (let i = curChunk; i < results.length; i++) {
+ results[i] += curChar;
+ }
+ if (results[results.length - 1].length >= length - overlap) {
+ results.push("");
+ }
+ } else {
+ // append
+ for (let i = curChunk; i < results.length; i++) {
+ results[i] += curChar;
+ }
+ }
+ cursor ++;
+ }
+    // Drop unfinished trailing overlap chunks, keeping chunks 0..curChunk.
+    while (curChunk < results.length - 1) {
+ results.pop();
+ }
+ return results;
+ }
+ ```
+
+* `embeddings.ts`: Use Teams AI library `OpenAIEmbeddings` to create embeddings.
+
+ ```javascript
+ import { OpenAIEmbeddings } from "@microsoft/teams-ai";
+
+ const embeddingClient = new OpenAIEmbeddings({
+ azureApiKey: "<your-aoai-key>",
+ azureEndpoint: "<your-aoai-endpoint>",
+ azureDeployment: "<your-embedding-deployment, e.g., text-embedding-ada-002>"
+ });
+
+ export async function createEmbeddings(content: string): Promise<number[]> {
+ const response = await embeddingClient.createEmbeddings(content);
+ return response.output[0];
+ }
+ ```
+
+* `searchIndex.ts`: Create Azure AI Search Index.
+
+ ```javascript
+ import { SearchIndexClient, AzureKeyCredential, SearchIndex } from "@azure/search-documents";
+
+ const endpoint = "<your-search-endpoint>";
+ const apiKey = "<your-search-key>";
+ const indexName = "<your-index-name>";
+
+ const indexDef: SearchIndex = {
+ name: indexName,
+ fields: [
+ {
+ type: "Edm.String",
+ name: "id",
+ key: true,
+ },
+ {
+ type: "Edm.String",
+ name: "content",
+ searchable: true,
+ },
+ {
+ type: "Edm.String",
+ name: "filepath",
+ searchable: true,
+ filterable: true,
+ },
+ {
+ type: "Collection(Edm.Single)",
+ name: "contentVector",
+ searchable: true,
+ vectorSearchDimensions: 1536,
+ vectorSearchProfileName: "default"
+ }
+ ],
+ vectorSearch: {
+ algorithms: [{
+ name: "default",
+ kind: "hnsw"
+ }],
+ profiles: [{
+ name: "default",
+ algorithmConfigurationName: "default"
+ }]
+ },
+ semanticSearch: {
+ defaultConfigurationName: "default",
+ configurations: [{
+ name: "default",
+ prioritizedFields: {
+ contentFields: [{
+ name: "content"
+ }]
+ }
+ }]
+ }
+ };
+
+ export async function createNewIndex(): Promise<void> {
+ const client = new SearchIndexClient(endpoint, new AzureKeyCredential(apiKey));
+ await client.createIndex(indexDef);
+ }
+ ```
+
+* `searchIndexer.ts`: Upload created embeddings and other fields to Azure AI Search Index.
+
+ ```javascript
+ import { AzureKeyCredential, SearchClient } from "@azure/search-documents";
+
+ export interface Doc {
+ id: string,
+ content: string,
+ filepath: string,
+ contentVector: number[]
+ }
+
+ const endpoint = "<your-search-endpoint>";
+ const apiKey = "<your-search-key>";
+ const indexName = "<your-index-name>";
+ const searchClient: SearchClient<Doc> = new SearchClient<Doc>(endpoint, indexName, new AzureKeyCredential(apiKey));
+
+ export async function indexDoc(doc: Doc): Promise<boolean> {
+ const response = await searchClient.mergeOrUploadDocuments([doc]);
+ return response.results.every((result) => result.succeeded);
+ }
+ ```
+
+* `index.ts`: Orchestrate the above components.
+
+ ```javascript
+ import { createEmbeddings } from "./embeddings";
+ import { loadTextFile } from "./loader";
+ import { createNewIndex } from "./searchIndex";
+ import { indexDoc } from "./searchIndexer";
+ import { split } from "./splitter";
+
+ async function main() {
+ // Only need to call once
+ await createNewIndex();
+
+ // local files as source input
+ const files = [`${__dirname}/dat`];
+ for (const file of files) {
+ // load file
+ const fullContent = loadTextFile(file);
+
+ // split into chunks
+ const contents = split(fullContent, 1000, 100);
+ let partIndex = 0;
+ for (const content of contents) {
+ partIndex ++;
+ // create embeddings
+ const embeddings = await createEmbeddings(content);
+
+ // upload to index
+ await indexDoc({
+ id: `${file.replace(/[^a-z0-9]/ig, "")}___${partIndex}`,
+ content: content,
+ filepath: file,
+ contentVector: embeddings,
+ });
+ }
+ }
+ }
+
+  main().catch(console.error);
+ ```
+
+# [Python](#tab/python4)
+
+* `loader.py`: Plain text as source input.
+
+ ```python
+ def load_text_file(path: str) -> str:
+ with open(path, 'r', encoding='utf-8') as file:
+ return file.read()
+ ```
+
+* `splitter.py`: Split text into chunks, with an overlap.
+
+ ```python
+ def split(content: str, length: int, overlap: int) -> list[str]:
+ delimiters = [" ", "\t", "\r", "\n"]
+ results = [""]
+ cursor = 0
+ cur_chunk = 0
+ while cursor < len(content):
+ cur_char = content[cursor]
+ if cur_char in delimiters:
+ while cur_chunk < len(results) and len(results[cur_chunk]) >= length:
+ cur_chunk += 1
+ for i in range(cur_chunk, len(results)):
+ results[i] += cur_char
+ if len(results[-1]) >= length - overlap:
+ results.append("")
+ else:
+ for i in range(cur_chunk, len(results)):
+ results[i] += cur_char
+ cursor += 1
+ while cur_chunk < len(results) - 1:
+ results.pop()
+ return results
+ ```
+
+* `embeddings.py`: Use Teams AI library `OpenAIEmbeddings` to create embeddings.
+
+ ```python
+ async def create_embeddings(text: str, embeddings):
+ result = await embeddings.create_embeddings(text)
+
+ return result.output[0]
+ ```
+
+* `search_index.py`: Create Azure AI Search Index.
+
+ ```python
+    from azure.search.documents.indexes import SearchIndexClient
+    from azure.search.documents.indexes.models import (
+        CorsOptions,
+        HnswAlgorithmConfiguration,
+        SearchField,
+        SearchFieldDataType,
+        SearchIndex,
+        SearchableField,
+        SimpleField,
+        VectorSearch,
+        VectorSearchProfile,
+    )
+
+    async def create_index_if_not_exists(client: SearchIndexClient, name: str):
+ doc_index = SearchIndex(
+ name=name,
+ fields = [
+ SimpleField(name="docId", type=SearchFieldDataType.String, key=True),
+ SimpleField(name="docTitle", type=SearchFieldDataType.String),
+ SearchableField(name="description", type=SearchFieldDataType.String, searchable=True),
+ SearchField(name="descriptionVector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=1536, vector_search_profile_name='my-vector-config'),
+ ],
+ scoring_profiles=[],
+ cors_options=CorsOptions(allowed_origins=["*"]),
+ vector_search = VectorSearch(
+ profiles=[VectorSearchProfile(name="my-vector-config", algorithm_configuration_name="my-algorithms-config")],
+ algorithms=[HnswAlgorithmConfiguration(name="my-algorithms-config")],
+ )
+ )
+
+ client.create_or_update_index(doc_index)
+ ```
+
+* `search_indexer.py`: Upload created embeddings and other fields to Azure AI Search Index.
+
+ ```python
+    import os
+
+    from azure.core.credentials import AzureKeyCredential
+    from azure.search.documents.aio import SearchClient
+    from azure.search.documents.indexes import SearchIndexClient
+    from teams.ai.embeddings import AzureOpenAIEmbeddings, AzureOpenAIEmbeddingsOptions
+
+    from embeddings import create_embeddings
+    from search_index import create_index_if_not_exists
+    from loader import load_text_file
+    from splitter import split
+
+ async def get_doc_data(embeddings):
+ file_path=f'{os.getcwd()}/my_file_path_1'
+    # split returns a list of chunks; this sample indexes only the first chunk
+    raw_description1 = split(load_text_file(file_path), 1000, 100)[0]
+    doc1 = {
+        "docId": "1",
+        "docTitle": "my_title_1",
+        "description": raw_description1,
+        "descriptionVector": await create_embeddings(raw_description1, embeddings=embeddings),
+    }
+
+ file_path=f'{os.getcwd()}/my_file_path_2'
+    raw_description2 = split(load_text_file(file_path), 1000, 100)[0]
+    doc2 = {
+        "docId": "2",
+        "docTitle": "my_title_2",
+        "description": raw_description2,
+        "descriptionVector": await create_embeddings(raw_description2, embeddings=embeddings),
+    }
+
+ return [doc1, doc2]
+
+ async def setup(search_api_key, search_api_endpoint):
+ index = 'my_index_name'
+ credentials = AzureKeyCredential(search_api_key)
+ search_index_client = SearchIndexClient(search_api_endpoint, credentials)
+ await create_index_if_not_exists(search_index_client, index)
+
+ search_client = SearchClient(search_api_endpoint, index, credentials)
+ embeddings=AzureOpenAIEmbeddings(AzureOpenAIEmbeddingsOptions(
+ azure_api_key="<your-aoai-key>",
+ azure_endpoint="<your-aoai-endpoint>",
+ azure_deployment="<your-embedding-deployment, e.g., text-embedding-ada-002>"
+ ))
+ data = await get_doc_data(embeddings=embeddings)
+ await search_client.merge_or_upload_documents(data)
+ ```
+
+* `index.py`: Orchestrate the above components.
+
+ ```python
+    import asyncio
+
+    from search_indexer import setup
+
+ search_api_key = '<your-key>'
+ search_api_endpoint = '<your-endpoint>'
+ asyncio.run(setup(search_api_key, search_api_endpoint))
+ ```
+++
+## Azure AI Search as data source
+
+In this section, you'll learn how to:
+
+* [Add your document to Azure AI Search through Azure OpenAI Service](#add-document-to-azure-ai-search).
+* [Use Azure AI Search index as data source in the RAG app](#use-azure-ai-search-index-data-source).
+
+### Add document to Azure AI Search
+
+> [!Note]
+> This approach creates an end-to-end chat API that's called as the AI model. You can also use the index created earlier as a data source, and use Teams AI library to customize the retrieval and prompt.
+
+You can ingest your knowledge documents to Azure AI Search Service and create a vector index with Azure OpenAI on your data. After ingestion, you can use the index as a data source.
+
+1. Prepare your data in Azure Blob Storage.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/assistant-set-up.png" alt-text="Screenshot shows to do assistant setup in Azure OpenAI Studio.":::
+
+1. In Azure OpenAI Studio, select **Add a data source**.
+
+1. Update the required fields.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/add-data.png" alt-text="Screenshot shows the option to add data source.":::
+
+1. Select **Next**.
+
+ The **Data management** page appears.
+
+1. Update the required fields.
+
+1. Select **Next**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/select-add-data-source.png" alt-text="Screenshot shows the option to select add data source.":::
+
+1. Update the required fields. Select **Next**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/data-management.png" alt-text="Screenshot shows the option to add data management.":::
+
+1. Select **Save and close**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/review-and-finish.png" alt-text="Screenshot shows the option to review and finish.":::
+
+### Use Azure AI Search index data source
+
+After ingesting data into Azure AI Search, you can implement your own `DataSource` to retrieve data from the search index.
+
+# [JavaScript](#tab/javascript3)
+
+```javascript
+import { AzureKeyCredential, SearchClient } from "@azure/search-documents";
+import { DataSource, Memory, OpenAIEmbeddings, RenderedPromptSection, Tokenizer } from "@microsoft/teams-ai";
+import { TurnContext } from "botbuilder";
+
+export interface Doc {
+ id: string,
+ content: string, // searchable
+ filepath: string,
+ // contentVector: number[] // vector field
+ // ... other fields
+}
+
+// Azure OpenAI configuration
+const aoaiEndpoint = "<your-aoai-endpoint>";
+const aoaiApiKey = "<your-aoai-key>";
+const aoaiDeployment = "<your-embedding-deployment, e.g., text-embedding-ada-002>";
+
+// Azure AI Search configuration
+const searchEndpoint = "<your-search-endpoint>";
+const searchApiKey = "<your-search-apikey>";
+const searchIndexName = "<your-index-name>";
+
+export class MyDataSource implements DataSource {
+ public readonly name = "my-datasource";
+ private readonly embeddingClient: OpenAIEmbeddings;
+ private readonly searchClient: SearchClient<Doc>;
+
+ constructor() {
+ this.embeddingClient = new OpenAIEmbeddings({
+ azureEndpoint: aoaiEndpoint,
+ azureApiKey: aoaiApiKey,
+ azureDeployment: aoaiDeployment
+ });
+ this.searchClient = new SearchClient<Doc>(searchEndpoint, searchIndexName, new AzureKeyCredential(searchApiKey));
+ }
+
+ public async renderData(context: TurnContext, memory: Memory, tokenizer: Tokenizer, maxTokens: number): Promise<RenderedPromptSection<string>> {
+ // use user input as query
+ const input = memory.getValue("temp.input") as string;
+
+ // generate embeddings
+ const embeddings = (await this.embeddingClient.createEmbeddings(input)).output[0];
+
+ // query Azure AI Search
+    const response = await this.searchClient.search(input, {
+      select: [ "id", "content", "filepath" ],
+      searchFields: ["content"],
+      vectorSearchOptions: {
+        queries: [{
+          kind: "vector",
+          fields: [ "contentVector" ],
+          vector: embeddings,
+          kNearestNeighborsCount: 3
+        }]
+      },
+      queryType: "semantic",
+      top: 3,
+      semanticSearchOptions: {
+        // your semantic configuration name
+        configurationName: "default",
+      }
+    });
+
+ // Add documents until you run out of tokens
+ let length = 0, output = '';
+ for await (const result of response.results) {
+ // Start a new doc
+ let doc = `${result.document.content}\n\n`;
+ let docLength = tokenizer.encode(doc).length;
+ const remainingTokens = maxTokens - (length + docLength);
+ if (remainingTokens <= 0) {
+ break;
+ }
+
+      // Append doc to output
+ output += doc;
+ length += docLength;
+ }
+ return { output, length, tooLong: length > maxTokens };
+ }
+}
+```
+
+# [Python](#tab/python3)
+
+```python
+async def get_embedding_vector(text: str):
+ embeddings = AzureOpenAIEmbeddings(AzureOpenAIEmbeddingsOptions(
+ azure_api_key='<your-aoai-key>',
+ azure_endpoint='<your-aoai-endpoint>',
+ azure_deployment='<your-aoai-embedding-deployment>'
+ ))
+
+ result = await embeddings.create_embeddings(text)
+ if (result.status != 'success' or not result.output):
+ raise Exception(f"Failed to generate embeddings for description: {text}")
+
+ return result.output[0]
+
+@dataclass
+class Doc:
+ docId: Optional[str] = None
+ docTitle: Optional[str] = None
+ description: Optional[str] = None
+ descriptionVector: Optional[List[float]] = None
+
+@dataclass
+class MyDataSourceOptions:
+ name: str
+ indexName: str
+ azureAISearchApiKey: str
+ azureAISearchEndpoint: str
+
+from azure.core.credentials import AzureKeyCredential
+from azure.search.documents import SearchClient
+import json
+
+@dataclass
+class Result:
+ def __init__(self, output, length, too_long):
+ self.output = output
+ self.length = length
+ self.too_long = too_long
+
+class MyDataSource(DataSource):
+ def __init__(self, options: MyDataSourceOptions):
+ self.name = options.name
+ self.options = options
+ self.searchClient = SearchClient(
+ options.azureAISearchEndpoint,
+ options.indexName,
+ AzureKeyCredential(options.azureAISearchApiKey)
+ )
+
+ def name(self):
+ return self.name
+
+    async def render_data(self, _context: TurnContext, memory: Memory, tokenizer: Tokenizer, maxTokens: int):
+        query = memory.get('temp.input')
+        if not query:
+            return Result('', 0, False)
+
+        embedding = await get_embedding_vector(query)
+        vector_query = VectorizedQuery(vector=embedding, k_nearest_neighbors=2, fields="descriptionVector")
+
+ selectedFields = [
+ 'docTitle',
+ 'description',
+ 'descriptionVector',
+ ]
+
+ searchResults = self.searchClient.search(
+ search_text=query,
+ select=selectedFields,
+ vector_queries=[vector_query],
+ )
+
+ if not searchResults:
+ return Result('', 0, False)
+
+ usedTokens = 0
+ doc = ''
+ for result in searchResults:
+ tokens = len(tokenizer.encode(json.dumps(result["description"])))
+
+ if usedTokens + tokens > maxTokens:
+ break
+
+ doc += json.dumps(result["description"])
+ usedTokens += tokens
+
+ return Result(doc, usedTokens, usedTokens > maxTokens)
+```
+++
+## Add more APIs for Custom API as data source
+
+Follow these steps to extend the custom copilot from the Custom API template with more APIs.
+
+1. Update `./appPackage/apiSpecificationFile/openapi.*`.
+
+   Copy the corresponding part of the API that you want to add from your spec and append it to `./appPackage/apiSpecificationFile/openapi.*`.
+
+1. Update `./src/prompts/chat/actions.json`.
+
+ Update the necessary info and properties for path, query, and body for the API in the following object:
+
+ ```json
+ {
+ "name": "${{YOUR-API-NAME}}",
+ "description": "${{YOUR-API-DESCRIPTION}}",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "query": {
+ "type": "object",
+ "properties": {
+ "${{YOUR-PROPERTY-NAME}}": {
+ "type": "${{YOUR-PROPERTY-TYPE}}",
+ "description": "${{YOUR-PROPERTY-DESCRIPTION}}",
+ }
+ // You can add more query properties here
+ }
+ },
+ "path": {
+ // Same as query properties
+ },
+ "body": {
+ // Same as query properties
+ }
+ }
+ }
+ }
+ ```
+
+1. Update `./src/adaptiveCards`.
+
+ Create a new file with name `${{YOUR-API-NAME}}.json` and fill in the Adaptive Card for the API response of your API.
+
+1. Update the `./src/app/app.js` file.
+
+   Add the following code before `module.exports = app;`:
+
+ ```javascript
+ app.ai.action(${{YOUR-API-NAME}}, async (context: TurnContext, state: ApplicationTurnState, parameter: any) => {
+ const client = await api.getClient();
+
+ const path = client.paths[${{YOUR-API-PATH}}];
+ if (path && path.${{YOUR-API-METHOD}}) {
+ const result = await path.${{YOUR-API-METHOD}}(parameter.path, parameter.body, {
+ params: parameter.query,
+ });
+ const card = generateAdaptiveCard("../adaptiveCards/${{YOUR-API-NAME}}.json", result);
+ await context.sendActivity({ attachments: [card] });
+ } else {
+ await context.sendActivity("no result");
+ }
+ return "result";
+ });
+ ```
+
+## Microsoft 365 as data source
+
+Learn to utilize the Microsoft Graph Search API to query Microsoft 365 content as a data source for the RAG app. To learn more about the Microsoft Graph Search API, see Use the Microsoft Search API to search OneDrive and SharePoint content.
+
+**Prerequisite**: You must create a Graph API client and grant it the `Files.Read.All` permission scope to access SharePoint and OneDrive files, folders, pages, and news.
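+
+For illustration, here's a minimal sketch of creating such a client with the `@microsoft/microsoft-graph-client` package; how you acquire the access token (for example, through the bot's sign-in flow) is up to your app:
+
+```typescript
+import { Client } from "@microsoft/microsoft-graph-client";
+
+// Create a Graph client from an access token that was granted the
+// Files.Read.All permission scope.
+export function createGraphClient(accessToken: string): Client {
+  return Client.init({
+    authProvider: (done) => done(null, accessToken),
+  });
+}
+```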
+
+### Data ingestion
+
+Because the Microsoft Graph Search API can search SharePoint content, you only need to ensure that your document is uploaded to SharePoint or OneDrive; no extra data ingestion is required.
+
+> [!NOTE]
+> SharePoint server indexes a file only if its file extension is listed on the manage file types page. For a complete list of supported file extensions, see Default indexed file name extensions and parsed file types in SharePoint Server and SharePoint in Microsoft 365.
+
+### Data source implementation
+
+An example of searching for text files in SharePoint and OneDrive is as follows:
+
+```javascript
+import {
+ DataSource,
+ Memory,
+ RenderedPromptSection,
+ Tokenizer,
+} from "@microsoft/teams-ai";
+import { TurnContext } from "botbuilder";
+import { Client, ResponseType } from "@microsoft/microsoft-graph-client";
+
+export class GraphApiSearchDataSource implements DataSource {
+ public readonly name = "my-datasource";
+ public readonly description =
+ "Searches the graph for documents related to the input";
+ public client: Client;
+
+ constructor(client: Client) {
+ this.client = client;
+ }
+
+ public async renderData(
+ context: TurnContext,
+ memory: Memory,
+ tokenizer: Tokenizer,
+ maxTokens: number
+ ): Promise<RenderedPromptSection<string>> {
+ const input = memory.getValue("temp.input") as string;
+ const contentResults = [];
+ const response = await this.client.api("/search/query").post({
+ requests: [
+ {
+ entityTypes: ["driveItem"],
+ query: {
+            // Search for text files in the user's OneDrive and SharePoint
+ // The supported file types are listed here:
+ // https://learn.microsoft.com/sharepoint/technical-reference/default-crawled-file-name-extensions-and-parsed-file-types
+ queryString: `${input} filetype:txt`,
+ },
+ // This parameter is required only when searching with application permissions
+ // https://learn.microsoft.com/graph/search-concept-searchall
+ // region: "US",
+ },
+ ],
+ });
+ for (const value of response?.value ?? []) {
+ for (const hitsContainer of value?.hitsContainers ?? []) {
+ contentResults.push(...(hitsContainer?.hits ?? []));
+ }
+ }
+
+ // Add documents until you run out of tokens
+ let length = 0,
+ output = "";
+ for (const result of contentResults) {
+ const rawContent = await this.downloadSharepointFile(
+ result.resource.webUrl
+ );
+ if (!rawContent) {
+ continue;
+ }
+ let doc = `${rawContent}\n\n`;
+ let docLength = tokenizer.encode(doc).length;
+ const remainingTokens = maxTokens - (length + docLength);
+ if (remainingTokens <= 0) {
+ break;
+ }
+
+      // Append doc to output
+ output += doc;
+ length += docLength;
+ }
+ return { output, length, tooLong: length > maxTokens };
+ }
+
+ // Download the file from SharePoint
+ // https://docs.microsoft.com/en-us/graph/api/driveitem-get-content
+ private async downloadSharepointFile(
+ contentUrl: string
+ ): Promise<string | undefined> {
+ const encodedUrl = this.encodeSharepointContentUrl(contentUrl);
+ const fileContentResponse = await this.client
+ .api(`/shares/${encodedUrl}/driveItem/content`)
+ .responseType(ResponseType.TEXT)
+ .get();
+
+ return fileContentResponse;
+ }
+
+  private encodeSharepointContentUrl(webUrl: string): string {
+    const byteData = Buffer.from(webUrl, "utf-8");
+    const base64String = byteData.toString("base64");
+    // Encode as an unpadded base64url sharing token, as the Graph shares API
+    // expects: trim '=' padding, then map '/' to '_' and '+' to '-'.
+    return (
+      "u!" + base64String.replace(/=+$/, "").replace(/\//g, "_").replace(/\+/g, "-")
+    );
+  }
+}
+```
+
+## See also
+
+[Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)
platform Build A Basic AI Chatbot In Teams https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/toolkit/build-a-basic-AI-chatbot-in-teams.md
+
+ Title: Build a basic AI Chatbot in Teams
+
+description: In this module, learn how to build a basic AI Chatbot using Teams AI library.
+
+ms.localizationpriority: high
+ Last updated: 05/21/2024
+# Build a basic AI chatbot
+
+The AI chatbot template showcases a bot app, similar to ChatGPT, that responds to user questions and allows users to interact with the AI bot in Microsoft Teams. [Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) is used to build the app template, providing the capabilities to create AI-based Teams applications.
+
+## Prerequisites
+
+| Install | For using... |
+| | |
+| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, or Python build environments. Use the latest version. |
+| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.|
+| [Node.js](https://nodejs.org/en/download/) | Back-end JavaScript runtime environment. For more information, see [Node.js version compatibility table for project type](~/toolkit/build-environments.md#nodejs-version-compatibility-table-for-project-type).|
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | Microsoft Teams to collaborate with everyone you work with through apps for chat, meetings, and calls all in one place.|
+| [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's Generative Pretrained Transformer (GPT). If you want to host your app or access resources in Azure, you must create an Azure OpenAI service.|
+
+## Create a new basic AI chatbot project
+
+1. Open **Visual Studio Code**.
+
+1. Select the Teams Toolkit :::image type="icon" source="~/assets/images/teams-toolkit-v2/teams-toolkit-sidebar-icon.PNG" border="false"::: icon in the Visual Studio Code **Activity Bar**.
+
+1. Select **Create a New App**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/create-new-app.png" alt-text="Screenshot shows the location of the Create New Project link in the Teams Toolkit sidebar.":::
+
+1. Select **Custom Copilot**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/custom-copilot.png" alt-text="Screenshot shows the option to select custom Copilot as the new project to create.":::
+
+1. Select **Basic AI Chatbot**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/basic-ai-chatbot.png" alt-text="Screenshot shows the option to select app features using AI library list.":::
+
+1. Select **JavaScript**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/language-javascript.png" alt-text="Screenshot shows the option to select the programming language.":::
+
+1. Select **Azure OpenAI**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/azure-openai.png" alt-text="Screenshot shows the option to select the LLM.":::
+
+1. Enter your **OpenAI** or **Azure OpenAI** credentials based on the service you select. Select **Enter**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/azure-open-api-key-optional.png" alt-text="Screenshot shows the location to enter Azure open API key.":::
+
+1. Select **Default folder**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/default-folder.png" alt-text="Screenshot shows the location app folder to save.":::
+
+ To change the default location, follow these steps:
+
+ 1. Select **Browse**.
+ 1. Select the location for the project workspace.
+ 1. Select **Select Folder**.
+
+1. Enter an application name for your app and then select the **Enter** key.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/application-name.png" alt-text="Screenshot shows the option to enter the suitable name.":::
+
+ You've successfully created your AI chat bot project workspace.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/ai-chatbot-project-output.png" alt-text="Screenshot shows the ai chatbot created and readme file is available.":::
+
+1. Under **EXPLORER**, go to the **env** > **.env.testtool.user** file.
+
+1. Update the following details:
+
+ * `SECRET_AZURE_OPENAI_API_KEY=<your-key>`
+ * `AZURE_OPENAI_ENDPOINT=<your-endpoint>`
+ * `AZURE_OPENAI_DEPLOYMENT_NAME=<your-deployment>`
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/env-testtool-user.png" alt-text="Screenshot shows the details updated in the env file.":::
+
+1. To debug your app, select the **F5** key or from the left pane, select **Run and Debug (Ctrl+Shift+D)** and then select **Debug in Test Tool (Preview)** from the dropdown list.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/debug-test-tool.png" alt-text="Screenshot shows the selection of debugging option from the list of options.":::
+
+Test Tool opens the bot in a webpage.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/basic-ai-chatbot-final-output.png" alt-text="Screenshot shows the bot response with basic AI chatbot." lightbox="../assets/images/teams-toolkit-v2/custom-copilot/chat-bot-output.png":::
+
+## Take a tour of the bot app source code
+
+# [JavaScript](#tab/javascript)
+
+| Folder | Contents |
+| - | - |
+| `.vscode` | Visual Studio Code files for debugging. |
+| `appPackage` | Templates for the Teams application manifest. |
+| `env` | Environment files. |
+| `infra` | Templates for provisioning Azure resources. |
+| `src` | The source code for the application. |
+|`teamsapp.yml`|This is the main Teams Toolkit project file. The project file defines the properties and configuration stage definitions. |
+|`teamsapp.local.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging.|
+|`teamsapp.testtool.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging in Teams App Test Tool.|
+|`src/index.js`| Sets up the bot app server.|
+|`src/adapter.js`| Sets up the bot adapter.|
+|`src/config.js`| Defines the environment variables.|
+|`src/prompts/chat/skprompt.txt`| Defines the prompt.|
+|`src/prompts/chat/config.json`| Configures the prompt.|
+|`src/app/app.js`| Handles business logic for the basic AI chatbot.|
+
+# [Python](#tab/python)
+
+| File | Contents |
+| - | - |
+| `.vscode` | Visual Studio Code files for debugging. |
+| `appPackage` | Templates for the Teams application manifest. |
+| `env` | Environment files. |
+| `infra` | Templates for provisioning Azure resources. |
+| `src` | The source code for the application. |
+|`teamsapp.yml`|This is the main Teams Toolkit project file. The project file defines the properties and configuration stage definitions. |
+|`teamsapp.local.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging.|
+|`teamsapp.testtool.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging in Teams App Test Tool.|
+|`src/app.py`| Hosts an aiohttp API server and exports an app module.|
+|`src/bot.py`| Handles business logic for the basic AI chatbot.|
+|`src/config.py`| Defines the environment variables.|
+|`src/prompts/chat/skprompt.txt`| Defines the prompt.|
+|`src/prompts/chat/config.json`| Configures the prompt.|
+++
+## How Teams AI chatbot works
+
+Teams AI library provides a flow to build an intelligent chatbot with AI capabilities as follows:
+
+* **TurnContext**: The turn context object provides information about the activity, such as the sender and receiver, the channel, and other data needed to process the activity.
+
+* **TurnState**: The turn state object, similar to a cookie, stores data for the current turn. Like the turn context, it's carried through the entire application logic, including the activity handlers and the AI system (see the sketch after this list).
+
+* **Authentication**: If user authentication is configured, Teams AI attempts to sign the user in. If the user is already signed in, the SDK retrieves the access token and continues. Otherwise, the SDK initiates the sign-in flow and ends the current turn.
+
+* **Activity Handlers**: Teams AI library executes a set of registered activity handlers, enabling you to handle several types of activities. The activity handler system is the primary method for implementing bot or message extension app logic. It's a set of methods and configurations that allow you to register callbacks, known as route handlers, which trigger based on the incoming activity. The incoming activity can be in the form of a message, message reaction, or virtually any interaction within the Teams app.
+
+* **AI system**: The AI system in Teams AI library is responsible for moderating input and output, generating plans, and executing them. It can be used as a standalone or routed to by the app object. The important concepts are as follows:
+
+  1. [**Prompt manager**](https://github.com/microsoft/teams-ai/blob/main/getting-started/CONCEPTS/PROMPTS.md): Prompts play a crucial role in communicating with and directing the behavior of Large Language Models (LLMs).
+ 1. [**Planner**](https://github.com/microsoft/teams-ai/blob/main/getting-started/CONCEPTS/PLANNER.md): The planner receives the user's request, which is in the form of a prompt or prompt template, and returns a plan to fulfill it. This is achieved by using AI to mix and match atomic functions, known as actions, that are registered to the AI system. These actions are recombined into a series of steps that complete a goal.
+ 1. [**Actions**](https://github.com/microsoft/teams-ai/blob/main/getting-started/CONCEPTS/ACTIONS.md): An action is an atomic function that is registered to the AI system.
+
+* **AfterTurn handler**: After the activity handler or AI system runs, Teams AI library executes an `afterTurn` handler. The handler allows you to perform an action after the turn. If it returns `true`, the SDK saves the turn state to storage.
+
+* **Respond to user**: Teams AI library saves the state and the bot can send the responses to the user.
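+
+Here's a minimal sketch of how turn state and the `afterTurn` handler fit together; the `count` property is a hypothetical value stored in conversation-scoped state, and `app` is the template's application object:
+
+```typescript
+// Count the turns in a conversation; `count` is a hypothetical property.
+app.message("/count", async (context, state) => {
+  const count = (state.conversation.count ?? 0) + 1;
+  state.conversation.count = count;
+  await context.sendActivity(`Turn count: ${count}`);
+});
+
+// Runs after the activity handler or AI system finishes. Returning true
+// tells the SDK to save the turn state to storage.
+app.turn("afterTurn", async (context, state) => {
+  return true;
+});
+```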
+
+## Customize basic AI chatbot
+
+You can add customizations on top of the basic app to build complex scenarios as follows:
+
+1. **Customize prompt**: Prompts play a crucial role in communicating with and directing the behavior of LLMs. They serve as inputs or queries that users can provide to elicit specific responses from a model. Here's a prompt that asks the LLM for name suggestions:
+
+ **Request**
+ ```
+ Give me 3 name suggestions for my pet golden retriever.
+ ```
+ **Response**
+ ```
+ Some possible name suggestions for a pet golden retriever are:
+ - Bailey
+ - Sunny
+ - Cooper
+ ```
+
+   To use a project generated with Teams Toolkit, author the prompts in the `src/prompts/chat/skprompt.txt` file. The prompts written in this file are inserted into the prompt used to instruct the LLM. Teams AI library defines the following syntax that you can use in the prompt text:
+
+ # [Syntax 1](#tab/syntax1)
+
+ 1. `{{ $[scope].property }}`: Teams AI library renders the value of a property that is scoped and defined within the turn state. It defines three such scopes: temp, user, and conversation. If no scope is specified, by default, the library uses the temp scope.
+
+ 1. The `{{$[scope].property}}` is used in the following way:
+
+ # [JavaScript](#tab/javascript1)
+
+ 1. In the `src/app/turnState.ts` file, define your temp state, user state, conversation state, and app turn state. If the `turnState.ts` file doesn't exist in your project, create it under `src/app`.
+
+ ```javascript
+ import { DefaultConversationState, DefaultTempState, DefaultUserState, TurnState } from "@microsoft/teams-ai";
+
+ export interface TempState extends DefaultTempState { }
+ export interface UserState extends DefaultUserState { }
+ export interface ConversationState extends DefaultConversationState {
+ tasks: Record<string, Task>;
+ }
+
+ export interface Task {
+ Title: string;
+ description: string;
+ }
+
+ export type ApplicationTurnState = TurnState<ConversationState, UserState, TempState>;
+ ```
+
+ 1. In the `src/app/app.ts` file, use app turn state to initialize the app.
+
+ ```javascript
+ const storage = new MemoryStorage();
+ const app = new Application<ApplicationTurnState>({
+ storage,
+ ai: {
+ planner,
+ },
+ });
+ ```
+
+ 1. In the `src/prompts/chat/skprompt.txt` file, use the scoped state property such as `{{$conversation.tasks}}`.
+
+ # [Python](#tab/python1)
+
+ 1. In the `src/state.py` file, define your temp state, user state, conversation state, and app turn state.
+
+ ```python
+ from teams.state import TempState, ConversationState, UserState, TurnState
+
+ class AppConversationState(ConversationState):
+ tasks: Dict[str, Task] # Your data definition here
+
+ @classmethod
+ async def load(cls, context: TurnContext, storage: Optional[Storage] = None) -> "AppConversationState":
+ state = await super().load(context, storage)
+ return cls(**state)
+
+ class AppTurnState(TurnState[AppConversationState, UserState, TempState]):
+ conversation: AppConversationState
+
+ @classmethod
+ async def load(cls, context: TurnContext, storage: Optional[Storage] = None) -> "AppTurnState":
+ return cls(
+ conversation=await AppConversationState.load(context, storage),
+ user=await UserState.load(context, storage),
+ temp=await TempState.load(context, storage),
+ )
+ ```
+
+    1. In the `src/bot.py` file, use the app turn state to initialize the app.
+
+ ```python
+ from state import AppTurnState
+
+ app = Application[AppTurnState](...)
+ ```
+
+ 1. In the `src/prompts/chat/skprompt.txt` file, use the scoped state property such as `{{$conversation.tasks}}`.
+
+
+ # [Syntax 2](#tab/syntax2)
+
+ 1. `{{ functionName }}`: To call an external function and embed the result in your text, use the `{{ functionName }}` syntax. For example, if you have a function called `getTasks` that can return a list of task items, you can embed the results into the prompt:
+
+ # [JavaScript](#tab/javascript2)
+
+ 1. Register the function in the prompt manager in the `src/app/app.ts` file:
+
+ ```typescript
+ prompts.addFunction("getTasks", async (context: TurnContext, memory: Memory, functions: PromptFunctions, tokenizer: Tokenizer, args: string[]) => {
+ return ...
+ });
+ ```
+
+   1. Use the function in `src/prompts/chat/skprompt.txt`: `Your tasks are: {{ getTasks }}`.
+
+ # [Python](#tab/python2)
+
+ 1. Register the function into prompt manager in the `src/bot.py` file:
+
+ ```python
+ @prompts.function("getTasks")
+ async def get_tasks(
+ _context: TurnContext,
+ state: MemoryBase,
+ _functions: PromptFunctions,
+ _tokenizer: Tokenizer,
+ _args: List[str],
+ ):
+ return state.get("conversation.tasks")
+ ```
+
+   1. Use the function in the `src/prompts/chat/skprompt.txt` file: `Your tasks are: {{ getTasks }}`.
+
+
+ # [Syntax 3](#tab/syntax3)
+
+  `{{ functionName arg1 arg2 }}`: This syntax enables you to call the specified function with the provided arguments and renders the result. Similar to calling a function, you can:
+
+ 1. Register the function in prompt
+ * For JavaScript language, register it in `src/app/app.ts`.
+ * For Python language, register it in `src/bot.py`.
+
+   1. Use the function in `src/prompts/chat/skprompt.txt`: `Your task is: {{ getTasks taskTitle }}`.
+
+
+
+1. **Customize user input**: Teams AI library allows you to augment the prompt sent to the LLM by including user input. When including user input, specify it in a prompt configuration file by setting `completion.include_input` to `true` in `src/prompts/chat/config.json`. You can also configure the maximum number of user input tokens by changing `completion.max_input_tokens`, which is useful when you want to limit the length of user input to avoid exceeding the token limit.
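+
+   For example, a minimal excerpt of `src/prompts/chat/config.json` with these two settings (the values here are illustrative):
+
+   ```json
+   {
+     "completion": {
+       "include_input": true,
+       "max_input_tokens": 2800
+     }
+   }
+   ```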
+
+1. **Customize conversation history**: The SDK automatically manages the conversation history, and you can customize as follows:
+
+   * In `src/prompts/chat/config.json`, configure `completion.include_history`. If `true`, the history is inserted into the prompt to make the LLM aware of the conversation context.
+
+ * Maximum number of history messages. Configure `max_history_messages` when initializing `PromptManager`.
+
+ # [JavaScript](#tab/javaScript3)
+
+ ```javascript
+
+ const prompts = new PromptManager({
+ promptsFolder: path.join(__dirname, "../prompts"),
+ max_history_messages: 3,
+ });
+ ```
+
+ # [Python](#tab/python3)
+
+ ```python
+
+ prompts = PromptManager(PromptManagerOptions(
+ prompts_folder=f"{os.getcwd()}/prompts",
+ max_history_messages=3,
+ ))
+
+ ```
+
+
+   * Maximum number of history tokens. Configure `max_conversation_history_tokens` when initializing `PromptManager`.
+
+ # [JavaScript](#tab/javaScript4)
+
+ ```javascript
+
+ const prompts = new PromptManager({
+ promptsFolder: path.join(__dirname, "../prompts"),
+ max_conversation_history_tokens: 1000,
+ });
+
+ ```
+
+ # [Python](#tab/python4)
+
+ ```python
+
+ prompts = PromptManager(PromptManagerOptions(
+ prompts_folder=f"{os.getcwd()}/prompts",
+ max_conversation_history_tokens=1000,
+ ))
+
+ ```
+
+
+1. **Customize model type**: You can use a specific model for a prompt. In the `src/prompts/chat/config.json` file, configure `completion.model`. If no model is configured for the prompt, the default model configured in `OpenAIModel` is used.
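+
+   For example, an excerpt of `src/prompts/chat/config.json` that pins this prompt to a specific model (the model name here is illustrative):
+
+   ```json
+   {
+     "completion": {
+       "model": "gpt-4"
+     }
+   }
+   ```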
+
+   The models that the SDK supports are as follows:
+
+ | Model | Supported |
+ | | |
+ | gpt-3.5-turbo | Supported |
+ | gpt-3.5-turbo-16k | Supported |
+ | gpt-3.5-turbo-instruct | Not supported from 1.1.0 |
+ | gpt-4 | Supported |
+ | gpt-4-32k | Supported |
+ | gpt-4-vision | Supported |
+ | gpt-4-turbo | Supported |
+   | DALL·E | Not supported |
+ | Whisper | Not supported |
+ | TTS | Not supported |
+
+1. **Customize model parameters**:
+
+ In the `src/prompts/chat/config.json` file, configure the model parameters under completion as follows:
+
+ * **Max_tokens**: The maximum number of tokens to generate.
+   * **Temperature**: The model's `temperature` as a number between 0 and 2.
+   * **Top_p**: The model's `top_p` as a number between 0 and 1.
+   * **Presence_penalty**: The model's `presence_penalty` as a number between -2 and 2.
+   * **Frequency_penalty**: The model's `frequency_penalty` as a number between -2 and 2.
+   * **Stop_sequences**: An array of stop sequences that, when hit, stop generation.
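+
+   For example, an excerpt of the `completion` section in `src/prompts/chat/config.json`; the values here are illustrative:
+
+   ```json
+   {
+     "completion": {
+       "max_tokens": 1000,
+       "temperature": 0.9,
+       "top_p": 0.0,
+       "presence_penalty": 0.6,
+       "frequency_penalty": 0.0,
+       "stop_sequences": []
+     }
+   }
+   ```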
+
+## See also
+
+[Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)
+
platform Build An AI Agent In Teams https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/toolkit/build-an-AI-agent-in-Teams.md
+
+ Title: Build an AI Agent in Teams
+
+description: In this module, learn how to build AI Agent using Teams AI library.
+
+ms.localizationpriority: high
+ Last updated: 05/21/2024
+# Build an AI agent bot in Teams
+
+An AI agent in Microsoft Teams is a conversational chatbot that uses Large Language Models (LLMs) to interact with the users. It understands user intentions and selects a sequence of actions, enabling the chatbot to complete common tasks.
+
+## Prerequisites
+
+| Install | For using... |
+| | |
+| [Visual Studio Code](https://code.visualstudio.com/download) | JavaScript, TypeScript, or Python build environments. Use the latest version. |
+| [Teams Toolkit](https://marketplace.visualstudio.com/items?itemName=TeamsDevApp.ms-teams-vscode-extension) | Microsoft Visual Studio Code extension that creates a project scaffolding for your app. Use the latest version.|
+| [Node.js](https://nodejs.org/en/download/) | Back-end JavaScript runtime environment. For more information, see [Node.js version compatibility table for project type](~/toolkit/build-environments.md#nodejs-version-compatibility-table-for-project-type).|
+| [Microsoft Teams](https://www.microsoft.com/microsoft-teams/download-app) | Microsoft Teams to collaborate with everyone you work with through apps for chat, meetings, and calls all in one place.|
+| [Azure OpenAI](https://oai.azure.com/portal)| First create your OpenAI API key to use OpenAI's Generative Pretrained Transformer (GPT). If you want to host your app or access resources in Azure, you must create an Azure OpenAI service.|
+
+## Create a new AI agent project
+
+1. Open **Visual Studio Code**.
+
+1. Select the Teams Toolkit :::image type="icon" source="~/assets/images/teams-toolkit-v2/teams-toolkit-sidebar-icon.PNG" border="false"::: icon in the Visual Studio Code **Activity Bar**.
+
+1. Select **Create a New App**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/create-new-app.png" alt-text="Screenshot shows the location of the Create New Project link in the Teams Toolkit sidebar.":::
+
+1. Select **Custom Copilot**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/custom-copilot.png" alt-text="Screenshot shows the option to select custom Copilot as the new project to create.":::
+
+1. Select **AI Agent**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/ai-agent.png" alt-text="Screenshot shows the option to select app features using AI library list.":::
+
+1. To build an app, select any of the following options:
+
+ # [Build new](#tab/buildnew)
+
+ 1. Select **Build New**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/build-new.png" alt-text="Screenshot shows the option to select the available AI agents.":::
+
+ 1. Select **JavaScript**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/language-javascript.png" alt-text="Screenshot shows the option to select the programming language.":::
+
+    1. By default, the **OpenAI** service is selected; you can optionally enter the credentials to access OpenAI. Select **Enter**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/azure-open-api-key-optional.png" alt-text="Screenshot shows the location to enter Azure open API key.":::
+
+ 1. Select **Default folder**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/default-folder.png" alt-text="Screenshot shows the location app folder to save.":::
+
+ To change the default location, follow these steps:
+
+ 1. Select **Browse**.
+ 1. Select the location for the project workspace.
+ 1. Select **Select Folder**.
+
+ 1. Enter an app name for your app and then select the **Enter** key.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/application-name.png" alt-text="Screenshot shows the option to enter the suitable name.":::
+
+ You've successfully created your AI agent bot.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/ai-agent-project-output-biuld-new.png" alt-text="Screenshot shows the ai chatbot created and readme file is available.":::
+
+ 1. Under **EXPLORER**, go to the **env** > **.env.testtool.user** file.
+
+ 1. Update the following values:
+ * `SECRET_AZURE_OPENAI_API_KEY=<your-key>`
+ * `AZURE_OPENAI_ENDPOINT=<your-endpoint>`
+ * `AZURE_OPENAI_DEPLOYMENT_NAME=<your-deployment>`
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/env-testtool-user.png" alt-text="Screenshot shows the details updated in the env file.":::
+
+ 1. To debug your app, select the **F5** key or from the left pane, select **Run and Debug (Ctrl+Shift+D)** and then select **Debug in Test Tool (Preview)** from the dropdown list.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/debug-test-tool.png" alt-text="Screenshot shows the selection of debugging option from the list of options.":::
+
+ Test Tool opens the bot in a webpage.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/ai-agent-build-new-final-output.png" alt-text="Screenshot shows the final output of AI agent build new bot." lightbox="../assets/images/teams-toolkit-v2/custom-copilot/ai-agent-new-output.png":::
+
+ ## Take a tour of the bot app source code
+
+ | Folder | Contents |
+ | - | - |
+ | `.vscode` | Visual Studio Code files for debugging. |
+ | `appPackage` | Templates for the Teams app manifest. |
+ | `env` | Environment files. |
+ | `infra` | Templates for provisioning Azure resources. |
+ | `src` | The source code for the app. |
+
+ The following files can be customized and they demonstrate an example of implementation to get you started:
+
+ | File | Contents |
+ | - | - |
+ |`src/index.js`| Sets up the bot app server.|
+ |`src/adapter.js`| Sets up the bot adapter.|
+ |`src/config.js`| Defines the environment variables.|
+ |`src/prompts/planner/skprompt.txt`| Defines the prompt.|
+ |`src/prompts/planner/config.json`| Configures the prompt.|
+ |`src/prompts/planner/actions.json`| Defines the actions.|
+ |`src/app/app.js`| Handles business logic for the AI agent.|
+ |`src/app/messages.js`| Defines the message activity handlers.|
+ |`src/app/actions.js`| Defines the AI actions.|
+
+ The following are Teams Toolkit specific project files. For more information on how Teams Toolkit works, see [a complete guide on GitHub](https://github.com/OfficeDev/TeamsFx/wiki/Teams-Toolkit-Visual-Studio-Code-v5-Guide#overview):
+
+ | File | Contents |
+ | - | - |
+ |`teamsapp.yml`|This is the main Teams Toolkit project file. The project file defines the properties and configuration stage definitions. |
+ |`teamsapp.local.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging.|
+ |`teamsapp.testtool.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging in Teams App Test Tool.|
+
+ # [Assistants API](#tab/assistantsapi)
+
+ 1. Select **Build with Assistants API Preview**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/build-assistants-api.png" alt-text="Screenshot shows the option to select the available AI agents.":::
+
+ 1. Select **JavaScript**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/language-javascript.png" alt-text="Screenshot shows the option to select the programming language.":::
+
+ > [!NOTE]
+ >
+    > * If you select **Build with Assistants API Preview**, note that the Azure OpenAI service hasn't provided support for the Assistants API yet.
+ > * The `AssistantsPlanner` in Teams AI Library is in preview.
+
+ 1. Select **Azure OpenAI** or **OpenAI**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/azure-openai.png" alt-text="Screenshot shows the option to select the LLM.":::
+
+ 1. Enter your **Azure OpenAI** or **OpenAI** credentials based on the service you select. Select **Enter**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/azure-open-api-key-optional.png" alt-text="Screenshot shows the location to enter Azure open API key.":::
+
+ 1. Select **Enter**.
+
+ 1. Select **Default folder**.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/default-folder.png" alt-text="Screenshot shows the location app folder to save.":::
+
+ To change the default location, follow these steps:
+
+ 1. Select **Browse**.
+ 1. Select the location for the project workspace.
+ 1. Select **Select Folder**.
+
+ 1. Enter an app name for your app and then select the **Enter** key.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/application-name.png" alt-text="Screenshot shows the option to enter the suitable name.":::
+
+ You've successfully created your AI agent bot.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/ai-agent-project-output-biuld-assistant-api.png" alt-text="Screenshot shows the ai chatbot created and readme file is available.":::
+
+ **Create your own OpenAI Assistant**
+
+ Before running or debugging your bot, follow these steps to set up your own [OpenAI Assistant](https://platform.openai.com/docs/assistants/overview).
+
+    **If you haven't set up an Assistant yet**
+
+    * This app template provides the script `src/creator.js` to help create an assistant. You can change the instructions and settings in the script to customize the assistant.
+
+ * After creation, you can change and manage your assistants on [OpenAI](https://platform.openai.com/assistants).
+
+ 1. Open terminal and run the following command to install all dependency packages:
+ ```
+ > npm install
+ ```
+    1. Run the following command to create the assistant:
+ ```
+ > npm run assistant:create -- <your-openai-api-key>
+ ```
+    1. You'll get an output such as **Created a new assistant with an ID of: asst_xxx...**.
+
+    1. Go to **Visual Studio Code**. Under **EXPLORER**, select the **env** > **.env.*.user** file.
+
+ 1. Update the following values:
+ * `SECRET_OPENAI_API_KEY=<your-openai-api-key>`
+ * `OPENAI_ASSISTANT_ID=<your-openai-assistant-id>`
+
+
+ 1. To debug your app, select the **F5** key or from the left pane, select **Run and Debug (Ctrl+Shift+D)** and then select **Debug in Test Tool (Preview)** from the dropdown list.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/debug-test-tool.png" alt-text="Screenshot shows the selection of debugging option from the list of options.":::
+
+ Test Tool opens the bot in a webpage.
+
+ :::image type="content" source="../assets/images/teams-toolkit-v2/custom-copilot/ai-agent-build-assistant-api-final-output.png" alt-text="Screenshot shows the final output of AI agent build with assistants API bot." lightbox="../assets/images/teams-toolkit-v2/custom-copilot/ai-agent-assistant-api-output.png":::
+
+ ## Take a tour of the bot app source code
+
+ | Folder | Contents |
+ | - | - |
+ | `.vscode` | Visual Studio Code files for debugging. |
+ | `appPackage` | Templates for the Teams application manifest. |
+ | `env` | Environment files. |
+ | `infra` | Templates for provisioning Azure resources. |
+ | `src` | The source code for the application. |
+
+ The following files can be customized and they demonstrate an example of implementation to get you started:
+
+ | File | Contents |
+ | - | - |
+ |`src/index.js`| Sets up the bot app server.|
+ |`src/adapter.js`| Sets up the bot adapter.|
+ |`src/config.js`| Defines the environment variables.|
+ |`src/creator.js`| One-time tool to create OpenAI Assistant.|
+ |`src/app/app.js`| Handles business logic for the AI agent.|
+ |`src/app/messages.js`| Defines the message activity handlers.|
+ |`src/app/actions.js`| Defines the AI actions.|
+
+ The following are Teams Toolkit specific project files. For more information on how Teams Toolkit works, see [a complete guide on GitHub](https://github.com/OfficeDev/TeamsFx/wiki/Teams-Toolkit-Visual-Studio-Code-v5-Guide#overview):
+
+ | File | Contents |
+ | - | - |
+ |`teamsapp.yml`|This is the main Teams Toolkit project file. The project file defines the properties and configuration stage definitions. |
+ |`teamsapp.local.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging.|
+ |`teamsapp.testtool.yml`|This overrides `teamsapp.yml` with actions that enable local execution and debugging in Teams App Test Tool.|
+
+
+
+## Create an AI agent using Teams AI library
+
+### Build new
+
+Teams AI library provides a comprehensive flow that simplifies the process of building your own AI agent. The important concepts that you need to understand are as follows:
+
+* [**Actions**](https://github.com/microsoft/teams-ai/blob/main/getting-started/CONCEPTS/ACTIONS.md): An action is an atomic function that is registered to the AI system.
+* [**Planner**](https://github.com/microsoft/teams-ai/blob/main/getting-started/CONCEPTS/PLANNER.md): The planner receives the user's request, which is in the form of a prompt or prompt template, and returns a plan to fulfill it. This is achieved by using AI to mix and match atomic functions, known as actions, that are registered to the AI system. These actions are recombined into a series of steps that complete a goal.
+* [**Action Planner**](https://github.com/microsoft/teams-ai/blob/main/getting-started/CONCEPTS/ACTION-PLANNER.md): Action Planner uses an LLM to generate plans. It can trigger parameterized actions and send text based responses to the user.
+
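+A minimal sketch of wiring these concepts together with `ActionPlanner`; the Azure OpenAI key, endpoint, and deployment name are placeholders:
+
+```typescript
+import { ActionPlanner, OpenAIModel, PromptManager } from "@microsoft/teams-ai";
+import path from "path";
+
+// The model the planner calls to generate plans.
+const model = new OpenAIModel({
+  azureApiKey: "<your-aoai-key>",
+  azureEndpoint: "<your-aoai-endpoint>",
+  azureDefaultDeployment: "<your-deployment>",
+});
+
+// Prompt templates live under src/prompts; "planner" is the prompt folder
+// name used by this template.
+const prompts = new PromptManager({
+  promptsFolder: path.join(__dirname, "../prompts"),
+});
+
+const planner = new ActionPlanner({
+  model,
+  prompts,
+  defaultPrompt: "planner",
+});
+```
+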
+### Build with Assistants API
+
+The Assistants API from OpenAI simplifies the development effort of creating an AI agent. OpenAI as a platform offers prebuilt tools such as Code Interpreter, Knowledge Retrieval, and Function Calling that simplify the code you need to write for common scenarios.
+
+ | Comparison | Build new | Build with Assistants API |
+ | - | - | - |
+ | Cost | Costs only for LLM services | Costs for LLM services, plus extra costs if you use tools in the Assistants API. |
+ | Dev effort | Medium | Relatively small |
+ | LLM services | Azure OpenAI or OpenAI | OpenAI only |
+ | Example implementations in template | This app template can chat and help users manage their tasks. | This app template uses the Code Interpreter tool to solve math problems and the Function Calling tool to get city weather. |
+ | Limitations | NA | The Teams AI library doesn't support the Knowledge Retrieval tool. |
+
+## Customize the app template
+
+### Customize prompt augmentation
+
+The SDK provides functionality to augment the prompt.
+
+* The actions, which are defined in the `src/prompts/planner/actions.json` file, are inserted into the prompt. This allows the LLM to be aware of the available functions.
+* An internal piece of prompt text is inserted into the prompt to instruct the LLM to determine which functions to call based on the available functions. This prompt text instructs the LLM to generate the response in a structured JSON format.
+* The SDK validates the LLM response and lets the LLM correct or refine the response if it's in the wrong format.
+
+In the `src/prompts/planner/config.json` file, configure `augmentation.augmentation_type`. The options are:
+
+* `Sequence`: Suitable for tasks that require multiple steps or complex logic.
+* `Monologue`: Suitable for tasks that require natural language understanding and generation, and more flexibility and creativity.
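+
+For example, an excerpt of `src/prompts/planner/config.json` that selects one of these options (`sequence` shown here):
+
+```json
+{
+  "augmentation": {
+    "augmentation_type": "sequence"
+  }
+}
+```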
+
+### Add functions for Build new
+
+ * In the `src/prompts/planner/actions.json` file, define your actions schema.
+
+ ```json
+ [
+ ...
+ {
+ "name": "myFunction",
+ "description": "The function description",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "parameter1": {
+ "type": "string",
+ "description": "The parameter1 description"
+            }
+ },
+ "required": ["parameter1"]
+ }
+ }
+ ]
+ ```
+
+ * In the `src/app/actions.ts` file, define the action handlers. A more complete hypothetical example follows this list.
+
+    ```typescript
+    import { TurnContext } from "botbuilder";
+    import { TurnState } from "@microsoft/teams-ai";
+
+    // Define your own function
+    export async function myFunction(context: TurnContext, state: TurnState, parameters: any): Promise<string> {
+ // Implement your function logic
+ ...
+ // Return the result
+ return "...";
+ }
+ ```
+
+ * In the `src/app/app.ts` file, register the actions.
+
+ ```typescript
+ app.ai.action("myFunction", myFunction);
+ ```
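+
+Putting the pieces together, a hypothetical handler for the `myFunction` schema above might look like this. The parameter shape and the returned string are placeholders; only the handler signature and the `app.ai.action` registration come from the library:
+
+```typescript
+import { TurnContext } from "botbuilder";
+import { TurnState } from "@microsoft/teams-ai";
+
+// Hypothetical handler: consumes the validated parameter and returns a
+// string result that is fed back to the planner.
+export async function myFunction(
+  context: TurnContext,
+  state: TurnState,
+  parameters: { parameter1: string }
+): Promise<string> {
+  return `Handled myFunction with parameter1=${parameters.parameter1}`;
+}
+```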
+
+### Customize assistant creation
+
+The `src/creator.ts` file creates a new OpenAI Assistant. You can customize the assistant by updating parameters such as the instructions, model, tools, and functions.
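+
+As a rough sketch of what such a creation script does, assuming the official `openai` Node.js package and an `OPENAI_API_KEY` environment variable (the name, instructions, and model below are placeholders):
+
+```typescript
+import OpenAI from "openai";
+
+const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
+
+async function createAssistant(): Promise<void> {
+  // Customize the instructions, model, and tools to fit your scenario.
+  const assistant = await openai.beta.assistants.create({
+    name: "Math Tutor",
+    instructions: "You solve math problems by writing and running code.",
+    tools: [{ type: "code_interpreter" }],
+    model: "gpt-4",
+  });
+  console.log(`Created assistant ${assistant.id}`);
+}
+
+createAssistant().catch(console.error);
+```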
+
+### Build with Assistants API: add functions
+
+When the assistant provides a function and its arguments for execution, the SDK matches the function to a preregistered action, invokes the action handler, and submits the outcome back to the assistant. To integrate your functions, register the actions within the app.
+
+ * In the `src/app/actions.ts` file, define the action handlers.
+
+    ```typescript
+    import { TurnContext } from "botbuilder";
+    import { TurnState } from "@microsoft/teams-ai";
+
+    // Define your own function
+    export async function myFunction(context: TurnContext, state: TurnState, parameters: any): Promise<string> {
+ // Implement your function logic
+ ...
+ // Return the result
+ return "...";
+ }
+ ```
+
+ * In the `src/app/app.ts` file, register the actions.
+
+ ```typescript
+ app.ai.action("myFunction", myFunction);
+ ```
+
+## See also
+
+[Teams AI library](../bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)
platform Deploy Teams App To Container Service https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/toolkit/deploy-Teams-app-to-container-service.md
+
+ Title: Deploy Teams app to container service
+
+description: In this module, learn how to deploy a Teams App to a Container Service.
+
+ms.localizationpriority: medium
+ Last updated : 04/16/2024
+# Deploy Teams app to container service
+
+You can deploy a Teams bot or tab app to Azure Container Apps, Azure Kubernetes Service (AKS), or an on-premises Kubernetes cluster.
+
+## Prerequisites
+
+Download the [sample Teams bot](https://github.com/OfficeDev/TeamsFx-Samples/tree/dev/bot-sso-docker) or the [sample Teams tab app](https://github.com/OfficeDev/TeamsFx-Samples/tree/dev/hello-world-tab-docker), which offers a ready-to-use experience for Azure Container Apps development. You can make a few configuration changes and deploy it to AKS or an on-premises Kubernetes Cluster.
+
+Before you get started, ensure that you have the following tools:
+
+* Azure account.
+
+* Azure Command-Line Interface (CLI) for Azure Container Apps or AKS deployment.
+
+> [!NOTE]
+> The commands in the article are based on Git Bash. If you're using any other interface, update the commands as required.
+
+## Deploy to Azure Container Apps
+
+Azure Container Apps is a fully managed service that enables you to run containerized applications in the cloud. It's an ideal choice if you don't need direct access to all native Kubernetes APIs and cluster management, and you prefer a fully managed experience based on best practices.
+
+With the help of the sample applications, you can run the provision and deploy commands in Teams Toolkit. Teams Toolkit creates an Azure Container Registry and Azure Container Apps for you, builds your app into a container image, and deploys it to Azure Container Apps.
+
+The `provision` command creates and configures the following resources:
+
+* A Teams app with tab or bot capability.
+* An Azure Container Registry to host your container image.
+* An Azure Container Apps environment and an Azure Container Apps instance to host your app.
+* A Microsoft Entra App for authentication.
+
+In the sample Teams bot, the `provision` command also creates an Azure Bot Service to channel communication between the Teams client and Azure Container Apps.
+
+The `deploy` command executes the following actions (see the CLI sketch after this list):
+
+* Builds the app into a container image.
+* Pushes the container image to Azure Container Registry.
+* Deploys the image to Azure Container Apps.
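+
+If you prefer a terminal over the Teams Toolkit extension, the equivalent lifecycle commands look roughly like this with the TeamsFx CLI. This is a sketch; the exact command name (`teamsfx` versus the newer `teamsapp`) and flags depend on the CLI version you have installed:
+
+```bash
+# Provision the Azure resources defined in teamsapp.yml for the dev environment
+teamsfx provision --env dev
+
+# Build the container image, push it to the registry, and deploy it
+teamsfx deploy --env dev
+```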
+
+## Deploy Teams bot to Azure Kubernetes Service
+
+AKS is a managed container orchestration service provided by Azure. With AKS, you get a fully managed Kubernetes experience within Azure.
+
+### Architecture
+
+The Teams backend server interacts with your bot through the Azure Bot Service. This service requires your bot to be reachable through a public HTTPS endpoint. To set up, deploy an ingress controller on your Kubernetes cluster and secure it with a TLS certificate.
+
+You can use Microsoft Entra ID to authenticate your bot with Azure Bot Service. Create a Kubernetes secret that includes the app ID and password and integrate the secret into your container's runtime configuration.
+
+### Set up ingress with HTTPS on AKS
+
+1. Ensure that your AKS is connected to your Azure Container Registry, which hosts your container images. For more information, see [use the Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli/).
+
+1. Run the following commands to install the ingress controller and certificate manager:
+
+ ```bash
+ NAMESPACE=teams-bot
+
+ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+ helm repo update
+ helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $NAMESPACE \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.healthStatus=true \
+ --set controller.service.externalTrafficPolicy=Local \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
+
+ helm repo add jetstack https://charts.jetstack.io
+ helm repo update
+ helm install cert-manager jetstack/cert-manager --namespace $NAMESPACE --set installCRDs=true --set nodeSelector."kubernetes\.io/os"=linux
+ ```
+
+ > [!NOTE]
+ > You can also follow the instructions available in [create an unmanaged ingress controller](/azure/aks/ingress-basic?tabs=azure-cli) and [use TLS with Let's encrypt certificates](/azure/aks/ingress-tls#use-tls-with-lets-encrypt-certificates) to set up ingress and TLS certificates on your Kubernetes cluster.
+
+1. Run the following command to update the DNS for the ingress public IP and get the ingress endpoint:
+
+ ```bash
+ > kubectl get services --namespace $NAMESPACE -w ingress-nginx-controller
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
+ ingress-nginx-controller LoadBalancer $CLUSTER_IP $EXTERNAL_IP 80:32514/TCP,443:32226/TCP
+
+ > PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$EXTERNAL_IP')].[id]" --output tsv)
+ > az network public-ip update --ids $PUBLICIPID --dns-name $DNSLABEL
+ > az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --output tsv
+
+ $DNSLABEL.$REGION.cloudapp.azure.com
+ ```
+
+### Provision resources with Teams Toolkit
+
+You can use the `provision` command in Teams Toolkit to create a Teams app with bot capability, incorporate the Azure Bot Service, and add the Microsoft Entra ID for authentication.
+
+To provision resources with Teams Toolkit, follow these steps:
+
+1. Open the sample app that you've downloaded earlier.
+
+1. Go to the `env/.env.${envName}` file and update the `BOT_DOMAIN` value with your FQDN.
+
+1. Go to the `teamsapp.yml` file and update the following `arm/deploy` action to ensure that Teams Toolkit provisions an Azure Bot Service during the execution of the `provision` command:
+
+    ```yml
+ - uses: arm/deploy
+ with:
+ subscriptionId: ${{AZURE_SUBSCRIPTION_ID}}
+ resourceGroupName: ${{AZURE_RESOURCE_GROUP_NAME}}
+ templates:
+ - path: ./infra/botRegistration/azurebot.bicep
+ parameters: ./infra/botRegistration/azurebot.parameters.json
+ deploymentName: Create-resources-for-bot
+ bicepCliVersion: v0.9.1
+ ```
+
+1. Run the `provision` command in Teams Toolkit.
+
+1. After provisioning, locate the `BOT_ID` in the `env/.env.${envName}` file and the encrypted `SECRET_BOT_PASSWORD` in the `env/.env.${envName}.user` file. To obtain the actual value of `BOT_PASSWORD`, select the **Decrypt secret** annotation.
+
+1. To create a Kubernetes secret that contains `BOT_ID` and `BOT_PASSWORD`, save the key-value pairs in the `./deploy/.env.dev-secrets` file (an example appears after this step) and execute the following command to provision the secret:
+
+ ```bash
+ kubectl create secret generic dev-secrets --from-env-file ./deploy/.env.dev-secrets -n $NAMESPACE
+ ```
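+
+For reference, `./deploy/.env.dev-secrets` is a plain key-value file; the values below are placeholders for the values you obtained after decrypting the secret:
+
+```bash
+BOT_ID=<your-bot-id>
+BOT_PASSWORD=<your-bot-password>
+```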
+
+### Apply the deployment
+
+The sample includes a deployment file, `deploy/sso-bot.yaml`, for your reference.
+
+1. Update the following placeholders:
+
+ 1. `<image>`: Update your image. For example, `myacr.azurecr.io/sso-bot:latest`.
+
+ 1. `<hostname>`: Update your ingress FQDN.
+
+1. To apply `deploy/sso-bot.yaml`, run the following command:
+
+ ```bash
+ kubectl apply -f deploy/sso-bot.yaml -n $NAMESPACE
+ ```
+
+1. Go to Visual Studio Code.
+
+1. In the **Run and Debug** panel, select the **Launch Remote** configuration.
+
+1. To preview the Teams bot application deployed on AKS, select **Start Debugging (F5)**.
+
+## Deploy Teams bot to an on-premises Kubernetes Cluster
+
+You can deploy a Teams bot to your personal Kubernetes cluster or to a Kubernetes service from a different cloud provider by following steps similar to those used to deploy a Teams bot to AKS.
+
+### Architecture
+
+The Teams backend server interacts with your bot through the Azure Bot Service. This service requires your bot to be reachable through a public HTTPS endpoint. To set up, deploy an ingress controller on your Kubernetes cluster and secure it with a TLS certificate.
+
+You can use Microsoft Entra ID to authenticate your bot with Azure Bot Service. Create a Kubernetes secret that includes the app ID and password and integrate the secret into your container's runtime configuration.
+
+### Provision resources with Teams Toolkit
+
+You can use the `provision` command in Teams Toolkit to create a Teams app with bot capability, incorporate the Azure Bot Service, and add the Microsoft Entra ID for authentication.
+
+To provision resources with Teams Toolkit, follow these steps:
+
+1. Open the sample app that you've downloaded earlier.
+
+1. Go to the `env/.env.${envName}` file and update the `BOT_DOMAIN` value with your FQDN.
+
+1. Go to the `teamsapp.yml` file and update the following `arm/deploy` action to ensure that Teams Toolkit provisions an Azure Bot Service during the execution of the `provision` command:
+
+ ```yml
+ - uses: arm/deploy
+ with:
+ subscriptionId: ${{AZURE_SUBSCRIPTION_ID}}
+ resourceGroupName: ${{AZURE_RESOURCE_GROUP_NAME}}
+ templates:
+ - path: ./infra/botRegistration/azurebot.bicep
+ parameters: ./infra/botRegistration/azurebot.parameters.json
+ deploymentName: Create-resources-for-bot
+ bicepCliVersion: v0.9.1
+ ```
+
+1. In the `teamsapp.yml` file, update the `botFramework/create` action during the provision stage. This action enables Teams Toolkit to create a bot registration with the appropriate messaging endpoint.
+
+   > [!NOTE]
+   > We recommend you use Azure Bot Service for channeling. If you don't have an Azure account and can't create an Azure Bot Service, you can create a bot registration.
+
+ ```yml
+ - uses: botFramework/create
+ with:
+ botId: ${{BOT_ID}}
+ name: <Bot display name>
+ messagingEndpoint: https://${{BOT_DOMAIN}}/api/messages
+ description: ""
+ channels:
+ - name: msteams
+ ```
+
+   You can remove the `arm/deploy` action in the `teamsapp.yml` file, as no Azure resources are needed.
+
+1. Run the `provision` command in Teams Toolkit.
+
+1. After provisioning, locate the `BOT_ID` in the `env/.env.${envName}` file and the encrypted `SECRET_BOT_PASSWORD` in the `env/.env.${envName}.user` file. To obtain the actual value of `BOT_PASSWORD`, select the **Decrypt secret** annotation.
+
+1. To create a Kubernetes secret that contains `BOT_ID` and `BOT_PASSWORD`, save the key-value pairs in the `./deploy/.env.dev-secrets` file and execute the following command to provision the secret:
+
+ ```bash
+ kubectl create secret generic dev-secrets --from-env-file ./deploy/.env.dev-secrets -n $NAMESPACE
+ ```
+
+### Apply the deployment
+
+The sample includes a deployment file, `deploy/sso-bot.yaml`, for your guidance.
+
+1. Update the following placeholders:
+
+ 1. `<image>`: Update your image. For example, `myacr.azurecr.io/sso-bot:latest`.
+
+ 1. `<hostname>`: Update your ingress FQDN.
+
+1. To apply `deploy/sso-bot.yaml`, run the following command:
+
+ ```bash
+ kubectl apply -f deploy/sso-bot.yaml -n $NAMESPACE
+ ```
+
+1. Go to Visual Studio Code.
+
+1. In the **Run and Debug** panel, select the **Launch Remote** configuration.
+
+1. To preview the Teams bot application deployed on your Kubernetes cluster, select **Start Debugging (F5)**.
+
+## Deploy Teams tab app to Kubernetes
+
+AKS serves as a managed container orchestration service offered by Azure. With AKS, you get a fully managed Kubernetes experience within Azure.
+
+Deploying a Teams tab app to AKS is similar to deploying a web app to AKS. However, because a Teams tab app requires an HTTPS connection, you need to own a domain and set up TLS ingress in your AKS cluster.
+
+You can also deploy a Teams tab app to your personal Kubernetes cluster or a Kubernetes service on different cloud platforms. This involves steps similar to those used when deploying on Azure Kubernetes Service.
+
+### Set up ingress with HTTPS on AKS
+
+1. Ensure that your AKS is already connected to your Azure Container Registry, which hosts your container images. For more information, see [Azure CLI](/azure/aks/learn/quick-kubernetes-deploy-cli).
+
+1. Run the following commands to install the ingress controller and certificate manager:
+
+    ```bash
+ NAMESPACE=teams-tab
+
+ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+ helm repo update
+ helm install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace $NAMESPACE \
+ --set controller.nodeSelector."kubernetes\.io/os"=linux \
+ --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
+ --set controller.healthStatus=true \
+ --set controller.service.externalTrafficPolicy=Local \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
+
+ helm repo add jetstack https://charts.jetstack.io
+ helm repo update
+ helm install cert-manager jetstack/cert-manager --namespace $NAMESPACE --set installCRDs=true --set nodeSelector."kubernetes\.io/os"=linux
+ ```
+
+ > [!NOTE]
+ > You can also follow the instructions available in [create an unmanaged ingress controller](/azure/aks/ingress-basic?tabs=azure-cli) and [use TLS with Let's encrypt certificates](/azure/aks/ingress-tls#use-tls-with-lets-encrypt-certificates) to set up ingress and TLS certificates on your Kubernetes cluster.
+
+1. Run the following command to update the DNS for the ingress public IP and get the ingress endpoint:
+
+ ```bash
+ > kubectl get services --namespace $NAMESPACE -w ingress-nginx-controller
+
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
+ ingress-nginx-controller LoadBalancer $CLUSTER_IP $EXTERNAL_IP 80:32514/TCP,443:32226/TCP
+
+ > PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$EXTERNAL_IP')].[id]" --output tsv)
+ > az network public-ip update --ids $PUBLICIPID --dns-name $DNSLABEL
+ > az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --output tsv
+
+ $DNSLABEL.$REGION.cloudapp.azure.com
+ ```
+
+### Provision resources with Teams Toolkit
+
+You can use the `provision` command in Teams Toolkit to create a Teams app with tab capability and add a Microsoft Entra app for authentication.
+
+To provision resources with Teams Toolkit, follow these steps:
+
+1. Open the sample app that you've downloaded earlier.
+
+1. Go to the `env/.env.${envName}` file and update the `TAB_DOMAIN` value with your FQDN.
+
+1. Go to the `teamsapp.yml` file and remove the `arm/deploy` action, as no additional Azure resources are required.
+
+1. Run the `provision` command in Teams Toolkit.
+
+1. Use Teams Toolkit to create a Microsoft Entra app, whose values you might want to set as your app's environment variables.
+
+1. After provisioning, locate the `AAD_APP_CLIENT_ID` in the `env/.env.${envName}` file and the encrypted `SECRET_AAD_APP_CLIENT_SECRET` in the `env/.env.${envName}.user` file. To obtain the actual value of `SECRET_AAD_APP_CLIENT_SECRET`, select the **Decrypt secret** annotation. You can pass these values to your container as environment variables, as sketched after this step.
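+
+A hypothetical way to pass these values to your container is through the container spec in your deployment manifest. The variable names here are placeholders and depend on how your app reads its configuration; `dev-secrets` assumes a Kubernetes secret you create, for example with `kubectl create secret`:
+
+```yml
+env:
+  - name: AAD_APP_CLIENT_ID
+    value: <your-client-id>
+  - name: AAD_APP_CLIENT_SECRET
+    valueFrom:
+      secretKeyRef:
+        name: dev-secrets
+        key: AAD_APP_CLIENT_SECRET
+```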
+
+### Apply the deployment
+
+The sample includes a deployment file, `deploy/tab.yaml`, for your reference.
+
+1. Update the following placeholders:
+
+ 1. `<tab-image>`: Update your image. For example, `myacr.azurecr.io/tab:latest`.
+
+    1. `<api-image>`: Update your API image. If you don't have an API, remove the `hello-world-api` service and deployment from the YAML file.
+
+ 1. `<hostname>`: Update your ingress FQDN.
+
+1. To apply `deploy/tab.yaml`, run the following command:
+
+ ```bash
+ kubectl apply -f deploy/tab.yaml -n $NAMESPACE
+ ```
+
+1. Go to Visual Studio Code.
+
+1. In the **Run and Debug** panel, select the **Launch Remote** configuration.
+
+1. To preview the Teams tab application deployed on AKS, select **Start Debugging (F5)**.
platform Whats New https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/whats-new.md
zone_pivot_groups: What-new-features
Discover Microsoft Teams platform features that are generally available (GA). You can now get latest Teams platform updates by subscribing to the RSS feed [![download feed](~/assets/images/RSSfeeds.png)](https://aka.ms/TeamsPlatformUpdates). For more information, see [configure RSS feed](#get-latest-updates).
+## Microsoft Build 2024 :::image type="icon" source="assets/images/bullhorn.png" border="false":::
+
+| **Date** | **Update** | **Find here** |
+| --- | --- | --- |
+| May 21, 2024 | Introduced Assistants API to create powerful AI assistants capable of performing a variety of tasks. | Build bots > Teams AI library > [Overview](bots/how-to/teams%20conversational%20ai/teams-conversation-ai-overview.md#assistants-api) |
+| May 21, 2024 | Get started with the process of building apps with the Teams AI library using the LightBot sample. | Build bots > Teams AI library > [Quick start guide](bots/how-to/teams%20conversational%20ai/conversation-ai-quick-start.md)|
+| May 21, 2024 | Introduced a step-by-step guide to build a custom copilot to chat with your data using the Teams AI library and Teams Toolkit. | Build bots > Teams AI library > Build custom copilot > [Build custom copilot using Teams Toolkit](teams-ai-library-tutorial.yml)|
+## Generally available
+
+:::row:::
Discover Microsoft Teams platform features that are generally available (GA). Yo
Teams platform features that are available to all app developers.
-**2024 April**
+**2024 May**
-* ***April 12, 2024***: [Implement authentication in API-based search message extensions to provide secure and seamless access to your app.](messaging-extensions/build-api-based-message-extension.md#authentication)
-* ***April 12, 2024***: [Introducing app manifest v1.17 with semanticDescription, samplePrompts, and dashboardCards](resources/schem).
-* ***April 12, 2024***: [Outlook extensions specifies Outlook Add-ins within an app manifest and simplify the distribution and acquisition across the Microsoft 365 ecosystem](resources/schem#extensionsrequirements).
-* ***April 12, 2024***: [Create Dashboardcards that can be pinned to a dashboard such as Microsoft Viva Connections to provide a summarized view of app information](resources/schem#dashboardcards).
-* ***April 12, 2024***: [Share code snippets as richly formatted Adaptive Cards in Teams chats, channels, and meetings with the CodeBlock element.](task-modules-and-cards/cards/cards-format.md#codeblock-in-adaptive-cards)
-* ***April 12, 2024***: [Introduced bot configuration experience that helps you to enable the bot settings for users to configure their bot during installation and reconfigure the bot.](bots/how-to/bot-configuration-experience.md)
-* ***April 12, 2024***: [Use sample prompts to guide users on how to use the various plugins within Copilot.](messaging-extensions/high-quality-message-extension.md#sample-prompts)
-* ***April 10, 2024***: [Define and deploy Outlook Add-ins in version 1.17 and later of the app manifest schema.](m365-apps/overview.md#outlook-add-ins)
-* ***April 04, 2024***: [Added support for python in Teams AI library.](bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)
-* ***April 04, 2024***: [Stageview API with the openmode property allows you to open your app content in different Stageview experience.](tabs/open-content-in-stageview.md)
-* ***April 03, 2024***: [Updated the common reasons for app validation failure to help your app pass the Teams Store submission process.](concepts/deploy-and-publish/appsource/common-reasons-for-app-validation-failure.md)
+***May 17, 2024***: [Deploy Teams app to container service.](toolkit/deploy-Teams-app-to-container-service.md)
:::column-end:::
:::row-end:::
Teams platform features that are available to all app developers.
| **Date** | **Update** | **Find here** |
| --- | --- | --- |
+|12/04/2024|Implement authentication in API-based search message extensions to provide secure and seamless access to your app.|Build message extensions > Build message extensions using API > [Authentication](messaging-extensions/build-api-based-message-extension.md#authentication)|
+|12/04/2024|Introducing app manifest v1.17 with semanticDescription, samplePrompts, and dashboardCards.|[App manifest](resources/schem)|
+|12/04/2024|Outlook extensions specify Outlook Add-ins within an app manifest and simplify distribution and acquisition across the Microsoft 365 ecosystem.|App manifest > [extensions.requirements](resources/schem#extensionsrequirements)|
+|12/04/2024|Create dashboardCards that can be pinned to a dashboard, such as Microsoft Viva Connections, to provide a summarized view of app information.|App manifest > [dashboardCards](resources/schem#dashboardcards)|
+|12/04/2024|Share code snippets as richly formatted Adaptive Cards in Teams chats, channels, and meetings with the CodeBlock element.|Build cards and dialogs > [CodeBlock in Adaptive Cards](task-modules-and-cards/cards/cards-format.md#codeblock-in-adaptive-cards)|
+|12/04/2024|Introduced bot configuration experience that helps you to enable the bot settings for users to configure their bot during installation and reconfigure the bot.|Build bots > [Bot configuration experience](bots/how-to/bot-configuration-experience.md)|
+|12/04/2024|Use sample prompts to guide users on how to use the various plugins within Copilot.|Build message extensions > Build message extensions using Bot Framework > Search commands > [Sample prompts](messaging-extensions/high-quality-message-extension.md#sample-prompts)|
+|10/04/2024|Define and deploy Outlook Add-ins in version 1.17 and later of the app manifest schema.|Extend your app across Microsoft 365 > [Outlook Add-ins](m365-apps/overview.md#outlook-add-ins)|
+|04/04/2024|Added support for Python in Teams AI library.|Build bots > Teams AI library > [Teams AI library](bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md)|
+|04/04/2024|The Stageview API with the openmode property allows you to open your app content in different Stageview experiences.|Build tabs > [Open content in Stageview](tabs/open-content-in-stageview.md)|
+|03/04/2024|Updated the common reasons for app validation failure to help your app pass the Teams Store submission process.|Distribute your app > Publish to the Teams Store > [Common reasons for app validation failure](concepts/deploy-and-publish/appsource/common-reasons-for-app-validation-failure.md)|
|27/03/2024|Configure Teams deep links using the msteams:// and https:// protocol handlers.|Integrate with Teams > Create deep links > Overview > [Protocol handlers in deep links](concepts/build-and-test/deep-links.md#protocol-handlers-in-deep-links)|
|26/03/2024|Adaptive Cards responsive layout helps you to design cards to look great on any device in order to provide an enhanced user experience across chat, channels, and meeting chat.|Build cards and dialogs > Build cards > Format cards in Microsoft Teams > [Adaptive Card responsive layout](task-modules-and-cards/cards/cards-format.md#adaptive-card-responsive-layout)|
|07/03/2024|Introduced Adaptive Card Previewer to view the realtime changes for Visual Studio 2022.|Tools and SDKs > Tools > [Adaptive Card Previewer for Visual Studio](concepts/build-and-test/adaptive-card-previewer-vs.md)|