Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
platform | How Conversation Ai Core Capabilities | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/how-conversation-ai-core-capabilities.md | In the following section, we've used the samples from the [AI library](https:// ## Send or receive message -You can send and receive messages using the Bot Framework. The app listens for the user to send a message , and when it receives this message, it deletes the conversation state and sends a message back to the user. The app also keeps track of the number of messages received in a conversation and echoes back the user’s message with a count of messages received so far. +You can send and receive messages using the Bot Framework. The app listens for the user to send a message, and when it receives this message, it deletes the conversation state and sends a message back to the user. The app also keeps track of the number of messages received in a conversation and echoes back the user’s message with a count of messages received so far. ++# [.NET](#tab/dotnet6) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/01.messaging.echoBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/01.messaging.echoBot/Program.cs#L49) ++```csharp + // Listen for user to say "/reset" and then delete conversation state + app.OnMessage("/reset", ActivityHandlers.ResetMessageHandler); ++ // Listen for ANY message to be received. MUST BE AFTER ANY OTHER MESSAGE HANDLERS + app.OnActivity(ActivityTypes.Message, ActivityHandlers.MessageHandler); ++ return app; +``` # [JavaScript](#tab/javascript6) app.activity(ActivityTypes.Message, async (context: TurnContext, state: Applicat }); ``` -# [C#](#tab/dotnet6) --* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/01.messaging.echoBot) +# [Python](#tab/python6) -* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/01.messaging.echoBot/Program.cs#L49) --```csharp - // Listen for user to say "/reset" and then delete conversation state - app.OnMessage("/reset", ActivityHandlers.ResetMessageHandler); +* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/01.messaging.a.echoBot) - // Listen for ANY message to be received. MUST BE AFTER ANY OTHER MESSAGE HANDLERS - app.OnActivity(ActivityTypes.Message, ActivityHandlers.MessageHandler); +* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/01.messaging.a.echoBot/src/bot.py#L25) - return app; +```python +@app.activity("message") +async def on_message(context: TurnContext, _state: TurnState): + await context.send_activity(f"you said: {context.activity.text}") + return True ``` app.activity(ActivityTypes.Message, async (context: TurnContext, state: Applicat In the Bot Framework SDK's `TeamsActivityHandler`, you needed to set up the Message extensions query handler by extending handler methods. The app listens for search actions and item taps, and formats the search results as a list of HeroCards displaying package information. The result is used to display the search results in the messaging extension. 
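Before the tabs below, it may help to see what the older pattern looked like. A minimal sketch of a Bot Framework SDK handler in JavaScript, where you extend `TeamsActivityHandler` and override the query method yourself; `findPackages` is a hypothetical search helper, not part of the SDK:

```typescript
import {
    CardFactory,
    TeamsActivityHandler,
    TurnContext,
    MessagingExtensionQuery,
    MessagingExtensionResponse
} from 'botbuilder';

// Hypothetical search helper returning package summaries
declare function findPackages(text: string): Promise<{ id: string; description: string }[]>;

class PackageSearchHandler extends TeamsActivityHandler {
    protected async handleTeamsMessagingExtensionQuery(
        _context: TurnContext,
        query: MessagingExtensionQuery
    ): Promise<MessagingExtensionResponse> {
        const searchText = (query.parameters?.[0]?.value as string) ?? '';
        const packages = await findPackages(searchText);
        // Format each result as a HeroCard, mirroring the sample's search results
        return {
            composeExtension: {
                type: 'result',
                attachmentLayout: 'list',
                attachments: packages.map((pkg) => CardFactory.heroCard(pkg.id, pkg.description))
            }
        };
    }
}
```

The Teams AI library replaces this override-based wiring with the per-command registrations shown in the tabs that follow.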
+# [.NET](#tab/dotnet5) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/Program.cs#L47) ++* [Search results reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/ActivityHandlers.cs#L39) ++```csharp +// Listen for search actions + app.MessageExtensions.OnQuery("searchCmd", activityHandlers.QueryHandler); + // Listen for item tap + app.MessageExtensions.OnSelectItem(activityHandlers.SelectItemHandler); ++ return app; ++ // Format search results in ActivityHandlers.cs ++ List<MessagingExtensionAttachment> attachments = packages.Select(package => new MessagingExtensionAttachment + { + ContentType = HeroCard.ContentType, + Content = new HeroCard + { + Title = package.Id, + Text = package.Description + }, + Preview = new HeroCard + { + Title = package.Id, + Text = package.Description, + Tap = new CardAction + { + Type = "invoke", + Value = package + } + }.ToAttachment() + }).ToList(); ++ // Return results as a list ++ return new MessagingExtensionResult + { + Type = "result", + AttachmentLayout = "list", + Attachments = attachments + }; ++``` + # [JavaScript](#tab/javascript5) Now, the app class has `messageExtensions` features to simplify creating the handlers: app.messageExtensions.selectItem(async (context: TurnContext, state: TurnState, }); ``` -# [C#](#tab/dotnet5) +# [Python](#tab/python5) -* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) +* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.b.messageExtensions.AI-ME) -* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/Program.cs#L47) +* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.b.messageExtensions.AI-ME/src/bot.py#L75) -* [Search results reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/02.messageExtensions.a.searchCommand/ActivityHandlers.cs#L39) +```python +# Implement Message Extension logic +@app.message_extensions.fetch_task("CreatePost") +async def create_post(context: TurnContext, _state: AppTurnState) -> TaskModuleTaskInfo: + # Return card as a TaskInfo object + card = create_initial_view() + return create_task_info(card) +``` -```csharp -// Listen for search actions - app.MessageExtensions.OnQuery("searchCmd", activityHandlers.QueryHandler); - // Listen for item tap - app.MessageExtensions.OnSelectItem(activityHandlers.SelectItemHandler); + - return app; +## Adaptive Cards capabilities - // Format search results in ActivityHandlers.cs +You can register Adaptive Card action handlers using the `app.adaptiveCards` property. The app listens for messages containing the keywords `static` or `dynamic` and returns an Adaptive Card using the `StaticMessageHandler` or `DynamicMessageHandler` methods. The app also listens for queries from a dynamic search card and for submit buttons on the Adaptive Cards.
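Because the digest elides most of the JavaScript tab for this section, here is a minimal sketch of the equivalent handler wiring, assuming the `@microsoft/teams-ai` `Application` instance (`app`) from the earlier samples; `createStaticCard` and `findPackages` are hypothetical helpers, and the `choiceSelect` field comes from the type-ahead sample's static card:

```typescript
import { TurnContext } from 'botbuilder';

// Listen for messages containing "static" and reply with an Adaptive Card
app.message(/static/i, async (context: TurnContext, _state) => {
    await context.sendActivity({ attachments: [createStaticCard()] }); // hypothetical card builder
});

// Listen for queries from the dynamic search (type-ahead) card
app.adaptiveCards.search('npmpackages', async (_context, _state, query) => {
    const packages = await findPackages(query.parameters.queryText); // hypothetical search helper
    // Return { title, value } pairs for the type-ahead control
    return packages.map((pkg) => ({ title: pkg.id, value: pkg.id }));
});

// Listen for the submit button on the static card
app.adaptiveCards.actionSubmit('StaticSubmit', async (context, _state, data: { choiceSelect: string }) => {
    await context.sendActivity(`Statically selected option is: ${data.choiceSelect}`);
});
```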
- List<MessagingExtensionAttachment> attachments = packages.Select(package => new MessagingExtensionAttachment - { - ContentType = HeroCard.ContentType, - Content = new HeroCard - { - Title = package.Id, - Text = package.Description - }, - Preview = new HeroCard - { - Title = package.Id, - Text = package.Description, - Tap = new CardAction - { - Type = "invoke", - Value = package - } - }.ToAttachment() - }).ToList(); +# [.NET](#tab/dotnet4) - // Return results as a list +* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot) - return new MessagingExtensionResult - { - Type = "result", - AttachmentLayout = "list", - Attachments = attachments - }; +* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot/Program.cs#L52) -``` +```csharp +// Listen for messages that trigger returning an adaptive card + app.OnMessage(new Regex(@"static", RegexOptions.IgnoreCase), activityHandlers.StaticMessageHandler); + app.OnMessage(new Regex(@"dynamic", RegexOptions.IgnoreCase), activityHandlers.DynamicMessageHandler); -+ // Listen for query from dynamic search card + app.AdaptiveCards.OnSearch("nugetpackages", activityHandlers.SearchHandler); + // Listen for submit buttons + app.AdaptiveCards.OnActionSubmit("StaticSubmit", activityHandlers.StaticSubmitHandler); + app.AdaptiveCards.OnActionSubmit("DynamicSubmit", activityHandlers.DynamicSubmitHandler); -## Adaptive Cards capabilities + // Listen for ANY message to be received. MUST BE AFTER ANY OTHER HANDLERS + app.OnActivity(ActivityTypes.Message, activityHandlers.MessageHandler); -You can register Adaptive Card action handlers using the `app.adaptiveCards` property. The app listens for messages containing the keywords `static` or `dynamic` and returns an Adaptive Card using the `StaticMessageHandler` or `DynamicMessageHandler` methods. The app also listens for queries from a dynamic search card, submit buttons on the Adaptive Cards. + return app; +``` # [JavaScript](#tab/javascript4) app.adaptiveCards.actionSubmit('StaticSubmit', async (context, _state, data: Sub }); ``` -# [C#](#tab/dotnet4) +# [Python](#tab/python4) -* [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot) +[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/packages/ai/teams/adaptive_cards/adaptive_cards.py#L129C1-L136C67) -* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot/Program.cs#L52) +```python +# Use this method as a decorator +@app.adaptive_cards.action_submit("submit") +async def execute_submit(context: TurnContext, state: TurnState, data: Any): + print(f"Execute with data: {data}") + return True -```csharp -// Listen for messages that trigger returning an adaptive card - app.OnMessage(new Regex(@"static", RegexOptions.IgnoreCase), activityHandlers.StaticMessageHandler); - app.OnMessage(new Regex(@"dynamic", RegexOptions.IgnoreCase), activityHandlers.DynamicMessageHandler); -- // Listen for query from dynamic search card - app.AdaptiveCards.OnSearch("nugetpackages", activityHandlers.SearchHandler); - // Listen for submit buttons - app.AdaptiveCards.OnActionSubmit("StaticSubmit", activityHandlers.StaticSubmitHandler); - app.AdaptiveCards.OnActionSubmit("DynamicSubmit", activityHandlers.DynamicSubmitHandler); -- // Listen for ANY message to be received. 
MUST BE AFTER ANY OTHER HANDLERS - app.OnActivity(ActivityTypes.Message, activityHandlers.MessageHandler); -- return app; +# Pass a function to this method +app.adaptive_cards.action_submit("submit")(execute_submit) ``` The Bot responds to the user's input with the action `LightsOn` to turn the ligh The following example illustrates how Teams AI library makes it possible to manage the bot logic for handling an action `LightsOn` or `LightsOff` and connect it to the prompt used with OpenAI: -# [JavaScript](#tab/javascript3) --* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.c.actionMapping.lightBot) --* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.c.actionMapping.lightBot/src/index.ts#L93) --```typescript --// Create AI components -const model = new OpenAIModel({ - // OpenAI Support - apiKey: process.env.OPENAI_KEY!, - defaultModel: 'gpt-3.5-turbo', -- // Azure OpenAI Support - azureApiKey: process.env.AZURE_OPENAI_KEY!, - azureDefaultDeployment: 'gpt-3.5-turbo', - azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!, - azureApiVersion: '2023-03-15-preview', -- // Request logging - logRequests: true -}); --const prompts = new PromptManager({ - promptsFolder: path.join(__dirname, '../src/prompts') -}); --const planner = new ActionPlanner({ - model, - prompts, - defaultPrompt: 'sequence', -}); --// Define storage and application -const storage = new MemoryStorage(); -const app = new Application<ApplicationTurnState>({ - storage, - ai: { - planner - } -}); --// Define a prompt function for getting the current status of the lights -planner.prompts.addFunction('getLightStatus', async (context: TurnContext, memory: Memory) => { - return memory.getValue('conversation.lightsOn') ? 'on' : 'off'; -}); --// Register action handlers -app.ai.action('LightsOn', async (context: TurnContext, state: ApplicationTurnState) => { - state.conversation.lightsOn = true; - await context.sendActivity(`[lights on]`); - return `the lights are now on`; -}); --app.ai.action('LightsOff', async (context: TurnContext, state: ApplicationTurnState) => { - state.conversation.lightsOn = false; - await context.sendActivity(`[lights off]`); - return `the lights are now off`; -}); --interface PauseParameters { - time: number; -} --app.ai.action('Pause', async (context: TurnContext, state: ApplicationTurnState, parameters: PauseParameters) => { - await context.sendActivity(`[pausing for ${parameters.time / 1000} seconds]`); - await new Promise((resolve) => setTimeout(resolve, parameters.time)); - return `done pausing`; -}); --``` --# [C#](#tab/dotnet3) +# [.NET](#tab/dotnet3) * [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.c.actionMapping.lightBot) builder.Services.AddTransient<IBot>(sp => ``` -+# [JavaScript](#tab/javascript3) -### Message extension query +* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.c.actionMapping.lightBot) -The Teams AI library offers you a more intuitive approach to create handlers for various message-extension query commands when compared to previous iterations of Teams Bot Framework SDK. The new SDK works alongside the existing Teams Bot Framework SDK. +* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.c.actionMapping.lightBot/src/index.ts#L93) -The following is an example of how you can structure their code to handle a message-extension query for the `searchCmd` command. 
+```typescript -# [JavaScript](#tab/javascript2) +// Create AI components +const model = new OpenAIModel({ + // OpenAI Support + apiKey: process.env.OPENAI_KEY!, + defaultModel: 'gpt-3.5-turbo', -* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.messageExtensions.a.searchCommand) + // Azure OpenAI Support + azureApiKey: process.env.AZURE_OPENAI_KEY!, + azureDefaultDeployment: 'gpt-3.5-turbo', + azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!, + azureApiVersion: '2023-03-15-preview', -* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/02.messageExtensions.a.searchCommand/src/index.ts#L81) + // Request logging + logRequests: true +}); -```typescript +const prompts = new PromptManager({ + promptsFolder: path.join(__dirname, '../src/prompts') +}); -// Listen for search actions -app.messageExtensions.query('searchCmd', async (context, state, query) => { - const searchQuery = query.parameters.queryText ?? ''; - const count = query.count ?? 10; - const response = await axios.get( - `http://registry.npmjs.com/-/v1/search?${new URLSearchParams({ - size: count.toString(), - text: searchQuery - }).toString()}` - ); +const planner = new ActionPlanner({ + model, + prompts, + defaultPrompt: 'sequence', +}); +// Define storage and application +const storage = new MemoryStorage(); +const app = new Application<ApplicationTurnState>({ + storage, + ai: { + planner + } +}); - // Format search results - const results: MessagingExtensionAttachment[] = []; - response?.data?.objects?.forEach((obj: any) => results.push(createNpmSearchResultCard(obj.package))); +// Define a prompt function for getting the current status of the lights +planner.prompts.addFunction('getLightStatus', async (context: TurnContext, memory: Memory) => { + return memory.getValue('conversation.lightsOn') ? 'on' : 'off'; +}); +// Register action handlers +app.ai.action('LightsOn', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = true; + await context.sendActivity(`[lights on]`); + return `the lights are now on`; +}); - // Return results as a list - return { - attachmentLayout: 'list', - attachments: results, - type: 'result' - }; +app.ai.action('LightsOff', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = false; + await context.sendActivity(`[lights off]`); + return `the lights are now off`; }); -And here’s how they can return a card when a message-extension result is selected. 
+interface PauseParameters { + time: number; +} -// Listen for item tap -app.messageExtensions.selectItem(async (context, state, item) => { - // Generate detailed result - const card = createNpmPackageCard(item); +app.ai.action('Pause', async (context: TurnContext, state: ApplicationTurnState, parameters: PauseParameters) => { + await context.sendActivity(`[pausing for ${parameters.time / 1000} seconds]`); + await new Promise((resolve) => setTimeout(resolve, parameters.time)); + return `done pausing`; +}); +``` - - // Return results - return { - attachmentLayout: 'list', - attachments: [card], - type: 'result' - }; -}); +# [Python](#tab/python3) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.c.actionMapping.lightBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L35) ++```python +# Create AI components +model: OpenAIModel +if config.OPENAI_KEY: + model = OpenAIModel( + OpenAIModelOptions(api_key=config.OPENAI_KEY, default_model="gpt-3.5-turbo") + ) +elif config.AZURE_OPENAI_KEY and config.AZURE_OPENAI_ENDPOINT: + model = OpenAIModel( + AzureOpenAIModelOptions( + api_key=config.AZURE_OPENAI_KEY, + default_model="gpt-35-turbo", + api_version="2023-03-15-preview", + endpoint=config.AZURE_OPENAI_ENDPOINT, + ) + ) ``` +++### Message extension query ++The Teams AI library offers you a more intuitive approach to create handlers for various message-extension query commands when compared to previous iterations of Teams Bot Framework SDK. The new SDK works alongside the existing Teams Bot Framework SDK. ++The following is an example of how you can structure your code to handle a message-extension query for the `searchCmd` command. ++# [.NET](#tab/dotnet2) * [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) app.messageExtensions.selectItem(async (context, state, item) => { }; ``` +# [JavaScript](#tab/javascript2) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.messageExtensions.a.searchCommand) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/02.messageExtensions.a.searchCommand/src/index.ts#L81) ++```typescript ++// Listen for search actions +app.messageExtensions.query('searchCmd', async (context, state, query) => { + const searchQuery = query.parameters.queryText ?? ''; + const count = query.count ?? 10; + const response = await axios.get( + `http://registry.npmjs.com/-/v1/search?${new URLSearchParams({ + size: count.toString(), + text: searchQuery + }).toString()}` + ); +++ // Format search results + const results: MessagingExtensionAttachment[] = []; + response?.data?.objects?.forEach((obj: any) => results.push(createNpmSearchResultCard(obj.package))); +++ // Return results as a list + return { + attachmentLayout: 'list', + attachments: results, + type: 'result' + }; +}); ++And here’s how you can return a card when a message-extension result is selected.
++// Listen for item tap +app.messageExtensions.selectItem(async (context, state, item) => { + // Generate detailed result + const card = createNpmPackageCard(item); +++ // Return results + return { + attachmentLayout: 'list', + attachments: [card], + type: 'result' + }; +}); ++``` ++# [Python](#tab/python2) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/packages/ai/teams/message_extensions/message_extensions.py#L68C1-L75C55) ++```python +# Use this method as a decorator +@app.message_extensions.query("test") +async def on_query(context: TurnContext, state: TurnState, url: str): + return MessagingExtensionResult() ++# Pass a function to this method +app.message_extensions.query("test")(on_query) +``` + ## Intents to actions A simple interface for actions and predictions allows bots to react when they have high confidence for taking action. Ambient presence lets bots learn intent, use prompts based on business logic, and generate responses. -Thanks to our AI library, the prompt needs only to outline the actions supported by the bot, and supply a few-shot examples of how to employ those actions. Conversation history helps with a natural dialogue between the user and bot, such as *add cereal to groceries list*, followed by *also add coffee*, which should indicate that coffee is to be added to the groceries list. +Thanks to our AI library, the prompt needs only to outline the actions supported by the bot, and supply a few-shot example of how to employ those actions. Conversation history helps with a natural dialogue between the user and bot, such as *add cereal to groceries list*, followed by *also add coffee*, which should indicate that coffee is to be added to the groceries list. The following is a conversation with an AI assistant. The AI assistant is capable of managing lists and recognizes the following commands: The following actions are supported: * `removeItem list="<list name>" item="<text>"` * `summarizeLists` -All entities are required parameters to actions +All entities are required parameters to actions. * Current list names: All entities are required parameters to actions * AI: The bot logic is streamlined to include handlers for actions such as `addItem` and `removeItem`. This distinct separation between actions and the prompts guiding the AI on how to execute the actions and prompts serves as a powerful tool. -# [JavaScript](#tab/javascript1) --* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.d.chainedActions.listBot) --* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.d.chainedActions.listBot/src/index.ts#L154) --```typescript - app.ai.action('addItems', async (context: TurnContext, state: ApplicationTurnState, parameters: ListAndItems) => { - const items = getItems(state, parameters.list); - items.push(...(parameters.items ?? [])); - setItems(state, parameters.list, items); - return `items added. think about your next action`; - }); -- app.ai.action('removeItems', async (context: TurnContext, state: ApplicationTurnState, parameters: ListAndItems) => { - const items = getItems(state, parameters.list); - (parameters.items ?? []).forEach((item: string) => { - const index = items.indexOf(item); - if (index >= 0) { - items.splice(index, 1); - } - }); - setItems(state, parameters.list, items); - return `items removed. 
think about your next action`; - }); -``` --# [C#](#tab/dotnet1) +# [.NET](#tab/dotnet1) * [Code sample](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.d.chainedActions.listBot) All entities are required parameters to actions } ``` +# [JavaScript](#tab/javascript1) ++* [Code sample](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.d.chainedActions.listBot) ++* [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.d.chainedActions.listBot/src/index.ts#L154) ++```typescript + app.ai.action('addItems', async (context: TurnContext, state: ApplicationTurnState, parameters: ListAndItems) => { + const items = getItems(state, parameters.list); + items.push(...(parameters.items ?? [])); + setItems(state, parameters.list, items); + return `items added. think about your next action`; + }); ++ app.ai.action('removeItems', async (context: TurnContext, state: ApplicationTurnState, parameters: ListAndItems) => { + const items = getItems(state, parameters.list); + (parameters.items ?? []).forEach((item: string) => { + const index = items.indexOf(item); + if (index >= 0) { + items.splice(index, 1); + } + }); + setItems(state, parameters.list, items); + return `items removed. think about your next action`; + }); +``` ## Next step |
platform | How Conversation Ai Get Started | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/how-conversation-ai-get-started.md | Teams AI library is built on top of the Bot Framework SDK and uses its fundament > [!NOTE] > The adapter class that handles connectivity with the channels is imported from [Bot Framework SDK](/azure/bot-service/bot-builder-basics?view=azure-bot-service-4.0#the-bot-adapter&preserve-view=true). +# [.NET](#tab/dotnet1) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.a.teamsChefBot/Program.cs) ++```csharp +using Microsoft.Teams.AI; +using Microsoft.Bot.Builder; +using Microsoft.Bot.Builder.Integration.AspNet.Core; +using Microsoft.Bot.Connector.Authentication; +using Microsoft.TeamsFx.Conversation; ++var builder = WebApplication.CreateBuilder(args); ++builder.Services.AddControllers(); +builder.Services.AddHttpClient("WebClient", client => client.Timeout = TimeSpan.FromSeconds(600)); +builder.Services.AddHttpContextAccessor(); ++// Prepare Configuration for ConfigurationBotFrameworkAuthentication +var config = builder.Configuration.Get<ConfigOptions>(); +builder.Configuration["MicrosoftAppType"] = "MultiTenant"; +builder.Configuration["MicrosoftAppId"] = config.BOT_ID; +builder.Configuration["MicrosoftAppPassword"] = config.BOT_PASSWORD; ++// Create the Bot Framework Authentication to be used with the Bot Adapter. +builder.Services.AddSingleton<BotFrameworkAuthentication, ConfigurationBotFrameworkAuthentication>(); ++// Create the Cloud Adapter with error handling enabled. +// Note: some classes expect a BotAdapter and some expect a BotFrameworkHttpAdapter, so +// register the same adapter instance for all types. +builder.Services.AddSingleton<CloudAdapter, AdapterWithErrorHandler>(); +builder.Services.AddSingleton<IBotFrameworkHttpAdapter>(sp => sp.GetService<CloudAdapter>()); +builder.Services.AddSingleton<BotAdapter>(sp => sp.GetService<CloudAdapter>()); +``` + # [JavaScript](#tab/javascript4) [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.a.teamsChefBot/src/index.ts#L9) const adapter = new CloudAdapter(botFrameworkAuthentication); ``` -# [C#](#tab/dotnet1) --[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.a.teamsChefBot/Program.cs) --```csharp -using Microsoft.Teams.AI; -using Microsoft.Bot.Builder; -using Microsoft.Bot.Builder.Integration.AspNet.Core; -using Microsoft.Bot.Connector.Authentication; -using Microsoft.TeamsFx.Conversation; +# [Python](#tab/python4) -var builder = WebApplication.CreateBuilder(args); +[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/01.messaging.a.echoBot/src/bot.py#L8C1-L23C2) -builder.Services.AddControllers(); -builder.Services.AddHttpClient("WebClient", client => client.Timeout = TimeSpan.FromSeconds(600)); -builder.Services.AddHttpContextAccessor(); +```python +import sys +import traceback -// Prepare Configuration for ConfigurationBotFrameworkAuthentication -var config = builder.Configuration.Get<ConfigOptions>(); -builder.Configuration["MicrosoftAppType"] = "MultiTenant"; -builder.Configuration["MicrosoftAppId"] = config.BOT_ID; -builder.Configuration["MicrosoftAppPassword"] = config.BOT_PASSWORD; +from botbuilder.core import TurnContext +from teams import Application, ApplicationOptions, TeamsAdapter +from teams.state import TurnState -// Create the Bot Framework Authentication to be used with 
the Bot Adapter. -builder.Services.AddSingleton<BotFrameworkAuthentication, ConfigurationBotFrameworkAuthentication>(); +from config import Config -// Create the Cloud Adapter with error handling enabled. -// Note: some classes expect a BotAdapter and some expect a BotFrameworkHttpAdapter, so -// register the same adapter instance for all types. -builder.Services.AddSingleton<CloudAdapter, AdapterWithErrorHandler>(); -builder.Services.AddSingleton<IBotFrameworkHttpAdapter>(sp => sp.GetService<CloudAdapter>()); -builder.Services.AddSingleton<BotAdapter>(sp => sp.GetService<CloudAdapter>()); +config = Config() +app = Application[TurnState]( + ApplicationOptions( + bot_app_id=config.APP_ID, + adapter=TeamsAdapter(config), + ) +) ``` Add AI capabilities to your existing app or a new Bot Framework app. **ActionPlanner**: The ActionPlanner is the main component calling your Large Language Model (LLM) and includes several features to enhance and customize your model. It's responsible for generating and executing plans based on the user's input and the available actions. -# [JavaScript](#tab/javascript1) --[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.a.teamsChefBot/src/index.ts#L82) --```javascript -/// Create AI components -const model = new OpenAIModel({ - // OpenAI Support - apiKey: process.env.OPENAI_KEY!, - defaultModel: 'gpt-3.5-turbo', -- // Azure OpenAI Support - azureApiKey: process.env.AZURE_OPENAI_KEY!, - azureDefaultDeployment: 'gpt-3.5-turbo', - azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!, - azureApiVersion: '2023-03-15-preview', -- // Request logging - logRequests: true -}); --const prompts = new PromptManager({ - promptsFolder: path.join(__dirname, '../src/prompts') -}); --const planner = new ActionPlanner({ - model, - prompts, - defaultPrompt: 'chat', -}); --``` --# [C#](#tab/dotnet2) +# [.NET](#tab/dotnet2) [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/Program.cs#L33). 
const planner = new ActionPlanner({ ``` +# [JavaScript](#tab/javascript1) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.a.teamsChefBot/src/index.ts#L82) ++```javascript +/// Create AI components +const model = new OpenAIModel({ + // OpenAI Support + apiKey: process.env.OPENAI_KEY!, + defaultModel: 'gpt-3.5-turbo', ++ // Azure OpenAI Support + azureApiKey: process.env.AZURE_OPENAI_KEY!, + azureDefaultDeployment: 'gpt-3.5-turbo', + azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!, + azureApiVersion: '2023-03-15-preview', ++ // Request logging + logRequests: true +}); ++const prompts = new PromptManager({ + promptsFolder: path.join(__dirname, '../src/prompts') +}); ++const planner = new ActionPlanner({ + model, + prompts, + defaultPrompt: 'chat', +}); ++``` ++# [Python](#tab/python1) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L35) ++```python +# Create AI components +model: OpenAIModel ++if config.OPENAI_KEY: + model = OpenAIModel( + OpenAIModelOptions(api_key=config.OPENAI_KEY, default_model="gpt-3.5-turbo") + ) +elif config.AZURE_OPENAI_KEY and config.AZURE_OPENAI_ENDPOINT: + model = OpenAIModel( + AzureOpenAIModelOptions( + api_key=config.AZURE_OPENAI_KEY, + default_model="gpt-35-turbo", + api_version="2023-03-15-preview", + endpoint=config.AZURE_OPENAI_ENDPOINT, + ) + ) +``` + ## Define storage and application The application object automatically manages the conversation and user state of * **Application**: The application class has all the information and bot logic required for an app. You can register actions or activity handlers for the app in this class. +# [.NET](#tab/dotnet3) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/Program.cs#L99) ++```csharp + return new TeamsLightBot(new() + { + Storage = sp.GetService<IStorage>(), + AI = new(planner), + LoggerFactory = loggerFactory, + TurnStateFactory = () => + { + return new AppState(); + } + }); +``` ++`TurnStateFactory` allows you to create a custom state class for your application. You can use it to store additional information or logic that you need for your bot. You can also override some of the default properties of the turn state, such as the user input, the bot output, or the conversation history. To use `TurnStateFactory`, you need to create a class that extends the default turn state and pass a function that creates an instance of your class to the application constructor. + # [JavaScript](#tab/javascript3) [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.a.teamsChefBot/src/index.ts#L108) const app = new Application<ApplicationTurnState>({ The `MemoryStorage()` function stores all the state for your bot. The `Application` class replaces the Teams Activity Handler class. You can configure your `ai` by adding the planner, moderator, prompt manager, default prompt and history. The `ai` object is passed into the `Application`, which receives the AI components and the default prompt defined earlier. 
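The JavaScript library achieves the same customization through typing rather than a factory: you describe your own conversation state and parameterize `TurnState` with it. A minimal sketch following the pattern the light-bot sample uses (the `lightsOn` property is the sample's own):

```typescript
import { MemoryStorage } from 'botbuilder';
import { Application, TurnState } from '@microsoft/teams-ai';

// App-specific conversation state layered over the default turn state
interface ConversationState {
    lightsOn: boolean;
}
type ApplicationTurnState = TurnState<ConversationState>;

// The Application is parameterized with the custom turn state type
const app = new Application<ApplicationTurnState>({
    storage: new MemoryStorage()
});
```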
-# [C#](#tab/dotnet3) --[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/Program.cs#L99) --```csharp - return new TeamsLightBot(new() - { - Storage = sp.GetService<IStorage>(), - AI = new(planner), - LoggerFactory = loggerFactory, - TurnStateFactory = () => - { - return new AppState(); - } - }); +# [Python](#tab/python3) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L52C1-L62C2) ++```python +storage = MemoryStorage() +app = Application[AppTurnState]( + ApplicationOptions( + bot_app_id=config.APP_ID, + storage=storage, + adapter=TeamsAdapter(config), + ai=AIOptions(planner=ActionPlanner( + ActionPlannerOptions(model=model, prompts=prompts, default_prompt="sequence") + )), + ) +) ``` -`TurnStateFactory` allows you to create a custom state class for your application. You can use it to store additional information or logic that you need for your bot. You can also override some of the default properties of the turn state, such as the user input, the bot output, or the conversation history. To use `TurnStateFactory`, you need to create a class that extends the default turn state and pass a function that creates an instance of your class to the application constructor. - ## Register data sources A new object based prompt system breaks a prompt into sections and each section The following are a few guidelines to create prompts: * Provide instructions, examples, or both.-* Provide quality data. Ensure that there are enough examples and proofread your examples. The model is smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the input is intentional and it might affect the response. +* Provide quality data. Ensure that there are enough examples and proofread your examples. The model is smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the input is intentional, and it might affect the response. * Check your prompt settings. The temperature and top_p settings control how deterministic the model is in generating a response. Higher value such as 0.8 makes the output random, while lower value such as 0.2 makes the output focused and deterministic. Create a folder called prompts and define your prompts in the folder. When the user interacts with the bot by entering a text prompt, the bot responds with a text completion. You must register a handler for each action listed in the prompt and also add a In the following example of a light bot, we have the `LightsOn`, `LightsOff`, and `Pause` action. Every time an action is called, you return a `string`. If you require the bot to return time, you don't need to parse the time and convert it to a number. The `PauseParameters` property ensures that it returns time in number format without pausing the prompt. 
-# [JavaScript](#tab/javascript2) --[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.c.actionMapping.lightBot/src/index.ts#L133) --```javascript -// Register action handlers -app.ai.action('LightsOn', async (context: TurnContext, state: ApplicationTurnState) => { - state.conversation.lightsOn = true; - await context.sendActivity(`[lights on]`); - return `the lights are now on`; -}); --app.ai.action('LightsOff', async (context: TurnContext, state: ApplicationTurnState) => { - state.conversation.lightsOn = false; - await context.sendActivity(`[lights off]`); - return `the lights are now off`; -}); --interface PauseParameters { - time: number; -} --app.ai.action('Pause', async (context: TurnContext, state: ApplicationTurnState, parameters: PauseParameters) => { - await context.sendActivity(`[pausing for ${parameters.time / 1000} seconds]`); - await new Promise((resolve) => setTimeout(resolve, parameters.time)); - return `done pausing`; -}); -``` --# [C#](#tab/dotnet4) +# [.NET](#tab/dotnet4) [Sample code reference](https://github.com/microsoft/teams-ai/blob/main/dotnet/samples/04.ai.c.actionMapping.lightBot/LightBotActions.cs) public class LightBotActions ``` +# [JavaScript](#tab/javascript2) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/js/samples/04.ai.c.actionMapping.lightBot/src/index.ts#L133) ++```javascript +// Register action handlers +app.ai.action('LightsOn', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = true; + await context.sendActivity(`[lights on]`); + return `the lights are now on`; +}); ++app.ai.action('LightsOff', async (context: TurnContext, state: ApplicationTurnState) => { + state.conversation.lightsOn = false; + await context.sendActivity(`[lights off]`); + return `the lights are now off`; +}); ++interface PauseParameters { + time: number; +} ++app.ai.action('Pause', async (context: TurnContext, state: ApplicationTurnState, parameters: PauseParameters) => { + await context.sendActivity(`[pausing for ${parameters.time / 1000} seconds]`); + await new Promise((resolve) => setTimeout(resolve, parameters.time)); + return `done pausing`; +}); +``` ++# [Python](#tab/python2) ++[Sample code reference](https://github.com/microsoft/teams-ai/blob/main/python/samples/04.ai.c.actionMapping.lightBot/src/bot.py#L85C1-L113C26) ++```python +@app.ai.action("LightsOn") +async def on_lights_on( + context: ActionTurnContext[Dict[str, Any]], + state: AppTurnState, +): + state.conversation.lights_on = True + await context.send_activity("[lights on]") + return "the lights are now on" +++@app.ai.action("LightsOff") +async def on_lights_off( + context: ActionTurnContext[Dict[str, Any]], + state: AppTurnState, +): + state.conversation.lights_on = False + await context.send_activity("[lights off]") + return "the lights are now off" +++@app.ai.action("Pause") +async def on_pause( + context: ActionTurnContext[Dict[str, Any]], + _state: AppTurnState, +): + time_ms = int(context.data["time"]) if context.data["time"] else 1000 + await context.send_activity(f"[pausing for {time_ms / 1000} seconds]") + time.sleep(time_ms / 1000) + return "done pausing" +``` + If you use either `sequence`, `monologue` or `tools` augmentation, it's impossible for the model to hallucinate an invalid function name, action name, or the correct parameters. You must create a new actions file and define all the actions you want the prompt to support for augmentation. 
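For illustration, the actions file is a JSON list that names each action and describes its parameters with a JSON schema. A sketch of what `actions.json` can look like for the light bot, modeled on the teams-ai samples (verify the exact schema against your library version):

```json
[
    {
        "name": "LightsOn",
        "description": "Turns on the lights"
    },
    {
        "name": "LightsOff",
        "description": "Turns off the lights"
    },
    {
        "name": "Pause",
        "description": "Delays for a period of time",
        "parameters": {
            "type": "object",
            "properties": {
                "time": {
                    "type": "number",
                    "description": "The amount of time to delay in milliseconds"
                }
            },
            "required": ["time"]
        }
    }
]
```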
You must define the actions to tell the model when to perform the action. Sequence augmentation is suitable for tasks that require multiple steps or complex logic. |
platform | Teams Conversation Ai Overview | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/bots/how-to/Teams conversational AI/teams-conversation-ai-overview.md | Teams AI Library offers an integrated fact-checking system to tackle bot halluc ## Feedback loop -Feedback loop allows the bot to validate and correct the output of the language model. It checks the structure and parameters of the plan or monologue that the model returns, and provides feedback on errors or missing information. The model then tries to fix its mistakes and returns a valid output. The feedback loop can improve the reliability and accuracy of the AI system, and reduce the chances of hallucination or invalid actions. +Feedback loop allows the bot to validate and correct the output of the language model. It checks the structure and parameters of the plan or monologue that the model returns and provides feedback on errors or missing information. The model then tries to fix its mistakes and returns a valid output. The feedback loop can improve the reliability and accuracy of the AI system and reduce the chances of hallucination or invalid actions. The following table lists the updates to the Teams AI library: -|Type |Description |JavaScript|C#| -||||| -|OpenAIModel |The OpenAIModel class lets you call both OAI and Azure OAI with one single component. New models can be defined for other model types like LLaMA2. | ✔️ |✔️| -|Embeddings | The OpenAIEmbeddings class lets you generate embeddings using either OAI or Azure OAI. New embeddings can be defined for things like OSS Embeddings. | ✔️ |❌| -|Prompts | A new object-based prompt system enables better token management and reduces the likelihood of overflowing the model's context window. | ✔️ |✔️| -| Augmentation | Augmentations simplify prompt engineering tasks by letting the developer add named augmentations to their prompt. Only `functions`, `sequence`, and `monologue` style augmentations are supported. | ✔️ |✔️| -|Data Sources | A new DataSource plugin makes it easy to add RAG to any prompt. You can register a named data source with the planner and then specify the name[s] of the data sources they wish to augment the prompt. | ✔️ |❌| +|Type |Description |.NET|JavaScript|Python| +|||||| +|OpenAIModel |The OpenAIModel class lets you call both OAI and Azure OAI with one single component. New models can be defined for other model types like LLaMA2. | ✔️ |✔️|✔️| +|Embeddings | The OpenAIEmbeddings class lets you generate embeddings using either OAI or Azure OAI. New embeddings can be defined for things like OSS Embeddings. | ❌ |✔️|✔️| +|Prompts | A new object-based prompt system enables better token management and reduces the likelihood of overflowing the model's context window. | ✔️ |✔️|✔️| +| Augmentation | Augmentations simplify prompt engineering tasks by letting the developer add named augmentations to their prompt. Only `functions`, `sequence`, and `monologue` style augmentations are supported. | ✔️ |✔️|✔️| +|Data Sources | A new DataSource plugin makes it easy to add RAG to any prompt. You can register a named data source with the planner and then specify the name[s] of the data sources you wish to use to augment the prompt. | ❌ |✔️|✔️| ## Code samples -|Sample name | Description |.NET | Node.js | -|-|--|--|--| -|Echo bot| This sample shows how to incorporate a basic conversational flow into a Microsoft Teams application using Bot Framework and the Teams AI library. 
| [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/01.messaging.echoBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/01.messaging.a.echoBot) | +|Sample name | Description |.NET | Node.js | Python| +|-|--|--|--|--| +|Echo bot| This sample shows how to incorporate a basic conversational flow into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/01.messaging.echoBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/01.messaging.a.echoBot) | [View](https://github.com/microsoft/teams-ai/tree/main/python/samples/01.messaging.a.echoBot)| | Search command message extension| This sample shows how to incorporate a basic Message Extension app into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/02.messageExtensions.a.searchCommand) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/02.messageExtensions.a.searchCommand) | | Typeahead bot| This sample shows how to incorporate the typeahead search functionality in Adaptive Cards into a Microsoft Teams application using Bot Framework and the Teams AI library. | [View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/03.adaptiveCards.a.typeAheadBot) | [View](https://github.com/microsoft/teams-ai/tree/main/js/samples/03.adaptiveCards.a.typeAheadBot) | |Conversational bot with AI: Teams chef|This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot is built to allow GPT to facilitate the conversation on its behalf, using only a natural language prompt file to guide it.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.a.teamsChefBot)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.a.teamsChefBot) |-|Message extensions: GPT-ME|This sample is a message extension (ME) for Microsoft Teams that uses the text-davinci-003 model to help users generate and update posts.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.b.messageExtensions.gptME)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.b.messageExtensions.AI-ME) | -|Light bot|This sample illustrates more complex conversational bot behavior in Microsoft Teams. The bot is built to allow GPT to facilitate the conversation on its behalf and manually defined responses, and maps user intents to user defined actions.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.c.actionMapping.lightBot)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.c.actionMapping.lightBot) | +|Message extensions: GPT-ME|This sample is a message extension (ME) for Microsoft Teams that uses the text-davinci-003 model to help users generate and update posts.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.b.messageExtensions.gptME)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.b.messageExtensions.AI-ME)|[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.b.messageExtensions.AI-ME)| +|Light bot|This sample illustrates more complex conversational bot behavior in Microsoft Teams. 
The bot is built to allow GPT to facilitate the conversation on its behalf, using manually defined responses, and maps user intents to user-defined actions.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.c.actionMapping.lightBot)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.c.actionMapping.lightBot)|[View](https://github.com/microsoft/teams-ai/tree/main/python/samples/04.ai.c.actionMapping.lightBot)| |List bot|This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot harnesses the power of AI to simplify your workflow and bring order to your daily tasks and showcases the action chaining capabilities.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.d.chainedActions.listBot)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.d.chainedActions.listBot) | |DevOps bot|This sample shows how to incorporate a basic conversational bot behavior in Microsoft Teams. The bot uses the gpt-3.5-turbo model to chat with Teams users and perform DevOps actions such as create, update, triage, and summarize work items.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.ai.e.chainedActions.devOpsBot)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.ai.e.chainedActions.devOpsBot) | |Twenty questions|This sample showcases the incredible capabilities of language models and the concept of user intent. Challenge your skills as the human player and try to guess a secret within 20 questions, while the AI-powered bot answers your queries about the secret.|[View](https://github.com/microsoft/teams-ai/tree/main/dotnet/samples/04.e.twentyQuestions)|[View](https://github.com/microsoft/teams-ai/tree/main/js/samples/04.e.twentyQuestions) | |
platform | Deep Link Application | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/concepts/build-and-test/deep-link-application.md | Deep links allow users to know more about an app and install it in different sco * [To go to a chat with the application](#deep-link-to-a-chat-with-the-application) * [To share deep link for a tab](#share-deep-link-for-a-tab) * [To open a dialog (referred to as task module in TeamsJS v1.x)](#deep-link-to-open-a-dialog)+* [To invoke Stageview](#deep-link-to-invoke-stageview) [!INCLUDE [sdk-include](~/includes/sdk-include.md)] A deep link doesn't open in the meeting side panel in the following scenarios: * The deep link is selected outside of the meeting window or component. * The deep link doesn't match the current meeting, for example, if the deep link is created from another meeting. -## Code Sample +## Deep link to invoke Stageview ++You can invoke Stageview through a deep link from your tab by wrapping the deep link URL in the `app.openLink(url)` API. The deep link can also be passed through an `OpenURL` action in the card. The `openMode` property defined in the API determines the Stageview response. For more information, see [invoke Stageview through deep link](../../tabs/tabs-link-unfurling.md#invoke-from-deep-link). ++## Code sample | Sample name | Description | .NET |Node.js| |-|-||-| |
platform | Deep Link Workflow | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/concepts/build-and-test/deep-link-workflow.md | In this article, you'll learn to create a deep link: * [To share content to stage in meetings](#generate-a-deep-link-to-share-content-to-stage-in-meetings) * [To meeting side panel](#deep-link-to-meeting-side-panel) * [To join a meeting](#deep-link-to-join-a-meeting)-* [Invoke Stageview through deep link](#invoke-stageview-through-deep-link) ## Deep link to start a new chat Deep link doesn't open in the meeting side panel in the following scenarios: A Teams app can read the URL for joining a meeting through Graph APIs. This deep link brings up the UI for the user to join the meeting. For more information, see [Get `onlineMeeting`](/graph/api/onlinemeeting-get#response-1) and [Get meeting details](~/apps-in-teams-meetings/meeting-apps-apis.md#get-meeting-details-api). -## Invoke Stageview through deep link --To invoke the Stageview through deep link from your tab, you must wrap the deep link URL in the `app.openLink(url)` API. The deep link can also be passed through an `OpenURL` action in the card. For more information see, [Stageview](~/tabs/tabs-link-unfurling.md#invoke-stageview-through-deep-link). --## Code Sample +## Code sample | Sample name | Description | .NET |Node.js| |-|-||-| |
platform | Share To Teams From Personal App Or Tab | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/concepts/build-and-test/share-to-teams-from-personal-app-or-tab.md | The following image shows the Share to Teams option: The Share to Teams button can be hosted or embedded in an app running inside Teams. You can add the Share to Teams button to an app created by using [Teams JavaScript client library](../../tabs/how-to/using-teams-client-library.md). > [!NOTE]-> Share to Teams isn't supported inside a [modal dialog](~/task-modules-and-cards/what-are-task-modules.md) (referred as task modules in TeamsJS v1.x) or [Stage View](~/tabs/tabs-link-unfurling.md#stageview) in Teams web client. You can't open a modal on top of another modal. +> Share to Teams isn't supported inside a [modal dialog](~/task-modules-and-cards/what-are-task-modules.md) (referred to as task modules in TeamsJS v1.x) or [Stageview](../../tabs/tabs-link-unfurling.md) in Teams web client. You can't open a modal on top of another modal. ## Response codes |
platform | Glossary | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/get-started/glossary.md | Common terms and definitions used in Microsoft Teams developer documentation. | [Configurable tab](../resources/schem#configurabletabs)| Configurable tabs are also known as channel or group tabs. Configurable tabs are used when your app experience has a team channel tab experience, which requires extra configuration before it's added. <br> **See also**: [App manifest](#a)| | [Configuration URL](../resources/schem#configurabletabs)| An app manifest property (`configurationUrl`) where the HTTPS URL to use when configuring the tab or connector. <br> **See also**: [App manifest](#a)| | [Collaboration app](../concepts/extensibility-points.md) | An app with capabilities for a user to work in a collaborative workspace with other users. <br> **See also**: [Standalone app](#s) |-| [Collaborative Stageview](../tabs/tabs-link-unfurling.md#collaborative-stageview) | Collaborative Stageview is an enhancement to Stageview that allows users to engage with your app content in a new Teams window. <br> **See also**: [Stageview](#s)| +| [Collaborative Stageview](../tabs/tabs-link-unfurling.md#collaborative-stageview) | Collaborative Stageview is an enhancement to Stageview that allows users to engage with your app content in a new Teams window accompanied by a side panel conversation. <br> **See also**: [Stageview](#s)| | [Compose Extensions](../resources/schem#composeextensions) | An app manifest property (`composeExtensions`) that refers to message extension capability. It's used when your extension needs to either authenticate or configure to continue. <br>**See also**: [App manifest](#a); [Message extension](#m) | | [Command Box](../resources/schem) | A type of context in app manifest (`commandBox`) that you can configure to invoke a message extension from Teams command box. <br>**See also**: [Message extension](#m) | | [Command lists](../resources/schem#botscommandlists)| An app manifest property (`commandLists`) that consists of a list of commands that the bot supplies, including their usage, description, and the scope for which the commands are valid. For each scope, you must use a specific command list. <br> **See also**: [App manifest](#a)| |
platform | Stageview Deep Link Query | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/includes/stageview-deep-link-query.md | +| Property name | Type | Character limit | Required | Description | +|---|---|---|---|---| +| entityId | String | 64 | Yes | A unique ID of the entity that the tab displays. | +| appId | String | 64 | Yes | The ID of the Teams app that's to be opened. | +| name | String | 128 | Optional | The display name of the tab in the channel interface. If no value is provided, the app name is displayed. | +| contentUrl | String | 2048 | Yes | The https:// URL that points to the entity UI to be displayed in Teams. | +| websiteUrl | String | 2048 | Yes | The https:// URL to point at, if a user selects to view in a browser. | +| threadId | String | 2048 | Optional | The ID defines the conversation shown in the Collaborative Stageview side panel. If no value is passed, `threadId` is inherited from the context where Collaborative Stageview is opened. <br> **Note**: The optional `threadId` parameter only supports chat threads. If a channel `threadId` is used, the side panel isn't displayed.| +| openMode | String | 2048 | Optional | The property defines the open behavior for stage content in the desktop client. | |
platform | Micro Capabilities For Website Links | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/messaging-extensions/how-to/micro-capabilities-for-website-links.md | Title: Micro-capabilities for website links -description: In this article, lean how to use micro-capability templates and schema.org metadata to unfurl rich previews for your links in Microsoft Teams. +description: In this article, learn how to use micro-capability templates and schema.org metadata to unfurl rich previews for your links in Microsoft Teams. ms.localizationpriority: high |
platform | Open Content In Stageview | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/tabs/open-content-in-stageview.md | + + Title: Open content in Stageview ++description: Learn about the types of Stageview, a full screen UI component invoked to surface your app content. Open content in multi-window experiences using deep links, Adaptive Cards, or Teams JavaScript client library (TeamsJS). +++ms.localizationpriority: high Last updated : 06/05/2023+++# Open content in Stageview ++Microsoft Teams provides multiple methods to open your app content in immersive canvas experiences. Stageview allows users to adopt multitasking inside Teams, for example, you can open your app content in a new Teams window with a specific chat in the side panel. Stageview is designed to: ++* Facilitate multitasking within Teams. +* Support collaboration in a Teams multi-window. +* Focus on specific tasks in a large modal experience. ++> [!NOTE] +> The article is based on Teams JavaScript client library (TeamsJS) version 2.0.x. If you're using an earlier version, see [TeamsJS](how-to/using-teams-client-library.md) for guidance between the latest and earlier versions. ++## Types of Stageview ++ Based on the UI and functionality, Stageview offers three ways to open your app content: ++* [Collaborative Stageview](#collaborative-stageview) +* [Stageview Multi-window](#stageview-multi-window) +* [Stageview Modal](#stageview-modal) ++### Collaborative Stageview ++Collaborative Stageview enables multitasking scenarios for your app content in Teams. Users can open and view your app content inside a new Teams window while accompanied by a side panel conversation. This view enables meaningful content engagement and collaboration from within the same window. ++**Best usage**: When the content is opened from a conversation such as chat, channel, or channel tab. +++### Stageview Multi-window ++Stageview Multi-window is useful for scenarios that require a user to multitask in Teams without the need for collaboration. This view opens the app content in a new Teams window without a side panel conversation allowing users to focus on their task. ++**Best usage**: When the content is opened from a nonconversational surface such as a personal app. +++### Stageview Modal ++Stageview Modal is a full-screen UI component used to render your app content inside the Teams main window. This view provides users with a focused experience to engage with the app content. Stageview Modal is useful for displaying rich content that doesn't require a user to multitask. It's the default view when Collaborative Stageview and Stageview Multi-window aren't supported. ++> [!NOTE] +> Teams web client supports Stageview Modal only. +++## Invoke Stageview ++You can invoke Stageview in Teams through one of the following methods and configure the expected Stageview response. The following table provides the default and defined response for each Stageview invoke method: ++| Invoke method | Default response | Defined response | +|---|---|---| +| [Adaptive Card](#invoke-collaborative-stageview-from-adaptive-card) | Opens in Collaborative Stageview. | Opens in Stageview Modal, if Collaborative Stageview or Stageview Multi-window isn't supported. | +| [stageView API](#invoke-from-stageview-api) | Opens in Collaborative Stageview. | Opens in the respective Stageview based on the `openMode` [defined](#openmode). | +| [Deep link](#invoke-from-deep-link)| Opens in Collaborative Stageview. 
| Opens in the respective Stageview based on the `openMode` [defined](#openmode). | ++<br> +<details> +<summary id="openmode"><b>openMode property</b></summary> ++`openMode` is a property in [StageViewParams interface](/javascript/api/@microsoft/teams-js/stageview.stageviewparams). The `openMode` property must be defined in a [stageView API](#invoke-from-stageview-api) or a [deep link](#invoke-from-deep-link) to determine the type of Stageview response. The `openMode` property has the following three values: ++* `popoutWithChat` +* `popout` +* `modal` ++The following table provides the Stageview response for each `openMode` value: ++| Input | Response | +| | | +| `openMode` defined as `popoutWithChat` | Opens in Collaborative Stageview with an associated side panel conversation. | +| `openMode` defined as `popout` | Opens in Stageview Multi-window without a side panel conversation. | +| `openMode` defined as `modal` | Opens in Stageview Modal. | ++When `openMode` isn't defined, the content opens by default in Collaborative Stageview with an associated side panel conversation. The fallback hierarchy for a Stageview response is `popoutWithChat` > `popout` > `modal`. ++> [!NOTE] +> +> * The `openMode` values are case sensitive. If you don't use the correct casing, the content opens in Stageview Modal. +> * When the pop-out experience isn't supported, for example, in the Teams web client, the content opens in Stageview Modal even when the `openMode` property is defined. ++</details> ++### Invoke Collaborative Stageview from Adaptive Card ++Collaborative Stageview from an Adaptive Card allows users to engage with your content while continuing the conversation flow. If Collaborative Stageview is invoked from an Adaptive Card JSON in the Teams web client, it opens in Stageview Modal. ++The following steps help you understand how Collaborative Stageview is invoked from an Adaptive Card: ++1. When the user shares a URL for app content in a Teams chat, the bot receives a `composeExtensions/queryLink` invoke request. The bot returns an Adaptive Card with the type `tab/tabInfoAction`. ++1. After the user selects the action button on the Adaptive Card, Collaborative Stageview opens based on the content in the Adaptive Card. +++The following JSON code is an example of how to create an action button in an Adaptive Card: ++```json +{ + "type": "Action.Submit", + "title": "Open", + "data": { + "msteams": { + "type": "invoke", + "value": { + "type": "tab/tabInfoAction", + "tabInfo": { + "contentUrl": "contentUrl", + "websiteUrl": "websiteUrl", + "name": "Sales Report", + "entityId": "entityId" + } + } + } + } +} +``` ++**Best practices to create an Adaptive Card** ++* The content URL must be within the list of `validDomains` in your app manifest. +* The invoke request type must be `composeExtensions/queryLink`. +* The `invoke` workflow must be similar to the `appLinking` workflow. +* The `Action.Submit` must be configured as `Open` to maintain consistency. ++If your app isn't optimized to work in the Teams mobile client, Stageview for apps distributed through the [Microsoft Teams Store](../concepts/deploy-and-publish/apps-publish-overview.md) opens in a default web browser. 
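The step sequence above maps to only a handful of lines of bot code. The following is a minimal sketch of how a bot built with the Bot Framework SDK for JavaScript might return such a card from the `composeExtensions/queryLink` invoke; the card body, display name, and `entityId` are illustrative placeholders rather than values from this article.

```typescript
import {
  AppBasedLinkQuery,
  CardFactory,
  MessagingExtensionResponse,
  TeamsActivityHandler,
  TurnContext,
} from "botbuilder";

class StageviewBot extends TeamsActivityHandler {
  // Invoked when a user pastes a URL that matches a domain registered
  // under messageHandlers in the app manifest (composeExtensions/queryLink).
  protected async handleTeamsAppBasedLinkQuery(
    _context: TurnContext,
    query: AppBasedLinkQuery
  ): Promise<MessagingExtensionResponse> {
    const url = query.url ?? ""; // URL the user pasted; must be within validDomains

    const card = CardFactory.adaptiveCard({
      type: "AdaptiveCard",
      version: "1.5",
      body: [{ type: "TextBlock", text: "Sales Report", weight: "Bolder" }],
      actions: [
        {
          type: "Action.Submit",
          title: "Open", // keep the title as "Open" for consistency
          data: {
            msteams: {
              type: "invoke",
              value: {
                type: "tab/tabInfoAction",
                tabInfo: {
                  contentUrl: url,
                  websiteUrl: url,
                  name: "Sales Report", // placeholder
                  entityId: "salesReport", // placeholder
                },
              },
            },
          },
        },
      ],
    });

    // Return the card as a link unfurling result.
    return {
      composeExtension: {
        type: "result",
        attachmentLayout: "list",
        attachments: [card],
      },
    };
  }
}
```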
In a Collaborative Stageview experience, the side panel conversation is the same thread from which the Stageview was invoked, such as a chat or group chat. ++> [!NOTE] +> The stageView API supports an optional `threadId` parameter that allows you to bring a specific conversation to the Collaborative Stageview side panel. Mapping `contentUrl` to `threadId` allows you to persist a conversation alongside the content. ++The following samples show each `openMode` value in the stageView API: ++# [popoutWithChat](#tab/withchat) ++The `openMode` property is defined as `popoutWithChat` in [StageViewParams](/javascript/api/@microsoft/teams-js/stageview.stageviewparams) to open in Collaborative Stageview. ++ ```json + { + "appId": "2c19df50-1c3c-11ea-9327-cd28e4b6f7ba", + "contentUrl": "https://teams-test-tab.azurewebsites.net", + "title": "Test tab ordering", + "websiteUrl": "https://teams-test-tab.azurewebsites.net", + "openMode": "popoutWithChat" + } + ``` ++# [popout](#tab/popout) ++The `openMode` property is defined as `popout` in [StageViewParams](/javascript/api/@microsoft/teams-js/stageview.stageviewparams) to open in Stageview Multi-window. ++ ```json + { + "appId": "2c19df50-1c3c-11ea-9327-cd28e4b6f7ba", + "contentUrl": "https://teams-test-tab.azurewebsites.net", + "title": "Test tab ordering", + "websiteUrl": "https://teams-test-tab.azurewebsites.net", + "openMode": "popout" + } + ``` ++# [modal](#tab/modal) ++The `openMode` property is defined as `modal` in [StageViewParams](/javascript/api/@microsoft/teams-js/stageview.stageviewparams) to open in Stageview Modal. ++ ```json + { + "appId": "2c19df50-1c3c-11ea-9327-cd28e4b6f7ba", + "contentUrl": "https://teams-test-tab.azurewebsites.net", + "title": "Test tab ordering", + "websiteUrl": "https://teams-test-tab.azurewebsites.net", + "openMode": "modal" + } + ``` ++++When `openMode` isn't defined in [StageViewParams](/javascript/api/@microsoft/teams-js/stageview.stageviewparams), the default response is Collaborative Stageview. ++ ```json + { + "appId": "2c19df50-1c3c-11ea-9327-cd28e4b6f7ba", + "contentUrl": "https://teams-test-tab.azurewebsites.net", + "title": "Test tab ordering", + "websiteUrl": "https://teams-test-tab.azurewebsites.net" + } + ``` ++For more information on the stageView API, see [stageView module](/javascript/api/@microsoft/teams-js/stageview). ++#### stageView API parameters +++### Invoke from deep link ++To invoke Stageview through a deep link from your tab or personal app, wrap the deep link URL in the [app.openLink(url) API](/javascript/api/%40microsoft/teams-js/app#@microsoft-teams-js-app-openlink) and define the `openMode` property for the chat content to open. When the `openMode` property isn't specified, the Stageview response from a deep link defaults to Collaborative Stageview. ++To display a specific chat in the side panel, you must specify a `threadId`. Otherwise, the side panel conversation brings the group chat or channel thread from which the deep link is invoked. ++> [!NOTE] +> +> * All deep links must be encoded before you paste the URL. Unencoded URLs aren't supported. +> * When you invoke Stageview from a certain context, ensure that your app works in that context. +> * When adding a `threadId`, ensure your app works in the context of the `threadId` that's passed. If the context fails, the experience falls back to the personal context. 
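Before looking at the deep link syntax, here's a minimal TeamsJS sketch of the wrapping described above: it builds an encoded Collaborative Stageview deep link and passes it to `app.openLink`. The `appId`, URLs, and name are placeholder values; the syntax itself is shown in the next section.

```typescript
import { app } from "@microsoft/teams-js";

async function openCollaborativeStageview(): Promise<void> {
  await app.initialize();

  // Placeholder values; replace with your app's ID and content URLs.
  const appId = "00000000-0000-0000-0000-000000000000";
  const context = {
    contentUrl: "https://contoso.example.com/report",
    websiteUrl: "https://contoso.example.com/report",
    name: "Contoso",
    openMode: "popoutWithChat", // case sensitive
  };

  // Deep links must be encoded; unencoded URLs aren't supported.
  const deepLink =
    `https://teams.microsoft.com/l/stage/${appId}/0` +
    `?context=${encodeURIComponent(JSON.stringify(context))}`;

  await app.openLink(deepLink);
}
```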
++#### Syntax ++**Deep link syntax for Collaborative Stageview:** ++`https://teams.microsoft.com/l/stage/{appId}/0?context={"contentUrl":"contentUrl","websiteUrl":"websiteUrl","name":"Contoso","openMode":"popoutWithChat","threadId":"threadId"}` ++**Encoded deep link syntax for Collaborative Stageview:** ++`https://teams.microsoft.com/l/stage/%7BappId%7D/0?context=%7B%22contentUrl%22:%22contentUrl%22,%22websiteUrl%22:%22websiteUrl%22,%22name%22:%22Contoso%22,%22openMode%22:%22popoutWithChat%22,%22threadId%22:%22threadId%22%7D` ++<br> +<details> +<summary><b>Example</b></summary> ++**Encoded deep link URL to invoke Collaborative Stageview:** ++`https://teams.microsoft.com/l/stage/6d621545-9c65-493c-b069-2b978b37c117/0?context=%7B%22appId%22%3A%226d621545-9c65-493c-b069-2b978b37c117%22%2C%22contentUrl%22%3A%22https%3A%2F%2F3282-115-111-228-84.ngrok-free.app%22%2C%22websiteUrl%22%3A%22https%3A%2F%2F3282-115-111-228-84.ngrok-free.app%22%2C%22name%22%3A%22DemoStageView%22%2C%22openMode%22%3A%22popoutWithChat%22%2C%22threadId%22%3A%2219%3Abe817b823c204cde8aa174ae146251dd%40thread.v2%22%7D` ++</details> ++#### Deep link query parameters +++Whether you want to facilitate multitasking, enhance collaboration, or provide a focused user experience, Stageview has a mode to suit your requirements. ++## FAQs ++</br> ++<details> ++<summary>Which Stageview should I use?</summary> ++Collaborative Stageview allows users to open content along with a side panel conversation in a Teams window. This view is best suited for most collaboration scenarios. ++</br> ++</details> ++<details> ++<summary>What's the difference between Stageview Modal and dialogs?</summary> ++Stageview Modal is useful to display rich content to users, such as a page, dashboard, or file. <br> Dialogs (referred to as task modules in TeamsJS v1.x) are useful to display messages that need users' attention or collect information required to move to the next step. ++</br> ++</details> ++<details> ++<summary>When Stageview is invoked, the content opens in Collaborative Stageview but gets loaded in the main Teams window instead of a new window. How do I open the content in a new window?</summary> ++Ensure that your `contentUrl` domain is accurately reflected in the manifest `validDomains` property. For more information, see [app manifest schema](../resources/schem). ++</br> ++</details> ++<details> ++<summary>Why isn't any content displayed in a new Teams window even when `contentUrl` matches with `validDomains`?</summary> ++Call `app.notifySuccess()` in all iframe-based contents to notify Teams that your app is loaded successfully. If applicable, Teams hides the loading indicator. If `notifySuccess` isn't called within 30 seconds, Teams assumes that the app timed out and displays an error screen with a retry option. For app updates, this step is applicable for tabs that are already configured. If you don't perform this step, an error screen is displayed for the existing users. ++</br> ++</details> ++<details> ++<summary>Can I include a deep link in my `contentUrl`?</summary> ++No, deep links aren't supported in `contentUrl`. ++</br> ++</details> ++<details> ++<summary>How do I keep a specific thread shown alongside my content?</summary> ++Collaborative Stageview from a deep link or a stageView API comes with the additional `threadId` parameter. You can explicitly define the chat thread to be displayed in the side panel for your specific `contentUrl`. 
For more information about retrieving a `threadId`, see [get conversation thread](/graph/api/group-get-thread). ++</br> ++</details> ++## See also ++[Create deep links](../concepts/build-and-test/deep-links.md) |
platform | Tabs Link Unfurling | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/tabs/tabs-link-unfurling.md | - Title: Tabs link unfurling and Stageview- -description: Learn about Stageview and Collaborative Stageview, a full screen UI component invoked to surface your web content. Link unfurling is used to turn URLs into a tab using Adaptive Cards. -- Previously updated : 06/05/2023---# Tabs link unfurling and Stageview --Stageview is a user interface (UI) component that allows you to render content in a full screen within Teams or as a new window. --> [!NOTE] -> This article is based on Microsoft Teams JavaScript client library version 2.0.x. If you are using an earlier version, see [TeamsJS](how-to/using-teams-client-library.md) for guidance on the differences between the latest TeamsJS and earlier versions. --<!--It allows users to maintain their context within their new window experience while continuing group chat conversation. <br> Developers have to enable Tab link Unfurling for their app to get Stageview update for free. Users are still able to pin the app content as a tab. It's a new entry point to pinning app content but it will not change the existing functionality of tabs or pinning. ---## Stageview --Stageview is a full screen UI component that can be used to render your app content, providing users with a focused experience to engage with your app. Stageview can be invoked from either an Adaptive Card or a deep link, in both chats and channels. --* When users invoke Stageview from Adaptive Cards, Stageview opens in a new Teams window along with the originating chat or channel thread in the side panel. This new app canvas is called the [Collaborative Stageview](#collaborative-stageview). The Collaborative Stageview allows users to multi-task and collaborate with each other. --* The Collaborative Stageview surfaces the originating chat or thread from where it was invoked and helps the users to engage with content and conversation side-by-side. --The following image is an example of the Collaborative Stageview: ---### Stageview vs. Dialog --| Stageview | Dialog (referred as task module in TeamsJS v1.x)| -|:--|:--| -| Stageview is useful to display rich content to the users, such as a page, a dashboard, or a file. It provides rich features that help to render your content in the new pop-up window and the full-screen canvas. <br><br> After your app content opens in Stageview, users can choose to pin the content as a tab. <br><br> For more collaborative capabilities, opening your content in Collaborative Stageview (through an Adaptive Card) allows users to engage with content and conversation side-by-side, while enabling multi-window scenarios.| [Dialog](../task-modules-and-cards/task-modules/task-modules-tabs.md) is especially useful to display messages that need users' attention, or collect information required to move to the next step.| --> [!WARNING] -> Microsoft's cloud services, including web versions of Teams (*teams.microsoft.com*), Outlook (*outlook.com*), and Microsoft 365 (*microsoft365.com*) domains are migrating to the *cloud.microsoft* domain. Perform the following steps before June 2024 to ensure your app continues to render on the Teams web client: -> -> 1. Update TeamsJS SDK to v.2.19.0 or higher. For more information about the latest release of TeamsJS SDK, see [Microsoft Teams JavaScript client library](https://www.npmjs.com/package/@microsoft/teams-js). -> -> 2. 
Update your [Content Security Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) headers in your Teams app to allow your app to access the ***teams.cloud.microsoft*** domain. If your Teams app extends across Outlook and Microsoft 365, ensure you allow your app to access ***teams.cloud.microsoft***, ***outlook.cloud.microsoft***, and ***m365.cloud.microsoft*** domains. --### Invoke Stageview through deep link --To invoke the Stageview through deep link from your tab, you must wrap the deep link URL in the `app.openLink(url)` API. Stageview from a deep link always defaults to the modal experience (and not a Teams window). While the Stageview deep link can be passed through an `OpenURL` action in the card, the Stageview deep link is intended for the tab canvas. For Stageview from Adaptive Cards, it's recommended to follow the JSON [Adaptive Card example](#example). --The following image is an example of Stageview when it's invoked from a deep link: ---#### Syntax --Following is the deep link syntax for Stageview: --`https://teams.microsoft.com/l/stage/{appId}/0?context={"contentUrl":"contentUrl","websiteUrl":"websiteUrl","name":"Contoso"}` --#### Examples --Following are the deep link examples to invoke Stageview: --<br> --<details> -<summary><b>Example 1</b></summary> -URL with threadId --Unencoded URL: --`https://teams.microsoft.com/l/stage/be411542-2bd5-46fb-8deb-a3d5f85156f6/0?context={"contentUrl":"https://teams-alb.wakelet.com/teams/collection/e4173826-5dae-4de0-b77d-bfabafd6f191","websiteUrl":"https://teams-alb.wakelet.com/teams/collection/e4173826-5dae-4de0-b77d-bfabafd6f191?standalone=true","title":"Quotes: Miscellaneous","threadId":"19:9UryYW9rjwnq-vwmBcexGjN1zQSNX0Y4oEAgtUC7WI81@thread.tacv2"}` --Encoded URL: --`https://teams.microsoft.com/l/stage/be411542-2bd5-46fb-8deb-a3d5f85156f6/0?context=%7B%22contentUrl%22%3A%22https%3A%2F%2Fteams-alb.wakelet.com%2Fteams%2Fcollection%2Fe4173826-5dae-4de0-b77d-bfabafd6f191%22%2C%22websiteUrl%22%3A%22https%3A%2F%2Fteams-alb.wakelet.com%2Fteams%2Fcollection%2Fe4173826-5dae-4de0-b77d-bfabafd6f191%3Fstandalone%3Dtrue%22%2C%22title%22%3A%22Quotes%3A%20Miscellaneous%22%2C%22threadId%22%3A%2219:9UryYW9rjwnq-vwmBcexGjN1zQSNX0Y4oEAgtUC7WI81@thread.tacv2%22%7D` --</details> --<br> --<details> -<summary><b>Example 2</b></summary> -URL without threadId --Unencoded URL: --`https://teams.microsoft.com/l/stage/43f56af0-8615-49e6-9635-7bea3b5802c2/0?context={"contentUrl":"https://teams-alb.wakelet.com/teams/collection/e4173826-5dae-4de0-b77d-bfabafd6f191","websiteUrl":"https://teams-alb.wakelet.com/teams/collection/e4173826-5dae-4de0-b77d-bfabafd6f191?standalone=true","title":"Quotes: Miscellaneous"}` --Encoded URL: --`https://teams.microsoft.com/l/stage/43f56af0-8615-49e6-9635-7bea3b5802c2/0?context=%7B%22contentUrl%22%3A%22https%3A%2F%2Fteams-alb.wakelet.com%2Fteams%2Fcollection%2Fe4173826-5dae-4de0-b77d-bfabafd6f191%22%2C%22websiteUrl%22%3A%22https%3A%2F%2Fteams-alb.wakelet.com%2Fteams%2Fcollection%2Fe4173826-5dae-4de0-b77d-bfabafd6f191%3Fstandalone%3Dtrue%22%2C%22title%22%3A%22Quotes%3A%20Miscellaneous%22%7D` --</details> --<br> --> [!NOTE] -> -> * All deep links must be encoded before pasting the URL.The unencoded URLs aren't supported. -> * The `name` property is optional in a deep link. If not included, the app name replaces it. -> * The deep link can also be passed through an `OpenURL` action. -> * When you launch a Stageview from a certain context, ensure that your app works in that context. 
For example, if the Stageview is launched from a personal app, you must ensure your app has a personal scope. --## Collaborative Stageview --> [!NOTE] -> Collaborative Stageview isn't supported in Teams web and mobile clients. --Collaborative Stageview is an enhancement to Stageview that allows users to engage with your app content in a new Teams window. When a user opens Collaborative Stageview from an Adaptive Card, the app content pops-out in a new Teams window instead of the default Stageview modal. --In the new Teams window, the Collaborative Stageview also opens a chat in the side panel. The chat brings the conversation from the group chat or channel thread where the users' Adaptive Card is originally shared. Users can continue to collaborate directly within the new window. --The following image is an example of Collaborative Stageview: ---### Advantages of Collaborative Stageview --Collaborative Stageview helps unlock multi-tasking scenarios with your app content in Teams. Users can open and view your app content inside a new Teams window, while having meaningful conversation and collaboration from the same window. The ability to engage with content while also having a conversation on the content leads to higher user engagement for your app. --|Feature |Notes |Desktop |Web |Mobile| -| |:-- |:-- |:- |:-- | -|Collaborative Stageview| Invoke from Adaptive Card action. |Chat or Channel: Opens Teams pop-out window with chat pane.| Opens Stageview modal. |Opens Stageview modal.| -|Stageview |Invoke from Deep link. Only recommended when calling from your tab app, and not an Adaptive Card. |Opens Stageview modal.| Opens Stageview modal.| Opens Stageview modal.| --### Invoke Collaborative Stageview from Adaptive Card --When the user enters an app content URL in a chat, the bot is invoked and returns an Adaptive Card with the option to open the URL. Depending on the context and the users' client, the URL opens in the appropriate Stageview UI. When the Collaborative Stageview is invoked from an Adaptive Card in a chat or channel (and not from a deep link), a new window opens. --The following image is an example of a Collaborative Stageview from an Adaptive Card: ----#### Example --The following is a JSON code example to create a Collaborative Stageview button in an Adaptive Card: --```json -{ - "type": "Action.Submit", - "title": "Open", - "data": { - "msteams": { - "type": "invoke", - "value": { - "type": "tab/tabInfoAction", - "tabInfo": { - "contentUrl": "contentUrl", - "websiteUrl": "websiteUrl", - "name": "Sales Report", - "entityId": "entityId" - } - } - } - } -} --``` --The following steps show how to invoke Collaborative Stageview: --* When the user shares a URL in a Teams chat, the bot receives an `composeExtensions/queryLink` invoke request. The bot returns an Adaptive Card with the type `tab/tabInfoAction`. -* When the user selects the action button on the Adaptive Card, Collaborative Stageview opens based on the content in the Adaptive Card. --> [!NOTE] -> -> * Passing a Stageview deep link into an Adaptive Card doesn't open the Collaborative Stageview. A Stageview deep link always opens the Stageview Modal. -> * Ensure that the URL of the content is within the list of `validDomains` in your app manifest. -> * The invoke request type must be a `composeExtensions/queryLink`. -> * `invoke` workflow is similar to the `appLinking` workflow. -> * To maintain consistency, it is recommended to name `Action.Submit` as `Open`. 
-> * `websiteUrl` is a required property to be passed in the `TabInfo` object. -> * If you don't have an optimized mobile experience for Teams mobile client, the Stageview for apps distributed through the [Microsoft Teams Store](../concepts/deploy-and-publish/apps-publish-overview.md) opens in a default web browser. The browser opens the URL specified in the `websiteUrl` parameter of the `TabInfo` object. --#### Query parameters --| Property name | Type | Number of characters | Description | -|:--|:|:|:--| -| `entityId` | String | 64 | This property is a unique identifier for the entity that the tab displays and it's a required field.| -| `name` | String | 128 | This property is the display name of the tab in the channel interface and it's an optional field.| -| `contentUrl` | String | 2048 | This property is the `https://` URL that points to the entity UI to be displayed in the Teams canvas and it's a required field.| -| `websiteUrl?` | String | 2048 | This property is the `https://` URL to point at, if a user selects to view in a browser and it's a required field.| -| `removeUrl?` | String | 2048 | This property is the `https://` URL that points to the UI to be displayed when the user deletes the tab and it's an optional field.| --## Code sample --| Sample name | Description | .NET |Node.js| Manifest| -|-|-||-|-| -|Tab in Stageview |Microsoft Teams tab sample app for demonstrating tab in Stageview.|[View](https://github.com/OfficeDev/Microsoft-Teams-Samples/tree/main/samples/tab-stage-view/csharp)|[View](https://github.com/OfficeDev/Microsoft-Teams-Samples/tree/main/samples/tab-stage-view/nodejs)|[View](https://github.com/OfficeDev/Microsoft-Teams-Samples/tree/main/samples/tab-stage-view/csharp/demo-manifest/tab-stage-view.zip)| --## Next step --> [!div class="nextstepaction"] -> [Create conversational tabs](~/tabs/how-to/conversational-tabs.md) --## See also --* [Build tabs for Teams](what-are-tabs.md) -* [Add link unfurling](../messaging-extensions/how-to/link-unfurling.md) -* [Build tabs with Adaptive Cards](how-to/build-adaptive-card-tabs.md) -* [Create deep links](../concepts/build-and-test/deep-links.md) -* [Cards](../task-modules-and-cards/what-are-cards.md) -* [App manifest schema for Teams](../resources/schem) |
platform | What Are Tabs | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/tabs/what-are-tabs.md | A custom tab is declared in the app manifest of your app package. For each webpa Whether you choose to expose your tab within the channel, group, or personal scope, you must present an <iframe\> HTML [content page](~/tabs/how-to/create-tab-pages/content-page.md) in your tab. For static tabs, the content URL is set directly in your Teams [app manifest](../resources/schem#statictabs) by the `contentUrl` property in the `staticTabs` array. Your tab's content is the same for all users. -> [!Note] -> Teams app doesn't recognize sub iframes. Therefore, it'll not load if there is an iframe within the tab app. +> [!NOTE] +> Teams apps can't use native plugins because they run inside sandboxed iframes. For channel or group tabs, you can also create an extra configuration page. This page allows you to configure the content page URL, typically by using URL query string parameters to load the appropriate content for that context. This is because your channel or group tab can be added to multiple teams or group chats. On each subsequent install, your users can configure the tab, allowing you to tailor the experience as required. When users add or configure a tab, a URL is associated with the tab that is presented in the Teams user interface (UI). Configuring a tab simply adds more parameters to that URL. For example, when you add the Azure Boards tab, the configuration page allows you to choose which board the tab loads. The configuration page URL is specified by the `configurationUrl` property in the `configurableTabs` array in your [app manifest](../resources/schem#configurabletabs). -For static tabs you can pin a `contentUrl` to chat or meeting tabs. This allows you to skip the mandatory configuration dialog and get your users to use the app much faster. You can also change the `contentUrl` at runtime. This allows you to build one tab object that works in all surface areas of Teams. For more information, see [migrate your configurable tab to static tab.](~/tabs/how-to/create-channel-group-tab.md#migrate-your-configurable-tab-to-static-tab) +For static tabs, you can pin a `contentUrl` to chat or meeting tabs. This allows you to skip the mandatory configuration dialog and get your users to use the app faster. You can also change the `contentUrl` at runtime. This allows you to build one tab object that works in all surface areas of Teams. For more information, see [migrate your configurable tab to static tab.](~/tabs/how-to/create-channel-group-tab.md#migrate-your-configurable-tab-to-static-tab) You can have multiple channels or group tabs, and up to 16 static tabs per app. |
platform | Teams Faq | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/teams-faq.md | You can test or validate the Adaptive Card schema using the **Adaptive cards edi App registration is disabled for the user or the user doesn't have enough permissions to create an app. For more information, see [limitations and known issues.](~/bots/bot-features.md#limitations-and-known-issues) </details> +## Live share ++<details> +<summary>Can I use my own Azure Fluid Relay service?</summary> ++Yes! When initializing Live Share, you can define your own `AzureConnectionConfig`. Live Share associates containers you create with meetings, but you need to implement the `ITokenProvider` interface to sign tokens for your containers. For example, you can use a provided `AzureFunctionTokenProvider`, which uses an Azure cloud function to request an access token from a server. ++While most of you find it beneficial to use our free hosted service, there might still be times where it's beneficial to use your own Azure Fluid Relay service for your Live Share app. Consider using a custom AFR service connection if you: ++* Require storage of data in Fluid containers beyond the lifetime of a meeting. +* Transmit sensitive data through the service that requires a custom security policy. +* Develop features through Fluid Framework, for example, `SharedMap`, for your application outside of Teams. ++For more information, see [how to guide](apps-in-teams-meetings/teams-live-share-how-to/how-to-custom-azure-fluid-relay.md) or visit the [Azure Fluid Relay documentation](/azure/azure-fluid-relay/). +<br> + +</details> +<details> +<summary>How long is data stored in Live Share's hosted service accessible?</summary> ++Any data sent or stored through Fluid containers created by Live Share's hosted Azure Fluid Relay service is accessible for 24 hours. If you want to persist data beyond 24 hours, you can replace our hosted Azure Fluid Relay service with your own. Alternatively, you can use your own storage provider in parallel to Live Share's hosted service. +<br> + +</details> +<details> +<summary>What meeting types does Live Share support?</summary> ++Live Share supports scheduled meetings, one-on-one calls, group calls, and meet now. Channel meetings aren't yet supported. +<br> + +</details> +<details> +<summary>Will Live Share's media package work with DRM content?</summary> ++Live Share's media package doesn't work with DRM content. Currently, Teams doesn't support encrypted media for tab applications on desktop. Chrome, Edge, and mobile clients are supported. ++For more information, you can [track the issue here](https://github.com/microsoft/live-share-sdk/issues/14). +<br> + +</details> +<details> +<summary>How many people can attend a Live Share session?</summary> ++Currently, Live Share supports a maximum of 100 attendees per session. If it's something you're interested in, you can [start a discussion here](https://github.com/microsoft/live-share-sdk/discussions). +<br> + +</details> +<details> +<summary>Can I use Live Share's data structures outside of Teams?</summary> ++Currently, Live Share packages require the Teams Client SDK to function properly. Features in `@microsoft/live-share` or `@microsoft/live-share-media` don't work outside Microsoft Teams. If this is something you're interested in, you can [start a discussion here](https://github.com/microsoft/live-share-sdk/discussions). 
+<br> + +</details> +<details> +<summary>Can I use multiple Fluid containers?</summary> ++Currently, Live Share only supports having one container using our provided Azure Fluid Relay service. However, it's possible to use both a Live Share container and a container created by your own Azure Fluid Relay instance. +<br> + +</details> +<details> +<summary>Can I change my Fluid container schema after creating the container?</summary> ++Currently, Live Share doesn't support adding new `initialObjects` to the Fluid `ContainerSchema` after creating or joining a container. Because Live Share sessions are short-lived, this is most commonly an issue during development after adding new features to your app. ++> [!NOTE] +> If you are using the `dynamicObjectTypes` property in the `ContainerSchema`, you can add new types at any point. If you later remove types from the schema, existing DDS instances of those types will gracefully fail. ++To fix errors resulting from changes to `initialObjects` when testing locally in your browser, remove the hashed container ID from your URL and reload the page. If you're testing in a Teams meeting, start a new meeting and try again. ++If you plan to update your app with new `SharedObject` or `LiveObject` instances frequently, you should consider how you deploy new schema changes to production. While the actual risk is relatively low and short-lived, there might be active sessions at the time you roll out the change. Existing users in the session shouldn't be impacted, but users joining that session after you deployed a breaking change might have issues connecting to the session. To mitigate this, you might consider some of the following solutions: ++* Deploy schema changes for your web application outside of normal business hours. +* Use `dynamicObjectTypes` for any changes made to your schema, rather than changing `initialObjects`. ++> [!NOTE] +> Live Share does not currently support versioning your `ContainerSchema`, nor does it have any APIs dedicated to migrations. ++<br> + +</details> +<details> +<summary>Are there limits to how many change events I can emit through Live Share?</summary> ++While Live Share is in Preview, limits on events emitted through Live Share aren't enforced. For optimal performance, you must debounce changes emitted through `SharedObject` or `LiveObject` instances to one message per 50 milliseconds or more. This is especially important when sending changes based on mouse or touch coordinates, such as when synchronizing cursor positions, inking, and dragging objects around a page. +<br> + +</details> ++<details> +<summary>Is Live Share supported for Government Community Cloud (GCC), Government Community Cloud High (GCC-High), and Department of Defense (DOD) tenants?</summary> ++Live Share isn't supported for GCC, GCC-High, and DOD tenants. ++<br> ++</details> ++<details> +<summary>Does Live Share support external and guest users?</summary> ++Yes, Live Share supports guest and external users for most meeting types. However, guest users aren't supported in channel meetings. ++<br> ++</details> ++<details> +<summary>Does Live Share support Teams Rooms devices?</summary> ++No, Live Share doesn't support Teams Rooms devices. ++</details> ++<details> +<summary>Do Live Share apps support meeting recordings?</summary> ++No, Live Share doesn't support meeting recordings. ++</details> + ## Microsoft 365 Chat <details> Ensure your app manifest (previously called Teams app manifest) is descriptive. 
If the problem continues, use the thumbs down indicator in the Microsoft 365 Chat reply and prefix your reply with [MessageExtension]. </details>-</br> <details> <summary> What descriptions should I include in app manifest? </summary> Here's an example description that works for NPM Finder. ``` </details>-</br> <details> <summary> Microsoft 365 Chat includes my plugin in the response, but the Microsoft 365 Chat's response doesn't meet my expectations. What should I do?</summary> Here's an example description that works for NPM Finder. Use the downvoting option in the Microsoft 365 Chat reply and prefix your reply with [MessageExtension]. </details>-</br> <details> <summary> Can I build my own Teams message extension? </summary> Yes, you can. Ensure that you have a descriptive app manifest and upload the app to Outlook and interact with it.</br> </details>-</br> <details> <summary> How can I get my existing Teams message extension to work with Microsoft 365 Chat? </summary> Yes, you can. Ensure that you have a descriptive app manifest and upload the app 1. Upload the app to Outlook. </details>-</br> - <details> <summary>What are the guidelines for Teams apps extensible as plugin for Microsoft Copilot for Microsoft 365? </summary> You can read the [Teams Store validation guidelines](concepts/deploy-and-publish/appsource/prepare/teams-store-validation-guidelines.md#teams-apps-extensible-as-plugin-for-microsoft-copilot-for-microsoft-365) for Teams apps extensible as plugin for Microsoft Copilot for Microsoft 365. </details>-</br> - <details> <summary> What is the certification process?</summary> After publishing the plugin, start the App Compliance flow in Partner Center. If To start the [Microsoft 365 Certification process](/microsoft-365-app-certification/docs/certification), upload initial documents that define the assessment scope for the plugin and operating environment. Depending on the scope, provide evidence for specific controls related to application security, operational security, and data handling or privacy. If you build your plugin on Azure, you can use the App Compliance Automation Tool (ACAT) to scan the environment and generate evidence for several controls, reducing the manual workload. For more information, see [App Compliance Automation Tool for Microsoft 365](/microsoft-365-app-certification/docs/acat-overview). </details>-</br> <details> <summary> How are plugins certified?</summary> After the app passes the proactive validation, developers of both existing and new message extensions that aren't certified will be encouraged to certify their plugin. This is communicated through an email confirming their message extension is validated. </details>-</br> <details> <summary> How are new plugins certified?</summary> Developers will be encouraged to certify their new plugin after successfully completing validation. </details>-</br> - <details> <summary>How can I create or upgrade a message extension plugin for Copilot for Microsoft 365?</summary> You can [create or upgrade a message extension as a plugin in Copilot for Microsoft 365](messaging-extensions/build-bot-based-plugin.md) to interact with third-party tools and services and achieve more with Copilot for Microsoft 365. Additionally, your extensions must meet the standards for compliance, performance, security, and user experience outlined in [guidelines to create or upgrade a message extension plugin for Copilot for Microsoft 365](messaging-extensions/high-quality-message-extension.md). 
</details> -## Live share --<details> -<summary>Can I use my own Azure Fluid Relay service?</summary> --Yes! When initializing Live Share, you can define your own `AzureConnectionConfig`. Live Share associates containers you create with meetings, but you need to implement the `ITokenProvider` interface to sign tokens for your containers. For example, you can use a provided `AzureFunctionTokenProvider`, which uses an Azure cloud function to request an access token from a server. --While most of you find it beneficial to use our free hosted service, there might still be times where it's beneficial to use your own Azure Fluid Relay service for your Live Share app. Consider using a custom AFR service connection if you: --* Require storage of data in Fluid containers beyond the lifetime of a meeting. -* Transmit sensitive data through the service that requires a custom security policy. -* Develop features through Fluid Framework, for example, `SharedMap`, for your application outside of Teams. --For more information, see [how to guide](apps-in-teams-meetings/teams-live-share-how-to/how-to-custom-azure-fluid-relay.md) or visit the [Azure Fluid Relay documentation](/azure/azure-fluid-relay/). -<br> - -</details> -<details> -<summary>How long is data stored in Live Share's hosted service accessible?</summary> --Any data sent or stored through Fluid containers created by Live Share's hosted Azure Fluid Relay service is accessible for 24 hours. If you want to persist data beyond 24 hours, you can replace our hosted Azure Fluid Relay service with your own. Alternatively, you can use your own storage provider in parallel to Live Share's hosted service. -<br> - -</details> -<details> -<summary>What meeting types does Live Share support?</summary> --Live Share supports scheduled meetings, one-on-one calls, group calls, and meet now. Channel meetings aren't yet supported. -<br> - -</details> -<details> -<summary>Will Live Share's media package work with DRM content?</summary> --Live Share's media package work doesn't with DRM content. Currently, Teams doesn't support encrypted media for tab applications on desktop. Chrome, Edge, and mobile clients are supported. --For more information, you can [track the issue here](https://github.com/microsoft/live-share-sdk/issues/14). -<br> - -</details> -<details> -<summary>How many people can attend a Live Share session?</summary> --Currently, Live Share supports a maximum of 100 attendees per session. If it's something you're interested in, you can [start a discussion here](https://github.com/microsoft/live-share-sdk/discussions). -<br> - -</details> -<details> -<summary>Can I use Live Share's data structures outside of Teams?</summary> --Currently, Live Share packages require the Teams Client SDK to function properly. Features in `@microsoft/live-share` or `@microsoft/live-share-media` don't work outside Microsoft Teams. If this is something you're interested in, you can [start a discussion here](https://github.com/microsoft/live-share-sdk/discussions). -<br> - -</details> -<details> -<summary>Can I use multiple Fluid containers?</summary> --Currently, Live Share only supports having one container using our provided Azure Fluid Relay service. However, it's possible to use both a Live Share container and a container created by your own Azure Fluid Relay instance. 
-<br> - -</details> -<details> -<summary>Can I change my Fluid container schema after creating the container?</summary> --Currently, Live Share doesn't support adding new `initialObjects` to the Fluid `ContainerSchema` after creating or joining a container. Because Live Share sessions are short-lived, this is most commonly an issue during development after adding new features to your app. --> [!NOTE] -> If you are using the `dynamicObjectTypes` property in the `ContainerSchema`, you can add new types at any point. If you later remove types from the schema, existing DDS instances of those types will gracefully fail. --To fix errors resulting from changes to `initialObjects` when testing locally in your browser, remove the hashed container ID from your URL and reload the page. If you're testing in a Teams meeting, start a new meeting and try again. --If you plan to update your app with new `SharedObject` or `LiveObject` instances frequently, you should consider how you deploy new schema changes to production. While the actual risk is relatively low and short lasting, there might be active sessions at the time you roll out the change. Existing users in the session shouldn't be impacted, but users joining that session after you deployed a breaking change might have issues connecting to the session. To mitigate this, you might consider some of the following solutions: --* Deploy schema changes for your web application outside of normal business hours. -* Use `dynamicObjectTypes` for any changes made to your schema, rather than changing `initialObjects`. --> [!NOTE] -> Live Share does not currently support versioning your `ContainerSchema`, nor does it have any APIs dedicated to migrations. --<br> - -</details> -<details> -<summary>Are there limits to how many change events I can emit through Live Share?</summary> --While Live Share is in Preview, any limit to events emitted through Live Share isn't enforced. For optimal performance, you must debounce changes emitted through `SharedObject` or `LiveObject` instances to one message per 50 milliseconds or more. This is especially important when sending changes based on mouse or touch coordinates, such as when synchronizing cursor positions, inking, and dragging objects around a page. -<br> - -</details> --<details> -<summary>Is Live Share supported for Government Community Cloud (GCC), Government Community Cloud High (GCC-High), and Department of Defense (DOD) tenants?</summary> --Live Share isn't supported for GCC, GCC-High, and DOD tenants. --<br> --</details> --<details> -<summary>Does Live Share support external and guest users?</summary> --Yes, Live Share supports guest and external users for most meeting types. However, guest users aren't supported in channel meetings. --<br> --</details> --<details> -<summary>Does Live Share support Teams Rooms devices?</summary> --No, Live Share doesn't support Teams Rooms devices. --<br> --</details> --<details> -<summary>Do Live Share apps support meeting recordings?</summary> --No, Live Share doesn't support meeting recordings. 
--<br> --</details> - ## Microsoft Graph <details> Open the sign in simple start page instead of opening login page directly to res </details> <details>-<summary>How can I generate the access token using the endpoint oauth2/v2.0/token with grant type as "authorization_code"?</summary> +<summary>How can I generate the access token using the endpoint oauth2/v2.0/token with grant type as authorization_code?</summary> Configure the application you're using to only execute HTML encoding of the scopes once, so the scopes can be correctly sent and evaluated by Microsoft Entra ID. <br> For more information about Node.js code sample, see [Bot SSO quick-start](https: </details> + ## Stageview ++</br> ++<details> ++<summary>Which Stageview should I use?</summary> ++Collaborative Stageview allows users to open content along with a side panel conversation in a Teams window. This view is best suited for most collaboration scenarios. ++</br> ++</details> ++<details> ++<summary>What's the difference between Stageview Modal and dialogs?</summary> ++Stageview Modal is useful to display rich content to users, such as a page, dashboard, or file. <br> Dialogs (referred to as task modules in TeamsJS v1.x) are useful to display messages that need users' attention or collect information required to move to the next step. ++</br> ++</details> ++<details> ++<summary>When Stageview is invoked, the content opens in Collaborative Stageview but gets loaded in the main Teams window instead of a new window. How do I open the content in a new window?</summary> ++Ensure that your `contentUrl` domain is accurately reflected in the manifest `validDomains` property. For more information, see [app manifest schema](resources/schem). ++</br> ++</details> ++<details> ++<summary>Why isn't any content displayed in a new Teams window even when contentUrl matches with validDomains?</summary> ++Call `app.notifySuccess()` in all iframe-based contents to notify Teams that your app is loaded successfully. If applicable, Teams hides the loading indicator. If `notifySuccess` isn't called within 30 seconds, Teams assumes that the app timed out and displays an error screen with a retry option. For app updates, this step is applicable for tabs that are already configured. If you don't perform this step, an error screen is displayed for the existing users. ++</br> ++</details> ++<details> ++<summary>Can I include a deep link in my contentUrl?</summary> ++No, deep links aren't supported in `contentUrl`. ++</br> ++</details> ++<details> ++<summary>How do I keep a specific thread shown alongside my content?</summary> ++Collaborative Stageview from a deep link or a stageView API comes with the additional `threadId` parameter. You can explicitly define the chat thread to be displayed in the side panel for your specific `contentUrl`. For more information about retrieving a `threadId`, see [get conversation thread](/graph/api/group-get-thread). ++</br> ++</details> + ## Tabs <details> |
platform | Whats New | https://github.com/MicrosoftDocs/msteams-docs/commits/main/msteams-platform/whats-new.md | Teams platform features that are available to all app developers. **2024 April** -***April 3, 2024***: [Updated the common reasons for app validation failure to help your app pass the Teams Store submission process.](concepts/deploy-and-publish/appsource/common-reasons-for-app-validation-failure.md) +***April 04, 2024***: [Added support for Python in Teams AI library.](bots/how-to/Teams%20conversational%20AI/teams-conversation-ai-overview.md) +* ***April 04, 2024***: [Stageview API with the openMode property allows you to open your app content in different Stageview experiences.](tabs/open-content-in-stageview.md) +* ***April 03, 2024***: [Updated the common reasons for app validation failure to help your app pass the Teams Store submission process.](concepts/deploy-and-publish/appsource/common-reasons-for-app-validation-failure.md) :::column-end::: :::row-end::: |