Imagine a workday where all the answers you need are just a message away. No more switching between apps, no more digging through files and folders, no more endless searches. Just ask, and the information finds you. That’s the future we’re building with our AI-powered chatbot for Zoom Team Chat. By integrating state-of-the-art language models, neural search capabilities, and the rich context of the Zoom platform, we’re creating an assistant that can understand your questions, find the most relevant information, and deliver accurate answers – all within the flow of your Zoom conversations.
What We’re Building
This bot does four key things:

- Understands questions asked in Team Chat
- Decides whether to search for current information or answer from its own knowledge base
- Generates accurate, contextual responses
- Maintains conversation history for better context
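Put together, those steps form a simple request pipeline. Here is a sketch of the flow with injected helpers; all four helper names (`decideSearch`, `searchExa`, `generateAnswer`, `sendToZoom`) are illustrative placeholders, not the project's actual functions:

```javascript
// Illustrative pipeline for handling one incoming message.
// The helpers are passed in via `deps` so each step can be swapped or tested.
async function answerMessage(cmd, history, deps) {
  // 1. Ask the model whether fresh information is needed
  const decision = await deps.decideSearch(cmd);

  // 2. Search only when the model asked for it
  const context = decision.search ? await deps.searchExa(decision.query) : '';

  // 3. Generate the final answer from history + search context + question
  const answer = await deps.generateAnswer(history, context, cmd);

  // 4. Deliver it back into Team Chat
  await deps.sendToZoom(answer);
  return answer;
}
```

The rest of this post fills in concrete versions of each of these steps.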
The Building Blocks
- Zoom Developer Platform: Provides the APIs for interacting with a Team Chat bot. This gives us the foundation for our bot’s interface.
- Cerebras: The heart of our system, handling AI inference through their various available models. Cerebras’ architecture is crucial here because it:
- Makes near-instantaneous decisions about when to search
- Processes search results and generates responses with minimal latency
- Maintains high quality output despite the speed requirements
- Handles responses for better user experience
- Exa: Powers our real-time search capability, providing neural search with automatic prompt optimization.
- Node.js & Express: For our server implementation and handling HTTP requests.
What’s Zoom Team Chat?
Before we dive into the technical stuff, let me quickly explain what Zoom Team Chat is. While most people know Zoom for video meetings, Zoom also has a full-featured messaging platform built right into the Zoom desktop and mobile apps. Think of it as your workspace hub where you can:
- Send messages and files to colleagues
- Create channels for teams and projects
- Share and organize content
- Use chatbots and integrations to automate work
The best part? If you’re already using Zoom for meetings, you already have access to Team Chat – it’s included in every Zoom installation. For developers, this presents an exciting opportunity: you can build powerful integrations that live where your users already work.
Setting Up Your Zoom Chat Bot
I highly recommend reading these key concepts before you build on the Zoom Platform. Once you have, let’s create our bot in the Zoom App Marketplace:
1. Go to marketplace.zoom.us and click “Develop” → “Build App”
2. Select “Create” under “OAuth app”
3. Complete the Basic Information:
   - Choose a meaningful name for your bot
   - Select “User-managed app” for installation type
   - Save your Client ID and Client Secret
4. Configure the Features:
   - Enable “Team Chat” under Features
   - Enable “Team Chat Subscription”
   - Set up your Bot Endpoint URL; this is where Zoom will forward the messages sent to your bot
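It helps to keep the credentials collected above in environment variables before writing any code. A typical `.env` sketch might look like this; the variable names match the code later in this post, the values are placeholders, and the Cerebras key name is an assumption:

```shell
# Placeholder values - fill in from your Marketplace app and provider dashboards
ZOOM_CLIENT_ID=your_client_id_here
ZOOM_CLIENT_SECRET=your_client_secret_here
ZOOM_BOT_JID=v1yourbotjid@xmpp.zoom.us
EXA_API_KEY=your_exa_key_here
CEREBRAS_API_KEY=your_cerebras_key_here   # assumed name, not shown in this post's code
```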
Building the Core Components
1. Building an incoming webhook receiver
When you send a message to your chatbot, Zoom forwards it to the bot endpoint URL you provided in the Marketplace. If you don’t have a public endpoint, here’s how I set up mine: I use ngrok to expose my local web server to the public internet at a temporary URL, which then serves as the bot endpoint URL.
```javascript
async function handleZoomWebhook(req, res) {
  try {
    if (req.body.event === 'bot_notification') {
      console.log('Zoom Team Chat message received.');
      await callCerebrasAPI(req.body.payload);
    }
    res.status(200).send('Event processed.');
  } catch (error) {
    console.error('Error handling webhook:', error);
    res.status(500).send('Internal Server Error');
  }
}
```
2. Webhook Event Structure
When Zoom sends a bot_notification event, it includes important information about the message:
```json
{
  "event": "bot_notification",
  "payload": {
    "accountId": "abc123",
    "cmd": "What's the latest news about AI?",
    "name": "John Doe",
    "robotJid": "v1abc1234@xmpp.zoom.us",
    "timestamp": 1699301234567,
    "toJid": "user@xmpp.zoom.us",
    "userId": "user@xmpp.zoom.us"
  }
}
```
If you need the query the user asked, it’s in the `cmd` field of the webhook payload.
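Before handing the payload to the model, it’s worth validating that the fields the bot needs are actually present. A small helper for that might look like this (hypothetical; the original project may handle validation differently):

```javascript
// Extract the fields this bot needs from a bot_notification body.
// Returns null for events we don't handle so the caller can ignore them.
function parseBotNotification(body) {
  if (!body || body.event !== 'bot_notification' || !body.payload) return null;
  const { cmd, toJid, accountId, userId } = body.payload;
  // A missing or empty cmd means there is nothing to answer
  if (typeof cmd !== 'string' || cmd.trim() === '') return null;
  return { cmd: cmd.trim(), toJid, accountId, userId };
}
```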
3. Cerebras and Exa integration
The intelligence of our bot comes from Cerebras. We use it for two key steps:

- Determining if we need to search for current information
- Generating the final response
4. Deciding When to Search
One of the key features of our bot is its ability to intelligently decide when it needs to search for updated information. We use Cerebras’ language model to make this decision.
Here’s how we structure the prompts:
```javascript
function createDecisionRequest(cmd) {
  return {
    model: 'llama3.1-8b',
    messages: [
      {
        role: 'system',
        content: `You are an expert AI assistant with access to current information through Exa search. When a user asks about current events, sports, or real-time information:
1. Output "SEARCH" on first line
2. Format a precise search query on second line
3. Never apologize or mention limitations`
      },
      { role: 'user', content: cmd }
    ]
  };
}
```
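The system prompt asks the model for a two-line protocol: “SEARCH” on the first line, the query on the second. The model’s reply then needs to be parsed back into a decision. Here is one way to do that (a sketch; the original post doesn’t show its parsing code):

```javascript
// Parse the decision model's reply. By the prompt's convention, a reply
// starting with "SEARCH" carries the search query on the next line;
// anything else is treated as "answer directly from the model".
function parseDecision(reply) {
  const lines = reply.trim().split('\n').map((l) => l.trim());
  if (lines[0] === 'SEARCH' && lines.length > 1 && lines[1] !== '') {
    return { search: true, query: lines[1] };
  }
  return { search: false, query: null };
}
```

Treating anything malformed as “don’t search” is a deliberately safe default: a missed search degrades freshness, while a garbled query would degrade the answer.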
5. Performing a Search
If Cerebras decides a search is needed, we use Exa’s API to find relevant information.
```javascript
async function performExaSearch(query) {
  try {
    const response = await axios({
      method: 'post',
      url: 'https://api.exa.ai/search',
      headers: { 'x-api-key': process.env.EXA_API_KEY },
      data: {
        query: query,
        numResults: 3,
        contents: { text: true }
      }
    });
    return response.data.results
      .map(result => result.text)
      .join('\n');
  } catch (error) {
    console.error('Search failed:', error);
    return '';
  }
}
```
6. Response Generation
With the search results (if applicable), we can now generate a response to the user’s query. Again, we leverage Cerebras’ language model to generate a high-quality, contextual response:
```javascript
function createResponseRequest(history, searchContext, cmd) {
  return {
    model: 'llama3.1-8b',
    messages: [
      {
        role: 'system',
        content: `You are a helpful assistant with access to current information. When given search results:
- Extract the relevant information
- Present it clearly and directly
- Focus on answering the specific question
- Never apologize or mention being an AI model
- If search results aren't helpful, suggest checking official sources`
      },
      ...history,
      { role: 'user', content: searchContext + cmd }
    ],
    stream: true
  };
}
```
The search context is prepended to the user’s message, allowing the model to factor in this additional information when generating its response. We also pass in the conversation history to maintain context across multiple interactions.
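Since `searchContext` is concatenated directly onto `cmd`, it helps to frame the raw search text so the model can tell retrieved context apart from the question. The exact framing below is an assumption; the post only shows `searchContext + cmd`:

```javascript
// Wrap raw search text in a labeled block before prepending it to the
// user's question. An empty search result passes the question through
// unchanged, so the same code path works with and without a search.
function buildSearchContext(searchText) {
  if (!searchText) return '';
  return `Search results:\n${searchText}\n\nUser question: `;
}
```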
We use Cerebras’ streaming feature to start sending the response to the user as it’s being generated, creating a more fluid, interactive experience.
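One way to handle Cerebras’ streamed chunks on the Zoom side is to buffer them and send (or update) the chat message periodically rather than per token. A sketch of such a buffer; the flush threshold and the callback are assumptions, not part of the original code:

```javascript
// Accumulate streamed text chunks and invoke onFlush whenever the buffer
// passes a size threshold; flush() drains whatever remains at the end.
function createStreamBuffer(onFlush, threshold = 200) {
  let buffer = '';
  return {
    push(chunk) {
      buffer += chunk;
      if (buffer.length >= threshold) {
        onFlush(buffer);
        buffer = '';
      }
    },
    flush() {
      if (buffer) onFlush(buffer);
      buffer = '';
    },
  };
}
```

In use, `onFlush` would call whatever function delivers text to Zoom, so the user sees the answer grow in a few updates instead of waiting for the whole generation.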
7. Sending the Response to the User in Zoom Chat
Finally, we send the response back to Zoom:
```javascript
async function sendChatToZoom(chatbotToken, message, payload) {
  const data = {
    'robot_jid': process.env.ZOOM_BOT_JID,
    'to_jid': payload.toJid,
    'content': {
      'head': { 'text': 'AI Assistant' },
      'body': [{
        'type': 'message',
        'text': message
      }]
    }
  };

  await axios.post('https://api.zoom.us/v2/im/chat/messages', data, {
    headers: { 'Authorization': 'Bearer ' + chatbotToken }
  });
}
```
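Chat platforms cap message size, so very long answers may need splitting into several calls to `sendChatToZoom`. A generic splitter is sketched below; the 4096-character default is an assumption, so check Zoom’s API docs for the actual cap:

```javascript
// Split text into chunks no longer than maxLen, preferring to break at
// the last newline or space inside the window rather than mid-word.
function splitMessage(text, maxLen = 4096) {
  const chunks = [];
  let rest = text;
  while (rest.length > maxLen) {
    const window = rest.slice(0, maxLen);
    // Break at a newline or space if one exists; otherwise hard-cut.
    let cut = Math.max(window.lastIndexOf('\n'), window.lastIndexOf(' '));
    if (cut <= 0) cut = maxLen;
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).replace(/^\s+/, '');
  }
  if (rest) chunks.push(rest);
  return chunks;
}
```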
8. Authentication Flow
To send a message via your chatbot, you will need a token. The chatbot authentication uses the OAuth 2.0 Client Credentials grant type, which is designed for server-to-server authentication where a human user’s permission isn’t needed. Here’s how it works:
a. Credentials Setup

- You need two pieces of information stored as environment variables:
  - `ZOOM_CLIENT_ID`: Your application’s client ID from Zoom
  - `ZOOM_CLIENT_SECRET`: Your application’s client secret from Zoom

b. Token Request

- The code creates a Base64-encoded string combining the client ID and secret: `Buffer.from(process.env.ZOOM_CLIENT_ID + ':' + process.env.ZOOM_CLIENT_SECRET).toString('base64')`
- This encoded string is sent in the Authorization header as a Basic auth credential
- The request is made to `https://zoom.us/oauth/token` with `grant_type=client_credentials`
- If successful, Zoom returns an object containing:
  - `access_token`: The token you’ll use for subsequent API calls
  - `expires_in`: The token lifetime in seconds (3600, i.e. 1 hour)
  - `token_type`: “bearer”

Here is the code:
```javascript
async function getChatbotToken() {
  try {
    const response = await axios.post(
      'https://zoom.us/oauth/token?grant_type=client_credentials',
      {},
      {
        headers: {
          'Authorization': 'Basic ' + Buffer.from(
            process.env.ZOOM_CLIENT_ID + ':' + process.env.ZOOM_CLIENT_SECRET
          ).toString('base64')
        }
      }
    );
    return response.data.access_token;
  } catch (error) {
    console.error('Error getting chatbot_token:', error);
    throw error;
  }
}
```
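Since the token lives for an hour, requesting a fresh one for every message is wasteful. One option is a small caching wrapper; the 60-second safety margin and the injected fetcher (which, unlike `getChatbotToken` above, would need to return `expires_in` alongside the token) are assumptions:

```javascript
// Cache the chatbot token and refresh it shortly before it expires.
// fetchToken is injected and must resolve to { access_token, expires_in };
// now is injectable so the expiry logic can be tested without waiting.
function createTokenCache(fetchToken, now = () => Date.now()) {
  let token = null;
  let expiresAt = 0;
  return async function getToken() {
    if (!token || now() >= expiresAt) {
      const { access_token, expires_in } = await fetchToken();
      token = access_token;
      // Refresh 60 seconds early to avoid using a token mid-expiry.
      expiresAt = now() + (expires_in - 60) * 1000;
    }
    return token;
  };
}
```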
What’s Next?
While the current implementation provides a solid foundation, the upcoming Zoom Realtime Media Streams (RTMS) opens exciting new possibilities for enhancing this neural search assistant:
- Real-Time Meeting Intelligence: Using RTMS’s transcripts with Cerebras’ fast inference speeds, the bot could process meeting conversations in real-time and provide instant, relevant responses.
- Multi-Context Understanding: By combining meeting audio/transcripts with Team Chat through RTMS, Cerebras could maintain more accurate conversation context across all communication channels.
Conclusion
The combination of Cerebras’ fast inference, Zoom’s robust platform, and Exa’s neural search demonstrates how modern AI can significantly improve team collaboration. With instant, streamed responses and intelligent web search, this bot integrates seamlessly into conversations, creating a more natural and responsive experience.
The full source code for this project is available on GitHub, and we welcome contributions from the community. Whether you’re building a custom chat assistant or just curious about integrating AI into team communications, I hope this project provides a useful starting point.
For developers interested in exploring more:
- Experiment with the Zoom Developer Platform
- Try out the Cerebras API in their playground: cloud.cerebras.ai
- Check out Exa’s neural search documentation: https://docs.exa.ai/reference/getting-started
About the fellows program
Cerebras inference is powering the next generation of AI applications — 70x faster than on GPUs. The Cerebras x Bain Capital Ventures Fellows Program invites engineers, researchers, and students to build impactful, next-level products unlocked by instant AI. Learn more at cerebras.ai/fellows
About Ojus Save
Ojus Save is a Developer Advocate at Zoom. He focuses on enhancing the developer experience, integrating AI technologies, building developer focused solutions and fostering community interaction.
Contact Ojus at
Email: i@saveoj.us
Twitter: x.com/ojusave
LinkedIn: linkedin.com/in/ojus