Customer support chatbot that answers questions about the Crustdata API. Try it out at https://crustdata-chatbot-ratish.vercel.app/
About · How to Use · AI Usage and RAG · Features · Model Providers · Running locally
## About

This webapp was built for the Crustdata Build challenge (https://www.linkedin.com/feed/update/urn:li:activity:7281490942816071680/) and currently targets level 0 of the challenge. It is built on a template, with a RAG system over the Crustdata API docs so that questions are answered with context. Free-tier Google models are used both for the chat AI model client and for embeddings generation, and Supabase is used as the vector store for the Crustdata API doc embeddings.
Credits: This webapp is built on top of an official open-source AI chatbot template built with Next.js and the AI SDK by Vercel: https://github.com/vercel-labs/gemini-chatbot/tree/main.
## How to Use

Go to the live demo at https://crustdata-chatbot-ratish.vercel.app/ and sign up with an email and password (this is necessary to store your previous chats with the Crustdata support agent), or sign in if you have already signed up.

Then ask your Crustdata API questions. Follow-up questions are supported too.

You can start a new chat, return to old saved chats, or delete old chats by opening the sidebar.
## AI Usage and RAG

Retrieval Augmented Generation (RAG) is used by this support AI. The Crustdata API docs were chunked by endpoint into `.txt` files (in `chunks/`), and embeddings were created from the API endpoint overviews in `scripts/index_data.ts` (a standalone script executed only once to create the embeddings vector database for later RAG) and ingested into a Supabase vector database.
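A minimal sketch of what such an indexing script can look like, using the AI SDK's Google provider and the Supabase client. The `documents` table and its columns are assumptions for illustration, not the repo's actual schema:

```ts
// scripts/index_data.ts -- illustrative sketch, not the repo's actual code
import fs from "node:fs";
import path from "node:path";
import { google } from "@ai-sdk/google";
import { embedMany } from "ai";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

async function main() {
  // One .txt chunk per API endpoint.
  const dir = "chunks";
  const files = fs.readdirSync(dir).filter((f) => f.endsWith(".txt"));
  const texts = files.map((f) => fs.readFileSync(path.join(dir, f), "utf8"));

  // Embed every chunk with Google's text-embedding-004 model.
  const { embeddings } = await embedMany({
    model: google.textEmbeddingModel("text-embedding-004"),
    values: texts,
  });

  // Upsert into a pgvector-backed table (hypothetical name: documents).
  const rows = files.map((file, i) => ({
    source: file,
    content: texts[i],
    embedding: embeddings[i],
  }));
  const { error } = await supabase.from("documents").upsert(rows);
  if (error) throw error;
}

main();
```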
An internal LLM generation step condenses the latest question sent by the user and the chat history into a standalone question for meaningful RAG querying. A Vercel serverless function is used for this, and all the steps mentioned here live in `app/(chat)/api/chat/route.ts`.
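A sketch of that condensation step with the AI SDK's `generateText`; the prompt wording and helper name are illustrative:

```ts
import { google } from "@ai-sdk/google";
import { generateText, type CoreMessage } from "ai";

// Rewrite the latest question plus the chat history into one standalone
// question, so it can be embedded and matched against the docs on its own.
async function condenseQuestion(history: CoreMessage[], question: string) {
  const { text } = await generateText({
    model: google("gemini-1.5-flash"),
    system:
      "Rewrite the user's latest question as a single standalone question, " +
      "using the chat history to resolve references like 'it' or 'that endpoint'.",
    messages: [...history, { role: "user", content: question }],
  });
  return text;
}
```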
The Supabase vector database is then queried with the condensed question, and the relevant documents are loaded as context. The Crustdata data dictionary is also hard-coded into the prompt for each call so that the AI model can comprehend what it receives as context.
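The usual Supabase pattern for this is a pgvector similarity search exposed as a Postgres function and called over RPC; a sketch, where `match_documents` is a hypothetical function name rather than the repo's verified one:

```ts
import { google } from "@ai-sdk/google";
import { embed } from "ai";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// Embed the standalone question and fetch the nearest doc chunks.
async function retrieveContext(standaloneQuestion: string) {
  const { embedding } = await embed({
    model: google.textEmbeddingModel("text-embedding-004"),
    value: standaloneQuestion,
  });

  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: embedding,
    match_count: 4,
  });
  if (error) throw error;
  return (data ?? []).map((d: { content: string }) => d.content).join("\n\n");
}
```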
Finally, the model's response is streamed back to the user to answer their query and provide support.
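Putting the pieces together, a simplified sketch of what a route handler like `app/(chat)/api/chat/route.ts` can look like; `condenseQuestion` and `retrieveContext` are the hypothetical helpers from the sketches above, and the system prompt is illustrative:

```ts
import { google } from "@ai-sdk/google";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const question = messages[messages.length - 1].content;

  // Condense, retrieve, then answer with the retrieved docs as context.
  const standalone = await condenseQuestion(messages.slice(0, -1), question);
  const context = await retrieveContext(standalone);

  const result = streamText({
    model: google("gemini-1.5-flash"),
    system: `You are a Crustdata API support agent. Answer using this context:\n${context}`,
    messages,
  });

  // Stream the answer back to the chat UI.
  return result.toDataStreamResponse();
}
```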
## Features

- Next.js App Router
  - Advanced routing for seamless navigation and performance
  - React Server Components (RSCs) and Server Actions for server-side rendering and increased performance
- AI SDK
  - Unified API for generating text, structured objects, and tool calls with LLMs
  - Hooks for building dynamic chat and generative user interfaces (see the sketch after this list)
  - Supports Google (default), OpenAI, Anthropic, Cohere, and other model providers
- shadcn/ui
  - Styling with Tailwind CSS
  - Component primitives from Radix UI for accessibility and flexibility
- Data Persistence
  - Vercel Postgres powered by Neon for saving chat history and user data
  - Vercel Blob for efficient object storage
- NextAuth.js
  - Simple and secure authentication
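As an illustration of the hooks mentioned above, a minimal chat component built on the AI SDK's `useChat` (depending on the AI SDK version, the hook is imported from `@ai-sdk/react` or `ai/react`; this component is a sketch, not the template's actual UI):

```tsx
"use client";

import { useChat } from "@ai-sdk/react";

// useChat wires the input, the message list, and streaming updates
// to the /api/chat route handler (its default endpoint).
export function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <p key={m.id}>
          <b>{m.role}:</b> {m.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```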
## Model Providers

This web app uses Google's Gemini Flash model to answer Crustdata API questions. Relevant API documentation chunks are retrieved based on embeddings created with Google's text-embedding-004 model.
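Because the AI SDK abstracts the provider, swapping the model is a small change at the call site. For example, moving from Gemini to an OpenAI model (a sketch, assuming an `OPENAI_API_KEY` is configured):

```ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

// Same streamText call as before; only the model line changes.
const result = streamText({
  model: openai("gpt-4o-mini"), // instead of google("gemini-1.5-flash")
  messages: [
    { role: "user", content: "How do I authenticate to the Crustdata API?" },
  ],
});
```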
## Running locally

You will need to use the environment variables defined in `.env.example` to run the Next.js AI Chatbot. It's recommended you use Vercel Environment Variables for this, but a `.env` file is all that is necessary.
Note: You should not commit your `.env` file or it will expose secrets that will allow others to control access to your various Google Cloud and authentication provider accounts.
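For orientation, these are the kinds of variables involved; the names below are the standard ones for each service, but check `.env.example` for the exact list this repo expects:

```bash
# Google AI Studio key for Gemini and text-embedding-004 (AI SDK default name)
GOOGLE_GENERATIVE_AI_API_KEY=...

# Supabase vector store
SUPABASE_URL=...
SUPABASE_ANON_KEY=...

# NextAuth.js session secret
AUTH_SECRET=...

# Vercel Postgres and Vercel Blob
POSTGRES_URL=...
BLOB_READ_WRITE_TOKEN=...
```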
- Install Vercel CLI: `npm i -g vercel`
- Link local instance with Vercel and GitHub accounts (creates a `.vercel` directory): `vercel link`
- Download your environment variables: `vercel env pull`

Then install dependencies and start the dev server:

```bash
pnpm install
pnpm dev
```
Your app template should now be running on localhost:3000.