LLM-Powered FAQ Chatbot for 40-Person SaaS Scale-up
Overview
What this challenge is about.
You have access to TaskFlow's internal documentation, help articles, and a sample of 500 support tickets. Your task is to build a retrieval-augmented generation (RAG) pipeline: index the documents, retrieve relevant chunks given a user query, and generate a concise answer using an LLM (e.g., GPT-3.5 via API). You must also evaluate the chatbot's accuracy on a set of 50 test questions. Success means the chatbot correctly answers at least 70% of the test questions, and you present it in a working demo.
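The index-retrieve-generate loop above can be sketched in a few functions. This is a minimal illustration, not a production design: it uses a toy bag-of-words similarity in place of real embeddings and a vector database, and it stops at building the prompt rather than calling an LLM API. The chunk texts and function names are invented for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-count vector. A real pipeline
    # would call an embedding API and store vectors in a vector database.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank indexed document chunks by similarity to the query; keep top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, context):
    # Retrieved chunks become grounding context for the generation step;
    # in a full pipeline this prompt would be sent to the LLM API.
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

# Hypothetical document chunks standing in for TaskFlow's indexed docs.
chunks = [
    "To reset your TaskFlow password, open Settings and choose Reset Password.",
    "TaskFlow billing runs on the first day of each month.",
    "Projects can be archived from the project menu.",
]
query = "How do I reset my password?"
top = retrieve(query, chunks)
prompt = build_prompt(query, top)
```

Swapping `embed` for a real embedding model and adding an LLM call on `prompt` turns this skeleton into the full pipeline the challenge asks for.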
The Brief
What you'll do, and what you'll demonstrate.
How can TaskFlow leverage LLMs to automatically answer customer FAQs and reduce support workload?
Earning criteria — what you'll demonstrate
- Understand and implement retrieval-augmented generation (RAG) architecture
- Use embeddings and vector databases for document retrieval
- Integrate with an LLM API (e.g., OpenAI) to generate answers
- Evaluate chatbot performance using metrics like accuracy and relevance
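The evaluation criterion above can be made concrete with a simple scoring harness. This sketch assumes a keyword-match notion of correctness (an answer counts as correct if it mentions every expected keyword); the test questions, answers, and keywords are invented, and a fuller evaluation might use human grading or an LLM judge for relevance.

```python
def evaluate(answers, expected_keywords):
    # Mark an answer correct if it contains every expected keyword
    # (case-insensitive), then return overall accuracy.
    correct = sum(
        all(kw.lower() in ans.lower() for kw in kws)
        for ans, kws in zip(answers, expected_keywords)
    )
    return correct / len(answers)

# Hypothetical chatbot outputs for three test questions.
answers = [
    "Open Settings and choose Reset Password.",
    "Billing runs on the first day of each month.",
    "I don't know.",
]
expected = [["settings", "reset"], ["first"], ["archive"]]
accuracy = evaluate(answers, expected)
```

Against the challenge's success bar, you would run this over all 50 test questions and check `accuracy >= 0.70`.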
Program Fit
Where this fits in your program.
Sharpens the same skills your degree expects you to demonstrate.
Skills
Skills you'll demonstrate.
Each one shows up on your verified credential.
Careers
Roles this prepares you for.
Real titles. Real skill bridges. Pick the one closest to your trajectory.
Career mappings coming soon.