Teach a Workshop: Build a Dining Recommender Micro-App Using LLMs in One Afternoon
Run a one‑afternoon workshop to build a dining recommender micro‑app with LLMs — slides, exercises, and capstone included.
Hook: Tired of long course lists, fuzzy objectives, and students who can't turn theory into a portfolio piece? Run a focused, hands‑on workshop that teaches community learners how to design, build, and demo a micro‑app (a dining recommender powered by LLMs) in one afternoon. This plan gives you the slide deck, materials, timed agenda, exercises, and capstone challenges so attendees leave with a working project and a clear next step toward a job‑ready portfolio.
Why this workshop matters in 2026
By late 2025 and into 2026, accessible, efficient LLMs, lightweight vector databases, and easy hosting have made micro‑apps a practical teaching tool. People are 'vibe coding', building small, personal apps in days, and that trend is perfect for skills workshops: students get immediate, portfolio‑grade artifacts without months of backend engineering.
Bring these realities into your classroom or community meetup: teach product thinking, prompt design, RAG (retrieval‑augmented generation), lightweight APIs, and user testing — all inside a 3–4 hour session.
Workshop outcomes (what attendees will have)
- A working dining recommender micro‑app (web or notebook) that suggests restaurants based on group preferences.
- Practical experience with LLM prompt design and RAG patterns.
- A short demo and a plan for a capstone extension to add features like diet filters, budgets, or on‑device caching.
- Templates: repo starter, slide deck, prompt bank, and assessment rubric for instructors.
Target audience & prerequisites
This workshop is designed for mixed groups: students, teachers, and lifelong learners. No deep ML background required. Recommended prerequisites:
- Basic familiarity with web concepts (HTML/CSS/JS) or comfort with Python for a notebook-based build.
- Laptop, GitHub account, and willingness to sign up for a free tier of an LLM provider and a vector DB (if using RAG).
- Optional: basic JavaScript/React or Python/Streamlit experience for faster UI work.
Materials & tools (minimal, reliable stack)
Pick one stack and prepare templates before the session. Here are two recommended tracks:
Web track (recommended for front-end practice)
- Starter repo (Vite + React or simple HTML/JS) with a single page and REST calls to an LLM proxy.
- LLM provider account (OpenAI/Anthropic/other; instructor decides based on budget).
- Lightweight backend: Node/Express or Cloud Functions with an API key (template code provided).
- Optional vector DB: Pinecone, Milvus, or hosted embedding service for RAG.
Notebook/Streamlit track (fastest to ship)
- Google Colab or Streamlit template that calls an LLM and renders results.
- Same LLM provider and optional vector DB for RAG exercises.
Common extras: GitHub Classroom or a shared repo, sample dataset of local restaurants (CSV/JSON) with fields like cuisine, price, dietary tags, ratings, and short descriptions.
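So attendees all start from the same data, it helps to ship the sample dataset in a fixed shape. A minimal sketch of what `restaurants.json` could look like; the field names follow the list above, and the two records are invented for illustration:

```python
import json

# Two illustrative records matching the workshop's suggested fields
# (cuisine, price, dietary tags, ratings, short descriptions).
RESTAURANTS = [
    {"name": "Spice Garden", "cuisine": "Indian", "price": "$$",
     "dietary_tags": ["vegetarian", "vegan"], "rating": 4.5,
     "description": "Fiery curries and a large vegetarian menu."},
    {"name": "Taco Norte", "cuisine": "Mexican", "price": "$",
     "dietary_tags": ["gluten-free"], "rating": 4.2,
     "description": "Cheap, fast tacos with spicy salsas."},
]

# Write the seed file the starter repo ships with.
with open("restaurants.json", "w") as f:
    json.dump(RESTAURANTS, f, indent=2)
```

Keeping `dietary_tags` as a list (rather than a comma‑joined string) makes the later filtering and RAG exercises much easier.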
Pre‑work for attendees (30–60 minutes, optional)
- Create GitHub account and fork the starter repo.
- Sign up for a free LLM provider key (instructors can provide tokens for demo purposes; remind about cost and safety).
- Install Node / Python and an editor (VS Code recommended) or have Colab ready.
One‑Afternoon Agenda (3.5–4 hours)
This agenda is timeboxed for a single afternoon meetup and balances lecture, hands‑on coding, and demo time.
- 0:00–0:15 — Welcome & learning goals
- Introduce micro apps and the dining recommender objective.
- Show a 2‑minute demo of the finished app to motivate attendees.
- 0:15–0:35 — Architecture & design patterns
- Explain LLM prompt + RAG architecture, data sources, and UI flow.
- Quick safety, privacy, and cost tips: token limits, caching, and content filters.
- 0:35–1:20 — Guided build: Basic recommender (hands‑on)
- Clone repo / open notebook. Wire up the LLM call with a simple prompt that maps preferences to restaurant options.
- Exercise: Create 3 prompt variants and test differences. Use the provided prompt bank templates to speed iteration.
- 1:20–1:40 — Break + lightning demos
- 1:40–2:20 — Add RAG & structured data
- Load the sample restaurant dataset and index embeddings into a vector DB (or use local approximate nearest neighbour).
- Exercise: Build a retrieval step that returns 5 candidates used as LLM context.
- 2:20–3:00 — UI polish & group preferences
- Implement a simple group preference input (checkboxes, slider weights) and show how to combine preferences into a single prompt or scoring function.
- Exercise: Implement a 'group vote' simulation and test edge cases.
- 3:00–3:30 — Demo time + code review
- Pairs present quick demos; instructor gives targeted feedback.
- 3:30–3:45 — Capstone brief & next steps
- Assign capstone challenges and sharing logistics (GitHub PR or demo night).
Detailed exercises and talking points
Exercise 1 — Prompt variants (0:35–1:20)
Goal: Learn how prompt structure affects outputs and cost.
- Create three prompt styles: a short direct prompt, a structured system + user prompt, and an example-based prompt with 2–3 examples (few‑shot).
- Compare outputs for the input: "Friends: vegetarian + budget $20 + likes spicy".
- Discuss: Where did the model hallucinate details? How predictable are recommendations?
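The three variants above can be sketched as small prompt builders. This is an illustrative shape only: the function names are mine, and the actual LLM call (whatever client your provider offers) would consume these strings or message lists:

```python
# Three prompt styles for Exercise 1. Plug the outputs into your
# provider's chat/completions API; only the prompt construction is shown.

def build_short_prompt(preferences: str, candidates: list[str]) -> str:
    """Short direct prompt: cheapest, least predictable."""
    return (f"Recommend 3 restaurants for: {preferences}. "
            f"Candidates: {', '.join(candidates)}.")

def build_structured_prompt(preferences: str, candidates: list[str]) -> list[dict]:
    """System + user messages: more consistent, easier to constrain."""
    return [
        {"role": "system",
         "content": "You are a concise dining assistant. "
                    "Only use the candidate list; do not invent facts."},
        {"role": "user",
         "content": f"Preferences: {preferences}\nCandidates: {candidates}\n"
                    "List the top 3 with a one-line justification each."},
    ]

def build_few_shot_prompt(preferences: str, candidates: list[str],
                          examples: list[tuple[str, str]]) -> str:
    """Few-shot prompt: prepend 2-3 worked examples to steer the format."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return (f"{shots}\nInput: {preferences} with candidates "
            f"{candidates}\nOutput:")
```

Running the same input through all three makes the cost/consistency tradeoff concrete: the short prompt is a fraction of the tokens, while the structured variant is far less likely to invent restaurants.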
Exercise 2 — Retrieval step (1:40–2:20)
Goal: Teach RAG basics so recommendations are grounded in real data.
- Embed restaurant descriptions and query embeddings for a user's preference string.
- Return top 5 matches and pass them as context to the LLM with a template: "Given these candidates, recommend the best match for: [user prefs]."
- Discuss: How many candidates are optimal? (Common answer: 3–7 depending on token budget.)
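For groups without vector DB access, the retrieval step can be demonstrated with a local stand‑in. The sketch below uses a toy bag‑of‑words "embedding" and cosine similarity purely so the pipeline runs offline; in the real exercise you would swap `embed` for your provider's embedding endpoint or a sentence‑transformers model:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; replace with a real embedding model
    # in the workshop. Keeps the retrieval logic identical either way.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity over sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, docs: list[dict], k: int = 5) -> list[dict]:
    # Rank restaurant records by similarity of description to the
    # user's preference string; the top k become LLM context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d["description"])),
                    reverse=True)
    return ranked[:k]
```

The returned records get formatted into the "Given these candidates…" template from the exercise; only `top_k`'s output changes when you upgrade to real embeddings.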
Exercise 3 — Group preference aggregation (2:20–3:00)
Goal: Implement simple scoring to combine multiple users' inputs into one decision.
- Implement three aggregation strategies: majority vote, weighted sum (weights = attendees' priority), and negotiation prompts (generate three compromise choices and let group pick).
- Compare outcomes and discuss tradeoffs: explainability vs. creativity.
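The first two aggregation strategies fit in a few lines each, which is why they make a good timed exercise (the third, negotiation prompts, is just another LLM call). A sketch with illustrative data shapes, assuming each user rates each candidate from 0 to 1:

```python
from collections import Counter

def majority_vote(votes: list[str]) -> str:
    """Each attendee names one candidate; the most-named wins
    (ties resolve to the first-seen candidate)."""
    return Counter(votes).most_common(1)[0][0]

def weighted_sum(scores: dict[str, dict[str, float]],
                 weights: dict[str, float]) -> str:
    """scores[user][candidate] = that user's 0-1 rating;
    weights[user] = that user's priority (e.g. the birthday person
    counts double). Returns the highest-total candidate."""
    totals: dict[str, float] = {}
    for user, prefs in scores.items():
        for cand, s in prefs.items():
            totals[cand] = totals.get(cand, 0.0) + weights.get(user, 1.0) * s
    return max(totals, key=totals.get)
```

Majority vote is trivially explainable but ignores intensity of preference; the weighted sum captures intensity but needs the weights justified to the group, which is exactly the tradeoff the discussion point asks about.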
Instructor slide deck outline
Use this 10–12 slide structure. Keep slides visual and short; most learning happens during coding.
- Title & learning goals (what you'll build)
- Why micro apps and LLMs matter in 2026 (trend slide)
- Demo of finished dining recommender (1–2 minutes)
- Architecture: LLM, RAG, UI, data sources
- Prompt design patterns & safety checklist
- Starter repo walkthrough / how to run
- Exercise 1 prompt variants instructions
- Exercise 2 RAG instructions + vector DB tips
- Exercise 3 group aggregation instructions
- Capstone challenges & rubric
- Resources: templates, readings, and community links
- Q&A / troubleshooting
Sample prompt templates (teachable artifacts)
Give attendees copy/paste templates. Here are three foundational prompts to start with:
1) Short prompt (fast, cost‑efficient)
Recommend 3 restaurants for a group with these preferences: {preferences}. Use the candidate list below and include cuisine, one reason, and estimated price range.
2) Structured system + user prompt (safer, more consistent)
System: You are a concise dining assistant. Do not invent facts; only use the candidate list.
User: Given preferences {preferences} and candidates {candidates}, list top 3 with short justification.
3) RAG template (grounded)
Use this candidate data as context. Then answer: Based on {preferences}, rank the best matches and explain why. If none match, propose one nearby alternative and justify.
Safety, bias, and cost controls (talking points)
- Never pass sensitive personal data to LLMs; anonymize group info.
- Set query length and token limits; cache frequent queries to reduce calls.
- Test for hallucinations: cross‑check recommended restaurants against the dataset or an API like Yelp/Google Places if available.
- Design for transparency: show the candidate list and a brief rationale so users can understand suggestions.
Capstone challenges (post‑workshop, 1–2 weeks)
Assign one or more capstones that scale the micro app and teach product thinking plus technical depth. Use these as grading or showcase criteria.
- Capstone A — Personalization & Profiles
- Add user profiles that persist favorite cuisines, allergies, and location history. Reward: personalized ranking and a saved‑preferences toggle.
- Capstone B — Live data & verification
- Integrate a live places API (e.g., Google Places) and reconcile LLM outputs with live data to prevent hallucinations.
- Capstone C — Offline/On‑device mode
- Implement a lightweight on‑device cache and an option to run the recommender offline using a small quantized model or local embeddings.
- Capstone D — Explainability & audit trail
- Build a feature that shows which candidate documents influenced the final decision and a simple confidence score.
Rubric (suggested)
- Functionality (40%): Working app with LLM integration and basic UI.
- Grounding (25%): Use of real data and RAG to reduce hallucinations.
- UX & Explainability (20%): Clear interface and rationale for suggestions.
- Creativity & bonus features (15%): Any additional useful feature (filters, maps, saved lists).
Teaching tips & common pitfalls
- Keep teams small (2–3) so everyone codes and presents.
- Have a ‘troubleshooting’ slide with 6 common error fixes (API auth, CORS, token limits, embedding mismatches, vector DB permissions, prompt failures).
- Use feature flags: start with a single user, then add group logic to avoid early complexity.
- Encourage artifact‑driven learning: require a 90‑second demo and a 200‑word README as submission.
Real‑world example & case study
Case: In 2023–2025, individual creators like Rebecca Yu used LLMs to quickly build personal dining apps (the "Where2Eat" style). In workshops that adapted this concept, learners reported faster portfolio growth because the project is relatable and immediately useful — perfect for show‑and‑tell interviews.
Advanced strategies for follow‑on workshops (future predictions)
By 2026, expect these directions to be high‑value for learners moving toward jobs:
- Multi‑modal recommendations: Use images and menus as context for richer suggestions.
- On‑device models: Teach quantization and local inference to build privacy‑preserving micro apps; see a field guide to hybrid edge workflows for inspiration.
- LLMOps: Add monitoring, cost dashboards, and model‑version rollbacks as classroom skills — think beyond a single notebook.
- Ethical recommendation design: Audit biases in locality and price assumptions and design equity checks.
Workshop takeaway checklist for instructors
- Starter repo + sample dataset uploaded and tested.
- Slide deck downloaded and edited with local logistics (Wi‑Fi, API tokens, safety policy).
- A simple demo that is guaranteed to run live in 5 minutes (backup recorded demo if APIs fail).
- Capstone rubric and submission process (GitHub Classroom, Discord channel, demo night).
Free templates & resources
Provide attendees links to a GitHub template, a blank slide deck, and a prompt bank. Offer an optional follow‑up session where teams present capstones and instructors provide interview‑ready feedback.
Closing: How to run this as a community leader or teacher
Run this workshop at a local library, school lab, or community center. Keep participant fees low or free; micro apps are low cost to host and high value for learners. Pair novices with a mentor for the 3:00–3:30 demo slot so everyone gets feedback. Use the capstone as a showcase night to attract local recruiters or partners.
"Micro apps let learners ship and iterate — that momentum is what turns a few hours of learning into a career asset."
Call to action
Ready to teach this workshop? Download the starter repo, slide deck, and prompt bank, and run it at your next meetup. Want the instructor kit (with tested API templates and a grading rubric)? Join our community at skilling.pro/workshops to get the full pack, schedule a train‑the‑trainer session, or request a custom version for classrooms. Ship a micro app today — one afternoon is all it takes.
Related Reading
- Micro‑Apps Case Studies: 5 Non-Developer Builds That Improved Ops (and How They Did It)
- Why On‑Device AI Is Now Essential for Secure Personal Data Forms (2026 Playbook)
- Automating Metadata Extraction with Gemini and Claude: A DAM Integration Guide
- Field Guide: Hybrid Edge Workflows for Productivity Tools in 2026
- Security & Privacy for Career Builders: Safeguarding User Data in Conversational Recruiting Tools (2026 Checklist)