I am leading EduLLM (Beyond the ChatBot) under the SNSF Spark grant to rethink how large language models can support non-computer-science students as they learn introductory programming. The goal is to move past generic chatbots and design evidence-based interactions that respect pedagogy, ethics, and students’ confidence.
Why this project
- Many learners rely on ad-hoc chatbot answers that can be misleading or counterproductive.
- Educators need guardrails that align with their course design, assessment policies, and academic integrity expectations.
- HCI-driven experimentation can show where generative feedback helps or harms early-stage learners.
What we’re building
- LLM-mediated learning flows that scaffold problem-solving steps instead of providing full solutions (see the sketch after this list).
- Ethical and transparency layers so students understand model limits, provenance of hints, and acceptable use.
- Instructor controls to tune guidance levels, block disallowed prompt types, and surface analytics about help-seeking behaviors.
- Benchmarks and rubrics, tailored to novice errors in Scala and Python exercises, for evaluating LLM responses.
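As one concrete illustration of how an instructor-set guidance level could gate a scaffolded hint flow, here is a minimal Python sketch. The names (GuidanceLevel, HintRequest, build_hint_prompt) and the three levels are assumptions for illustration, not the project's actual API or policy.

```python
# Hypothetical sketch: an instructor-chosen guidance level caps how much a
# scaffolded hint may reveal for an exercise. Names and levels are illustrative.
from dataclasses import dataclass
from enum import IntEnum


class GuidanceLevel(IntEnum):
    """Instructor-chosen ceiling on how far hints may go."""
    CONCEPT_ONLY = 1   # restate the concept the error relates to
    LOCATE_ERROR = 2   # point at the suspicious line or expression
    SUGGEST_FIX = 3    # describe a fix in words, still no full solution


@dataclass
class HintRequest:
    exercise_id: str
    student_code: str
    error_message: str


def build_hint_prompt(req: HintRequest, level: GuidanceLevel) -> str:
    """Compose an LLM prompt whose constraint matches the instructor's level."""
    constraints = {
        GuidanceLevel.CONCEPT_ONLY: "Explain the underlying concept only; do not mention the student's code.",
        GuidanceLevel.LOCATE_ERROR: "Point to where the problem likely is; do not say how to fix it.",
        GuidanceLevel.SUGGEST_FIX: "Describe the fix in prose; never output corrected code.",
    }
    return (
        f"You are a tutor for exercise {req.exercise_id}.\n"
        f"Student code:\n{req.student_code}\n"
        f"Error:\n{req.error_message}\n"
        f"Constraint: {constraints[level]}"
    )


if __name__ == "__main__":
    req = HintRequest(
        exercise_id="lists-01",
        student_code="total = 0\nfor x in nums:\n    total = x",
        error_message="result is 4, expected 10",
    )
    # An instructor might keep early weeks at CONCEPT_ONLY and raise the level later.
    print(build_hint_prompt(req, GuidanceLevel.LOCATE_ERROR))
```

The design point the sketch illustrates is that the ceiling on disclosure is a course-level setting chosen by the instructor, not something the model decides per request.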
Research approach
- Mixed-method classroom studies combining telemetry, think-aloud protocols, and graded outcomes to measure effectiveness and trust.
- Iterative prototyping with rapid A/B comparisons of prompt strategies, moderation filters, and UI cues (see the assignment sketch after this list).
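As one way such A/B comparisons could be run, the sketch below assigns each student to a single prompt strategy deterministically, so a student sees a consistent condition across sessions. The condition names and pseudonymous IDs are assumptions, not the actual study design.

```python
# Hypothetical sketch: stable A/B assignment of prompt strategies by hashing a
# pseudonymous student ID, so conditions stay fixed across sessions.
import hashlib

PROMPT_STRATEGIES = ["socratic_questions", "worked_subgoals"]  # assumed conditions


def assign_condition(pseudonymous_id: str, strategies=PROMPT_STRATEGIES) -> str:
    """Map a pseudonymous student ID to one strategy, uniformly and reproducibly."""
    digest = hashlib.sha256(pseudonymous_id.encode("utf-8")).hexdigest()
    return strategies[int(digest, 16) % len(strategies)]


if __name__ == "__main__":
    for sid in ("stu-001", "stu-002", "stu-003"):
        print(sid, "->", assign_condition(sid))
```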
Funding and collaboration
- Funded by the Swiss National Science Foundation (Grant No. 228765).
- If you’re teaching introductory programming and want to collaborate on trials or share datasets of student questions, feel free to reach out.