Designed an end-to-end AI tool that helps tutors guide students more effectively during live sessions through smart suggestions, step-by-step solutions, and adaptive questioning.
Product Type
End-to-end Design, Chatbot, AI
Duration
Team
Bill Guo (Product Manager)
Zach Levonian (ML Engineer)
Responsibility
Research, Product Strategy, Prototyping, User Testing
PLUS connects highly trained human tutors with cutting-edge, AI-driven software to boost learning gains for middle school students from historically underserved communities. It delivers more than 10K hours of tutoring every month, reaching thousands of middle school students.
Tutors at PLUS conduct 30-minute sessions with about five students, leaving only six minutes per student on average. At the same time, they often struggle to explain math concepts clearly and efficiently, or to keep students engaged. In this race against time, tutors must maximize both clarity and engagement so that each student grasps the material and feels motivated.
We developed an LLM-powered co-pilot that helps tutors explain math problems clearly, offer effective encouragement, and ask strategic leading questions, ensuring they make the most of their limited time with each student to boost engagement and learning outcomes.
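The production pipeline isn't documented in this case study, so as a rough illustration only, here is a minimal sketch of how a co-pilot like this might generate its three suggestion types with an OpenAI-style chat API. The model choice, prompt wording, and `suggest_for_problem` helper are all hypothetical.

```python
# Minimal sketch of a tutor co-pilot request. Hypothetical: the real PLUS
# pipeline, prompts, and model are not described in this case study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a co-pilot for middle-school math tutors. Given a problem "
    "and the student's last message, return:\n"
    "1. A step-by-step explanation of the problem.\n"
    "2. One encouraging message tied to what the student just did.\n"
    "3. Two leading questions that nudge the student toward the next step."
)

def suggest_for_problem(problem: str, student_message: str) -> str:
    """Return explanation, encouragement, and leading questions as plain text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Problem: {problem}\nStudent: {student_message}"},
        ],
    )
    return response.choices[0].message.content

print(suggest_for_problem("Solve 3x + 5 = 20", "I subtracted 5 but I'm stuck."))
```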
monthly active users
decrease in time spent explaining math concepts
increase in student engagement
Copilot provides detailed, step-by-step math explanations whose level of detail users can easily expand or reduce as needed
Users can expand each step to find suggested encouragements and leading questions that help students build confidence and think independently
Users can provide feedback through preset categories and options, keeping it actionable for the engineers while saving tutors' own time.
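To show why presets keep feedback actionable, here is a small sketch of how such a report could be structured. The category names and fields below are illustrative; the actual presets in the tool aren't listed in this case study.

```python
# Sketch of preset tutor feedback; category values are placeholders, not
# the tool's real presets.
from dataclasses import dataclass
from enum import Enum

class Category(str, Enum):
    WRONG_MATH = "incorrect math step"
    TONE = "encouragement felt off"
    TOO_LONG = "explanation too long"
    IRRELEVANT = "suggestion not relevant"

@dataclass
class TutorFeedback:
    suggestion_id: str   # which co-pilot suggestion the tutor is rating
    category: Category   # one-click preset instead of free-form text
    helpful: bool        # quick thumbs up/down
    note: str = ""       # optional detail, never required

feedback = TutorFeedback("sugg-042", Category.WRONG_MATH, helpful=False)
```

Because every report carries a machine-readable category, engineers can triage issues without parsing free-form prose, and tutors finish the whole flow in a couple of clicks.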
Understand session structure
Observe tutor and student behaviors and interaction patterns
Identify challenges and frictions
The numerous pain points and inadequate existing support highlight a significant opportunity for intervention during the session, where help is most needed.
Tutors struggle most with soft-skill-related challenges, such as maintaining student engagement, guiding students through problem-solving, and offering effective praise.
Ideation and Prioritization
I facilitated a workshop with the head of product and the ML engineer to assess the technical difficulty of each idea. At this point, we treated their input not as a strict yes-or-no gate but as a reference to guide our design direction.
I also designed a survey for tutors to rate the relevance (validating needs) and helpfulness (validating solutions) of each idea on a Likert scale. To help tutors understand and relate to the ideas, we created textual storyboards in a "Problem-Solution-Resolution" format to provide context. Finally, we plotted the average scores of all ideas on a Relevance vs. Helpfulness matrix.
I cross-referenced each idea's previously assessed technical difficulty with its relevance and helpfulness scores to identify the low-hanging fruit: ideas with the highest impact and the lowest technical difficulty.
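For readers who want to picture the method, here is a sketch of the Relevance vs. Helpfulness matrix with difficulty layered on top. The scores and idea names below are placeholders, not the actual survey results.

```python
# Sketch of the prioritization matrix; all numbers are illustrative.
import matplotlib.pyplot as plt

# idea -> (avg relevance, avg helpfulness, engineer-rated difficulty 1-5)
ideas = {
    "Step-by-step explanations": (4.6, 4.4, 2),
    "Leading questions":         (4.3, 4.1, 2),
    "Encouragement suggestions": (4.0, 3.8, 1),
    "Post-session summaries":    (3.2, 3.9, 4),
}

fig, ax = plt.subplots()
for name, (relevance, helpfulness, difficulty) in ideas.items():
    # Larger markers = easier to build, so low-hanging fruit stands out.
    ax.scatter(relevance, helpfulness, s=300 / difficulty,
               label=f"{name} (difficulty {difficulty})")

# Quadrant lines at the Likert midpoint split the matrix into four cells;
# top-right ideas with low difficulty are the low-hanging fruit.
ax.axvline(3, linestyle="--", color="gray")
ax.axhline(3, linestyle="--", color="gray")
ax.set_xlabel("Avg. relevance (validated need)")
ax.set_ylabel("Avg. helpfulness (validated solution)")
ax.legend()
plt.show()
```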
I explored different ways to apply progressive disclosure, given its proven effectiveness in reducing cognitive load, enhancing readability, and improving engagement through an intuitive, smooth experience.
Little info shown at a time
Easily scannable and digestible
Doesn’t match users’ viewing habits
Poor responsiveness when width is restricted
Aligns with users’ need to view steps first
Fewer clicks required to see everything
Poor responsiveness when width is restricted
It was encouraging to see that tutors found many ideas relevant—proof that we were solving the right problems. Some of these didn’t make it into the final version, but that’s a win in itself: we uncovered meaningful needs. I see this as a chance to go back and design even stronger solutions.
Focusing on the most impactful ideas helped us deliver a lean, effective MVP—but it also opened the door to more possibilities. There’s exciting potential to extend AI support beyond the session itself. Several ideas we didn’t pursue were still strong contenders, and I’m excited by the opportunity to grow the tool in ways that keep meeting tutors where they are.
Shipping the MVP was a great milestone, and now there’s a clear path forward. I’d keep building on what’s working—refining based on real-world feedback, collaborating with engineers to improve the model, and watching tutors interact with the tool to uncover small wins. I’m also excited to revisit ideas we parked early on; now that the foundation is in place, there’s room to expand thoughtfully.