




CanvasAI
Automating grading in Learning Management Systems (LMS) - a SaaS for universities
OVERVIEW
Designed an AI grading assistant that streamlines workflows and saves time for Teaching Assistants (TAs) and faculty.
This project was a 16-week collaboration with GenAI expert faculty member Prof. Justin Hodgson.
ROLE
Product designer
End-to-end user experience, AI and user research, prototyping, interactions, design system
PLATFORM | INDUSTRIES
Web and Mobile | EdTech, B2B SaaS, Enterprise
PROTOTYPE
CONTEXT
Imagine this: You’re a grad student working as a Teaching Assistant, done with all your hectic assignments and ready to take it slow this weekend - until you remember the 60+ assignments waiting to be graded. So much for a relaxing weekend :(


Who are Teaching Assistants (TA)?
They're grad/PhD students working part-time to:
Support faculty and gain experience
Earn cash to sustain living expenses
They have to balance their own coursework, personal commitments, social life, and part-time job responsibilities
CHALLENGES
Conducted 4 user interviews and 3 contextual inquiries with TAs to observe their grading process - uncovering how they work, time spent, shortcuts used, and key pain points.
01
Time-consuming: 10-12 minutes per assignment × 60 assignments on average; longer assignments take even more time
02
No balance: Hard to find time for personal stuff and social life (…and sleep)
03
Feedback fluctuations: Based on individual expertise, energy level and time pressure


Grading's hectic! Too time-consuming






CURRENT METHOD
Instructors grade assignments manually on the web app
Left: Manually read submitted assignments (pdf, slides)
Right: Manually update rubrics and comments


GOAL

To reduce grading time by at least 40%

Why over 40%?
Saving over 5 hours a week gives TAs enough time to finish a couple of personal assignments.
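A quick sanity check on that target, sketched in Python. The 60-assignment, 10-12-minute load comes from the interviews above; everything else is simple arithmetic:

```python
# Back-of-the-envelope math behind the 40% goal.
# Inputs (from the TA interviews): ~60 assignments at 10-12 min each.
assignments = 60
minutes_per_assignment = (10, 12)

# Total weekly grading load in hours: 10-12 hours.
total_hours = tuple(assignments * m / 60 for m in minutes_per_assignment)

# A 40% reduction saves roughly 4-4.8 hours; beating the target on a
# 12-hour load is what pushes weekly savings past the 5-hour mark.
saved_hours = tuple(0.40 * h for h in total_hours)

print(f"Weekly load: {total_hours[0]:.1f}-{total_hours[1]:.1f} h")
print(f"40% savings: {saved_hours[0]:.1f}-{saved_hours[1]:.1f} h")
```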
SOLUTION
AI-powered grading assistant (browser extension) for TAs, trained on past assignments and instructor grading patterns.







Features
01
Chatbot
Natural conversational style, familiar to users of AI chatbots like ChatGPT (no learning curve)
02
Auto-grade
One-click - assesses the assignment and suggests grades and feedback on each rubric point
03
Suggest improvements
Offers targeted recommendations and resources from the web (papers, articles, blogs)
Rationale and Interactions
Tooltips: Inform users about each interactive button's action
Visual hierarchy: Generated content in the center, with all action buttons grouped at the bottom for easier interaction
Hidden actions: Kept only primary actions visible; moved secondary tasks under expandable menus for a cleaner UI
User agency: No direct actions by AI without the user's input
Progressive disclosure: Guides the user through the intended workflow
Web vs Phone: Key interactions and differences
ACCESSING THE ASSIGNMENT
Web
The assignment and student can be accessed on the LMS directly; CanvasAI scans the current student's submission.
Phone
Canvas LMS doesn't support grading on the phone, so I added a flow to select an assignment and a student first.
IMPACT


Grading time reduced by over 60%

Wizard of Oz testing with over 4 users
I simulated CanvasAI by manually grading sample assignments, then had another TA grade the same ones to compare time-on-task and measure efficiency.
DESIGN SYSTEM
Created a custom design system from scratch.
Tokens and Variables






Component library



Responsive
The above provided a quick overview of the project. What follows is a detailed exploration—covering gaps in the current onboarding, research insights, ideation and iterations.
THE STORY
How did I get here?
RESEARCH
Deep research on AI, its functionality, and its applications. Here are some interesting facts, useful tips, and questions I raised while researching AI.

AI hallucinations
When AI interprets false data as real and builds on it
Good when: Ideating
Bad when: Researching
Always ask for the source of the information when researching. ChatGPT now provides sources by default.

Prompt engineering
Framing directly influences the depth of AI's insights. One effective method is ROCKIT:
Role, Opportunity, Context, Knowledge, Instruction, Tone. It highly depends on your context, but also try these:
Context, Objective, Restriction, Action (CORA) - search with restrictions
Purpose, Action, Result, Adjust (PARA) - multiple results / iteration
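The ROCKIT structure above can be sketched as a tiny prompt builder. The helper function and example field values here are illustrative assumptions, not part of any official framework or API:

```python
# Minimal sketch of the ROCKIT prompt framework (Role, Opportunity,
# Context, Knowledge, Instruction, Tone). The framework fields come
# from the case study; this helper and its example values are
# hypothetical, for illustration only.

def rockit_prompt(role, opportunity, context, knowledge, instruction, tone):
    """Assemble a structured prompt from the six ROCKIT fields."""
    return "\n".join([
        f"Role: {role}",
        f"Opportunity: {opportunity}",
        f"Context: {context}",
        f"Knowledge: {knowledge}",
        f"Instruction: {instruction}",
        f"Tone: {tone}",
    ])

prompt = rockit_prompt(
    role="You are an experienced teaching assistant.",
    opportunity="Help grade a student essay against a rubric.",
    context="Undergraduate writing course; 60 submissions to grade.",
    knowledge="Rubric: thesis clarity, evidence, structure, grammar.",
    instruction="Score each rubric point 1-5 with one-line feedback.",
    tone="Constructive and encouraging.",
)
print(prompt)
```

Filling each slot explicitly is what keeps the model anchored to the grading context instead of drifting into generic feedback.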

Biases
AI learns from the data it's trained on, and biases arise when that data is one-sided.
While humans are needed to detect bias, they often have subconscious biases themselves. So how can we trust humans to evaluate it fairly?

Facts vs opinions
AI pushes back only on clear facts like 2+2=4. If you say 2+2=22, it'll acknowledge the playfulness but correct you with facts.
However, if you ask for opinions, it draws from biased web sources. If you disagree, it'll simply change its answer and agree with you.
Best to draw your own insights. Rely on AI for facts, not opinions.
FUN AI
EARLY IDEATION
AI integration into Canvas


Left: the submitted assignment. Right: split in two, with AI insights on top and rubrics at the bottom.
ITERATION
Instead of Canvas integration, the solution shifted to a browser extension that pops up on grading sites.

Why?
Can iterate quickly, update independently, and fix bugs without risking institutional systems
Working through a partnership ecosystem is complicated
For scalability, the workflow can be adapted to support different LMS platforms
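One way the extension could decide when to surface itself is by matching the current page's URL against known grading screens. A minimal sketch, assuming Canvas SpeedGrader-style URLs on instructure.com; the pattern and function name are illustrative, and a real extension would also declare match patterns in its manifest:

```python
# Sketch: detect whether a URL looks like a Canvas grading screen.
# The instructure.com / speed_grader pattern below is an assumption
# for illustration, not a guaranteed contract of the Canvas LMS.
import re
from urllib.parse import urlparse

GRADING_PATH = re.compile(r"/courses/\d+/gradebook/speed_grader")

def is_grading_page(url: str) -> bool:
    """Return True if the URL points at an LMS grading screen."""
    parts = urlparse(url)
    return (
        parts.hostname is not None
        and parts.hostname.endswith(".instructure.com")
        and GRADING_PATH.search(parts.path) is not None
    )

# Example checks:
print(is_grading_page(
    "https://myuni.instructure.com/courses/101/gradebook/"
    "speed_grader?assignment_id=7"
))  # True
print(is_grading_page("https://myuni.instructure.com/courses/101"))  # False
```

Keeping the detection logic in one predicate like this is also what makes the scalability point above workable: supporting another LMS means adding another URL pattern, not rebuilding the workflow.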
THE FLOW
Mapped the entire user journey, then iterated and created the flow for the solution.


CHATBOT IDEAS
First few chatbot iterations used preset prompt buttons for interactions. Tabs (red/orange) were introduced to keep track of all actions.



However, users found the experience rigid and unnatural, lacking a conversational flow.
More iterations
Redesigned the interface to include a chat box, allowing for more interactive and natural user engagement. Tried, tested and iterated more.



Tabs: users didn't see the need for them, so they were replaced with the most-used actions
Quick prompts: took up too much real estate, and users found them inconvenient at the top
Visualization of the student's performance
MOBILE-FIRST APPROACH
Took a mobile-first approach to identify the most crucial features and workflow for the MVP

FINAL SOLUTION
Scaled the mobile designs up to the web platform to design the final solution


Features and rationale

The flow starts before grading, on the assignment description page
Why is this screen important?




Long descriptions make it easy for TAs to miss important details
Gives TAs a quick summary and the ability to set the rigor for the assignment
How does auto-grading work?


Suggesting improvements


More features




Tooltips on hover inform users of each button's action
Common and saved prompts for faster workflow
FUTURE SCOPE
Data visualization on the chatbot to see students' progress

REFLECTIONS
Natural Interactions
Initial versions were rigid - users felt boxed in by the prompt buttons. I learned how to balance consistency with natural, open-ended interactions, especially in tools that live alongside existing user habits.
Collaborate, not replace
I saw how critical it is to design AI interventions that feel collaborative rather than replacing human effort altogether.
User agency
I learned how important user agency is. Even if the end goal is clear, the process shouldn't be too autonomous; control must remain with the user.
Balancing UX
It was tempting to overload the chatbot with features, but real impact came from simplifying interactions and surfacing only what users needed at the right moment.
Prototype
The real experience
THANK YOU
Hope you liked it ;)