


CanvasAI
AI in Learning Management Systems (LMS) like Canvas.
OVERVIEW
Designed an AI grading assistant that streamlines workflows and saves time for Teaching Assistants (TAs) and faculty.
This project was a collaboration with Prof. Justin Hodgson, a faculty expert in GenAI, and was executed over 16 weeks.
ROLE
Product designer
End-to-end user experience, AI research, prototyping, interaction design and testing, design system
PLATFORM
Web
PROTOTYPE
Let's go
ACCOMPLISHMENT
Grading time reduced by over 63%
Easily accessible tools for the most-used tasks
Derived from Wizard of Oz testing with 4 users
CONTEXT
PROBLEM
Who are TAs?
Often Graduate/PhD students working part-time
Grade 15-32 (or more) students each
CHALLENGES
Exhausting - juggling coursework and grading
Limited assignment feedback - students receive feedback limited by each TA's individual knowledge
SOLUTION


AI-powered grading assistant (browser extension) for TAs, trained on past assignments and instructor grading patterns.
Auto-grade
Suggest improvements
One-click suggestions - assesses the assignment and suggests grades and feedback on each rubric point
Improvements - Offers targeted recommendations and resources from the web (like papers, articles, blogs)
Compare
Interactions

Tooltips - inform users of each button's action
Most used commands easily accessible
DESIGN SYSTEM
Created a custom design system from scratch.
Tokens and Variables


Component library

THE STORY
How did I get here?
RESEARCH
Deep research on AI, its functionality, and its applications. Here are some interesting facts, useful tips, and questions I raised while researching AI.
CURRENT METHOD
Instructors grade assignments manually. TAs review the submitted assignment on the left and grade on the right using the rubric.


Grading manually leads to inconsistencies when multiple instructors are involved. Long submissions, sometimes well over 20 pages, make the process time-consuming, especially for TAs juggling other responsibilities.
EARLY IDEATION
AI integration into Canvas


Left - the submitted assignment. Right - split in two, with AI insights on top and the rubric at the bottom.
ITERATION
Instead of Canvas integration, the solution shifted to a browser extension that pops up on grading sites.

Why?
Can iterate quickly, update independently, and fix bugs without risking institutional systems
Working through the partnership ecosystem is complicated
For scalability, the workflow can be adapted to support different LMS platforms
THE FLOW
Mapped the entire user journey, then iterated and created the flow for the solution.


CHATBOT IDEAS
First few chatbot iterations used preset prompt buttons for interactions. Tabs (red/orange) were introduced to keep track of all actions.



However, users found the experience rigid and unnatural, lacking a conversational flow.
More iterations
Redesigned the interface to include a chat box, allowing for more interactive and natural user engagement. Tried, tested and iterated more.



Tabs - users didn't find a need for them
Quick prompts - took up too much real estate
Replaced tabs with the most-used actions
Users found them inconvenient at the top
Visualization of students' performance
FINAL SOLUTION
The final user flow, refined through iteration.


Features and rationale

The flow starts before grading, on the assignment description page
Why is this screen important?




Long descriptions - easy for TAs to miss important details
Gives TAs a quick summary and the ability to set the rigor for the assignment
How does auto-grading work?


Assesses the assignment against the rubric and suggests a grade and feedback for each rubric point
Suggesting improvements


Offers targeted recommendations and resources from the web (like papers, articles, blogs)
More features




Tooltips on hover - inform users of each button's action
Common and saved prompts for faster workflow
FUTURE SCOPE
Data visualization on the chatbot to see students' progress

REFLECTIONS
Natural Interactions
Initial versions were rigid - users felt boxed in by the prompt buttons. I learned how to balance consistency with natural, open-ended interactions, especially in tools that live alongside existing user habits.
Collaborate, not replace
I learned how critical it is to design AI interventions that feel collaborative rather than replacing human effort altogether.
User agency
I learned how important user agency is. Even when the end goal is clear, the process shouldn't be too autonomous; control must remain with the user.
Balancing UX
It was tempting to overload the chatbot with features, but real impact came from simplifying interactions and surfacing only what users needed at the right moment.
Prototype
The real experience
Let's go
THANK YOU
Hope you liked it ;)