CanvasAI | Case Study

AI in Learning Management Systems (LMS) like Canvas.

Role

Product designer

Timeline

12 weeks

Designed an AI grading assistant that streamlines workflows and saves time for Teaching Assistants (TAs) and faculty, in collaboration with GenAI expert Prof. Justin Hodgson.

Context

Who are TAs?

  • Often Graduate/PhD students working part-time

  • Grade between 15 and 32 students each

Challenges

  • Exhausting - juggling coursework and grading

  • Limited assignment feedback - quality varies with each TA's individual knowledge

Final solution

AI-powered grading assistant (browser extension) for TAs, trained on past assignments and instructor grading patterns.

Current solution based on Canvas LMS

Features

Auto-grade

Suggest improvements

  • One-click suggestions - assesses the assignment and suggests grades and feedback on each rubric point

  • Improvements - Offers targeted recommendations and resources from the web (like papers, articles, blogs)

Compare

Interactions


  • Tooltips - inform users

  • Most used commands easily accessible

Impact

Through Wizard-of-Oz and comparative usability testing

Reduced grading time

by 63%

Easy interactions

  • Easily accessible tools for the most-used tasks

Design system

How did I get here?

Let's explore the entire story

Research

Deep research on AI, its functionality, and its applications. Here are some interesting facts, useful tips, and questions I raised while researching AI.

  • AI hallucinations

    When AI interprets false data as real and builds on it.

    Good when: ideating
    Bad when: researching

    Always ask for the source of the information when researching. ChatGPT now provides sources by default.

  • Prompt engineering

    Framing directly influences the depth of AI-generated insights. One effective method is ROCKIT:
    Role, Opportunity, Context, Knowledge, Instruction, Tone

    The right framework depends on your context, but try these:
    Context, Objective, Restriction, Action (CORA) - search with restrictions.

    Purpose, Action, Result, Adjust (PARA) - multiple results/iteration
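    Frameworks like ROCKIT are essentially templates for assembling a prompt from labeled parts. A minimal sketch of that idea, with entirely hypothetical field values (the function name and example text are my own, not part of the case study):

    ```python
    # Hypothetical sketch: build a prompt from the six ROCKIT parts
    # (Role, Opportunity, Context, Knowledge, Instruction, Tone).

    def build_rockit_prompt(role, opportunity, context, knowledge, instruction, tone):
        """Assemble one prompt string, one labeled line per ROCKIT part."""
        sections = [
            ("Role", role),
            ("Opportunity", opportunity),
            ("Context", context),
            ("Knowledge", knowledge),
            ("Instruction", instruction),
            ("Tone", tone),
        ]
        return "\n".join(f"{label}: {text}" for label, text in sections)

    # Example values are illustrative only.
    prompt = build_rockit_prompt(
        role="You are a teaching assistant for a writing course.",
        opportunity="Help grade a student essay consistently.",
        context="The essay responds to a prompt about digital rhetoric.",
        knowledge="Use the attached rubric with four criteria.",
        instruction="Suggest a score and one comment per rubric point.",
        tone="Constructive and encouraging.",
    )
    ```

    CORA or PARA would work the same way, just with different labeled parts; the value of the framework is that no part of the framing gets forgotten.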

  • Biases

    AI learns from the data it's trained on, and biases arise when that data is one-sided.

    While humans are needed to detect bias, they often have subconscious biases themselves. So how can we trust humans to evaluate it fairly?

  • Facts vs opinions

    AI only pushes back on clear facts like 2+2=4. If you say 2+2=22, it'll acknowledge the playfulness but correct you with the facts.

    However, if you ask for opinions, it draws from biased web sources. If you disagree, it'll simply change its answer and agree with you.

    It's best to draw your own insights: rely on AI for facts, not opinions.

Current method

Instructors grade assignments manually. TAs go through the submitted assignment on the left and grade on the right using the rubric.

Grading manually leads to inconsistencies when multiple instructors are involved. Long submissions, sometimes well over 20 pages, make the process time-consuming, especially alongside their other responsibilities.

Early ideation

Integration of AI into Canvas itself

Left: the submitted assignment. Right: split in two, with AI insights on top and the rubric at the bottom.

Iteration

Instead of integrating into Canvas, the solution shifted to a browser extension that pops up on grading sites.

Why?

  • Can iterate quickly, update independently, and fix bugs without risking institutional systems

  • Working through partnership ecosystem is complicated

  • For scalability, the workflow can be adapted to support different LMS platforms

The flow

Mapped the entire user journey, then iterated and created the flow for the solution.

Chatbot initial idea

First few chatbot iterations used preset prompt buttons for interactions. Tabs (red/orange) were introduced to keep track of all actions.

However, users found the experience rigid and unnatural, lacking a conversational flow.

More iterations

Redesigned the interface to include a chat box, allowing for more interactive and natural user engagement. Tried, tested and iterated more.

  • Tabs - users didn't find a need for them

  • Quick prompts - took up too much screen real estate

  • Replaced tabs with the most-used actions

  • Users found these actions inconvenient at the top

  • Visualization of the student's performance

Final design

A conversational chatbot with accessible action buttons proved most effective with users.

Final features and rationale

The flow starts before grading, on the assignment description page.

Why is this screen needed?

  • Assignment descriptions are long

  • It's easy for TAs to miss important details

  • Summarizes the assignment according to the rubric

  • Instructors can set the rigor - the AI grades accordingly

While grading

Though the goal is to suggest grades, users prefer agency. So instead of suggesting grades directly upon launch, users retain control over every action.

  • Informs users while loading

  • Summary according to assignment description

Next - areas of improvement. CanvasAI offers targeted recommendations and resources from the web based on missing points and added comments on the rubric. Instructors can also compare the student's assignment with their past assignments to see progress.

  • Suggests and humanizes the comment for 'improvements'

  • Users can edit before posting

  • Auto-generates insights from comparing assignments

More features

  • Tooltips on hover - inform users of each button's action

  • Academic rigor can be set at any point

  • Further grading


Future scope

  • Student's performance - data visualization in the chatbot

Reflection

Natural Interactions

Initial versions were rigid - users felt boxed in by the prompt buttons. I learned how to balance consistency with natural, open-ended interactions, especially in tools that live alongside existing user habits.

Collaborate, not replace

I learned how critical it is to design AI interventions that feel collaborative rather than replacing human effort altogether.

User agency

I learned how important user agency is. Even if the end goal is clear, the process shouldn't be too autonomous; control must remain with the user.

Balancing UX

It was tempting to overload the chatbot with features, but real impact came from simplifying interactions and surfacing only what users needed at the right moment.

Thank you