CanvasAI | Case Study

AI in Learning Management Systems (LMS) like Canvas.

Role

Product designer

Timeline

12 weeks

Designed an AI grading assistant, in collaboration with GenAI expert Prof. Justin Hodgson, that streamlines workflows and saves time for Teaching Assistants (TAs) and faculty.

Context

Who are TAs?

  • Often Graduate/PhD students working part-time

  • Grade 15-32 students each

Challenges

  • Exhausting - juggling coursework and grading

  • Limited assignment feedback - quality depends on each TA's individual knowledge

Final solution

AI-powered grading assistant (browser extension) for TAs, trained on past assignments and instructor grading patterns.

Current solution based on Canvas LMS

Features

Auto-grade

Suggest improvements

  • One-click suggestions - assesses the assignment and suggests a grade for each rubric point (a data sketch follows this features list)

  • Improvements - offers targeted recommendations and resources from the web (papers, articles, blogs)

Compare

Interactions

  • Tooltips - inform the user about each action

  • Quick access - the most-used commands are within easy reach
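
To make the one-click suggestions concrete, here's a minimal sketch of the data shape such a feature could work with - the names below are illustrative assumptions, not the actual schema:

  // Hypothetical shape of a one-click grade suggestion, per rubric point.
  // Names here are illustrative assumptions, not the actual schema.
  interface RubricPointSuggestion {
    criterion: string;        // e.g. "Clarity of argument"
    suggestedPoints: number;  // points the AI proposes for this criterion
    maxPoints: number;        // points available on the rubric
    rationale: string;        // short justification shown to the TA
    resources: string[];      // papers, articles, blogs for improvement
  }

  interface GradeSuggestion {
    submissionId: string;
    points: RubricPointSuggestion[];
    // Nothing is submitted until the TA accepts or edits the suggestion.
    status: "suggested" | "accepted" | "edited";
  }

Keeping a status field preserves the TA's agency: a suggestion never becomes the grade until the TA accepts or edits it.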

Impact

Validated through Wizard of Oz and comparative usability testing

Reduced grading time by 63%

Easy interactions

  • Easily accessible tools for the most-used tasks

Design system

Refined through Wizard of Oz testing

How did I get here?

Let's explore the entire story

Research

Deep research on AI, its functionality, and its applications. Here are some interesting facts and questions that came up while researching AI.

  • AI hallucinations

    When AI interprets false data as real and builds on it

    Good when: ideating
    Bad when: researching

    Always ask for the source of the information when researching. ChatGPT now provides sources by default.

  • Prompt engineering

    Framing directly influences the depth of AI-generated insights. One effective method is ROCKIT:
    Role, Opportunity, Context, Knowledge, Instruction, Tone (a prompt sketch follows this research list).

    The right framework depends on your context, but also try these:
    Context, Objective, Restriction, Action (CORA) - for search with restrictions.

    Purpose, Action, Result, Adjust (PARA) - for multiple results and iteration.

  • Biases

    AI learns from the data it's trained on, and biases arise when that data is one-sided.

    While humans are needed to detect bias, they often have subconscious biases themselves. So how can we trust humans to evaluate it fairly?

  • Facts vs opinions

    AI only pushes back when it comes to clear facts like 2+2=4. If you say 2+2=22, it acknowledges the playfulness but corrects you with the facts.

    However, if you ask for opinions, it draws from biased web sources. If you disagree, it simply changes its answer and agrees with you.

    It's best to draw your own insights: rely on AI for facts, not opinions.
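
To make ROCKIT concrete, here's a minimal sketch of how a grading prompt could be assembled from its six parts. The template and example values are illustrative assumptions, not the prompts CanvasAI actually sends:

  // Minimal ROCKIT prompt builder: Role, Opportunity, Context, Knowledge,
  // Instruction, Tone. Purely illustrative - not CanvasAI's actual prompt.
  interface RockitPrompt {
    role: string;
    opportunity: string;
    context: string;
    knowledge: string;
    instruction: string;
    tone: string;
  }

  function buildPrompt(p: RockitPrompt): string {
    return [
      `Role: ${p.role}`,
      `Opportunity: ${p.opportunity}`,
      `Context: ${p.context}`,
      `Knowledge: ${p.knowledge}`,
      `Instruction: ${p.instruction}`,
      `Tone: ${p.tone}`,
    ].join("\n");
  }

  // Example: framing a grading request with ROCKIT.
  const gradingPrompt = buildPrompt({
    role: "You are a teaching assistant for a writing-intensive course.",
    opportunity: "Help grade a student essay consistently.",
    context: "The essay responds to the attached assignment description.",
    knowledge: "Use the rubric and the instructor's past grading patterns.",
    instruction: "Suggest a score for each rubric point with a short rationale.",
    tone: "Constructive and encouraging.",
  });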

Current method

Instructors often grade assignments manually: TAs go through the submitted assignment on the left and grade on the right using the rubric.

Grading manually leads to inconsistencies when multiple instructors are involved. Long submissions, sometimes well over 20 pages, make the process time-consuming, especially alongside TAs' other responsibilities.

Early ideation

Integration of AI into Canvas itself

Left: the submitted assignment. Right: split in two, with AI insights on top and the rubric at the bottom.

Iteration

In the future, the AI assistant needs to grade across multiple LMSs, not just Canvas. So, instead of a Canvas integration, the solution shifted to a browser extension that pops up on grading sites.
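
As a rough sketch of that idea, a WebExtension content script could check the page URL against known grading routes and mount the assistant panel. The URL pattern and DOM hooks below are illustrative assumptions, not the real implementation:

  // content-script.ts - sketch of how the extension could detect a grading
  // page and mount the assistant. The URL pattern and DOM hooks are
  // illustrative assumptions.
  const GRADING_ROUTES: RegExp[] = [
    /\/courses\/\d+\/gradebook\/speed_grader/, // Canvas SpeedGrader (assumed)
    // Routes for other LMSs (Moodle, Blackboard, ...) could be added later.
  ];

  function isGradingPage(url: string): boolean {
    return GRADING_ROUTES.some((route) => route.test(url));
  }

  function mountAssistantPanel(): void {
    const panel = document.createElement("div");
    panel.id = "canvasai-panel";
    panel.textContent = "CanvasAI"; // the real UI would render the chat panel
    document.body.appendChild(panel);
  }

  if (isGradingPage(window.location.href)) {
    mountAssistantPanel();
  }

Registering this script under the manifest's content_scripts key with per-LMS URL matches is what would let the same extension pop up on other grading sites later.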

The flow

Mapped the entire user journey, then iterated and created the flow for the solution.

Chatbot initial idea

Initially, CanvasAI was designed to auto-suggest grades and feedback through a button-based interface. Tabs (red/orange) were introduced to keep track of all actions. However, users found the experience rigid and unnatural, lacking a conversational flow.

More iterations

Redesigned the interface to resemble a chatbox, allowing for more interactive and natural engagement. Tried, tested, and iterated more features to get the solution right:

  • Tabs - users didn't see the need for them

  • Quick prompts - took up too much screen real estate

  • Replaced tabs with the most-used actions - users found them inconvenient at the top

  • Visualization of student performance - users didn't need it

Final designs and rationale

The flow actually starts before grading, on the assignment description page:

  • Assignment descriptions are long

  • CanvasAI summarizes the assignment according to the rubric

  • Instructors can set the rigor, and the AI grades accordingly (sketched below)
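
One way to picture the rigor setting is as a parameter that adjusts the grading directive before it reaches the model. The levels and wording below are assumed for illustration:

  // Hypothetical rigor levels an instructor could set per assignment.
  type Rigor = "lenient" | "standard" | "strict";

  // Maps the instructor's choice to a grading directive for the model.
  const RIGOR_DIRECTIVES: Record<Rigor, string> = {
    lenient: "Give the benefit of the doubt on borderline rubric points.",
    standard: "Apply the rubric as written.",
    strict: "Award points only when a rubric criterion is fully met.",
  };

  function gradingInstruction(rigor: Rigor): string {
    return `Grade against the rubric. ${RIGOR_DIRECTIVES[rigor]}`;
  }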

Though the goal is to suggest grades, users prefer agency. So, instead of suggesting grades directly upon launch, CanvasAI asks for user input first.

  • Informs users while loading

  • Summary according to the assignment description

The next step is to provide areas of improvement. CanvasAI offers targeted recommendations and resources from the web based on missing rubric points and comments added to the rubric. Instructors can also compare the student's assignment with their past assignments to see progress.

  • Suggests and humanizes the comment for 'improvements'

  • Auto-generates insights from comparing assignments