Kiddom offers a free digital gradebook for teachers. When the company pivoted toward curriculum management, the team needed to rethink the gradebook.
The project objective was to support standards-based grading with a more robust system for assessment building.
I was the sole product designer, working with a PM and the Head of Product, with some oversight from the Head of Design at the beginning.
It's important to note that this was a transitional period: our Head of Design left midway through the project.
As we were defining the project, I reviewed the user feedback, interviews, and personas we already had in our internal resources to understand how teachers grade.
- Inputting grades is necessary but time-consuming
- Teachers are used to grid formats
- Teachers who do mastery-based grading need more tools and detailed reports
A lot was going on at this time. Kiddom, previously a learning management system (LMS), was pivoting toward robust curriculum and assessment tools, which necessitated rethinking the gradebook.
My goal: satisfy the requirements of the digital curriculum and delight users with workflow improvements.
- How might we support standards-based grading, and mastery-based grading in particular?
- What other UX debt might we address?
- How might we enable quick, painless grading?
There was no formal design brief, so I gathered our PMs to work through the problem and priorities, keeping both in mind for planning and design.
From our conversation, I developed questions and key goals that would encapsulate the problem.
My research reinforced:
- We have the potential to address pain points with our current gradebook
- Gradebooks are usually grids. Mastery-based grading typically uses color to distinguish levels, with assignments spread across the top
- No mastery-based gradebook enabled question level reporting
I set out to design something better than a grid.
I investigated the known data on our users, conducted comprehensive audits of the student and grading experiences, and researched not just digital and print gradebooks but also those touted as "standards-based."
I knew that "standards-based" did not necessarily mean a product supported mastery-based grading, which requires more detailed grading and reporting.
Some takeaways from this session:
- Move away from legacy designs
- Enable both horizontal and vertical input (optimize inputting)
- Solve for question-level grading
While awaiting Product and Design direction, I used existing legacy components to make quick high-fidelity mockups and gather feedback from our CEO, Product, Engineering, and Design on our direction and bandwidth.
With the feedback I received, I went back to the drawing board for ideation.
I met with my product manager to talk through the shifting landscape for our components. We decided that the trajectory was to support the curriculum builder design (which was being developed concurrently with my designs). In addition, I needed to explore how to support question-level inputs.
I focused on these problems to solve:
- Reflect the curriculum builder style
- Explore question-level input
- Explore horizontal and vertical grading
- Explore ways to represent rubrics
My PM shared research she had conducted with our User Researcher. Teachers wanted a grid layout because a teacher may want to grade a single student across all questions, or grade one question for every student down the line.
We collaborated on the end result: the vertical roster would support workflow (Who am I grading? What remains to be graded?) and results, while the horizontal bar would support question-level grading (What am I grading?).
I played with a variety of formats, not all of which are represented above. In my head, I framed them as:
Option 1: Popover input
Option 2: Line input
Option 3: Toggle screens for workflow
Option 4: Modified grid for question level
And, of course, keeping in mind how mobile might support the options.
Just as I was assembling the mockups above into a prototype to test with users, the project was reassigned to a new Senior Product Designer for a fresh perspective. The feedback I received from the CEO and Lead of Product was that the design was too complicated, and they wanted an outside perspective to simplify things.
Looking back, I also see this as an opportunity to learn about communication and revision. While I did create a document for the Lead of Product to check in on progress, I should have included the CEO in the design choices that my PM and I discussed as well. I tend to design the most complicated view first, to test the complexity of the system and its choices, so I had not yet pared the designs down to show a less complicated use case.
I was looking forward to getting a fresh perspective from our users. I would have asked new and longtime users to grade some papers to see whether the design was intuitive and, if not, what was confusing. Then, based on user feedback, I would have decided what was necessary and what could be pared away. Without actual testing, I mocked up some revisions.