Automated grading for programming assignments
Instructors spend tens of hours every week compiling and debugging submissions
Students don’t get valuable feedback and often receive inaccurate grades
Current solutions are fragmented, outdated, and largely internal to universities
Programming courses are in high demand in public schools in the US and abroad.
There aren’t enough teachers who can teach programming, because most teachers don’t code themselves.
Teachers who want to teach don’t have access to quality materials to kickstart their curriculum.
AutoGradr is on its third version, having evolved from a lightweight solution for high-touch customers into a plug-and-play product for instructors all over the world, teaching in any language and any classroom format.
Standard workflow for instructors
Instructors currently download, run, and debug student solutions in order to grade them accurately. As student bodies grow, this is becoming increasingly unsustainable, and ultimately a blocker to expanding course sizes.
Manual grading is extremely inefficient and does not scale. No instructor can be certain whether a piece of code will pass a set of test cases in a reasonable amount of time without actually running it, and multiplying that work across hundreds of submissions makes grading the bottleneck.
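To make concrete what an autograder automates: the core loop runs each submission against a set of console test cases and compares actual output to expected output. The sketch below is a minimal illustration, not AutoGradr’s actual implementation; the doubling submission, test cases, and function names are invented for the example.

```python
import os
import subprocess
import sys
import tempfile

def run_test_case(cmd, stdin_text, expected_stdout, timeout=5):
    """Run one console test case: pipe text into stdin, compare stdout."""
    try:
        result = subprocess.run(
            cmd, input=stdin_text, capture_output=True,
            text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # slow or non-terminating code fails the case
    return result.stdout.strip() == expected_stdout.strip()

def grade(cmd, test_cases):
    """Return the fraction of test cases a submission passes."""
    passed = sum(run_test_case(cmd, s, e) for s, e in test_cases)
    return passed / len(test_cases)

# A toy submission: read a number, print its double.
submission = "n = int(input())\nprint(n * 2)\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(submission)
    path = f.name

tests = [("3\n", "6"), ("10\n", "20"), ("0\n", "0")]
score = grade([sys.executable, path], tests)
print(f"score: {score:.0%}")  # prints "score: 100%"
os.remove(path)
```

Multiply this loop by hundreds of submissions and dozens of test cases, add the sandboxing and timeouts a real system needs, and the value of automating it becomes clear.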
Feedback for Students
Every student is unique and may need help in different areas to grow, but busy instructors cannot assist each student or provide specific feedback on where to improve.
When tens or hundreds of students submit their code to be graded, every student who has worked hard expects an accurate grade, and most likely a better grade than students whose solutions are not nearly as good. A classroom without AutoGradr is unable to differentiate these students.
Considerations from v2 - Instructors
We were receiving a lot of support requests from instructors who were struggling to get started with AG
In attempting to design a flexible interface that works for everyone, we did not have any set pathways for new instructors to follow
Instructors were not able to test out how the students experienced the product
Below are screens that instructors would experience soon after their sign up flow in AG v2
Considerations from v2 - Students
Students would contact support about not understanding the feedback from their test cases
Instructors wanted students to be able to code larger projects on AG itself
Students wanted the ability to keep track of previous attempts
AG 3.0 Design Principles - Instructors
Empowering, yet familiar
Our instructors have been teaching for a long time using the same processes and flows. Disrupting this flow, even if it’s for the better, can seem overwhelming. Leveraging patterns instructors are familiar with can help them see their course in AG.
Help & encourage actions
Instructors come from various backgrounds, speak different languages, and may not have taught CS before. An ever-present help section encourages instructors to navigate the interface with confidence, and clear actions on every screen help them move forward in the flow they are in.
Migrating an existing workflow can seem daunting, and oversharing steps early on can make the migration seem longer. Getting instructors through small achievable tasks sets them on a path to success.
Symmetry of experiences
Believe that instructors want the best for their students, be it knowing how they would experience a question, or how they have progressed through attempts. Provide insight into the student’s journey as much as possible to enable a stronger feedback loop.
When an instructor signs up, they are given five concrete steps to get their course up and running. These steps were designed based on behavior we noticed in v2 and on conversations with instructors about what they would need to migrate successfully.
Instructors also receive a series of emails after signing up that reinforce next steps after their first session.
Welcome email series
Creating questions & test cases
Known issues with v2
Oversimplified UI for a complex problem
No clear path of how to approach all the parts
Little to no help content on an overwhelming screen
The distinction between labs and questions is not clear
No student preview
The v2 question-creation interface is shown on the right.
Creating questions with a stepper in v3
Special emphasis is placed on help content that persists across the experience in the right rail. Help content on complex screens is based on support requests received in v2.
Test case set-up
Creating test cases is now broken out into a discrete step due to several considerations:
Once an instructor has decided to create a test case, they don’t need to see any other information at this point
Test case set up has several steps embedded in it which requires its own help content
Console outputs can get fairly long, and instructors need to be able to scroll and see all the required inputs/outputs
Console based test case creation
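One way to picture the data behind a console test case is sketched below. This is a hypothetical illustration, not AutoGradr’s schema; the class and field names are invented, and a real system would track more, such as visibility to students and partial credit.

```python
from dataclasses import dataclass

@dataclass
class ConsoleTestCase:
    """One console test case: the text piped to the program's stdin
    and the full expected stdout, which may span many lines."""
    name: str
    stdin: str
    expected_stdout: str
    points: int = 1

    def passes(self, actual_stdout: str) -> bool:
        # Trailing whitespace is a common source of false failures in
        # long console outputs, so normalize each line before comparing.
        norm = lambda s: [line.rstrip() for line in s.strip().splitlines()]
        return norm(actual_stdout) == norm(self.expected_stdout)

tc = ConsoleTestCase("doubles", stdin="3\n", expected_stdout="6\n")
print(tc.passes("6\n"))   # True
print(tc.passes("6 \n"))  # True: trailing space is ignored
print(tc.passes("7\n"))   # False
```

Normalizing whitespace before comparing is a deliberate design choice here: students lose credit for wrong answers, not for an invisible trailing space.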
Following our principle of symmetrical experiences and student insight, Preview as Student is now the final step of creating a question. Instructors can also preview the question as a student at any point during the creation process.
This gives instructors insight into how their questions work, and lets them tweak a question based on what they want students to see.
The final component to provide instructors with insight into their classroom was to create a centralized view for status, attempts, and performance.
Questions are at the core of AG and any instructor’s ability to teach. Providing an interface that allows them to manage questions such that they can be reused semester after semester is essential to both the instructors’ and AG’s success.
The Course Overview helps instructors see, at a glance, what is active and what is pending their review
The Roster provides instructors with tools to manage their classroom and get their course set up
The Gradebook includes access to the student’s IDE, so instructors can see not only the results but also the code and past attempts, tracing the student’s journey to their final submission.
Student experience: Attempts & IDE
While the design system from the instructor experience can largely be leveraged to revamp the student experience, the core of a student’s use of AutoGradr is making attempts and submissions. Viewing questions and a fully-loaded, web-based IDE are the two components unique to the student experience, and both are outlined below.
Understanding feedback was the top support request we received from students, so the focus for v3 was to break down results and provide more than one view of the difference between expected and actual test results.
Students also needed an IDE that more accurately reflects real-world coding and their own local environments, so they can switch between coding on their own devices and bringing their workspace into AG to make attempts and iterate on feedback.
Keeping our focus on help content, the IDE’s default state provides guidance on what students can do and settings that help them feel at home. Once students make attempts, an overview of their attempts appears in the sidebar, with access to previous attempts as they continue to revise their code. The console’s default state shows the test case, so students know what they are working toward; after every attempt, the test cases are juxtaposed with their own code’s output, giving students context and a view of where their code might have failed the tests.
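The juxtaposition of expected and actual output described above can be approximated with a standard line diff. A minimal sketch follows; the `feedback` helper is invented for illustration and is not AutoGradr’s actual code.

```python
import difflib

def feedback(expected: str, actual: str) -> str:
    """Return a unified diff of expected vs actual console output,
    so a student can see exactly which line diverged."""
    diff = difflib.unified_diff(
        expected.splitlines(), actual.splitlines(),
        fromfile="expected output", tofile="your output",
        lineterm="",
    )
    return "\n".join(diff)

# Lines prefixed with "-" were expected; "+" is what the code printed.
print(feedback("Hello\nWorld\n", "Hello\nWrold\n"))
```

A side-by-side rendering of the same diff gives the second view of results mentioned above; both views are generated from the same expected/actual pair.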
A quick thought on design systems
While most of the arguments for design systems are made for larger organizations managing at scale, a design system is even more important for a one-person design team and a two-person company. I would be remiss if I did not emphasize the importance of having a design system and its role in ensuring quick development cycles. Everything from rudimentary atoms like colors, text styles, and buttons, to larger components such as the stepper, the tree-structure cards for the overview, and content blocks, helped us move faster than we did when building and testing previous versions. A small snapshot of the design system is shown on the right.