Using automated feedback in an undergraduate programming class on climate data analysis
In Climate Data Analysis (ATS 301) at Oregon State University, students develop basic Python programming skills for plotting and statistical analysis of climate data. With the transition to remote instruction in Fall 2020, the instructor and TA could no longer provide informal feedback in person. To compensate, we used the Jupyter Notebook packages nbgrader, plotchecker, and matplotcheck to set up "autograding" of notebooks. While working through an assignment, students ran scripts within their notebooks for instant feedback (for example, "Y-axis label missing" or "Incorrect number of points plotted; check the year range"). A human grader still assigned the final grade for the "autograded" questions, as well as for the short-answer questions. The intent was twofold: to give students confidence in their code when the instructor was not available, and to free instructor time for more student interaction.
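To illustrate the kind of check involved, the sketch below inspects a matplotlib plot directly and prints feedback messages like those quoted above. It is a minimal illustration only: the course scripts used plotchecker and matplotcheck rather than raw matplotlib introspection, and the function name check_plot, the expected point count, and the keyword argument here are hypothetical.

    import matplotlib.pyplot as plt

    def check_plot(ax, expected_n_points, y_label_keyword):
        """Print instant feedback on a student's plot (illustrative only)."""
        messages = []
        # Flag a missing or off-topic y-axis label.
        label = ax.get_ylabel().strip()
        if not label:
            messages.append("Y-axis label missing")
        elif y_label_keyword.lower() not in label.lower():
            messages.append(f"Y-axis label should mention '{y_label_keyword}'")
        # Compare the number of plotted points against the expected year range.
        n_points = sum(len(line.get_xdata()) for line in ax.get_lines())
        if n_points != expected_n_points:
            messages.append("Incorrect number of points plotted; check the year range")
        if messages:
            for msg in messages:
                print("FEEDBACK:", msg)
        else:
            print("All checks passed.")

    # After making a plot, a student would run, for example:
    # check_plot(plt.gca(), expected_n_points=70, y_label_keyword="temperature")

In the nbgrader workflow, checks like these can sit in ordinary notebook cells that students re-run as often as they like, while the instructor's hidden tests remain separate.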
Informal polling indicated that all students found the automated feedback at least somewhat useful. Submitted assignments showed fewer of the common plotting errors (e.g., missing legends, incorrect data plotted) seen in previous years, and grading took less time because the grader could use the autograder output to focus on flagged answers. The biggest drawback was the substantial time, and the Python proficiency, needed to write scripts specific to each assignment: instructions had to spell out plot details, variable names, and so on, and many corner cases had to be handled defensively (see the sketch below). In future years, these tests will be refined to be easier to use and to give students more feedback on common errors. Although this tool was developed partly because of the switch to remote learning necessitated by COVID-19, we will continue to use it upon returning to the classroom.
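As one example of such a corner case, an exact comparison of plotted data fails when students legitimately plot points in a different order or attach the data to a different line object, so checks had to tolerate that variation. A hedged sketch under assumed names (assert_data_plotted and its arguments are hypothetical, not the course's actual scripts):

    import numpy as np

    def assert_data_plotted(ax, expected_years, tol=1e-6):
        """Pass if any plotted line matches the expected years, in any order."""
        target = np.sort(np.asarray(expected_years, dtype=float))
        for line in ax.get_lines():
            x = np.sort(np.asarray(line.get_xdata(), dtype=float))
            if x.size == target.size and np.allclose(x, target, atol=tol):
                return  # found a matching line; the check passes silently
        raise AssertionError(
            "No plotted line matches the expected year range; "
            "check the start and end years of your subset."
        )

Sorting before comparing, and searching across all lines rather than assuming the data are in the first one, are the kinds of defensive choices that made the scripts time-consuming to write.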
Instructors are welcome to contact the author for examples of testing scripts and nbgrader configuration.