Assessment

Content on this page is derived from participant presentations, discussions, and breakout groups at the 2015, 2016, and 2017 workshops, as well as assessment resources from Pedagogy in Action.

In order to develop student competency in computation and modeling, STEM faculty need methods of assessment that accurately capture student understanding and growth (learn more about assessment from Pedagogy in Action). Assessment of student computation and modeling is a non-trivial task: the format and volume of material associated with exercises can be unwieldy to create and evaluate. From a student's perspective, assessment-related anxiety common in many STEM fields is often exacerbated by the real and perceived difficulty of computation and modeling. Overcoming these barriers and producing effective assessments is crucial to successfully teaching computation and modeling to students.

There are a variety of ways in which faculty can approach assessing student computation and modeling. Instructors can consider the types of assessment they use - summative or formative - and in which scenarios they are most useful. Faculty use a variety of strategies, such as scaffolding activities, to teach and assess computation and modeling with MATLAB while reducing anxiety and increasing student self-efficacy. In order to simplify and streamline student assessment, free tools such as MATLAB Grader and IKAATS provide environments in which faculty can create, grade, and manage assessments.



Automated Assessment Tools

MATLAB Grader

MATLAB Grader is a free online system for automatically grading MATLAB code. Instructors can create problems (functions or scripts), lessons, reference solutions, and templates for student assessment. It provides immediate feedback and automated grading of student work. Key features include:

  • A "Solution Map" for visualizing student responses and class progress.
  • Existing MATLAB-based problems that are ready for automated grading.
  • The ability to export reports and import data into Learning Management Systems.

IKAATS

The Interactive Knowledge Assessment and Teaching System (IKAATS) is a MATLAB-based system designed by workshop participant Ben Luce (Lyndon State College) for managing student learning and assessment. IKAATS allows for the streamlined creation, grading, and organization of student work. Key features include:

  • Embedded calculation boxes that allow students to write and run short MATLAB scripts. These are fully retained, leaving a record for students and instructors.
  • Automated grading of multiple choice questions.
  • Electronic grade books fully linked to assessments.


Assessment Functions

The first step in assessment is to identify measurable learning objectives and then determine which assessment tools are appropriate for your course. It may be most effective to use a summative assessment (one that describes students' level of attainment upon completion of an activity, module, or course) for the abilities that are important for final outcomes, and formative assessments (which provide diagnostic feedback to students and instructors at short-term intervals, e.g., during a class or on a weekly basis) for the steps students take to reach those final outcomes.

Formative Assessment

Formative assessment allows the instructor to assess whether students (individually and collectively) are learning and provides a scaffolded check-in as students build their knowledge and skills toward the overarching course goal.

For instance, instructors can have students explain the function of a code structure (skeleton code, planned algorithm, or completed code). Explanations can be in the form of "comments on steroids," short written responses, or class discussion. This can be done at different levels based on students' prerequisite knowledge:
  • High level: Students describe the goals of the code and its context within the field, as well as giving a detailed (but variable-depth) description of how the algorithms work.
  • Mid- or lower level: Students describe the big picture of the code in question, but may be missing key components like variables or an operation (e.g., a function call). The types and number of levels are variable and depend on the course.
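As a hypothetical illustration of the "comments on steroids" format, every line of a short script can carry an explanation that students could be asked to supply or critique (the random-walk task itself is an invented example, not from the workshops):

```matlab
% Estimate the mean squared displacement (MSD) of a 1-D random walk.
nSteps = 1000;                              % number of steps per walk
nWalks = 500;                               % number of independent walks
steps  = sign(rand(nSteps, nWalks) - 0.5);  % each step is +1 or -1 with equal probability
paths  = cumsum(steps);                     % running position after each step, one column per walk
msd    = mean(paths(end, :).^2);            % average squared final displacement across walks
fprintf('MSD after %d steps: %.1f\n', nSteps, msd);
```

Asking students to write or verify comments at this density makes their mental model of each line visible to the instructor.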

Summative Assessment

Summative assessments can be used to determine whether students have met the overarching goal of a unit or course. For instance, a summative assessment may take the form of a capstone project, paper, or presentation in which students design, run, and analyze the results of a computational model.

Workshop participants encourage using rubrics to set students' expectations and give them guidelines for success. Research literature indicates that rubrics increase the reliability of scoring and can improve learning and instruction (Arter & McTighe, 2000; Jonsson & Svingby, 2007; Wiggins, 1998). Since each course and student population is different, they recommend that instructors customize the rubric for their assignment and that they (or students, for peer review) complete the rubric rather than automating it. Rubrics can also be used to assess whether code produced correct answers and whether the student can interpret those answers in terms of the original question. This can be automated with assessment tools or a graded survey.
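A minimal sketch of what an automated correctness check can look like, in the spirit of the reference-solution comparison used by grading tools such as MATLAB Grader (the variable names, tolerance, and task are hypothetical):

```matlab
% Compare a student's numeric result against an instructor reference
% value, allowing a small tolerance for floating-point differences.
referenceAnswer = 3.1416;          % instructor's reference result
studentAnswer   = round(pi, 4);    % stand-in for the student code's output
tol = 1e-4;                        % grading tolerance
if abs(studentAnswer - referenceAnswer) <= tol
    disp('Correct: answer matches the reference within tolerance.');
else
    disp('Incorrect: answer differs from the reference.');
end
```

Checks of this kind automate the "did the code produce the right answer" row of a rubric, while interpretation of the answer is still best graded by the instructor or peers.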


Assessment Strategies

A wide range of assessment strategies are available for both formative and summative assessments. Participants from 2015, 2016, and 2017 workshops brainstormed strategies they used in classes, some of which are highlighted below:

  • Use scaffolded exercises to give students feedback early and often
    • Scale assignments in size and complexity
      • Begin with small assignments that feature basic concepts and skills
      • Follow up with more extensive assessments that require higher-level thinking and more advanced skills
    • Have students create pseudocode before writing functioning code
    • Include exercises in which students have to fix errors in existing code
    • Provide students with code templates, given as PDFs or hard copies, and have them transcribe and modify the code
    • Give students small exercises or tasks which allow for rapid or immediate feedback
    • Example activity: Conservation Equation Model by Gregory Hancock (College of William and Mary)
  • Use assignments that stress either the code's output or the code itself
  • Create reduced-stress examinations
    • Designate time at the beginning of examinations (prior to creating code) for reading and planning
    • Allow students to use prepared and commented code in examinations
    • Allow students to retake or revise exams
    • Essay: Alleviating computational anxiety of chemistry students by Kristina Closser (California State University-Fresno)
  • Use peer- and self-assessment
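
Two of the scaffolding strategies above, writing pseudocode before functioning code and keeping the plan as a record, can be combined in a single short exercise. The task below (temperature conversion) is an invented example:

```matlab
% Pseudocode plan, written first and kept as comments:
%   1. read in temperature data (degrees Celsius)
%   2. convert Celsius to Fahrenheit
%   3. report the maximum temperature
tempsC = [12.5 17.0 21.3 19.8];          % step 1: sample data in degrees C
tempsF = tempsC * 9/5 + 32;              % step 2: elementwise conversion
fprintf('Max: %.1f F\n', max(tempsF));   % step 3: report the maximum
```

Because each line of code maps to a numbered step of the plan, the instructor can give rapid feedback on the plan and the implementation separately.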

Resources

References

  • Arter, J., & McTighe, J. (2000). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Corwin Press.
  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144.
  • Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. Jossey-Bass.
