Assessing QR
Jump down to: Challenges to QR assessment | Assessment tools | Standardized multiple choice tests | National QR rubrics | Designing assessment rubrics
Introduction
Curricular reform is expensive in faculty time and energy. If course and assignment changes aren't producing better quantitative reasoning (QR) performance, then we need to do something different. But how do we know whether students are growing in their QR capability?
The very nature of QR poses challenges for traditional assessment tools, but the field has been actively addressing those challenges. This page presents several options you might consider for assessing QR at the course or individual-student level.
Challenges to QR assessment
While definitions of QR vary from author to author, most speak to four facets of the discipline:
- QR operates on a basic math skill set
- QR applies that skill set in context to problems in personal, professional, and public life
- QR includes the ability to communicate the results of that application
- QR is a habit of mind--"a predisposition to look at the world through mathematical eyes, to see the benefits (and risks) of thinking quantitatively" (Steen 2001, p. 2)
Traditional standardized tests do very well at assessing the first of these facets. But as assessment expert Grant Wiggins has noted, "standardized conditions are decontextualized by design" (2003, p. 125). Further, multiple choice items give students no opportunity to demonstrate communication. Finally, explicit tests of QR, which prompt students to show what they can do, cannot uncover a habit of mind--a disposition that reveals itself only when students reach for quantitative thinking unprompted.
Assessment tools
The response to these challenges has broadly followed two paths: the careful introduction of context into multiple choice items (while avoiding cultural biases) and the development of rubrics to assess QR in short-answer or essay contexts. These assessment tools can be applied at the institution level to measure changes in student capacity over time or at the course level to see whether students' abilities have improved. But most professors do not set QR itself as a primary course or assignment goal (and even when it is one goal, courses and assignments nearly always have other discipline-specific objectives). The third section below provides guidance on creating assignment- or course-specific assessment rubrics.
Standardized multiple choice tests
One response to the challenges above is the careful introduction of context into multiple choice items (while avoiding cultural biases). This approach has two obvious advantages: 1) scoring multiple choice tests is inexpensive, and 2) because the assessment tool is not tailored to a single institutional context, student scores can be compared with scores from other institutions.
National QR rubrics
Some facets of QR aren't visible in responses to multiple-choice prompts. This section introduces two nationally tested assessment rubrics that can be applied (with modest adaptation) to papers, short essays, and portfolios.
Designing assessment rubrics for specific assignments or courses
Most faculty members are less interested in assessing student QR capacity than in assessing assignments or course work (e.g., portfolios) that include QR. This section provides detailed guidance on creating grading rubrics in general, with specific application to QR assignments.
References
Steen, Lynn Arthur. 2001. Mathematics and Democracy: The Case for Quantitative Literacy. Washington, DC: Mathematical Association of America.
Wiggins, Grant. 2003. "'Get Real!': Assessing for Quantitative Literacy," in Quantitative Literacy: Why Numeracy Matters for Schools and Colleges, Bernard Madison and Lynn Arthur Steen, eds. Princeton, NJ: National Council on Education and the Disciplines.