
Quantitative Inquiry, Reasoning, and Knowledge (QuIRK)

Nathan D. Grawe, Carleton College
Funding provided by NSF Grant Number DUE-0717604 and a grant from the W. M. Keck Foundation with prior support from the US Department of Education's Fund for the Improvement of Post-Secondary Education (grant number P116B04081).

For More Information


serc.carleton.edu/quirk

Summary

With NSF support, QuIRK is refining and adapting for dissemination a rubric for assessing quantitative reasoning (QR) in student writing. The project includes feasibility studies at four diverse partner institutions (Iowa State University, Morehouse College, Seattle Central Community College, and Wellesley College) to determine how our assessment protocol can be adapted to other contexts. To ensure a rich sample of papers for assessment, the project also includes support for revisions of courses and assignments to enhance QR instruction.

Project Goals

Audience: Teachers of undergraduate students make up QuIRK's primary audience, although our work could readily be applied to high school or graduate students.

Our conception of QR: QuIRK understands QR to be the habit of mind to consider the power and limitations of quantitative evidence in the evaluation, construction, and communication of arguments in public, professional, and personal life.

Project Design/Elements

Rubric Refinement:
In the pilot project, the reliability of rubric items was tested by a single pair of readers. These readers achieved roughly 80% agreement in a reading of around 100 papers. Following some revision, the rubric was tested by a group of about a dozen readers. The larger group came to similarly strong levels of agreement when assessing relevance and extent of QR. But evaluations of the quality of implementation, interpretation, and communication (three separate scores in that version of the rubric) were far less reliable.

To address this, we have refined our approach in three ways. First, we simplified quality assessment to a single holistic score. Second, we improved our reader training processes. Finally, we expanded and revised the rubric language describing the various levels of proficiency.

The revised rubric produced reliable measures of QR use and proficiency in a sample of student papers. Readers agreed on the relevance and extent of QR in 75.0 and 81.9 percent of cases respectively (corresponding Cohen's κ = 0.611 and 0.693). A four-category measure of quality produced slightly less agreement (66.7 percent, κ = 0.532). Collapsing the index into a 3-point scale raised inter-rater agreement to 77.8 percent (κ = 0.653).
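For readers unfamiliar with the statistic, Cohen's κ compares the observed agreement between two readers to the agreement expected by chance given each reader's score distribution. The sketch below illustrates the calculation; the scores shown are hypothetical and are not drawn from the project's data.

```python
# Illustrative sketch (hypothetical data, not QuIRK's): Cohen's kappa for two
# readers scoring the same papers on a 3-point quality scale.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    rate and p_e is the agreement expected by chance from the two readers'
    marginal score distributions."""
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores for ten papers on a 1-3 quality scale
reader_1 = [1, 2, 2, 3, 1, 3, 2, 1, 3, 2]
reader_2 = [1, 2, 3, 3, 1, 3, 2, 2, 3, 2]
print(round(cohens_kappa(reader_1, reader_2), 3))  # → 0.697
```

Here the two readers agree on 8 of 10 papers (p_o = 0.8), but because both readers use the middle scores often, chance agreement is already 0.34, so κ corrects the raw 80 percent figure downward. This is why the rubric results above report both percent agreement and κ.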

In the next step of our project, we will apply the rubric at other institutions to learn how the tool might be adapted to other institutional contexts. Feasibility studies will be completed at Wellesley College in June 2009 and at Morehouse College in December 2009. Two more studies will be done in 2010 at Seattle Central Community College and Iowa State University.

Professional Development:
Our professional development program begins with assessment. Few activities more effectively motivate faculty to change teaching patterns than the assessment of student work. What is more, by situating QR in the context of argument, we have made it relevant to a wide range of faculty, including those from traditionally non-quantitative disciplines. When these same faculty read papers from their own students and see the way in which QR was used (either effectively or ineffectively), they become aware of the many ways in which they might better achieve their course's goals by attending to QR.

At the end of the assessment session, participants discuss what they saw in the writing samples and how we might address identified concerns. These conversations inform the design of subsequent faculty development workshops. Where possible, we seek out intersections with other campus initiatives (e.g. the writing program, the ethics program, and the visuality initiative) to reach a broader audience. The goal of these equipping workshops is that each participant will leave with a draft of an assignment (or an assignment revision) that will enhance QR instruction.

Finally, we provide small summer grants supporting faculty to follow through on course revisions.

The project has been successful in engaging faculty from all four divisions of the college. In the first year of NSF funding, 67 percent of faculty in the sciences and social sciences participated in the project. Perhaps more notably, 41 percent of faculty in the arts, literature, and humanities took part. Our collection of QR writing assignments has grown to 22 activities. (These are part of an inter-institutional collection, nearly twice that size, hosted on the NNN site.)

Evaluation and Assessment Strategies

The evaluation of our approach to QR assessment will be based on the four feasibility studies.

The effectiveness of course and assignment revision will be based on data generated from our assessment that has been linked to student transcript data.

The power of our professional development workshops will be evaluated using participant surveys and focus groups.

Products, Key Findings, Publications



Related or Similar Projects

