A Question of Numeracy: Is Self-Assessed Competency Registered on Knowledge Surveys Meaningful?
Monday
11:30am-1:30pm
UMC Aspen Rooms
Poster Presentation, part of Geoscience Education Research
Authors
Edward Nuhfer, University of Wyoming
Karl Wirth, Macalester College
Christopher Cogan, Ventura College
Steven Fleisher, California State University-Channel Islands
Eric Gaze, Bowdoin College
Geoscientists often use knowledge surveys to collect self-assessed competency data about learning and learning gains. If people believe that they can do something, how well can they actually do it? At first glance, quantifying the accuracy of a person's self-assessment of competency appears simple: compare direct measures of confidence taken by one instrument, such as a knowledge survey, with direct measures of competence taken by another, usually a test. In an accurate self-assessment, the two scores would be about equal, and departures from that equality would register as over-confidence or under-confidence. However, deducing self-assessment accuracy is not so simple. Both instruments used to obtain the paired measures must be sufficiently reliable to permit sound comparisons, and both must measure the same learning construct. Because competence and confidence have no established units, the default measures are scores reported as percentages. These form arrays bounded by 0% and 100%, a fact that introduces complications: sorting the data in order to report results in aggregate imparts bias, and the probability of overestimating or underestimating is not uniform across participants. To investigate these issues, we employed reliable, tightly aligned instruments to measure the self-assessed competency (a knowledge survey of the Science Literacy Concept Inventory) and the actual competency (the Science Literacy Concept Inventory itself) of 1,154 participants in understanding the nature of science. We used random number simulations to discover how mathematical artifacts can be, and in published literature have been, mistaken for human measures of self-assessed competency. Such innumeracy leads to misinterpretations severe enough to contradict what the data actually show. In our study, knowledge survey self-assessments of competence proved strongly related to actual performance.
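The kind of random-number simulation the abstract describes can be sketched in a few lines of Python. The sketch below is an illustration under stated assumptions, not the authors' actual analysis: it assumes paired "confidence" and "competence" scores drawn independently and uniformly on the bounded 0-100% scale, and a sort into quartiles by measured competence, a procedure common in published self-assessment studies. All variable names are hypothetical.

```python
# Minimal sketch (not the study's code): how sorting purely random,
# bounded percentage scores can mimic "over-/under-confidence".
import random
import statistics

random.seed(42)
N = 1154  # matched to the study's participant count for scale only

# Independent random percentages on the bounded 0-100 scale; by
# construction there is NO real self-assessment signal here.
confidence = [random.uniform(0, 100) for _ in range(N)]
competence = [random.uniform(0, 100) for _ in range(N)]

# Sort participants into quartiles by measured competence, as many
# published self-assessment studies do before aggregating.
paired = sorted(zip(competence, confidence))
quartile_size = N // 4  # the last N % 4 participants are dropped

for q in range(4):
    block = paired[q * quartile_size:(q + 1) * quartile_size]
    mean_comp = statistics.mean(c for c, _ in block)
    mean_conf = statistics.mean(s for _, s in block)
    print(f"Quartile {q + 1}: competence {mean_comp:5.1f}%, "
          f"confidence {mean_conf:5.1f}%, "
          f"apparent miscalibration {mean_conf - mean_comp:+6.1f}%")

# The bottom quartile appears to "overestimate" and the top quartile to
# "underestimate" even though the scores are pure noise: an artifact of
# sorting bounded random data, not a measure of any human trait.
```

Because each quartile's mean confidence stays near 50% while its mean competence ranges from roughly 12% to 88%, the apparent miscalibration pattern emerges from the bounded scale and the sorting step alone, which is the class of mathematical artifact the study warns can be mistaken for a human measure.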