The Use of Metacognitive Prompts in a Sampling-Distribution Exercise

Joy Jordan, Lawrence University

Project Summary

In introductory statistics, the sampling distribution is an essential building block, yet a conceptually difficult idea for students. It's a slippery concept that students blithely think they understand, yet struggle to explain. Bingo! This seemed a prime spot in the class to add metacognitive prompts. Specifically, I created a group-work activity with problems centered on the sampling distribution of the mean; the problems incrementally asked for more metacognition from the students. I also created a short assessment to give both before and after the activity.
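
For readers less familiar with the concept, the short sketch below is a minimal simulation of the idea the activity targets; it is not part of the exercise itself, and the population shape, sample size, and number of simulated samples are arbitrary illustrative choices. It shows that sample means vary far less than individual values, clustering around the population mean with a spread of roughly the population standard deviation divided by the square root of the sample size.

    # Minimal simulation sketch of the sampling distribution of a sample mean.
    # Not taken from the classroom activity; the population shape, sample size,
    # and number of simulated samples are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(1)
    mu, n, num_samples = 30, 25, 10_000   # population mean, sample size, number of samples

    # Hypothetical right-skewed population: exponential with mean 30 (its SD is also 30)
    samples = rng.exponential(scale=mu, size=(num_samples, n))
    sample_means = samples.mean(axis=1)   # one mean per simulated sample

    print(f"Population mean: {mu}, population SD: {mu}")
    print(f"Average of the sample means: {sample_means.mean():.2f}")
    print(f"SD of the sample means: {sample_means.std():.2f}  "
          f"(theory: sigma/sqrt(n) = {mu / np.sqrt(n):.2f})")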

Knowing the difficult nature of in-class research, I still formed a challenging research question: Can a group-work activity that engages students' metacognition improve students' understanding (based on test performance) about the sampling distribution of a sample average?

Project Context

I conducted my research all three terms of the 2009-10 academic year. The first term was a pilot study in a 200-level statistics course of 26 students. Based on their feedback and performance, I modified the activity and used that modified activity (and pre- and post-assessments) during the next two terms, both of which were 100-level introductory statistics courses (with 31 and 16 students, respectively). All of these courses had an additional computer lab meeting each week. I used one of those lab sessions for the students to complete the exercise in groups.

Of special note is my winter-term statistics class. This course is filled entirely with psychology majors, who need it as a prerequisite for research methods. In preparation for my project, I collected information from my winter-term class in 2009 (e.g., common mistakes on exams). Then I collected the same information from my winter-term class in 2010—this is the group that received the "treatment" of the sampling-distribution exercise. Because the winter-term class, regardless of year, is filled with similar students—psychology majors—I found it my closest link to a controlled-experiment environment. That is, making comparisons between winter-term classes is reasonable (albeit not comparisons that can indicate causation).

Effect on Teaching Practice

In the classroom, my work with metacognition focused solely on the sampling-distribution exercise. Since this was my only intervention, and because I wanted to compare to the previous year, I specifically didn't mention metacognition in other parts of the class. (I thought this the cleanest design.) That said, I regularly spoke with students—in class or in office hours—about methods to gauge their understanding and about "reasonable checks" that can be applied to many problems. This has always been a part of how I teach. But I didn't name it (to the students) as "metacognition." [After completion of my project, I made big changes in my classroom during the 2010-11 academic year; these changes were clearly grounded in the metacognition literature and my discussions with Collegium members. See the Postscript for more explanation.]

Evidence and Conclusions

Student Feedback

I received helpful student feedback during the fall-term pilot. Specifically, my original activity was too long, and the students didn't reach the last problems—the ones that were most metacognitively challenging. Furthermore, the pre- and post-assessment results made clear that one of my assessment questions was confusingly worded. The changes I then made (shortening the activity and re-wording the assessment) were essential to the smooth use of the intervention during winter term.

During winter and spring term, the sampling-distribution exercise got positive reviews from the students. In terms of helpfulness towards understanding, 30% of students said it was very helpful, 64% said it was somewhat helpful, and only 6% said it was not helpful. In terms of providing new problem-solving tools, 20% of students said they learned multiple new strategies, 66% said they learned one new strategy, and 14% said they learned no new strategies.

The written comments from students were generally positive, yet focused on the helpfulness of practice problems and working in groups (not on the metacognitive process). A few of the comments were particularly interesting; they corroborated the commonly found issue that students with weak understanding (as measured by class performance) often have the weakest metacognitive skills—yet this is the group that would benefit most from metacognition. For example, an A-student wrote, "I think writing an explanation in words was helpful to ensure that I actually knew what I was doing and not just plugging in numbers." Yet a C-student wrote that the least helpful part of the exercise was "the explanations of the processes behind it. It's very easy to stop thinking and just act on auto pilot, getting the right answer in the right way, but not thinking about it." (He thought the auto-pilot route was a good thing.)

Summary of Data Analysis

At the conclusion of my project there was much information to process and data to analyze. On the assessments, I not only posed content questions to students, but also asked for confidence judgments. That is, for every question, each student provided a confidence percentage in her answer. I also had performance data on both my winter-term 2009 students (control group) and my winter-term 2010 students (treatment group). Lastly, I returned to my winter-term students midway through the very next term—while they were in research methods class—and they again completed the assessment (content questions and confidence judgments).

After sifting through the copious amounts of data, these are the most interesting findings:

  • Considering all 43 students (winter and spring terms together), there was no significant increase—between post-activity and pre-activity—in percentage correct on the assessment. The assessment consisted of four multiple-choice questions I created, each directly related to the sampling distribution of an average. The assessment was not validated in any way; this unvalidated instrument limited my results, though it sped up the work of actually bringing the exercise to the classroom.


  • Interestingly, there was a significant increase (between post-activity and pre-activity) in average confidence in answers on the assessment. Note this is students' average confidence over the four questions (not confidence on individual questions).


  • I investigated this increase in confidence further and found it most prevalent (significantly so) in the bottom-performing half of the class, as measured by course grade. As already mentioned, there wasn't a significant increase in scores on the assessment; this held when the students were split in half by course grade. So the bottom-performing half of the class was, on average, significantly more confident after the activity, yet they did not perform significantly better on the post-activity assessment.


  • Comparison of 2010's winter term to 2009's winter term (my "like" groups of students, of whom only the 2010 students completed the activity) showed a decrease—from 34% to 17%—in the percentage of students who missed a sampling-distribution T/F exam question common to both classes. This decrease was not statistically significant, yet still seems noteworthy (a small illustrative calculation appears after this list); as a teacher, from a practical standpoint, I'm glad for this fairly dramatic drop. Furthermore, anecdotally, the written answers on a different sampling-distribution question seemed better in 2010.


  • For the 2010 winter-term students, I was able to give the same assessment midway through the spring term. The follow-up results—comparing data from the winter-term post-assessment to the spring-term assessment—show a (non-significant) decrease in percentage correct on the assessment and a (significant) decrease in average confidence, neither of which is surprising.


  • I looked more deeply at the average confidence in spring term. Even though there was a significant drop in confidence from winter term to spring term, some overconfidence still appeared in the spring-term follow-up. Considering only the spring-term follow-up results, I separated the students who scored over 50% on the assessment from those who scored 50% or less. (Again, I fully acknowledge the limitations of the assessment instrument.) There was no significant difference in average confidence between these two groups; in fact, their average confidence levels were surprisingly similar (perhaps another indication of the lack of metacognition among lower-performing students).
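
As an aside on the winter-term exam-question comparison mentioned above, the sketch below illustrates how a drop that large can still fail to reach statistical significance with classes of this size. It is not the analysis behind the report; the counts are assumptions (the 2010 winter class had 31 students, the 2009 class size is not stated, and both are approximated here as 30 students with counts matching the reported 34% and 17%).

    # Illustrative check (not the report's actual analysis) of the winter-term
    # comparison. Assumed counts: roughly 34% of ~30 students missing the T/F
    # question in 2009, and roughly 17% of ~30 students missing it in 2010.
    from scipy.stats import fisher_exact

    missed_2009, n_2009 = 10, 30   # assumed: about 34% missed
    missed_2010, n_2010 = 5, 30    # assumed: about 17% missed

    table = [[missed_2009, n_2009 - missed_2009],
             [missed_2010, n_2010 - missed_2010]]

    odds_ratio, p_value = fisher_exact(table)   # two-sided by default
    print(f"Fisher's exact test p-value: {p_value:.2f}")
    # With classes this small, even a drop from about 1 in 3 to 1 in 6 students
    # missing the question does not reach statistical significance.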


Future Implications

Overall, the sampling-distribution exercise was useful for the class. The students had interesting, deep discussions, and—for at least a brief period—seemed to have better conceptual knowledge of sampling distributions. That is, regardless of the statistical significance (or insignificance) of the results, as a teacher I found the exercise worthwhile.

What most surprised me was the repeated overconfidence in the lower-performing half of the class. These are only initial results, but they seem meaningful. For example, when lower-performing students work in groups, do they perceive an increase in confidence even though, objectively, their understanding hasn't changed? This question—far afield from my initial research question—now has my interest piqued.

Postscript

During the 2010-11 academic year, I continued to work with my sampling-distribution exercise. During fall term, I noticed an additional student misconception about sampling distributions (confusion of when to work with an individual value and when to use a sample average). For this reason, I added another question to my group-work activity. The question seemed to—based on the quality of the student discussions—directly address this misconception, and it's now a permanent part of the exercise.

Furthermore, during my winter-term class of psychology students, I included the same true-false test question about sampling distributions that I used the previous two years. Recall in 2009 (pre-intervention), 34% of students missed this question, and in 2010 (post-intervention) only 17% missed this question. Interestingly, in 2011 (with the intervention of the sampling-distribution exercise), 33% of students missed this same question—back to the pre-intervention level. One possible explanation is the question appeared on the final exam in 2009 and 2011, but on the second exam in 2010 (the week after the intervention). Perhaps this exercise helps students better understand sampling distributions, but the effect is short-lived. This is an issue faced consistently by educators: How do we teach for long-term understanding? When I entered this project, I purposely chose a limited scope. Yet if I want stronger, long-term student understanding of sampling distributions, it appears a single exercise (not surprisingly) does not accomplish this goal.

Lastly, and perhaps most importantly, my work with the Collegium led to big changes in my classroom during winter term 2011. Based on the student-learning research, I had many interesting ideas swirling in my head, yet wasn't sure how to implement them. Then it hit me: put away the lecture. Instead, expect students to engage with the reading, ask questions, struggle with concepts and applications, and, in the end, own the material for themselves. I'm a guide—not a talking textbook—providing alternative or extended explanations, creating intriguing problems, and offering context (among other things). Leaving behind the lecture-format class was both freeing and terrifying, but ultimately something in which I deeply believe—both for student learning and for my own authentic happiness as a teacher. This aha! moment might not have happened without the support, guidance, and commitment of the Collegium, for which I am very grateful.

Bibliography

(For my specific Collegium project)

Dunlosky, J. and Metcalfe, J. (2008), Metacognition, Los Angeles: Sage Publications.

This book was integral to my general understanding of metacognition (e.g., what are the different metacognitive judgments?). Furthermore, the many examples of actual research studies gave me ideas about my own project. In fact, the idea for the confidence judgments I included with my assessment (which became the most interesting piece of the results) came from this text.

Schoenfeld, A.H. (1992), "Learning to Think Mathematically: Problem-Solving, Metacognition, and Sense-Making in Mathematics," in D. Grouws (Ed.), Handbook of Research on Mathematics Teaching and Learning (pp. 334-370), New York: Macmillan.

This article strongly guided my actual activity—that is, the questions I asked, the expectations I had. My exercise builds metacognitively from problem to problem, and for my last problem I used Schoenfeld's model of questioning: 1) What exactly are you doing? (Can you describe it precisely?), 2) Why are you doing it? (How does it fit into the solution?), and 3) How does it help you? (What will you do with the outcome when you obtain it?)

(For my growth as a teacher)

Artino, A. R. (2005), Review of the Motivated Strategies for Learning Questionnaire. (ERIC Document Reproduction Service No. ED499083)

The Motivated Strategies for Learning Questionnaire (MSLQ) consists of 81 self-report items divided into two main categories: motivation and learning strategies (each of which has sub-categories). This questionnaire was published in 1991 by Pintrich, Smith, Garcia, and McKeachie. (Their 76-page manual for the use of the MSLQ is available on ERIC.) Since that time, the MSLQ has mostly been used as a research assessment tool (e.g., did a certain intervention have a significant impact on student motivation?). More recently, though, teachers and college learning centers have used the MSLQ to make students more aware of their own learning. Artino provides an interesting and brief summary of the MSLQ (and the actual questionnaire is included in his paper).

Bransford, J.D., Brown, A.L., and Cocking, R.R. (eds.) (1999), How People Learn: Brain, Mind, Experience, and School, Washington, DC: National Academy Press.

Extensive overview of the research on learning. Main findings: 1) students enter the classroom with their own preconceptions—preconceptions that, if not actively engaged, might impede learning; 2) competence in a discipline comes from factual knowledge, factual and conceptual understanding in context, and the ability to retrieve and apply knowledge; and 3) metacognitive teaching approaches allow students to set learning goals and practice self-regulation.

Dunlosky, J. and Metcalfe, J. (2009), Metacognition, Los Angeles: Sage Publications.

An accessible textbook on metacognition. Incorporates research studies (29 pages of references) within its discussion of metacognitive judgments, applications, and life-span development. Excellent introduction to metacognition, if you have time and interest.

Gourgey, A.F. (1999), "Teaching Reading from a Metacognitive Perspective," Journal of College Reading and Learning, 30(1), 85-93.

Brief article that describes the characteristics of expert versus novice readers. Provides two classroom exercises designed for metacognitive growth (re: reading). Although the examples are from a "remedial" first-year college class, they can be generalized (I think) to any college classroom.

Halpern, D.F. and Hakel, M.D. (2003), "Teaching for Long-Term Retention and Transfer," Change, July/August, 37-41.

Directly addresses the fallacy that anyone with a PhD can teach effectively. More importantly, provides ten basic (evidence-based) principles on long-term retention and transfer. Excellent big-picture resource for educators.

King, P.M. and Baxter Magolda, M.B. (1996), "A Developmental Perspective on Learning," Journal of College Student Development, 37(2), 163-173.

Thought-provoking article on both cognitive and personal development—and their interconnectedness. Provides suggestions to educators about an integrated view of student learning, including constructed knowledge, cognitive and personal learning, and the gradual development of this learning.

Kruger, J. and Dunning, D. (1999), "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," Journal of Personality and Social Psychology, 77(6), 1121-1134.

Describes three studies done on Cornell University students. These studies revealed (among other things) that "participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability." These striking results are supplemented with an interesting discussion.

Svinicki, M.D. (2010), "Student Learning: From Teacher-Directed to Self-Regulation," New Directions for Teaching and Learning, 2010(123), 73-83.

Self-regulation (e.g., goal setting, behavior control, autonomy) is a key component of metacognition. Svinicki nicely summarizes the recent research on self-regulation. This summary includes a few answers and, perhaps more interestingly, many thought-provoking open questions.

Taylor, S. (1999), "Better Learning Through Better Thinking: Developing Students' Metacognitive Abilities," Journal of College Reading and Learning, 30(1), 34-45.

Makes the case for teaching students how to learn. Specifically addresses self-appraisal and self-management (two components of metacognition). Suggests a question-based teaching and learning approach.