What kinds of questions do undergraduates generate while exploring data visualizations pertaining to sea level rise and climate change?
Monday
2:45pm
Ritchie Hall: 366
Oral Session, part of Monday A: Course Development to Engage your Students
Authors
Kim Kastens, Lamont-Doherty Earth Observatory
Melissa Zrada, The College of New Jersey
Margie Turrin, Lamont-Doherty Earth Observatory
Question-asking is a necessary step toward generating hypotheses, making decisions, and solving problems, and is Practice #1 in the Next Generation Science Standards. However, research on student question-asking has been sparse. In this study, undergraduates viewed data maps pertaining to sea level and climate from the Polar Explorer iPad app (https://thepolarhub.org/project/polar-explorer) and generated questions about what they were viewing. Experimental conditions contrasted: (a) question-generating versus non-question-generating tasks; (b) use of the app versus use of a paper atlas; (c) exposure to a mapset about the causes of sea level change versus a mapset about who is vulnerable to sea level change; and (d) a prompt to generate as many questions as possible versus a prompt to write down questions the student would like to ask the scientists who collected the data.
From the questions that students generated, we developed a hierarchical taxonomy of question types and assigned a Bloom's level to each type. The "Ask a Scientist" prompt elicited more questions about how the data were collected and represented, while the "Many Questions" prompt seemed more effective in freeing students to ask for explanations of things they didn't understand. Students viewing the paper atlas asked more questions overall, and more questions about spatial patterns and trends, than those viewing the app. The "Who's Vulnerable?" mapset elicited more questions classified as "Earth: adaptation/intervention/mitigation"; but, disappointingly for EER attendees concerned about human/Earth interactions, fewer than 20% of participants generated even one question of this type. There were wide individual differences in both the quality and quantity of questions generated. Encouragingly, though, over 50% of participants generated at least one question at the highest Bloom's level, including questions that query an apparent discrepancy between the mapped data and the student's mental model, and questions that suggest a process or mechanism that may have influenced the mapped phenomenon.