Geoscience Education Research
Monday 11:30am-1:30pm UMC Aspen Rooms
Anne Gold, University of Colorado at Boulder
Dave Mogk, Montana State University-Bozeman
Katherine Ryker, University of South Carolina-Columbia
Karen Kortz, Community College of Rhode Island
Related Monday Oral Session with discussion @ 4:00pm
Related Tuesday Oral Session with discussion @ 4:00pm
A Question of Numeracy: Is Self-Assessed Competency Registered on Knowledge Surveys Meaningful?
Edward Nuhfer, University of Wyoming
Karl Wirth, Macalester College
Christopher Cogan, Ventura College
Steven Fleisher, California State University-Channel Islands
Eric Gaze, Bowdoin College
Geoscientists often use knowledge surveys to collect self-assessed competency data about learning and learning gains. If people believe that they can do something, how well can they actually do it? At first glance, quantifying the accuracy of a person's self-assessment of competency appears simple: compare direct measures of confidence taken by one instrument, such as a knowledge survey, with direct measures of competence taken by another instrument, usually a test. In accurate self-assessments, the scores on both measures would be about equal, and departures from this equality would register as measures of either over-confidence or under-confidence. However, deducing self-assessment accuracy is not that simple. Both instruments used to obtain paired measures must have sufficient reliability to permit good comparisons, and both must measure the same learning construct. Competence and confidence have no established units, so the default measures are scores reported in percentages. These constitute arrays bounded by 0% and 100%, a fact that introduces complications: the sorting of data needed to report results in aggregate imparts bias, and the probability of overestimating or underestimating is not uniform across all participants. To address this, we employed reliable, tightly aligned instruments to measure the self-assessed competency (a knowledge survey of the Science Literacy Concept Inventory) and actual competency (the Science Literacy Concept Inventory itself) of 1154 participants in understanding the nature of science. We used random number simulations to discover how mathematical artifacts can be (and have been in published literature) mistaken for human measures of self-assessed competencies. Innumeracy leads to misinterpretations so severe as to contradict what the data actually reveal. In our study, knowledge survey self-assessments of competence proved strongly related to actual performance.
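The artifact described above can be illustrated with a short simulation (a minimal sketch, not the study's actual code; all numbers are invented): even when paired "confidence" and "competence" scores are pure random noise bounded by 0% and 100%, sorting participants by competence manufactures apparent over- and under-confidence.

```python
import random

random.seed(42)

# Simulate 1000 participants whose "confidence" and "competence" scores
# are pure noise bounded by 0% and 100% (no real self-assessment skill).
n = 1000
confidence = [random.uniform(0, 100) for _ in range(n)]
competence = [random.uniform(0, 100) for _ in range(n)]

# Sort participants into quartiles by competence, then report the mean
# self-assessment error (confidence minus competence) within each quartile.
pairs = sorted(zip(competence, confidence))
quartiles = [pairs[i * n // 4:(i + 1) * n // 4] for i in range(4)]
for q, group in enumerate(quartiles, start=1):
    mean_error = sum(conf - comp for comp, conf in group) / len(group)
    print(f"Quartile {q}: mean (confidence - competence) = {mean_error:+.1f}")

# Even with random data, the lowest quartile appears "overconfident" and
# the highest "underconfident": a mathematical artifact of sorting bounded data.
```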
Understanding Graph Reading and Comprehension through Eye-Tracking: Evidence for the Expert/Novice Dichotomy
James Lindgren, Macalester College
Karl Wirth, Macalester College
Graphical information is used in many aspects of our lives, including vocation, media, civic processes, scientific inquiry, and education, so graph comprehension is an essential skill for an informed citizenry. However, relatively little is understood about how individuals perform graph-reading tasks or how these skills develop over time. Furthermore, many different forms of graphical information are used in earth science courses (e.g., "upside down" binary plots with a depth variable increasing in a downward direction; log-scales; normalized trace element diagrams; ternary plots) and can present significant thresholds for student learning. Here, we describe the results from an ongoing two-year collaborative study on the skills and challenges behind graph reading and scientific literacy. Our data provide interesting insights into the differences between and within expert and novice populations that we hope will eventually illuminate new ways of improving students' graph comprehension skills. Within our expert and novice pools (distinguished by level of education), measures of the accuracy of graph interpretation show a clear dichotomy between the two groups. Experts (faculty and staff) are more accurate in interpreting graphical media. In comparison, novices (undergraduate students), regardless of their level of degree completion, exhibit significantly different approaches (based on eye-fixation dwell times, fixation order, interest-area regressions, interest-area eye dwell times) to graph reading. Interestingly, most study participants exhibited similar eye-track metrics while examining a graph after being prompted to find specific information. However, novices and experts show very different eye-track behaviors when they are asked to examine a graph without a specific prompt; the expert behavior remains largely the same as under the prompted conditions, but the novice behavior does not.
Analyses of "think-alouds" during the eye-track experiments suggest that experts, with their more developed metacognitive skills, more commonly engage in self-questioning, narrative construction, monitoring, and self-assessment while examining graphs.
Factors that contribute to and inhibit successful transfer for two-year college students: Implications for strengthening the geoscience student transfer pathway
Ben Wolfe, University of Kansas Main Campus
Kaatje van der Hoeven Kraft, Whatcom Community College
Carolyn Wilson, American Geosciences Institute
Two-year colleges (2YCs) play an important role in postsecondary education in the U.S. and are essential to education in science, technology, engineering, and mathematics (STEM) fields. Research indicates that a number of factors create potential obstacles to successful student completion, both at the 2YC and at the receiving institution. Open access policies and low tuition at 2YCs bring a rich diversity of students in race, cultural identity, socioeconomic status, and age. Many of these students may be new to the academic process, balancing work and family, and academically underprepared for college, requiring developmental coursework in reading, writing, and mathematics. After overcoming these challenges and transferring successfully, students continue to face obstacles due to the cultural shift between institutions, where community support may be lacking. A recent survey of graduating students in geoscience programs administered by the American Geosciences Institute (AGI) provided insight into students who attended a 2YC for at least one semester during their postsecondary education. Commonly self-identified factors in successful transfer and completion of a geoscience degree included personal motivation and clear articulation of 2YC coursework. Geoscience 2YC students also identified obstacles, such as personal issues (e.g., time, family, and money), academic challenges (e.g., coursework), and institutional barriers (e.g., course articulation and degree requirements), that affected their time to degree completion. While some of these challenges are a reality for all students, some of these issues may be more acute for students transferring from a 2YC. Recent trends indicate that the geoscience transfer student pipeline may be strengthening, as the percentage of geoscience graduates who report attending a 2YC for at least a semester is increasing (at least 25% started at a 2YC; AGI, 2014).
Working to improve the effectiveness of the 2YC transfer pathway can have numerous returns for broadening participation and building the geoscience workforce.
Essential terms for introductory students: Do textbooks provide the answer?
Karen Kortz, Community College of Rhode Island
Jessica Smay, San Jose City College
Amber R. Caulkins, University of Rhode Island
Geologic terms create a common language for communicating geoscience concepts, but introductory students can learn only a limited number of these terms. To determine whether there is consistency in which terms are emphasized in textbooks, we analyzed the glossaries of ten full-version introductory geology textbooks. We identified a large number of terms (2,776 total unique terms, with an average of 678 terms per textbook) but found very little overlap between textbooks (44 terms, or 1.6% of the total unique terms, appeared in all ten glossaries). This minimal overlap suggests that the question of which terms are essential for introductory students is not usefully answered by textbooks. We argue that the glossary terms shared widely across textbooks are not necessarily the most essential terms for introductory students to learn, and that some terms appearing in only a few glossaries may nevertheless be important for the field. Examples of the glossary terms appearing in all textbooks include abrasion, barrier island, epicenter, igneous rock, joint, mantle, plate tectonics, and volcano. On the other hand, examples of terms that are in three or fewer glossaries include bedrock, climate, global warming, landfill, oil, outcrop, plate, and system. We will present results of our study, including lists of terms common to all and most textbooks, and will encourage interaction with attendees to discuss which terms should be emphasized in an introductory course.
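The overlap analysis described above reduces to set union and intersection; a minimal sketch with invented mini-glossaries standing in for the ten textbook glossaries:

```python
# Toy glossaries standing in for the ten textbook glossaries in the study
# (terms here are illustrative, not the actual data set).
glossaries = [
    {"epicenter", "mantle", "volcano", "joint", "bedrock"},
    {"epicenter", "mantle", "volcano", "abrasion", "outcrop"},
    {"epicenter", "mantle", "volcano", "plate tectonics", "climate"},
]

# Total unique terms across all glossaries (union).
all_terms = set().union(*glossaries)

# Terms shared by every glossary (intersection).
shared = set.intersection(*glossaries)

print(f"{len(all_terms)} unique terms; "
      f"{len(shared)} ({100 * len(shared) / len(all_terms):.1f}%) in all glossaries")
```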
How Faculty can Affect Student Texting, Distraction, Grades, and Attitudes
Douglas Duncan, University of Colorado at Boulder
Angela Hoekstra, University of Colorado at Boulder
Bethany Wilcox, University of Colorado at Boulder
There is considerable pressure on faculty members to use technology in teaching. Students also bring technology into class in the form of laptop computers, smart phones, and iPads. Does this technology increase or decrease learning? We report two years of data studying 14 different classes with a total of approximately 1200 students. We find that, on average, approximately 70% of students use their own digital devices during class and 30% do not. The grades earned by the former group average nearly half a grade point lower than those of the non-use group. Faculty policies are found to dramatically influence student behavior. We will report extensive student interview data showing that students expect faculty members to set technology policies, and summarizing student attitudes about technology use.
Assessment of learning outcomes in introductory geoscience classes at University of California, Irvine
Julie Ferguson, University of California-Irvine
Justin Shaffer, University of California-Irvine
All college students are required to take science courses, and many geoscience departments offer non-majors courses that fulfill these undergraduate science general education requirements. These courses represent an important opportunity to increase the level of understanding of geoscience and to teach scientific literacy skills in the broader undergraduate community. To determine whether students were acquiring geoscience content knowledge and/or developing scientific literacy skills, pre- and post-tests were administered to students enrolled in four large (~400 students) introductory geoscience classes at the University of California, Irvine. The pre-test data provided a baseline for each class on the students' incoming scientific literacy and geoscience knowledge, and matched pre-/post-tests were used to calculate learning gains over the 10-week-long courses. This information was then used in combination with data on class performance, gender, year of study, major, ethnicity, and incoming SAT scores to identify groups of students who showed the greatest learning gains. Initial analyses suggest that non-major students enter undergraduate geoscience courses with several misconceptions about the climate system, plate tectonics, and ocean circulation. The long-term goal of this study is to create modules specifically designed to address weaknesses in scientific literacy skills and to identify geoscience misconceptions that are not being addressed adequately in these classes.
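The abstract does not state which gain formula was used; assuming the widely used normalized (Hake-style) gain, g = (post - pre) / (100 - pre), the matched pre-/post-test calculation can be sketched as follows (scores are hypothetical):

```python
def normalized_gain(pre: float, post: float) -> float:
    """Normalized learning gain g = (post - pre) / (100 - pre),
    i.e., the fraction of possible improvement actually achieved.
    Scores are percentages; a perfect pre-test leaves no room to gain."""
    if pre >= 100:
        return 0.0
    return (post - pre) / (100 - pre)

# Matched pre/post percent scores for a handful of hypothetical students.
matched = [(40, 70), (55, 82), (30, 45), (80, 90)]
gains = [normalized_gain(pre, post) for pre, post in matched]
class_gain = sum(gains) / len(gains)
print(f"Mean normalized gain: {class_gain:.2f}")
```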
Research on Learning during Field Trips in the Earth Sciences
Daniel Vice, Pennsylvania State University-Main Campus
Field trips have been used as an educational tool in the teaching of Earth Science since the Civil War (Camp, 2006). The consensus of research is that field trips can be effective both at teaching comprehension of subject matter and at giving students a more positive attitude toward science. However, it is difficult to measure the learning that occurs because all of the research to date has flaws in design or application. These flaws occur because the available instruments do not deal well with emotions and perceptions.
Rock, Paper, Hammer: Where do thoughts and actions count in making a geologic map?
Caitlin Callahan, Grand Valley State University
Heather Petcovic, Western Michigan University
Kathleen M Baker, Western Michigan University
In this study, we integrate data capturing the physical actions, spoken thoughts, and navigation paths of geologists as each makes a geologic map of a field site in the Tobacco Root Mountains, Montana, USA. Four geologists wore a head-mounted video camera with an attached microphone to record their visible actions and their spoken thoughts, creating "video logs" while in the field. They also wore GPS units to record their position throughout the day. The GPS data and video logs are time-stamped, enabling the data sets to be synchronized using a series of Matlab programs. The research question we address is: How and where do actions and expressed thoughts differ between novice and expert geologists when solving a geologic mapping problem? The video portion of each log was coded for instances of the visible actions of collecting data (e.g. hammering rock, measuring strike and dip, testing with HCl, and inspecting rock) and recording data (e.g. writing in a field notebook or on a map). The verbal portion was coded for instances of synthesis when participants expressed large-scale thinking or interpretations of the geologic structures that they were mapping. From these analyses, we find that all geologists engage in all four types of data collection, although there are differences between individuals and between levels of expertise. The novices had notably more instances of collecting and recording data than did the experts. For the experts, instances of synthesis occur at nearly every site of collecting and recording data. For the novices, synthesis occurs only sporadically and at relatively fewer collecting and recording sites. An implication of these findings is that novices could become more expert-like if they make a greater effort to link their observations and data with their ideas about the large-scale geologic setting of the field area.
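Synchronizing the time-stamped data sets (done with Matlab programs in the study) can be sketched in Python by pairing each coded video event with the nearest GPS fix; all timestamps, coordinates, and codes below are hypothetical:

```python
from bisect import bisect_left

# Hypothetical time-stamped records (seconds since start of the field day).
# GPS fixes: (time, (easting, northing)); video codes: (time, coded action).
gps_fixes = [(0, (450100, 5050000)), (60, (450130, 5050020)),
             (120, (450180, 5050015)), (180, (450210, 5050060))]
video_codes = [(15, "hammering rock"), (70, "measuring strike and dip"),
               (125, "writing in notebook"), (178, "synthesis")]

def nearest_fix(t, fixes):
    """Return the GPS fix whose timestamp is closest to t."""
    times = [ft for ft, _ in fixes]
    i = bisect_left(times, t)
    candidates = fixes[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs(f[0] - t))

# Attach a location to each coded action from the video log.
located = [(code, nearest_fix(t, gps_fixes)[1]) for t, code in video_codes]
for code, pos in located:
    print(f"{code!r} at {pos}")
```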
Assessing validity of the Geoscience Literacy Exam (GLE)
Emily Geraghty Ward, University of Colorado at Boulder
David Steer, University of Akron Main Campus
Ellen Iverson, Carleton College
Cathy Manduca, Carleton College
The Geoscience Literacy Exam (GLE) consists of 52 multiple choice and 30 essay questions that align with the content presented in four literacy documents: Earth, Ocean, Atmosphere and Climate. For each literacy principle, the GLE has a set of tiered questions that target increasing cognitive levels. Level 1 multiple choice questions are considered within the "remembering and understanding" level, while Level 2 multiple choice questions are multi-select and within the "applying and analyzing" cognitive level. The third level consists of essay questions that focus on analysis. Item analysis of student responses (n=1275+) and qualitative feedback from expert review (n=12) were used to determine the validity of the items composing the GLE. Revisions incorporated best practices for the design of multiple choice items and altered test language and format based on the data collected. Revised questions were grouped according to the content of the literacy documents and the cross-cutting themes (i.e., "Bigger Ideas") developed by Duggan-Haas and Clark (2009) to suit the interdisciplinary nature of the curricula designed through the NSF-funded InTeGrate Project. The next phase of the assessment design will continue to focus on item validity as well as on the reliability of the revised multiple choice questions as the InTeGrate curricula and GLE are tested more widely among institutions across the nation. The presentation will provide an overview of the question design, item analysis, and further information about the research project. Members of the assessment team also include Stuart Birnbaum, Leilani Arthurs, Aida Awad, Barbara Bekken, Susan Buhr, Josh Caulkins, Kristin O'Connell, Megan Plenge, Mary Savina, and Karen Viskupic.
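Classical item analysis of the kind described (item difficulty and upper-lower discrimination) can be sketched as follows; the response matrix is invented, not GLE data:

```python
# Classical item analysis on a toy response matrix: rows are students,
# columns are multiple-choice items (1 = correct, 0 = incorrect).
# Values are invented; the actual GLE data set has n = 1275+.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]

n_students = len(responses)
n_items = len(responses[0])
totals = [sum(row) for row in responses]

# Item difficulty: proportion of students answering the item correctly.
difficulty = [sum(row[i] for row in responses) / n_students
              for i in range(n_items)]

# Discrimination (upper-lower method): difference in item difficulty
# between the top and bottom halves of students ranked by total score.
ranked = [row for _, row in sorted(zip(totals, responses), reverse=True)]
half = n_students // 2
upper, lower = ranked[:half], ranked[-half:]
discrimination = [
    sum(r[i] for r in upper) / half - sum(r[i] for r in lower) / half
    for i in range(n_items)
]

for i in range(n_items):
    print(f"Item {i + 1}: difficulty={difficulty[i]:.2f}, "
          f"discrimination={discrimination[i]:+.2f}")
```

Items with near-zero or negative discrimination are the usual candidates for revision, since high scorers do no better on them than low scorers.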
Exploring the relationship between peer review of scientific writing and student self-assessment skills
Gabrielle Katz, Appalachian State University
Stacey D. Smith, University of Colorado at Boulder
We implemented peer review of writing in a junior-level biology class, Plants and Society, in the Department of Ecology and Evolutionary Biology at the University of Colorado Boulder in spring 2015. The 30-person class was divided into permanent groups of three students each for conducting peer reviews. Individuals with different initial writing skills (based on an in-class assessment) were grouped together. Students conducted peer reviews for three writing assignments (a press release, a pro/con essay, and a persuasive essay) during the semester. For each peer review session, students brought two copies of their draft papers to their small groups. The peer review session consisted of three steps: students provided written comments on ("marked up") the two other group members' papers, answered questions about each paper on a feedback worksheet tailored to each assignment, and discussed each paper as a small group. Students were guided to provide feedback as readers, not to grade or edit the papers. After each peer review session, students had one week to turn in a final revised version of the assignment. All drafts, marked-up papers, feedback worksheets, and final papers were uploaded digitally. There are many benefits of participating in peer review in undergraduate science classes. Reviewers practice thinking critically about the work of their peers and communicating feedback in a productive form. Recipients of reviews obtain information about how their work is perceived by their peers, and have the opportunity to revise their work in light of the feedback received. It has also been suggested that engaging in peer review may benefit student learning by enhancing student self-assessment abilities. We explored this question by asking students how engaging in the peer review process affected their writing. We will present mid-semester and end-of-semester survey data to address whether peer review impacted self-assessment.
Calculating Slope Failure: Math Anxiety and Geoscience Undergraduates
Rachel Headley, University of Wisconsin-Parkside
Math anxiety, commonly referred to as math phobia, is a phenomenon in which people experience moderate to extreme fear associated with anticipating or performing mathematical tasks, in some cases tied to actual pain and anxiety. An absence of math phobia and high general self-efficacy beliefs have been tied to success in STEM fields. In particular, geoscience courses involve building and using a diverse skill set, including chemistry, physics, and math. Specific quantitative skills include calculating seismic velocities in a geophysics course, balancing chemical equations in a geochemistry course, or quantifying the slope of a river in a geomorphology course. Despite these necessary quantitative skills, and regardless of actual mathematical background, math phobia is commonly expressed by many geoscience students. Supported by the University of Wisconsin System Teaching Fellow program, this project investigates whether students who hold poor attitudes about their math skills and preparation tend to perform differently than students with positive self-efficacy beliefs in a select number of geoscience courses ranging from introductory to advanced. Attitude will be assessed according to existing standards, such as the MARS (Mathematics Anxiety Rating Scale) test, and through qualitative interviews given at multiple times throughout different courses. The aptitude analysis will draw on students' self-reports of previous math classes, short assessments based on concept inventories of mathematical skills, student self-assessments of progress in the course, overall course grades, and selected discussions with the professors. The outcome of this study will determine whether interventions dealing with math attitude, and possibly math aptitude, should become part of some or most geoscience courses.
Teaching and learning evolutionary trees: how cognitive science predicts "better" and "worse" diagram styles
Andrea Bair, Delta College
A cladogram is a branching diagram commonly used to depict hypotheses of evolutionary relationships, and is a central representation of evolution in the fossil record. Biological education research has identified expert-like skills and student learning difficulties associated with interpreting and constructing cladograms. Cognitive science informs diagram design that reduces or avoids misconceptions, and many common misconceptions and difficulties learning "tree-thinking" skills relate to the design of evolutionary tree diagrams. However, recent research has not been incorporated into teaching and communicating evolutionary trees in geology contexts. The purposes of this study are to: summarize recent research applicable to teaching and learning evolutionary tree diagrams in geology contexts, test whether modifying a "tree-thinking" assessment explicitly to avoid cognitive interference improves scores, and test whether research-based teaching materials and strategies promote improvement in tree-thinking skills and avoidance of misconceptions. Materials and strategies for teaching evolution and "tree-thinking" in undergraduate geology courses were designed to avoid cognitive interference through diagram design, emphasize the hierarchical structure of cladograms, and reduce common misconceptions about evolution and evolutionary trees. Specific recommendations from recent research are: 1) use "tree-style" cladograms to support understanding of evolutionary concepts; 2) initial work should not use real organisms in trees, to avoid cognitive interference; 3) explicitly teach multiple representations and diagram reading strategies to promote hierarchical thinking; and 4) teach using shared, derived characters (synapomorphies) to support understanding of cladistic analysis and evolutionary relationships.
Modifying the assessment to reduce cognitive interference produced comparable results between post-test scores of students in an upper level majors' paleontology course using partially reformed instruction and pre-test scores of students in introductory courses. A pilot study of reformed teaching materials and strategies shows improved tree-thinking skills and reduction in misconceptions in both an upper level majors' paleontology course and introductory courses.