A Concept Mapping Assessment of Climate Change Concepts

Dave Dempsey
San Francisco State University
After a brief tutorial on hierarchical concept maps, we ask students in a course on planetary climate change to construct a hierarchical concept map about climate, prompted by several leading questions about climate, climate science, and climate change around which we (roughly) organize the course. We conduct this exercise at the beginning and end of the semester, score the concept maps (two scorers with results reconciled and averaged), and evaluate changes in the scores statistically.

What learning is this evaluation activity designed to assess?

We organize our course around five broad questions about climate, climate science, and climate change. By the end of the course we want students to demonstrate significant increases in:

(1) their knowledge and understanding of the key concepts that underlie answers to these questions;
(2) their ability to organize and structure their knowledge coherently; and
(3) their ability to make meaningful and significant interconnections among concepts.

We think that student-drawn concept maps can reveal useful information about these types of learning, if we pose the map construction task appropriately and if we design a suitable scoring system and apply it accurately and reliably. We administer the assessment both at the beginning and the end of the semester, score the concept maps, and evaluate the differences in scores statistically.
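To make the pre/post comparison concrete, here is a minimal sketch of how two scorers' results might be averaged and the pre-to-post gains tested with a paired t statistic. All scores and function names below are hypothetical illustrations, not our actual data or scoring procedure.

```python
from statistics import mean, stdev
from math import sqrt

def reconcile(scorer_a, scorer_b):
    """Average the two scorers' (reconciled) scores for each student."""
    return [(a + b) / 2 for a, b in zip(scorer_a, scorer_b)]

def paired_t(pre, post):
    """Mean post-minus-pre gain and the paired t statistic."""
    diffs = [q - p for p, q in zip(pre, post)]
    d_bar = mean(diffs)
    se = stdev(diffs) / sqrt(len(diffs))       # standard error of the mean difference
    return d_bar, d_bar / se

# Hypothetical scores for six students (not real data).
pre  = reconcile([12, 15, 9, 14, 11, 10], [14, 13, 11, 14, 9, 12])
post = reconcile([22, 25, 18, 27, 20, 19], [24, 23, 20, 25, 22, 21])

gain, t = paired_t(pre, post)
print(f"mean gain = {gain:.1f}, t = {t:.2f}")
```

The paired design matters here: each student serves as his or her own control, so the test is on the distribution of individual gains rather than on the two group means separately.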

(To increase our confidence that we can attribute significant changes in concept map scores to our course, we administered the same assessment to students in several other geosciences courses that address the topic of climate, at least marginally, and statistically compared the changes in scores from those courses with the changes in scores from our course.)
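The between-course comparison amounts to a two-sample test on score gains. As a sketch (with made-up gain values, not our data), Welch's t statistic compares mean gains in the climate course against a comparison course without assuming equal variances:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(gains_a, gains_b):
    """Welch's t statistic comparing mean score gains between two groups."""
    n1, n2 = len(gains_a), len(gains_b)
    v1, v2 = variance(gains_a), variance(gains_b)   # sample variances
    return (mean(gains_a) - mean(gains_b)) / sqrt(v1 / n1 + v2 / n2)

# Hypothetical post-minus-pre gains for two courses (not real data).
climate_course = [10, 10, 9, 12, 11, 9]
other_course   = [3, 5, 2, 4, 6, 3]

t = welch_t(climate_course, other_course)
print(f"Welch t = {t:.2f}")
```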

What is the nature of the teaching/learning situation for which your evaluation has been designed?

"Planetary Climate Change" is an upper-division course offered by the Department of Geosciences. It was funded in 2000 by a NASA-NOVA grant and was designed to address the National Research Council's National Science Education Standards for science teaching, science as inquiry, earth and space sciences, and several others. It was also designed to address California state subject-matter standards in geology, meteorology, and oceanography (plus some astronomy) for future high school science teachers, integrating these geoscience disciplines as closely as possible. The course prerequisite is 12 semester units of physical science. The required text is "The Earth System" by Kump, Kasting, and Crane.

Many of the students are biology, chemistry, or physics majors (many of them post-baccalaureate) preparing for careers in high school science teaching. Some are already in a fifth-year teaching credential program (as required in California). Other students include meteorology, geology, and oceanography undergraduate majors (the course is an elective for those majors), as well as some non-science majors with sufficient science backgrounds, including Environmental Studies majors and students from other majors who are preparing for careers in broadcast meteorology.

The course meets six hours per week in two three-hour blocks. It employs lecture only as needed (and usually for limited periods). The dominant in-class pedagogical activities consist of (1) computer-based, inquiry-based exercises, many of which involve display, characterization, and analysis of geophysical datasets (accessible via the Web or as part of WorldWatcher software) or numerical model construction (using STELLA), followed by instructor-led, whole-class discussion and summary; and (2) student-led discussions of articles from the climate science literature (especially Scientific American). We employ small-group exercises occasionally, including at least three small-group concept mapping exercises.

What advice would you give others using this evaluation?

Concept maps (and their relatives: mind maps and other nonhierarchical graphical knowledge-organizing methods) have been investigated actively as an instructional strategy since at least the early 1970s and, more recently, as an assessment tool. The literature makes clear that the following should be taken into account when using them to assess student learning:

(1) Hierarchical concept maps are a very flexible tool, and mapping exercises can be designed with widely varying degrees of constraint, ranging from "construct a map about Topic A" to "fill in the blanks in this pre-constructed map using this specified list of concepts and/or connecting phrases". Of course, you'll need to define the task in a way consistent with the student learning that you want to assess. Moreover, some implementations are far easier to score than others, so your choice of concept mapping task has practical implications.

(2) Choose the form of your concept map exercise, which can range from pencil and fixed paper (or chalk/pen and chalk/white board), to movable post-it notes on posterboard, to computer software (possibly even with automatic scoring). Each of these forms has its own advantages and disadvantages, both pedagogically and practically.

(3) Design a scoring system that reflects what you want to assess, and design a practical rubric for it that produces reliable results among different (trained) evaluators. The scoring system also has practical implications, since not all scoring systems are equally easy to implement.

(4) Constructing a good concept map requires not only good knowledge of the subject matter but also some skill in the medium. Since few students are likely to have already mastered concept mapping conventions and techniques, you'll probably need to spend class time training them to make concept maps. In the process they can learn a great deal about the subject matter, so this isn't necessarily time wasted (though concept maps do frustrate some students, particularly rote learners, according to the literature). Students generally seem to get the knack of it reasonably quickly (one source says 90 minutes), though they will need occasional repeated practice. Concept mapping seems to work particularly well as a small-group collaborative exercise. Given that some skill is required, concept mapping makes the most sense as an assessment tool if it is also used as an instructional tool. (On the flip side, some other assessment tools, such as essays, speeches, posters, and the like, arguably require more performance skill, and hence suitable training, than concept mapping does.)
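One practical check on point (3), reliability among trained evaluators, is to correlate two scorers' ratings of the same set of maps. A minimal sketch using hypothetical rubric scores (not our data):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two scorers' ratings of the same maps."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical total rubric scores from two trained scorers for seven maps.
scorer_a = [18, 22, 15, 30, 25, 12, 27]
scorer_b = [20, 21, 14, 32, 24, 13, 29]

r = pearson_r(scorer_a, scorer_b)
print(f"inter-rater r = {r:.2f}")
```

A correlation is only one simple index of agreement; it rewards consistent rank ordering but ignores systematic offsets between scorers, which is one reason we also reconcile discrepant scores before averaging.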

The literature that I perused recently (yesterday, in fact!) reveals considerable attention being paid to items (1) and (3) above, suggesting that these are the most problematic aspects of concept mapping. People are still working out how and when concept maps can best be employed.

I provide below links to the concept mapping assessment exercise that we used in our course and a summary of our results. The results were encouraging, but I would not hesitate to revise the exercise in light of some of the more recent research in the literature.

Are there particular things about this evaluation that you would like to discuss with the workshop participants? Particular aspects on which you would like feedback?

(1) How might the concept mapping task be revised to provide a better assessment of student understanding of key concepts underlying climate, climate science, and climate change; their organization and structure; and the connections among them?

(2) How might the scoring system be modified for the same purpose?

Evaluation Materials

Additional Instructions and Rubrics