In a much underappreciated paper, Aaron C. Clark & Eric N. Wiebe of North Carolina State University draw a distinction that should be front and center in the minds of every person who teaches with or learns from scientific visualizations: a distinction between what they call "concept-driven" and "data-driven" visualizations.
In creating a visualization, the initial design is typically driven by classifying graphics into two major categories. ... A concept-driven visualization is typically generated from a concept or theory and not directly tied to any empirical data. It does not mean that there isn't any data that either supports or refutes the theory, but this particular exploration does not require [data.] ... A data-driven visualization uses empirically or mathematically derived data values to formulate the visualization. In this case, a specific relationship between data values and the graphic elements is defined so that a graphic characteristic varies in some predetermined fashion. (Clark & Wiebe, 2000, p. 28.)
From the point of view of a teacher or learner, data-driven and concept-driven visualizations have different affordances and different pitfalls.
Here is an example of how this distinction plays out in geosciences. The data-driven visualization (below left) and the concept-driven visualization (below right) are from the same book, and both center on a divergent plate boundary and its surrounding seafloor.
Both show rough, irregular bathymetry, but the data-driven version shows fine-scale details of actual bathymetry, whereas the concept-driven version shows an idealized, simplified rift valley and abyssal hills.
The concept-driven visualization includes features that are important in the curriculum but cannot be directly captured by any single data set: the upwelling magma beneath the spreading center, the divergent sense of motion across the plate boundary, and the thickening of the lithosphere with distance away from the spreading center.
In the concept-driven visualization, everything shown is pertinent to the surrounding lesson on plate tectonics. The data-driven visualization, in contrast, contains some prominent features that do not fit easily into the Geology 101 view of the world, including the Azores and the large equatorial fracture zone ridges.
I don't think that it's correct to consider that one of these visualizations is "better" than the other; they work towards different learning goals. I will have more to say about both concept-driven and data-driven visualizations in my next several posts.
I think that data-driven visualizations are becoming more abundant in the visual experience of students and the general public. The current Wired magazine has a piece about the illustrations in the classic science textbooks of the 1950's and 60's by Roy A. Gallant. These classics include Exploring the Weather (1957) and Exploring Mars (1956), both illustrated by Lowell Hess, and Exploring Under the Earth (1960), illustrated by John Polgreen. All or most of the illustrations in these books were concept-driven.
Wired magazine's illustrators have updated a few of these illustrations using 21st century science and 21st century graphic tools, and explain their techniques on the magazine's website. One of the updated images appears to be purely data-driven ("Mars Mapping"), and the other two appear to be partially data-driven (the clouds and land surface of "Atmospheric Imagery" and the seismic sensor locations of "Earthquake Visualization").
In the current generation of geoscience textbooks, Reynolds et al.'s (2007, 2010) Exploring Geology is notable for its use of data-driven (as well as concept-driven) visualizations. Steve, did you consciously choose the title of your illustration-rich textbook to echo the classic works of Roy A. Gallant?
One final thought about the Clark & Wiebe (2000) article: they lay out an entire two-year-long curriculum to prepare students to make scientific visualizations. In the geosciences, we tend to assume that science majors and graduate students will simply pick up this skill along the way, without explicit instruction.
- Clark, A. C., & Wiebe, E. N. (2000). Scientific visualization for secondary and post-secondary schools. Journal of Technology Studies, 26(1).
- Sandra Swenson of Columbia Teachers College drew my attention to the Clark & Wiebe paper and the distinction between data-driven and concept-driven visualizations.
- Wired magazine article: Detorres, C. (2009, November). Intelligence, redesigned. Wired, 136-143.
- Reynolds, S., Johnson, J., Kelly, M., Morin, P., & Carter, C. (2007, 2010). Exploring Geology. McGraw-Hill Science/Engineering/Math. I saw Steve Reynolds and Paul Morin talk about how they had made the illustrations in their book, including those built from authentic geoscience data sets, at a workshop on Teaching Geosciences with Visualizations. By the way, the lead illustrator and co-author of the book was Chuck Carter, who was also the lead illustrator for the computer game Myst.
First, let me say you’ve done a nice job of taking our ideas and linking them directly to geoscience education. Certainly the geosciences have been at the forefront of thinking about how graphics can be used for teaching and learning. Both Bertin (1983) and MacEachren (1995) have been important influences on my work, as has more focused work on visualization in geoscience education (e.g., Piburn et al., 2002). More recent work on modeling in science education (e.g., Schwarz et al., 2009) has also been shaping my thinking about concept-driven visualizations.
My current work is in elementary science education, collaborating with teachers on ways they can more effectively support student-produced graphics as a vehicle for science learning. We are looking closely at the kinds of graphics being produced at each stage of classroom activities done in conjunction with popular kit-based science curricula (Wiebe et al., 2009). Concept-driven visualizations are very common during the pre-activity (lab) phase, when the science concept that is the focus of the activity is introduced. This can be a textbook image or one from another source. In most cases, as in your example, it represents a simplified or ‘idealized’ representation of a phenomenon or idea. Students then conduct an experiment or activity in which they may collect data and create a visualization. This empirically driven visualization, as part of their post-lab activities, needs to be part of a reflective process in which they link the ‘messy’ real-world results of their study back to the idealized concept-driven visualization. This can be a huge challenge for both the teacher and the student. I’m very interested in how students make these connections between concept-driven and data-driven visualizations, both of which are becoming an increasingly common part of science lab activities at all levels. Clearly, making these connections is central to inquiry learning in the geosciences.
Bertin, J. (1983). Semiology of graphics: Diagrams networks maps (W. J. Berg, Trans.). Madison, WI: University of Wisconsin Press.
MacEachren, A. M. (1995). How maps work: Representation, visualization, and design. New York: Guilford Press.
Piburn, M. D., Reynolds, S. J., Leedy, D. E., McAuliffe, C. M., Birke, J. P., & Johnson, J. K. (2002). The hidden Earth: Visualization of geologic features and their subsurface geometry. Annual Meeting of the National Association for Research in Science Teaching, New Orleans, LA.
Schwarz, C. V., Reiser, B. J., Davis, E. A., Kenyon, L., Acher, A., Fortus, D., Shwartz, Y., Hug, B., & Krajcik, J. (2009). Developing a Learning Progression for Scientific Modeling: Making Scientific Modeling Accessible and Meaningful for Learners. Journal of Research in Science Teaching, 46(6), 632-654.
Wiebe, E. N., Madden, L., Bedward, J., Minogue, J., & Carter, M. (2009). Examining Science Inquiry Practices in the Elementary Classroom through Science Notebooks. Annual Meeting of the National Association for Research in Science Teaching, Garden Grove, CA.
As Eric said, thanks for taking these ideas to the next level and into different disciplines. I have found over the years that data-driven visualization is difficult for many students to understand, but linking a conceptual model to a data-driven model is even more difficult. I have a project with collaborators, titled GRIDc, in which students work with real-time data and analyze it in visual form. One conceptually oriented activity that has come out of this research is simple: have students take two variables from the data sets, look at the charts and graphs, and make a conceptual model that expresses that data. Students find this difficult but rewarding once they see how it works in the world of scientific and technical visualization. Go to gridc.net for more information.
Dear Aaron and Eric,
Thanks so much for coming over to join our conversation in geoscience education.
I just read your 2009 NARST paper which brought me to your 2008 Spatial Intelligence paper, and I found both to be of great interest. Two questions:
* For Earth Science topics, hands-on activities tend to be built around physical models rather than the real phenomena of the Earth. This is true of the FOSS landforms kit (archive.fossweb.com/modules3-6/Landforms/index.html) mentioned in your 2008 paper, which uses a stream table. Can you get any sense from your data about how well the students are connecting what they see in the model in front of their eyes with what goes on in the big outside world?
* Although the value of having students sketch their conceptual models seems well established in your own research and that of others, I think that an obstacle to using sketches in classroom practice is that both teachers and students must believe that student sketches can be graded "fairly" (at least as fairly as a verbal short answer). Do you have any suggestions on this?
I totally agree with you about the difficulty that students have connecting data-based evidence with concept-based models.
The "Knowledge Integration" paper coming out of the Synthesis of Research on Thinking & Learning in the Geosciences project will have a section on using physical models and digital models to integrate data and concepts: comparing model behavior against empirical observations and data, then refining the model until its behavior better matches the Earth data. But I find very few examples of educational models being used in this manner. In most curriculum materials, the physical model (e.g., a stream table) or digital model (e.g., of the solar system) is used as a _substitute_ for the real world. Students do an "experiment" manipulating the model system, and no real data from the Earth are involved.