Initial Publication Date: June 25, 2012
Credibility as a Challenge in the Integration of Science and Sustainability
Holly Ewing, Environmental Studies, Bates College

I teach interdisciplinary courses in the sciences (combining geology, ecology, and chemistry) within an environmental studies program, and I would describe my teaching as being primarily about interdisciplinary science and not sustainability. There is now a push on campus to connect the curriculum to sustainability initiatives, and of course everyone is looking to Environmental Studies for that leadership. And yet, I do not think of myself as teaching sustainability science, and I resist such descriptions by others—even though I know that the long-term viability of ecological, economic, cultural, and social systems may well rest in part on our ability to effectively draw on scientific understanding as we set policies and make choices. Why is it that I, as a now-tenured faculty member who no longer has to convince colleagues in traditional science departments that I really do science, still resist coupling science and sustainability? The short answer involves the same word that I might have used pre-tenure: credibility. That is, I am still negotiating how we have conversations about the practice and role of science in sustainability initiatives and how we help non-scientists understand the subjectivity and uncertainty inherent in science as part of our credibility rather than as a threat to it. Here I reflect briefly on two dimensions of credibility in science: context and uncertainty.
To help students engage in discussions of the role of science in realms that can be controversial—for example, conservation, sustainability, and policy—I present science as one of many subjective human endeavors. That is, I begin by placing science in the context of human activity—that we must make choices as scientists about what questions we ask, how we go about sampling to answer those questions, and how we analyze and interpret data. Science is fallible and is only as good as the process, and yet, when done well, it can provide an important perspective and essential understanding as we think about the choices we make in other arenas. From the scientist's perspective, the context includes the importance of doing science through established, repeatable, and, to the extent possible, "unbiased" methods to answer questions rather than simply to reinforce an opinion. Since this process is part of what I perceive to give science credibility in the public sphere, I tend to err on the side of teaching the tools, methods, and habits of mind behind the scientific study of systems, and only secondarily some of the understanding of the structure and function of ecological systems that such science has revealed.
In teaching scientific approaches to understanding systems, I have grappled with how best to deal with uncertainty—a topic difficult for students and a substantial part of what I consider to be ill-informed public dialogue about what science does or does not say about topics such as climatic change. Any practicing scientist knows that there are many kinds of uncertainty. Those stemming from measurement error and inherent variability in systems are usually the first to come to mind. But there are also those that come from the way in which we choose to measure—our choices of methods and the spatial and temporal extent of our sampling (as constrained by access, time, money, and our own conceptions of the world—part of the aforementioned subjectivity). Despite my ease in recognizing sources of uncertainty in my own and others' work and my comfort with the scientific endeavor of interpretation in the face of uncertainty, I have found uncertainty difficult for students. How is uncertainty manifest in scientific studies? How is it displayed in graphs? Where does it change an "answer" or interpretation? And where, despite uncertainty, are there still conclusions that can be defended? Discussing these things can be difficult in introductory courses, where I can assume no statistical background and where a substantial portion of the class is afraid of anything numerical. I have found that graphical representation of information—both data students generate and data from other sources (e.g., the results of different model simulations as portrayed in the IPCC report)—is the easiest avenue into the material for students who are not quantitatively inclined. The challenge, though, even with graphical approaches, is helping students come to an understanding of how and when an interpretation can be made (and what such an interpretation might be) when data are variable.