Building the Climate Change Collection

Creating premier digital collections for specific scientific topics is a challenge. The issue matters because teachers increasingly supplement traditional classroom materials such as textbooks with information, lesson plans, and activities from the World Wide Web. Unlike textbooks and other edited materials, resources found on the web may not go through rigorous editing procedures. Moreover, the sheer number of available sites makes it time-consuming and often frustrating for teachers to find scientifically accurate materials that are usable in a classroom.

The Climate Change Collection pilot project, funded through NSF grant EAR-0435645, brought together three middle school teachers, a climatologist, and a cognitive scientist to create a tool for evaluating and selecting digital resources. The team members then applied this tool (called a rubric and nicknamed the "scorecard" by the review team) to select materials. The process was facilitated by Mark McCaffrey and evaluated by Tim Weston. Following is an overview of our experience.

The model used for selecting materials was one of several possible alternatives. It was characterized by direct involvement of community members and scientists, as opposed to a model that relies on full-time professionals. The team first reviewed an existing rubric used by the DLESE Community Review System (CRS), but found that evaluating resources with this tool required a time commitment incommensurate with the goals of the project. A simplified rubric allowed reviewers to cover more material and resulted in a collection of high-quality, usable resources.

Issues raised by this selection model include the transferability of the rubric to other reviewers and the role of differential expertise in the review process. Simplifying the rubric involves a trade-off. An easier-to-use rubric allows community members with limited time to review a greater number of resources. However, simplified rubrics provide only limited information about the exact basis of quality judgments to others wishing to participate in the review process independently. The basis for judging what is, and is not, a good resource, while shared among the team members through discussions during meetings, may not be evident to those outside the team.

Differential expertise and experience were also an issue because of the importance of ensuring that resources are both scientifically accurate and usable in the classroom. While many resources met both of these important criteria, the climate scientist on the project performed double duty, reviewing each resource for accuracy. One recommendation would be to separate this function from the wider review process and then allow teachers to review resources that pass initial vetting for scientific accuracy.

Steps in Developing the Climate Change Collection
1) Recruit the interdisciplinary review team
Climate research scientist, three 6-8 grade science teachers, one learning researcher, one evaluator
2) Review prior collection efforts, including scope statements and rubrics
Examine the DLESE Community Review System (CRS) and the Digital Water Education Library (DWEL). Also examine the "SmartSearch" tools from DLESE, the Strand Maps of the AAAS Benchmarks, and the SERC Content Management System.
3) Develop scope statement and rubric (scorecard); test and fine-tune the scorecard
Ease of use by the review team was crucial to streamlining the review process. Reviewers preferred a word-processing document template over an online tool.
4) Develop a framework of key concepts relating to climate change and variability
Include key misconceptions, the natural greenhouse effect, the carbon cycle, and the societal impacts of climate change.
5) Identify and review high-quality resources inside and outside of DLESE relating to climate change
Completed scorecards and potential sites to review were listed on an online SWIKI used for intra-team communication. Inclusion in the collection required two or more favorable reviews and at least one review by the climate scientist (see the sketch following this list). Resources not already in DLESE were added to the catalog.
6) Review the completed reviews with the team and agree on what to include in the collection
7) Compile reviews into summaries and link them to the Climate Change Collection homepage (in progress)
8) Publicize and market the Climate Change Collection
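
To make the inclusion rule from step 5 concrete, here is a minimal sketch in Python of how the acceptance decision could be expressed. The field names and role labels (reviewer_role, "climate_scientist") are hypothetical illustrations rather than the actual scorecard format, and the rule is read literally: two or more favorable reviews plus at least one review by the climate scientist.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Review:
        reviewer_role: str   # hypothetical label, e.g. "teacher" or "climate_scientist"
        favorable: bool      # whether the completed scorecard recommends inclusion

    def accept_for_collection(reviews: List[Review]) -> bool:
        """Step-5 rule: at least two favorable reviews, and at least one
        review (favorable or not) contributed by the climate scientist."""
        favorable = [r for r in reviews if r.favorable]
        reviewed_by_scientist = any(r.reviewer_role == "climate_scientist" for r in reviews)
        return len(favorable) >= 2 and reviewed_by_scientist

    # Example: a favorable teacher review plus a favorable scientist review passes.
    candidate = [Review("teacher", True), Review("climate_scientist", True)]
    print(accept_for_collection(candidate))  # True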

Key Lessons Learned

1) Fully analytic rubrics (e.g. the CRS rubric) may be burdensome for reviewers to use because they are time-consuming and difficult to fill out. Rubrics even more holistic than the CCC scorecard could be used, but the result may be a less transferable and reliable assessment system.

2) Bringing teachers and scientists together to search and rate resources is a good model because they are a captive (and compensated) audience. Instead of monthly meetings, a "boot camp" model (holding the team in a room for a week) would probably work best, since monthly meetings created difficulties with continuity and follow-through.

3) Scientific accuracy is the most important criterion for judging resources. In the future, at least two scientists should be involved in the review teams. The standard for this criterion is higher than for the others; if it is not met, the resource should not be in DLESE, and if such a resource is already in DLESE, it should be deaccessioned and/or the creator of the resource should be contacted.

4) The searching and rating functions should be separated. Having the reviewers search for resources inside and outside of DLESE proved to be a major time sink. Having a separate individual or team focus on finding relevant resources would allow reviewers to concentrate on the particular resources they are assigned. This saves time and also encourages wider domain representation and a higher standard for acceptance.

5) High standards should be set from the outset. Resources that do not meet the high standards (especially scientific accuracy) should be methodically noted and tracked.
