This material was originally developed as part of the Carleton College Teaching Activity Collection
through its collaboration with the SERC Pedagogic Service.

Assessing the Senior Thesis at Carleton College: Strategies and Concerns

Cameron Davidson, Department of Geology, Carleton College


Part of the graduation requirements at Carleton College is the completion of a "Comprehensive Exercise," fondly referred to as "Comps" by the Carleton community. In true Darwinian fashion, the expectations and nature of Comps have evolved in the semi-isolated local environments of the various academic departments found on campus. In the Geology department, a typical Comps looks a lot like what most would call a senior thesis: original research, including the collection and analysis of data, culminating in a substantial piece of scientific writing and a formal public presentation in front of faculty and peers. Because Comps is required of all students, and because we (the faculty) firmly believe that the geology major should be a liberal arts degree, we also encourage and embrace alternative models for what a geology Comps can look like. Examples from past projects include the development of teaching modules used in K-12 education, the design and construction of interpretive signs for local, state, and national parks, and the production of an audio documentary on water issues in Minnesota. However, most of our students tend to stick with the more traditional research-based Comps project.

When I came to Carleton in 2002, the Comps process began in the junior year, when geology majors met with the faculty as a group to discuss potential research options for the upcoming summer, such as REUs. Then, during the senior year, we would formally meet three times as a group to 1) have students give the "elevator talk" version of their project, or tell us what they planned to do for Comps; 2) discuss the do's and don'ts of writing science; and 3) discuss the do's and don'ts of giving a science presentation. This worked reasonably well, and mainly consisted of the faculty sharing their wisdom and responding to questions. However, over the past few years our process has evolved and become more structured, thanks in part to student feedback on the Comps process. In addition to the meetings described above, all our seniors enroll in a Senior Seminar course taught in the fall. This course meets once a week, and the students use the time to share ideas, write abstracts, make figures, and discuss anything and everything about Comps. An unintended consequence of this course was the start of a new tradition in which the seniors meet at a local restaurant for an early morning breakfast before class. The other major change is a formal speaker series during winter term, in which we invite graduate students from nearby universities to give talks on Friday afternoons. This allows our students to see cutting-edge research presentations and gives us (students and faculty) a chance to discuss what makes an effective presentation. Comps papers are due at the end of winter term (mid-March), and students present their work in a GSA-style presentation early in the spring term. We assign two readers to each paper, with a primary reader taking the lead in giving feedback during the writing process. The amount of feedback depends entirely on the size of the class and the motivation of the student and their first reader.

So that is our Comps process in a nutshell. The question we as a department continue to struggle with is how Comps should be evaluated. We all think we know good thinking and writing when we see it, but we have no formal mechanism for evaluating Comps other than meeting as a group to decide which Comps fail (rare), pass (most common), or pass with distinction (less common). We all give feedback to the students, and the students can act on that feedback and "fix" various problems with their paper before submitting the final version for binding and storage in the library. This works reasonably well, and perhaps we don't need to change anything. However, some of us have discussed designing a rubric for the Comps paper as a way to organize, and perhaps quantify, what we consider to be the most important parts of an excellent paper. At first this seemed like a good idea because it might help us see how our curriculum aligns with our expectations for Comps and perhaps give clear guidance to our students. It is the latter of these perceived benefits that gave us pause: how do you construct a non-prescriptive rubric? The concern operates on two levels. First, we do not want to send the message that one type of Comps is better than another (e.g., research-based vs. audio documentary). Second, we do not want students to use the rubric as a checklist, thus putting undue constraints on their thinking, or worse, to assume that proper use of the rubric is the true path to distinction. The latter might work in some cases, but I can imagine instances where faculty and student interpretations of how well a Comps fulfills the rubric might not align. Therefore, one of my goals for the workshop is to work through some of these issues and learn about potential solutions that we might be able to adapt for Carleton.