Collaborative Research Design

Workshop participants developed a collaborative research project consisting of two tiers of assessment.

Tier 1: assessment specific to the goals of the course, program, and institution
Tier 2: global assessment of first-year students at each institution

The second tier provides a sizable control group (indeed, several different control groups, within and among our institutions) and thus a context for interpreting first-tier findings; the first tier lends depth and specificity to second-tier data.

Tier 1:

  • Develop and share specific assignments within and across institutions
  • Participation driven by and assessed in light of assignment, course, and program objectives

Tier 2:

  • A simple value-added assessment instrument administered to all or many incoming first-year students during the first and last weeks of class
  • During the first week, all (or many) students complete brief written responses to the following two questions:
  1. Identify one learning experience from high school that was especially meaningful and memorable—something that you think will have lasting impact on you. Explain why it had such an effect on you, and how you think it will affect your life in the future.
  2. Identify two learning goals you have for your four years at Beloit College, and explain how you may be able to achieve them and what will be required of yourself and others in order to do so.
  • During the final week, all (or many) students complete brief written responses to the following two questions:
  1. Identify one learning experience from your FYI seminar that was especially meaningful and memorable—something that you think will have lasting impact on you. Explain why it had such an effect on you, and how you think it will affect your life in the future.
  2. Identify two learning goals you have for your four years at Beloit College, and explain how you may be able to achieve them and what will be required of yourself and others in order to do so.
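
As a purely illustrative aid, the sketch below shows one way the pre- and post-test responses described above could be recorded so that each student's first-week and final-week rubric scores can be paired for the value-added comparison. The field names, the 1-4 scale, and the pairing function are assumptions for the sake of the example, not part of the actual instrument.

    # Hypothetical record layout for the pre/post ("value-added") design.
    # Field names and the 1-4 rubric scale are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ScoredResponse:
        student_id: str          # anonymized identifier used to pair pre and post
        administration: str      # "pre" (first week) or "post" (final week)
        question: int            # 1 = meaningful experience, 2 = learning goals
        grader_id: str           # allows inter- and intra-rater comparisons
        rubric_score: Optional[int] = None   # e.g., 1-4 on the shared rubric

    def paired_scores(records, question, grader_id):
        """Return matched pre/post scores for one question and one grader."""
        pre = {r.student_id: r.rubric_score for r in records
               if r.administration == "pre" and r.question == question
               and r.grader_id == grader_id}
        post = {r.student_id: r.rubric_score for r in records
                if r.administration == "post" and r.question == question
                and r.grader_id == grader_id}
        shared = sorted(set(pre) & set(post))
        return [pre[s] for s in shared], [post[s] for s in shared]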

Interim Results

The initial data analysis showed low levels of inter-rater reliability at Beloit and Coe (Cohen's κ = .12 and .36, respectively) and a moderate level at Lawrence (κ = .42). At Beloit, this discrepancy has been attributed in part to the decentralized nature of the grading process and to the length of time between the norming session and the submission of grades by most graders (the norming session occurred before winter break, while most grading took place afterward). To account for this discrepancy, Beloit's paired t-test compared only writing samples graded in both the pre- and post-test by the same graders, while Lawrence's paired t-tests used scores averaged across multiple graders. At neither school did we observe changes significant at the p = .05 level; at Beloit, changes were similar for the "test" group (community-focused courses) and the "control" group. The Beloit result may be partly explained by the differing environments and time constraints of the two administrations (the post-test was administered alongside course evaluations, in a more time-constrained setting and with the faculty member absent) and by possibly weak intra-rater reliability, for reasons similar to those behind the low inter-rater reliability.
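
For readers less familiar with these statistics, the sketch below shows how the two measures reported above, Cohen's kappa for inter-rater agreement and a paired t-test on matched pre/post scores, might be computed. The scores listed are invented placeholders, not study data; each institution's analysis was run by its own project team and institutional research office.

    # Minimal sketch of the reliability and pre/post analysis described above.
    # All scores below are invented placeholders, not the actual study data.
    from scipy import stats
    from sklearn.metrics import cohen_kappa_score

    # Rubric scores (e.g., 1-4) assigned by two graders to the same essays
    grader_a = [2, 3, 3, 1, 4, 2, 3, 2]
    grader_b = [2, 2, 3, 2, 4, 3, 3, 1]

    # Inter-rater reliability: Cohen's kappa (chance-corrected agreement)
    kappa = cohen_kappa_score(grader_a, grader_b)
    print(f"Cohen's kappa: {kappa:.2f}")

    # Paired t-test on matched pre- and post-test scores for the same students,
    # scored by the same graders (the approach used at Beloit)
    pre_scores  = [2, 2, 3, 1, 3, 2, 2, 3]
    post_scores = [2, 3, 3, 2, 3, 2, 3, 3]
    t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # compare p against .05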

Although the pilot administration encountered methodological challenges, it has proven successful on several levels for the participants. Given both the high interest in and limited experience with rubric-based assessment of liberal arts outcomes at the three institutions, the grant has already provided a crucial opportunity for project leaders at each college to develop a replicable process of cyclical in-house assessment. At Beloit, the campus group charged with assessing student growth in "Liberal Arts in Practice" (integrative learning resulting from out-of-classroom learning experiences) as part of the National Wabash Study has already used our process and products as an informative model for collaborative rubric development for institution-level assessment. On all three campuses, the experience has helped develop a group of faculty experienced in building large-scale assessments in collaboration with their institutional research offices, and it has given students at Lawrence and Beloit the opportunity to take research skills acquired in the classroom from theory to practice.

While the rubric and assessment instrument will undergo minor improvements, all three institutions were satisfied with the rubric's general structure and language, and the prompts generally seemed to provoke the kind of reflection and self-assessment we had hoped for. Most importantly, the rubric and prompts articulate the value and outcomes we associate with student agency more directly and meaningfully, in a way that will make them easier to communicate to students, faculty, and advisors and that may challenge some of the structures and language we currently use to encourage and foster those outcomes. Finally, one of the great strengths of this research model is that it has the potential to foster in students the very qualities it is designed to measure. We see significant scope for enhancing this potential in future years.

Next Steps

As suggested above, one immediate next step will be a targeted rewording of the question prompts and rubric to improve their clarity and accuracy, a process that will take place in consultation with graders, student participants, and instructors at each campus.

Given their mutually supportive scopes and objectives, it is also likely that our work on this grant will be combined with our participation in the ACM FaCE Grant for the Assessment of First Year Programs, allowing the examinations of agency and of student writing in the first year to reinforce one another. While the individual campuses participating in this grant may continue to examine the use of community engagement in particular, the focus will shift toward identifying and improving first-year experiences of all kinds that support the development of student agency. If ACM approves, we propose merging the culminating conferences of the two FaCE grants so as to present a more extensive exploration of the assessment of first-year programming to (we hope) a larger audience of ACM institutions, tentatively in May 2012.