EvaluateUR: Evaluating Undergraduate Research

Scaling up a Novel Approach to Evaluating Undergraduate Research

This project will test, refine, and disseminate an evidence-based model for improving student outcomes by incorporating evaluation directly into undergraduate research experiences. The model has been employed successfully for eight years at SUNY-Buffalo State, and additional testing and refinement are underway to ensure applicability across diverse institutional settings. The project will create and provide the resources needed to support successful implementation at any undergraduate institution. Through collaborations with the Council on Undergraduate Research (CUR) and the Science Education Resource Center (SERC) at Carleton College, the project intends to foster national communities of STEM scholars who are trained and motivated to sustain and expand this model.

Participating in the Pilot »

The Evaluation Model

A guiding principle of this evaluation model is to obtain reliable, independent assessments of program impact without creating a measurement burden, while giving participating students and their mentors information that can help them gain new insights into student academic strengths and weaknesses. The model centers on repeated conversations between the student and mentor, in which both complete identical assessment surveys addressing each component of 11 outcome categories (34 components in all). The outcome categories are shown in Table 1, below. The surveys are completed before the student research begins, in the middle of the research, and at the end of the research experience. This gives mentors multiple opportunities to review and assess student work and gives students time to reflect on their strengths and weaknesses, so that the evaluation process yields essential information that both students and faculty use to advance learning objectives. Each survey item is scored on a five-point scale indicating whether a student always, usually, often, seldom, or never displays the desired outcome for that component. Faculty mentors assess students on each component, and students assess their own progress using the identical instrument.
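As a concrete illustration of the paired scoring scheme, the sketch below models one student-mentor pair's responses to a single survey and flags the components where their ratings diverge most. It is a minimal, hypothetical example: the category and component names, and the mapping of the five-point scale to the integers 1-5, are assumptions for illustration, not part of the EvaluateUR instrument itself.

```python
# Minimal sketch of the paired scoring scheme (hypothetical names and data).
# The five-point scale (never .. always) is mapped to integers 1..5 here;
# EvaluateUR itself may encode responses differently.
SCALE = {"never": 1, "seldom": 2, "often": 3, "usually": 4, "always": 5}

# Each survey maps a (category, component) key to one scale word.
student = {
    ("Communication", "explains ideas clearly"): "usually",
    ("Autonomy", "works independently"): "often",
}
mentor = {
    ("Communication", "explains ideas clearly"): "often",
    ("Autonomy", "works independently"): "seldom",
}

def discrepancies(student_survey, mentor_survey):
    """Return components ordered by how far apart the two ratings are.

    Large gaps are natural starting points for the post-survey
    student-mentor conversation.
    """
    gaps = []
    for key in student_survey:
        gap = SCALE[student_survey[key]] - SCALE[mentor_survey[key]]
        gaps.append((abs(gap), key, student_survey[key], mentor_survey[key]))
    return sorted(gaps, reverse=True)

for gap, (category, component), s, m in discrepancies(student, mentor):
    print(f"{category} / {component}: student={s}, mentor={m} (gap {gap})")
```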

Following each of the three assessments, the mentor and student meet to discuss how each scored the survey and to explore the reasons for any differences in their assessments. The student-mentor pair can also add outcomes and outcome components of their own, giving each pair the flexibility to assess discipline-specific outcomes or any other aspect of the research experience they want to examine. A summer research coordinator conducts an orientation session for students and mentors to explain the evaluation goals and methods. A web-based administration page shows the status of each student-mentor pair, helping the administrator track each pair and ensure that surveys and reports are completed in the proper sequence and at the correct time in the research program. The administrator releases forms only when the pair is ready to complete them, and automated reminders prompt the pair to complete each form and to meet to discuss how each scored the survey items. Because the design approach transcends specific STEM disciplines, impact results can be aggregated across a range of undergraduate research experiences. The model's built-in evaluation feature also provides summary data on student learning that can inform resource allocation decisions.
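The form-release logic described above can be pictured as a simple ordered workflow. The sketch below shows one way an administrator dashboard might enforce the pre/mid/post sequence and surface pairs that are candidates for a reminder. All names here are invented; this is an illustration of the idea, not the actual EvaluateUR site code.

```python
# Illustrative sketch of the administrator's sequencing logic (invented
# names; not the actual EvaluateUR implementation). Each student-mentor
# pair moves through the three assessments in a fixed order, and a form
# is released only when the administrator judges the pair ready for it.
from dataclasses import dataclass, field

STAGES = ["pre", "mid", "post"]

@dataclass
class Pair:
    student: str
    mentor: str
    completed: list = field(default_factory=list)  # stages finished so far
    released: str | None = None  # stage currently open, if any

    def release_next(self):
        """Open the next survey in the pre/mid/post sequence."""
        if self.released is not None:
            raise RuntimeError("previous survey still open")
        if len(self.completed) == len(STAGES):
            raise RuntimeError("all surveys already completed")
        self.released = STAGES[len(self.completed)]
        return self.released

    def complete(self):
        """Record that both surveys and the discussion are done."""
        self.completed.append(self.released)
        self.released = None

def needs_reminder(pairs):
    # Pairs with an open survey are candidates for an automated reminder.
    return [p for p in pairs if p.released is not None]

pair = Pair("A. Student", "B. Mentor")
pair.release_next()            # "pre" survey opens for the pair
print(needs_reminder([pair]))  # pair is listed until the stage is completed
pair.complete()                # pre-survey scoring and discussion finished
```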

Table 1: Outcome Categories

  • Communication
  • Creativity
  • Autonomy
  • Ability to deal with obstacles
  • Practice and process of inquiry
  • Nature of disciplinary knowledge
  • Critical thinking and problem solving
  • Understanding ethical conduct
  • Intellectual development
  • Culture of scholarship
  • Content knowledge skills/methodology
To learn more about the development of the model, additional details of model implementation, and evaluation results at Buffalo State, refer to Singer and Weiler (2009) and Singer and Zimmerman (2012).

Project Timeline

Scaling up the evaluation model will involve several phases between 2017 and 2019.

Phase I: Limited to 3-5 institutions with summer research programs structured like the program at Buffalo State. This phase will allow us to further validate the Buffalo State UR model by showing that it can be successfully implemented on other campuses with similar summer research programs.

Phase II: A group of 10-12 institutions (including those involved in Phase I) with programs that are similar to Buffalo State's program, but with some differences that will allow us to explore how best to meet the needs of a more diverse group of institutions.

Phase III: To be determined based on formative assessment of Phase II.

We are in the process of migrating web support for the evaluation from the Buffalo State server to a new host site at SERC. At the same time, we are adding new features and developing resources to guide evaluation implementation on campuses participating in our pilot efforts in 2017 and 2018. This change will open the web site to many more campuses. The site will provide web resources to orient new users, options to add campus-specific questions to the evaluation, and easy generation of reports with a small set of statistical measures.


This project is supported by the National Science Foundation's WIDER program under Grant Number 1347681. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
