About EvaluateUR

The Evaluation Model

A guiding principle of this evaluation model was the desire to obtain reliable independent assessments of program impact without creating a measurement burden, while at the same time giving participating students and their mentors information that could help them gain new insights into student academic strengths and weaknesses. The model centers on repeated conversations between the student and mentor in which both complete identical assessment surveys covering 34 components across 11 outcome categories; the categories are shown in Table 1, below. The surveys are completed before the student research begins, in the middle of the research, and at the end of the research experience. This gives mentors multiple opportunities to review and assess student work and gives students time to reflect on their strengths and weaknesses, so that the evaluation process provides essential information both students and faculty can use to advance learning objectives. Each survey item is scored on a five-point scale indicating whether a student always, usually, often, seldom, or never displays the desired outcome for that component. Faculty mentors assess students on each component, and students assess their own progress using the identical instrument.
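
For readers who want a concrete picture, here is a minimal Python sketch of the paired-survey structure just described. The names (Score, Assessment, score_gaps) are hypothetical and not part of the EvaluateUR software; only the five-point scale, the identical student and mentor instruments, and the score comparison come from the description above.

```python
from dataclasses import dataclass
from enum import IntEnum


class Score(IntEnum):
    """Five-point scale: how often the student displays the desired outcome."""
    NEVER = 1
    SELDOM = 2
    OFTEN = 3
    USUALLY = 4
    ALWAYS = 5


@dataclass
class Assessment:
    """One completed survey: a Score for each of the 34 outcome components."""
    respondent: str   # "student" or "mentor" (both use the identical instrument)
    timepoint: str    # "pre", "mid", or "end" of the research experience
    scores: dict      # component name -> Score


def score_gaps(student: Assessment, mentor: Assessment) -> dict:
    """Components where the pair disagree, to seed the follow-up conversation."""
    assert student.timepoint == mentor.timepoint
    return {
        component: int(student.scores[component]) - int(mentor.scores[component])
        for component in student.scores
        if student.scores[component] != mentor.scores[component]
    }
```

The output of a comparison like score_gaps is exactly the kind of material the post-survey student-mentor conversation is meant to work through: each nonzero gap marks a component where the two assessments differ.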

Following each of the three assessments, the mentor and student meet to discuss how each scored the survey and to explore the reasons for any differences in their respective assessments. The student-mentor pair may also add outcomes and outcome components of their own, giving each pair the flexibility to assess discipline-specific outcomes or any other aspect of the research experience they wish to examine. A summer research coordinator conducts an orientation session for students and mentors to explain the evaluation goals and methods. A web-based administration page shows the status of each student-mentor pair, helping the administrator track each pair and ensure that surveys and reports are completed in the proper sequence and at the correct time in the research program; a sketch of this sequencing rule follows below. The administrator releases forms only when the pair is ready to complete them, and automated reminders prompt the pair to complete each form and to meet to discuss how each scored the survey items. The design approach transcends specific STEM disciplines, offering an opportunity to aggregate impact results across a range of undergraduate research experiences. The model's built-in evaluation feature also provides summary data on student learning that can be used to inform resource allocation decisions.
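
The sequencing rule the administrator enforces, with each form released only after the previous one is completed and discussed, can be pictured as a small state machine. The sketch below is illustrative only: PairTracker and its methods are hypothetical names, not part of the actual web-based administration page.

```python
# Minimal sketch, assuming a simple three-step release sequence.

SEQUENCE = ("pre", "mid", "end")


class PairTracker:
    """Tracks one student-mentor pair through the three assessment rounds."""

    def __init__(self):
        self.completed = []  # forms the pair has finished, in order

    def next_form(self):
        """Next survey the administrator may release, or None when all are done."""
        if len(self.completed) < len(SEQUENCE):
            return SEQUENCE[len(self.completed)]
        return None

    def record_completion(self, form):
        """Record a finished survey, refusing anything out of sequence."""
        if form != self.next_form():
            raise ValueError(f"{form!r} is out of sequence")
        self.completed.append(form)
```

An administration page built this way can both display each pair's status (how many forms are completed) and refuse to release a survey before its predecessor has been finished, which matches the workflow described above.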

Table 1: Outcome Categories

  • Communication
  • Creativity
  • Autonomy
  • Ability to deal with obstacles
  • Practice and process of inquiry
  • Nature of disciplinary knowledge
  • Critical thinking and problem solving
  • Understanding ethical conduct
  • Intellectual development
  • Culture of scholarship
  • Content knowledge skills/methodology

To learn more about the development of the model, additional details of model implementation, and evaluation results at Buffalo State, refer to Singer and Weiler (2009) and Singer and Zimmerman (2012).

About the Program

This project is testing, refining, and disseminating a promising evidence-based model for guiding effective undergraduate research. The model is based on five years of data. Additional testing and refinement activities are underway to ensure applicability across diverse institutional settings. The project will create and provide the resources needed to support successful implementation at any undergraduate institution. Through collaborations with the Council on Undergraduate Research (CUR) and the Science Education Resource Center (SERC) at Carleton College, the project intends to foster national communities of STEM scholars who are trained and motivated to sustain and expand this model.

The model is designed to make the student learning outcomes from participation in undergraduate research as strong as possible. Evaluation is a central component: the model deliberately generates formative guidance through a series of quantitative, built-in evaluations conducted over the duration of each research experience, with multiple progress assessments by both faculty and students across a wide range of desirable outcomes. Because the design approach transcends specific STEM disciplines, impact results can be aggregated across a range of undergraduate research experiences, and the built-in evaluations yield summary data on student learning that can inform resource allocation decisions.

Project Timeline

Scaling up the evaluation model involves three phases between 2017 and 2019.

Phase I: Limited to 3-5 institutions with summer research programs structured like the program at Buffalo State. This phase allowed us to further validate the Buffalo State UR model by showing that it could be successfully implemented on other campuses with similar summer research programs.

Phase II: A group of 10-12 institutions (including those involved in Phase I) with programs similar to Buffalo State's, but with some differences that allowed us to explore how best to meet the needs of a more diverse group of institutions.

Phase III: We are currently recruiting a larger cohort of institutions for this final phase of testing. Applications are being accepted until December 7, 2019.

After the testing phases, we anticipate opening up the process and tools for broad use.

Project Team

  • Jill Singer, SUNY Buffalo State
  • Elizabeth Ambos, Council on Undergraduate Research
  • Sean Fox, Science Education Resource Center, Carleton College
  • James Hewlett, Finger Lakes Community College
  • Dan Weiler, Daniel Weiler Associates
  • Bridget Zimmerman, SUNY Buffalo State

This material is based upon work supported by the National Science Foundation under Grant No. 1347681. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.