
Assessment of Undergraduate Research

By Jill Singer, SUNY Buffalo State, and Dave Mogk, Montana State University, Bozeman


Students and their mentors at the 2011 Student Research and Creativity Celebration at SUNY-Buffalo State. Image provided by J. Singer, SUNY-Buffalo State.
Undergraduate research programs can be evaluated to assess student gains and mentor experiences and to determine the overall success and impact of the program. Faculty who mentor student researchers and coordinators who run larger research programs often collect formative evaluation data to inform decisions about improving the program, as well as summative evaluation data to document its impact and success. A range of instruments and methodologies is available, from those that are largely perception based to those that strive to anchor perceptions in observations and behaviors. Some methodologies facilitate structured conversations between students and mentors so that each has an opportunity to share their experiences and so that students are better able to explore their strengths and weaknesses. While smaller programs may not have the resources to hire an external/independent evaluator, there are many benefits to partnering with an evaluator: the data are collected by someone independent of the research activity, and they can be analyzed and reported using appropriate statistical measures. An evaluator also can offer advice about strategies for further program improvement.

Documenting the impact of an undergraduate research experience begins with identifying the desired student learning outcomes and program goals. The next step involves identifying the instruments and methodology for measuring progress toward these outcomes. This page offers information about available instruments and methodologies for evaluating undergraduate research and assessment of student gains and mentor experiences. Case studies also are provided to illustrate how others have evaluated their research programs.

Assessment Instruments and Methods

Group Discussions to Explore Student and Mentor Reactions

Used by David Mogk

The main purpose of the group discussion is to explore in depth student and mentor reactions to the research program and their experiences. If you plan to conduct group discussions with student researchers and faculty mentors at the end of a research experience (academic year or summer), consider using some or all of the candidate questions for students and mentors listed below.

The discussions should be facilitated by an evaluator or a facilitator experienced in conducting focus groups. The facilitator's role is to raise issues and ask questions that the students and mentors can address, ensure that everyone gets a chance to speak, keep the conversation focused so that it does not wander into irrelevant areas, and ensure that all of the topics of interest are covered in the time allowed. Although the discussion leader may take notes, it is recommended that a recorder be present to capture as much of the conversation as possible; direct quotes are very useful whenever possible. The recorder should not participate in the discussions. Following the discussion, the recorder should code the student and mentor remarks into discrete categories and prepare a summary of the responses organized according to those categories (a minimal sketch of this tallying step follows the question lists below). The draft summary should be shared with the facilitator to check against their notes and revised as needed. Items that could be coded can be found in Table 5 (Lopatto, 2004), Table 1 (Hunter et al., 2006), and Tables 2 and 4 (Seymour et al., 2004); see 'Supporting Resources' for links to these articles.


Candidate questions for student researchers:

  • What were the program's most important benefits to you?
  • Did the program impose any personal costs? If so, what were they?
  • How do you think the program could be improved; how could it have been more helpful to you?
  • How well prepared were you to conduct research; what additional knowledge or skills did you need before you could begin?
  • What did the program help you to understand that you had not understood – or understood as well – before you began your work?
  • What was the value of discussing your self-assessments with your faculty mentor and comparing them to the mentor's assessments?
  • What were the pluses and minuses of your relationship with your faculty mentor?
  • What were the most important obstacles you faced in conducting your research?
  • How did you deal with those obstacles?
  • How did your experience differ from what you expected before you began?
  • Did the program confirm your academic and/or work career decisions, sharpen your specific career focus, or cause you to change your plans?
  • What advice would you give to new entrants?
  • What advice would you give to mentors?
  • If you had it to do over, what would you do differently?
  • Overall, what were the program's greatest strengths/best features?
  • Overall, what were the program's greatest weaknesses/worst features?
  • What is your overall assessment of your experience?

Candidate questions for faculty mentors:

  • What were the program's most important benefits for students?
  • Did you benefit professionally from your participation in the program? If so, how?
  • Did your participation influence your views about the conduct of instruction?
  • Did the program impose any personal or professional costs? If so, what were they?
  • How do you think the program could be improved; how could it have been more helpful to students?
  • How well prepared were your students to conduct research; what additional knowledge or skills did they need before they could begin?
  • What did the program help students to understand that they had not understood – or understood as well – before they began to work with you?
  • How useful were the assessment rubrics?
  • What was the value of discussing your assessments with your students and comparing them to the students' self-assessments?
  • What were the pluses and minuses of your relationship with your student(s)?
  • What were the most important obstacles students faced in conducting their research?
  • How did they deal with those obstacles?
  • How did your experience as a mentor differ from what you expected before you began?
  • What advice would you give to new student entrants?
  • What advice would you give to new mentors?
  • If you had it to do over, what would you do differently?
  • Overall, what were the program's greatest strengths/best features?
  • Overall, what were the program's greatest weaknesses/worst features?
  • What is your overall assessment of your experience?
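
The coding step described above can be kept lightweight. The following is a minimal sketch (in Python) of one way a recorder's coded remarks might be tallied by category and summarized with direct quotes; the categories, speakers, and quotes are hypothetical placeholders, not items from an actual discussion or from the coding schemes cited above.

```python
from collections import defaultdict

# Hypothetical coded remarks: (category, speaker, direct quote).
# In practice, categories would be drawn from Lopatto (2004),
# Hunter et al. (2006), or Seymour et al. (2004).
coded_remarks = [
    ("mentor relationship", "student", "My mentor met with me every week."),
    ("career clarification", "student", "I now plan to apply to graduate school."),
    ("mentor relationship", "mentor", "Weekly meetings kept the project on track."),
]

def summarize(remarks):
    """Group quotes by category, then print counts and quotes per category."""
    by_category = defaultdict(list)
    for category, speaker, quote in remarks:
        by_category[category].append((speaker, quote))
    for category, entries in sorted(by_category.items()):
        print(f"{category} ({len(entries)} remarks)")
        for speaker, quote in entries:
            print(f'  [{speaker}] "{quote}"')

summarize(coded_remarks)
```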


Evaluation Instruments to Assess Student Gains and Facilitate Student-Mentor Structured Conversations

Professor Sarah Titus, a structural geologist with an interest in the distribution of strain across the central San Andreas fault, discusses her work with one of her students.
Developed by Daniel Weiler and Jill Singer

Used by Jill Singer (refer to Case Study for Buffalo State for more information)

A methodology for measuring student learning and related student outcomes has been developed at SUNY Buffalo State. The purposes of the evaluation are to obtain a reliable assessment of the program's impact on participating students and to provide information that helps students assess their academic strengths and weaknesses. Working with faculty from a wide range of disciplines (including arts, humanities, and social sciences, as well as STEM fields), the evaluation team selected 11 student outcomes to be measured: communication, creativity, autonomy, ability to deal with obstacles, practice and process of inquiry, nature of disciplinary knowledge, critical thinking and problem solving, understanding ethical conduct, intellectual development, culture of scholarship, and content knowledge skills/methodology. A detailed rubric describes the specific components of interest for each outcome, and faculty mentors assess students on each component using a five-point scale. Students evaluate their own progress using the same instrument and meet with the faculty mentor to compare assessments as a way to sharpen their self-knowledge. A range of complementary instruments and procedures rounds out the evaluation. A preliminary version of the methodology was field-tested with a small number of faculty mentors and students during the summer of 2007, and a refined evaluation has been implemented since 2008. The surveys can be found on the Undergraduate Research web page from Buffalo State College.

Research Skill Development (RSD) framework

Student performing field work on the Buffalo River. Image courtesy of J. Singer, SUNY-Buffalo State.
Developed by J. Willison and K. O'Regan at The University of Adelaide.

The framework and other resources are available at The Research Skill Development (RSD) homepage, hosted by the University of Adelaide.

The framework considers six aspects of research skills (listed below). Courses are developed to provide students with opportunities to move from Level I to Level V, with increasing student autonomy at each successive level. Levels I, II, and III are structured experiences, while Levels IV and V involve open inquiry. The RSD website provides an example of how the framework has been used in a human biology course.

  1. Students embark on inquiry and determine a need for knowledge/understanding
  2. Students find/generate needed information/data using appropriate methodology
  3. Students critically evaluate information or data and the process to find or generate the information/data
  4. Students organize information collected or generated and manage the research process
  5. Students synthesize, analyze, and apply new knowledge
  6. Students communicate knowledge and the processes used to generate it, with an awareness of ethical, social and cultural issues

Undergraduate Research Student Self-Assessment (URSSA)

Developed by Anne-Barrie Hunter, Timothy Weston, Sandra Laursen, and Heather Thiry, University of Colorado, Boulder.

The Undergraduate Research Student Self-Assessment (URSSA) is an online survey instrument for evaluating student outcomes of research experiences in the sciences. URSSA is hosted by salgsite.org (SALG, the Student Assessment of their Learning Gains, is a survey instrument for undergraduate course assessment). URSSA supports collection of information about what students gain, or do not gain, from participating in undergraduate research in the sciences. A set of core items is fixed and cannot be changed, but users can customize the survey with additional items.

Electronic Portfolios to Measure Student Gains

Developed by Kathryn Wilson, J. Singh, A. Stamatoplos, E. Rubens, and J. Gosney, Indiana University-Purdue University Indianapolis; Mary Crowe, University of North Carolina at Greensboro; D. Dimaculangan, Winthrop University; F. Levy and R. Pyles, East Tennessee State University; and M. Zrull, Appalachian State University

Electronic portfolios (ePort) are an evaluation tool for examining student research products before and after a research experience; a set of criteria is used in ePort to assess student intellectual growth.

In addition to uploading examples of research products, students use an evaluation tool to rate their research skills; mentors use the same tool to rate student products. Other surveys developed as part of ePort collect information about the student's relationship with their mentor as well as demographic information. More information about this NSF-funded project, including the evaluation tool and the mentoring experience survey, is available online.

Case Studies

Case Study 1 (SUNY Buffalo State) - Developing Instruments to Evaluate a Summer Research Program

Student performing field work in El Salvador. Image courtesy of J. Singer, SUNY-Buffalo State.
The summer research program at SUNY Buffalo State accounts for a major portion of the operational budget of Buffalo State's Office of Undergraduate Research. After determining that existing instruments were not adequate for our purposes because most were designed to assess laboratory science research experiences, we established a process and timeline for developing instruments and an evaluation protocol that could be used across all academic disciplines. We initiated this process by holding a two-day evaluation workshop in June 2006 led by Daniel Weiler (Daniel Weiler Associates).


Table 1: Outcome Categories and Components.
The workshop was attended by twelve faculty members, including new and experienced mentors in the arts, humanities, and natural and social sciences. During the retreat, a number of learning outcomes were identified, and for each outcome, statements were developed to define the outcome category; Table 1 summarizes the 11 outcome categories and the statements about each (referred to as components). Mentors and students also can add outcomes as appropriate to the discipline and project. After the retreat, Weiler and Singer continued to develop and refine the rubric as well as a methodology. The program is divided into three phases: pre- to early research, mid-research, and late- to post-research. At each stage the student and mentor complete the assessment survey and meet to discuss how each scored the outcome components. Table 2 summarizes the sequence of steps followed in the evaluation of the summer research program. All forms are completed online, and responses are entered into an electronic database.

The instruments and methodology were pilot tested by six faculty mentor/student researcher pairs participating in the 2007 summer research program. In September 2007, Dan Weiler conducted focus groups with the student researchers and their faculty mentors to gather the feedback needed to refine the instruments and to prepare for the next step in this effort, which involved all students and mentors participating in the summer research program. In 2008, all 17 student/mentor pairs awarded fellowships used the instruments (described below). Additional refinements to the forms were made in 2009 (n=20), 2010 (n=24), and 2011 (n=24). To date, a total of 85 mentor/student researcher pairs have participated in the summer research program evaluation.

The evaluation approach we developed is intended to be discipline-neutral, and our evaluation findings confirm that we have been successful in achieving this goal. We also sought an approach that reduces reliance on perceptions and judgments and helps students and mentors assign scores that reflect the frequency of particular behaviors. This has been done by developing a five-point scale linked to an explanatory rubric to denote that a student always (5), usually (4), often (3), seldom (2), or never (1) displays a given outcome for each component in the 11 outcome categories.
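
As an illustration only, the sketch below shows how the five-point frequency scale might be applied to the components of a single outcome category and summarized; the component names and scores are hypothetical, and the actual Buffalo State rubric and online forms are not reproduced here.

```python
# Five-point frequency scale applied to each outcome component.
FREQUENCY_SCALE = {5: "always", 4: "usually", 3: "often", 2: "seldom", 1: "never"}

# Hypothetical scores for the components of one outcome category
# (component names are illustrative, not the actual rubric wording).
communication_scores = {
    "explains results clearly to the mentor": 4,
    "keeps an organized research notebook": 3,
    "presents work to a general audience": 5,
}

# Average the component scores to summarize the category.
average = sum(communication_scores.values()) / len(communication_scores)
print(f"Communication category average: {average:.1f}")
for component, score in communication_scores.items():
    print(f"  {component}: {score} ({FREQUENCY_SCALE[score]})")
```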

Table 2: Sequence of Student Research and Program Evaluation Activities.
Data for three years (N = 61: 2008, n = 17; 2009, n = 20; 2010, n = 24) have been analyzed by Bridget Zimmerman. Based on a comparison of pre- and post-research self-assessments, students reported growth on all 34 outcome components. The differences between student pre- and post-research assessments were statistically significant less frequently than were the comparable mentors' assessments. Student self-assessment gains from pre- to post-research were statistically significant at p < .05 or better on 13 of the 34 assessment components, which is strong evidence of academic growth on those items. Student open-ended comments also focused on the program's impact on gains in their knowledge, their contribution to the discipline, the value of the experience for future endeavors (graduate school applications, resumes, etc.), and knowledge gained above and beyond the classroom setting. A paper by Singer and Zimmerman published in the spring 2012 CUR Quarterly describes our evaluation approach and findings.
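
For readers who want to run a similar pre/post comparison on their own program data, the sketch below shows one common approach (a paired t-test, with a Wilcoxon signed-rank test as a nonparametric alternative for ordinal rubric scores) using SciPy. The scores are fabricated examples, and this is not the analysis used in the Buffalo State study.

```python
# Minimal sketch of a paired pre/post comparison for one outcome component.
# Requires scipy; the scores below are made-up examples, not evaluation data.
from scipy import stats

pre_scores  = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]   # pre-research self-assessments
post_scores = [4, 4, 3, 3, 5, 4, 3, 4, 4, 3]   # post-research self-assessments

# Paired t-test on matched student scores.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
# Wilcoxon signed-rank test as a nonparametric alternative.
w_stat, p_wilcoxon = stats.wilcoxon(post_scores, pre_scores)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")
```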



Supporting Resources




