Assessment of Undergraduate Research
By Jill Singer, SUNY Buffalo State, and Dave Mogk, Montana State University, Bozeman
Documenting the impact of an undergraduate research experience begins with identifying the desired student learning outcomes and program goals. The next step is identifying the instruments and methodology for measuring progress toward those outcomes. This page describes available instruments and methodologies for evaluating undergraduate research, including assessment of student gains and of mentor experiences. Case studies are also provided to illustrate how others have evaluated their research programs.
Assessment Instruments and Methods
Group Discussions to Explore Student and Mentor Reactions
Used by David Mogk
The main purpose of the group discussion is to provide an in-depth exploration of student and mentor reactions to the research program and their experiences. If you plan to conduct group discussions with student researchers and faculty mentors at the end of a research experience (academic year or summer), consider using some or all of the candidate student and mentor questions found at the end of this section.
The discussions should be led by an evaluator or a facilitator experienced in conducting focus groups. The facilitator's role is to raise issues and ask questions that the students and mentors can address, ensure that everyone gets a chance to speak, keep the conversation focused so it does not wander into irrelevant areas, and ensure that all topics of interest are covered in the time allowed. Although the facilitator may take notes, it is recommended that a recorder be present to capture as much of the conversation as possible; direct quotes are especially useful. The recorder should not participate in the discussions. Following the discussion, the recorder should code the student and mentor remarks into discrete categories and prepare a summary of the responses organized by those categories. This draft should be shared with the facilitator, who checks it against their own notes, and the summary revised as needed. Items that could be coded can be found in Table 5 (Lopatto, 2004), Table 1 (Hunter et al., 2006), and Tables 2 and 4 (Seymour et al., 2004); see 'Supporting Resources' for links to these articles.
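For programs that want a lightweight way to tabulate coded remarks, the sketch below shows one possible approach in Python. It is illustrative only and not part of the protocol described above: the category names, roles, and quotes are hypothetical, and real coding categories would be drawn from the tables cited above.

```python
from collections import defaultdict

# Hypothetical coded remarks: (speaker_role, category, quote).
# Category names are illustrative; in practice they would come from
# the coding schemes in Lopatto (2004), Hunter et al. (2006), or
# Seymour et al. (2004).
coded_remarks = [
    ("student", "gains_in_confidence", "I feel ready to run the instrument myself."),
    ("student", "career_clarification", "This confirmed I want to go to grad school."),
    ("mentor", "program_logistics", "Ten weeks was barely enough for data collection."),
    ("mentor", "gains_in_confidence", "She now troubleshoots without my help."),
]

def summarize(remarks):
    """Group quotes by (role, category) and report counts per category."""
    by_category = defaultdict(list)
    for role, category, quote in remarks:
        by_category[(role, category)].append(quote)
    for (role, category), quotes in sorted(by_category.items()):
        print(f"{role} / {category}: {len(quotes)} remark(s)")
        for q in quotes:
            print(f'    "{q}"')

summarize(coded_remarks)
```

A tally like this supports the written summary; it does not replace the qualitative judgment involved in choosing categories and selecting representative quotes.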
Evaluation Instruments to Assess Student Gains and Facilitate Student-Mentor Structured Conversations
Developed by Daniel Weiler and Jill Singer. Used by Jill Singer (refer to the Buffalo State case study below for more information).
A methodology for measuring student learning and related student outcomes has been developed at SUNY Buffalo State. The purposes of the evaluation are to obtain a reliable assessment of the program's impact on participating students and to provide information that helps students assess their academic strengths and weaknesses. Working with faculty from a wide range of disciplines (arts, humanities, and social sciences, as well as STEM), the evaluation team selected 11 student outcomes to be measured: communication, creativity, autonomy, ability to deal with obstacles, practice and process of inquiry, nature of disciplinary knowledge, critical thinking and problem solving, understanding ethical conduct, intellectual development, culture of scholarship, and content knowledge skills/methodology. A detailed rubric describes the specific components of interest for each outcome, and faculty mentors assess students on each component using a five-point scale. Students evaluate their own progress using the same instrument and meet with the faculty mentor to compare assessments as a way to sharpen their self-knowledge. A range of complementary instruments and procedures rounds out the evaluation. A preliminary version of the methodology was field-tested with a small number of faculty mentors and students during the summer of 2007, and a refined evaluation has been implemented since 2008. The surveys can be found on the Undergraduate Research web page of Buffalo State College and include:
- The student survey, which is completed by the student before the research experience begins. This survey is designed to help mentors understand the views, expectations, interests, and knowledge and skills of the student researcher. This is the basis for completing the Pre-Research Assessment Survey.
- The Pre-Research Assessment Survey (student and mentor versions). This survey is completed by the student and mentor at the beginning of the summer research program and is intended to help define the pre-research baseline measure.
- Mid-Research Assessment Survey (student and mentor versions). The same rubric used in the pre-research survey is completed again at the midpoint of the program.
- Post-Research Assessment Survey (student and mentor versions). This is completed at the end of the summer. Changes in scores across the pre-, mid-, and post-surveys help determine student growth (or its absence) and the impact of the summer research program; a minimal score-comparison sketch follows this list.
- A student online journal designed to help document the experience.
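As a concrete illustration of how the pre- and post-survey scores might be compared, the Python sketch below computes per-outcome gains and the post-survey student-mentor gap. It is not part of the Buffalo State instruments; the outcome subset and all scores are hypothetical, though the outcome names and five-point scale follow the rubric described above.

```python
# Minimal sketch of the pre/post comparison described above.
# Outcome names follow the Buffalo State rubric; all scores are
# hypothetical, on the rubric's five-point scale.
OUTCOMES = ["communication", "creativity", "autonomy"]  # subset of the 11

# ratings[stage][rater][outcome] -> score (1-5)
ratings = {
    "pre":  {"student": {"communication": 2, "creativity": 3, "autonomy": 2},
             "mentor":  {"communication": 2, "creativity": 2, "autonomy": 1}},
    "post": {"student": {"communication": 4, "creativity": 4, "autonomy": 3},
             "mentor":  {"communication": 4, "creativity": 3, "autonomy": 4}},
}

for outcome in OUTCOMES:
    for rater in ("student", "mentor"):
        gain = ratings["post"][rater][outcome] - ratings["pre"][rater][outcome]
        print(f"{outcome:13s} {rater:7s} gain: {gain:+d}")
    # A large student-mentor discrepancy flags a topic for the
    # structured student-mentor conversation.
    gap = ratings["post"]["student"][outcome] - ratings["post"]["mentor"][outcome]
    print(f"{outcome:13s} post-survey student-mentor gap: {gap:+d}")
```

The mid-research survey fits the same structure; adding a "mid" stage to the dictionary allows the same loop to track growth at the program's midpoint.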
Research Skill Development (RSD) framework
Developed by John Willison, University of Adelaide (see the Willison article under 'Supporting Resources'). The framework considers six aspects of research skill (listed below). Courses are designed to give students opportunities to progress from Level I to Level V, with increasing student autonomy at each successive level: Levels I, II, and III are structured experiences, while Levels IV and V involve open inquiry. The RSD website provides an example of how the framework has been used in a human biology course.
- Students embark on inquiry and determine a need for knowledge/understanding
- Students find/generate needed information/data using appropriate methodology
- Students critically evaluate information or data and the process to find or generate the information/data
- Students organize information collected or generated and manage the research process
- Students synthesize, analyze, and apply new knowledge
- Students communicate knowledge and the processes used to generate it, with an awareness of ethical, social and cultural issues
Undergraduate Research Student Self-Assessment (URSSA)
Developed by Anne-Barrie Hunter, Timothy Weston, Sandra Laursen, and Heather Thiry, University of Colorado, Boulder.
The Undergraduate Research Student Self-Assessment (URSSA) is an online survey instrument used to evaluate student outcomes of research experiences in the sciences. URSSA is hosted at salgsite.org (SALG, the Student Assessment of their Learning Gains, is a survey instrument for undergraduate course assessment). URSSA collects information about what students do and do not gain from participating in undergraduate research in the sciences. A set of core items is fixed and cannot be changed, but users can customize the survey with additional items. URSSA is designed to measure:
- Personal/professional gains, such as gains in confidence and establishing collegial relationships with faculty and peers
- Intellectual gains, including applying knowledge and critical thinking skills to research work
- Gains in professional socialization, such as changes in students' attitudes and behaviors that indicate adoption of professional norms
- Gains in various skills (communication skills, technical skills, computer skills, etc.)
- Enhanced preparation for graduate school and the workplace
- Gains in career clarification, confirmation, and refinement
Electronic Portfolios to Measure Student Gains
Developed by Kathryn Wilson, J. Singh, A. Stamatoplos, E. Rubens, and J. Gosney (Indiana University-Purdue University Indianapolis); Mary Crowe (University of North Carolina at Greensboro); D. Dimaculangan (Winthrop University); F. Levy and R. Pyles (East Tennessee State University); and M. Zrull (Appalachian State University)
The electronic portfolio (ePort) is an evaluation tool for examining student research products before and after a research experience. The criteria used in ePort to assess student intellectual growth are:
- Core communication and quantitative skills
- Critical thinking
- Integration and application of knowledge
In addition to uploading examples of research products, students use an evaluation tool to rate their research skills; mentors also use this tool to rate student products. Other surveys developed as part of ePort collect information about the student's relationship with their mentor, along with demographic information. More information about this NSF-funded project is available, including the article at http://www.cur.org/assets/1/7/spring09wilson.pdf.
Case Studies
Case Study 1 (SUNY Buffalo State) - Developing Instruments to Evaluate a Summer Research Program
The summer research program at SUNY Buffalo State accounts for a major portion of the Office of Undergraduate Research's operational budget. After determining that existing instruments were not adequate for our purposes (most were designed to assess laboratory science research experiences), a process and timeline were established to produce instruments and an evaluation protocol that could be used across all academic disciplines. We initiated this process by holding a two-day evaluation workshop in June 2006 led by Daniel Weiler (Daniel Weiler Associates).
Supporting Resources
- BIO Research Experiences for Undergraduates, Undergraduate Research Student Self-Assessment, Assessment and Evaluation.
- Boyle, A., 2007, Using Alignment and Reflection to Improve Student Learning, Elements, v. 3, p. 113-117.
- Crowe, M., and Brakke, D., 2008, Assessing the Impact of Undergraduate Research Experiences on Students: An Overview of Current Literature, CUR Quarterly, Summer 2008, v. 28, n. 4.
- Hunter, A.B., Laursen, S.L., and Seymour, E., 2006, Becoming a Scientist: The Role of Undergraduate Research in Students' Cognitive, Personal, and Professional Development, Science Education, v. 91, p. 36-74.
- Lopatto, D., 2009, Science in Solution: The Impact of Undergraduate Research on Student Learning, Research Corporation for Science Advancement, 117 pp.
- Lopatto, D., 2004, Survey of Undergraduate Research Experiences (SURE): First Findings, Cell Biology Education, v. 3, p. 270-277.
- Lopatto, D., Assessment of Undergraduate Research and Scientific Teaching Research Instruments - includes three surveys of undergraduate research: the SURE III, the Research Follow Up, and the CURE.
- Seymour, E., Hunter, A.B., Laursen, S.L., and DeAntoni, T., 2004, Establishing the Benefits of Research Experiences for Undergraduates: First Findings From a Three-Year Study, Science Education, v. 88, p. 495-594.
- Singer, J., and Weiler, D., 2009, A Longitudinal Student Outcomes Evaluation of the Buffalo State College Summer Undergraduate Research Program, CUR Quarterly, Spring 2009, v. 29, n. 3.
- Singer, J., and Zimmerman, B., 2012, Evaluating a Summer Undergraduate Research Program: Measuring Student Outcomes and Program Impact, CUR Quarterly, Spring 2012, v. 32, n. 3.
- Willison, J., 2009, Multiple Contexts, Multiple Outcomes, One Conceptual Framework for Research Skill Development in the Undergraduate Curriculum, CUR Focus, Spring 2009, v. 29, n. 3.