
Sustaining, Systematizing, Institutionalizing

Propelling inclusive excellence through thoughtful assessment

Ongoing, formative, and thoughtful assessment is critical to persistence efforts. Programs launched to broaden the pool of scientists and to improve the quality of educational outcomes must incorporate evaluations that provide feedback about the outcomes they target.

Below we articulate metrics for measuring these outcomes, as well as caveats and future directions for even more effective assessment.

Common Metrics

Capstone Institutions use a number of metrics to benchmark access and persistence efforts. Many focus on objective, well-accepted measures, including:

  • Performance in science classes (e.g., grades in gateway science courses; overall GPA in the major)
  • Participation in opportunities identified as hallmarks of high-quality science education (e.g., participation in research opportunities)
  • Persistence in science. For admitted students, persistence is typically defined as a match between stated interest in majoring in science at entry into higher education and final declared major. Under this operationalization, "success" occurs when students who intended to major in a STEM discipline ultimately do so (see the sketch following this list).
  • Long-term educational trajectories (e.g., rates of graduation with STEM majors; rates of alumni graduate-degree completion)
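
To make this common operationalization concrete, here is a minimal sketch of how a persistence rate might be computed from student records. The column names and data are hypothetical, not drawn from any Capstone Institution's records.

    import pandas as pd

    # Hypothetical student records; column names and values are illustrative only.
    students = pd.DataFrame({
        "student_id":    [1, 2, 3, 4, 5, 6],
        "intended_stem": [True, True, True, False, True, False],
        "declared_stem": [True, False, True, False, True, True],
    })

    # "Success" under this operationalization: students who intended a STEM
    # major and ultimately declared one.
    intenders = students[students["intended_stem"]]
    persistence_rate = intenders["declared_stem"].mean()

    print(f"STEM intenders: {len(intenders)}")
    print(f"Persistence rate: {persistence_rate:.0%}")  # 75% for this toy data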

Other measures focus on students' self-reported sense of scientific engagement and identity, as well as their intention to continue in science, given the relationship between these variables and the measures of persistence defined above.

  • Common measures include the CURE and SURE surveys, developed by David Lopatto at Grinnell College.
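
As an illustration of how such self-report data can be linked to persistence outcomes, the sketch below averages hypothetical Likert-scale items into a composite engagement score and checks its association with a persistence indicator. The items and data are invented for illustration; the CURE and SURE instruments define their own items and scoring.

    import pandas as pd

    # Hypothetical Likert items (1-5); real instruments such as CURE/SURE
    # define their own items and scoring guidance.
    survey = pd.DataFrame({
        "engagement_1": [5, 2, 4, 1, 4],
        "engagement_2": [4, 3, 5, 2, 4],
        "identity_1":   [5, 2, 4, 2, 3],
        "persisted":    [1, 0, 1, 0, 1],  # 1 = still in STEM at follow-up
    })

    items = ["engagement_1", "engagement_2", "identity_1"]
    survey["engagement_score"] = survey[items].mean(axis=1)

    # Association between the self-report composite and persistence
    # (Pearson on a binary outcome, i.e., a point-biserial correlation).
    print(survey["engagement_score"].corr(survey["persisted"]))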

[Figure: Changes in science graduates between the 5-year periods, 1994–1998 and 2011–2015]


Caveats and Recommendations

Understand persistence using holistic assessment

Institutions can paint a complete picture of their students only by examining multiple sources of data that capture outcomes and stakeholder perceptions from a variety of viewpoints. Well-developed programs often gather both qualitative (e.g., open-ended survey responses, focus groups, informal observation) and quantitative (e.g., graduation and GPA) data in both the short and long term, not only to understand whether they have made gains in persistence but also to identify why this change is (or is not) occurring.

Do not set your bar too low or too high

A priori benchmarks of progress for persistence programs need to be chosen carefully. All of the Capstone Institutions shared examples of times during program development when the data revealed expectations that had been set either too low or too high. Comparisons of an individual institution's students to those from other institutions, or to nationwide data, need to be made carefully, in the context of your institution's overall profile in higher education. If, at the start of an institution's efforts, its STEM degree rate exceeds national norms but students from certain groups underperform on measures of academic achievement and access to opportunity, then there is still a problem that clearly needs addressing.

We also agree that positive changes can take time. It is important not to get discouraged if a new program does not move the needle sharply and quickly. Nonetheless, institutions must look for opportunities to assess changes formatively and regularly, to know whether programs are moving outcomes steadily in the desired direction.
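
The caution about national comparisons can be made concrete with a small sketch: check the overall STEM degree rate against a benchmark, and also check each group's rate within the institution. The numbers below are invented, and the benchmark is a placeholder for a figure that would come from a source such as IPEDS or NSF degree data.

    import pandas as pd

    # Invented records and a placeholder benchmark, not published statistics.
    NATIONAL_STEM_DEGREE_RATE = 0.20

    grads = pd.DataFrame({
        "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
        "stem_degree": [1, 1, 1, 1, 1, 0, 0, 0],
    })

    overall = grads["stem_degree"].mean()
    by_group = grads.groupby("group")["stem_degree"].mean()

    # An above-benchmark overall rate (here 62%) can mask a large gap
    # between groups (here 100% for group A vs. 25% for group B).
    print(f"Overall: {overall:.0%} vs. benchmark {NATIONAL_STEM_DEGREE_RATE:.0%}")
    print(by_group)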

Choose your comparison groups intentionally

One important assessment decision involves the choice and definition of comparison groups: comparing outcomes for program participants with outcomes for students who have not participated. Create these groups intentionally, based on whom you are trying to serve and how you would define success. Often, institutions compare groups of students with particular social identities (e.g., underrepresented minority students, first-generation college students, low-income students) to all other students in order to determine whether the entirety of their student body evidences equivalent levels of success across meaningful metrics.

Determining whom you are trying to serve and how you would define success are important decisions based on your own institution's goals and current outcomes across underrepresented groups. Institutions need to understand their own student populations' strengths and needs through dialogue and informal observation paired with more objective analysis of outcomes. It is especially important to know your students' preparedness for college-level academic work. With this knowledge, strategies developed to "level the playing field" can have maximum impact.

When possible, compare equivalent groups who do, versus do not, receive the program in order to draw meaningful conclusions about the impact of your programming.

Hope College
At Hope, the FACES program compares outcomes for students who were invited to participate but declined versus those invited who participated.
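
A minimal sketch of a comparison like the one Hope describes, using invented data: mean outcomes for invited students who participated versus invited students who declined. The column names and values are hypothetical, and a fuller analysis would also check that the two groups were equivalent at baseline.

    import pandas as pd

    # Hypothetical records for invited students; participated = 1 means
    # enrolled in the program, 0 means invited but declined.
    invited = pd.DataFrame({
        "participated":  [1, 1, 1, 1, 0, 0, 0, 0],
        "gateway_gpa":   [3.2, 3.5, 2.9, 3.4, 2.7, 3.0, 2.5, 2.8],
        "retained_stem": [1, 1, 0, 1, 0, 1, 0, 0],
    })

    # Mean outcomes for participants vs. decliners.
    print(invited.groupby("participated")[["gateway_gpa", "retained_stem"]].mean())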

Beware the most common measure of persistence in science; it is incomplete

One of the most popular ways to measure program success in higher education is to examine whether students' intention to major in science matches their eventual declared major. There are a number of problems with relying on a single self-reported intended major at the time of enrollment to assess persistence. This singular data point might hide important information. For example, a student with significant STEM coursework and a long-standing commitment to becoming a scientist would be counted the same as a student who, feeling pressured to pick an area of interest, made the same choice on the spot. Ultimately, their persistence (or lack thereof) would be considered equivalent even though they began from rather different places. Our institutions agreed that persistence in science might not always be a desirable goal at our liberal arts institutions. When our students choose a non-STEM major, this might be viewed differently depending on a variety of factors, including the strength of the original stated intent and the reason for the choice.

We need better measures of persistence that go beyond examining whether there is a match between intended and declared major; we lose too much information under that operationalization. Better metrics would go beyond measuring intent at a single time point and instead track a student's intended major from the time of application through the declaration of the major. Another promising avenue is to invest in efforts to understand both students who declare a non-STEM major and students who are recruited into STEM. In that way, we will have a richer sense of the barriers and entry points to STEM, allowing more targeted interventions that remove obstacles to, or capitalize on factors supporting, students' educational and career trajectories. Other promising complementary approaches for assessing persistence include:

Swarthmore College
At Swarthmore, we have used gateway course enrollment as a baseline for interest in STEM, defining persistence as continuation into additional coursework and science majors.
Grinnell College
Grinnell has examined disaggregated data (e.g., through transcript analysis) for different demographic groups to observe student trajectories.
Hope College
We are now taking snapshots of major declarations (and undeclarations) to move beyond the unreliability of "intended major" declarations for incoming students. We are also using enrollment, success, and persistence in all STEM major introductory course sequences (the first three semesters) to define persistence and measure the impact of first-year programs.
Smith College
At Smith, we examine a variety of student outcomes to gauge the impact of programs targeting inclusive excellence, including gateway course GPA, persistence in science, and rate of participation in advanced and credit-bearing scientific research in the junior and senior years.
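
To illustrate the snapshot approach described above, the sketch below collapses hypothetical per-semester records of each student's major status into trajectory strings, so that late entries into and exits from STEM remain visible rather than being reduced to a single intent-versus-major match. The data and column names are invented.

    import pandas as pd

    # Hypothetical per-semester snapshots of whether each student's declared
    # (or intended) major is in STEM.
    snapshots = pd.DataFrame({
        "student_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "semester":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
        "in_stem":    [1, 1, 1, 1, 0, 0, 0, 0, 1],
    })

    # Collapse each student's snapshots into a trajectory string:
    # "111" = persisted, "100" = exited STEM, "001" = recruited into STEM late.
    trajectories = (
        snapshots.sort_values(["student_id", "semester"])
                 .groupby("student_id")["in_stem"]
                 .apply(lambda s: "".join(s.astype(str)))
    )
    print(trajectories.value_counts())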

Fostering institutional cultures that support inclusive excellence

When values that affirm the importance of access and diversity in science are shared broadly by a community, everyday practice works in concert with the goals of inclusive excellence.

Faculty Perspectives

Historically, some science faculty embraced the mentality that part of their job was to "weed out" students who were unlikely to succeed as scientists. This attitude is antithetical to inclusive excellence. Faculty development needs to focus on how to "weave in" all students rather than weed them out. This means that faculty must view it as their responsibility to teach every student who wants to learn science, rather than focus only on the most well-prepared students in our classrooms and programs.

Faculty need to be educated about the power of student mindset on academic outcomes and trajectories (the work of Carol Dweck is particularly instructive here). We must cultivate growth rather than fixed mindsets in our students, emphasizing that one need not come to science with natural talent or brilliance (a fixed mindset) but rather that successful scientists are those who view their learning as an opportunity for practice and growth (a growth mindset).

Following this, faculty, programs, and institutions need to attend to the way everyone in the community understands and discusses the purpose of bridge and other programs that help prepare students for the rigors of college. Framing these programs as remedial, or the students as needing remediation, undermines students' sense of belonging and is inconsistent with evidence that the right environments can foster success for all students. Instead, we should talk about these programs as providing opportunities for what Claude Steele calls "wise schooling": pairing language about the faculty's and institution's high expectations for students with an unwavering belief that the students are capable of reaching them.

Hope College
Hope College has created a professional learning community that has fostered discussion of many topics, based on faculty interests and student needs. Recent topics of focus have been the development and assessment of course-based research experiences, the fostering of growth mindsets in students and faculty, and active learning pedagogy.

Institutional Perspectives

Campus climate is key to persistence and access. The more that the values above are collectively shared by stakeholders, the better off persistence efforts will be on your campus. Strategies that we have found effective at our Capstone Institutions include: broadening conversations about access to include large groups of faculty; sharing ideas and data with administrators; and connecting with strategic planning efforts that can incorporate the mission of fostering inclusive excellence for departments or institutions.

Smith College
Smith's Science Center Committee on Diversity (SCCD) brings together students, staff, and faculty to ensure that offerings and opportunities in STEM disciplines are accessible to all students.

Related Resources

Cohen, G.L., Steele, C.M., & Ross, L.D. (1999). The mentor's dilemma: Providing critical feedback across the racial divide. Personality and Social Psychology Bulletin, 25, 1302–1318.

Dweck, C.S. (2007). Mindset: The new psychology of success. New York: Ballantine Books.

Steele, C.M. (2010). Whistling Vivaldi: How stereotypes affect us and what we can do. New York: W.W. Norton & Co.

