Initial Publication Date: April 13, 2015
Measuring the Impact of our Programs on Students
Just as we need to assess our teaching to know what our students are learning, we need to assess our programs to know whether the programs are achieving our goals. Measuring the impact of programs on our students allows us to make informed choices about where we will invest our time and energy.
This presentation, by Cathy Manduca and Ellen Iverson, was given at the January 2007 workshop on the Role of Departments in Preparing Future Geoscience Professionals. You can also download the original presentation (PowerPoint 2.3MB Jan19 07) if you like.
Many faculty members appear to see assessment as yet another time sink. We invite you to look at it, instead, as a vitally important research project. The overarching research questions to be answered might be "What are we doing right?" and "What could we be doing to achieve our departmental goals more efficiently?" As with any research project, your first step will be to gather data.
The following questions, posed by the professional evaluator we work with, can be used to frame a plan for collecting data:
- Does the program address an important challenge?
- What is the quality of the implementation?
- What is the utility -- are participants using it?
- Is it being used in different situations?
- Is it being shared by participants?
Assessment can also be an invaluable tool when you want to convince your colleagues -- who are scientists, and therefore susceptible to logical arguments supported by data -- to make changes.
Your colleagues, of course, will want to know the answers to these questions before they agree to any proposed changes. The more favorably you can answer them, and the better you can support your answers with data, the greater your chances of convincing them to try something new.
Many of the skills we have developed as geoscientists can be helpful in designing program assessments. Our programs can be thought of as complex systems, much like the ones we study in our research.
Likewise, we are already familiar with the principles of experimental design. We are accustomed to applying these principles to geological experiments, but applying them to the study of departmental program effectiveness is not so different.
There are many similarities between the processes of assessment and geoscience research. From data-gathering through experimental design to data analysis and interpretation, the questions asked are much the same.
For example, if you wanted to know why undergraduate research experiences are valuable, you could design an assessment of such experiences. You could hypothesize what makes them valuable, come up with ways to test your hypotheses, and think of ways to quantify your observations.
As you did so, you would undoubtedly use multiple working hypotheses; you would find ways to describe, classify, and code your observations; and you would begin to infer process and causal relationships from your data.
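To make this concrete, here is a minimal sketch, in Python, of how coded observations might be tallied against multiple working hypotheses. The hypothesis codes and interview excerpts below are hypothetical, invented purely for illustration; your own codes would come from your actual hypotheses and data.

```python
# Minimal sketch: coding qualitative observations against multiple working
# hypotheses about why undergraduate research experiences are valuable.
# The hypothesis codes and excerpts below are hypothetical illustrations.
from collections import Counter

# Multiple working hypotheses, each assigned a short code
hypotheses = {
    "H1": "Students gain technical skills",
    "H2": "Students build mentoring relationships",
    "H3": "Students clarify career goals",
}

# Each coded observation pairs an interview excerpt with the codes it supports
coded_observations = [
    ("I learned to run the mass spectrometer on my own.", ["H1"]),
    ("My advisor met with me every week to talk through results.", ["H2"]),
    ("The project convinced me to apply to graduate school.", ["H3"]),
    ("Presenting at the department seminar built my confidence.", ["H1", "H3"]),
]

# Tally how often the evidence supports each hypothesis
tally = Counter(code for _, codes in coded_observations for code in codes)

for code, description in hypotheses.items():
    print(f"{code} ({description}): {tally[code]} supporting observations")
```

Even a simple tally like this makes it visible which hypotheses your evidence actually supports and which remain untested.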
One way to design an assessment is to "map it out" using a logic model. There are goals associated with each program you are assessing. What assumptions are you making about HOW the program achieves those goals? Writing out your model of how, specifically, your program elements will lead to the changes you seek allows you to articulate and examine those assumptions. That, in turn, allows you to figure out what to measure, and how to measure it, to test your assumptions.
For example, you might want to evaluate your department's internship experiences. Perhaps your expectation of these experiences is that they improve potential employers' perceptions and students' perceptions of your program. But HOW would internships do that? Students who participate in internships might gain knowledge, skills, attitudes, networks, and resources from their participation. This might make it easier for those students to find employment when they graduate, and to stay in their first post-graduation jobs longer than they otherwise would. These measurable outcomes might produce the anticipated changes in perceptions of your program.
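As an illustration, the logic model for this internship example could be written down as a simple data structure, making each assumed mechanism, measurable outcome, and measure explicit. The sketch below is hypothetical; the element names, outcomes, and measures are placeholders rather than a prescription.

```python
# Minimal sketch of a logic model for the internship example above.
# All element names, mechanisms, and measures are hypothetical placeholders
# meant to show how writing the model down exposes testable assumptions.
internship_logic_model = {
    "program_element": "Department-arranged internships",
    "assumed_mechanisms": [
        "Students gain knowledge, skills, attitudes, networks, and resources",
    ],
    "measurable_outcomes": [
        "Time to first post-graduation job",
        "Retention in first job after one year",
    ],
    "intended_impacts": [
        "Improved employer perception of the program",
        "Improved student perception of the program",
    ],
    "measures": {
        "Time to first post-graduation job": "alumni survey, months to hire",
        "Retention in first job after one year": "alumni survey, yes/no at 12 months",
    },
}

# Any assumed outcome without a corresponding measure is an untested
# assumption -- flag it so the assessment plan can address it.
unmeasured = [o for o in internship_logic_model["measurable_outcomes"]
              if o not in internship_logic_model["measures"]]
print("Outcomes still needing a measure:", unmeasured or "none")
```

Writing the model in this explicit form is simply a way of forcing every assumed link in the chain, from program element to intended impact, to name the measure that will test it.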
Having drawn a conceptual model of how your program works, you can then assess its effectiveness. Are you, in fact, observing the expected impacts from your program? Are the impacts, in fact, caused by your program?
The data you collect to answer your assessment questions can be either quantitative or qualitative. Quantitative data could include how many students and employers participate in internships, what percentage of your students participate, and what percentage of students and employers are satisfied with their experiences (based on the results of Likert-scale surveys). Qualitative data could include responses to open-ended surveys or interviews, embedded assessments (such as a journal the student keeps, online discussion artifacts, or an oral presentation by the student to an introductory geology class), or focus group discussions prior to the internship experience.
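For the quantitative side, a short sketch like the following could summarize participation rates and Likert-scale satisfaction. The counts and survey responses are invented, and the choice to treat ratings of 4 or higher as "satisfied" is an assumption made only for illustration.

```python
# Minimal sketch: summarizing hypothetical quantitative internship data.
# Counts and responses are invented; ratings of 4 or 5 on a 1-5 Likert
# scale are treated as "satisfied" for this illustration.
students_enrolled = 60
students_with_internships = 21
employer_partners = 12

# Hypothetical Likert-scale responses (1 = very dissatisfied, 5 = very satisfied)
student_satisfaction = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4]
employer_satisfaction = [4, 5, 4, 3, 5]

def percent_satisfied(responses, threshold=4):
    """Percentage of Likert responses at or above the satisfaction threshold."""
    return 100 * sum(r >= threshold for r in responses) / len(responses)

print(f"Participation rate: {100 * students_with_internships / students_enrolled:.0f}%")
print(f"Students satisfied: {percent_satisfied(student_satisfaction):.0f}%")
print(f"Employers satisfied: {percent_satisfied(employer_satisfaction):.0f}%")
```

Summaries like these pair naturally with the qualitative evidence: the numbers show whether the expected outcomes are occurring, while the interviews and journals suggest why.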
Ultimately, your choices about why and how to assess the impacts of your programs on your students will be guided by your purpose in conducting the assessment, who will see the results of the assessment, and what will make a compelling argument for that audience. Well-designed assessments can be used to help you choose how best to invest your time, to lobby your departmental colleagues to make changes, or to impress your Dean with the effectiveness of your programs.