Program Evaluation
Page prepared by Ellen Iverson
Evaluation vs Research | External vs Internal | Formative vs Summative | SERC Examples | SERC Related Links | Resources
Program evaluation emerged as a field during the 1960s era of social programs. Researchers define evaluation as the systematic determination of the merit, worth, value, or success of a policy, program, process, project, or organization (Weiss, 1998; Torres, Preskill, & Piontek, 1996; Patton, 2008).
Essentially, program evaluation:
- Uses a systematic process
- Involves the collection of data and measurements
- Uses a process for enhancing knowledge and decision-making (Russ-Eft & Preskill, 2009)
Evaluators can play a critical role in grant proposal writing and in the success of the proposal.
Program and evaluation design for the common good: The role of evaluators in grant proposal writing, as presented at the 2017 American Evaluation Association Conference (Acrobat (PDF) 615kB Nov10 17)
Evaluation vs. Educational Research
Funding agencies often require an evaluation plan or a research study as part of a proposal. Because the two rely on similar methods, the distinguishing features can seem ambiguous.
[Comparison table: Evaluation vs. Research*]
*Blome (2009) – Office of Program Analysis and Evaluation, National Institute of General Medical Sciences
External vs. Internal Evaluation
- External evaluation: Evaluation conducted by an evaluator from outside the organization within which the project is housed (NSF, 2002).
- Internal evaluation: Evaluation conducted by an evaluator from within the organization in which the project is housed.
Formative vs Summative Evaluation
- Formative evaluation: Evaluation designed and used to improve an intervention, program, or policy, particularly when it is still being developed.
- Summative evaluation: Evaluation designed to present conclusions about the merit or worth of an intervention, program, or policy and recommendations about whether it should be retained, altered, or eliminated. (NSF, 2002).
Examples from the SERC portfolio:
Internal Evaluation

On the Cutting Edge: Project design, implementation, and management

The On the Cutting Edge project developed a comprehensive professional development program for the geosciences. The project pioneered the use of online tools, developed by SERC, to promote sharing among workshop participants while creating a website that brought the results of workshops to a broader audience. Since the project was established in 2002 with NSF funding, more than 3,000 faculty, post-docs, and graduate students from more than 900 institutions have participated in 109 face-to-face and virtual workshops and community-based research projects, producing a website with more than 9,000 pages of content that is visited by more than 1,000,000 users annually. This transformation was made possible by an investment of more than $8.5 million by the National Science Foundation over a thirteen-year period.

The evaluation plan addresses the two major effort domains of the project: the workshops and the website. All evaluations provide information that enables project staff to answer the following questions: "What are the results we are trying to achieve?" "Given these, who do we have to reach?" "What programs do we have with which to reach them?" "What evidence do we have that we did what we planned?" The intent of the evaluation is both formative and summative. From a formative perspective, we look to improve our overall program, including the workshops, website collections, and leadership program. The current objectives of the summative evaluation plan are to assess the impact of the workshops and website on faculty teaching and student learning. A second aim is to evaluate how the program contributes to the research base on effective faculty development. Learn More About Cutting Edge Evaluation

Educational Research

Tracer Study: Research on professional development

The study shows that the largest impact of professional development is the spread of ideas, values, and practices across the campus. This work is summarized in an article in Change: The Magazine of Higher Learning, May/June 2012: Faculty Professional Development and Student Learning: What is the Relationship? (Rutz, C., Condon, W., Iverson, E., Manduca, C., & Willett, G.); the methods are described in an article in Assessing Writing, Vol. 20, April, pp. 19-36: Measures matter: Evidence of faculty development effects on faculty and student learning (Willett, G., Iverson, E., Rutz, C., & Manduca, C.). The study is described in its entirety in the book Faculty Development Matters: Connecting Faculty Learning to Student Learning.

Assessment

InTeGrate: A systems model for educational change

SERC leads the InTeGrate STEP Center as it seeks to transform learning about the Earth in higher education. To increase the number and diversity of students developing Earth literacy, InTeGrate focuses on ensuring that teaching integrates science into real-world environmental and resource challenges. The project works on multiple scales to achieve large-scale systemic change: assisting faculty in transforming their courses, supporting programs in testing new strategies for engaging and supporting students of all types, and building a national community of Earth educators who can share successes and challenges. To date, nearly 1,000 educators from across various disciplines are engaged in developing a new breed of tested teaching materials, designing model programs, contributing to the InTeGrate resource collections, and participating in community activities. Learn more about InTeGrate Assessment of Materials and Student Learning Outcomes

External Evaluation

GARNET: Geoscience Affective Research Network

The GARNET project sought to better understand student experiences in introductory geology classes and how faculty can attempt to improve students' attitudes and motivations toward learning by examining three overarching questions: (1) Who are the students enrolling in introductory geology classrooms? (2) What, if any, relationship is there between learning environments and learning outcomes? (3) What, if any, relationship is there between learning environments and students' motivation? SERC served as external evaluator in two capacities: 1) to formatively evaluate, via focus groups and interviews, the effectiveness of the research network in facilitating its members' work, and 2) to provide a critical external perspective on the research findings and analysis.

Teaching with Data.org: Pathway to Quantitative Literacy in the Social Sciences (http://serc.carleton.edu/sp/qssdl/index.html)

Teaching with Data.org is a portal where faculty can find resources and ideas to reduce the challenges of bringing real data into post-secondary classes. Using real data is a great way for students to become more engaged in the content of a course, but significant barriers, largely in terms of instructor preparation, can make using data a challenge. TeachingWithData.org allows faculty to introduce and build students' quantitative reasoning abilities with readily available, user-friendly, data-driven teaching materials. SERC served as external evaluator in multiple capacities: 1) conducted a national survey to investigate how social science faculty used web resources to inform their teaching and how quantitative literacy factored into decisions about teaching, 2) consulted on the design of usability protocols, 3) conducted in-depth interviews of social science faculty to investigate how they sought resources to improve their teaching practices, and 4) developed webpage profiles based on interviews conducted with faculty about their teaching practices related to quantitative literacy. Learn more about Teaching with Data Faculty Profiles
SERC Related Links
- Assessment of student learning:
- Program assessment: Program Assessment
- Logic models: Logic Models
- Systems thinking: Systems Thinking
- Formative and summative assessment: Develop an Assessment Plan
Evaluation Resources
- American Evaluation Association website includes Find an Evaluator listing
- American Educational Research Association website
- Comprehensive overview of evaluation practices: Alkin, M. C. (2012). Evaluation roots: A wider perspective of theorists' views and influences. Thousand Oaks, CA: SAGE Publications. Amazon site for Evaluation roots book
- Guide to the design and use of logic models: W.K. Kellogg Foundation. Logic Model Development Guide. Available from Kellogg's Logic Model Guide
- One of the most common evaluation approaches: Patton, M. Q. (1997). Utilization-Focused Evaluation. Thousand Oaks, CA: Sage Publications, Inc. Utilization-Focused Evaluation Checklist
- Evaluation approaches related to equity programs: Donaldson, S. I., & Picciotto, R. (Eds.). (2016). Evaluation for an equitable society. Charlotte, NC: Information Age Publishing.
- Systems approaches:
  - Patton, M. Q. (2010). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York, NY: Guilford Press. Blandin Foundation example of a developmental evaluation approach
  - Preskill, H., Parkhurst, M., & Juster, J. S. (2015). Guide to Evaluating Collective Impact: Learning and Evaluation in the Collective Impact Context. Collective Impact Forum / FSG. Evaluating collective impact projects