Initial Publication Date: July 14, 2016

Program Evaluation

Page prepared by Ellen Iverson


Program evaluation emerged as a field during the 1960s era of social programs. Researchers define evaluation as the systematic determination of the merit, worth, value, or success of a policy, program, process, project, or organization (Weiss, 1998; Torres, Preskill, & Piontek, 1996; Patton, 2008).

Essentially, program evaluation:

  • Uses a systematic process
  • Involves the collection of data and measurements
  • Uses a process for enhancing knowledge and decision-making (Russ-Eft & Preskill, 2009)

Evaluators can play a critical role in grant proposal writing and in the success of the proposal.

Program and evaluation design for the common good: The role of evaluators in grant proposal writing as presented at the 2017 American Evaluation Association Conference (Acrobat (PDF) 615kB Nov10 17)

Evaluation vs. Educational Research

Funding agencies often require an evaluation plan or a research study as part of a proposal. Because the two use similar methods, their distinguishing features can seem ambiguous.

Evaluation

  • Recognizes the interests of program leaders or other stakeholders in the design of the inquiry
  • Judges merit or worth
  • Informs decision-making related to the program
  • Lacks a controlled setting

Research

  • Is driven by intellectual curiosity, with a design that tests a hypothesis or theory of change
  • Produces knowledge
  • Advances theory
  • May attempt to control the setting (e.g., randomized control trials)

Source: Blome (2009), Office of Program Analysis and Evaluation, National Institute of General Medical Sciences

External vs. Internal Evaluation

External evaluation: Evaluation conducted by an evaluator outside the organization within which the project is housed (NSF, 2002).

An external evaluator may:

  • Act as independent "critical friend"
  • Be viewed as more objective
  • Bring particular knowledge or methods (e.g., social network analysis)
  • Take a variety of forms in higher education (e.g., advisory board, set of consultants, full-time evaluator)

An internal evaluator may:

  • Possess in-depth knowledge about the context, history, processes, or populations related to the program
  • Be viewed as an insider
  • Bring consistency, alignment, and synergy across projects
  • Take a variety of forms in higher education (e.g., faculty, staff, center)

Formative vs. Summative Evaluation

  • Formative evaluation: Evaluation designed and used to improve an intervention, program, or policy, particularly when it is still being developed.
  • Summative evaluation: Evaluation designed to present conclusions about the merit or worth of an intervention, program, or policy and recommendations about whether it should be retained, altered, or eliminated. (NSF, 2002).

Examples from the SERC portfolio:

Internal Evaluation

On the Cutting Edge:
Project design, implementation, and management


The On the Cutting Edge project developed a comprehensive professional development program for the geosciences. The project pioneered the use of online tools, developed by SERC, to promote sharing among workshop participants while creating a website that brought the results of workshops to a broader audience. Since the project was established in 2002 with NSF funding, more than 3,000 faculty, post-docs, and graduate students from more than 900 institutions have participated in 109 face-to-face and virtual workshops and community-based research projects, producing a website with more than 9,000 pages of content that is visited by more than 1,000,000 users annually. This transformation was made possible by an investment of more than $8.5 million by the National Science Foundation over a thirteen-year period.

The evaluation plan addresses the project's two major domains of effort: the workshops and the website. All evaluations provide information that enables project staff to answer the following questions: "What are the results we are trying to achieve?" "Given these, who do we have to reach?" "What programs do we have with which to reach them?" "What evidence do we have that we did what we planned?"

The intent of the evaluation is both formative and summative. From a formative perspective, we look to improve our overall program, including the workshops, website collections, and leadership program. The current objectives of the summative evaluation plan are to assess the impact of the workshops and website on faculty teaching and student learning. A second aim is to evaluate how the program contributes to the research base on effective faculty development.

Learn More About Cutting Edge Evaluation

Educational Research


Tracer Study:
Research on professional development

Combining an ethnographic approach with an evaluation study, the Tracer project investigated the impact of professional development activities on faculty teaching and student learning. SERC researchers worked with collaborators at Carleton College and Washington State University to develop techniques for assessing changes in teaching and learning using faculty evaluation of assignments and student work.

The study shows that the largest impact of professional development is the spread of ideas, values, and practices across the campus. This work is summarized in an article in Change: The Magazine of Higher Learning, May/June 2012: Faculty Professional Development and Student Learning: What is the Relationship? (Rutz, C., Condon, W., Iverson, E., Manduca, C., Willett, G.); the methods are described in an article in Assessing Writing, Vol. 20, April, pp. 19-36: Measures matter: Evidence of faculty development effects on faculty and student learning (Willett, Gudrun, Ellen Iverson, Carol Rutz, and Cathryn Manduca). The study is described in its entirety in the book Faculty Development Matters: Connecting Faculty Learning to Student Learning.


Assessment


InTeGrate:
A systems model for educational change


SERC leads the InTeGrate STEP Center as it seeks to transform learning about the Earth in higher education. To increase the number and diversity of students developing Earth literacy, InTeGrate focuses on ensuring that teaching integrates science into real-world environmental and resource challenges. The project works at multiple scales to achieve large-scale systemic change: assisting faculty in transforming their courses, supporting programs in testing new strategies for engaging and supporting students of all types, and building a national community of Earth educators who can share successes and challenges. To date, nearly 1,000 educators from across various disciplines are engaged in developing a new breed of tested teaching materials, designing model programs, contributing to the InTeGrate resource collections, and participating in community activities.

The project takes a two-fold approach to assessing the quality of the materials and courses. First, all curricula are independently reviewed prior to field-testing in the classroom. Second, all curricula are to be field-tested in three different classroom settings using a range of assessment measures to gauge student learning gains, student attitude and aspiration changes, and the role of the teaching circumstances in the success of the curriculum. Student data from the field-tests are independently assessed and used to inform the revision plan for finishing all materials.

Learn more about InTeGrate Assessment of Materials and Student Learning Outcomes

External Evaluation

GARNET: Geoscience Affective Research Network
Go to http://serc.carleton.edu/garnet/index.html

The GARNET project sought to better understand student experiences in introductory geology classes and how faculty can improve students' attitudes and motivations toward learning. The project examined three overarching questions: (1) Who are the students enrolling in introductory geology classrooms? (2) What, if any, relationship is there between learning environments and learning outcomes? (3) What, if any, relationship is there between learning environments and students' motivation?

SERC served as external evaluator in two capacities: 1) to formatively evaluate the effectiveness of the research network in facilitating the researchers' work via focus groups and interviews, and 2) to provide a critical external perspective on the research findings and analysis.

TeachingWithData.org: Pathway to Quantitative Literacy in the Social Sciences
Go to http://serc.carleton.edu/sp/qssdl/index.html

TeachingWithData.org is a portal where faculty can find resources and ideas to reduce the challenges of bringing real data into post-secondary classes. Using real data is a great way for students to become more engaged in the content of a course, but significant barriers, largely related to instructor preparation, can make using data a challenge. TeachingWithData.org allows faculty to introduce and build students' quantitative reasoning abilities with readily available, user-friendly, data-driven teaching materials.

SERC served as external evaluator in multiple capacities: 1) conducted a national survey to investigate how social science faculty used web resources to inform their teaching and how quantitative literacy factored into decisions about teaching, 2) consulted on the design of usability protocols, 3) conducted in-depth interviews of social science faculty to investigate how they sought resources to improve their teaching practices, and 4) developed webpage profiles based on interviews conducted with faculty about their teaching practices related to quantitative literacy.

Learn more about Teaching with Data Faculty Profiles


SERC Related Links

Evaluation Resources