An Interactive Introduction to Randomized Evaluation
This activity is replicated here as part of the SERC Pedagogic Service.
This activity introduces a Randomized Control Trial (RCT) in the classroom. The experiment is straightforward to implement and provides students with experiential learning of the nuts and bolts of impact evaluation. Student choices in the experiment are used to demonstrate the effect of a treatment and the critical notion of the Average Treatment Effect (ATE).
Context for Use
By participating in the experiment, students not only learn the core measure in any evaluation program (the Average Treatment Effect), they also gain an intuitive understanding of the impact evaluation technique. The Classroom RCT Game need not be restricted to undergraduate introductions to randomized evaluation. It can be used in a graduate course as well, since it is often time-consuming, if not impossible, to take a whole class into the field for first-hand exposure to an actual randomized evaluation program. Additionally, the idea of treatment and control groups is now common in courses beyond development economics: courses in behavioral economics and experimental economics routinely cover measurement and experiment design. Here again, the classroom game provides a personal experience of the design and process of an experiment, which can make the logic of designing an experiment to evaluate and estimate "treatment differentials" more vivid to the participating student.
The experiment takes about 35-45 minutes for a class of 20 students. The activity is introduced before starting the lectures on RCTs. Some simple preparation beforehand is needed, as indicated below.
Description and Teaching Materials
1. A set of poker chips of two different colors. Alternatively, one can use two different suits from a deck of cards (e.g., Diamonds and Clubs).
2. A list of words with the associated meaning written next to each; a GRE wordlist would be perfect. We use terms in economics as an example Wordlist (Acrobat (PDF) 55kB Feb18 13) here. Multiple copies of the list are needed to distribute to about half the students in class.
3. A quiz comprising these words. See example Quiz (Acrobat (PDF) 67kB Feb18 13). Copies of the quiz need to be distributed to all students.
Overview of the experiment
In the experiment, students participate in an intervention in which a random subset of them is exposed to a list of words with associated meanings, while the rest are not. All students then take a quiz on word meanings containing these words. It is expected that, due to the "intervention", the students exposed to the wordlist will have a higher average score than the students who were not. The random placement of students in the treatment and control groups ensures that pre-existing differences are averaged out between the two groups.
The objective of this activity is to provide students an intuitive understanding of how treatment differences arise, and the concept of Average Treatment Effect (ATE). The Average Treatment Effect is the foremost variable of interest in any randomized control trial, since it captures the impact of the treatment on the outcome-variable of interest.
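The balancing role of random assignment described above can be illustrated with a short simulation. The sketch below uses hypothetical baseline vocabulary scores (invented for illustration, not data from the activity): some students simply know more words before any intervention, yet across many random splits the average pre-existing gap between the two groups is close to zero.

```python
import random
import statistics

random.seed(42)

# Hypothetical baseline vocabulary knowledge for 20 students
# (illustrative numbers only, not from the paper).
baseline = [random.gauss(10, 3) for _ in range(20)]

# Repeat the random assignment many times and record the baseline
# difference between the two groups each time.
gaps = []
for _ in range(5000):
    shuffled = baseline[:]
    random.shuffle(shuffled)
    treatment, control = shuffled[:10], shuffled[10:]
    gaps.append(statistics.mean(treatment) - statistics.mean(control))

# Averaged over many randomizations, the pre-existing gap is roughly zero,
# so any post-quiz difference can be attributed to the intervention.
print(round(statistics.mean(gaps), 2))
```

Any single randomization can of course leave a small imbalance; the point of the simulation is that randomization removes pre-existing differences on average, which is what makes the difference in quiz scores interpretable as a treatment effect.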
Description of the classroom activity
1. Students first need to be placed randomly in a Treatment and a Control group. To construct the two groups, poker chips are handed out to the students at the beginning of the experiment. Students with red chips are assigned to the treatment group and are asked to sit on the right side of the classroom. Students with white chips are assigned to the control group and are asked to sit on the left side of the classroom. Handing out the chips provides a useful depiction of random assignment into groups, a critical methodology for disentangling treatment effects from pre-existing differences.
2. Each student in the Treatment group is given a copy of the "Wordlist" to review for five minutes. Students in the Control group have no task during that time.
3. At the end of the review period, the instructor collects the wordlists from the treatment group and distributes the "Quiz" to all students in both the treatment and the control group. Students are allowed five minutes to complete the quiz.
4. At the end of five minutes, the instructor reads out the correct answers for students to score their tests. The students are asked to write their total points in the left-hand corner of the test – one point for each correct answer.
5. The instructor collects the scored quiz sheets and computes the average score for the treatment group, and the average score for the control group.
6. The difference in the average quiz scores of the two groups is the Average Treatment Effect of the intervention. A simple bar graph, readily produced in an Excel sheet, can be used for visual elaboration.
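The calculation in steps 5-6 amounts to two group means and their difference. A minimal sketch, using hypothetical quiz scores (invented for illustration) for a class of 20:

```python
import statistics

# Hypothetical quiz scores out of 10 -- illustrative numbers, not real data.
treatment_scores = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7]   # reviewed the wordlist
control_scores   = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]   # did not

y_bar_t = statistics.mean(treatment_scores)  # treatment group average: 7.5
y_bar_c = statistics.mean(control_scores)    # control group average: 5.3

# The Average Treatment Effect is the difference in group means.
ate = y_bar_t - y_bar_c
print(round(ate, 2))  # -> 2.2
```

With these made-up scores the intervention raised the average quiz score by 2.2 points, which is exactly the quantity the instructor reports to the class in step 6.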
Teaching Notes and Tips
A natural way to use the activity is to start with the Excel Graphs (Excel 2007 (.xlsx) 33kB Feb18 13) of the computed results before introducing the ATE formally (see the concept below).
The fact that the students themselves have generated the data allows them to identify readily with all the components of the experiment design, and allows the instructor to describe and define the core measure and other subsequent measures more naturally (see Mani and Dasgupta 2010 for some extensions).
Note that our intervention can also be used to let students revise concepts that have just been covered in the lectures.
Post activity discussion
Definition of ATE: Consider a pool of applicants (N) for a job training program. A randomly selected subset N_T gets assigned to the treatment group (T) and receives the treatment (for example, the job training program). The remaining sample N_C = N - N_T gets assigned to the control group (C), which does not receive the training. In our example we are interested in measuring the impact of the training program on some measurable outcome variable (Y), such as wage earnings. The Average Treatment Effect (ATE) measures the overall impact of a program on an observable outcome variable. Under perfect compliance, it is defined to be the difference in the empirical means of the outcome variable (Y, collected at the end of the program) between the treatment and the control group. Thus, under perfect compliance,
ATE = Ȳ_T - Ȳ_C, where Ȳ_T is the sample mean of the outcome variable for everyone in the treatment group and Ȳ_C is the sample mean of the outcome variable for everyone in the control group.
After introducing the formal concept of ATE, the instructor can follow up on some of the classic studies below to discuss some of the actual applications of RCT and the usage of average treatment effects.
Conditional cash transfer program: In an effort to improve children's schooling outcomes (test scores, completed grades, and enrollment), cash transfer payments have been provided as incentives to parents who send their children regularly to school. Randomized control trials implemented to understand the effectiveness of conditional cash transfers in Mexico find positive associations between the program and schooling enrollment and completed grades of schooling (Parker, Rubalcava, & Teruel, 2008; Behrman, Sengupta, & Todd, 2005).
Deworming pills program: In an attempt to improve children's health and schooling, Miguel & Kremer (2004) and Bobonis, Miguel, & Puri-Sharma (2006) evaluate the effectiveness of providing deworming pills to school age children using a randomized control trial. Both papers find positive impact of the intervention on children's schooling attendance.
Microfinance program: Banerjee et al. (2009) conduct the first randomized evaluation study to assess the effectiveness of microcredit on poverty. The authors find that increased access to microcredit is associated with increased expenditure on durable goods, though not with improvements in average household per capita expenditure – an important measure of well-being.
References and Resources
This activity was based on the paper Mani, S., and Dasgupta, U. (2010). "Explaining Randomized Evaluation Techniques Using Classroom Games." Available at SSRN: http://ssrn.com/abstract=1676876 or http://dx.doi.org/10.2139/ssrn.1676876
- Banerjee, A. V., Duflo, E., Glennerster, R., & Kothari, D. (2010). Improving immunisation coverage in rural India: clustered randomised controlled evaluation of immunisation campaigns with and without incentives. BMJ 2010; 340:c2220.
- Behrman, J. R., Sengupta, P., & Todd, P. (2005). Progressing through PROGRESA: An Impact Assessment of a School Subsidy Experiment in Rural Mexico, Economic Development and Cultural Change, University of Chicago Press, vol. 54(1), pages 237-75, October.
- Bobonis, G. J., Miguel, E., & Puri-Sharma, C. (2006). Anemia and School Participation. J. Human Resources, XLI(4), 692–721.
- Miguel, E., & Kremer, M. (2004). Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities. Econometrica, 72(1), 159–217.
- Parker, S. W., Rubalcava, L., & Teruel, G. (2008). Evaluating Conditional Schooling and Health Programs. Handbook of Development Economics.