Last week, Abhijit Banerjee, Esther Duflo and Michael Kremer accepted the 2019 Nobel Memorial Prize in Economic Sciences. They were recognized for their use of randomized controlled trials, also known as randomized evaluations, to rigorously measure which anti-poverty programs go beyond the hype and measurably improve the lives of some of the poorest people in the world. Evidence-based programs evaluated by the Nobel laureates and their colleagues at the Abdul Latif Jameel Poverty Action Lab (J-PAL), which the laureates co-founded, have reached more than 400 million people.
We are also seeing a growing movement to use rigorous evidence in our own backyard. With the passage of the bipartisan federal Foundations for Evidence-Based Policymaking Act (FEBP), along with recent federal legislation that funds government investment in evaluation, interest in using data and evidence to inform public policy is growing among governments at all levels in the United States.
The FEBP requires federal agencies to develop learning agendas, which identify and set priorities for evidence-building and create a plan to incorporate the generation and use of evidence into their decision-making. Here are three reasons why government agencies at all levels should consider randomized evaluations as an effective and feasible tool for building rigorous and actionable evidence:
1. They are an effective tool for determining causal impact.
Randomized evaluations help us understand what would have happened if a program had not taken place by comparing the outcomes of a group that receives a program's services against a group that does not. By using random assignment to decide who falls into each group, we ensure the groups do not differ systematically at the outset. Any difference in outcomes can therefore be attributed to the program itself rather than to other factors.
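The logic of random assignment can be sketched in a short simulation. All numbers below are hypothetical and purely illustrative, not drawn from any real study: we invent 1,000 participants, randomly split them into a treatment and a comparison group, apply an assumed program effect to the treated group, and then recover that effect as the difference in average outcomes.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: 1,000 participants, each with a baseline
# outcome (e.g., a score on a 0-100 scale) drawn from the same distribution.
baseline = [random.gauss(50, 10) for _ in range(1000)]

# Random assignment: shuffle participant indices, then split them in half.
# Because assignment is random, the two groups do not differ
# systematically at the outset.
indices = list(range(len(baseline)))
random.shuffle(indices)
treatment_ids = set(indices[: len(indices) // 2])

# Assume the program raises outcomes by 5 points (the "true" effect,
# chosen arbitrarily for this sketch).
TRUE_EFFECT = 5.0
outcomes = [
    score + TRUE_EFFECT if i in treatment_ids else score
    for i, score in enumerate(baseline)
]

treated = [outcomes[i] for i in treatment_ids]
control = [outcomes[i] for i in range(len(outcomes)) if i not in treatment_ids]

# The difference in average outcomes estimates the program's causal impact.
estimated_effect = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated effect: {estimated_effect:.1f} points")
```

Run repeatedly with different seeds, the estimate hovers around the true effect of 5 points; without randomization (say, if more motivated participants self-selected into the program), the simple difference in means would mix the program's effect with those pre-existing differences.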
Consider, for example, the impact of giving public-school students free computers, a popular policy aimed at leveling the playing field. Students in many higher-performing school districts have unfettered access to computers. Intuition suggests that computers should help improve student outcomes in lower-performing districts.
However, a randomized evaluation of a computer-distribution initiative in California found that access to a free home computer had no effect on educational outcomes, including grades, standardized test scores and attendance. These findings mirror a larger body of global evidence that finds that access to technology alone generally does not improve kindergarten to 12th grade students' grades and test scores.
Why might this be the case? High-performing and low-performing districts are likely different in other ways that could affect academic outcomes. For example, higher-performing districts may be more affluent or have households with higher educational attainment. It may be these other factors — not access to technology — that lead to strong academic performance.
With a randomized evaluation, decision-makers can understand the true causal impact of a program and determine with confidence whether their agency's policies are truly benefiting those they aim to help.
2. Existing data sources can enable lean and nimble randomized evaluations.
Contrary to popular belief, randomized evaluations do not need to cost millions of dollars. Many organizations already collect enormous amounts of data as part of their normal operating procedures. This type of information (think college graduation or employment records) is known as administrative data, and it can drastically reduce the costs of an impact evaluation.
In Chicago, for example, researchers affiliated with J-PAL and the University of Chicago Urban Labs used existing administrative data to analyze the impact of summer jobs programs on youth involvement in crime and violence. Access to administrative data such as city-level arrest and incarceration numbers eliminated the need to track down and survey thousands of youth long after their participation in summer jobs programs.
3. Randomized evaluations don't have to take years.
When we talk to government partners about evaluation, we often ask questions like: What do you want to learn? What kinds of outcomes are you interested in analyzing? The answers can help us understand how long an evaluation may take and whether it is appropriate in the organization's context.
For some agencies, a six-month evaluation to analyze take-up of social benefits might be sufficient. If an organization is interested in longer-term outcomes — say, universal pre-K's impact on college attendance — an evaluation might go on for decades. But evaluations can also be structured to track both short- and long-term outcomes: you can learn within a year how your program affects end-of-year test scores, and later how it affects graduation rates.
This year's Nobel Prize in Economic Sciences recognizes the potential of randomized evaluations to reduce poverty at a global scale. Let's recognize their potential for improving public policy here in the United States as well.