State and local government officials are constantly looking for ways to improve services for the people they serve. A growing movement looks to data for rigorous answers -- rather than relying on anecdote or sticking with the status quo on the unproven assumption that it works -- to help policymakers make better-informed decisions. That momentum is reflected in bipartisan legislation recently introduced in Congress, which includes provisions designed to help policymakers at every level of government do just that.
The bill -- the Foundations for Evidence-Based Policymaking Act, introduced in the House by Republican Speaker Paul Ryan and in the Senate by Democrat Patty Murray -- draws on the recommendations of the bipartisan Commission on Evidence-Based Policymaking. Ryan, Murray and President Obama came together in 2016 to create the commission, tasked with improving the use of data to evaluate the effectiveness of government programs. In its final report, the commission made meaningful recommendations to ensure that rigorous evidence is created as a routine part of government operations and used to construct effective policy.
First, the bill would create critical infrastructure to ensure that government agencies embed evaluation into program design. Too often, the question of whether a program had its intended impact is asked only after the fact. Evaluation is much more powerful -- and often costs less -- when designed before a program is rolled out. Building evaluation in from the start can ensure that data is collected and analyzed in a way that enhances a program's implementation and ability to achieve its goals cost-effectively.
Randomized evaluations can be an especially effective way to test whether a specific program had its intended effect. By randomly assigning program slots to eligible individuals, researchers can compare participants' and non-participants' outcomes later on and be sure that any difference they measure is due to the program rather than other factors that may have differed from the start. When there aren't enough resources to serve everyone, allocating program slots by lottery can also be the fairest and most transparent way to select individuals off a waitlist (rather than first-come, first-served, which can favor those with a greater ability to sign up quickly).
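For readers curious about the mechanics, the lottery described above is simple to implement. The sketch below uses made-up applicant IDs and slot counts purely for illustration:

```python
import random

# Hypothetical example: 100 eligible applicants, but funding for only 40 slots.
applicants = [f"applicant_{i}" for i in range(100)]
num_slots = 40

# A fixed seed makes the lottery reproducible and auditable.
rng = random.Random(42)

# Randomly assign the scarce slots -- this is both the fair allocation rule
# and the random assignment that makes a rigorous evaluation possible.
winners = set(rng.sample(applicants, num_slots))

treatment = [a for a in applicants if a in winners]       # offered the program
control = [a for a in applicants if a not in winners]     # not offered

# Because assignment was random, any later difference in average outcomes
# between the two groups can be attributed to the program itself.
print(len(treatment), len(control))  # 40 60
```

The same list of lottery losers doubles as the comparison group, which is why designing the evaluation before rollout costs so little extra.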
The bill would also create a plan to move toward another of the commission's important recommendations: creating a new National Secure Data Service (NSDS), charged with facilitating data access for evidence building while ensuring transparency, security and privacy. One of the NSDS's primary responsibilities would be to enable secure, temporary linkages across federal, state and local government records for qualified researchers to use for specific, pre-approved projects.
Data linkages are key because government programs affect people's lives in ways that are not confined to bureaucratic silos. A tutoring program, for example, can reduce the likelihood that a student is arrested for a violent crime, while a housing-voucher program can have a profound impact on health. To understand the full impact of a program, we need to match an individual in one dataset (such as school enrollment) with their outcomes captured elsewhere (such as in employment records).
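At its core, this kind of linkage is a join on a shared identifier. The sketch below uses invented, de-identified person IDs and toy records to show the idea -- real linkages of the sort the NSDS would oversee involve far stricter privacy and security controls:

```python
# Hypothetical records from two separate silos, keyed on a shared,
# de-identified person ID.
enrollment = {
    "id_001": {"tutoring_program": True},
    "id_002": {"tutoring_program": False},
}
employment = {
    "id_001": {"employed": True},
    "id_002": {"employed": True},
}

# Link the silos so each person's program status sits alongside
# their later outcome, enabling the before/after comparison.
linked = {
    pid: {**enrollment[pid], **employment.get(pid, {})}
    for pid in enrollment
}

print(linked["id_001"])  # {'tutoring_program': True, 'employed': True}
```

Only with the two records side by side can researchers ask whether program participants fared differently on outcomes recorded by another agency.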
For example, researchers evaluated the impact of summer-jobs programs for disadvantaged youth in Chicago and New York City on a wide range of outcomes. The programs helped participants find employment and make money during the summer, but participants weren't more likely to have a job later on. The programs did, however, have a big impact on crime in the long run. In Chicago, the chance that a youth offered a summer job would be arrested for a violent crime fell by 43 percent over 16 months. In New York, summer jobs led to a 10 percent drop in incarceration rates and an 18 percent drop in mortality (primarily due to fewer homicides) years later.
If researchers had only looked at employment data, without linking to other datasets, they might have simply concluded that the programs didn't have their intended long-term effects on employment. Instead, by linking program data with arrest, incarceration and cause-of-death records, they were able to uncover the programs' holistic, life-or-death consequences. In Chicago, these results attracted enough philanthropic support to scale up summer jobs to serve four times as many young people.
Improving access to data for research and embedding evaluation into program design are practical ways to help our governments at all levels better understand how policies are working. By turning these sensible and bipartisan ideas into common practices, state and local policymakers can build smarter government that works better for us all.