
Knowing What Works: Evaluating Evidence-Based Programs

What if a program is not producing the desired results? Should state and local officials immediately scrap it?

After several years of impassioned advocacy for evidence-based programs, it might be time to consider where our commitment to knowing what works has taken us, and where we might go from here.

In this period of intense fiscal austerity, we may well find valuable programs jeopardized because their impact and cost-effectiveness cannot be demonstrated with the precision of scientific clinical trials.

I raise this issue with some caution. As a public administrator I routinely challenged the effectiveness of programs, and watched carefully for signs indicating that proponents were too enthralled with program "inputs" to question the absence of strong results.

We all want to fund evidence-based programs while avoiding the ones that don't work. If the data suggest that a program is not producing the desired results, should we just scrap it and move on to the next intervention? Is suggesting that the evaluation was somehow flawed just a rationalization by those who want to continue believing the intervention works?

A recently released report from the Manpower Demonstration Research Corporation (MDRC), which examines first-year results from four post-release transitional job programs for ex-offenders, offers interesting perspectives on the use of evaluations in program policy.

Transitional work programs have long been a staple in efforts to move marginal workers into the mainstream workforce. In these programs, individuals are employed in time-limited, supported and subsidized positions while learning both soft and hard work skills that will hopefully make them more competitive in the unsubsidized job market. They are often combined with additional elements such as health benefits, life counseling, incentive pay, tax credits and child care. (Disclosure: As a public welfare administrator, I helped found such a program for welfare recipients in Philadelphia.)

In its first-year evaluation of the four sites (Chicago, Detroit, Milwaukee and St. Paul), MDRC found that although the transitional work programs were executed as planned, having a transitional job made little difference in the probability of employment in the formal job market. Moreover, the transitional job programs had no consistent impacts on recidivism in the first year, a major disappointment for program proponents.

Faced with these rather unpromising results, how should program leaders respond? Suspend investment in transitional programs for ex-offenders? Find contrary results in other evaluations, including the MDRC evaluation of work in New York by the Center for Employment Opportunities? Rely on a succession of inspirational stories of success, including a recent New York Times article on the immense investment in post-release programs, to build a statistical record from anecdotes?

Dan Bloom, the author of the MDRC report, reflects on this dilemma. Noting the many program models underway in the field, and the inability to evaluate each of them, he writes, "the results described … do not support the view that 'nothing works.'"

Ironically, the rigor of the control group trial design may be its Achilles' heel. The more a single intervention is isolated from other elements of a program, for purposes of testing its singular effectiveness, the more its interaction with other program features remains obscured and unexamined as part of the program evaluation.

In other words, the more tightly the program design is structured, the more latent assumptions may influence outcomes. Consider some of the factors in post-release programs that might influence outcomes:

  • The specifics of the enrollment process, such as the amount of time that has elapsed between release and enrollment.
  • Each program operator's unique set of capabilities, connections with community providers and relationships with parole officers.
  • Constraints on the operator's ability to tailor the program to those most likely to benefit from the intervention.
  • The local labor market: programs evaluated in highly challenging job markets may simply be unable to provide the "lift" needed to overcome the immense barriers facing ex-offenders.

Evaluators are understandably careful about weighing in on the messier factors of program administration, which can easily undermine the reliability of their work. But a rigorous evaluation of program impacts should not be used to make a simple thumbs-up or thumbs-down decision on program investments. A recent Public/Private Ventures workforce evaluation suggests that operator capacity is a key to success.

With extreme resource scarcity looming, evaluators and program administrators must place increased emphasis on teasing lessons from these investments in high-quality evaluations: lessons on how to make complex programs work more effectively. The work may prove messier, but program recipients can only benefit.
