Management Insights

Knowing What Works: Evaluating Evidence-Based Programs

What if a program is not producing the desired results? Should state and local officials immediately scrap it?




By Feather O'Connor Houstoun

Feather O'Connor Houstoun is a senior adviser to the Wyncote Foundation and a member of the Philadelphia School Reform Commission.

After several years of impassioned advocacy for evidence-based programs, it might be time to consider where our commitment to knowing what works has taken us, and where we might go from here.

In this period of intense fiscal austerity, we may well find valuable programs jeopardized because the evidence of their impact and cost-effectiveness fails to reach the precision of scientific clinical trials.

I raise this issue with some caution. As a public administrator, I routinely challenged the effectiveness of programs and watched carefully for signs that proponents were too enthralled with program "inputs" to question the absence of strong results.

We all want to fund evidence-based programs while avoiding the ones that don't work. If the data suggests a program is not producing the desired results, should we just scrap it and move on to the next intervention? Is suggesting that the evaluation was somehow flawed just a rationalization by those who want to continue believing the intervention works?

A recently released report from MDRC (formerly the Manpower Demonstration Research Corporation), which examines first-year results from four post-release transitional job programs for ex-offenders, offers interesting perspectives on the use of evaluations in program policy.

Transitional work programs have long been a staple in efforts to move marginal workers into the mainstream workforce. In these programs, individuals are employed in time-limited, supported and subsidized positions while learning both soft and hard work skills that will hopefully make them more competitive in the unsubsidized job market. They are often combined with additional elements such as health benefits, life counseling, incentive pay, tax credits and child care. (Disclosure: As a public welfare administrator, I helped found such a program for welfare recipients in Philadelphia.)

In its first-year evaluation of the four sites (Chicago, Detroit, Milwaukee and St. Paul), MDRC found that although the transitional work programs were executed as planned, having a transitional job made little difference in the probability of employment in the formal job market. Moreover, the programs had no consistent impact on recidivism in the first year, a major disappointment for program proponents.

Faced with these rather unpromising results, how should program leaders respond? Suspend investment in transitional programs for ex-offenders? Point to contrary results from other evaluations, including MDRC's evaluation of the Center for Employment Opportunities' work in New York? Rely on a succession of inspirational success stories, such as a recent New York Times article on the immense investment in post-release programs, to build a statistical record from anecdotes?

Dan Bloom, the author of the MDRC report, reflects on this dilemma. Noting the many program models underway in the field, and the inability to evaluate each of them, he writes, "the results described … do not support the view that 'nothing works.'"

Ironically, the rigor of the control-group trial design may be its Achilles' heel. The more a single intervention is isolated from other elements of a program in order to test its singular effectiveness, the more its interaction with those other features remains obscured and unexamined in the evaluation.

In other words, the more tightly the program design is structured, the more latent assumptions may influence outcomes. Consider some of the factors in post-release programs that might shape those outcomes:

  • The specifics of the enrollment process, such as the amount of time that has elapsed between release and enrollment.
  • Each program operator's unique set of capabilities, connections with community providers and relationships with parole officers.
  • Each program operator's latitude, or lack of it, to tailor the program to those most likely to benefit from the intervention.
  • The local job market: programs evaluated in highly challenging markets may simply be unable to provide the "lift" needed to overcome the immense barriers facing ex-offenders.

Evaluators are understandably careful about weighing in on the messier factors of program administration, which could undermine the reliability of their work. But a rigorous evaluation of program impacts should not be used to make a simple thumbs-up or thumbs-down decision on program investments. A recent Public/Private Ventures workforce evaluation suggests that operator capacity is a key to success.

Given the looming scarcity of resources, evaluators and program administrators must place increased emphasis on teasing lessons from these investments in high-quality evaluations: lessons on how to make complex programs work more effectively. It may prove messier, but program recipients can only benefit.


