Learning from Losers

Best practices are all well and good. We make a case for a center to study the worst.
March 1, 2008
By Katherine Barrett & Richard Greene  |  Columnists
Government management experts. Their website is greenebarrett.com.

With a bow to Jonathan Swift, we offer A Modest Proposal of our own: A university, government group or foundation should establish The Center for Worst Practices. There are a ton of organizations promulgating best practices, and that's a worthwhile enterprise. But as far as we can see, states, counties and cities have at least as much to learn from the efforts that fail, flop or fizzle.

Unfortunately, the lessons to be learned from such efforts are often lost in the shuffle. "In public management, evaluations are not common when things don't work," says Larry Jacobs, who ponders this and other topics at the University of Minnesota's Hubert H. Humphrey Institute. "In some circles, that's known as 'bad politics' and the evaluator falls in the category of 'friend' or 'enemy.'"

Politics aside, when governments seriously study their shortcomings, better management and policy often follow. They can borrow a lesson from medical advances. One of the biggest hurdles in establishing modern scientific medicine was the development of departments of pathology. "If you didn't accept the scientific principle, pathology was sacrilegious," Jacobs says. "But it is essential because science progresses by confronting failure with a dispassionate 'why?'"

The Washington State Institute for Public Policy, which the state legislature set up some 20 years ago, is one of the best examples of an organization that asks 'why' to good effect. When it reviewed intervention programs aimed at reducing justice costs and crime rates, it found that a number of juvenile programs actually had a negative cost-benefit ratio. The Institute found that so-called "Wilderness Challenge" programs, intensive parole intervention and "Scared Straight" efforts weren't all they were cracked up to be.

Results for Scared Straight, in which young people are confronted with the horrors of prison life -- including interactions with tough, scary prisoners -- were particularly alarming. It turned out that kids in the program were 7 percent more likely to engage in criminal behavior than those who had missed out on the experience of its systematic intimidations.

The Center for the Study and Prevention of Violence at the University of Colorado has uncovered similar problems in prevention programs. Here's one of them: Many primary school administrators make a point of scheduling school assemblies to talk about bullies and the problems they cause. Thinking back to when we were kids, it doesn't feel like the bullies we encountered would have been moved by such public interventions. That's just what the University of Colorado study found. Yet even though one-shot assemblies have been shown to have no impact, they remain prevalent. "There is research out there and we need to pay attention to it," says Jane Grady, associate director of the center.

Unfortunately, it's hard to pay attention to the UC data. Unless you pick up the phone and call them, you're unlikely to learn about their findings on not-so-good programs. The center, which does a bang-up job of writing about successes, doesn't have the money to devote the same kind of attention when results are less than positive.

A growing number of states, however, are setting up departments and agencies that are encouraged to find out what doesn't work. The Legislative Analyst's Office in California, the Office of Program Policy Analysis and Government Accountability in Florida, and Mississippi's Joint Legislative Committee on Performance Evaluation and Expenditure Review have been around for a while. New entities such as these are being created, most recently in New Jersey. But even though states are making progress on the research end, the folks laboring in the agencies often are oblivious to the lessons learned.

It's too bad, especially since documenting a failure is much harder than figuring out the reasons behind a success. With a success, people who back a program know why it should work, so when it does, they believe (correctly or not) that they know why. When things don't work out, however, the failure comes as a surprise to the program's backers. Figuring out the reasons for it requires going back to square one.

But that's not the only obstacle. Policy makers and managers who have invested themselves in a new idea sometimes are inclined to downplay or bury evidence that their good idea wasn't so hot. Meanwhile, the press, in its zeal to uncover government waste, may exploit failures in exchange for good stories, discouraging experimentation.

Full disclosure: We've been accused of doing the latter. We try not to.