Skewed Results

Performance measurement has become a powerful tool for some government agencies. For others, it's been useless.

Jane Lynch, the social services commissioner in Ontario County, New York, got into her field for one reason: to help people. But the longer she's in the job, Lynch says, the more helping people seems secondary in her work. Instead, her working life is driven by a relentless paper chase.

"You go into someone's home to investigate a report of child abuse, and the first thing you have to try to do is establish eligibility. So you start asking questions about family income and savings, and it derails us from what we could and should be doing." The reason for the skewed focus is simple: If the county provides services to a family that turns out not to be eligible for federal funding, then the whole state faces the possibility of stiff penalties for not following reimbursement rules.

"We have three 3-inch-thick binders just for the financial regulations," Lynch goes on, "another two binders for food stamps, another binder for child support, three for Medicaid, two for public assistance, one for the home heating assistance program. And we get 10 to 12 administrative directives every week. The last one we got was for child care; it's 150 pages long. You can't possibly read or understand it all."

It wasn't supposed to be this way. In an era when the rhetoric of results is infusing the public administration lexicon, a new focus on the bottom line was supposed to transform programs all over the country--federal, state and local--from rule-bound to performance-based. No longer would managers such as Lynch be judged by whether they were holding down error rates in determining eligibility or whether they were checking the right boxes when it came to winning reimbursement. They would be cut loose to help the people they believed needed help.

All this was expected to happen as a result of the federal Government Performance and Results Act--the 1993 law that was touted as a landmark step in moving the federal government away from a slavish focus on process to a more consistent focus on results. By and large, it hasn't happened. In some key policy areas, GPRA has failed miserably.

Interestingly, though, it hasn't failed everywhere. To appreciate that, it helps to talk to Donald McNamara, administrator for the six-state Great Lakes Region of the National Highway Traffic Safety Administration. Like his colleagues in social services, he was charged by GPRA with the job of moving beyond inputs and paperwork and making policy decisions based on actual data about whether programs were succeeding. Unlike his social services counterparts, he's been able to do that.

Not that it's been easy. In its drive to measure performance on the hard numbers of traffic fatality reduction, the safety agency has had to work with a diverse set of interests: state and local transportation officials, law enforcement, courts, emergency services, the liquor industry, drug and alcohol counselors and specialists in road design and pavement composition.

But after 10 years of GPRA, the strategy for improving highway safety has changed from an emphasis on distribution of federal money to a tough-minded analysis of numbers. Are state fatality rates going up or down? If they are going up in one state and down in another, NHTSA wants to know why, and wants to see if the one state can learn something useful from the other. The federal agency collects and analyzes vast amounts of accident data so it can make such comparisons.

What states are getting, in short, are not federal dictates with threats for failure to perform but tangible help in improving highway safety through strategies that have been arrived at cooperatively, focusing on trouble spots identified in the course of federal analysis of very detailed state data. "States used to feel like we were picking on them," says McNamara. "Now we use the data to say, 'We're not picking on you, but here's where you are. You need to improve.'"

As for whether the approach works, the proof is in the numbers. Last year, fatality rates per 100 million vehicle miles reached their lowest level since the feds started crunching the data. The rates have been dropping fairly consistently since the 1960s, so the shift to a focus on performance measurement can't be given 100 percent of the credit. But few doubt that it has made an important difference.

FALSE HOPES

The obvious question is: If performance measurement is working so well in highway safety, why can't it work in child welfare? Not too long ago, the assumption was that it could. The most recent federal law on the subject, the 1997 Adoption and Safe Families Act, was directly inspired by GPRA, flowing from criticisms that the feds focused too much on regulatory compliance in welfare matters and not enough on performance.

Under the 1997 law, the U.S. Department of Health and Human Services was instructed to come up with a system for measuring child welfare outcomes from state to state. To do that, HHS created Child and Family Service Reviews, or "CFSRs." The reviews, implemented in 2001, are aimed at judging state performance in three basic areas: safety, stability and overall well-being. They directly measure such things as mental and physical health, the length of time kids spend in foster care, and whether caseworkers and foster parents are adequately trained.

As teams of CFSR inspectors fanned out across the country to do the surveys, state and local officials hoped that they were seeing the first step toward a new era of federal, state and local cooperation, and that, given the new focus on ground-level results, a logical and long-overdue conversation would ensue about how to operate the whole system better.

That hasn't happened. To begin with, states failed their first round of child welfare performance reviews, and failed them spectacularly. Not a single one received a "satisfactory" rating from HHS. Reasons given for the widespread failure run the gamut. Some argue that the standards set by HHS were way too high. Others say that the number of cases analyzed--a mere 50 per state--offered a grossly undersized sample when it came to assessing complicated systems. Still others, though, say the scores accurately reflected the performance of child welfare programs at the state level. "We felt it was a reasonably accurate reflection of what was going on," concedes David Berns, head of Arizona's Department of Economic Security--that state's social services department.

The low scores caused a flurry of debate, complaint and, in some places, action. But they failed to inspire a major overhaul in focus or the beginnings of a reinvented state-federal partnership. That's because the federal response to the state scores was all too familiar: yet another directive. States were given 90 days to fashion "performance improvement plans"--"PIPs" for short--that were supposed to provide road maps for addressing the weaknesses identified by the CFSRs. HHS officials would review the PIPs and either accept them or send them back for more work.

In some cases, the PIP process did lead to constructive conversations, but what it led to more generally was confusion about exactly what HHS wanted to see in a PIP, what penalties might result from failure to pull performance up to satisfactory levels, and how the next round of reviews would be conducted. In other words, rather than generating a new discussion about state-to-state performance, the initial measurements yielded yet another tiresome and unproductive discussion about process.

What went wrong in child welfare that went right in highway safety? To Shelley Metzenbaum, a program performance specialist and visiting professor at the University of Maryland, the answer lies in deeply entrenched bureaucratic habits. "For so long," Metzenbaum says, "so many of the federal agencies have seen their role as making sure states spend federal money the way they're supposed to."

The result, Metzenbaum says, is an all-too-familiar game of cat and mouse. "Take welfare," she says. "The number one performance measure is reducing waste, fraud and error rates. That's not wrong, but it shouldn't be the primary driver of the federal, state and county relationship. Because when the feds go after waste, fraud and abuse, and threaten to pull money, then states focus on that and all the other stuff pales." "All the other stuff" in the case of child protective services means the hard job of keeping kids and families safe and healthy.

In her recent report, "Strategies for Using State Information: Measuring and Improving Program Performance," Metzenbaum examines areas where federal, state and local officials have actually managed to move beyond the old cat-and-mouse way of viewing each other, looking at highway safety, education and the environment.

CHANGE OF CULTURE

It won't be easy and it won't happen quickly, but Metzenbaum thinks the lessons learned at an agency such as NHTSA could be applied in agencies like HHS; that HHS could arrive at a more results-driven approach to its relationship with states. HHS leadership could push the shift simply by gathering useful performance data on its own, analyzing it and then feeding it back to the states in the form of lessons learned. "Any part of the organization could pick this up," Metzenbaum believes. "But it would really mean people thinking about themselves and their role very differently."

By and large, state and local officials in welfare and other bureaucratically challenged fields continue to support the basic idea of performance measurement. Arizona's Berns applauds the professed interest of HHS in outcomes. "Every state that I've talked to feels the general concept is a huge improvement over the old system of just following some paper trail." Berns says he would much prefer being held accountable for results, while being set free to invest resources where he, his managers and his caseworkers believe they will do the most good.

Some high-level officials at HHS say they agree wholeheartedly with such criticism. One of them is Wade Horn, the assistant secretary for Children and Families. Horn says he has the same dream that most people on the front lines of social service delivery have: a single point of entry for every client, and agencies empowered to customize their strategy for help, depending on the details of the case. Horn, like Berns and Jane Lynch, says his goal is a system that transforms the basic question from "What's the program, who can I serve?" to "Here's the client, how can I serve that client?"

Yet in spite of those mutually expressed goals and agreement on the basic problems, the three levels of government seem to have arrived at a familiar point in intergovernmental relations: stalemate. State and local officials want the feds to hand over strings-free entitlement money, and then trust them to spend it well and wisely based on bottom-line performance. Most federal officials, despite their professed commitment to judging by results, insist on spending limits and strict procedural requirements. "I don't think the federal government ought to just be the tax collector for the states," Horn says. "If the federal government is going to raise and distribute revenues, then it can and should hold states accountable in terms of using those funds."

Horn has offered some additional flexibility through block grants and waivers, but these have met with widespread suspicion among state officials. In any case, Metzenbaum says, such offers may be missing the point. "If you just come in with an offer of flexibility, you lose everything. If you don't first have the information to drive the change, then all you're doing is giving away money and no one is getting any smarter for it."

And so after more than a decade of GPRA, it remains uncertain whether a focus on results will ever move child welfare and other social services toward a system that truly puts performance first. In other words, it has yet to be demonstrated whether the lessons of highway safety can be adapted to a profoundly different policy universe. Unless that happens, the odds are that even after another decade under GPRA, well-intentioned managers at the local level will be facing the same sorts of frustrations Jane Lynch faces now:

At the moment, her agency in Ontario County is looking for a specialist--not a specialist in child welfare but in finance. "One of the things we spend a whole lot of time doing," says Lynch, "is figuring out how to maximize revenue. Does that help kids? I don't know. But I'm talking about the best way to draw down our funding when I should be talking about functional family therapy or should we place Johnny in foster care. I often spend an entire day doing nothing of what I set out to do when I got into this business."
