
The Good Book

A new report on making results-based state government work is chock-full of commonsense recommendations.

We have a theory that most new ideas for improving management in government go through three initial stages of evolution. In stage one, a relatively small number of people talk about it--but mostly to one another.

Stage two, which can last for some time, is when this narrow cadre starts trying to explain the new discipline to the rest of the world. Sadly, like twins who have developed their own private language, its members have grown so accustomed to talking only to one another that communicating with outsiders proves elusive. A period of frustration follows for all involved.

In stage three, either the advocates of the concept begin to figure out how to speak real English, or their idea--whether good or bad--does no more good than Leif Ericson's visits to North America.

It is with that thought in mind that we happily read a 150-page publication produced by the Urban Institute with help from the National Conference of State Legislatures and the Governmental Accounting Standards Board. Released in April, it's called "Making Results-Based State Government Work." After a decade of our own research and reporting in the area of performance-informed management, we can't remember reading a more useful, clearly expressed document, chock-full of good, commonsense recommendations.

For example, one of our frustrations in trying to use much of the performance data generated by states and cities has been the absence of context or analysis. The Urban Institute report makes a straightforward recommendation that's very much to that point: "State governments," it suggests, "should encourage their agencies to provide explanatory information along with their performance reports. Such information is particularly needed where data indicate worse-than-anticipated outcomes. If managers are encouraged to provide explanatory information, this is likely to reduce any fears they might have that performance information will be misused or be used incorrectly by state officials or the legislature or be misinterpreted by the public."

Delaware's Department of Education provides a good example of the value of this kind of effort. Like many states, it monitors the SAT scores of its students by school district. But that information, in isolation, can be terribly misleading. A district that isn't doing a good job of encouraging young people to go to college may well wind up with artificially buoyed SAT scores if students who might perform poorly don't take the test in the first place. So, according to the Urban Institute report, "The department now reports the percentage of students taking the test in each school district. This provides potential explanatory information for comparing school districts and examining changes in SAT performance from year to year."

One recommendation we thought was particularly important was this: "When such data can be made available, state agencies should provide data on outcomes broken out by each local jurisdiction and make it widely available to local governments and the public."

As far as we're concerned, it's this kind of information--comparing geographic units with one another--that really makes performance information useful to taxpayers at all levels of government. We live in New York City. And frankly, as citizens, statistics about the citywide crime rate are all but meaningless to us. We've lived in the city for decades, and there are huge areas of the Bronx and Brooklyn that might as well be in Wyoming for all we know of them. But give us some stats about the crime rate in our little corner of the city and compare it with the next police precinct over--which New York does--and you've got our attention.

And speaking of getting our attention, another recommendation in the report appealed to us as well: "Each state agency should issue an annual report on its accomplishments. These reports should contain major highlights, both good and bad, from the agency's outcome measurement reports."

Of course, this presupposes that the agency has an outcome measurement report. And we would quibble with the exact wording here: The report should have talked about "major" state agencies; tiny, single-purpose agencies with small staffs simply can't be asked to divert time to this effort. That said, we're on board. As far as we can tell, the best hope for the future of results-based management is to get citizens accustomed to seeing this kind of information--and eventually demanding it.

Yet, when we ask state leaders, "How would a citizen of your state go about finding out how efficiently a given agency was operating?" many don't have a reasonable answer. What's more, they sometimes argue that there's really no need for such reports and challenge our belief that they're important.

Maybe that's our favorite thing about this report. It backs up things we've been saying and writing ourselves for a long time. What clever folks they have working at the Urban Institute.
