Making Comparisons

People may not care how clean the average street is, but they sure want to know how their street compares to others.

Some years ago, an editor for whom we were working returned an article we had written. There were angry red comments all over it. Ordinarily, this particular editor's comments were crystal clear. But in this instance, he had repeatedly scrawled the same cryptic words all over our prose: "How's your wife?"

When we timidly approached him for some explanation, he told us, "It's a line from an old joke. One guy asks the other, 'How's your wife?' And the other guy says, 'Compared to what?'"

He went on to explain that we had offered a mass of facts and statistics in our piece but hadn't given readers any means for comparison. The numbers were useless without context.

These days, when we look at performance measures created by governments, our editor's little joke often springs to mind. Measures are far more useful when they are broken down in such a way that comparisons can be made. Does anybody in a large city really care how clean the average street is? Probably not. But they sure do care how their street compares to others.

Public education has used these kinds of breakouts to good effect. Many cities now allow citizens to compare reading and math scores in individual schools and school districts, for example.

As the Urban Institute's Harry Hatry, who has been studying performance measurement about as long as anyone, told us, "Overall data hides the real information. Breaking it out is the only way to make performance data really useful for managing. If you're looking at street cleanliness, you should know where the dirty streets are."

New York City has just taken a significant step forward in this effort. The city has begun a pilot program that allows interested observers to get information on a variety of indicators on a comparative basis. So, a quick look at www.nyc.gov/mmr reveals measures of park cleanliness, infant mortality, felony rates and 11 other indicators, neighborhood by neighborhood throughout the five boroughs.

Such public displays of comparative data have a significant fringe benefit: Support for good performance measures has always been contingent on the work of a dedicated cadre of public officials. Their efforts have been fragile at best and can be dropped if a newly elected official doesn't like being held accountable. But once the public gets used to seeing performance data, it's our guess that citizens will begin to demand it. And that will, in turn, help to institutionalize efforts to measure the success and failure of government.

It would be nice if we could end this column right here, with a pat on the back for our hometown and a prescription for governments willing to undertake the Herculean task of assembling and disseminating performance data the way New York now does. But just as Hercules discovered that the monstrous Hydra grew two new heads for every one he cut off, governments discover that each step forward in the quest for managerial competence reveals other problems that weren't necessarily foreseen.

Here's one of the Hydraesque problems with geographic breakdowns: They don't take into account any degree of difficulty for different regions. It's unquestionably more difficult to achieve low infant mortality rates in poor neighborhoods than in wealthy ones. In fact, it may be that a city's health department is doing a great job at reducing infant mortality in its poor neighborhoods without that fact ever shining through the data.

The state of Minnesota, recognizing this issue, is trying to address it with a new program in its Families with Children Division (the department that oversees the state's public assistance program). It starts out acknowledging "there are a lot of reasons why performance might differ between counties that don't have anything to do with whether the counties are doing a good job," says Chuck Johnson, the division's director.

Johnson has headed up an effort to dig into these other environmental factors. The state is now using about 20 variables, including measures of child poverty and unemployment, to predict how likely it would be that citizens of any given county would get off public assistance or find gainful employment without county intervention.

Then the state compares actual success rates to the predicted rates, and county managers can be judged on whether they've exceeded or fallen short of those expectations. This project, too, is still only in its pilot stages and is supposed to be presented this coming legislative session. "I think we'll be recommending some gradual rollout of the system--trying it out for a year, without any money attached to reward the successful counties," Johnson says. "And then down the road, there may be funds available to provide bonuses for counties that come out as high performing."
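To make the logic concrete, here is a minimal sketch of an "actual versus predicted" comparison of the kind Johnson describes. The county names, the two environmental variables, the numbers and the simple linear model are all hypothetical illustrations standing in for Minnesota's roughly 20-variable approach; they are not the state's actual model or data.

```python
# Hypothetical sketch: judge counties against a prediction based on local conditions,
# not against a single statewide average.
import numpy as np

# Environmental factors per county: [child poverty rate, unemployment rate] (made-up values)
factors = np.array([
    [0.22, 0.061],   # County A
    [0.09, 0.032],   # County B
    [0.15, 0.048],   # County C
    [0.30, 0.074],   # County D
    [0.18, 0.055],   # County E
    [0.12, 0.040],   # County F
])
# Observed share of cases that left public assistance for work (made-up values)
actual = np.array([0.41, 0.58, 0.50, 0.38, 0.47, 0.55])

# Fit a simple linear model: the success rate we'd expect given each county's conditions.
X = np.column_stack([np.ones(len(factors)), factors])
coeffs, *_ = np.linalg.lstsq(X, actual, rcond=None)
predicted = X @ coeffs

# A county's "performance" is how far it beats (or misses) its own prediction.
for name, a, p in zip("ABCDEF", actual, predicted):
    print(f"County {name}: actual {a:.2f}, predicted {p:.2f}, gap {a - p:+.2f}")
```

The point of the exercise is the gap column: a poor county that lands above its predicted rate can look better than a wealthy county that coasts below its own, which is exactly the comparison raw statewide figures obscure.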

We can safely predict that the Minnesota effort is going to be criticized and attacked before it's institutionalized. They're trying something very hard up there. And that makes it all the more worthwhile.
