Truth in Measurement

Verifying the accuracy of the statistics generated by performance measures is too often treated as the last step in the process.

In speeches about performance measurement, we've often painted city councils and state legislatures as bad guys who don't take advantage of this information for decision making. Of course, we haven't done this when the audience is full of legislators. We're no fools.

But on careful reflection, we think we owe these folks an apology. As we've explored the status of performance measurements in both states and cities, we've come to the conclusion that, in many cases, we would be dubious about using them to make decisions. The reason is simple: Although an increasing amount of work has gone into deciding what the right measures are, in the vast majority of cases, little effort goes into making sure that the actual statistics generated are accurate. Academics call this kind of thing validation. Journalists call it fact checking.

Contrast this with the financial information produced by cities and states for use in budgeting. This stuff is carefully audited to ensure that if a state says it raised $10.3 million in liquor taxes in 1999, that was, in fact, how much it took in. But when the same state's health and human services department announces proudly that it has somehow reduced the alcoholism rate by 8 percent--well, legislators are left to take it or leave it, without any substantive confirmation. In fact, only a minority of the nation's states and largest cities has a formal process for verifying performance measures. And most of those fall well short of being thorough. To be sure, budget offices talk about giving the measures "close scrutiny" or conducting "internal checks on numbers that don't look right." But this isn't enough. Over time, we've heard lots of legislators say that the basic reason they don't use performance measures is that they simply don't trust their accuracy. There may be other, less virtuous reasons, but it's hard to disagree with this one.

Governments may argue that they just don't have the time or money to take this extra step. We're sympathetic to their plight. But that's like buying a boat and then running short on cash to put it in the water.

You might think states and cities don't validate measures because they might wind up looking worse. But consider the experience in Indianapolis. There, the city's internal audit division examines performance measures on a random basis for accuracy. Its discovery: The city underreported its accomplishments more often than it overreported them. "People would leave their worksheets in the truck. Crews did more work than they reported," Sarah Burnham, special assistant to former Mayor Steven Goldsmith, told us.

Fortunately, many states and cities know they should hold agencies accountable for accurate information. Some are even taking the first steps. Washington, D.C., plans to develop an external program to make sure that agency reports on customer service jibe with reality. In Louisiana, which is further along than most cities and states, performance auditors are focused on the management controls and systems that are used to derive the measures.

Texas gets the lone gold star for effort in this field. The state auditor's office there actually goes through each and every agency every six or seven years and examines all the key measures being derived. That may sound like too long a cycle, and we think it should be shortened, but agencies that are deemed at higher risk are audited more frequently. Every measure audited is either certified as accurate; certified with qualifications, which means that it appears to be accurate, but the reporting is not adequate to confirm continued accuracy; uncertified because the agency doesn't have adequate controls or documentation (which means the number could be right or wrong, but nobody can be sure); or uncertified because the actual measure is downright inaccurate. The results are sent to the legislature and made public via the state's Web site.

Where there are problems, the agencies are required to take corrective actions. That's the stick. As for the carrot, agencies that achieve 80 percent of performance targets are eligible to grant special state-funded performance bonuses to employees. Five agencies qualified for bonuses two years ago; 30 did more recently.

The result of all this is clear: Average accuracy went from 55 percent to 75 percent, according to Deborah L. Kerr, director of the Texas State Auditor's Office. "When we did our most recent round of certifications, on 12 agencies--most of which had never been audited before--the reliability of measures decreased to 53 percent." In other words, agencies that hadn't yet been through the audit process scored about as poorly as everyone did at the start. The obvious conclusion: Accountability pays.

Unlike many other cities and states, which seem to think that verifying the accuracy of performance measurements is the last step in the process, Texas wisely realized that this was an integral element of the whole effort. And what was the genesis of this superior work? The legislature required that it be done when it first passed statutes requiring measures.

Any legislature could do the same thing.

Why don't they? Good question. Maybe we should take back part of our apology.
