John O'Leary is a former GOVERNING contributor. He is co-author of "If We Can Put a Man on the Moon: Getting Big Things Done in Government."
Rodney Dangerfield was once asked, "How do you like your wife?" His answer was, "Compared to what?"
Without a standard for comparison, evaluating public services can leave the same sense of mystery. How efficient is your city's police department? How well does your state's transportation agency perform? Is the Department of Agriculture doing a good job?
In government, answers about efficiency can be hard to come by. We are often left with vague reputations and anecdotes, very rarely with hard facts. Reliable data on both ends of the efficiency equation -- outputs and costs -- can be scarce in the public sector.
This isn't anyone's fault. Public organizations lack a single standard of achievement. Companies can be judged by their profitability -- a firm that consumes $1 million in resources and brings in $1.2 million in revenue from its customers has created value, at least as judged by the customers.
Public entities lack the profit measure. The only way to try to gain insight into performance is through benchmarking, which is a vexing challenge in the public sector.
A new study takes a stab at municipal benchmarking, comparing the efficiencies of 100 American cities. The study from IBM, "Smarter, Faster, Cheaper," compiles high-level spending data in a number of categories. For example, it finds that Chula Vista, Calif., spends $63 per capita on fire safety services, compared to $333 per person in Cincinnati.
This sort of comparison doesn't provide an answer -- without more information we can't definitively say whether Chula Vista is more efficient or Cincinnati is simply delivering better fire services. What is valuable about this sort of data is that it highlights areas where it is worth asking more questions: What operational factors are driving the cost differences? What outcomes are realized in the two cities? Are there environmental reasons for the large delta between these cities?
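The arithmetic behind this kind of flag-raising comparison is simple, and it scales to any number of cities. A minimal sketch, using only the two per-capita figures reported in the study (the dictionary structure and threshold are illustrative assumptions, not part of the study's methodology):

```python
# Per-capita fire-safety spending, in dollars per resident,
# as reported in the IBM study for these two cities.
per_capita_spending = {
    "Chula Vista, Calif.": 63,
    "Cincinnati": 333,
}

# The spending ratio flags where follow-up questions are worth asking;
# by itself it says nothing about which city is more efficient.
lowest = min(per_capita_spending.values())
highest = max(per_capita_spending.values())
ratio = highest / lowest

# An arbitrary, illustrative threshold for "worth investigating."
if ratio > 2:
    print(f"Spending varies by a factor of {ratio:.1f} -- ask why.")
```

Here the ratio comes out to roughly 5.3, well short of the factor-of-ten extremes the study reports across all 100 cities, but still large enough to prompt the operational questions above.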
Perhaps the most intriguing outcome of the study is this: "The level of resources that cities dedicate to delivering basic municipal services varies enormously. In fact, per capita spending in certain service areas can vary by a factor of ten." The study further found that the large spending variance "does not seem to be driven by exogenous factors...."
Benchmarking alone won't improve anything. What benchmarking can do is begin a process of operational examination rooted in data, one that attempts to assess the two things that matter to citizens and taxpayers: the cost and quality of municipal services.
If you knew for certain that your road repair service was inefficient, you'd target it for improvement. If you knew for sure that San Antonio's road service was the most efficient in the nation, you'd look to see how it does things. Without any such data, you are flying blind.
A robust, objective measure for comparing the cost-effectiveness of various cities is something of a holy grail in the quest to improve municipal government. Such a measure would be valuable as both a guide and a catalyst for streamlining efforts. But creating a meaningful comparison of this sort can be extremely difficult. Nonetheless, looking at operational performance data both within and between cities is a key part of the effort to improve public performance.