We’ve always thought of ourselves as good parents. You’d have to ask our children to see if that’s a fair statement. But here’s one thing we wish we had not done. When our daughter would come home from a soccer practice or play in the rare game that we missed, we would ask her, “How’d you do?” She would give us a mealy-mouthed answer that didn’t satisfy our curiosity. So, our next question—the one we probably should not have asked—was, “Were you one of the best?”
Why do we bring this up in a column about government management? The truth is that we were trying to get to a performance indicator by asking that she benchmark herself against her peer group. Just like many cities, counties and states do.
“It’s one of the ways you can tell how well you’re doing,” says Harry Hatry, distinguished fellow and director of the Public Management Program for the Urban Institute. “If all you have is your own number and you don’t have a comparison, you can’t know if you are doing well.”
Valuable as benchmarking can be, it’s not universally beloved. For one thing, when a bunch of cities are benchmarked against one another, at least a few inevitably land at the bottom of the list. For those governments, that’s a powerful reason to stay the heck away from a benchmarking effort in the first place.
Beyond fear of disclosure, there are a number of reasons states and localities may not want to engage in—or use—benchmarking. For one, it can be very difficult to ensure consistency in data and definitions. This doesn’t even have to be a matter of apples and oranges. It’s obstacle enough to try to compare a McIntosh to a Golden Delicious.
A June article in Governing focused on a study funded by the Greater Boston Real Estate Board and the Building Owners and Managers Association on the economic impacts of mandatory building energy labeling. Both groups opposed the benchmarking ordinance that Boston passed to support such labeling. One problem with benchmarking, they say, is that buildings differ too much, and occupant energy use is too far outside an owner’s control, to make fair comparisons. Supporters, naturally, disagree and see the benchmarking effort as a way to reduce energy consumption. In any case, it’s clear that controversies like this one can thwart benchmarking efforts.
Another obstacle is that benchmarking can be expensive and time-consuming, particularly when it involves establishing generally agreed-upon performance measures with common definitions. The International City/County Management Association’s multiyear effort to provide cities and counties with a set of consistent performance measures for comparison has seen participation ebb and flow with the economy. In the first recession of the 21st century, the number of participating cities and counties dropped by 25 percent, and it fell by more than 30 percent during and after the Great Recession. In both cases, participation rebounded as the economy improved.
Similarly, a voluntary effort to provide a common set of performance measures for local governments in Minnesota has had some trouble attracting participants. The legislature provided an incentive of 14 cents per capita for participation, but that amount hasn’t been enough to convince many cities to sign up. The state has 854 cities and 87 counties; in 2011, 13 percent of the cities and 38 percent of the counties applied to the auditor and were approved to participate. In 2012, those figures dropped to 7 percent and 29 percent, respectively.
Still, Rebecca Otto, the state auditor who oversees the effort, believes this has been an important initiative for the entities that use it well. “The entities that have implemented this realize how powerful a tool it is. There are very positive benefits and it’s very much worth doing even if the financial incentives are small,” she says. “It allows for an informed conversation with your taxpayers.”
We have a lot of personal experience with comparing entities through the Government Performance Project, a rating effort that Governing used to do in conjunction with the Pew Center on the States. Since our use of data was accompanied by thousands of interviews to provide context and additional information, we had faith in the results. We were always intrigued by how different governments reacted to a low rating. Some just hated the exercise and attacked it. Others, like Alabama, which rose from a C- in 2005 to a C+ in 2008, were inquisitive and intent on learning from the results.
Along those lines, we were recently impressed to see a report from Louisiana, which focused on the state’s bottom-of-the-pack performance on the United Health Foundation’s annual ranking of health indicators. The report from the state’s Department of Health and Hospitals provided the legislature with recommendations on how to improve the state health ranking with the goal of moving from the current position of number 49 to 35 in the next 10 years.
Conversations with governments and associations that have recently been involved in benchmarking efforts yielded the following guidelines for success with this tool:
• Focus on government improvement rather than a politically charged drive to reduce government’s size.
• Regard the actual data comparison as the first step of a multistep process, not the whole journey.
• Emphasize that the goal is providing areas for improvement, not targeting weak performers.
• Engage employees and labor in the effort.
• Secure visible buy-in from top leadership.