Measuring Efficiency Efficiently

Mandated efficiency measures often waste money on calculations that reveal little.

"Where are the efficiency and productivity measures? I don't see any." I heard this critique a few weeks ago while discussing with a friend the city of Boston's newly posted quarterly performance reports, dubbed Boston About Results. He was right. There weren't any. My friend's expectation that he would see efficiency measures in the report, and his frustration that he did not, are not unusual. As more and more governments move to measure accomplishments and associated outputs, activities and costs, many face mounting pressure to produce efficiency and productivity measures that show "units of output or outcome per unit of input" for every program.

President Bush's Management Agenda Scorecard, for example, required every federal program to produce at least one efficiency metric to earn a top score. State budget directives have similarly mandated agencies to measure efficiency. And performance management trainers frequently instruct class participants to include efficiency measures among the family of indicators every program should collect.

The irony is that requiring programs to produce efficiency measures can itself be inefficient. Programs faced with an efficiency measurement mandate often waste money on calculations that reveal little about ways to improve.

That is not to suggest that agencies should not pay attention to efficiency or that efficiency measures are never useful. Au contraire. Government organizations should measure their costs and constantly look for opportunities to use resources more wisely, but not by making every program produce classic unit-cost measures each reporting period. Agencies are likely to find far more savings if they conduct occasional, discrete cost analyses of specific programs and processes.

Before we blindly increase the burden on governments of producing efficiency measures, we must ask ourselves: What additional insights can we gain from efficiency or productivity measures? And how can we avoid the trap of requiring agencies to produce efficiency measures that they do not find useful? All too often, agencies may be tempted to game the measures or produce junky data merely to fulfill a requirement.

A number of common problems impede the use of efficiency indicators. Knowing "dollars spent per incident per time period," for example, imparts little usable insight when the seriousness and complexity of the incidents or tasks (such as fires, crimes or reviewed permits) vary significantly across time. To make the measure comparable across time periods, the agency would need to create a "seriousness" or "complexity" index, code each event for its relative import, and continually train coders to assure comparability of ratings across people and time -- a costly undertaking.
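A back-of-the-envelope sketch (all dollar figures and complexity ratings invented) makes the distortion concrete: the raw unit cost swings between periods even when the underlying workload does not, while a complexity-weighted denominator restores comparability -- at the price of the costly coding effort just described.

```python
# Illustrative only: why raw "dollars per incident" is not comparable
# across periods when incident complexity varies. A rating of 3 is
# assumed to mean roughly three times the work of a rating of 1.

periods = {
    # period: (total dollars spent, complexity rating of each incident)
    "Q1": (500_000, [1, 1, 1, 1, 1]),   # five routine incidents
    "Q2": (500_000, [3, 3]),            # two complex incidents
}

for name, (dollars, ratings) in periods.items():
    raw = dollars / len(ratings)        # naive unit cost
    weighted = dollars / sum(ratings)   # complexity-adjusted unit cost
    print(f"{name}: ${raw:,.0f} per incident, "
          f"${weighted:,.0f} per complexity-weighted unit")

# Q1: $100,000 per incident and per weighted unit
# Q2: $250,000 per incident, but $83,333 per weighted unit
# The raw measure says Q2 was 2.5 times less efficient; the weighted one
# says it was slightly more efficient. The weighted version, though, is
# only as good as the consistency of the coders who assign the ratings.
```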

Efficiency indicators can also mislead when new program approaches are introduced. Consider a situation where public safety officials adopt an effective new fire prevention practice that successfully reduces the number of fires. Fewer fires would drive up an efficiency indicator of "dollars spent per fire per time period," implying inefficiency despite the adoption of a more effective, and likely more efficient, practice. Government agencies using cameras to complement on-site inspections could encounter a similar problem. Texas used aerial photographs of tanks to detect gas plumes that it would never have found through inspections, while Massachusetts used aerial photographs to discover construction in protected wetlands that its inspectors had missed. In both cases, had government measured inspections per staff member or cost per inspection, substituting cameras for some inspections would have worsened those numbers, erroneously implying inefficiency.
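The arithmetic behind the fire example takes only a few lines to demonstrate; the budget and fire counts below are hypothetical, chosen purely to show how the indicator moves.

```python
# Hypothetical figures illustrating the fire-prevention paradox:
# a fixed budget measured as "dollars spent per fire per time period".

budget = 2_000_000      # assumed annual fire-department spending

fires_before = 400      # fires before the new prevention practice (assumed)
fires_after = 250       # fires after prevention succeeds (assumed)

print(f"Before prevention: ${budget / fires_before:,.0f} per fire")  # $5,000
print(f"After prevention:  ${budget / fires_after:,.0f} per fire")   # $8,000

# The indicator rises 60 percent -- "worsening" precisely because the
# program worked: the denominator (fires) shrank while spending held steady.
```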

Arguably, agencies can adjust efficiency measures to account for changes in practice, but that would require the most innovative organizations to adjust their efficiency measures continually, introducing an incentive not to innovate. It would make far more sense for Texas, Massachusetts, and the fire preventers to conduct occasional, discrete analyses comparing the relative impacts and costs of the different strategies available to them.

In certain circumstances, calculating classic "unit cost" measures can yield great insights. Measuring the cost of simple transactions, for example, such as collecting fines, fees and taxes or issuing payroll checks, can help an agency drive the cost per transaction down. Efficiency measures can also be useful when multiple sites deliver identical services, revealing cost-saving practices worth spreading to other locations, or when a program manages relatively similar tasks every year, such as sweeping the same set of streets. Even with street sweeping, however, efficiency indicators will convey an accurate story only if the same level of road cleanliness is sustained in all measured periods and the number of debris-spreading bad-weather days is counted and incorporated into the indicator.
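A sketch of that cross-site comparison, with invented districts, budgets and weather counts, shows why the adjustment matters: the raw cost per mile and the weather-adjusted cost per mile can rank sites differently. The per-bad-day cost premium is an assumption made purely for illustration.

```python
# Sketch of a cross-site street-sweeping comparison. All districts,
# dollar amounts and weather counts below are hypothetical.

EXTRA_COST_PER_BAD_DAY = 1_500   # assumed debris-cleanup premium (dollars)

sites = [
    # (site, dollars spent, miles swept, bad-weather days in the period)
    ("District A", 95_000, 1_000, 20),
    ("District B", 84_000, 1_000, 2),
]

for name, dollars, miles, bad_days in sites:
    raw = dollars / miles
    adjusted = (dollars - bad_days * EXTRA_COST_PER_BAD_DAY) / miles
    print(f"{name}: ${raw:.0f}/mile raw, ${adjusted:.0f}/mile weather-adjusted")

# District A: $95/mile raw, $65/mile weather-adjusted
# District B: $84/mile raw, $81/mile weather-adjusted
# The raw figure crowns District B; netting out District A's twenty
# bad-weather days reverses the ranking. The comparison only identifies
# practices worth spreading once such external factors are counted.
```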

Experienced government performance managers long ago realized the dangers of insisting on efficiency measures. Over a decade ago, in their annual comparative benchmark report, the Urban Institute and ICMA cautioned that improvements in outputs per unit of cost can easily come at the expense of service quality. The Washington Governor's performance management program currently uses a suite of efficiency metrics that, perhaps surprisingly, does not include a unit-cost indicator. Instead, to increase efficiency, the state tracks high-cost aspects of its internal processes and their causes, such as employee turnover rates and workers' compensation claim rates.

Expectations for improved efficiency are understandable and important, and in these tough budget times government leaders may be sorely tempted to require programs to measure efficiency. Resist the temptation. Instead, insist first that programs demonstrate their effectiveness, because it makes no sense to pay for an ineffective program, let alone make it more efficient. Then expect programs to report outcome, output and input trends; hold data-rich discussions about ways to contain costs while improving outcomes; report targets, trends and strategies to the public; and conduct occasional in-depth cost analyses and comparisons.

Shelley Metzenbaum was a GOVERNING contributor. She is the director of the Edward J. Collins Jr. Center for Public Management at the McCormack Graduate School of Policy Studies, University of Massachusetts Boston.