John O'Leary is a former GOVERNING contributor. He is co-author of "If We Can Put a Man on the Moon: Getting Big Things Done in Government." E-mail: email@example.com
Imagine being confronted with two possible innovative approaches to solving a problem. The first is laid out in a slick PowerPoint presentation, complete with graphs, charts and spreadsheet projections of how much your agency could save.
The second is described by a public official who has already implemented the reform, and who explains what happened and what savings were realized when it was actually tried in practice, including real data.
This is a no-brainer, right? Hard evidence about how something actually works in practice is ten times more valuable than speculation about how something might work in theory.
Here comes the hard part. What if this novel approach has been implemented twice, with great results the first time and terrible results the second time? What if it has been implemented ten times, seven successfully and three unsuccessfully?
Welcome to the promising and frustrating world of data-based decision-making.
In their 2007 report, Government by the Numbers, authors Daniel Esty and Reece Rushing describe it this way: "Data-driven decision-making brings insights and rationality to bear in the policy process. It provides a way to systematically check assumptions, spotlight problems, clarify choices, prioritize resources, target interventions, and identify successful policy solutions. This approach offers the promise of more effective and efficient government."
It sounds great. So let's apply data-driven decision-making to data-driven decision-making. Has the 'by the numbers' approach worked in practice? Not surprisingly, the answer is yes in some instances and no in others.
In 2000, Baltimore instituted a data-driven improvement system known as CitiStat. It was designed to generate constructive confrontation grounded in hard data. In biweekly meetings, the manager of each city agency would stand at a podium and answer questions from a panel led by then-Mayor Martin O'Malley or his appointed inquisitors. The questions were all based on the numbers from CitiStat's statistical analyses of the agency's performance over the previous two weeks.
O'Malley's focus on the numbers promoted efficiency, generated an estimated $100 million in savings in the program's first four years and, in 2004, won the Innovations in American Government Award from the Harvard Kennedy School.
The Baltimore experience with CitiStat is by no means unique. The National Highway Traffic Safety Administration (NHTSA) uses data to promote best practices among states. For example, following a successful Click It or Ticket program to promote seatbelt use in North Carolina, the NHTSA was able to show other states the real results of such a program. Ten other states followed with robust implementations, leading to an average increase in seatbelt use of 8.6 percent.
Outside of government, organizations as diverse as General Electric, which famously adopted a data-driven Six Sigma approach under CEO Jack Welch, and the Oakland A's, which applied a statistically rich analysis of player performance under general manager Billy Beane, have embraced data-driven decision-making.
These examples show that management "by the numbers" can work wonders. Other examples tell a different story.
When the Environmental Protection Agency started "scoring" each of its 10 regions based on the number of compliance actions completed, a curious thing happened. Each region began pursuing and reporting small problems, such as asbestos removal cases, while ignoring big polluters. As Esty and Rushing note, "The crudeness of the 'bean counting' at headquarters -- where each case got scored the same -- resulted in an emphasis on less important policy matters rather than the harder but more significant cases."
One of the most significant management initiatives at the Office of Management and Budget (OMB) under President George W. Bush was the Program Assessment Rating Tool, which was supposed to provide transparency as to which programs were generating results and which were not. According to the OMB, "The Program Assessment Rating Tool (PART) is a diagnostic tool used to assess the performance of Federal programs and to drive improvements in program performance. Once completed, PART reviews help inform budget decisions and identify actions to improve results. Agencies are held accountable for implementing PART follow-up actions, also known as improvement plans, for each of their programs."
This is a textbook example of government by the numbers, a well-designed approach to assessing performance and improving outcomes. But the real-world impact of PART has been minimal. During his confirmation hearings to head the OMB, Peter Orszag noted that PART focused "too much on process and not enough on outcomes." For example, Orszag observed that the Internal Revenue Service is judged in part on the number of audits it conducts each year, a process measure, rather than on how well the IRS improves compliance rates, an outcome measure.
But here things get mighty tricky. Agencies and governments alike are reluctant, with reason, to be judged on outcomes that are out of their control. The entire No Child Left Behind exercise showed as much. Schools and school districts were to be judged on results, but factors outside their control made fair judgment impossible. Parental involvement, local immigration rates, crime and poverty all play a role in a child's performance. So how can you hold a school accountable?
The bottom line is this. Government by the numbers isn't a panacea, but it is an important tool in the effort to boost efficiency. If you don't have visibility into the numbers, you are managing blind. Managing for efficiency will still require insight, judgment and hard work, but a solid foundation of performance metrics is a great place to start. Wise managers will demand numbers, but peek behind the numbers as well.