The kerfuffle du jour in Washington policy-wonk circles is over something called "dynamic scoring." Put simply, the argument is over whether those who have the responsibility of estimating the costs of policies also should take into account those policies' macroeconomic effects.
The notion most closely associated with the idea, usually advanced by Republicans, is that tax cuts create positive feedbacks, and tax increases negative ones, for the economy. There are, of course, possible effects on the spending side as well; one certainly could expect positive economic benefits from investing in infrastructure or job training.
This has all caused me to think again about what it is that budgets measure and what they do not. Dynamic scoring is not the main place where this comes into play. Rather, it is the central issue surrounding this elusive thing usually referred to as "performance budgeting."
Everyone generally seems to agree on what the goal of performance budgeting is: to better connect resource investments with the results that are achieved. It has proved difficult, however, to achieve that goal in practice. Even though the budget is where we set priorities, it is much easier to think of those priorities in terms of inputs (how much did the police department budget grow?) than to think about the outcomes (what will be the effect on crime?).
Indeed, many state and local governments are legally required to produce a performance budget. In many cases, however, that really means performance reporting. I spent several years on a budgeting advisory panel for a local school district. This district argued that it could prove that it was practicing performance budgeting because it had received the Government Finance Officers Association award for budget presentation. This is a significant accomplishment, and it does require that performance measures appear in budget documents. But in this case, as in many others, there was little evidence of those measures being used to make decisions.
Actions taken by governments during the recent recession illustrate the mismatch between performance budgeting rhetoric and practice. In the face of budget deficits, many states and localities resorted to employee furloughs, across-the-board spending reductions and other decidedly non-performance-based strategies. The National Association of State Budget Officers reported that half of the states resorted to across-the-board cuts in fiscal years 2010 and 2011. While these strategies are defended in the name of fairness, they avoid asking the hard questions concerning what works and what doesn't that would be central to the actual practice of performance budgeting.
Why is performance budgeting so difficult in practice when it seems so simple in concept? There may be many reasons for this, including the fact that performance budgeting tends to challenge the status quo and that it puts pressure on managers to perform at higher levels. But we should not lose sight of what may be the most basic reason of all: Budgets, by design, count costs; they don't count benefits. Further, the benefits that are delivered may make government more effective or more responsive or even more efficient, but they do not necessarily save money. Thus the things that budgets do count may trump those that they don't.
In doing some recent research, I ran across an example from the health-care arena that illustrates this point. During the debate in Congress over the Affordable Care Act, the Congressional Budget Office (CBO) was criticized for not giving enough credit for the reduction in health-care costs that would accompany better preventive care. Certainly it seems to make sense that investing in preventive care would lower future health-care costs. One reason it is hard to get "credit" for these savings is that they have not necessarily been proven, and budget offices tend to be skeptical about such claims. And there are circumstances in which improved health care may not (at least in the long run) save money at all. A CBO study from a couple of years ago, for example, found that policies that reduce smoking are, in the long run, actually bad for the federal budget. Why? Because people who don't smoke live longer on average and therefore end up collecting more in Social Security and Medicare benefits.
Thus estimates of cost can paint a radically incomplete picture of the effects of a given policy. Blaming the budget process for failing to systematically focus on performance seems unfair and misguided. It is no small challenge to systematically bring benefits -- to the economy or to society -- into the budgeting equation. There is no magic algorithm that will yield the "answer" on the optimal use of scarce funds.
The question that policymakers should ask concerns not just how much policies cost but also the best use of public resources. Public officials should continue to demand data on performance when making budgeting decisions, but they will need to recognize that the budget will not automatically present a balanced view of the effects of policies.