Intergovernmental Intelligence

Oversight of stimulus spending is important, but just as critical is sharing knowledge and experiences
April 22, 2009
By Shelley Metzenbaum  |  Contributor
Shelley Metzenbaum was a GOVERNING contributor. She is the director of the Edward J. Collins Jr. Center for Public Management at the McCormack Graduate School of Policy Studies, University of Massachusetts Boston.

Frankly, I am worried. And I suspect seasoned state and local government professionals are worried, too.

In his first meeting with the nation's governors, President Obama identified two people to lead stimulus spending. One was the vice president. The second was Earl Devaney, a former federal criminal enforcer and inspector general. Devaney is a smart, skilled, and dedicated public servant. The problem is that, as chair of the Recovery Act Transparency and Accountability Board, his job is to look for trouble.

By warning about waste and naming the program's watchdog before its manager (not named until a month later), the president reinforced an unfortunate aspect of many U.S. intergovernmental arrangements -- placing primary emphasis on oversight rather than on helping intergovernmental delivery partners discover ways to work smarter.

Obama is wise to worry about waste, and Devaney will undoubtedly need to monitor state and local stimulus spending for inappropriate expenditures and malfeasance. But federal programs that depend on other levels of government to accomplish their objectives risk even greater waste if they fail to assume a strong knowledge management role along with their oversight activities. To avoid this problem, let me suggest seven critical functions that every federal program dependent on intergovernmental delivery partners should fulfill:

1. Count and Characterize Problems, Their Causes, and the Market

Most federal agencies gather reams of data from state and local governments about their spending, activities, and, sometimes, societal and environmental conditions. Too few, however, organize, analyze, and return data in ways that help state and local data suppliers better understand the size and relative import of problems, likely causal factors, patterns and correlations, characteristics of the market (those to be served or influenced), and key market segments. Federal agencies should analyze and distribute the data they collect to help their intergovernmental delivery partners choose priorities more wisely and design interventions more precisely.

2. Search for Success

Federal agencies should search the data they collect for promising and proven practices. In his wonderful book, Better, Atul Gawande writes of "positive deviants" -- situations where one organization outperforms its peers or experiences sudden or significant improvements not evident elsewhere. Sometimes, positive deviants simply reflect statistical anomalies or are explained by factors other than intentional actions. On occasion, though, they are caused by an organization's adoption of a promising practice. Federal agencies should systematically scan for positive deviants and, when found, try to identify likely causes. A simple phone call may reveal a changed law or program practice preceding a sudden performance improvement. Other times, further study is necessary. Federal programs dependent on other levels of government to accomplish their objectives should make the search for success a priority practice.

3. Replicate Promising Practices (Replication Demonstration)

When the search for success identifies promising practices in one jurisdiction -- such as a program that successfully reduced teen drinking, increased the high school graduation rate, lowered recidivism or increased regulatory compliance -- federal agencies should encourage a few others to test if it is replicable. They should carry out "replication demonstrations," enlisting a few communities to see if a practice that seemed successful in one place produces similarly beneficial results in others. The phrase "replication demonstration" is a mouthful, yet perhaps its rhyme will stick in the mind of state and local government officials, reminding them to press their federal program partners to practice it routinely.

4. Experiment

When data analysis fails to reveal effective interventions, federal agencies should enlist state and local governments to participate in controlled experiments -- testing different approaches in a range of environments, changing nothing in a few (the control group), and monitoring outcome changes everywhere. When effective interventions are found, federal programs should also enlist intergovernmental allies in experiments to find equally or more effective actions that cost less. These experiments need not be expensive or meet all the conditions of "gold standard" controlled studies to be useful, but their outcomes must be measured.

5. Promote Adoption of Proven Practices

Once successful practices are identified through replication demonstrations, controlled experiments or regression analyses, federal agencies should aggressively promote their adoption using multiple tools. These include marketing campaigns, provision of tools, technical assistance, incentive grants and, where Congress has provided the authority, penalty threats. Federal agencies should not just assume that their promotional efforts are effective, though. They must measure both adoption rates and outcome changes.

6. Stimulate Data-Driven Discussions

Federal programs should facilitate data-driven discussions, in-person and online, among state and local program managers about patterns in the data, characteristics and causes of problems, what seems to work and what does not, and the possible underlying causes of performance variations. Data-driven discussions are a powerful way to improve both program effectiveness and program efficiency.

7. Motivate Recalcitrants

Where serious societal problems get insufficient or inept local attention, federal agencies may need to motivate state and local governments to pay attention to the problems and to adopt proven improvement practices. Toward that end, federal programs should study the data they collect to look for problems as well as successes and be prepared to motivate those reluctant to accept assistance with publicity and graduated penalties.

More than seventy-five years ago, Justice Louis Brandeis called states the "laboratories of democracy." When those laboratories lack scientists to measure, document and disseminate experimental results, however, little learning takes place. Costly programs that did not work in one place are wastefully repeated in others, while more productive practices are overlooked.

It is time for federal programs to assume the role of laboratory scientist or otherwise assure that state and local governments learn from each other's experience and cooperate on experiments. A few federal programs already do this well, but not enough. So do a few foundations and non-profit organizations (e.g., the Annie E. Casey and Robert Wood Johnson foundations, the Pew Center on the States, and the Education Trust). All federal programs dependent on other levels of government to accomplish their objectives should adopt these seven practices.

If they do not, state and local governments should press their federal counterparts to do so. Or they should work with each other to build their capacity to learn from their own and each other's experiences. To do otherwise, especially in the information age, would be the greatest waste of all.