Novices with the Numbers
In his first job out of graduate school, Will Barnow practices performance measurement every day. Barnow works in the Maricopa County Office of Justice Systems Planning and Information, and one of his chief responsibilities is to help design, manage and evaluate a program aimed at reducing methamphetamine use countywide.
Barnow, 27, got his master's in public administration from Arizona State University in July. He always knew that he wanted to get involved in turning around distressed communities. And he knew that working for Maricopa County, famous in public-sector management circles as a leader in taking a results-based approach, would be a good place to do it. If Barnow is becoming well versed in performance measurement, however, nearly all of his training for it has had to come on the job. There wasn't much in the grad-school curriculum that would prepare him to apply the philosophy that underlies nearly every facet of his work.
"There was no particular course that I could say was geared toward that," Barnow says, although he does recall performance measurement being mentioned as "an important thing to know about." And so Barnow graduated extensively practiced in the arcane art of econometrics, able to whip up multivariate regression equations with ease. What he didn't learn anything about was how to evaluate program success by looking at inputs, outputs, interim outcomes and ultimate results. Nor was he part of any classroom discussions about why performance measurement works in some places but flops in others.
To be sure, the gulf between academia and the working world is a problem that afflicts almost every profession -- and public administration is no exception. It's often the case that the professors teaching the next generation of government managers are themselves unsullied by actual toil in the public-sector trenches. Still, it's surprising that any MPA student these days could graduate ungrounded in the most tenacious management trend to surface in government in recent memory. To go into public-sector management today without at least some training in performance measurement is a little like becoming a lifeguard without knowing the sidestroke.
Why have public-administration schools been slow to adapt? The biggest reason is that the idea of outcomes-focused government has been around for only about two decades -- not a very long time in Ivory Tower years. In the early 1990s, many academics viewed performance measurement as a passing fad. Only later, when the fad failed to go away as expected, did they begin to study it. According to Shelley Metzenbaum, who heads up a new center devoted to outcomes-focused government at the McCormack Graduate School of Policy Studies at the University of Massachusetts in Boston, any idea that's new to academia takes time to build a constituency. Grad schools don't usually bend their curricula around any subject matter until it's accumulated academic gravitas.
At the same time, many longtime professors of public administration didn't think performance measurement could really work in government. Elected officials -- especially legislators -- were cold to the idea. So academics had good reason to suspect that politics would always trump data when it came to how government operates. One can't simply put "information into a machine and have it spew out answers," argues Beryl Radin, who teaches at the American University School of Public Affairs. "It's quite rational for a politician to be skeptical of this."
Yet, in fits and starts and to varying degrees, schools are beginning to teach outcomes-focused government. At Arizona State, a smattering of classes touch on the topic, but knowledge of it isn't required to graduate. A few schools have made graduate courses in performance measurement a requirement for the public-administration degree. They include American, Syracuse University's Maxwell School of Citizenship and Public Affairs and Georgia State University's Andrew Young School of Policy Studies. And Metzenbaum's center is looking into the possibility of developing an entire track of coursework on results-focused government.
If there is one sure sign that performance measurement has at least begun taking root as an official field of academic study, it's the following equation: P = S + T + E + C + M. That confection is the brainchild of Ed Jennings, a professor of public affairs and administration at the University of Kentucky's Martin School of Public Policy and Administration. Translated almost into English, the equation means that performance (P) is a product of government structure (S), program treatments (T), environmental factors (E), client characteristics (C) and managerial actions (M).
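Taken literally, Jennings' equation is just an additive model of the five factors. The sketch below is a hypothetical illustration only -- the factor names and the 0-to-10 scoring scale are invented here, not drawn from Jennings' coursework, and a real analysis would measure and weight these factors far less simplistically:

```python
from dataclasses import dataclass

@dataclass
class ProgramFactors:
    """Hypothetical scores for each factor, on an arbitrary 0-10 scale."""
    structure: float    # S: government structure
    treatments: float   # T: program treatments
    environment: float  # E: environmental factors
    clients: float      # C: client characteristics
    management: float   # M: managerial actions

def performance(f: ProgramFactors) -> float:
    """P = S + T + E + C + M, with the additive form taken at face value."""
    return f.structure + f.treatments + f.environment + f.clients + f.management

# Example: a made-up drug-prevention program scored on the five factors.
p = performance(ProgramFactors(6.0, 7.5, 4.0, 5.0, 8.0))
print(p)  # 30.5
```

The point of the formula, and of the sketch, is not the arithmetic but the decomposition: it reminds students that outcomes are shaped by factors, such as the environment and client characteristics, that sit partly outside a manager's control.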
Jennings has been in the academic vanguard of teaching results-focused government, and has been thinking long and hard about how to analyze and explain it. "The work I've done is to examine whether it actually makes any difference," Jennings says. The performance equation is something that Jennings teaches his public-administration students. It may seem a bit esoteric to shoehorn the frequently messy work of administration into an equation. But Jennings' students head into the workplace with at least some background in what to expect when their bosses tell them to "manage for results."
At American, courses on results-based government are now part of the core curriculum. At the same time, the concept of performance measurement has begun to infuse itself into other course offerings, including those on budgeting and fiscal analysis. "Here, we do a lot of it," says Professor Robert Durant. "Basically what we try to do is talk about its strengths and weaknesses and under what conditions it is more or less effective. But we also make the argument that it's here to stay, even though among our professors we have a range of opinions from 'Boy, is this a huge waste of time,' to 'Of course this is important -- how can one manage without knowing what taxpayers care about and what results government is delivering for them?'"
Some schools that have taken up teaching performance measurement take a very pragmatic approach to the subject. At Harvard University's Kennedy School of Government, professors skirted the relative lack of research into the efficacy of results-focused management by borrowing a strategy from the business school: They teach case studies. By comparing the good and bad experiences that governments have had, students pick up lessons that they can apply when they graduate. The school tends to hew to a very practical-minded approach to what it teaches, says Bob Behn, a professor at Harvard: not just how to collect data but also how to make decisions based on that data.
David Van Slyke, who teaches performance-focused government at Syracuse, takes a similarly practical approach. Van Slyke has ditched the thick, traditional textbooks on public administration -- even the more contemporary ones that talk about results-focused government. Instead, he's creative about finding ways to tune in his students to what's going on right now in the real world around public-sector performance measurement. "My own approach is not one that's heavy on theory," he says.
For example, Van Slyke asks his students to scan daily newspapers for regular examples of governments and nonprofits using data on results to evaluate, manage and budget. They check out government Web sites to compare and critique the "dashboards" governments are increasingly using to tell citizens what outcomes they are getting for their tax dollars. At the same time, Van Slyke includes in his coursework a topic that is a new frontier within the performance-measurement field: how to get citizens involved in shaping what things government ought to be measuring.
For a growing number of students, indoctrination into contemporary ways of making policy and budgeting is just what they're after in graduate school. Today's young adults -- members of Generation Y -- are particularly motivated by a desire to make a difference. They like the idea of being able to measure the impact of their work. Take Sarah Breul, a recent Syracuse graduate. Breul says the performance-measurement training she received at the Maxwell School and her own interest in working for a very results-focused organization led her to take a job with Population Services International, a nonprofit focused on reducing the number of deaths from diseases such as malaria and AIDS. "I was drawn to PSI," Breul says, "because of their reputation for using research and metrics to prove health impacts."
Will Barnow is learning these things, too. He's just learning all of them at work. Right now, he's wrestling with the question of "how to measure the value of relationships" -- a topic worthy of Managing for Results 504. Barnow also has developed an appreciation for the nuances and difficulties inherent in the performance-measurement approach. For example, he's learned how hard it can be for government to influence human behavior -- even when he, as a public manager, is on the hook for behavioral goals, such as keeping people off drugs.
After just half a year with Maricopa County, Barnow speaks like a hardened performance-measurement veteran. Despite the troubles with the system, he says, he'd still rather have data on outcomes than not. It helps him think about whether he's actually impacting life at street level. Which is why, he says, a heavier dose of results-focused government in grad school would have been a welcome addition. "It would have been a more valuable tool, instead of doing pretty extensive coursework on econometrics and statistical monitoring.
"It's less complicated but more practical," Barnow goes on to say. "Just about every city or county now is doing some form of performance measurement."