We're always on the lookout for innovative management techniques. Over time, we've taken note that many of them seem to have a life cycle of four stages--much like the egg-to-butterfly scenario, except that there are a lot more butterflies than worthwhile management efforts.

In stage one, pioneers spend a lot of time explaining the novel concept to their peers. In stage two, the concept becomes widely known and easily identified by a catchy phrase, such as "total quality management." In stage three come the seminars and conferences in which that catchy phrase gets used in the way top-hatted magicians say "abracadabra," just before they pull a rabbit out of a hat. Multitudes of the converted begin to believe that this new idea, whatever it is, will cure what ails their government.

Finally, and perhaps unavoidably, in stage four, managers are once again reminded that there are no panaceas. They become disillusioned and turn their backs on an idea that may have had a great deal of promise but simply fell short of hyperventilated expectations.

We fear that in some corners the concept of using performance measurements--which has become widely dubbed "managing for results"--is rounding the bend from stage three to stage four. The problem, as far as we can tell, isn't that the idea itself is flawed. It's that most states and cities fail to use the information they gather to make important decisions, especially those that involve budgeting scarce resources. And so, without real results to prove its value, the concept of managing for results starts to look shaky.

It is with that thought in mind that we were excited to see a new book, "Legislating for Results," which came out a few weeks ago. With help from the Urban Institute, the National Conference of State Legislatures has been working for the past couple of years to find ways to help legislators make effective use of results-oriented information. This report was part of that effort. The advisory group of legislators and legislative staff was an impressive one, including Senator Mary Cathcart of Maine; Representative Mark Miloscia of Washington; former Louisiana Representative Jerry Luke LeBlanc, who is now Governor Kathleen Blanco's commissioner of administration; and John Turcotte, former director of the Office of Program Policy Analysis and Government Accountability in Florida.

While this effort was designed to help legislators, it can also be of enormous use to agency heads and other executive branch leaders who want to figure out how to use performance measures to improve communications with their legislators.

One of the first major suggestions the report makes for legislators is that they should "insist on user-friendly presentation of performance information." This seems so self-evident that you'd expect agencies would comply without anyone insisting. But clearly, from legislators' points of view, that's not the case. And from our own experience trying to machete our way through the thickets of measures some states present, we have to agree.

NCSL suggests a few action points here. The first is that "it is helpful to have the form of presentation reasonably standard across executive branch agencies. This will enable legislators and their staffs to more easily examine the information across multiple agencies." Secondly, the authors point out that "no matter how good the outcome information is or how clearly it is presented, it will not be useful if legislators and staff do not have access to the material in time to digest the information before a hearing."

Another idea the report strongly advocates is that unusual performance findings should be explained. We'll second that one, too. In our view, the performance findings that are most useful are the unusual ones. When a program accomplishes more or less what was anticipated, then there really shouldn't be a whole lot of questions to ask. But when it underperforms by a lot--or when it overperforms-- that's when things get interesting.

New Mexico, for example, fell far short of its anticipated state park attendance in 2003. Were the park rangers surly? Was the slogan--"Your Destination Adventure!"--a trifle ambiguous? Should the park budget be raised? Cut? All fair questions, answered effectively by the state's year-end performance report. It turns out the shortfall was largely attributable to last year's drought, which forced three parks to close because of fire danger.

The NCSL report makes a number of other suggestions, such as requiring regular checks of the quality of information that the agencies provide, using this information to inform constituents what they're getting for their tax dollars, and requiring results-monitoring for new or expanding programs.

These, like many of the thoughts in the book, are pretty much self-evident. That doesn't mean they should be ignored.