To avoid results like those, couldn't we do a better job of analyzing and planning at the beginning of the process and get better results in the end? The answer is yes, and the secrets of success are in the fundamentals.
Good analysis is pretty straightforward: Identify the objectives we care about and the alternatives available to reach those objectives; estimate the consequences of each alternative for each objective; and--finally--choose the alternative with the best estimated result. This requires making sensible trade-offs when one alternative is better for one objective while another is better for a different objective. While analysis can be big or small, it always relies on the same fundamentals: objectives, alternatives, consequences and trade-offs.
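To see the fundamentals working together, consider a minimal sketch of a weighted-sum analysis. It is written in Python, and every objective, weight and score in it is invented purely for illustration; the point is only that the method forces all four fundamentals to appear explicitly.

```python
# The four fundamentals in miniature: objectives, alternatives,
# consequences and trade-offs. All names, weights and scores here
# are invented for illustration.

# Objectives, weighted by how much we care about each (weights sum to 1).
objectives = {"service_outcomes": 0.5, "total_cost": 0.3, "implementation_risk": 0.2}

# Estimated consequences: a 0-10 score for each alternative on each
# objective, where higher is always better (so the cost and risk scores
# reward cheaper and safer options).
alternatives = {
    "upgrade_in_place": {"service_outcomes": 4, "total_cost": 8, "implementation_risk": 9},
    "new_system":       {"service_outcomes": 9, "total_cost": 3, "implementation_risk": 4},
    "process_redesign": {"service_outcomes": 7, "total_cost": 6, "implementation_risk": 6},
}

def weighted_score(scores):
    # The trade-off among objectives is made explicit by the weights.
    return sum(objectives[obj] * score for obj, score in scores.items())

for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.2f}")

best = max(alternatives, key=lambda name: weighted_score(alternatives[name]))
print("Choose:", best)
```

The weights are where the judgment lives: shift enough weight from service outcomes to cost and "upgrade_in_place" wins instead. That is exactly the trade-off a prose-only analysis too often leaves implicit.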
Unfortunately, analysis in the world of government IT often misses one or more of these fundamentals:
Missing objectives. For too many analyses, the objective is simply getting the right technology rather than getting the right uses of technology. We seek cost-effective hardware and software rather than cost-effective information services (which must include the staffing and learning required to use the technology in capturing, storing, analyzing and communicating information). Even more dangerously, we ignore the biggest objectives, which are cost-effective ways of producing and delivering government services. What we really want isn't better technology but better outcomes, such as reduced crime, increased academic achievement, improved health care and efficient tax collection.
Missing alternatives. Too many analyses don't include key alternatives or key elements of the alternatives that are analyzed. To get value from IT, we need to take into account not only the technology but also the staff and work it takes to use the IT in producing information; the staff and work it takes to produce and deliver services in ways made possible by that new information; and the leadership and other activities required to transition to new patterns of work. Too many analyses ignore most of the real work required in creating value, and too many focus on a single recommendation instead of a range of options, especially those that might involve less risk or higher rewards than the option recommended.
Misestimated consequences. Too many analyses estimate consequences by relying on technology and financial models that ignore key behavioral concerns. Most governments require a project to show that the technology chosen is sufficiently mature and far enough away from the "bleeding edge" to be safe. Most require proof that, if the technology works as expected, the ongoing costs can be sustained and justified. Unfortunately, such analysis underemphasizes--or completely ignores--whether the stakeholders will actually produce the behavior required for implementation. Will stakeholders know what they need to do and how to do it? Will they want to do what they need to do? The problems that kill projects--and roughly half of large government IT projects have been rated as failures--usually arise when the police or teachers or doctors or tax collectors or their managers either can't or won't accept their proposed new assignments. The key misestimated consequences flow not from bad engineering or finance but from bad education, bad negotiation and, in general, bad interpersonal relationships and leadership.
Missing trade-offs. Too many analyses don't weigh and judge the trade-offs among objectives, especially those involving uncertainty and risk. Predicting worker performance is much more uncertain than predicting technology performance, and the problems we care most about come from downside risks. The results that destroy trust in IT projects are the ones that run way late, way over budget or way short of predicted performance. Unfortunately, we rarely analyze the critically important behavioral uncertainties and conflicts up front. We define projects as "technology projects" rather than as what they really are: institutional-change projects that are enabled by technology.
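The downside-risk point can be made with a toy comparison. In the sketch below (Python again, with both alternatives and all probabilities and durations invented), an ambitious plan beats a cautious one on expected schedule yet carries exactly the kind of tail outcome that destroys trust; an analysis that reports only the expected value never surfaces it.

```python
# A toy illustration of downside risk; all numbers are invented.
# Each alternative is a list of (probability, months_to_deliver) outcomes.
alternatives = {
    # Cautious plan: a narrow range of outcomes.
    "incremental_rollout": [(0.8, 12), (0.2, 18)],
    # Ambitious plan: usually faster, but with a long tail.
    "big_bang_replacement": [(0.7, 8), (0.2, 14), (0.1, 40)],
}

for name, outcomes in alternatives.items():
    expected = sum(p * months for p, months in outcomes)
    worst = max(months for _, months in outcomes)
    print(f"{name}: expected {expected:.1f} months, worst case {worst} months")

# The big-bang plan wins on expected schedule (12.4 vs. 13.2 months),
# but its worst case is 40 months against 18: the "way late" project
# that ends up in the headlines.
```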
Why do we define our analyses too narrowly? One reason is history. Early technology was so costly and rigid that it was applicable only to high-volume, well-structured transactions such as in accounting. Only recently has technology become a tool supporting truly new and networked means of organizing work, such as online service production and delivery, much quicker and more detailed data feedback for accountability, and the capacity to organize larger and more specialized work groups assembled over increasingly large geographic regions. The earlier problem was getting the IT to work. Today's problem is using it to innovate and change how the real work--the delivery of critical services--is conducted.
So, what should we do? First, we must analyze the real problem, which is not just the IT itself but using IT to change how we work. To analyze well, we need to assemble the right people and information and spend enough time and effort to do a thorough job.
A key piece of advice is to send analysis teams to interview the early movers. We can learn a lot by meeting privately with the right people and asking them the right questions one on one, especially questions about stakeholder perceptions and behavior. Starting up front with the people and the politics, not just the technology, is the best way both to understand and respond to the real forces in play and to avoid ending up as another story of failure.