A Checklist for Evidence-Based Government

It's a way to measure progress toward building and strengthening a culture of learning and improvement.

Pilots use checklists for every routine step of their flights, and they use them in emergencies as well. Many operating rooms use checklists too, spurred by research showing that surgical teams using them were much less likely to miss key steps in crisis situations than those working from memory alone.

In public leadership, however, we rarely use checklists. Perhaps that's because of a perception that running a public agency or leading a jurisdiction is too complex and unique to be reduced to a list of essential steps. But checklists have a lot to offer the public sector, not only for meeting daily operational challenges but also for leaders looking to rethink the underlying infrastructure of governance.

To move the ball forward, we've created a checklist for an important element of public leadership: encouraging a culture of evidence-based decision-making -- a culture that values the use of research, evaluation, data and other analytical tools to strengthen results for residents and taxpayers.

Our checklist draws on the useful Invest in What Works State Standard of Excellence from the nonprofit Results for America. We build on that effort and add criteria based on our experience in government and advising public agencies. The more boxes on the following list that agencies or jurisdictions are able to check off, the more progress they are likely to have made toward building an evidence-based governing culture:

Leadership: Our leadership regularly asks for credible evidence and data to back up staff recommendations, demonstrating and modeling the importance of evidence-based decision-making for the rest of the organization.

Vision: Our leadership can clearly and compellingly articulate our organization's approach to using evidence, performance and innovation, since that also sends an important signal to the staff about what is important.

Strategic management: We have a well-written strategic plan, updated regularly and based on broad stakeholder input. It includes our mission statement, goals and strategies to achieve them, since results-focused government requires a clear sense of where we are aiming.

Learning agendas: We develop a multi-year learning agenda, updated each year, that identifies the most important research questions facing our organization and, in doing so, helps us prioritize our evidence and evaluation resources.

Performance leadership: We have a data-driven leadership strategy, often referred to as a PerformanceStat initiative, which we use to identify key challenges, diagnose problems, devise solutions and track results. It is a strategy that drives results, not just a "show and tell" exercise.

Evaluation capacity: We have a chief evaluation office (or something similar) that has the staff capacity and skills for rigorous and independent assessments and is valued by leadership as a resource to inform evidence-based decision-making. We set aside programmatic funds -- perhaps a half-percent to 1 percent -- for evaluation and analytics activities.

Data sharing: We have a strategy for making our administrative data accessible to program managers and qualified researchers, while protecting privacy, to shed light on program trends, dynamics and impacts, and to identify ways to improve those programs.

Collaboration: We have at least one ongoing initiative to partner across agencies to strengthen results, whether within our own jurisdiction or with a different level of government. An example of the latter is a pilot program that allows a local government more flexibility with federal or state rules (such as the ability to blend funds) in exchange for clear goals, accountability for results and evidence-building to learn what works.

Rapid experimentation: Also called A/B testing, this is a low-cost way to compare the impact of proposed operational improvements, including those that draw on behavioral insights such as nudges. It could include, for example, testing multiple versions of an email to see which is most effective in an outreach campaign.
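To make the email example concrete: one common way an analyst might judge such a test is a two-proportion z-test on the response rates of the two versions. The sketch below uses hypothetical numbers (not from any real campaign) and only Python's standard library; it is one illustrative approach, not a prescribed method.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Compare response rates of email versions A and B.

    Returns each rate, the z statistic, and a two-sided p-value
    from the normal approximation (pooled standard error).
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # 2 * (1 - Phi(|z|)) expressed via the complementary error function
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_a, p_b, z, p_value

# Hypothetical outreach campaign: 5,000 residents receive each version
p_a, p_b, z, p = two_proportion_z(success_a=400, n_a=5000,   # version A responses
                                  success_b=465, n_b=5000)   # version B responses
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

With these made-up figures the difference (8.0 percent vs. 9.3 percent) would clear the conventional p < 0.05 bar; the practical point is that a few thousand recipients per version are often enough to detect a meaningful difference before rolling out the winning email to everyone.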

Evidence-based grant-making: At least some of our competitive grants include incentives that encourage applicants to prioritize evidence-based approaches or use a "tiered-evidence" design that encourages both evidence use and innovation.

Results-driven contracting: We use strategies to produce better results from procurement, including consolidating contracts when there are too many service providers to reasonably track performance; establishing clear performance metrics and tracking results; tying payments to outcomes rather than outputs; and adopting active contract management, meaning working closely with providers to make sure they are successful.

Today, few public agencies or jurisdictions can likely check off more than a few of those boxes, since the evidence-based policy movement is growing but still nascent. As with anything important, the key is simply to start or, for those already making progress, to keep taking new steps to strengthen a culture of learning and improvement. Our checklist can help your agency or jurisdiction set priorities, track your progress and celebrate success along the way.