
How's My Program Doing? The Question That Doesn't Always Have a Good Answer

Program evaluation offices have yet to become common throughout government -- and where they do exist, many lawmakers don't know about them.

"Our third-grade reading scores were not where we wanted them to be,” says Charles Sallee, New Mexico’s deputy director of program evaluation. His office set out to determine which public programs were actually working to improve those statistics, and which weren’t. Sallee’s team found that the state’s pre-K efforts, and a suite of other early literacy programs, had a positive and lasting impact on the educational achievement of young people. So funding for literacy programs was expanded. The office also found that a different set of programs that focused on providing less formal, less education-oriented child care programs in New Mexico weren’t functioning effectively enough to make a difference. So the state promptly halted their expansion.

The New Mexico story is just one of many in which program evaluation has made a real difference, bringing insight into how well state and local programs are actually working. Among the states, New Mexico’s evaluation office is widely known as one of the best, followed closely by those in Mississippi and Virginia.

Not many of us, however, have heard of evaluation offices, which are almost always housed in the legislative branch. Although a few exist within individual agencies, the office and its services have not gained traction as a centralized executive branch function. The evaluators themselves, who use standards and guidelines to reach their conclusions about program effectiveness, typically have expertise in the particular field they’re assessing.

Yet even with a convincing list of benefits, an effective program evaluation office has no guarantee of continued support. Consider the up-and-down history of Florida’s Office of Program Policy Analysis and Government Accountability (OPPAGA), long known as one of the best program evaluation offices in the nation. In 2010, it was well funded at $8.2 million, with its money coming from a dedicated source. Then came the cuts. The office was put into the hands of legislative leadership, and the dedicated funding source was eliminated in favor of a year-to-year budgetary decision. Lawmakers clipped its funding back to $5.5 million in 2011 and kept chipping away in subsequent years. Its role is much what it has always been, but fewer reports are being produced -- and more and more of them come at the behest of a lawmaker rather than from OPPAGA staff.

North Carolina offers a counterpoint to Florida. Back in 2015, the state’s Department of Health and Human Services failed to provide consistent and accurate performance information on its programs. That infuriated legislators, who channeled their anger into legislation calling for a new Office of Program Evaluation Reporting and Accountability (OPERA) within the department.

There things stood. A couple of years passed with no action until the State Appropriations Act for fiscal years 2017-2018 finally mandated the office’s establishment and promised funding. With the help of John Turcotte, director of the program evaluation services division within the state’s Legislative Services Office, OPERA started to take shape this summer.

Turcotte is optimistic about the long-term benefits. One key reason: The office’s work can give legislators cover for making the best use of tax dollars, even when the best use might not be the most politically popular.

Clearly, not all evaluation offices are equally effective. We asked Kathryn Newcomer, president of the American Evaluation Association, what the keys are to building and maintaining a strong system. She suggested three: First, educate politicians and senior executives on the need to collect data on how programs are being implemented and how well they’re working; unless you convince them of that, nothing will get done. Second, articulate and frame the questions to be asked about programs and policies, and prioritize them, because resources are not unlimited. And third, produce concrete evidence showing why the findings delivered for creating or changing a program are more persuasive than the alternatives.

There’s action on this issue at the federal level as well. In March 2016, the federal government established the Commission on Evidence-Based Policymaking, which assembled its research into a paper detailing the significant benefits of evaluation efforts. One of its major recommendations was that every federal agency appoint a chief evaluation officer to coordinate evaluation and policy research.

So far, the Labor Department is the only cabinet-level federal department to have created such a position, according to Newcomer. Some other federal agencies, such as the Centers for Disease Control and Prevention, have put similar offices in place, and others may soon follow suit.

The commission’s recommendation holds for states and localities as well. One of the paper’s authors, Robert Shea, a principal of the public-sector practice at Grant Thornton, would like to see a continuum of evaluation offices at various levels of government. He sees the possibility that these offices “could be the central point through which you bring together these communities to consolidate funding streams, share best practices and so on. It could be a balm for what ails federalism.”

Shea notes that states and localities have the same needs as federal agencies in terms of coordinating evidence-based research. Establishing such offices should be, in Shea’s words, a “no-brainer.” 
