Training May Be Valuable, But Few Governments Measure Its Success

And without proof of its value, cash-strapped states are increasingly cutting training budgets.

“In human resources, it’s an article of faith that training is valuable,” says Bob Lavigna, assistant vice chancellor for Human Resources at the University of Wisconsin–Madison. At the same time, there’s little question that training budgets are one step from the guillotine when states are under fiscal pressure.

Are there any ways the HR community can protect its training dollars? One obvious solution would be to provide legislatures with clear evidence that training has real value -- ideally, value that exceeds its costs. Unfortunately, although a handful of states and localities have made small forays in this direction, this kind of performance measurement is practically nonexistent in most jurisdictions. “We haven’t done the research to make state legislatures hesitate to reduce training,” Lavigna says.

Strong performance measurements wouldn’t be an inoculation against budget cuts, of course. As Sara Wilson, director of Virginia’s Department of Human Resource Management, says, “Even when state officials fully understand the value of training, when push comes to shove, it can still wind up on the chopping block.” But we feel confident that a performance measurement effort can help persuade some legislators.

Even if it doesn’t derail budget cuts, investigating the benefits of training programs can be a powerful management tool for improving them. In most states, the only evaluations of training are surveys of newly trained employees. These ask whether the employees liked the instructor, got what they wanted out of the effort, thought it was a valuable use of their time and so on. To be sure, these surveys can be useful. But, says Harry Hatry, director of the Public Management Program at the Urban Institute, “They don’t get much at the value of training.”

Perhaps the most authoritative work on evaluating training was done by Donald Kirkpatrick, professor emeritus at the University of Wisconsin. As far back as 1959, he published an evaluation model that is still highly respected in the HR community. He argued that there are four levels of evaluation available for training, each more valuable than the one before.

Level one is “reaction,” and that’s precisely what most states and cities capture in the surveys they conduct right after the training. Level two is “learning,” which some states assess through tests identifying the actual information imparted in a training program. Level three looks at how employee behavior changes. Level four -- and this is what we’re really getting at here -- is what Kirkpatrick calls “results.” This would tell employers exactly what the training has delivered in terms of outcomes that benefit the employees and the taxpayers.

Level four is kind of the Holy Grail for evaluating training. As Craig Kibbe, deputy director of human resources in Kansas, puts it, he would like to have that data “because if the budget gets cut, I can go back and see the consequences.”

Unfortunately, there doesn’t appear to be a single state that goes back six months or a year after training to see what its actual societal benefits were. Why not? In Kansas’ case, agencies are highly decentralized, which makes it difficult to implement this kind of enterprisewide approach. In other states, the legislatures simply aren’t interested in yielding the power to make the easiest cuts; they don’t want too many facts messing up the equation. Then, of course, there’s the cost of the effort. As with many good ideas in the world of performance evaluation, there tends to be minimal recognition that spending to understand the benefits of spending can save money in the long term.

That said, a few states, such as Virginia and Tennessee, have taken some steps in the right direction.

In Tennessee, HR Commissioner Rebecca Hunter has focused on the importance of learning and development. A few years ago, after the state brought in its first “chief learning officer,” Trish Holliday, it started making changes. Now, 12 months after classes of 120 up-and-coming state leaders complete a training program, both the participants and their supervisors are surveyed. The idea is to gain a sense of the actual improvement in the capacity and skills of the workers who received the training.

It’s pretty clear to us that measuring the value of training is worthwhile. But our conversation with the Urban Institute’s Hatry helped bring our hopes and dreams a little more in line with reality (a typical phenomenon). He points to potential issues with this kind of work. For one thing, whatever the seeming benefits of training may be, it’s difficult to discern a direct cause and effect; any number of other factors may have caused the change. Then, too, it would take a rather complex methodology to evaluate training programs that may be as short as a one-hour webinar or as long as six months of regular sessions.

Hatry’s points are valid. But should they get in the way of examining the benefits of training? We don’t think so. Most of the time even debatable measures are better than none. If they make people think hard about what they’re doing, they’ve accomplished something valuable.
