Testing Period

Pilot programs don’t always fly right.
September 2019
By Katherine Barrett & Richard Greene  |  Columnists
Government management experts. Their website is greenebarrett.com.

Many cities have a common problem: the metal grates put around trees to protect them. As tree roots grow, they can shove up the protective material and create tripping hazards on surrounding sidewalks, opening cities to lawsuits.

Recently, Stamford, Conn., went looking for a fix to this problem. City leaders decided to try replacing the grates with a more flexible material. They found one made from recycled tires and other materials bound with moisture-cured polyurethane.

The new grates sounded like a good idea, but Erin McKenna, associate planner in Stamford's Land Use Bureau, worried that the material might not hold up for more than a year or two and could wind up failing a cost-benefit test. She and others in the city decided to establish a pilot program. With some 200 metal-grated trees throughout the city, officials wanted to avoid installing an unproven material at a total cost of nearly $160,000. The pilot started with one tree and has now expanded to four more, at a cost of about $250 apiece (the vendor is sharing the expense). The pilot was set up not only to guard against overspending and safety hazards, but also to give the city time to gauge the reaction of store and restaurant owners whose businesses front these trees.

Pilots—for programs ranging from a new model of fire engine to teen pregnancy prevention efforts—are an accepted management technique almost everywhere. Unfortunately, there are lots of ways to get tripped up by them.

One pitfall arises when only a portion of a town's residents is given access to a pilot program's benefits. While officials await results on the program's efficacy, those residents can come to assume the new goods or services are permanent. "I don't think citizens are aware that they're getting support from a pilot," says Marv Weidner, founder and CEO of Managing Results LLC. If officials ultimately decide that implementing the program permanently would be too expensive, those who've enjoyed the new service will lose it. "This can be cruel," says Weidner, "if people are getting healthier or more self-sufficient with a pilot and then the money goes away."

In some cases, pilots intended to save money end up being costly, or are set up in ways that make their performance difficult to measure. As far back as 2008, North Carolina's Fiscal Research Division reported that the state's pilot programs had "flaws in evaluation design." These drawbacks included such managerial no-no's as a lack of controls and inadequate time frames for measuring outcomes.

Although legislation was passed in 2017 to improve the way pilots are constructed and managed, North Carolina legislators have never seemed keen to use pilots to guide action. “If they got bad news about something they wanted to try, they’d tell the assessor to go away for a while, and then they’d try it out anyhow,” says John Turcotte, director of program evaluation for the state.

Many observers are particularly concerned that a pilot program that succeeds in one portion of a city, county or state, whether geographic or demographic, may not scale to the entire entity, especially if the pilot isn't truly representative. Too often, pilot locations aren't chosen with enough care to ensure they are.

It might sound like we're condemning pilot programs. That's certainly not our intent. Rather, we aim to offer a series of cautionary notes. With the right planning and parameters, pilot programs can be designed soundly and deliver results that genuinely guide decisions.