Does Energy Benchmarking Actually Lead to More Efficient Buildings?
Cities across the country require commercial buildings to track and publicize their energy and water use in an effort to reduce it. A recent report, however, suggests it may not be working.
The idea of energy benchmarking is simple: Building owners collect and report data on their annual energy and water use, and then use that data to make energy efficiency upgrades in an effort to reduce greenhouse gas emissions. Because of the added data-collection costs and the possible negative impact of a low grade, benchmarking, which has been adopted by cities across the globe, tends to be unpopular with many building owners.
As it turns out, it also might not even be effective.
That’s the word from a new report by Robert Stavins, an environmental economist at Harvard University. Stavins and two co-authors looked at benchmarking in cities in the U.S. and elsewhere, and found that it imposes financial burdens on building owners with no guarantee of producing significant energy savings. “There is currently no real evidence,” the authors conclude, “that these mandatory programs lead to any changes whatsoever in energy use.”
The study was funded by the Greater Boston Real Estate Board and the Building Owners and Managers Association, which both oppose a new benchmarking ordinance in Boston. Introduced in February by Mayor Thomas Menino and approved by the city council last month, the ordinance requires commercial buildings larger than 25,000 square feet and residential buildings with 25 or more units to track and report their overall energy and water use. It is based on similar laws in Austin, New York, San Francisco, Washington, D.C., and elsewhere.
Benchmarking aims to help owners identify their buildings' biggest energy inefficiencies and target investments where they will do the most to reduce energy use and costs.
But the report by Stavins finds a number of issues with that approach. For starters, disclosure rules generally don’t distinguish between occupant energy use patterns and a building’s inherent energy efficiency, which, Stavins writes, could impede efforts to target investments wisely.
The report even questions whether ratings really work for buildings at all. Consumer product ratings, like the U.S. Environmental Protection Agency's Energy Star, can be a useful way to compare the efficiency of household appliances. But unlike appliances, the report finds, each building is unique. Energy Star-like ratings don't account for a building's age, historic status, or location, all critical factors in its energy efficiency. And appliances only need to be measured once. Buildings, on the other hand, must be monitored on an ongoing basis.
A low rating could hurt a property’s value. A 2011 policy brief from the Lawrence Berkeley National Laboratory cited five different studies indicating higher sale values for properties with energy efficiency or green labels. Less-green properties could “see depreciation of their asset values,” Stavins writes. “These transfers could have significant citywide and/or neighborhood effects, or even effects on different building sectors.”
That’s part of the point, say benchmarking advocates. Tying a building’s market value to its efficiency can help encourage owners to make energy upgrades. Research on benchmarking’s effectiveness is still relatively thin. But a 2012 study by the EPA suggests it does have an impact on energy use. The study examined 35,000 buildings that consistently used the Energy Star Portfolio Manager tool to track their energy use from 2008 to 2011, and found that energy consumption in those buildings decreased by an average of 7 percent.
In Boston, buildings are responsible for two-thirds of all greenhouse gas emissions. The commercial and industrial sectors are responsible for about 75 percent of that, meaning those buildings alone account for roughly half of the city's total emissions. Even if these programs result in only limited gains in energy efficiency, advocates argue, that’s still a step in the right direction. Even Stavins doesn’t go so far as to discredit benchmarking programs entirely. At this point, he concludes, best practices for these programs just haven’t been established.