The Consequences of ‘Survey Exhaustion’

It’s easier than ever to send out a survey instrument, and surveys are an important tool for governments. But with so many of them in the field, it’s harder than ever to reach a critical mass of respondents.

We really want to have our voices heard when it comes to matters of concern to us, and we suspect that most people feel much the same way. But over the last few years, as the flow of surveys that come our way has turned into a deluge, our inclination to reply to many of them has decreased.

And we’re far from alone. So-called “survey fatigue,” like the kind we’re experiencing, isn’t a new phenomenon, and there were articles being written about it several years ago. But we’d argue that it has now entered a new phase: Let’s call it “survey exhaustion.”

The overabundance of survey instruments is directly related to advances in technology that deliver online surveys straight to you, often by text or email. It has become nearly effortless for any organization to spread a survey far and wide. And now that it’s cheaper and easier to put a survey into the field (many of them political in nature), lots of organizations want to run their own, even though the more surveys there are in the field, the less likely any one of them is to get responses.

We know that this presents a problem for state and local organizations and researchers in the field, as the validity of their conclusions declines when they don’t reach a critical mass of respondents.

Mark Zandi, chief economist at Moody’s Analytics, said in a podcast early last year that there’s been an erosion of some of the data that is fundamental for his economic analyses, as well as to critical government decision-making, because “a lot of it is based on surveys and survey response rates are way down across the board.”

In fact, we’ve been at a number of conferences lately where academics presented data from surveys they’ve undertaken. In a fair number of instances, they expressed concern that low response rates may lessen the value of their results and even introduce bias. For example, when a survey draws responses only from people who feel very strongly one way or the other about the topic being explored, the opinions of the vast middle ground can be missed entirely.

According to a letter published in the Journal of Caring Sciences last year, there are four types of survey fatigue:
  • Over surveying
  • Question fatigue (when “the researcher asks the same questions in diverse ways”)
  • Long surveys
  • Disingenuous surveys (“This is a dangerous type of survey fatigue. It occurs when our participants think that their responses will not affect an outcome.”)
That last one is echoed by a report from McKinsey, which stated that “we reviewed results across more than 20 academic articles and found that, consistently, the number one driver of survey fatigue was the perception that the organization wouldn’t act on the results.”

As we wrote in a June article in Government Finance Review, “When a community chooses to use a citizen survey, respondents need to believe their responses have been heard and considered. Otherwise, they may well rebel by not participating the next time they’re asked for their views—or even lose some trust and faith in the government altogether.

“Even if it’s a matter of explaining why a particular priority is heavily favored by respondents but doesn’t get funded, people need to know why. If a community can’t close the loop and acknowledge that it heard from a resident, there is a natural assumption that this was just a paperwork exercise designed to make people think that leaders care when they don’t.”

Our concern about this phenomenon is rooted in our strong belief that surveys are an important tool for states and localities to gather information from residents that can help them to establish their priorities. We’ve seen ample evidence that surveys can help leaders set agendas and make important decisions.

Here’s a good example, which we’ve reported elsewhere:

When Hurricane Helene hit Asheville, N.C., in September 2024, the damage was devastating; in fact, the community is still recovering. Given that damage, the city was eager to use its limited resources in ways that would be of greatest consequence to its residents.

So, city leaders decided to reach out to the populace to see what they thought was of greatest importance to them.

As Dawa Hitch, communications and public engagement director in Asheville, told us, the city did everything in its power to get as many responses as it could: “We did daily social media promotions on our city platforms; we had media coverage in local news outlets; used announcements; city newsletters; and we did some email campaigns.” The most significant finding from this work was that 96 percent of respondents were concerned about the city’s infrastructure.

This wasn’t much of a surprise, since the hurricane left many of the city’s assets far worse off than they had been before the storm. But the city was able to look closely at the comments, which helped it focus its plans. Explained Hitch, “It indicated that road repairs were essential or very important, while infrastructure improvements, like sidewalk repairs, greenways and bikeways were rated as lower importance.”

In that case, of course, residents were particularly motivated to reply to the city’s outreach because they saw that their comments had the potential to affect their lives.

But it’s rare for residents to have that kind of motivation. Since we’ve long advocated that states and localities gather as much citizen input as possible, we see the reluctance to reply when a city, county or state asks for views as a decidedly unfortunate turn of events.

This commentary originally appeared on the authors’ website.



Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.
Katherine Barrett and Richard Greene have analyzed, researched and written about state and local government for over 30 years.