Fighting for 2020 Elections by Combating Disinformation

Especially amid coronavirus concerns, voters are receiving more of their election information online. This means that candidates must combat disinformation from bots and trolls in addition to convincing voters.

(TNS) — As conservative conspiracy theories and deepfake videos race through the Internet, defying the fact-checkers and bruising political candidates, Curtis Hougland is trying to fight back by borrowing from the playbook of his adversaries.

Hougland, a technologist and online-extremism expert, is hiring small armies of social media mercenaries to do battle for Democrats.

These are not troops predisposed to political warfare. They are typically not aligned with the progressive candidate or cause that Hougland’s firm, Main Street One, is representing. But they hold a weapon that’s lacking among Internet activists in the echo chambers of the left: large and devoted followings of persuadable voters.

“We are making a bet that human networks can out-compete the bots and trolls and sock puppets,” said Hougland, whose background includes helping the Pentagon track and fight Islamic State online.

It’s a fraught bet, one of many that Democrats are making as they confront the growing threat disinformation presents to their hopes of retaking the White House. Since the pandemic took hold, the false narratives ricocheting through social media have surged. Vinesight, one of a crop of startups on the left focused on detecting and fighting the spread of toxic postings online, reports that such narratives are up 50 percent since the height of the Democratic nominating contests in February.

Conspiracy theories and false claims springing from the pandemic are fast blurring into political attacks, typically pointed at Democrats and sometimes propelled by President Donald Trump. While their party has grown adept at tracking the origin and spread of the disinformation, Democrats have yet to find an effective strategy for depriving it of oxygen, particularly as the social media companies — including Facebook and Twitter — are proving to be feckless partners in the fight.

The companies are taking down and labeling as suspect more content than in the past, but are unable to keep pace with all the posts that violate their terms. And they are unwilling to ban the large volume of material, such as Trump’s false claims, that could be branded political speech.

“The problem we are facing is not around detecting it,” said Gideon Blocq, founder of Vinesight. “There is more than enough data to go around. The question is how to minimize it.”

The shortcoming is a source of mounting anxiety. Presumptive Democratic presidential nominee Joe Biden is bulking up his digital team without any clear party strategy to keep him from getting overwhelmed by false narratives. Democrats agree: The tactics they have relied on so far won’t do the job.

“Too often campaigns take the bait and amplify things,” said Jiore Craig, head of digital practice at GQR, a Democratic firm. “They have to learn to avoid those traps.” The architects of disinformation, she added, “like it when we yell about how crazy they are and we can’t believe it. They win when we do that.”

For example, the president proudly announced he was ingesting hydroxychloroquine, despite studies questioning its efficacy and safety in combating the coronavirus, and endorsed conspiracy theories about voter fraud via absentee ballots.

The response was a wave of ridicule that energized the social media hangouts of the left, but it may have suggested to many persuadable voters that this was just more noise from Trump-haters. As is often the case, the online shouting amplified the president’s misleading claims, drawing even more attention to them.

A more effective response, disinformation experts say, might provide useful information about the risks of ingesting such untested drugs or about the security of absentee voting, attributed to messengers persuadable voters are inclined to believe.

A network of trustworthy messengers is essential, the experts say, to slowing the spread of debunked material. Deepfakes — images or video manipulated to suggest a candidate did something or went someplace they did not — bounce through social media even after they have been detected. Discredited accounts of the origins of the pandemic and the motivations of those fighting it continue to spill from the darkest corners of the Internet onto mainstream channels even after being taken down by YouTube and Facebook.

As Democrats draft their counteroffensive, they are looking beyond the traditional tools — paid advertisements, media fact-checkers and unevenly enforced social-media platform rules. They are rethinking who needs to be drafted into this fight, when to engage these messengers, and how to advance their own narratives.

The emphasis is on empowering diverse voices online who may only be loosely affiliated with the Biden campaign. How to best enlist those people is a point of tension.

Craig, who has worked for foreign clients in emerging democracies, is skeptical of the Main Street One model of hiring influencers with appeal in various targeted communities — for example, one group for older African Americans, another for single moms, another for devout churchgoers.

“I don’t want our country to move campaigning to this transactional pay-to-play system,” she said. “It then becomes the expectation of voters that we are paying for our support. I see this happen in my work abroad in places that are not functional democracies. It is not an effective way to organize and will have long-term consequences.”

The folks at Main Street One, though, say they are seeing results in state and local campaigns by mobilizing influencers who have common cause with the candidate or ballot issue. They point to Kentucky, where an influencer they engaged to help undermine support for Senate Majority Leader Mitch McConnell was a mom with the social media tagline “bourbon, basketball and God,” who wasn’t a political activist but drew their attention for posting her disgust over McConnell’s push to dismantle Obamacare.

“If you want a more credible messenger, you don’t want them to be as explicit in their political views,” said Hougland, who has launched an anti-Trump political action committee called Defeat Disinfo. “These are all people who make a part of their livelihood by being approached by brands or organizations to express their passions. They are comfortable with this ecosystem. While they are somewhat new to politics, we are comfortable with that.”

Such tactics may have helped Democrats win the close Kentucky governor’s race last year, and their disciplined social media strategy also stifled false claims of ballot tampering from conservatives casting doubt on the election result.

“Everyone wants to cast this as a tech problem,” Hougland said. “It is a people problem. You can’t deal with it just with algorithms. You can’t just take it down. You have to deal with this proportionately.”

Others agree, but argue that solving the problem should not involve paying for posts. Even tactics that don’t involve payments to influencers can raise ethical quandaries of their own. The methods both parties rely on to push their way into supporters’ social networks have become increasingly invasive and opaque, said Samuel C. Woolley, project director for propaganda research at the Center for Media Engagement at the University of Texas.

“Often, no one on either side is asking about the ethics of using large databases of voters gotten by suspect means,” he said. When Woolley surveyed his students about texts they were receiving from Democratic presidential campaigns, dozens told him they were getting blitzed with unsolicited messages.

“There are ethical issues around using these tools of automation that target voters without transparency or their consent,” he said. “I am just as suspicious of that as I am of some of the campaigns to spread disinformation.”

The dilemma for Democrats is that the disinformation threat has grown so big and complicated that small measures won’t dent it. European countries have had some success, but by making structural changes that are all but impossible in the U.S.: significant oversight of online platforms, expansive public education campaigns, aggressive enforcement of laws criminalizing certain disinformation.

“There is not an easy quick fix for groups on the ground to switch this off,” said Chloe Colliver, who heads digital research at the London-based Institute for Strategic Dialogue. The organization has been monitoring the surge of disinformation related to the COVID-19 pandemic, which she said will increasingly “be used as a gateway to funnel people into political misinformation and disinformation.”

In the 2016 presidential election, both the Democratic nominee and voters were caught off guard by the assault from bots, troll farms and fake Facebook groups, many of them controlled by foreign operatives.

“Now, there is a more coherent understanding of the threat, where this comes from and what it can do,” Colliver said. “But the tools campaigns have to deal with it are still limited.”

©2020 Los Angeles Times. Distributed by Tribune Content Agency, LLC.
