
Misinformation, Disinformation and What Government Can Do About Them

Government organizations should proactively support and lead with good cybersecurity practices, and they can help the public by spreading the word about how to spot dangerous lies.

The challenges of misinformation and disinformation are everywhere, and because they are so often spread via electronic communication, they pose as much of a cybersecurity risk as a hacker trying to steal network passwords or plant ransomware. But while Russian misinformation campaigns to distract American voters and the ongoing conspiracy theorizing about COVID-19 and vaccines are front and center, they are only the most visible examples. In many cases, cyberespionage campaigns exploit disinformation to manipulate data and take advantage of government organizations.

“Misinformation” is simply false information, while “disinformation” is the intentional spreading of misinformation. Disinformation is the greater challenge today because social media has created vast opportunities for sharing information that didn’t exist just 10 or 20 years ago.

A quote often attributed to Mark Twain goes, “A lie can travel around the globe while the truth is still putting on its shoes.” History is rife with misinformation — everything from flat-earthers’ science denial to sightings of Sasquatches, aliens and Elvis. “While history tells us that misinformation and disinformation have been with us since the beginning of time, something is new,” Bob Gourley, the CTO and co-founder of the security company OODA, told me. “The Internet now provides adversaries the ability to push messages very quickly from anywhere in the world, and social media means these messages can be targeted to citizens with devastating effect.”

A Twitter incident in 2013 is a good example of how quickly disinformation can get out of hand. Hackers compromised the Associated Press Twitter account and tweeted that there had been explosions at the White House and that President Obama had been injured. Because Twitter at the time was seen by most people as a trustworthy information source, many briefly assumed the information was true. The stock market even took a quick but dramatic dip.

A December 2020 cyberdisinformation attack that began with a fake press release and a fake Facebook account claimed that a Polish diplomat had been discovered smuggling contraband into Lithuania. The attack was believed to have targeted Polish-Lithuanian relations and was chalked up to another cybercampaign attributed to Russia.

And just last month it was reported that a pro-China network of fake social media accounts questioned the safety of American-approved COVID-19 vaccines and used the Jan. 6 riot at the U.S. Capitol to describe the U.S. as a "failed state." These postings were subsequently re-posted by officials of China and other governments and quickly spread to millions of people around the world.

An Ipsos survey in 2020 found that more than half of Americans said they had become more concerned about their online safety and were spending more time trying to determine if their Internet searches were safe. That’s good news but also an unfortunate sign of the times that so many of us have become paranoid about what we read online.

Yet “while people are bombarded with targeted propaganda and disinformation on social media, many lack the know-how to discern fact from fiction,” said Bob Lord, the chief security officer at the Democratic National Committee. “State and local governments have a responsibility to protect their constituents’ privacy online, invest in public media and media literacy education, and support antitrust action that would curtail these companies’ power.”

Make no bones about it: combating disinformation is a challenge. But there are a few things government officials and information security officers can do to protect both their own organizations and the public, starting with spreading the word about the imperative to critically evaluate information before sharing it. Government organizations need to proactively support and lead with good cybersecurity practices around misinformation and disinformation. Public leaders can't repeat often enough the questions everyone needs to ask before reacting to social media clickbait:

• Can you readily identify the source of the information, and is the source credible? Sometimes it’s difficult to determine, but if it sounds sketchy, it probably is.

• Are there multiple sources providing the same information, or just one lone “enlightened” source? That’s a red flag.

• Does it provoke a strong and impassioned response? Memes have become notorious for having that effect because it’s so simple to just repost or retweet a meme without even thinking about it.

• Does it sound absurd on its surface? Again, if it sounds sketchy, it probably is.

• Is the content current? Stories and pictures often re-emerge years after they were originally posted, usually with a new twist on the message.

• Does it claim to be time-sensitive, or is it going to cost you something? These are always red flags.

• Does it leave you with questions like “something seems missing here” or “these facts don’t add up to a complete story”? If so, dig a little deeper before becoming another victim in the thread of disinformation.

All of us need to be consciously aware of how we are consuming information and, more importantly, how we are sharing and spreading it, to ensure we aren’t contributing to the problem. Government officials have an important part to play, and with some simple and active critical thinking, we can all be part of the solution.

Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.

Mark Weatherford, Governing's cybersecurity columnist, is the chief strategy officer for the National Cybersecurity Center.