Best Way to Fight Deepfakes Is to Be Skeptical of Everything Online

Governments across the nation are proposing legislation that would curb the influence and prominence of online misinformation, but until those bills are signed into law, read online information carefully and critically.

(TNS) — It should be no surprise at this point: bad actors around the country and across the globe are working to create and spread misinformation online. One of the most pernicious and worrying weapons in this digital warfare is the emergence of “deepfakes” — which use artificial intelligence to create fake video or audio, or doctor real recordings, in order to make it look like someone said or did something they didn’t actually do.

The proliferation of this technique is rightly sounding alarms across levels of government. And it should continue to draw attention, particularly as we approach the November election.

This summer, Clint Watts, a fellow with the Foreign Policy Research Institute, told Congress that Russia, China and other “authoritarian adversaries” will likely employ deepfake technology as part of misinformation campaigns aimed at “subverting democracy and demoralizing the American constituency.”

“These two countries, along with other authoritarian adversaries and their proxies, will likely use deepfakes as part of disinformation campaigns seeking to discredit domestic dissidents and foreign detractors; incite fear and promote conflict inside western-style democracies; and three, distort the reality of American audiences and the audiences of American allies,” Watts testified in June.

Last year, Sen. Angus King raised concerns about the federal government recognizing and informing the victims of deepfakes.

“I just want to be sure our policies keep pace with the magnitude and accelerated nature of the threat,” King told FBI Director Christopher Wray last January.

Several states, including California, have passed legislation targeting the use of deepfakes in elections. And now Sen. Rebecca Millett, a Democrat from Cape Elizabeth, has proposed similar legislation here in Maine.

“The use of artificial intelligence to alter an image or video to appear as though it’s real, commonly referred to as deepfake, is one of these advancements that could have disastrous effects for our democracy,” Millett said in testimony in support of her bill, L.D. 1988. “This technology has the capability to make it appear that elected officials, for example, said something that in reality, they did not.”

The bill, which closely resembles the new California law, would prohibit the publishing or distribution, within 60 days of an election, of “materially deceptive audio or visual media” of a candidate with the intent to harm the candidate’s reputation or to deceive people into voting for or against them. It would provide certain exemptions, including for content that is disclosed as being manipulated, or content that is satire or parody.

“With how fast news travels the web, the danger of these doctored images or videos being shared millions of times and being taken as real makes deepfake even more dangerous,” Millett added.

Her concerns are well founded, and the more policymakers and the general public are aware of and discussing this issue, the better. But like several groups that have raised First Amendment and process concerns, we don’t believe that this proposal is the right solution to a very real but very complicated problem.

“This well-intentioned legislation poses far more questions than it answers and will create many more problems than it will solve,” Adam Crepeau, a policy analyst with the Maine Heritage Policy Center, said in testimony against the bill this week.

Crepeau raised concerns about the legislation infringing on First Amendment rights, and potentially leading to a flurry of legal action in political campaigns. Those are concerns that the American Civil Liberties Union of Maine, and this editorial board, share.

“This bill would give politicians a new right to file a lawsuit against virtually anyone who distributes a video or audio recording or image the politician believes is deceptive, unless the distributor states that the recording is inaccurate,” Michael Kebede, policy counsel for the ACLU of Maine, testified in opposition to the bill.

“While we appreciate the important governmental interest in promoting truth in the electoral process, election integrity cannot be attained by attaching a warning that every image or recording may have been altered,” Kebede added.

In an interview with the BDN, Millett made an important point not only about the dangerous effect deepfakes and other techniques have in misinforming the public, but also about their role in discouraging people from running for office, particularly at the local and state levels.

“It’s hard enough to convince people to run for office now,” she said.

There can be little doubt that the prospect of combating fake videos and audio makes public service less appealing. That is yet another reason to keep beating the drum about the dangers of deepfakes and misinformation generally, but we remain unconvinced that L.D. 1988 is the way forward.

The best way to combat misinformation, it seems to us, is with public awareness of these ongoing efforts to distort reality. We must all be cautious and skeptical about what we see online and about where that information comes from.

While we have reservations about Millett’s proposal, we share her hope that it can lead to “some really constructive conversations” at the state level about a complex and concerning issue facing our democracy.

©2020 the Bangor Daily News (Bangor, Maine). Distributed by Tribune Content Agency, LLC.
