The sad reason why deepfakes pose only a small threat to American politics

MIT researchers, working with funding from Google's Jigsaw group, recently conducted a pair of studies to determine the potential impact of AI-generated deepfake political ads on U.S. voters.

In all, more than 7,500 people in the United States participated across the paired studies – as far as we can determine, that makes this the largest set of experiments on the subject to date.

Participants were divided into three groups: one group watched a deepfake video, another read a transcript of the video, and the third acted as a control group and received no media message at all.

In the next phase, participants from all three groups were asked questions to determine whether they believed the media they had seen or read and whether they agreed with certain statements.

The results suggest a bad news, worse news scenario. Let’s start with the bad news.

According to the paper:

Overall, we find that individuals are more likely to believe that an event took place when presented in video versus text form.

It may not knock your socks off, but it turns out that people are more likely to believe what they see than what they read. That, of course, is a bad thing in a world where deepfakes are so easy to create.

But it gets worse. Again, according to the paper:

Moreover, when it comes to attitudes and engagement, the difference between the video and text conditions is comparable to, if not less than, the difference between the text and control conditions. Taken together, these results call into question widespread assumptions about the unique persuasive power of political video over text.

In other words: people in the United States are more likely to believe a deepfake than fake news in text form, but it does very little to change their political views.

The researchers are quick to warn against drawing too many conclusions from these data, noting that the conditions under which the surveys were conducted do not necessarily mimic those under which American voters are likely to be deceived by deepfakes.

According to the paper:

It should be noted, however, that although we observe only small differences in the persuasive power of video relative to text across our two studies, the effects of these two modalities may differ more sharply outside an experimental context.

In particular, it is possible that video is more attention-grabbing than text, such that people scrolling through social media are more likely to engage with, and therefore be exposed to, video relative to text.

As a result, even if video has only a limited persuasive advantage over text within a controlled, forced-exposure setting, it may still exert an outsized effect on attitudes and behaviors in an environment where it receives disproportionate attention.

Okay, so it’s possible that deepfakes could be far more effective in the wild when it comes to influencing people to change their political opinions.

But this particular research provides evidence to the contrary. And from where we stand, that makes perfect sense.

More U.S. citizens voted in the most recent election than in any other in U.S. history. Yet the margins were so close that one side still (idiotically) claims the election was rigged. In fact, two of the last four U.S. presidents lost the popular vote. All of which indicates that American voters are anything but fickle.

It’s obvious that deepfakes are pretty low on the list of issues plaguing American politics. Still, it’s a little sad to see our country’s biases so plainly codified by MIT research.
