Many years before ChatGPT was released, my research group, the University of Cambridge Social Decision-Making Laboratory, wondered whether it was possible to have neural networks generate misinformation. To find out, we trained ChatGPT’s predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It gave us thousands of misleading but plausible-sounding news stories, with headlines such as “Certain Vaccines Are Loaded With Dangerous Chemicals and Toxins” and “Government Officials Have Manipulated Stock Prices to Hide Scandals.” The question was, would anyone believe these claims?
We created the first psychometric tool to answer that question, which we called the Misinformation Susceptibility Test (MIST). In collaboration with YouGov, we used the AI-generated headlines to measure how susceptible Americans are to AI-generated fake news. The results were concerning: 41 percent of Americans incorrectly thought the vaccine headline was true, and 46 percent thought the government was manipulating the stock market. Another recent study, published in the journal Science, showed not only that GPT-3 produces more compelling disinformation than humans do, but also that people cannot reliably distinguish between human-written and AI-generated misinformation.
My prediction for 2024 is that AI-generated misinformation will be coming to an election near you, and you likely won’t even realize it. In fact, you may have already been exposed to some examples. In May 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image that showed a large cloud of smoke. The story caused public uproar and even a dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated imagery, politicians can blur the lines between fact and fiction and use AI to boost their political attacks.
Before the explosion of generative AI, cyber-propaganda firms around the world had to write misleading messages themselves and employ human troll factories to target people at scale. With the assistance of AI, the process of generating misleading news headlines can be automated and weaponized with minimal human intervention. Micro-targeting, the practice of tailoring messages to people based on digital trace data such as their Facebook likes, was already a concern in past elections, but its main obstacle was the need to generate hundreds of variants of the same message to see what works on a given group of people. What was once labor-intensive and expensive is now cheap and readily available, with no barrier to entry. AI has effectively democratized the creation of disinformation: Anyone with access to a chatbot can now seed the model on a particular topic, whether it’s immigration, gun control, climate change, or LGBTQ+ issues, and generate dozens of highly convincing fake news stories in minutes. In fact, hundreds of AI-generated news sites are already popping up, propagating false stories and videos.
To test the impact of such AI-generated disinformation on people’s political preferences, researchers at the University of Amsterdam created a deepfake video of a politician offending his religious voter base. In the video, the politician joked: “As Christ would say, don’t crucify me for it.” The researchers found that religious Christian voters who watched the deepfake had more negative attitudes toward the politician than those in a control group.
It is one thing to dupe people with AI-generated disinformation in experiments. It’s another to experiment with our democracy. In 2024, we will see more deepfakes, voice cloning, identity manipulation, and AI-produced fake news. Governments will seriously limit—if not ban—the use of AI in political campaigns. Because if they don’t, AI will undermine democratic elections.