Next year’s elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.
Sam Altman, CEO of ChatGPT creator OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users.
“The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a major area of concern,” he said.
“Regulation would be quite wise: people need to know if they’re talking to an AI, or if the content they’re looking at is generated or not. The ability to really model … to predict humans, I think will require a combination of companies doing the right thing, regulation and public education.”
The prime minister, Rishi Sunak, said on Thursday that the UK would take the lead in limiting the dangers of AI. Concerns over the technology have surged after breakthroughs in generative AI, where tools such as ChatGPT and Midjourney produce convincing text, images and even voice on command.
Where earlier waves of propaganda bots relied on simple pre-written messages sent en masse, or buildings full of “paid trolls” to perform the manual work of engaging with other humans, ChatGPT and other technologies raise the prospect of interactive election interference at scale.
An AI trained to repeat talking points about Taiwan, climate change or LGBT+ rights could tie up political opponents in fruitless arguments while convincing onlookers – across thousands of different social media accounts at once.
Prof Michael Wooldridge, director of foundational AI research at the UK’s Alan Turing Institute, said AI-powered disinformation was his main concern about the technology.
“Right now, in terms of my concerns about AI, it’s number one on the list. We have elections coming up in the UK and the US and we know social media is a very powerful channel for misinformation. But we now know that generative AI can produce disinformation on an industrial scale,” he said.
Wooldridge said chatbots such as ChatGPT could produce tailored disinformation aimed at, for example, a Conservative voter in the home counties, a Labour voter in a metropolitan area, or a Republican supporter in the midwest.
“It’s an afternoon’s work for someone with a little programming experience to create fake identities and just start generating these fake news stories,” he said.
After fake images of Donald Trump being arrested in New York went viral in March, shortly before eye-catching AI-generated images of Pope Francis in a Balenciaga puffer jacket did the same, some expressed concern over generated images being used to confuse and misinform. But, Altman told US senators, those concerns could be overblown.
“Photoshop came on to the scene a long time ago and for a while people were really fooled by Photoshopped images – then pretty quickly developed an understanding that images might be Photoshopped.”
But as the capabilities of AI grow ever more advanced, there are concerns that it will become increasingly difficult to trust anything we encounter online, whether it is misinformation, where a falsehood is spread mistakenly, or disinformation, where a false narrative is generated and distributed deliberately.
Voice cloning, for example, came to prominence in January after the emergence of a doctored video of the US president, Joe Biden, in which footage of him talking about sending tanks to Ukraine was transformed via voice simulation technology into an attack on transgender people – and shared on social media.
A tool developed by the US company ElevenLabs was used to create the fake version. The viral nature of the clip helped inspire other spoofs, including one of Bill Gates saying the Covid-19 vaccine causes Aids. ElevenLabs, which admitted in January that it had seen “increasing cases of voice cloning abuse”, has since strengthened its safeguards against disturbing uses of its technology.
Recorded Future, a US cybersecurity firm, said rogue actors could be found selling voice cloning services online, including the ability to clone the voices of corporate executives and public figures.
Alexander Leslie, an analyst at Recorded Future, said the technology would only improve and become more widely available in the run-up to the US presidential election, giving the tech industry and governments a window to act now.
“Without widespread education and awareness this can be a real threat vector as we head into the presidential election,” Leslie said.
A study by NewsGuard, a US organization that monitors misinformation and disinformation, tested the model behind the latest version of ChatGPT by prompting it to generate 100 examples of false news narratives, drawn from approximately 1,300 commonly used fake news “fingerprints”.
NewsGuard found that the model produced all 100 narratives as requested, including “Russia and its allies are not responsible for the crash of Malaysia Airlines flight MH17 in Ukraine”. A test of Google’s Bard chatbot found it could produce 76 such narratives.
NewsGuard also announced on Friday that the number of AI-generated news and information websites it has identified has more than doubled in two weeks to 125.
Steven Brill, the co-CEO of NewsGuard, said he was concerned that rogue actors could use chatbot technology to mass-produce variations of fake stories. “The danger is someone who uses it deliberately to put out these false narratives,” he said.