- Separating fact from fiction has never been more difficult for voters.
- As billions of people head to the polls this year, AI-powered deepfakes pose a real threat to trustworthy information.
- Voters have already received robocalls impersonating Joe Biden and seen fake video ads featuring Rishi Sunak.
We are told that AI is going to make life easier. However, voters heading to the polls in 2024 have reason to be suspicious.
This year, nearly half of the world’s population will learn just how problematic AI can be for the democratic process as they prepare to vote in elections in the United States, Britain, India, and Mexico.
Voters already faced the difficult task of evaluating candidates and party policies. Now, artificial intelligence threatens to make that task even harder.
“Even if we stop developing artificial intelligence, the information landscape after 2023 will never be the same,” Wharton associate professor Ethan Mollick wrote this month.
Recent advances in generative AI, spurred by OpenAI’s ChatGPT, mean the technology now poses a bigger challenge than ever.
Voters in New Hampshire found themselves in a tough spot after they began receiving calls ahead of the state’s unofficial Democratic primary in which Joe Biden appeared to tell them not to vote.
The deepfake robocalls, first reported by NBC News on Monday, opened with the classic Biden line, “What a bunch of malarkey,” before urging recipients not to vote on Tuesday.
The Biden robocall is particularly damaging because of how difficult it is to distinguish the real voice from the fake.
A study published in August in the journal PLOS ONE found that people failed to detect artificially generated speech more than a quarter of the time.
“The difficulty of detecting deepfake speech confirms its potential for misuse,” the researchers noted at the time.
AI-generated deepfakes are causing problems elsewhere, too. In the United Kingdom, research by Fenimore Harper Communications found more than 100 fake video ads on Facebook impersonating Prime Minister Rishi Sunak.
According to the research, the 143 paid ads discovered between Dec. 8 and Jan. 8 “may have reached more than 400,000 people.” Funding for the ads appeared to come from 23 countries, including “Turkey, Malaysia, the Philippines and the United States.”
“This is the first widespread paid advertisement of fake videos of a UK political figure,” Fenimore Harper’s report said. Meta did not immediately respond to Business Insider’s request for comment.
While it is unclear who is behind the deepfakes in the US and UK, the recent proliferation of AI tools almost certainly means that anyone with internet access and an AI tool can cause some turbulence.
Mollick described in his newsletter how he created a deepfake video of himself within minutes by uploading a 30-second video of himself and a 30-second audio recording of his voice to the AI startup HeyGen.
“I had an avatar that could say anything in any language. It used some of my movements from the source video – adjusting the microphone – but cloned my voice and changed my mouth movements, blinking and everything else,” he wrote.
AI companies are making some efforts to address the problem. Earlier this month, OpenAI announced plans to prevent its artificial intelligence tools from being abused ahead of this year’s elections.
The plans include using safeguards to prevent tools such as the text-to-image model DALL-E from creating images of real people, as well as banning tools such as ChatGPT from being used for “political campaigning and lobbying”.
“Protecting the integrity of elections requires cooperation from all parts of the democratic process, and we want to make sure that our technology is not used in a way that could undermine that process,” OpenAI said on its blog.
Other organizations are also fighting the spread of AI-generated fraud. Lisa Quest, who heads the UK and Ireland arm of the management consulting firm Oliver Wyman, told my colleague Spriha Srivastava at Davos about her social impact group’s work with charities on online safety to curb the spread of misinformation.
As voters try to figure out what they can and can’t trust, they face an uphill battle, to say the least.