Joe Biden in drag, drinking Bud Light; Donald Trump as a shady lawyer in ‘Breaking Bad’. The two likeliest contestants in the 2024 US presidential election have both been subjects of recent ‘deepfake’ hoax videos.
Boosted by ‘generative’ artificial intelligence (AI) tools such as ChatGPT, deepfakes “have reached a level of sophistication that prevents detection by the naked eye,” according to the publishers of a recent multi-nation survey, which found people to be too sure of their own ability to spot a fake.
The 2023 Online Identity Study, conducted by Censuswide for Jumio, a California-based online safety business, canvassed more than 8,000 people across Britain, Mexico, Singapore and the US about deepfakes.
Around two-thirds of those asked claimed awareness of the technology, though that ranged from just 56% in Britain to almost 90% in Singapore, with over half saying they were confident they could tell the difference between an authentic clip and a mock-up.
Perhaps they should not be so sure of themselves.
“Deepfakes are getting exponentially better all the time and are becoming increasingly difficult to detect without the aid of AI,” said Stuart Wells, Jumio’s chief technology officer.
And while hoaxes built around public figures such as Biden and Trump are likely to be quickly debunked, the same cannot be said for lower-profile scams targeting personal finances or identity.
Fewer than half of those surveyed in Britain and the US recognised that AI could be used for identity theft and related money-grabbing ruses.
The survey team described such a lack of awareness as “concerning”, pointing to data from UK Finance showing that impersonation scams cost £177 million (US $220 million) in 2022. In the US, consumers lost $2.6 billion to such scams the same year, according to the Federal Trade Commission.