In 2022, as 24-year-old Cara Hunter was running for political office in Northern Ireland, her world and her campaign were rocked by the release of a graphic deepfake porn video purporting to show her.
The most likely motive was to embarrass and humiliate her, and ultimately either to force her to drop out of the race or to turn voters against her.
She didn't drop out, and she won, though by just a handful of votes.
This was one of the first recorded instances of deepfake porn being used to influence political races and elections, and likely won’t be the last.
An AI-generated video showing President Biden declaring a national draft to aid Ukraine's war effort was retweeted more than 8 million times before it was taken down.
A similar deepfake depicted Senator Elizabeth Warren calling for Republicans to be barred from voting in 2024. Even though the quality was poor, it still spread quickly.
Many of these targets will be women, and because such videos can inflict long-term emotional damage even when they are known to be fake, attacks like this may deter more women from running for office at any level.
This is just one of many tangible examples of how AI will drive an entirely new wave of misinformation, disinformation, and propaganda campaigns: campaigns that match our definition of SAP, Scale, Accuracy, and Plausibility.
This propaganda will be a combination of AI-generated stories, images, videos, and voices, spread through networks of fake news sites that are almost impossible to tell from the real thing, and amplified through poorly regulated and monitored social media platforms.
And the goal is often to spread as widely and cause as much disruption as possible before the stories can be called out or taken down.
So what are the goals of these campaigns?
- To change the outcome of an election.
- To create chaos, confusion, and anger.
- To make it more difficult for voters to comfortably accept any outcome.
Misinformation, disinformation, and propaganda campaigns are now the weapon of choice for adversaries, and thanks to AI will be almost impossible to combat.
A few years ago I wrote about how state-affiliated organizations in Russia employed hundreds of people to create and distribute disinformation as an attack on Presidential elections.
More recently I spoke about another type of war room, the thousands of scam call centers operating around the world and how the number and believability of these scams could surge if those humans are replaced by AI.
We're expecting these same state-sponsored election disruptors to take the same approach to generate even more massive and destabilizing misinformation campaigns, with little need for costly humans.
And AI is also being used to create a global network of fake news sites to gain maximum exposure. Over a nine-month period in 2023, organizations tracking AI-generated news sites saw their number surge from fewer than 50 to more than 600, available in 15 different languages.
So what can we expect?
- Endless conspiracy theories made to appear more credible by very carefully concocted evidence.
- A surge in fake news sites made to look very real and current.
- The hijacking of trusted brands to make message sources more credible.
- An avalanche of deepfake phone calls and videos.
- Highly targeted and very believable campaigns, aimed at a particular audience, with a specific message, around a particular election.
- An increase in the use of misinformation chatbots.
- Concocted evidence of bad behavior or corruption.
- Amplification of misinformation through social media.
- The creation of fake scandals to alter the outcome of an election before fact checkers have time to debunk them.
In an interview with Time, Andy Carvin, a senior fellow at the Atlantic Council's Digital Forensic Research Lab, observed that "It's one of the reasons why Kremlin information operations focus so much on essentially generating chaos, causing contagion, causing a loss of morale, or just getting people simply confused about what's true and what's not."
Inside the world of the scam call center
Interpol’s Crackdown On Scam Call Centers
In 2022, Interpol led a two-month crackdown on scam call centers around the world in partnership with law enforcement agencies in more than 70 countries.
Codenamed First Light 2022, police in participating countries raided national call centers suspected of running all kinds of fraud operations including telephone deception, romance scams, e-mail deception, and connected financial crime.
The initial results?
- 1,770 locations raided worldwide
- 3,000 suspects identified
- 2,000 operators, fraudsters and money launderers arrested
- 4,000 bank accounts frozen
- $50 million worth of illicit funds intercepted
Will it make a difference? Sadly, no. Raids like this are only expected to accelerate the move to AI, making these call centers almost impossible to find and track.