Why deepfakes could threaten everything from biometrics to democracy
Deepfake technology has been rated the most serious AI crime threat, but there are ways to fight back


Deepfakes, also known as synthetic media, are spreading. Today, deepfake technology is most commonly used to create more realistic fake images or videos, but it can also be used to develop fake biometric identifiers such as voice and fingerprints.
Most of us will likely have watched a film, TV show or advert that uses this technology, and we’ve probably all come across a ‘deepfaked’ photo or video – either knowingly or unknowingly – on social media. Some of us may even have played around with creating our own deepfakes using apps that let you superimpose your face onto that of your favourite actors.
“Until recently you needed the sophisticated technology of a Hollywood studio to create convincing deepfakes. Not anymore. The technology has become so advanced and readily available that one guy in his bedroom can create a very realistic deepfake,” says Andrew Bud, CEO and founder of biometric authentication firm iProov. “A lot of people are using it for entertainment content, plus there are legitimate firms whose entire business is creating synthetic video and audio content for advertising or marketing purposes.”
The dark side of deepfakes
But deepfake technology also has a dark side. For some time now it’s been used to create photos or videos to spread misinformation and influence public opinion or political discourse, often by attempting to discredit individuals or groups.
“Recent history has shown a proliferation of attacks to manipulate democratic elections and destabilise entire regions,” says Marc Rogers, VP of cybersecurity at technology firm Okta and co-founder of international cyberthreat intelligence group The CTI League. “The implication being a deepfake from a trusted authority could artificially enhance or destroy public confidence in a candidate, leader or perception of a public issue – such as Brexit, global warming, COVID-19 or Black Lives Matter – to influence an outcome beneficial to a malicious state or actor.”
IDC senior research analyst, Jack Vernon, notes: “With the US presidential election drawing closer, this will be an obvious arena in which we may see them deployed.”
Deepfake pornography is another rapidly growing phenomenon, often used for blackmail, while a further risk comes from criminals using faked biometric identifiers to carry out fraud.
“One notable example took place last year when attackers used deepfake technology to imitate the voice of a UK CEO in order to carry out financial fraud,” Rogers highlights.
It’s unsurprising, then, that last month the Dawes Centre for Future Crime at UCL published a report ranking deepfakes as the most serious artificial intelligence (AI) crime threat. Of all the potential applications of AI for crime or terrorism it assessed, the technology was rated the most worrying.
Who’s most at risk from deepfake crime?
Bud believes the areas most at risk from deepfake crime include the banking industry, governments, healthcare and media.
“Banking’s definitely at risk – that’s where the opportunity for money laundering is greatest. The government is also at risk: benefits, pensions, visas and permits can all be defrauded. Access to someone’s medical records could be used against them and social media is at risk of weaponisation. It’s already being used for intimidation, fake news, conspiracy theories, destabilisation and destruction of trust.”
Experts say we can expect things to get worse before they get better, as the quality of deepfakes is only likely to improve. This will make it harder to distinguish which media is real, and the technology may get better at fooling our security systems.
Fighting back
The good news is the technology industry is fighting back, and we’re seeing deepfake detection technology emerge from a number of research fields, says Nick McQuire, senior vice president of enterprise research at CCS Insight.
“This is an area we’ve long predicted would emerge because firms like Microsoft, Google and Facebook are looking at ways to use neural networks and generative adversarial networks (GANs) to analyse deepfakes and detect the statistical signatures left by the models that generate them.”
There are many initiatives to identify deepfakes, “for example the FaceForensics++ and Deepfake Detection Challenge (DFDC) datasets,” says Hoi Lam, a member of the Institution of Engineering and Technology’s (IET) Digital Panel.
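To give a flavour of how such detectors are built: a common baseline is to fine-tune a standard image classifier on face frames labelled real or fake, drawn from a corpus like those above, so the network learns the statistical artefacts the generators leave behind. The Python sketch below shows that baseline in miniature; the directory layout, file names and training settings are hypothetical, and production detectors are considerably more elaborate.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Hypothetical layout: frames/train/fake/*.png and frames/train/real/*.png,
    # i.e. face crops extracted from a labelled corpus such as FaceForensics++.
    # ImageFolder assigns labels alphabetically, so fake = 0 and real = 1.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("frames/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Start from an ImageNet-pretrained backbone and replace the final layer
    # with a single real-vs-fake logit.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    model.train()
    for images, labels in loader:  # one pass; real training runs many epochs
        optimiser.zero_grad()
        logits = model(images).squeeze(1)
        loss = loss_fn(logits, labels.float())
        loss.backward()
        optimiser.step()

In practice, frame-level scores like these are typically aggregated across a whole clip before deciding whether a video is synthetic.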
Then there’s facial recognition cross-referencing, which is increasingly being used by video hosting services. “Various techniques are also being explored that implement digital watermarking,” explains Matt Lewis, research director at NCC Group. “This can help prove the origin and integrity of content creation.”
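To illustrate the watermarking idea at its simplest: a mark is embedded in the pixels at the point of creation and checked again later, with any mismatch suggesting the content has been altered or did not come from the claimed source. The sketch below hides a short tag in the least significant bits of a PNG’s red channel using the Pillow library; it is a toy scheme for clarity only, as real provenance systems use far more robust, tamper-resistant watermarks and cryptographic signing.

    from PIL import Image

    MARK = "origin:example-studio"  # hypothetical provenance tag

    def embed(in_path, out_path, mark=MARK):
        """Hide `mark` (NUL-terminated) in the red channel's least significant bits."""
        img = Image.open(in_path).convert("RGB")
        bits = "".join(f"{byte:08b}" for byte in mark.encode() + b"\x00")
        pixels = img.load()
        width = img.size[0]
        for i, bit in enumerate(bits):
            x, y = i % width, i // width
            r, g, b = pixels[x, y]
            pixels[x, y] = ((r & ~1) | int(bit), g, b)
        img.save(out_path, "PNG")  # lossless format, so the hidden bits survive

    def extract(path, max_bytes=64):
        """Read the LSBs back until the NUL terminator; returns the embedded mark."""
        img = Image.open(path).convert("RGB")
        pixels = img.load()
        width = img.size[0]
        out = bytearray()
        for i in range(max_bytes):
            byte = 0
            for j in range(8):
                x, y = (i * 8 + j) % width, (i * 8 + j) // width
                byte = (byte << 1) | (pixels[x, y][0] & 1)
            if byte == 0:
                break
            out.append(byte)
        return out.decode(errors="replace")

Verification then reduces to a comparison such as extract("frame.png") == MARK. Even this toy version shows the key property: the mark travels with the content itself rather than with a filename or URL.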
A number of the big tech firms have begun to promote tools in this area. Microsoft, for example, recently unveiled a new tool to help spot deepfakes, and in August Adobe announced it would start tagging Photoshopped images as having been edited, in an attempt to fight back against misinformation.
GCHQ also recently acknowledged deepfakes as a cybersecurity priority, launching a research fellowship set to delve into fake news, misinformation and AI. “New technologies present fresh challenges and this fellowship provides us with a great opportunity to work with the many experts in these fields,” a spokesperson said.
Businesses are also starting to understand the risk from deepfakes and to implement new technologies designed to detect fraudulent biometric identifiers. Banks in particular are ahead of the game, with HSBC, Chase, CaixaBank and Mastercard just some of those that have signed up to a new biometric identification system.
We’re in an arms race
As malicious actors innovate to stay a step ahead of security teams, technologists are being drawn into an arms race, and the work to identify deepfakes is ongoing.
“As security teams innovate new technology to identify deepfakes, techniques to circumvent this will proliferate and unfortunately serve to make deepfake creation more realistic and harder to detect,” notes Rogers. “There’s a feedback loop with all emerging technologies like these. The more they generate success the more that success is fed back into the technology, rapidly improving it and increasing its availability.”
While the technologists fight the good fight, the other important tool in the war against devious deepfakes is education.
The more aware the public is of the technology, the more they’ll be able to think critically about their media consumption and apply caution where needed, says Nick Nigram, a principal at Samsung NEXT Europe. “After all, manipulation of media using technology is nothing new,” he concludes.
Keri Allan is a freelancer with 20 years of experience writing about technology and has written for publications including the Guardian, the Sunday Times, CIO, E&T and Arabian Computer News. She specialises in areas including the cloud, IoT, AI, machine learning and digital transformation.