Preventing deepfake attacks: How businesses can stay protected
Innovative deepfake technology is powering the next generation of social engineering attacks – preventing deepfake attacks should be a priority for any security team
When criminals scammed a Hong Kong firm out of $25m in February, all it took was for a senior employee to be duped by a video conference call.
The scam began with a phishing message purportedly from the firm’s chief financial officer (CFO) requesting an urgent and confidential transaction, per RTHK. Despite their skepticism, the employee was put at ease after joining a video conference call with the CFO and other senior managers.
The problem was that the supposedly real individual was actually an attacker using real-time deepfake technology to pull off a sophisticated social engineering attack. Using deepfakes, attackers can convincingly alter their appearance to resemble a trusted individual for fraudulent activity or for blackmail.
In another case of deepfake fraud, the CEO of an unnamed British energy provider was tricked into transferring €220,000 to a scammer. Per a WSJ report, workers received a call they thought was from the CEO of their German parent company asking them to wire money to a Hungarian supplier.
While instances of deepfake fraud of this magnitude have been rare until now, they are on the rise. According to a 2023 survey of more than 1,000 fraud detection decision-makers, 29% of businesses have fallen victim to deepfake videos, while 37% have been the victim of deepfake voice fraud.
There are already some deepfake detection tools on the market, which use machine learning (ML) to scan video and audio for inconsistencies. At the end of 2022, Intel launched FakeCatcher, a tool the firm claims is the world’s first real-time deepfake detector. It analyzes blood flow beneath the skin visible in video pixels and can return low latency results with 96% accuracy.
The accuracy of these tools will continue to improve with the advancement of AI – multimodal generative AI models could help in this regard, with the power to analyze images and videos based on extensive training.
While this will help reduce the likelihood of businesses becoming victims of deepfake scams in the long term, technology alone isn’t a silver bullet in the short term, especially as not all businesses have access to them. Leaders must take proactive steps to prevent deepfake attacks, at the cultural level as well as throughout their security strategy.
Preventing deepfake attacks: A zero trust culture
Training and education are essential for preventing deepfake attacks. Employees outside of an organization’s cyber security team are often too trusting of their employer’s security solutions and show a lack of caution. According to Verizon’s 2023 Data Breach Investigations Report, 74% of breaches involved the human element.
A recent report by Kaspersky recorded more than two critical cyber attacks with human involvement per day over the course of 2023, with targeted social engineering attacks accounting for 4% of all recorded incidents.
“As sophisticated deepfake frauds become more and more prevalent, the need for a robust defense strategy is no longer negotiable,” says David Emm, principal security researcher at Kaspersky, adding that “regular, structured training can cultivate a workforce that’s vigilant and capable of spotting and responding to deepfakes”.
Training should start with instilling a zero trust approach in all staff. Telltale signs include the individual on the other end of a video call speaking on out-of-context or irrelevant subjects, or using phrasing you wouldn’t expect them to. The best online cyber security courses cover these in detail, but regular reminders to employees such as phishing tests can be a good starting point.
Melissa Bischoping, director of endpoint security research at Tanium, tells ITPro there are also “audible giveaways” for deepfakes including odd breathing sounds, issues with fluency and pacing, or unusual cadence. Visual clues to watch out for include unnatural blinking, mismatched lip-synching, and odd facial expressions.
“If it feels off, it probably is,” says Bischoping. “You should ask unrelated questions, change the nature and cadence of your own speech, and even say something that you would never normally say to the person who's calling, just to gauge their reaction. If it ends up not being a deepfake, then you can laugh about it with them later.”
Preventing deepfake attacks: Confirming identities
Another sign that a seemingly innocuous call could be a deepfake attack is if the individual on the other end makes an unusual request. Employees tasked with overseeing financial transactions should not agree to make payments under significant time pressure and should always double-check payment details against a legitimate baseline.
“You should consider asking questions that only the real requester would know the answer to. If it’s a video call you’re on, ask them to move around in their surroundings, or to change filters, as these things are harder for deepfakes to achieve,” says Jon Renshaw, deputy director of commercial research at global cyber security firm NCC Group.
He adds that employees should look to confirm any requests via another communications channel – if the request was made via a video call, then ring a trusted number to double-check. Alternatively, employees might consider going through a colleague who was not on the video call for a second opinion.
“If you’re contacted to perform a transaction, it’s always better to seek additional verification when you’re unable to physically verify the individual,” says Bischoping, adding that it’s also worth relying on “more robust forms of authentication that can’t be spoofed by AI”.
Preventing deepfake attacks: Stringent verification
Educating and training employees to spot a deepfake can empower them to remain vigilant, but implementing multi-factor authentication can help create a more robust defense mechanism.
“Deepfakes thrive in environments where verification is lax, so it’s essential that your verification processes are stringent,” says Emm.
Spoof-resistant tools that generate encrypted keys from biometrics, which can then be used to verify identities before participants are allowed to join video calls, can add an extra layer of security. This can reduce the risk of unauthorized access and the potential for money or data to be stolen.
Adopting sign-in policies such as MFA can help ensure that employees always re-authenticate from the same secure device, network, and location. As passwords are phased out, firms can make use of new identity management tools as a first line of defense against attempts to impersonate a trusted account or individual.
Passwords may also return to their original meaning – a shared phrase spoken aloud to show that the individual with whom you are speaking can be trusted. Business calls in the future might commence with participants being asked to recite a weekly codeword or secure phrase to quickly weed out would-be deepfake attackers.
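To make the codeword idea concrete, here is a minimal sketch of how a rotating weekly phrase might be derived from a shared secret, so that everyone who holds the secret can compute the same phrase without it ever being transmitted. The word list, secret, and function names are illustrative assumptions, not anything described in the article:

```python
import hashlib
import hmac
from datetime import date

# Illustrative word list; a real deployment would use a much larger one.
WORDS = ["amber", "basil", "cedar", "delta", "ember", "fable", "grove", "harbor"]

def weekly_codeword(shared_secret: bytes, on_day: date, num_words: int = 2) -> str:
    """Derive a speakable codeword for the ISO week containing `on_day`.

    Anyone holding the shared secret derives the same phrase for the same
    week, so it can be recited aloud at the start of a call and checked
    locally, without ever being sent over the network.
    """
    iso_year, iso_week, _ = on_day.isocalendar()
    message = f"{iso_year}-W{iso_week:02d}".encode()
    digest = hmac.new(shared_secret, message, hashlib.sha256).digest()
    picks = [WORDS[digest[i] % len(WORDS)] for i in range(num_words)]
    return "-".join(picks)

# Two calls in the same ISO week yield the same phrase; a new week rotates it.
secret = b"distribute-this-secret-out-of-band"
print(weekly_codeword(secret, date(2024, 3, 4)))
print(weekly_codeword(secret, date(2024, 3, 8)))
```

Because the phrase rotates automatically each week, a codeword captured or leaked from one call is useless shortly afterwards, which is the property that makes it useful against a deepfake caller replaying earlier footage.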
While no security tool is 100% foolproof, by adopting such tools alongside good employee cyber hygiene, businesses can strengthen their defenses against the threat of deepfakes.
Rich is a freelance journalist writing about business and technology for national, B2B and trade publications. While his specialist areas are digital transformation and leadership and workplace issues, he’s also covered everything from how AI can be used to manage inventory levels during stock shortages to how digital twins can transform healthcare. You can follow Rich on LinkedIn.