Six generative AI cyber security threats and how to mitigate them
What are the risks posed by generative AI and how can businesses protect themselves?
ChatGPT and competitors such as the recently launched Google Bard have shot generative artificial intelligence (AI) into the mainstream. Allowing users to create, combine and remix content, generative AI is hailed as a transformative technology for businesses.
As the use of generative AI grows, though, so do concerns about cyber security, because the technology can drive more targeted cyber attacks. Hackers can use generative AI to compose impactful phishing emails, and the technology is making deepfakes even more convincing.
Creating malware is also easier with generative AI. In 2018, for example, IBM researchers demonstrated a proof-of-concept malware called DeepLocker that used AI to conceal its malicious payload until it identified a specific target, making it difficult for security tools to detect and block.
Attackers use off-the-shelf machine learning libraries and frameworks such as TensorFlow or PyTorch to create generative models, says Adam Blake, CEO and founder of ThreatSpike Labs. “These tools are widely available and easy to use, which has lowered the barrier to entry for adversaries looking to use AI in their attacks.”
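To illustrate just how low that barrier now is, the sketch below loads a small, publicly available pretrained model and generates text in a handful of lines. It uses the Hugging Face transformers library (which runs on PyTorch under the hood); the model and prompt are purely illustrative, and the same few lines work with many off-the-shelf models.

```python
# Minimal sketch: generating text with an off-the-shelf pretrained model.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, publicly available text-generation model (GPT-2 here,
# chosen purely to illustrate how accessible the tooling is).
generator = pipeline("text-generation", model="gpt2")

# A single call produces fluent text from an arbitrary prompt.
result = generator("The quarterly report shows", max_new_tokens=40)
print(result[0]["generated_text"])
```

The ease of this workflow is what experts mean when they say the barrier to entry has fallen: no model training or specialist knowledge is required.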
There are several types of generative AI, each with potential uses in cyber attacks. So what are the new risks posed by the different types of generative AI and how can businesses protect themselves as the technology develops?
Text-based generative AI security threats
Text-based generative AI such as ChatGPT helps make phishing attacks far more sophisticated and difficult to spot. “AI-enhanced campaigns could create highly personalized emails to enable spear phishing at scale,” says Dane Sherrets, senior solutions architect at HackerOne.
Because text-based generative AI models are currently the most mature, experts say this type of attack will have the most impact in the near future. “They can be used to generate personalized phishing emails, or disinformation campaigns about an organization or individual,” adds Josh Zaretsky, partner at consulting firm Altman Solon.
Using text-based generative AI, interactive chat capabilities could be honed in the future to automatically target companies via their web chat services, says Matt Aldridge, principal solutions consultant at OpenText.
Video-based generative AI security threats
Further down the line, video-based generative AI such as Runway’s Gen-1 could supercharge deepfake attacks that trick employees into transferring large amounts of cash to criminals. For example, an adversary could use video generation to create a deepfake of a company executive for social engineering attacks or to spread disinformation, says Blake.
Alternatively, he says, an attacker could use a video-generating model to create fake footage of a CEO instructing employees to transfer money or disclose sensitive information. Video models can be used to bypass facial recognition security measures in an identity-based attack, or impersonate company employees in spoofing attacks, according to Zaretsky.
Audio-based generative AI security threats
Voice cloning is just one use for audio-based generative AI, and it’s easy to see how it could be used for nefarious means. An attacker could use audio-based systems to create a convincing voice phishing call that appears to be from a trusted source, such as a bank or credit card company, says Blake. “Alternatively, an attacker could use an audio-generating model to create a fake audio clip of a CEO instructing employees to take a specific action.”
Text-to-speech generative AI such as Microsoft’s new neural codec language model VALL-E is able to accurately replicate a person’s voice using a combination of a text prompt and a short clip of a real speaker. Because VALL-E can replicate tone and intonation and convey emotion, voice clips produced using the model are very convincing.
The speed at which audio-based generative AI is developing is a major threat, according to Aldridge. “Audio fakes are a reality and the technology that makes them possible is improving at speed – we’ve seen huge developments in recent years, with computers creating conversations on their own.”
Image-based generative AI security threats
AI-generated images created by the likes of DALL·E 2 could also pose a major risk as the technology develops.
An attacker could use generative AI to create a convincing fake image or video that appears to show a company executive engaging in inappropriate or illegal behavior, for example. “The image or video could be used to blackmail or to spread disinformation,” says Blake.
Code-based generative AI security threats
As well as enabling less experienced attackers to create advanced malware, automated code generation by generative AI models can help attacks slip past traditional security tools, says Aldridge. Code-based generative AI tools include Tabnine and GitHub Copilot.
“It will do so by hiding malicious intent deeply within an otherwise benign application in an advanced trojan attack for example – in a similar way to how information can be hidden within an image using steganography.”
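For readers unfamiliar with the steganography analogy, the sketch below shows the classic least-significant-bit (LSB) technique: a message is written into the lowest bit of each pixel’s red channel, leaving the image visually unchanged. It is a minimal illustration using the Pillow library, with hypothetical file names, not a recreation of any real attack.

```python
# Minimal sketch of least-significant-bit (LSB) steganography: data is
# hidden in bits a casual viewer never inspects. File names are
# illustrative. Requires: pip install Pillow
from PIL import Image

def hide_message(cover_path: str, out_path: str, message: str) -> None:
    """Embed `message` in the red channel's least significant bits."""
    img = Image.open(cover_path).convert("RGB")
    # Encode the message as a bit string, with a null byte as terminator.
    bits = "".join(f"{byte:08b}" for byte in message.encode()) + "0" * 8
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    # Overwrite the red channel's lowest bit, one message bit per pixel.
    stego = [
        ((r & ~1) | int(bits[i]), g, b) if i < len(bits) else (r, g, b)
        for i, (r, g, b) in enumerate(pixels)
    ]
    img.putdata(stego)
    img.save(out_path, "PNG")  # a lossless format preserves the hidden bits

hide_message("cover.png", "stego.png", "hidden payload")
```

Scaled up from a toy script to compiled code, the same principle is what Aldridge describes: malicious logic hiding in plain sight inside an otherwise benign application.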
Combined generative AI security threats
Adversaries can also combine different types of generative AI models to carry out more complex attacks.
For example, an attacker who wants a victim to perform a specific action could use a text-generating model to compose a convincing email; a video-generating model to create a fake video; and an audio-generating model to create a phony audio clip, Blake explains.
“This combined attack could be particularly effective because it leverages multiple forms of media to create a more convincing and compelling message.”
How to mitigate generative AI security threats
Like any type of security threat, the risk posed by generative AI-based attacks is likely to evolve, making it essential that businesses are prepared. For now, it’s worth noting that security technology is not always able to spot and halt these attacks.
There are currently no known tools that can identify generative AI-derived attacks, as “the modus operandi is to appear human-like”, says Kevin Curran, IEEE senior member and professor of cyber security at Ulster University. He says the generation of realistic fake videos is particularly worrying.
With this in mind, businesses need to stay vigilant and adapt to new threats as they emerge, working closely with cyber security experts and technology providers, says Maher Yamout, senior security researcher at Kaspersky. He advises stringent authentication measures such as multi-factor authentication (MFA) to prevent unauthorized access.
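As a concrete example of that advice, the sketch below shows time-based one-time passwords (TOTP), one common MFA factor, using the pyotp library. It is a minimal illustration: the user name and issuer are hypothetical, and a real deployment would store each per-user secret securely on the server side.

```python
# Minimal sketch of time-based one-time passwords (TOTP), a common MFA
# factor, using the pyotp library. Secret handling is simplified for
# illustration. Requires: pip install pyotp
import pyotp

# Enrolment: generate a per-user secret and share it with the user's
# authenticator app (usually rendered as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# Verification: the code rotates every 30 seconds by default.
code = totp.now()  # in practice, the user types this in from their device
print("Valid?", totp.verify(code))
```

Because each code expires within seconds, even a convincing deepfake call that tricks an employee into reading one out gives an attacker only a very narrow window to use it.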
Underpinning this should be a strong strategy that takes into account the use of AI within the business. Introducing AI technology into the fabric of a business could be counterproductive if organizations fail to consider safety and security, says Sherrets.
As the threat posed by generative AI becomes more sophisticated, experts agree training and education are key. Sherrets advises businesses to bolster staff training around the latest methods of attack. “Humans will always be one of the easiest vectors for an adversary to exploit – organizations will do well to make sure their staff are educated about the new tools used by attackers.”
Kate O'Flaherty is a freelance journalist with well over a decade's experience covering cyber security and privacy for publications including Wired, Forbes, the Guardian, the Observer, Infosecurity Magazine and the Times. Within cyber security and privacy, her specialist areas include critical national infrastructure security, cyber warfare, application security and regulation in the UK and the US amid increasing data collection by big tech firms such as Facebook and Google. You can follow Kate on Twitter.