Cyber researchers have already identified several big security vulnerabilities on OpenAI’s Atlas browser

Security researchers have uncovered a Cross-Site Request Forgery (CSRF) attack and a prompt injection technique

OpenAI's ChatGPT Atlas browser download portal for macOS pictured on a laptop screen.
(Image credit: Getty Images)

With OpenAI’s Atlas browser just over a week old, cyber experts have already identified several vulnerabilities and potential security risks for users.

Researchers have discovered a vulnerability in the AI browser that allows attackers to inject malicious instructions directly into ChatGPT's memory and execute remote code.

According to researchers at LayerX, the flaw can affect ChatGPT users on any browser, but is particularly dangerous for users of OpenAI’s new agentic browser, ChatGPT Atlas.

"LayerX has found that Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge," researchers said.

Users are also logged in to ChatGPT by default, while LayerX also said testing indicates the Atlas browser is up to 90% more exposed than Chrome and Edge to phishing attacks.

In this exploit, attackers can use a Cross-Site Request Forgery (CSRF) request to 'piggyback' on the victim’s ChatGPT access credentials, and inject malicious instructions into ChatGPT’s memory.

When the user then attempts to use ChatGPT for legitimate purposes, the ‘tainted memories’ will be invoked. They can execute remote code that allows the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.

Notably, researchers warned the exploit can persist across devices and sessions, enabling remote code execution and potential takeover of a user account, browser, or connected systems without them realizing anything is wrong.

More security issues for Atlas

The findings from LayerX mark the latest in a string of warnings over the potential security risks associated with the new browser.

Researchers at NeuralTrust, for example, demonstrated a prompt injection attack that's also affecting Atlas, whereby its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.

In this instance, an attacker crafts a string that appears to be a URL but is malformed, and won't be treated as a navigable URL by the browser. The string embeds explicit natural language instructions to the agent.

When the user pastes or clicks this string so it lands in the Atlas omnibox, the input fails URL validation, and Atlas treats the entire content as a prompt. The embedded instructions are now interpreted as trusted user intent with fewer safety checks.

The attackers can then execute the injected instructions with elevated trust.

Jamie Akhtar, CEO and co-founder at CyberSmart, said the recent findings are a prime example of the “security pitfalls of LLMs and AI browsers”.

“Although these technologies have ushered in a future of possibilities for cybersecurity, they’ve also been partly responsible for the democratization of cyber crime," he said.

"Threats like prompt injections aren’t particularly difficult for any cyber criminal with rudimentary knowledge to use (once they’ve been created), despite their sophistication,” Akhtar added.

“What makes them so dangerous is the ability to manipulate the AI's underlying decision-making processes and effectively turn the agent against the user."

Make sure to follow ITPro on Google News to keep tabs on all our latest news, analysis, and reviews.

MORE FROM ITPRO

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.