MCP servers used by developers and 'vibe coders' are riddled with vulnerabilities – here’s what you need to know
New research shows misconfigured MCP servers are putting devs at risk
Hundreds of Model Context Protocol (MCP) servers around the world are open to abuse, with vulnerabilities that put vibe coders and their organization's sensitive assets at risk.
Introduced late last year, MCP servers are an easy-to-use way to extend LLMs with external tools and data, thanks to the simplicity of the protocol, and the broad availability of independently developed servers has driven rapid adoption.
However, according to analysis from Backslash Security, around half of the 15,000-plus MCP servers in existence are dangerously misconfigured or carelessly built. The resulting vulnerabilities are in some cases catastrophic, the company warned.
The flaws fall under two general headings. First, and most common, is the 'NeighborJack' vulnerability, whereby MCP servers are explicitly bound to all network interfaces (0.0.0.0), making them accessible to anyone on the same local network; hundreds of such cases were discovered.
"Imagine you’re coding in a shared co-working space or café. Your MCP server is silently running on your machine," the researchers said.
"The person sitting near you, sipping their latte, can now access your MCP server, impersonate tools, and potentially run operations on your behalf."
Meanwhile, dozens of MCP servers allowed arbitrary command execution on the host machine through careless subprocess use, a lack of input sanitization, or bugs such as path traversal.
Most concerning of all, on several MCP servers both vulnerabilities were present, allowing bad actors to take full control of the host machine running the server.
Malicious actors who come across these MCP servers would have full access to run any command, scrape memory, or impersonate tools used by AI agents, Backslash said.
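The command-execution class of bug typically comes down to passing tool arguments straight into a shell string or an unchecked filesystem call. A hypothetical sketch of the two defensive patterns involved — argument lists instead of shell strings, and resolving paths against an allowed base directory — might look like the following; the function names and directory layout are illustrative, not drawn from Backslash's report:

```python
import os
import subprocess

def run_tool_safely(pattern: str, path: str) -> str:
    """Run grep without a shell: the user-supplied pattern is passed as a
    plain argument, so an input like '; rm -rf ~' is just an odd search
    string instead of a second command."""
    result = subprocess.run(
        ["grep", "-n", "--", pattern, path],
        capture_output=True, text=True,
    )
    return result.stdout

def resolve_within(base: str, requested: str) -> str:
    """Block path traversal: resolve the requested path and refuse anything
    that escapes the allowed base directory (e.g. '../../etc/passwd')."""
    base = os.path.realpath(base)
    full = os.path.realpath(os.path.join(base, requested))
    if full != base and not full.startswith(base + os.sep):
        raise ValueError(f"path escapes allowed directory: {requested!r}")
    return full
```

Neither pattern is novel, but Backslash's findings suggest both are routinely skipped in independently published MCP servers.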
Beyond code execution, MCPs can serve as stealthy pathways for prompt injection and context poisoning, Backslash warned. Malicious or manipulated public content can change what an LLM sees - returning misleading data, or rerouting agent logic.
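The core of the context-poisoning problem is that content an MCP tool fetches from the outside world is handed to the model as if it were trustworthy. A deliberately naive, illustrative sketch of treating such content as untrusted data — real defenses are far more involved, and the phrase list and wrapper format here are purely hypothetical:

```python
import re

# Obvious injection phrases to flag in fetched content. A real filter
# would need far more than a keyword list; this only shows the idea of
# screening external content before it reaches the agent's context.
SUSPICIOUS = re.compile(
    r"(ignore (all )?(previous|prior) instructions"
    r"|you are now"
    r"|system prompt"
    r"|do not tell the user)",
    re.IGNORECASE,
)

def screen_external_content(text: str) -> str:
    """Wrap fetched content with a provenance marker, and block content
    containing obvious prompt-injection phrases."""
    if SUSPICIOUS.search(text):
        return "[BLOCKED: possible prompt-injection phrase detected]"
    # Label provenance so downstream logic can treat this as data,
    # not as instructions to the model.
    return f'<external-content untrusted="true">{text}</external-content>'
```

Keyword filtering alone is easily bypassed, which is part of why Backslash describes these pathways as stealthy.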
“Our research highlights several prevalent MCP server weaknesses that can open enterprise environments to threat vectors including remote code execution, data exposure, and network traversal,” said Yossi Pik, co-founder and CTO of Backslash Security.
More trouble on the way for MCP servers
In a yet-to-be-released finding, Backslash said it also identified an exploit path in which a seemingly benign public document can trigger a cascading compromise, because the MCP silently fed it into the LLM agent's logic without proper boundaries.
The issue here wasn’t a vulnerability in the MCP code itself, but rather in the configuration of the data source it accessed. Backslash said the issue affects a 'very popular' tool with tens of thousands of users and that it's currently working with the vendor to coordinate responsible disclosure.
The company has now launched a free self-assessment tool for vibe coding environments, designed to help security teams gain visibility into the vibe coding tools used across their organizations and continuously gauge the risk posed by the LLMs, MCP servers, and IDE AI rules in use.
"It's critical to give developers and vibe coders the tools and guidance to safely navigate this emerging attack service, which is why we’ve created the MCP Server Security Hub," said Pik.
"Developers will continue to tap MCP servers' flexibility and utility, so we wanted to give the community a safer means of doing so."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
