Developers relying on GitHub Copilot could be creating dangerously flawed code
Researchers found GitHub Copilot can generate flawed suggestions if a codebase already contains insecure code - and some developers may be oblivious to the problem


GitHub Copilot could be exacerbating software vulnerabilities due to the generative AI assistant’s tendency to replicate insecure code, researchers have warned.
Analysis from Snyk found that AI coding assistants such as GitHub Copilot can learn to imitate problematic patterns or draw on vulnerable material already present in a developer's codebase.
If developers are inputting code into GitHub that has security issues or technical problems, then the AI model can succumb to the “broken windows” theory by taking inspiration from its problematic surroundings.
In essence, if GitHub Copilot is fed with prompts containing vulnerable material, then it will learn to regurgitate that material in response to user interactions.
“Generative AI coding assistants, such as Copilot, don’t actually understand code semantics and, as a result, cannot judge it,” researchers said.
By way of evidence, Snyk highlighted an example in which Copilot utilized the “neighboring tabs” feature to access code for the purposes of context.
In this instance, the code in question already contained security flaws. Copilot then went on to amplify these security vulnerabilities in its following suggestions, leaving the developer at risk of SQL injection.
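The kind of amplification Snyk describes can be illustrated with a minimal sketch (the function names and schema here are hypothetical, not taken from Snyk's research): if surrounding files build SQL queries by string concatenation, an assistant drawing on that context is likely to suggest the same injectable pattern, whereas a parameterized query keeps user input strictly as data.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Injectable pattern: concatenating input into the query string lets
    # attacker-controlled text alter the SQL itself. If neighboring files
    # contain this pattern, an assistant may reproduce it in suggestions.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value separately, so the
    # input is treated as data rather than executable SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstration against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # 2 - injection returned every row
print(len(find_user_safe(conn, payload)))    # 0 - no user literally named that
```

The point is not the specific query but the contextual pull: an assistant that has "seen" the unsafe variant nearby has more insecure context to draw from when completing the next database call.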
“This means that existing security debt in a project can make insecure developers using Copilot even less secure,” said Snyk.
The exacerbation of security issues through GitHub Copilot should concern developers for several key reasons, researchers said.
Inexperienced or insecure developers, for example, could begin to develop bad habits as Copilot’s code suggestions reinforce mistakes or poor developer practice.
In a similar way, Copilot could pick up on coding patterns that, though previously acceptable, may have become outdated and vulnerable.
AI coding assistants also foster a culture lacking oversight, the study suggested, meaning problematic code may go unchecked and proliferate widely.
According to Snyk, data suggests that the average commercial project has around 40 vulnerabilities in first-party code, setting the perfect stage for the amplification of flaws if developers aren't diligent.
Coding assistants like GitHub Copilot should be used with caution
Snyk advised developers to fix issues at the source by ensuring their code is up-to-date and secure.
“Copilot is less likely to suggest insecure code in projects without security issues, as it has less insecure code context to draw from,” said Snyk.
The company also suggested some more specific mitigation methods for the various departments which this issue could affect.
For example, developers should "conduct manual reviews" of code generated by coding assistants, including comprehensive security assessments and vulnerability remediation. This will help reduce oversight blind spots, researchers suggested.
Security teams, on the other hand, should put static application security testing (SAST) guardrails in place, with policies for development teams to adhere to.
Security teams can also help to provide training and awareness to development teams, as well as “prioritize and triage” the backlog of development team issues.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.