Why can’t software firms sort their own security?
The tech giants have huge resources, but bounty hunters are better placed to discover many security flaws
Tech giants reportedly spend millions on development and security but, as frequent hacks, breaches and leaks demonstrate, software remains vulnerable to attacks from both individuals and organised groups.
This state of affairs raises an awkward question: if bounty hunters can identify flaws, why can't the major developers that wrote the software in the first place? Shouldn't their well-paid in-house teams be able to spot the flaws, either before launch or during testing?
"A company such as Google does have a huge team working on security it has tons of people looking for vulnerabilities," said Adam Bacchus, chief bounty officer at HackerOne, which acts as a go-between for companies and hackers for a 20% commission of the prizes paid for spotting bugs. "But there are tons of features being pushed out in tons of different products, plus there are acquisitions all the time, so things will slip through the cracks." According to Bacchus, when tech firms set up reward schemes, "they're not saying 'our security isn't good enough'; they're saying 'can I have a safety net?'"
The need for help, often from self-taught hackers, has seen Adobe, Apple, Facebook, Google, Microsoft, Tesla, Yahoo and government departments set up programmes that invite people to hack their systems in return for rewards, with top performers netting more than $350,000 a year.
The schemes are a far cry from the not-too-distant days when firms took hackers to court over breaches, even when they were trying to report the vulnerability to the company involved. It's a mutually beneficial truce. "It provides them [the white hats] with the assurance that the company won't pursue legal action against them if they report a vulnerability in an online service," said Katie Moussouris, CEO and founder of Luta Security, a firm that helps companies and governments work with hackers on security bounty programmes.
"In most countries it's illegal to hack a system, so bug bounty programmes provide useful parameters for researchers to know that if they look for a bug and report it then they'll be free from legal action and repercussions. We have hackers from around the world; it's a way for them to support families or [help] students graduate without debt it's a legitimate occupation."
Outside the system
Research from HackerOne shows that three-quarters of the hackers working on reward programmes are self-taught, whereas tech companies often hire graduates from top universities, all emerging with similar skillsets and attributes. The graduates might fit the corporate culture, but could be blind to issues that aren't taught in academia. "There's a huge difference between formal computer science education and actually knowing what you're talking about when it comes to security," said Moussouris.
"Computer security is a new discipline. Most universities aren't teaching computer security, they're teaching functional computing computer programming, but not all the security techniques and methodologies."
Moussouris helped the US Department of Defense launch its "Hack the Pentagon" programme, and said there are few universities in the world that have staff qualified to teach security. "A formally educated computer scientist often comes out of university with zero clue about writing secure code, and that's why we see the same mistakes made over and over again," she said.
Motivation gap
There are other reasons why software firms struggle to eliminate vulnerabilities, even those that have been identified internally or by third parties. The more secure a product, the less readily it works with other applications.
Take, for example, the work of Canadian startup Copperhead, which has released a hardened version of Android. The company claims it enables quicker patching, with added protection from a hardened kernel, libraries and compilers. Security experts say it's a big deal, especially given the numerous problems with Android's general security. It seems strange that Google doesn't buy the technology or add elements to its version of Android.
However, security comes at a price, which might be too high in terms of performance, interoperability and data sharing. Google, for example, would receive limited benefit from hardening the Android OS, and might see consumer frustration rise. "There is a general perception that security implies a performance overhead (memory, energy, processing power) so Android may have decided that, for most users, this overhead is unnecessary," said Chris Hankin, director of the Institute for Security Science and Technology at Imperial College London.
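As a rough illustration of what "hardening" means in practice, and why it carries the kind of runtime cost Hankin describes, consider the short C sketch below. It is a hypothetical example, not Copperhead or Android code: it contains a classic stack buffer overflow, and when built with common GCC/Clang hardening options such as -fstack-protector-strong and -D_FORTIFY_SOURCE=2 the overflow is caught at runtime and the process aborts, whereas an unhardened build may silently corrupt memory. The extra checks are precisely the sort of small CPU and memory overhead that platform owners weigh against compatibility and performance.

/* Hypothetical illustration of compiler hardening; not taken from any
 * vendor's codebase. Flags shown are standard GCC/Clang options.
 *
 * Unhardened build:  gcc -O2 overflow.c -o overflow
 * Hardened build:    gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 overflow.c -o overflow
 */
#include <stdio.h>
#include <string.h>

static void copy_name(const char *input)
{
    char buf[16];
    /* Unsafe: no bounds check, so a long input overruns buf.
     * With -D_FORTIFY_SOURCE=2 the checked strcpy aborts when the copy
     * exceeds 16 bytes; with -fstack-protector-strong a clobbered stack
     * canary is detected on return and the program aborts rather than
     * continuing with corrupted memory. */
    strcpy(buf, input);
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv)
{
    copy_name(argc > 1 ? argv[1] : "world");
    return 0;
}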
Upgrades and refresh rates mean users are often on different versions of an operating system, making this even more complicated. "Often the most advanced platform protections require a bit of re-architecture of an operating system," said Moussouris. "Third parties can often offer stronger security solutions in the short term, because making architectural changes too quickly in the main codebase may break the most commonly installed applications, rendering the most secure solution into the most difficult solution to deploy in the real world."
Moussouris claims that, during a programme she organised for Microsoft in 2012, security enhancements were balanced against other criteria. For example, prototypes needed low overheads, with a CPU and memory cost of no more than 5%, and could not introduce any application compatibility or usability regressions. "There are great ideas out there, but only the solutions that struck a balance between security and usability could make it into the official codebase."
Greater security is also at odds with the business models of many of the biggest platforms: it would impede sharing and connectivity. Even if a company such as Copperhead produces a locked-down version of Android, Facebook or Twitter, those companies may not adopt it into their codebases, because such a move would threaten earnings.
"Above all, Google wants to collect all end-users' data for exploiting and social graphing - and the same goes for Facebook," said Nicolas Diaz, a consultant with the YesWeHack bug bounty platform. "Smartphones are connected devices and are doomed to vulnerabilities due to wireless features that are basically open to attack."
Google, Facebook and Microsoft all declined to comment on this story.
This story originally appeared in PC Pro