The more “software eats the world,” the more porous your company’s defenses become. If it connects to the internet, it’s hackable and almost certainly is in the process of getting hacked. Right now.
If you’re lucky, you employ the hackers trying to crack your code. Or, if you’re like a swelling number of organizations, you’re contributing to the roughly $42 million in bounties paid in 2018, based on data published in HackerOne’s annual report. The latter option is growing in popularity as enterprises race to hire the white hats to out-code the black hats. This security-inspired feeding frenzy is turning into steady income for a rising number of developers, though it remains to be seen whether bug bounties are the ideal way to resolve our security problems.
Hacking or hiring?
There is one interesting wrinkle in this security mess: Much of the software currently being compromised isn’t actually owned by any particular company. If software is eating the world, open source has steadily been eating software. Unsurprisingly, some of the biggest security exploits of recent memory have come through unpatched or simply unsecured open source software, like Heartbleed (dubbed “open source’s worst hour” by ZDNet contributing writer Steven Vaughan-Nichols).
One solution for improving open source software's security is to fund the developers who write it. While corporations largely employ the developers behind big-name projects such as Linux and Kubernetes, smaller projects (which may still have broad adoption) don't have equivalent backing. This leaves such projects in the hands of overworked developers and understaffed teams for whom security may not be the top priority, and who may lack the bandwidth to tackle it even when it is.
SEE: Incident response policy (Tech Pro Research)
It’s also likely true that a company paying a developer to work on PostgreSQL, for example, isn’t going to pay her to work on security. It’s easy to assume security is being developed by someone else and instead push one’s developers to work on features/functionality that the company needs to drive its business. Security is rarely anyone’s priority until code is breached.
Could bug bounties step in to cover the gap?
Funding fixes for someone else
Bug bounties have become big business. According to HackerOne's report, roughly 600 hackers join bounty programs each day. While most hackers won't collect big bucks for their troubles, the best make a bundle. As HackerOne CEO Marten Mickos told me, two hackers in its program each cleared $1 million in bounties over the past year. That's real money.
Small wonder, then, that over 300,000 hackers have signed up to hunt bugs, uncovering over 100,000 validated vulnerabilities.
Unfortunately, bug bounties may not do much to improve security in open source. It's not that they couldn't work well, but rather that without the ownership incentive, companies have been slow to fund bounties that plug holes in open source code. As Gabriel Avner has highlighted:
When we purchase software from a commercial vendor, we expect them to do the work of keeping it in good working order, fixing vulnerabilities that may come up along the way and issuing patches and version updates. This is not the case in many open source projects, where maintainers work on the project in their spare time. As they are not meant to be commercial enterprises, they are not staffed to respond to issue updates at the same rate as, say, Microsoft’s Windows.
For those who remember just how terrible Microsoft was at responding to security holes in Windows, this is not a particularly apt comparison. It's also true that the open source community has responded well to security breaches once they are discovered. The bigger problem, Avner has suggested, is that companies don't apply the latest security patches to open source (or proprietary) code. No bug bounty or open source development methodology will fix that.
SEE: IT pro’s guide to effective patch management (free PDF) (TechRepublic)
In addition, according to HackerOne survey data, 72% of the hackers in its program focus on hacking websites—they’re not working on open source projects.
Open source security “by accident”
Or are they? After all, if these same hackers are probing Disney.com for vulnerabilities and in doing so uncover a security flaw in some of the open source code powering the site, that's a contribution to open source security, right? Absolutely. Simply by hunting bugs wherever they live, hackers will surface open source problems in the natural course of their inquiry.
The task, then, may not be to target open source software directly in search of security issues. The bigger challenge is getting IT to actually deploy the updated, patched versions of the open source software upon which they depend.
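The check itself is easy to automate; acting on the results is the hard part. As a minimal sketch of that first step, the snippet below compares installed dependency versions against a list of fixed-in versions. The package names and advisory data here are hypothetical examples, not a real vulnerability feed, which in practice would come from a scanner or advisory database.

```python
# Sketch: flag dependencies running below the minimum patched version.
# The "installed" and "advisories" data are hypothetical examples.

def parse_version(v):
    """Turn a dotted version string into a comparable tuple (numeric parts only)."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def find_unpatched(installed, advisories):
    """Return (name, installed, fixed) for packages predating the fixed version."""
    flagged = []
    for name, version in installed.items():
        fixed = advisories.get(name)
        if fixed and parse_version(version) < parse_version(fixed):
            flagged.append((name, version, fixed))
    return flagged

installed = {"examplelib": "1.4.2", "otherlib": "2.0.1"}
advisories = {"examplelib": "1.4.8"}  # fixed-in version for a known flaw

for name, have, need in find_unpatched(installed, advisories):
    print(f"{name} {have} is vulnerable; upgrade to >= {need}")
```

Real tooling (dependency scanners built into package managers and CI services) does exactly this comparison at scale; the unsolved part is the organizational follow-through of shipping the upgrade.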