
Why open source software isn't as secure as you think

A failure to spot a necessary validation in OpenSSL code before an update caused the Heartbleed bug

The OpenSSL Heartbleed fiasco proves beyond any doubt what many people have suspected for a long time: Just because open source code is available for inspection doesn't mean it's actually being inspected and is secure.

It's an important point, as the security of open source software relies on large numbers of sufficiently knowledgeable programmers scrutinising the code to root out and fix bugs promptly. This is summed up in Linus's Law: "Given enough eyeballs, all bugs are shallow."

But look at what happened with OpenSSL. Robin Seggelmann, a German programmer from Münster University, updated the OpenSSL code by adding a new Heartbeat keep-alive function. Unfortunately, he missed a necessary validation in his code to check that one particular variable had a realistic value.

The member of the OpenSSL development team who checked the code before the update was released also missed it. This caused the Heartbleed bug.
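To see the shape of the mistake, consider the sketch below. It is a heavily simplified illustration in C, not the actual OpenSSL source; the function and variable names (handle_heartbeat, record, payload_len) are invented for this article. The peer supplies both a payload and a length field claiming how big that payload is, and the single bounds check marked in the comments is the kind of validation that went missing.

    /* Simplified sketch of the Heartbleed flaw -- illustrative only,
     * not the real OpenSSL source. The peer controls both the payload
     * and the two-byte length field that describes it. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *handle_heartbeat(const unsigned char *record, size_t record_len)
    {
        if (record_len < 3)
            return NULL;

        /* Byte 0: message type; bytes 1-2: claimed payload length. */
        unsigned int payload_len = (record[1] << 8) | record[2];
        const unsigned char *payload = record + 3;

        /* This is the validation that was missing: check that the claimed
         * payload length actually fits inside the record we received. */
        if (3 + payload_len + 16 > record_len)
            return NULL;

        unsigned char *response = malloc(payload_len);
        if (response == NULL)
            return NULL;
        memcpy(response, payload, payload_len);  /* echo the payload back */
        return response;
    }

Without that check, memcpy() reads payload_len bytes from memory adjacent to the much smaller real payload and sends them back to the requester - up to 64KB of server memory per heartbeat request.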

One reviewer, even a handful of reviewers, can easily miss a trivial error such as this if they don't know there's a bug to be found. What's worrying is that, for two years, the Heartbleed bug existed in OpenSSL, in browsers and in Web servers, yet no one in the open source community spotted it. Not enough eyeballs scrutinised the code.

Commercial vendors don't review open source code

Also alarming is that OpenSSL was used as a component in hardware products offered by commercial vendors such as F5 Networks, Citrix Systems, Riverbed Technology and Barracuda Networks - all of whom failed to scrutinise the code adequately before using it, according to Mamoon Yunus, CEO of Forum Systems, a secure cloud gateway vendor.

"You would think that it would be my responsibility as a vendor, if I commercialise OpenSSL, to put my eyeballs on it," he says. "You have to take a level of ownership of the code if you build a company based on an open source component."

Instead, Yunus believes vendors just regarded OpenSSL as a useful bolt-on to their hardware products - and, since it was open source, assumed other people were examining the code.

"Everyone assumed other eyeballs were looking at it. They took the attitude that it was a million other people's responsibility to look at it, so it wasn't their responsibility," he says. "That's where the negligence comes in from an open source angle."


Yunus suggests that commercial vendors should run effective peer review programs for any open source code that they use, run static and dynamic analysis tools over it and "fuzz" the code to ensure it's as bug-free as possible. "What have these companies been doing for the last 10 or 15 years? If I were them, I would be taking a long, hard look at QA processes."
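To make that suggestion concrete, below is a minimal sketch of the sort of fuzz harness Yunus is describing, written against LLVM's libFuzzer interface and built with AddressSanitizer (clang -g -fsanitize=fuzzer,address harness.c). The parser it exercises, parse_tls_record(), is a hypothetical stand-in with a deliberately planted Heartbleed-style over-read; it is not real vendor or OpenSSL code.

    #include <stdint.h>
    #include <string.h>

    /* Deliberately naive stand-in parser: trusts a peer-supplied length
     * field the way the flawed heartbeat code did. Hypothetical code,
     * for illustration only. */
    static int parse_tls_record(const uint8_t *data, size_t len)
    {
        if (len < 3)
            return -1;
        uint16_t claimed = (uint16_t)((data[1] << 8) | data[2]);
        uint8_t copy[64];
        if (claimed <= sizeof(copy))
            memcpy(copy, data + 3, claimed);  /* planted bug: ignores len */
        return 0;
    }

    /* libFuzzer entry point: the fuzzer calls this with generated inputs. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_tls_record(data, size);
        return 0;
    }

Run against a target like this, the fuzzer generates inputs until AddressSanitizer reports the out-of-bounds read - exactly the class of bug that two years of casual eyeballing failed to catch in OpenSSL.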

In fact, Yunus questions whether OpenSSL should ever have been written in a relatively low-level language such as C, echoing security expert Bruce Schneier by suggesting it could be seen as "criminal negligence" to use a language that lacks memory safety for such a security-sensitive application.

Jeffrey Hammond, a security analyst at Forrester Research, contradicts this view. He points out that performance is a key attribute of OpenSSL as it has to deal with huge volumes of packets.

"If you have access to memory you are going to be open to some types of attacks, but you get the performance," Hammond points out. "I wouldn't say they should never have developed OpenSSL in C, but it's true that with performance comes responsibility."

OpenSSL, Truecrypt show limits of open source code review

One problem facing many open source projects - and the reason it's hard to blame Seggelemann or the rest of the OpenSSL team - is that carrying out a rigorous code security review is immensely time consuming and requires a high level of skill. That means it's very expensive.

This is illustrated by another open source project: The TrueCrypt encryption program. The code has been open to anyone who cares to look at it since the project started 10 years ago - but it's only very recently, following fundraising campaigns on Indiegogo and Fundfill that yielded US$60,000, that the code has undergone a proper security audit.


An initial report, covering just the program's bootloader and Windows kernel driver, identified 11 vulnerabilities and described the quality of the source code as poor. It also pointed out that compiling TrueCrypt from source requires outdated (in one case, 21-year-old) and unsigned build tools that are hard to obtain from trustworthy sources and could be modified maliciously.

The code auditors said, "Overall, the source code for both the bootloader and the Windows kernel driver did not meet expected standards for secure code."

What's worrying is that this only came to light after funds were raised to hire the resources to carry out a code review. The open source community had plenty of opportunities to do this over the last 10 years - but the truth is that the community doesn't have the time, skills or resources (including money) to do the job properly.

A new problem will affect the security of OpenSSL going forward, too: The code is being forked, thanks to an initiative called LibreSSL led by the OpenBSD team. LibreSSL is intended to be a stripped-down version of OpenSSL; in the first week of the LibreSSL project, more than 90,000 lines of code were removed, including those supporting operating systems such as VMS and OS/2.

The problem, simply stated: Since it's easy to see what's being removed from LibreSSL, and which bits are being replaced as they're deemed insecure, OpenSSL users are left exposed to malicious hackers who may exploit the weaknesses that LibreSSL discovers and removes - that is, unless the OpenSSL project can keep up with LibreSSL's progress.

Security by obscurity is never a good idea, but once vulnerabilities are made public, they need to be fixed right away. It's not clear that the OpenSSL team is in a position to do that - it's said that the project only had one full-time maintainer - or that software and hardware products that use OpenSSL will necessarily be updated promptly even if the OpenSSL software itself is.

Taking open source security seriously after Heartbleed

The good news for those concerned about the security of open source projects like OpenSSL is that help could be on its way in the shape of the Core Infrastructure Initiative (CII), a new project founded by the Linux Foundation in response to Heartbleed. Its purpose is to funnel needed money into software projects such as OpenSSL that are critical to the functioning of the Internet.

"Our global economy is built on top of many open source projects," says Jim Zemlin, the Linux Foundation's executive director. "We will now be able to support additional developers and maintainers to work full-time supporting other essential open source projects."

Support from the CII may also include funding for security audits, computing and test infrastructure. So far, about US$4 million has been pledged over the next three years by companies including Google, Microsoft and Facebook.

Paul Rubens is a technology journalist based in England. Contact him at paul@rubens.org.