Solving the problems that Heartbleed and the CCS Injection Vulnerability have created requires a multi-faceted approach. It's not enough to simply install an updated version and expect the patched OpenSSL code to protect you.
The reality is that OpenSSL is a piece of software, and all software is written by humans who make mistakes. That means part of the solution to the flaws in OpenSSL comes back to basics.
Code Reviews and Audits
OpenSSL is just one example of an open source software library that is broadly used for important functions. Heartbleed and the CCS Injection Vulnerability had such a massive impact because the code wasn't checked thoroughly: it was edited by many different parties over time, poorly commented and not adequately tested.
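To see what thorough checking should have caught, consider a simplified, hypothetical sketch of the Heartbleed pattern below (this is illustrative Python, not OpenSSL's actual C code): the server echoes back as many bytes as the attacker's length field claims, instead of as many as the payload actually contains.

```python
def respond_vulnerable(buffer: bytes, record_len: int) -> bytes:
    # A toy heartbeat record: 2-byte big-endian length field, then payload.
    claimed = int.from_bytes(buffer[:2], "big")
    # BUG: trusts the attacker-supplied length and may slice past the real
    # record into whatever sits next to it in the buffer.
    return buffer[2 : 2 + claimed]

def respond_patched(buffer: bytes, record_len: int) -> bytes:
    claimed = int.from_bytes(buffer[:2], "big")
    # FIX: refuse records whose claimed length exceeds the actual payload.
    if claimed > record_len - 2:
        return b""  # silently discard the malformed record
    return buffer[2 : 2 + claimed]

# Simulated process memory: a hostile record (claims 12 bytes, carries 2)
# sitting directly beside a secret that should never leave the server.
memory = (12).to_bytes(2, "big") + b"hi" + b"SECRET_KEY"

leaked = respond_vulnerable(memory, record_len=4)  # includes b"SECRET_KEY"
safe = respond_patched(memory, record_len=4)       # empty: record rejected
```

The missing bounds check is a single line; the point is that nobody was reading the code with an eye for exactly this kind of omission.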
Companies that rely on open source code need to conduct their own thorough reviews and testing. Heartbleed has shown that it is no longer acceptable to believe that the crowd will do this for you.
Many eyes did not find the flaws in OpenSSL. Finding bugs requires eyes that are specifically looking for vulnerabilities. During his presidency in the 1980s, Ronald Reagan was famous for his adoption of the Russian proverb "Trust, but verify". It's advice that we should heed when it comes to open source software.
One potential solution businesses could consider is the use of bug bounties. Employ a team of hackers to find the flaws in your systems and pay them for what they find.
Move your development teams around so that they are not always working on the same applications, and consider rotating them through testing as well so that they learn how to find bugs and vulnerabilities. Getting them to think differently means they will approach problem solving in new ways when they write new code.
Periodically engage external auditors to review parts of your internally developed and externally sourced code. It might be prohibitively expensive for them to review everything, but having them look at something new every few months can be a good way to keep developers focused.
Using Red/Blue Teams to probe and respond is also a good tactic. Again, this gets developers to think like hackers so that they develop code with security in mind from the outset and not as an afterthought.
Organisations spend a lot of money on ensuring developers are up to speed with the latest programming techniques and that they have access to the best tools to do their job. It makes sense to divert some money into security training for developers.
By developing code that is secure from the outset, companies can avoid the cost of retrofitting security later. But this requires a change in how a lot of code is created, and that will require developers to adopt a different mindset, one where meeting security needs is just as important as meeting functional requirements.
Abstract security out of applications
This sounds counterintuitive after saying developers need to embed security more deeply into their development process. However, what we are suggesting is that code that is known, tested and verified to be secure can be used by multiple applications.
For example, a common identity and access management platform can be used by all applications to ensure that the right users have access to the right data and processing options. If a problem is found in the security and access management software then it can be fixed once rather than having to fix every application.
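The idea can be sketched in a few lines of Python. Everything here is a hypothetical illustration (the role model, function names and applications are assumptions, not any real product's API): every application calls one shared, verified access check, so an authorisation fix lands in a single place.

```python
# Central role-to-permission mapping, maintained and audited once.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    """The single check every application calls instead of rolling its own."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Two different applications reusing the same verified check:
def billing_app_delete(role: str) -> str:
    return "deleted" if is_allowed(role, "delete") else "forbidden"

def cms_app_publish(role: str) -> str:
    return "published" if is_allowed(role, "write") else "forbidden"
```

If an authorisation bug is found in `is_allowed`, patching that one function protects the billing system, the CMS and every other caller at the same time.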
Flaws such as Heartbleed and the CCS Injection Vulnerability could have been detected much earlier had testing processes kept up with usage patterns.
The Internet was created during a time when hardware was far less reliable than it is today. It had to be able to survive the loss of large physical elements of the network. Today, the threats are different. The Internet needs to survive targeted attacks.
That means testing needs to have an increased focus on thwarting targeted attacks and not just ensuring business functionality is supported.
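A sketch of what that shift looks like in practice, using a hypothetical length-prefixed message parser as the system under test: functional tests confirm that well-formed input works, while attack-focused tests feed the same parser deliberately hostile input, such as length fields that lie and truncated records.

```python
def parse_length_prefixed(data: bytes) -> bytes:
    """Return the payload of a 1-byte length-prefixed message."""
    if not data:
        raise ValueError("empty message")
    length = data[0]
    if length > len(data) - 1:
        raise ValueError("declared length exceeds actual payload")
    return data[1 : 1 + length]

# Functional test: well-formed input is handled correctly.
assert parse_length_prefixed(b"\x03abc") == b"abc"

# Attack-style tests: empty input, a length that lies, a truncated record.
hostile_inputs = [b"", b"\xff", b"\x05ab"]
for evil in hostile_inputs:
    try:
        parse_length_prefixed(evil)
        raise AssertionError("hostile input was accepted")
    except ValueError:
        pass  # rejected, as it should be
```

The second group of tests is exactly the kind that conventional, functionality-driven test suites omit, and exactly the kind that would have exercised OpenSSL's heartbeat handling.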