The true root causes of software security failures

Developers being overly trusting is one of them

In the 10 years since I launched my consulting/training venture, I've worked with thousands of software developers around the world. As you might expect, I've seen many software security failures. Given that experience, I'm often asked what I think are the biggest, baddest mistakes made in software today.

In response, I don't cite specific failures. That's what the OWASP Top-10 does. It and similar lists serve a purpose, but at heart they describe a set of symptoms of just a few, far bigger problems.

The two biggest problems I see -- focusing too much on functional specifications, and being overly trusting -- are in a sense two sides of the same coin. And it seems to me that pretty much the entire OWASP Top-10 list stems from these two things.

It's easy to understand why these problems are so persistent. The vast majority of software developers I've met and worked with are highly intelligent, motivated people. They often work in extremely stressful environments, with seemingly impossible deadlines to meet and a management team that seems to be uncompromising on those deadlines. With those pressures, it's not at all surprising that developers tend to focus most on the functionality of their products at the expense of other considerations, such as security. "What features does the customer need?" seems a more pressing question than "What can go wrong?" or "How can an attacker trick the software into misbehaving?"

The other side of the coin is a general underestimation of threats. That, of course, makes it easier for developers to shift their focus overwhelmingly to the functionality side of things. The problem here is that developers tend to be overly trusting in everything from users to the APIs they use in their code.

When you focus only on building the specified functionality, and not on preventing behavior you never specified, you don't anticipate potential attacks -- and you end up with the OWASP Top-10 and other lists like it.

This is my message: Building functionality is indispensable, but so is preventing code from doing things you don't anticipate. Getting that right is the tough part, of course.

I've found two things that are enormously helpful in getting it right. The first is to take an active role in marshaling data through an application. That starts with rigorously validating inputs, and it ends with ensuring that data output cannot cause any harm in its intended environment. The second is to anticipate and handle exception conditions properly.
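The exception-handling habit is the one I see skipped most often, so here is a minimal sketch of what I mean, in Python. The permission-lookup function is a hypothetical stand-in for whatever your application actually calls; the point it illustrates is failing closed, so that an unexpected error denies rather than silently allows.

```python
import logging

log = logging.getLogger(__name__)

def lookup_permissions(user_id: str) -> set:
    """Hypothetical stand-in for a call to a permissions service."""
    raise TimeoutError("permissions service unreachable")

def is_request_authorized(user_id: str) -> bool:
    try:
        permissions = lookup_permissions(user_id)
    except Exception:
        # Fail closed: an unexpected failure must never be treated as "allowed".
        log.exception("permission lookup failed for user %r", user_id)
        return False
    return "approve_payments" in permissions

print(is_request_authorized("alice"))  # prints False: the error denies access
```

The same pattern applies whether the failure comes from a database, a remote API or a parsing routine: decide deliberately what the safe outcome is, and make sure the error path lands there. The data-marshaling habit deserves a closer look, which comes next.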

On the input side, the approach I coach people to use is positive validation of all inputs whenever possible: if you're expecting an integer, then anything other than an integer is dangerous and should be rejected. The opposite, negative validation, simply looks for (usually known) dangerous inputs and blocks those. We've seen countless failures of negative validation, and I suggest avoiding it.
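Here is a minimal sketch of positive validation in Python. The nine-digit account-ID rule is just an assumed example of what "expected input" might mean in your application.

```python
import re

# Allow-list rule (an assumed example): an account ID is one to nine digits, nothing else.
ACCOUNT_ID = re.compile(r"[0-9]{1,9}")

def parse_account_id(raw: str) -> int:
    """Positive validation: accept only what we expect and reject everything else."""
    if not ACCOUNT_ID.fullmatch(raw):
        raise ValueError("account ID must be one to nine digits")
    return int(raw)

print(parse_account_id("42"))                    # -> 42
try:
    parse_account_id("42; DROP TABLE accounts")  # rejected without a list of "bad" strings
except ValueError as err:
    print(err)
```

Notice that the rejection requires no knowledge of attack strings at all; anything that fails to match the expected pattern is turned away.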

Next, when outputting data, you have to marshal it so that it cannot cause problems wherever it ends up. If the output context is, say, XML, then user data must never be allowed to alter that context by containing characters like "<" or ">". Failing to do that is an open invitation to injection attacks. But it's tricky -- you have to understand exactly where you're sending data and how it will be interpreted there, and then format it safely for that context.
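A small sketch of that marshaling step for an XML output context, using Python's standard-library helpers (the comment element here is a hypothetical example):

```python
from xml.sax.saxutils import escape, quoteattr

def comment_to_xml(author: str, body: str) -> str:
    """Marshal untrusted text into an XML element (a hypothetical output context)."""
    # escape() neutralizes &, < and > in element content; quoteattr() quotes and
    # escapes an attribute value, including any embedded quote characters.
    return "<comment author=%s>%s</comment>" % (quoteattr(author), escape(body))

print(comment_to_xml('Mallory" injected="1', "<script>alert(1)</script>"))
# -> <comment author='Mallory" injected="1'>&lt;script&gt;alert(1)&lt;/script&gt;</comment>
```

The key design choice is that the escaping is specific to the output context: HTML, SQL, shell commands and log files each need their own marshaling, not a one-size-fits-all filter.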

And you need to always expect the worst of users, APIs and all input/output. I liken this to the following scenario: You're a caretaker of a toddler, and the two of you need to cross a busy city street during rush hour. Now, toddlers are not generally noted for their good judgment, so it is your job to expect the worst at all times. When you see the toddler looking inquisitively at something shiny lying in the street, you must prevent the toddler from diving into the street to retrieve it. Failing to do that can of course have bad consequences.

In case you missed the analogy, your software is the toddler. Security professionals are akin to primary caregivers, who are attuned to the dangers that lurk because they are accustomed to always pondering what can go wrong. Software professionals are more like secondary caregivers, who have taken the toddler for rush-hour strolls less often and haven't learned to be so pessimistic. The thing is that, just as with those two caregivers, if we can get security professionals and software professionals to work together effectively, we will all benefit in the long run.

I recently worked with a group of software folks who seemed to accept this message and quickly grok it. Within a day of the project's finish, my client told me, he'd had several requests from both his developers and his security team to explore how the two organizations could better work together.

A sweet victory indeed, albeit just the first step of many to make meaningful changes in how that group develops software. It certainly isn't going to be an easy process, but recognizing that it must be done is vital.

With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University's CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.
