Should vendors close all security holes?

In the past I have argued that vendors should close all known security holes. This week a reader wrote me with an interesting argument that I'm still debating, although my overall conclusion stands: Vendors should close all known security holes, whether publicly discussed or not. The idea is that closing every existing vulnerability strengthens the product and protects consumers. Sounds great, right?

The reader wrote to say that his company often sits on security bugs until they are publicly announced or until at least one customer complains. Before you start disagreeing with this policy, hear out the rest of his argument.

"Our company spends significantly to root out security issues," says the reader. "We train all our programmers in secure coding, and we follow the basic tenets of secure programming design and management. When bugs are reported, we fix them. Any significant security bug that is likely to be high risk or widely used is also immediately fixed. But if we internally find a low- or medium-risk security bug, we often sit on the bug until it is reported publicly. We still research the bug and come up with tentative solutions, but we don't patch the problem."

He continues, "We have five main arguments for waiting to close a noncritical, internally found, security bug. First, in the grand scheme of things, we'd rather spend our resources on high-risk bugs, whether publicly known or unknown. Every medium- or low-risk security bug in the pipeline essentially slows down the whole process. We have a fixed number of resources. We don't have an unlimited budget like Microsoft."

"Second, we give next priority to any publicly known bug. We get evaluated on the bugs known by the public and how fast we close them. You even tout your beloved Secunia.com, and they publicize how fast vendors patch known vulnerabilities. People are checking out that site, and others, to see how well our product stacks up to the competition. Senior management certainly cares how the media portrays us. And nobody, not even senior management, knows about the internally found bugs. We'd be crazy to concentrate on anything else.

"Third, the additional bugs that external hackers find are commonly found by examining the patches we apply to our software. Look at our vulnerability statistics. Most of our hits center around two main features. Both features came to the attention of hackers after we had released patches for them fixing internally found problems. In both cases we located the vulnerable code and patched. Within a month, three more related holes were found by the hacker community. OK, so we didn't do a great job in ferreting out all the errors in the features. After the last round of fixes, we investigated each feature with a more comprehensive analysis and code review. We even hired an external penetration testing team. We found many more holes and patched them. Then in the next six months, we got hacked again in the same features. There's lots of blame going around, along with better solutions, but it doesn't change the fact if we had kept the original exploits unpatched, we would have avoided three additional, publicly discussed exploits.

"Fourth, every disclosed bug increases the pace of battle against the hackers. It's like the anti-virus war. Anti-virus vendors detect each new virus and the virus writers make better viruses. It's possible that if anti-virus software had never been created, we wouldn't be dealing with the level of worm and bot sophistication that we face today. If we patch a hole faster than it needed to be patched, it just makes the hackers look harder, faster than they otherwise would. We are at the losing end of every hacker wannabe in the world, and every fix we have to make slows down our product and costs money. Why do we want to encourage a better war? If we shut up, when the hacker finally discovers the bug, the war proceeds slower, and our customers are on the winning side.

"Fifth, when a bug isn't announced, most hackers don't exploit it. The vast majority of our customers remain protected, because even if a nonpublicly known bug really is known, it's only known by a small group of hackers. Damage is very limited. You've said the same thing in one of your previous columns that I frequently share with coworkers. Once the bug is publicly known, our products come under attack by thousands of hackers and dozens of worms. Most of our customers are protected as soon as they apply our patches, but for some reason many of our customers never patch, or at least don't patch until they call us with their system owned and the damage done.

"Industry pundits such as yourself often say that it benefits customers more when a company closes all known security holes, but in my 25 years in the industry, I haven't seen that to be true. In fact I've seen the exact opposite. And before you reply, I haven't seen an official study that says otherwise. Until you can provide me with a research paper, everything you say in reply is just your opinion. With all this said, once the hole is publicly announced, or becomes high-risk, we close it. And we close it fast because we already knew about it, coded a solution, and tested it."

On first reading, I thought there were so many factual mistakes in this reader's argument that I didn't know where to begin. But as I re-read it, I realized he did make some cogent points. As Stephen Northcutt of SANS taught me, "Eat the watermelon and spit out the seeds." There is a little truth in every argument.