Security industry reacts to Oracle's CSO missive

If any questions remained about how Oracle's chief security officer, Mary Ann Davidson, felt about customers uncovering software vulnerabilities in its applications, they were laid to rest yesterday in a strongly worded blog post, "No, You Really Can't." The post, swiftly pulled by Oracle, held nothing back in its view that under no circumstances should customers, or their hired security researchers, evaluate Oracle source code for potential security flaws.

While Oracle did remove the post from its corporate site, the Internet has a memory that refuses to be erased and copies of the missive remain in Google's Web cache and on SecLists.org. The Internet Archive also has a copy.

Enterprises and security researchers do appear to be vetting software for security flaws more often. Concerned about software security and quality, enterprises are conducting what they deem due diligence on the applications they use, and a growing army of security researchers is probing enterprise software for weaknesses.

Additionally, at least some of the increase is due to the proliferation of formalized bug bounty programs, where software makers provide financial incentives to security researchers who find flaws that were presumably missed by the software developer's internal quality assurance and security teams. These programs are underway at software makers ranging from Tesla to Twitter.

"I have seen a large-ish uptick in customers reverse engineering our code to attempt to find security vulnerabilities in it. <Insert big sigh here.> This is why I've been writing a lot of letters to customers that start with 'hi, howzit, aloha' but end with 'please comply with your license agreement and stop reverse engineering our code, already,'" Davidson wrote. "I can understand that in a world where it seems almost every day someone else had a data breach and lost umpteen gazillion records to unnamed intruders who may have been working at the behest of a hostile nation-state, people want to go the extra mile to secure their systems."

Davidson also prodded customers to keep their own enterprise security house in order before probing vendors' software for potential weaknesses:

That said, you would think that before gearing up to run that extra mile, customers would already have ensured they've identified their critical systems, encrypted sensitive data, applied all relevant patches, be on a supported product release, use tools to ensure configurations are locked down -- in short, the usual security hygiene -- before they attempt to find zero day vulnerabilities in the products they are using. And in fact, there are a lot of data breaches that would be prevented by doing all that stuff, as unsexy as it is, instead of hyperventilating that the Big Bad Advanced Persistent Threat using a zero-day is out to get me! Whether you are running your own IT show or a cloud provider is running it for you, there are a host of good security practices that are well worth doing.

For software security assurance, Davidson advised enterprises to talk to their software suppliers about their assurance programs and to check for certifications such as Common Criteria or FIPS 140. "Most vendors -- at least, most of the large-ish ones I know -- have fairly robust assurance programs now (we know this because we all compare notes at conferences). That's all well and good, is appropriate customer due diligence and stops well short of 'hey, I think I will do the vendor's job for him/her/it and look for problems in source code myself,'" she wrote.

To say that the post resulted in a strong industry backlash would be an understatement. Oracle distanced itself from Davidson's opinions in its statement distributed to the press. "The security of our products and services has always been critically important to Oracle. Oracle has a robust program of product security assurance and works with third party researchers and customers to jointly ensure that applications built with Oracle technology are secure. We removed the post as it does not reflect our beliefs or our relationship with our customers," Oracle executive vice president and chief corporate architect Edward Screven said in the statement.

"It's incredibly arrogant for Oracle to suppose that they have all the answers and that their IP protections are sufficient and proper to guard against bad guys hacking your organization," said Jonathan Feldman, CIO at the city of Asheville, N.C. "We know it's stupid. It's not like we have one year of data. Or five. We have at least 20 years of experience saying that the bad guys do deep, debugger-level code dives, and to ignore that with a Pollyanna 'everybody had better be nice, now, because the Big O has Everything Under Control' is crazy and irresponsible and ignorant," Feldman said.

Others commented on the vitriol and magnitude of the blowback on Twitter and other social networks. Gadi Evron, founder and CEO of cybersecurity startup Cymmetria, said he found many of the reactions on the Internet distasteful.

As did Adrian Sanabria, senior analyst in the enterprise security practice at The 451 Group. "I object to people calling her crazy and nutty. I think her argument was well put together (though fatally flawed) and the post was well written -- entertaining, even. Forget her point-of-view and the EULA for a moment. The REAL issue is that the CSO of a large corporation made a bold statement on a major issue and her company pulled her statement and publicly denounced her views," Sanabria said.

Andrew van der Stock, project lead of the OWASP Developer Guide at the OWASP Foundation, said, "The things I agree about is that there needs to be a better way of reporting vulnerabilities. Just dumping Veracode or Nessus output on a vendor without making sure it's real is stupid. I also agree with her that folks should pay attention to their own stuff first and foremost, but where we part company is if you stumble across a security defect in a database, that absolutely should be reported and possibly rewarded, not threatened with a sinning letter. So no reporting vulnerabilities without a Proof of Concept and a repeatable write up," he said.

Few would disagree, and based on interviews with software makers over the years, there is no shortage of what many consider less than helpful submissions from bug finders who run software analysis tools and report findings that are nothing more than false positives. "In all professions there are charlatans, jackasses and frauds who shouldn't be doing anything more than grabbing people coffee - but there are also a lot of highly qualified, well intentioned security researchers that do offer a tremendous value to the community," said Amrit Williams, chief technology officer at CloudPassage.

Ira Winkler, president at the security awareness firm Secure Mentem, argued that no matter how irritating bug submissions are, Oracle should be able to adequately manage the situation. "Oracle is a very large and rich company, with products that are widely distributed and used for critical applications. Period. They have a responsibility to make their software as strong as possible," Winkler said. "There might be a lot of false positives and associated costs, but that is a factor of [their selling] a lot of software that has a lot of users. It is a cost of doing business. I'm sure all software companies have the same false positive reports. I don't hear Microsoft et al. complaining."

Gene Spafford, professor of computer science and of electrical and computer engineering at Purdue University, said that software vendors have brought much of the current bug-finding environment upon themselves. "If vendors really applied all we know about how to build robust, secured software -- including design, testing, and careful deployment -- Mary Ann's position would be quite sensible. The sloppy, slap-dash, first-to-market coding in most products plus dump-it-on-the-users EULAs mean that we have developed a culture where lots of parties feel the need to probe and test things on their own," Spafford said.

"If she had concluded with a statement like, 'Please continue to feel free to send us the security bugs you find and we'll get them fixed, but please don't waste our time with 500 pages of un-validated findings,' it would've been a wee bit more palatable," said David Litchfield, an experienced software security researcher and consultant at Datacom TSS, who has himself found a great number of Oracle software vulnerabilities.
