CIO

While the cyber war tail wags the national security dog, software security offers a different path to cyber peace

This is the fourth in my series of interviews with C-level executives who also happen to be thought leaders in cyber security and privacy. Remember, I enjoy pointing out that "C-level executive" and "thought leader" are not synonyms. Previously, I interviewed:

Jeremiah Grossman, founder and Chief Technology Officer (CTO) of WhiteHat Security (What's real and what's not in web security),

Michelle Dennedy, Chief Privacy Officer (CPO) for McAfee (The perilous path to a new privacy),

and Christopher Burgess, Chief Security Officer for Atigeo (How to meet the challenges of 21st century security and privacy).

In this installment, I engage Gary McGraw, CTO of Cigital, and the principal creator and driver of the Building Security In Maturity Model (BSIMM). McGraw is one of those I respect most in this field, and we had a delightful and wide-ranging discussion, stretching from the insecurity of electronic voting machines to the dangers of weak or conflicted governance, from the release of BSIMM4 to the follies of cyber war mongering. But in this piece, I am going to focus on just BSIMM4 and cyber war, because the two themes actually dovetail (no pun intended) in a meaningful and timely way.

Richard Power: Let's start with your perspective on how BSIMM has evolved: what is particularly striking in this year's iteration, what has surprised you so far, and what hasn't?

Gary McGraw: In the last four years, BSIMM has wildly exceeded my expectations. The thought, in the beginning, was to build a data-driven model. Go out and gather data, and build the model to describe the data ... BSIMM4 has ten times more data than the first BSIMM iteration we released. We have done ninety-nine measurements. Some firms have been measured multiple times, over different numbers of years, and some firms have had sub-organizations underneath the mother firm measured separately and rolled up into one measurement ... If you add up all the firms, there are fifty-one.

"The second [common technique] is something that will be near and dear to the heart of operational computer security people, and that is, running a software security disaster simulation drill. "

Gary McGraw on findings in BSIMM4

In BSIMM4, we finally have enough data to publish results for verticals without outing anyone accidentally; so we published the numbers for the financial services vertical, in which nineteen firms participate, and for the independent software vendor vertical as well (e.g., Microsoft, VMware, etc.).

It's really cool to be able to compare those populations. We could always do it, but now we have released it so everyone can look at the data ... And for the first time in the BSIMM project, we identified two new activities that we had not seen before in previous measurements.

Let me explain. When we go to do these interviews, sometimes we'll find some crazy activity that is a one-off, and you will only see it in one place. We would never add an activity that is only done by one firm into the model. What's required instead is multiple observations across multiple firms.

And lo and behold, there were two of those.

The first was using automated code-scanning technology to look for malicious code, which as we both know is an NP-hard problem, but there are some heuristics you can use, and you can do some static analysis, and a bunch of firms have been doing this, especially in the financial services space. And they have been doing it with real impact; they have actually found some interesting stuff in their code, placed there by developers gone awry.
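McGraw is describing an observed practice, not a particular tool, but for readers who want a concrete picture of what such heuristic scanning can look like, here is a minimal sketch. It is mine, not Cigital's or any vendor's, and the rule names and regular expressions are illustrative assumptions: a small script that flags source lines matching a few "developer gone awry" signatures so a human reviewer can take a closer look.

```python
# Minimal heuristic scan for suspicious source-code patterns (illustrative only).
# The patterns below are assumptions chosen for the sketch: possible time bombs,
# hardcoded credentials, obfuscated eval/exec, and unexpected network callbacks.
# Heuristics like these produce false positives by design; a hit is a prompt for
# human review, not a verdict that the code is malicious.
import re
import sys
from pathlib import Path

SUSPICIOUS_PATTERNS = {
    "possible time bomb": re.compile(r"datetime\.(date|datetime)\(\s*20\d\d\s*,"),
    "hardcoded credential": re.compile(
        r"(password|passwd|secret|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "obfuscated execution": re.compile(
        r"(eval|exec)\s*\(\s*(base64|codecs|bytes\.fromhex)", re.I),
    "unexpected network callback": re.compile(
        r"socket\.connect\(|urllib\.request\.urlopen\("),
}

def scan_file(path: Path) -> list[tuple[int, str, str]]:
    """Return (line_number, rule_name, line_text) for every heuristic hit."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for rule, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, rule, line.strip()))
    return findings

if __name__ == "__main__":
    # Scan a source tree recursively; default to the current directory.
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for source in root.rglob("*.py"):
        for lineno, rule, text in scan_file(source):
            print(f"{source}:{lineno}: [{rule}] {text}")
```

The firms McGraw mentions use far more sophisticated static analysis than this, but the shape is the same: codified suspicions, run automatically across the codebase, with humans triaging whatever surfaces.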

The second is something that will be near and dear to the heart of operational computer security people, and that was, running a software security disaster simulation drill. Let's say you have been outed by Anonymous, and they have sold your stuff to the Wall Street Journal, and all of a sudden you have a massive software security problem that you have to triage, fix and spin P.R., go! There are many firms that are doing that sort of planning. Disaster simulation has always been a part of computer security, but this is the first time we have seen it play a role directly in software security. And now we have one hundred eleven activities. We have a completely data-driven model; as we find new data, the model will change ... It is really a very powerful tool for budgeting, and for strategic thinking about how you should improve, evolve and augment your software security ...

How is it used by an organization, internally? How would a CSO or CISO or CPO present BSIMM, and to whom would they socialize it? What is its practical application? What would a path to implementing a BSIMM-inspired program look like?

BSIMM is about measuring software security initiatives, not about measuring a particular piece of software. BSIMM is not a methodology, it's a measurement tool, and you can use the measurement tool no matter what methodology you are using for software security ... It's a whole bunch of facts. And as you know, in computer security, facts are few and far between. Opinions come fast and loose. There are many, many opinions about software security and how you ought to do it. And what we have done is make a collection of facts that you can use to either justify or disavow or debunk or deliver those opinions.

You have been writing and speaking about cyber war, and this is an issue that I have studied for two decades. I have seen it morph from an issue that was not taken seriously enough into a monster of hype, which is only making the original issue worse in the guise of addressing it. Now I will shut up and ask for your thoughts.

As you know, I live very close to Washington, D.C. And I have been going in and interacting with the policy people to try to get some sanity injected into the discussion around cyber warfare, and what cyber war is, and how you do accountability, and all sorts of very thorny issues. My view is that we are spending an awful lot of time and energy talking about cyber war from the offensive perspective, i.e., building incredibly powerful attack machines, and attack ideas, and very, very little time talking about proactive defense, i.e., building stuff that does not suck. And so if you want to make it simple, remember when your grandma said, "don't throw rocks if you live in a glass house"? Well, we all live in glass houses; and in the U.S., we appear to be, from a cyber perspective, spending a lot of time and energy figuring out how to throw faster and more accurate rocks, inside our glass house. And I find it really distressing. I have written about this with the CEO of the Center for a New American Security, Nate Fick, and published a paper [PDF link] that made the rounds in Washington, D.C. and up the executive branch chain, and went to the Pentagon, and to all the spooks.

We are saying, "Hey, when we have this discussion, let's at least keep this perspective in mind, we need to think about real defense, not just firewalls and anti-virus, but real defense, building secure software, and security engineering, right now." If for no other reason than that cyber war is inevitable because our stuff is so broken, it's sort of a game-changer, in the sense that you can be a smaller nation, with fewer resources, and still build a cyber capability with a reasonable budget. It is a lot easier, I would say, to build a cyber capability than an F-22 or an F-35. It evens the playing field, which means that we need to get the playing field uneven again. And the only way to do that is to build better stuff, that's hard to attack ...

If we want to work on cyber issues, we should find out what the real cyber problem is, and there is way more cyber crime than there is cyber war. You can kill three birds with one stone, cyber war, cyber terror and cyber crime, and the one stone is good security engineering, and good software security. I gave a talk on this at Dartmouth and the video is on the net ...

It is impossible to sufficiently mitigate the impact of a nuclear attack, but it is possible to mitigate the impact of a cyber attack, IF you are investing in the things you should be investing in anyway, to deal with the other issues.

I absolutely agree. And I would like to see more attention paid to that. I have to tell you, the disconcerting part, from my seat too close to Washington, D.C., is that whenever I go in there to sit on a panel with guys from the war machine, they are always mad that I showed up. It is the usual suspects, beating the gong of war. Those guys go back through the revolving door. There's plenty of money to be made. The thing that bothers me is that now we see an escalation of the discourse all the way to the Secretary of Defense. If you look at the speech that Panetta gave on 10-11-12, you can see that a lot of very powerful people are talking about the importance of the issue, and it is critical that we get them to understand, from a technical perspective, what's true and what's not true about this domain.

Richard Power is a Distinguished Fellow and Director of Strategic Communications at Carnegie Mellon University CyLab, one of the world's leading academic cyber security research programs. He is the author of Tangled Web: Tales of Digital Crime from the Shadows of Cyberspace (2000) and co-author of Secrets Stolen, Fortunes Lost: Preventing Intellectual Property Theft and Economic Espionage in the 21st Century (2008).