INTERVIEW: Fighting Trim

Bill Boni is vice president and CISO of Motorola. Ira Winkler is chief security strategist for Hewlett-Packard. In separate interviews, CSO (US) Executive Editor Derek Slater discussed with them their respective visions of what's needed to get the security practice in shape. Both advocated paying attention to the little things.

CSO: You've both mentioned "the death of a thousand cuts" as a description of what security faces today. What does that mean?

Ira Winkler: Let me give you a recent example. I was talking to somebody at a large Canadian railroad company. She said, "I'm trying to convince my boss of the need for computer security. And he has this attitude that, first of all, we're a railroad company, we're not that high-tech. And, on top of that, we're not an American company, so we're not a target that anybody really cares about."

In other words, the boss doesn't believe [his company is] going to be the target of a devastating attack. OK, let's accept that — because, quite frankly, I think all these claims of terrorism and all the FUD work against us anyway. Still, I asked her: Was she hit by Code Red? She said yes. Nimda? Yes. Slammer? Yes. Other viruses? Yes. I asked, "Do you have insiders doing things that cost you a lot of money?" She said, "Yes, we have a lot of incidents we have to investigate. We're a large company."

So I said, "Did you ever add up the costs from all of that?" She said, "No, but it would easily be in the tens of millions of dollars."

Bill [Boni] used the term "the death of a thousand cuts" a long time ago. There are a lot of little things that, when added up, would be devastating if they happened all at once. And if you do the basic, simple things on an ongoing basis — to protect yourself against the small things that add up to a major loss in total — you'd also be preventing the mythical terrorist attacks and other large-scale events.
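As a minimal illustration of Winkler's arithmetic, here is a sketch in Python. Every incident count and per-incident cost below is invented purely to show how quickly small losses compound; real figures would come from the company's own records.

    # Hypothetical incident counts and per-incident costs, for
    # illustration only.
    incidents = {
        "worm cleanup (Code Red, Nimda, Slammer)": (3, 250_000),
        "insider incidents investigated": (40, 50_000),
        "routine virus outbreaks": (200, 5_000),
    }

    total = 0
    for name, (count, cost_each) in incidents.items():
        subtotal = count * cost_each
        total += subtotal
        print(f"{name}: {count} x ${cost_each:,} = ${subtotal:,}")
    print(f"Annual total: ${total:,}")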

Bill Boni: The way I look at it is that most organisations don't have a framework for keeping track of loss, particularly intellectual property-related loss. As IP has become digital, you now face the possibility of it being misappropriated without the loss ever being detected. It doesn't become manifest until an engineer in your company realises that your biggest competitors have, at the same time as you, what you expected to have first — and you thought you were a year ahead of them. Plus, they have lower price points because they didn't have to spend the money to develop it.

So you [should try to] capture and synthesise a significant portion of those loss events, using HR, the physical security groups and other branches of the company as sensing mechanisms.

A lot of talk right now in IS is about the software consoles that do event analysis and correlation. I'm talking about creating an analog of that at the corporate level that correlates the technical aspects of security with everything else — HR, legal, all these different areas. Now management can make better-informed decisions with data, not just anecdotes.
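As an illustration of the corporate-level correlation Boni describes, and not of any particular product, a minimal sketch might group events from different departments by the employee involved. The event feeds, field names and records below are all invented:

    from collections import defaultdict

    # Hypothetical event feeds from the different "sensing mechanisms"
    # Boni names; sources, names and details are invented.
    events = [
        {"source": "IT", "person": "jdoe", "detail": "internal network scan"},
        {"source": "physical", "person": "jdoe", "detail": "documents found in briefcase"},
        {"source": "HR", "person": "jdoe", "detail": "access-policy complaint on file"},
        {"source": "IT", "person": "asmith", "detail": "repeated failed VPN logins"},
    ]

    # Group every event by the person involved, across departments.
    by_person = defaultdict(list)
    for event in events:
        by_person[event["person"]].append(event)

    # Flag anyone who appears in more than one department's feed.
    for person, seen in by_person.items():
        sources = {e["source"] for e in seen}
        if len(sources) > 1:
            print(f"Review {person}: events from {sorted(sources)}")
            for e in seen:
                print(f"  [{e['source']}] {e['detail']}")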

A lot of practitioners will take advantage of a breach to say, "Aha, see, we need to protect our IP." But the counterargument is, "This was a onetime event." But if you have a process in place that allows you to prove that, no, it happened three times in the last quarter alone. . . .

The next important question is, What's the source [of the vulnerability]? Is it technology? A legal loophole? A cultural blind spot in employees or management?

Even if you know your intellectual property is leaking out, how do you make that connection between what's been lost and where the loophole is?

Boni: This is where you go back to the fundamentals of counterintelligence. Information security can make its best contributions when you use the whole suite of tools and techniques with a counterintelligence mind-set.

Another example. If someone is scanning the internal network, your internal intrusion detection system goes off, and typically somebody from IT calls the employee who's doing the scanning and says, "Stop doing that." And he replies, "Oh, I was just testing this thing for my college class on IT management. I won't do that again."

He offers you a plausible explanation, and that's the end of it. Throughout the history of IP theft, this is how it always goes. HR sees one thing, physical security sees the guy "accidentally" carrying out documents ("Oops...I didn't realise that got into my briefcase"), and the IT people see the scanning incident. But nobody puts them all together to realise it's the same guy!

With IP theft, you can't always determine that it was Professor Plum in the library with the lead pipe. But [by adopting] a counterintelligence mind-set you can identify gaps in your protection scheme. Sometimes it [really] is accidental; I've worked cases where they did high-level internal product announcements at a ritzy offsite [location] and left copies of printouts lying around. Sometimes it's not accidental. People in other countries — Ira has seen this — send in "dummies" who get jobs in the payroll department, and [once] they've been there for several months, there's a very good likelihood they'll be able to access valuable documents.

The protection mechanisms are too disjointed. Just as in infosec, we have challenges putting together the big picture. The challenge [in IP loss prevention] is how to pull together all those other sensory mechanisms: access cards, legal policies, areas where product models and mockups are done. You have to consider those as sensing devices or places where you can potentially detect behaviours. But they don't [usually] get correlated in any meaningful way in most organisations.

Winkler: It's hard to put a dollar figure on data or IP loss. When a loss happens and companies talk about prosecuting hackers, they'll say, "I've lost millions of dollars to this." In fact, there was the recent case [involving] Lockheed Martin and Boeing where they were talking billions of dollars. However, I don't think Lockheed Martin took a billion-dollar loss on its balance sheet. Very rarely do companies declare the loss in an accounting procedure. And if you don't do that, your executives aren't going to think, "We can protect ourselves against IP theft and save ourselves millions of dollars a year!"

So again, what security managers and CIOs should do is add up the little losses, which together amount to a big loss, and then put their security programs in place by adjusting for the little things.

You touch on the intersection of corporate or operational security issues and info security. Ira, you have a story where you were doing penetration tests at a client company and were able to walk out with critical engineering documents that you found — not in the engineering department but in the graphics department.

Winkler: Right. The CEO has the graphic arts department at his beck and call, and its responsibility is to make documents look pretty. Now, the graphic arts people think of themselves as artists; they're not thinking about, "Hey, I have some of the most valuable documents in the company on my server." Obviously, if you go to the financial group and say, "I want to see your financial data," they'll laugh you out of the office. But if you go to the graphic artists and say, "Can I take a look at your computers for a minute?" — they'll say, "Sure, why not." So people have to understand that there are many places where valuable data goes. And, ironically, some of the most valuable data gets sent to places where they think the data's irrelevant.
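Winkler's point suggests a simple data-discovery exercise: look for documents bearing classification markers in places that aren't supposed to hold them. A minimal sketch follows, in which the share path, department layout and marker strings are all assumptions made for illustration:

    import os

    # Assumptions: shares are mounted under /shares, the first directory
    # level is the department, and sensitive documents carry one of
    # these marker strings.
    EXPECTED = {"finance", "engineering", "legal"}
    MARKERS = ("CONFIDENTIAL", "PROPRIETARY")

    def find_strays(root):
        """Flag marked documents sitting outside the expected departments."""
        for dirpath, _dirs, files in os.walk(root):
            department = os.path.relpath(dirpath, root).split(os.sep)[0]
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as f:
                        text = f.read(100_000)  # sample the start of the file
                except OSError:
                    continue
                if department not in EXPECTED and any(m in text for m in MARKERS):
                    print(f"Sensitive marker in an unexpected place: {path}")

    find_strays("/shares")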

That makes an argument for active cooperation of all security groups. It also makes a case for the concept of Defense in Depth: Deemphasise the perimeter-oriented approach to security and start thinking in terms of layers of internal defense.

Winkler: Defense in Depth is actually a Department of Defense concept. The DoD has been using it for a long time. Most people start thinking of defense at the perimeter, but Defense in Depth [advocates] treat each piece of the network as its own [defended zone]. It's not a new term, but it's getting more publicity as more defense people end up in private industry. It's a darn good term.

If you adopt Defense in Depth, you eliminate the debate about which constitutes the bigger threat — internal or external breaches — which seems like a pointless question anyway.

Winkler: At one level, it's pointless, because I've always said threat is irrelevant. It's irrelevant whether they're a teenager, an insider or an outsider — someone is going to try to get you. But different threats do have different levels of resources they can throw at you. Teen hackers may scan your Web site for a while, and then maybe they make a phone call to try some social engineering. But then they go away. However, if you are a [financial sector] company, you are also potentially threatened by outsiders who want to steal money. And if you're talking about, potentially, more organised criminals or competitors, they will get a job inside your company or, more likely, recruit someone who's already inside to steal information for them. So you have to do Defense in Depth.

Back to the money question. We have written several articles saying that CSOs need to do a better job quantifying the cost of a breach, return on security investments (ROSI) and so on. Donn Parker, of SRI International fame, wrote in to say that that's the wrong approach; it's really about due diligence. A lot of people say you can't calculate ROSI. Is it a red herring?

Winkler: There's a big difference between due diligence and security. Due diligence says I might suffer a loss, but nobody can sue me for it. Security, instead, needs to be approached from the standpoint of balancing my risk. If there was some great standard out there, some good laws that said here's what you must do specifically in terms of information security, then taking a due diligence approach might be acceptable.

But if I'm a good security person, I have more to worry about than just preventing a lawsuit; I'm supposed to supply a good cost-benefit to my company. I need to keep it not only out of court, but profitable. I would argue that, theoretically, Enron might have done due diligence, but we all know where it ended up. Due diligence basically says that as long as your CEO can't be sued if the company goes bankrupt, you're fine.

Let's talk more about standards and regulations. We recently surveyed readers about whether, since budget justification is so difficult, there should be more regulation. We got a very mixed response.

Winkler: You have to realise that a regulation, if nothing else, is going to [apply] a uniform standard across a large number of computers. It's never going to be perfect, but it can be reasonable. If you want good [proposed] regulations, here are three.

First is to configure systems according to an acceptable guideline from, say, the Center for Internet Security, from the National Security Agency or from the vendors — freely available [specifications] that have gone through industry peer review.

Second, manage [systems] correctly with a patch-management program. Fixing bugs within three months, generally, keeps you relatively secure. If you graph the CERT Coordination Center data, most exploits begin to rise after about three months; the activity hits a peak and then comes back down around six months. So if you fix a vulnerability within one to three months, the likelihood of your being exploited is acceptable. [A sketch of this patch-window check, together with the configuration baseline above, follows the third point.]

Third, network administrators should be reasonably well trained. When computers were first coming out, I [heard about] a company that took its secretary and said, "OK, you know Microsoft Word and Excel, so we're making you our Unix administrator." True story. That's the type of environment we were in. But today, just as you need well-trained mechanics to fix an airplane, you need well-trained administrators to maintain your systems. Some companies are going to say, "I can't afford to send my people to a class to learn how to do this well." But, to me, if you can't afford to do the basics right, you're not offering a secure service to your customers, and maybe you shouldn't be in business.
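The first two of Winkler's proposed regulations lend themselves to mechanical checks. Here is a minimal sketch of both: a configuration compared against a hardening baseline, and vulnerabilities aged against the one-to-three-month window. The baseline settings, host names and vulnerability IDs are all invented:

    from datetime import date

    # Hypothetical hardening baseline, in the spirit of a CIS-style benchmark.
    BASELINE = {"telnet_enabled": False, "password_min_length": 8}
    actual_config = {"telnet_enabled": True, "password_min_length": 6}

    for setting, required in BASELINE.items():
        found = actual_config.get(setting)
        if found != required:
            print(f"Config drift: {setting} = {found}, baseline requires {required}")

    # Winkler's one-to-three-month patch window as a simple age check.
    PATCH_WINDOW_DAYS = 90

    # Hypothetical inventory: (host, vulnerability ID, date the fix shipped).
    inventory = [
        ("web01", "VULN-001", date(2003, 1, 20)),
        ("db02", "VULN-002", date(2003, 5, 2)),
    ]

    today = date(2003, 6, 1)  # fixed date so the example is reproducible
    for host, vuln, published in inventory:
        age = (today - published).days
        status = "PAST the window" if age > PATCH_WINDOW_DAYS else "inside the window"
        print(f"{host}: {vuln} fix is {age} days old ({status})")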

In raising the notion of "reasonable regulations," you talk about basing regulatory decisions on historical data such as the CERT diagrams. Another analogy that might be useful is the process of legally mandated auto inspections. You have to maintain a car to certain benchmark specs, and you ought to maintain your computer systems similarly.

Winkler: By configuring your computers well, you can keep them up and running. Turning off unnecessary processes makes the systems more efficient. This is a case where security increases performance. People lose track of the fact that patches don't all have to do with security [vulnerabilities]; some have to do with functionality. A good security program makes your systems more functional and more stable.

Unfortunately, better patching alone won't make information security work. Looking over the PricewaterhouseCoopers global survey results, the only clear conclusion is that corporate infosec is a mess. There's a bizarre lack of correlation between spending and efficacy, for example.

Boni: You don't have metrics in most cases to measure the nature of a loss; and even if you do, how do you use them to determine controls that will be effective to prevent that loss in the future? You would almost need a baseline of the prior state: "Before we got hit, we were experiencing this many problems; and after we implemented this fix, that number was reduced by this much. . . ." But there are a lot of variables in play at the same time. It's very complex.

I spend a lot of my time understanding what people are doing anecdotally: looking at documents, reports from vendors, articles in periodicals such as CSO. I'm also on a number of mailing lists. What I'm looking for is what's actually happening, what the experience of my trusted colleagues is. Information security is still too much of an arcane art right now and not enough science. We're trying to develop the Six Sigma methodology for IS. I think, over time, that kind of process will give us a better basis for having discussions with corporate management.

You're starting to see that now. For example, if you roll up your enterprise antivirus stats, or your vulnerability-tool results, across the company, you can say to management, "Here's our starting position, and our goal is to reduce those incidents by an order of magnitude," and later report back: "Here's our result, here's our goal, here's the variance, and here's how we explain the variance."
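Boni's starting-position, goal and variance report can be sketched in a few lines; the incident counts below are invented for illustration:

    # Hypothetical quarterly rollup of antivirus and vulnerability-scan
    # incident counts; every figure is invented.
    baseline_incidents = 1200            # starting position, per quarter
    goal = baseline_incidents // 10      # order-of-magnitude reduction
    actual = 310                         # what the quarter actually produced

    variance = actual - goal
    print(f"Baseline: {baseline_incidents}   Goal: {goal}   Actual: {actual}")
    print(f"Variance: {variance:+d} incidents versus goal")
    # Explaining that variance is where the Six Sigma-style conversation
    # with management happens.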

The CEO's team will always say "give me the data." Because when you're talking to the CFO, for example, the whole nature of managing business is measuring risk versus potential reward. But my more technical-minded brethren tend to see things as binary.

You've been involved in security for many years. From where you sit, what's the state of infosec today? Better? Worse?

Boni: I think it's getting better, but at the same time more complicated and challenging. Once upon a time, a good security program was an array of technology safeguards. Increasingly, the value-add is enabling the business through strategic application of technologies or functionality — facilitating alliances and partnerships, for example. The technical foundation is not eliminated; it's table stakes. But now the infosec pro has to understand that what [business executives] want is, of course, to be able to pursue the new business, the new product, the new approach. And the security pro can't respond, "That's never going to fly, never ever." Instead, you have to start with, "OK, there are risks, and here are some approaches to managing the risks. Here's the decision matrix, and here's my recommendation." It's more like, "Here's your menu of options, and would you like fries with that?"

Care to hazard a guess as to how many information security people understand that concept?

Boni: Well, a manager-level employee may not be personally equipped to have that dialogue or may not be organisationally well placed [for it]. You can pretty much track the maturity of the security program, typically, by its placement within the company. As we see more CISOs put in place, that's becoming part and parcel of how they interact with upper management.

It seems like a race to see whether a critical mass of companies can reach that level of maturity before regulation becomes a necessity. The Department of Homeland Security has expressed a preference against regulation and is in favour of public-private partnerships. The DHS is counting on the private sector getting its cybersecurity in order out of something like enlightened self-interest.

Boni: I attended a meeting where Tom Ridge and key DHS staff came to speak, and there was some very pointed questioning by attendees and a certain amount of private-sector scepticism. But my sense is that Ridge understands that. And [partnership] is the right way to approach it. They're talking about maybe assigning Secret Service agents to banks and big brokerages to help interpret laws and regulations, so there's nobody who accidentally handles things the wrong way due to a lack of understanding. They'd take the posture that, "We're here from the government to help you, be a co-pilot, help interpret our mind-numbing array of existing regulations." But also to help disseminate information and analysis and provide reports to the security officers; for example, "Here's a scam we've seen, and here's how it works." Bingo. That's the kind of information I want as a private-sector employee. I'm happier if we can use our understanding of criminal mechanisms to prevent cybercrime, not just penalise wrongdoers after the fact. Let's turn government into a learning organisation.

That is the analog to the cyberunderground mechanism that shares information: "Hey, this is how this exploit works, let's add something and go hack someone!" The Rand Corporation [an independent think tank] has a study called "The Advent of Netwar" [available at www.rand.org/publications/MR/MR789], an excellent analysis of that kind of loose, network-model organisation. The more traditional model in government is to send all the information to the centre point and then sit back and expect them to be the ones who act. Hierarchies like that are at a tremendous disadvantage versus a network-model group of attackers. So let's build a network-enabled group of defenders. Information-sharing from point to point as well as point to centre has great potential and is going to be required to have an effective societal response to cybercrime or terrorism. Community policing in cyberspace.

Do you think the government is going to achieve that model of network-enabled defense, powered by information sharing?

Boni: The challenge is for us to give the government folks a chance to prove that they can really do it that way. They're all saying this — the FBI, the Secret Service, everybody. If it takes root, it will become a virtuous reinforcing circle. Once it shows payoff for people who participate and share information, a community of interest is formed. Instead of the "Gee, I'm really glad they didn't hit me" model. It has to show a meaningful benefit for active participation.

Whereas if you just write regulations that mandate the use of specific defensive technologies, it'll be the Maginot Line in cyberspace, massively obsolete by the time you get it in place. Protecting against the last threat, not the next one.

Some Fortune 500 corporate security honchos have expressed a strong sense that security, generally, is at a historic inflection point — being driven toward its fulfillment by a confluence of factors: terrorism, yes, the creation or elevation of executive positions, a sort of slow corporate awakening to the importance of risk management and security. Do you agree?

Winkler: I don't think we're at the inflection point yet, and I'll tell you why. There's a difference between should and must. Everybody says we should be secure, and managers today are saying we should be secure. The question is when are the managers going to say we must be secure?

You can go back a decade and hear people saying, "We want to be secure, we want to provide the best service to our customers, we want to secure their data and so on." But when do people actually make security a must? Citibank did after the Vladimir Levin incident. A lot of banks made security a must because they learned a little from Citibank's pain and their own. Because, let's face it, every bank loses money to computer theft; they just don't all admit it.

I don't see it until regulations or third-party liability lawsuits or something else forces people to start addressing it in the proper way. What will get companies all the way there is when government says you have to do it, or else when insurance companies say that, if you want director's and officer's insurance, you have to have an appropriate program. HIPAA, Gramm-Leach-Bliley and so forth are a start, but until I see some large-scale efforts to go beyond specific industries, I don't think we're at that inflection point yet.