Is DevOps good or bad for security?

Does DevOps give you better security through agility or make development and deployment too fast to secure?

If you think of DevOps as failing fast – as Facebook used to put it, “move fast and break things” – then you might also think of rapid releases, automation and continuous integration and deployment as giving you less time to find security problems. After all, you’re changing code, updating features and adding new capabilities more rapidly. That means more chances to introduce bugs or miss vulnerabilities.

2016 looks set to be the year DevOps goes mainstream: Gartner predicts that 25 percent of Global 2000 businesses will be using DevOps techniques this year, and HP Enterprise is even bolder, claiming that “within five years, DevOps will be the norm when it comes to software development.” Does all that speed mean security problems waiting to happen?

Craig Miller, who’s helped move Microsoft’s Bing search engine to continuous deployment (the service now updates four or five times a day), doesn’t believe that’s necessarily true. “The classic response is faster means maybe lower quality or something might get through the process and I don’t think that's true at all,” says Miller. “CD, if you do it right, provides all the auditing you need to actually be confident in the software you push out. You have to make sure your software is high quality and I think security is a subset of quality.”

Forrester analyst Kurt Bittner agrees. “There’s a perception that with DevOps, speed is achieved by cutting corners and skipping important steps, that it’s uncontrolled,” says Bittner. “The exact opposite is true; it’s a very controlled, very structured environment. Doing DevOps right gives you higher quality, better visibility and speed, as opposed to achieving speed by cutting corners.”

That ought to be better for security, but only if continuous integration and continuous deployment are matched with continuous security and monitoring.

The key is the centralized, standardized delivery pipeline that’s a necessary, foundational piece for DevOps, says Bittner. “You get visibility into what’s being built and you get the opportunity to inject various kinds of activities, which might be code scanning, or it might be peer reviews, various kinds of security-related testing, control over the environment and having the correct settings.”
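In practice, a delivery pipeline of that kind is an ordered series of gates that every change has to pass before it ships. The following is a minimal sketch, not anything taken from Bing or Forrester; the stage names and commands are illustrative assumptions. The point is simply that security checks such as code scanning sit in the same automated sequence as build, test and deploy, and a failure in any of them blocks the release.

```python
# Hypothetical sketch of a delivery pipeline with security gates "injected"
# between build and deploy. Stage names and make targets are assumptions for
# illustration, not a real Bing or Microsoft pipeline.
import subprocess
import sys

# Each stage is a command the pipeline runs in order; any non-zero exit code
# stops the run, so a failed security check blocks the deploy stage.
PIPELINE_STAGES = [
    ("build",            ["make", "build"]),
    ("unit-tests",       ["make", "test"]),
    ("static-analysis",  ["make", "scan"]),        # e.g. a code-scanning step
    ("dependency-audit", ["make", "audit-deps"]),  # e.g. known-vulnerability check
    ("deploy",           ["make", "deploy"]),
]

def run_pipeline() -> None:
    for name, command in PIPELINE_STAGES:
        print(f"== stage: {name} ==")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Any failed stage, security-related or not, stops the release.
            print(f"stage '{name}' failed; release blocked", file=sys.stderr)
            sys.exit(result.returncode)
    print("all stages passed; release shipped")

if __name__ == "__main__":
    run_pipeline()
```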

Testing is not optional

Miller is uncompromising about the importance of automating testing. “I think the biggest failure for a lot of companies is that they allow failures in test. We have no tolerance at all for failures.” That might mean changing your tools as well as your development practices, he warns. “The Web security toolset we used was not very scalable; it was an app that somebody ran every week. We’re not going to accept a tool I have to have someone run for me. If it can’t be automated, I think there’s a problem with the design.”
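That rule is easy to enforce mechanically once test tools emit machine-readable results. As a rough sketch, assuming a JUnit-style XML report (the file name and format here are assumptions, not Bing’s actual setup), a small gate script can reject any build with a single failing test, so nobody has to run or interpret the tool by hand:

```python
# Hypothetical "no tolerance for failures in test" gate: read a JUnit-style
# results file and refuse to continue if anything failed. The report path and
# zero-failure threshold are assumptions for this sketch.
import sys
import xml.etree.ElementTree as ET

def gate_on_test_results(report_path: str = "test-results.xml") -> None:
    root = ET.parse(report_path).getroot()
    # JUnit-style reports use either a single <testsuite> root or a
    # <testsuites> wrapper; handle both and sum failure/error counts.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    failures = sum(
        int(s.get("failures", 0)) + int(s.get("errors", 0)) for s in suites
    )
    if failures:
        print(f"{failures} test failure(s); build rejected", file=sys.stderr)
        sys.exit(1)  # a non-zero exit is how the pipeline enforces the rule
    print("all tests passed; build may proceed")

if __name__ == "__main__":
    gate_on_test_results(sys.argv[1] if len(sys.argv) > 1 else "test-results.xml")
```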


Bing’s tool for signing binaries during deployment was also manual; both were rewritten, which took time and effort but also improved the tools. “It was the way they did things, but we don’t care how they do things,” says Miller firmly. “We’re not going to do this.”

Recovering fast, and adapting and iterating based on what you’ve learned, are as important as speed to DevOps and Miller dislikes the “fail fast” term. “I like learn fast, because I’m not trying to fail; I'm trying to succeed – but when I don't succeed I want to learn, and hopefully next time it comes around, we know how to do this better now and incrementally get better over time.”

“Putting a guardrail up on the highway allows you to go faster, not slower,” says Alan Sharp-Paul, co-founder of DevOps tool vendor Upguard. “With proper checks, you catch problems before they become showstoppers and security risks in production. And when it’s part of the automated workflow, the overhead is essentially nil.”

That’s what the figures in Puppet’s 2015 State of DevOps Report show as well: “High-performing IT organizations deploy 30x more frequently with 200x shorter lead times; they have 60x fewer failures and recover 168x faster.”

The Heartbleed bug in OpenSSL was a good demonstration of that, suggests Bittner. “People who had DevOps and better delivery pipelines were able to respond quickly and that got some attention in businesses; they were able to respond almost immediately and everyone else was scrambling. When a threat occurs, being able to respond quickly is the big differentiator.”

Miller views that as one of the benefits of DevOps. “Because CD emphasizes having a code review process, small check-ins and rapid mitigation come with it. If you can deploy four or five times a day, you can mitigate something within hours.”

The same applies to spotting breaches, says Sam Guckenheimer from Microsoft’s developer tools team. “With DevOps, you're worried about things like mean time to detect, mean time to remediate, how quickly can I find indicators of compromise. If something anomalous happens on a configuration, you have telemetry that helps you detect, and you keep improving your telemetry – so you get better detection, you get better at spotting indicators of compromise and you get better at remediation.”
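Those metrics are simple to compute once the telemetry exists. The sketch below uses invented sample incidents, nothing from Microsoft’s tooling; mean time to detect and mean time to remediate are just averages over the gaps between when an anomaly occurred, when it was detected and when it was fixed:

```python
# Hypothetical illustration of mean time to detect (MTTD) and mean time to
# remediate (MTTR). The incident records are made-up sample data.
from datetime import datetime
from statistics import mean

incidents = [
    # (anomaly occurred,           detected,                     remediated)
    (datetime(2016, 3, 1, 9, 0),   datetime(2016, 3, 1, 9, 20),  datetime(2016, 3, 1, 11, 0)),
    (datetime(2016, 3, 4, 14, 5),  datetime(2016, 3, 4, 14, 35), datetime(2016, 3, 4, 16, 5)),
]

# MTTD: average gap between occurrence and detection, in minutes.
mttd_minutes = mean((d - o).total_seconds() / 60 for o, d, _ in incidents)
# MTTR: average gap between detection and remediation, in minutes.
mttr_minutes = mean((r - d).total_seconds() / 60 for _, d, r in incidents)

# Tracking these per release shows whether better telemetry is actually
# shortening detection and remediation, which is the feedback loop described above.
print(f"mean time to detect:    {mttd_minutes:.0f} minutes")
print(f"mean time to remediate: {mttr_minutes:.0f} minutes")
```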

Continuous deployment makes life harder for attackers in two ways, Guckenheimer explains. “If you're one of the bad guys what do you want? You want a static network with lots of snowflakes and lots of places to hide that aren't touched. And if someone detects you, you want to be able to spot the defensive action so you can take countermeasures.”
