CIO

Putting the DNS attack in perspective

With more business apps running online, it's time to shore up DoS prevention measures

A few years ago, I had the privilege of seeing some root DNS servers in action at VeriSign's main headquarters. It's something I had wanted to do for over a decade, and I was literally shaking with excitement (yes, I am that big of a geek).

Physical security was high. It took three-factor authentication to get me past the two mantraps and the bomb-blast-protected walls. My escort had to use hand geometry, a PIN, a smart card, and a retinal scan to get me into the inner sanctum.

It turns out VeriSign's DNS root servers at this location are two physically separate, 10-high stacks of 1U pizza-box-style IBM eServers (VeriSign said it tested many different servers, and IBM's gave the best performance per dollar), running Solaris and Red Hat Linux. Not surprisingly, they don't run BIND, and VeriSign keeps things intentionally diverse to protect against a platform-specific attack.

Watching the network lights rapidly blink under millions of transactions per second was a blast. Did I mention I was a geek?

Although I've walked through hundreds of high-security centers since, I've always remained impressed with the VeriSign tour. It wasn't too many years ago that some of the U.S. East Coast DNS servers were reported to be housed in an elevator storage room inside a parking garage.

The DNS infrastructure has come a long way since then and is no longer under threat of being rammed by a car. Unfortunately, physical attacks are the easy ones to stop. The Internet's global DNS infrastructure recently experienced its first large, widespread DoS attack since the Oct. 21, 2002 incident. This year's attack happened on February 6 and involved only three of the Internet backbone's 13 root DNS servers (the 2002 attack targeted all 13).

I discussed the recent attack with VeriSign's chief security officer, Ken Silva. He said that the attack focused on root servers G (maintained by the U.S. Department of Defense) and L (maintained by ICANN), and to a lesser extent, M (maintained in Japan by the WIDE Project). During the 12-hour attack, nearly 90 percent of legitimate queries to those servers were dropped.

That's a lot worse than I had been led to believe by other news sources. I hadn't questioned the other sources because I, like most people I know, didn't even notice the attack until it was over and had made headlines.

Silva said a couple of things made this attack less threatening than it could have been. First, as stated above, it affected only three of the 13 root servers. I asked whether the results would have been worse had the attackers decided to hit all 13 servers at once.

"Yes," he replied without missing a beat. "Some phases of the attack contained more than 1Gb of malformed data per second. Normal traffic is a half a million queries per second, or about 26 billion requests per day." These attack loads were significant enough that any DNS server would suffer even with anti-DoS protections put in place.

Second, Silva said the attacks were "plain" malformed DNS requests from spoofed IP addresses. They weren't reflection or amplification attacks, in which small spoofed queries are bounced off third-party servers that return much larger responses; those can saturate bandwidth even more.

Third, much of the information served up by the root servers is cached locally on downstream DNS servers. Most of the traffic to the root servers comes from newly started DNS servers that need to resolve top-level domains not already in their caches. Silva said that if the attacks had been directed at the .com top-level domain servers, where more than 50 million .com domain names are stored, the pain would have been worse.
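
To make the caching point concrete, here's a minimal sketch, assuming the third-party dnspython package is installed, that asks a root server for the .com delegation and prints the TTLs a downstream resolver would honor. The root server address is the well-known public one for a.root-servers.net; the script is illustrative only.

```python
# Sketch: fetch the .com delegation from a root server and show how long a
# downstream resolver may cache it before it needs to ask the root again.
import dns.message
import dns.query
import dns.rdatatype

A_ROOT = "198.41.0.4"  # a.root-servers.net

# EDNS is enabled so the full delegation fits in a UDP response.
query = dns.message.make_query("com.", dns.rdatatype.NS, use_edns=0)
response = dns.query.udp(query, A_ROOT, timeout=5)

# The delegation normally comes back as a referral; the TTL on the NS records
# (typically on the order of days) is how long a resolver can reuse it.
for rrset in list(response.answer) + list(response.authority):
    print(f"{rrset.name} {dns.rdatatype.to_text(rrset.rdtype)} TTL={rrset.ttl}s")
    for record in rrset:
        print("   ", record)
```

Because those TTLs are typically measured in days, a resolver that has already looked up one .com name rarely needs to bother the root again for that top-level domain.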

Finally, VeriSign and most other players have added significant server power and bandwidth since the 2002 attack. Each DNS root server may be listed as a single IP address, but in most cases it is actually made up of many servers. Anycast routing allows multiple computers to share one IP address; the downstream requesting client reaches the closest logical DNS server.
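
You can see anycast in action with a sketch like the one below (again assuming dnspython), which asks a root server to identify the instance that answered, using the common hostname.bind CHAOS-class convention. Not every root operator answers that query, and the address shown is the public one for l.root-servers.net.

```python
# Sketch: ask a root server which anycast instance actually answered, using the
# widely supported (but optional) "hostname.bind" CHAOS TXT convention.
import dns.message
import dns.query
import dns.rdataclass
import dns.rdatatype

L_ROOT = "199.7.83.42"  # l.root-servers.net, operated by ICANN

query = dns.message.make_query("hostname.bind.", dns.rdatatype.TXT,
                               rdclass=dns.rdataclass.CH)
response = dns.query.udp(query, L_ROOT, timeout=5)

# Clients in different cities sending to the same IP address should see
# different instance names here; that is anycast doing its job.
for rrset in response.answer:
    for txt in rrset:
        print("Answered by instance:", b" ".join(txt.strings).decode())
```

Operators can add or retire instances behind that single address without clients ever noticing, which is how much of the extra capacity gets deployed.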

I asked Silva what it would take to stop spoofed DoS attacks. "ISPs need to perform egress filtering and stop spoofed packets," he said. "There is an ongoing proposal called BCP 38 that addresses egress filtering."
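
For illustration only, the sketch below captures the BCP 38 idea in Python: a packet is forwarded only if its source address falls inside a prefix actually assigned to the customer link it arrived on. The prefixes and helper function are made-up examples (drawn from documentation address ranges); real networks implement this in router ACLs or unicast reverse-path forwarding, not in application code.

```python
# Illustrative sketch of BCP 38 source-address validation at the customer edge:
# forward a packet only if its source address belongs to a prefix actually
# assigned to that customer. Prefixes and function names are hypothetical.
from ipaddress import ip_address, ip_network

# Example prefixes assigned to one customer link (documentation ranges).
CUSTOMER_PREFIXES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/25")]

def should_forward(source_ip: str) -> bool:
    """Return True only if the packet's source address is one the customer owns."""
    addr = ip_address(source_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

# A spoofed DNS request claiming to come from someone else's address is dropped.
print(should_forward("192.0.2.99"))    # True  -- legitimate customer source
print(should_forward("203.0.113.50"))  # False -- spoofed, filtered at the edge
```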

Still, until we get the majority of ISPs to participate, spoofed IP attacks will continue. "Right now, our biggest defense is over-capacity," explained Silva. "All the DNS providers keep trying to build so much capacity that even the large attacks against the DNS structure are minimal by comparison. In the year 2000, we had a billion legitimate requests a day. Now it's 26 billion. We predict it will be 200 billion requests per day by 2010."

With both legitimate use and attack traffic in mind, VeriSign just announced a major scaling initiative called Project Titan. It plans to increase DNS throughput tenfold by 2010 -- a "10,000-fold increase since 2000," noted Silva.

VeriSign knows a little something about scaling, and it has to. For one, it manages two of the 13 DNS root servers (A and J) and resolves top-level .com and .net traffic. Plus, VeriSign offers directory services for more than DNS; it has the exclusive RFID directory service contract to help the Wal-Marts of the world track medicines and inventory. Its near-term scale will be in the trillions of transactions per day.

Despite all this, attack traffic is currently growing even more aggressively than legitimate traffic -- up 150-fold since 2000, said Silva. This is a concern because the DNS infrastructure, which has been badly in need of a security makeover for two decades, is now used for a lot more than Web surfing and e-mail. Like it or not, VoIP applications, IPTV, cell phones, mission-critical communication links, and data repositories now use the Internet for real-time business. If the Internet goes down today, it will affect far more than just your ability to check in on MySpace or YouTube.