Why more enterprises are becoming collateral damage
- 09 August, 2019 11:41
It’s fair to say the core infrastructure and protocols used to run the internet have had a rocky 2019.
Between malicious subversion and mistakes, the Domain Name System (DNS) and the Border Gateway Protocol (BGP) have long been capable of taking parts of the internet offline.
Hacker group L0pht infamously told US senators in May 1998 they could take down the internet in 30 minutes by exploiting flaws in BGP.
BGP is used to determine the best route for traffic to flow across the internet; DNS is often referred to as the ‘address book’ or ‘phone book’ of the internet.
Both are largely unchanged since they were drawn up (DNS in 1983 and BGP in 1989). The internet, however, has changed considerably over that time - and the problems associated with that gap in time and thinking are now felt every few months by internet users in the form of problems accessing sites and content.
Structural weaknesses in DNS and BGP - above all the implicit trust both protocols place in the parties they talk to - make them an increasingly popular vector for attacks.
IDC said this month that 62 percent of organisations in the Asia Pacific region had suffered application downtime associated with DNS attacks.
“As the growth engine of the world, APAC has seen a rapid surge in the exchange of sensitive data, making the region one of the most vulnerable regions for DNS attacks,” the researchers said.
Globally, IDC found “one in five businesses loses more than $1m per DNS attack, and the average organisation faces about nine DNS cyber-attacks per year.”
We see these kinds of attacks in our monitoring services - and they are growing. Crucially, they allow attackers to strike a target indirectly, through infrastructure the target depends on but does not control.
A more valuable vector
In years past, if attackers wanted to take a website or service offline they might throw enormous traffic at it. However, such denial of service (DoS) attacks have largely been on the wane (though a recent uptick in such attacks after a long decline is an unwelcome development).
Instead, indirect attacks which take advantage of critical dependencies outside of the control of the intended target are growing, netting more high-profile victims while maximizing the scope of collateral damage.
An example of this is DNS hijacking, where attackers reroute legitimate traffic to a box or site under their control. DNS hijacking involves compromising the DNS server or registrar, typically through a phishing attack or a compromised password. The hijacker gains administrative access to the DNS account in order to change the records directly.
Typically a hijacker will change the name server (NS) record to point future DNS queries to a name server under their control. A hijacker may also directly change address (A or AAAA) records themselves.
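Concretely, a hijack can be pictured as a change to the target's zone data. The Python snippet below is purely illustrative - every domain, name server and address is a placeholder, and real zones live in registrar and DNS-provider systems, not dictionaries - but it shows the two kinds of tampering described above.

```python
# Illustrative zone data only -- every name and address here is a placeholder.
legit_zone = {
    "example.com.": {
        "NS": ["ns1.legit-dns.example."],  # delegation to the real DNS provider
        "A":  ["192.0.2.10"],              # the site's real address
    },
}

# After gaining access to the DNS account, the hijacker either repoints the
# NS record (so future queries go to a name server they run) or rewrites
# the A record directly (so lookups resolve to an address they control).
hijacked_zone = {
    "example.com.": {
        "NS": ["ns1.attacker.example."],   # NS-record hijack
        "A":  ["203.0.113.66"],            # direct A-record rewrite
    },
}
```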
Attacks hit high gear
Already this year, there have been several major DNS hijack incidents attributed to a group that Cisco Talos researchers have dubbed ‘Sea Turtle’.
Talos said the group targeted around 40 government, intelligence and energy organisations in one such campaign.
DNS hijacking was used “merely [as] a means for the attackers to achieve their primary objective,” Talos said. “Based on observed behaviours, we believe the actor ultimately intended to steal credentials to gain access to networks and systems of interest.
“To achieve their goals, the actors behind Sea Turtle established a means to control the DNS records of the target; modified DNS records to point legitimate users of the target to actor-controlled servers; and captured legitimate user credentials when users interacted with these actor-controlled servers.”
The same group is also thought to be behind a large-scale DNS hijacking campaign which led to warnings from various governments worldwide, as well as an attack on Greece’s top-level domain registrar.
Last year, users of Amazon’s DNS service Route 53 were collateral damage in a DNS and Border Gateway Protocol (BGP) hijack aimed at stealing cryptocurrency.
The likes of Instagram and CNN became partially unreachable due to the Route 53 problems.
The attackers who pulled off this digital hijacking and robbery made no attempt to penetrate Amazon’s infrastructure. Instead, they compromised a small Internet Service Provider in Columbus, Ohio, using it to propagate false routes to Amazon’s DNS service.
Again, the implicit trust built into Internet routing allowed this attack to take place. The fact that the hijacked service (translating domain names into Internet addresses) is a critical dependency meant that the impact was massive and went far beyond the intended target.
Even innocent mistakes can cost businesses big
Malicious attacks get most of the attention, but the rather creaky state of Internet routing and lack of sound administration by ISPs can also create widespread havoc.
Consider this scenario. A smallish ISP decides to use route optimisation software to help load balance traffic across its backbone - traffic arriving from Internet sites via its transit provider peerings and headed towards downstream customers.
There are sound reasons for doing this: improving backbone link utilisation to avoid saturating some links (and so deferring capital investment in upgrades), optimising the usage and cost of multiple transit links, and improving performance for customers.
To achieve this, the ISP takes legitimate Internet address blocks (prefixes) and splits them into more specific (smaller) blocks, so that the route optimisation software can balance traffic at a finer granularity. These smaller prefixes were never meant to be announced on the Internet - but if they are, they win, because routers prefer the most specific (longest) matching prefix. If those routes get out into the wilds of the Internet (for example, via an error that externally leaks them), they can divert huge volumes of traffic away from the major Internet highways and send it down a country lane.
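The reason leaked more-specific routes win comes down to longest-prefix matching, the rule routers use to select among overlapping routes. The Python sketch below illustrates the selection rule with a toy routing table; the prefixes and labels are invented purely for illustration.

```python
import ipaddress

# Toy routing table: a legitimate aggregate route alongside a leaked,
# more-specific route covering part of the same address space.
# Prefixes and labels are illustrative only.
routes = {
    ipaddress.ip_network("198.18.0.0/15"): "legitimate-backbone",
    ipaddress.ip_network("198.18.4.0/22"): "leaked-country-lane",
}

def best_route(addr: str) -> str:
    """BGP-style longest-prefix match: the most specific matching route wins."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(best_route("198.18.4.10"))   # covered by both routes; the leaked /22 wins
print(best_route("198.18.100.1"))  # covered only by the legitimate /15
```

Any address that falls inside the leaked /22 is pulled towards whoever announced it, regardless of how small or poorly connected that network is.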
That’s what happened recently in the Cloudflare outage, where a small ISP in Pennsylvania mistakenly let these route optimisation prefixes out of its network through one of its corporate customers that was also connected to Verizon. Verizon chose to believe and share those routes with the rest of the world, and massive congestion from the redirected traffic led to high levels of packet loss and service disruption as a world of user traffic headed for Cloudflare’s network tried to funnel through a corporate network. Users simply weren’t able to reach the Cloudflare edge servers, or the apps and services that depended on them.
You can manage if you can see
It is sometimes tempting to think of outages as a state of affairs you simply have to endure. However, for every massive global outage there are many smaller-scale cloud and Internet outages that can affect your business, and for which there is often a remedy - if you have visibility. Being able to figure out whether an issue lies in your network, with the SaaS or cloud provider, or in an ISP, and whether it’s a DNS, network, server or application-layer issue, is critical to achieving a fix.
Enterprises and software-as-a-service providers wanting to avoid becoming collateral damage to these kinds of attacks need to maintain real-time awareness of their DNS integrity and of any BGP routing incidents that could impact the availability and security of their services.
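One simple building block for that kind of awareness is comparing the DNS answers you observe against a known-good baseline and alerting on anything unexpected. The sketch below is a minimal illustration, not a production monitor: domains and addresses are placeholders, and real monitoring would collect answers from many vantage points and also watch NS records and BGP announcements.

```python
# Minimal sketch of a DNS-integrity check: flag any observed answers that
# drift away from a known-good baseline. All names and addresses are
# placeholders for illustration.
def dns_alerts(baseline, observed):
    """Return, per domain, any observed addresses absent from the baseline."""
    alerts = {}
    for domain, answers in observed.items():
        unexpected = answers - baseline.get(domain, set())
        if unexpected:
            alerts[domain] = unexpected
    return alerts

baseline = {"example.com": {"192.0.2.10"}}

# A matching answer raises no alert; an unknown address does.
print(dns_alerts(baseline, {"example.com": {"192.0.2.10"}}))
print(dns_alerts(baseline, {"example.com": {"203.0.113.66"}}))
```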