Network Monitoring: Past, Present and Future

by Dick Bussiere, Principal Architect APAC, Tenable Network Security

Network monitoring has been, and will continue to be, an essential part of an overall defense-in-depth security program.

Simply put, monitoring gives you the visibility you need to be able to understand and measure the effectiveness of your overall security strategy, both from a compliance perspective and from a performance perspective. It also gives you the ability to detect attacks and breaches early in their life cycle before a minor breach becomes a major one.

Why do we need monitoring? Because, despite the huge investments organisations have made in security systems, breaches still happen. No security program is perfect, nor are the policies, procedures and machines that we use to implement security programs. Perhaps more significant is the human element. People do silly things. People, not computers, get “social engineered.” People get lazy. People don't follow procedures, and people forget things. Human behaviour leads to weaknesses that can be exploited.

To compensate for the imperfection of both machines and humans, we assume that our networks will be breached, then build instrumentation into the network to detect those breaches. Monitoring gives us both an indication of suspicious activity and a way to measure our security configurations to confirm they are as secure and compliant as they should be.

Like any other technology, monitoring and defenses have evolved and continue to evolve as new technologies emerge and as the threat environment changes. In this two-part series, I will review where monitoring has been and identify some important trends that you need to be aware of.

Measures and Countermeasures

Cybersecurity is very much akin to warfare: one side comes up with a new weapon, and the other side devises a countermeasure to neutralise it. This cycle continues ad infinitum.

We started simply, with firewalls at our network boundaries. The idea was simple – control what can enter and what can leave our well-defined network boundaries. We also diligently put anti-virus onto our endpoints to inspect content and block malware. Yes, these technologies were (and still are) effective against some threats, but over time the threat environment evolved.

More creative and aggressive attackers began pushing their attacks through the open ports on the firewalls, using specially crafted packets designed to exploit software vulnerabilities and compromise endpoints. Or they would simply brute-force your assets, hunting for misconfigurations such as weak or default passwords that they could take advantage of.

This shift towards network-oriented attacks pushed monitoring and defenses away from the endpoint and towards the network. Detecting these attacks required dynamic, real-time inspection of network traffic, giving birth to the Intrusion Detection System. Over time, network owners began asking, “Why just monitor? Why not actively stop the attacks?” The industry responded by introducing the Intrusion Prevention System. Ironically, most Intrusion Prevention Systems are, to this day, still configured in “detection mode”, meaning they don’t actually prevent anything. This is because the technology is burdened with a high false-positive rate: in prevention mode, a lot of legitimate traffic would be blocked in error. So the IPS remains, in practice, a monitoring solution.

Technology on both sides, black hat and white hat, marched ahead. Attackers started delivering malicious content in a wide variety of ways, including malware-laced emails, web pages and downloads over a variety of vectors. They introduced a high degree of polymorphism into their malware, rendering traditional signature-based technologies significantly less effective. This led to the next countermeasure: the sandbox. The basic idea of the sandbox is to reassemble the entire data stream rather than examine individual packets, extract the content, then open (execute) it in a “safe” environment and watch what happens. The technique relies on behaviours rather than signatures for detection. Aside from being expensive and complex, it is also not foolproof. For example, all a malicious actor needs to do is delay the execution of the malicious content for a period of time, and the sandbox will be “fooled” into classifying the content as benign.
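
To see why a fixed observation window is so easy to defeat, consider the toy Python sketch below. It is purely illustrative – the “sandbox” merely watches a child process for a few seconds, and the sample, the window length and the delay values are all invented for this example.

    import multiprocessing
    import time

    OBSERVATION_WINDOW = 5  # seconds the toy sandbox watches the sample

    def sample(delay: int) -> None:
        """Stand-in for submitted content: sleep, then perform the 'payload'."""
        time.sleep(delay)
        print("payload executed")  # the behaviour the sandbox hopes to observe

    def detonate(delay: int) -> str:
        """Run the sample in isolation, watching it for a fixed window only."""
        proc = multiprocessing.Process(target=sample, args=(delay,))
        proc.start()
        proc.join(OBSERVATION_WINDOW)  # observe for the window, then give up
        if proc.is_alive():            # nothing happened inside the window
            proc.terminate()
            return "no malicious behaviour observed"
        return "behaviour observed within window"

    if __name__ == "__main__":
        print(detonate(delay=1))   # acts inside the window: caught
        print(detonate(delay=60))  # sleeps past the window: evades detection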

The measure and countermeasure cycle continues unabated, with new technologies emerging on both sides. There is one important fact about this ongoing battle – there is a gap between the time when a new form of attack emerges, and the time when a defensive technique exists to mitigate it. This gap helps to underscore the criticality of monitoring.

The Dissolving Perimeter, Cloud Computing and Monitoring

The traditional perimeter and core of a network are rapidly becoming an anachronism, as the transition to cloud-based services of one sort or another continues unabated. Effectively, “your” network is becoming intermingled with the cloud, which is “someone else’s” network. And that “someone else” won’t let you plug your traditional monitoring and defense mechanisms into “their” network. Additionally, traffic to the cloud is very often encrypted, inhibiting traffic inspection.

This loss of visibility is changing how we implement monitoring. For example, many cloud vendors provide ways to extract telemetry from their applications. Salesforce provides mechanisms that allow you to monitor user activity and file movement, while Amazon AWS provides a monitoring facility called CloudTrail that records API calls and delivers log files for forensics and compliance auditing. Such information can then be collected and analysed centrally for security purposes.
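
As a sketch of what such collection might look like, the Python snippet below pulls recent CloudTrail events through boto3’s lookup_events API and flags a handful of event names. The watched event names and the 24-hour window are illustrative choices rather than a recommended detection policy, and the snippet assumes boto3 is installed and AWS credentials are already configured.

    from datetime import datetime, timedelta, timezone

    import boto3

    # Illustrative watch list; a real deployment would be far more complete.
    SUSPICIOUS_EVENTS = {"ConsoleLogin", "DeleteTrail", "StopLogging"}

    def recent_events(hours: int = 24):
        """Yield CloudTrail management events recorded in the last `hours`."""
        client = boto3.client("cloudtrail")
        start = datetime.now(timezone.utc) - timedelta(hours=hours)
        paginator = client.get_paginator("lookup_events")
        for page in paginator.paginate(StartTime=start):
            yield from page["Events"]

    for event in recent_events():
        if event["EventName"] in SUSPICIOUS_EVENTS:
            # Forward to the central log store / SIEM for correlation.
            print(event["EventTime"], event["EventName"], event.get("Username"))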

The Dissolving Perimeter, Mobility and Monitoring

The perimeter is dissolving for more reasons than just the cloud. Consider mobility – devices may sit within the secure confines of your LAN, well protected by your perimeter security. But these same devices also travel outside your LAN, where they are exposed to threats on whatever external networks they connect to. You can’t monitor the network activity of a device that’s outside your network, and your perimeter defenses can’t help it while it’s out there.

To help monitor and protect such mobile assets, agents, sometimes called “kernel-level sensors”, are enjoying a renaissance. These agents monitor the activities of endpoints while they are off the network or otherwise out of reach. All activities – privileged operations, software installs, network activity, user activity and so on – are observed and reported back once the machine regains connectivity to the home network.
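
At the heart of such an agent is a store-and-forward loop. The Python sketch below is a minimal, hypothetical illustration – a real agent hooks into the kernel to capture events, whereas here record() is simply called directly, and the collector address, spool path and event format are all invented for the example.

    import json
    import socket
    import time
    from pathlib import Path

    SPOOL = Path("agent-spool/events.jsonl")     # local buffer while off network
    COLLECTOR = ("collector.example.com", 6514)  # hypothetical home collector

    def record(event: dict) -> None:
        """Append an observed event (process start, install, login...) to the spool."""
        SPOOL.parent.mkdir(parents=True, exist_ok=True)
        with SPOOL.open("a") as spool:
            spool.write(json.dumps({"ts": time.time(), **event}) + "\n")

    def flush() -> bool:
        """Ship spooled events home; keep them locally if the collector is unreachable."""
        if not SPOOL.exists():
            return True
        try:
            with socket.create_connection(COLLECTOR, timeout=5) as conn:
                conn.sendall(SPOOL.read_bytes())
            SPOOL.unlink()   # delivered; clear the local buffer
            return True
        except OSError:
            return False     # still off network; retry on the next connectivity check

    record({"type": "software_install", "package": "example-tool"})
    flush()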

The combination of the dissolving perimeter and mobile devices introduces another dynamic to the monitoring equation – what happens when a system that was infected outside your protected perimeter is brought back inside? How do you find it, now that the time for traffic and content inspection is long past?

This mobile device quandary is leading to another shift in monitoring that we’re seeing now – a trend away from content inspection and towards looking for indicators of compromise.

So, instead of looking for content or specific attack packets, network monitoring solutions now look for how the endpoint behaves and what it does on the network.

For example, do any of the destination IP addresses the endpoint contacts have a bad reputation, such as an association with known command-and-control servers? Is the endpoint emitting unusual traffic patterns, such as an inordinate number of connection attempts to peers within the network? Is the endpoint masquerading one protocol as another? Is it making unusual VPN connections? All of the above are indicators of compromise, and each suggests that the endpoint has, in all probability, been compromised.
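
As a concrete illustration, the Python sketch below applies two of these checks – destination reputation and connection fan-out – to a batch of flow records. The bad-address set, the threshold and the flow format are invented for the example; a real deployment would feed it NetFlow or packet captures and a live reputation feed.

    from collections import defaultdict

    # Illustrative values only: a real system would use a live reputation feed
    # and tune the fan-out threshold to the environment.
    KNOWN_BAD = {"203.0.113.7", "198.51.100.23"}  # e.g. known C2 addresses
    FANOUT_THRESHOLD = 50  # distinct peers contacted before we suspect scanning

    def check_flows(flows: list[tuple[str, str]]) -> list[str]:
        """Flag flows to bad destinations and hosts contacting too many peers."""
        alerts = []
        peers = defaultdict(set)  # source IP -> set of destinations contacted
        for src, dst in flows:
            if dst in KNOWN_BAD:
                alerts.append(f"{src} contacted known-bad host {dst}")
            peers[src].add(dst)
        for src, dsts in peers.items():
            if len(dsts) > FANOUT_THRESHOLD:
                alerts.append(f"{src} touched {len(dsts)} peers: possible scanning")
        return alerts

    flows = [("10.0.0.5", "203.0.113.7"), ("10.0.0.5", "10.0.0.9")]
    for alert in check_flows(flows):
        print(alert)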

In the second part of this article, we’ll investigate the techniques used for monitoring vulnerabilities, how to monitor for shadow and unknown assets, and the organisational issues involved with monitoring.