Vulnerabilities Must be Monitored Too
So far we’ve talked about monitoring for activity both on the network and on the endpoint, both of which are necessary. We’ve not yet talked about vulnerabilities, which, when left to fester, introduce pathways that can be compromised. With today’s fusion of traditional networks, cloud and mobile, and our imperfect security infrastructures, the number of paths by which attacks can be launched and vulnerabilities exploited is growing at an alarming rate.
Consider this: in 2014 approximately 8,000 CVEs were created – that works out to more than 153 each week, representing a huge, high-risk increase in threat surface if left unmitigated. Clearly, given both the dynamic nature of our infrastructures and the velocity at which vulnerabilities are introduced, vulnerability assessment and monitoring must be treated as a continuous process rather than something you do quarterly or annually. A vulnerability assessment done 30 days ago is already out of date.
Three techniques are used to monitor for vulnerabilities. The first uses traditional network probing of assets: a packet is sent to the target and the response is analysed for indications of vulnerabilities. This type of assessment, called a network scan, is good at discovering assets, the services they expose and blatant vulnerabilities behind those services. It is also good for finding configuration errors such as default accounts with default passwords. That said, since this type of scan is external to the device, it cannot identify major issues lurking within the device. Nor can it discover assets that are not reachable on the network at scan time – most mobile devices, for example.
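To make the idea concrete, here is a minimal sketch of an unauthenticated network probe in Python. The banner grab is the real network step; the `BANNER_FINDINGS` table is a hypothetical, two-entry stand-in for the large signature sets production scanners ship with.

```python
import socket

# Hypothetical banner-to-finding rules; real scanners ship thousands of these.
BANNER_FINDINGS = {
    "vsFTPd 2.3.4": "known-backdoored FTP build",
    "OpenSSH_5.3": "end-of-life SSH version",
}

def grab_banner(host, port, timeout=2.0):
    """Connect to a TCP service and read whatever it announces first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(256).decode(errors="replace").strip()
        except socket.timeout:
            return ""

def classify_banner(banner):
    """Match an observed banner against the rule table; None means no finding."""
    for needle, finding in BANNER_FINDINGS.items():
        if needle in banner:
            return finding
    return None
```

Note that everything here is inferred from the outside: the scanner only sees what the service chooses to announce, which is exactly the limitation described above.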
The second type of scan uses device credentials to get inside the target. This type of scan can find just about every issue, since it has total visibility into the device in question. Such a scan can even identify malware that has somehow made it past your perimeter – a very important function with mobile computers. Scans such as these can be performed with external scan technologies or through agents installed on the endpoints.
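At its core, a credentialed scan reduces to comparing what is installed against what is known to be vulnerable. The sketch below assumes the package inventory has already been collected over an authenticated session (for example, a package-manager query run via SSH or an on-host agent); the `ADVISORIES` table is hypothetical.

```python
# Hypothetical advisory table: package -> first fixed version tuple.
ADVISORIES = {"openssl": (1, 0, 2), "bash": (4, 3)}

def parse_version(v):
    """Turn a dotted version string into a comparable tuple, e.g. '1.0.1' -> (1, 0, 1)."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def audit_packages(installed):
    """installed: dict of package -> version string, as gathered through a
    credentialed session. Returns (package, version) pairs older than the fix."""
    findings = []
    for pkg, ver in installed.items():
        fixed = ADVISORIES.get(pkg)
        if fixed and parse_version(ver) < fixed:
            findings.append((pkg, ver))
    return findings
```

The crucial difference from the external probe is the input: the inventory comes from inside the device, so nothing depends on what services happen to announce over the network.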
The third type of vulnerability “scan” is not a scan in the traditional sense, since it does not touch the endpoint in any way. Rather, this newer passive scanning technology observes traffic on the wire and identifies client- and server-side vulnerabilities based on deep packet inspection.
Further, since this passive technology has full visibility into all communications on a given segment, it has the ability to illuminate parts of the infrastructure that historically have not been monitored at all – deep within the LAN – for anomalies. Passive monitoring gives visibility not only into endpoint vulnerabilities, but also gives visibility into what the endpoints are doing and how they are being used. This kind of information, from deep inside your network, is invaluable from a security perspective.
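As a simplified illustration of the passive approach, the function below pulls product and version tokens out of captured HTTP headers without ever contacting the hosts involved. Real passive scanners decode many protocols and map the extracted versions to CVEs; this sketch stops at the fingerprinting step.

```python
import re

def fingerprint_http(payload: bytes):
    """Extract product/version tokens from passively observed HTTP traffic.
    Returns e.g. {'server': 'Apache/2.4.49'} or {'client': 'curl/7.58.0'}
    without ever sending a packet to the hosts involved."""
    text = payload.decode("latin-1", errors="replace")
    out = {}
    m = re.search(r"^Server:\s*(\S+)", text, re.M | re.I)
    if m:
        out["server"] = m.group(1)          # server-side software version
    m = re.search(r"^User-Agent:\s*(.+?)\r?$", text, re.M | re.I)
    if m:
        out["client"] = m.group(1).strip()  # client-side software version
    return out
```

Because the observation point sits on the wire, the same capture that yields these fingerprints also shows who is talking to whom – which is where the visibility into endpoint behaviour comes from.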
One final perspective on vulnerability assessment relates to human behaviour. According to the 2016 Verizon Data Breach Investigations Report, one out of five breaches was caused by “miscellaneous errors.” In fact, 63 percent of miscellaneous breaches were related to human failings such as weak credentials, default passwords, people falling for phishing attacks and so on. Vulnerability and compliance monitoring gives you the chance to catch these human failings. For example, you can identify misconfigurations, weak configurations, weak passwords and so on – things that humans are responsible for that could compromise your security.
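Catching default and weak passwords can be as simple as auditing collected account data against known-bad lists. The lists below are tiny, hypothetical stand-ins for the vendor default-credential and common-password dictionaries a real compliance check would use.

```python
# Hypothetical known-bad lists; real audits use large vendor-specific dictionaries.
DEFAULT_CREDS = {("admin", "admin"), ("root", "toor"), ("admin", "password")}
COMMON_WEAK = {"123456", "password", "qwerty"}

def flag_credentials(accounts):
    """accounts: iterable of (username, password) pairs gathered during a
    configuration audit. Returns the pairs a human needs to fix."""
    flagged = []
    for user, pw in accounts:
        if (user, pw) in DEFAULT_CREDS or pw in COMMON_WEAK or len(pw) < 8:
            flagged.append((user, pw))
    return flagged
```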
Monitoring for Unknown & Shadow Assets
The scanning activities mentioned above inherently perform another critical monitoring function – asset discovery. Consider one truth: any asset (hardware, software, protocol on the network, etc.) that is unknown intrinsically introduces risk. Why? Because any asset that is unknown is probably not being patched, properly configured or otherwise maintained. That means it is likely to have misconfigurations and vulnerabilities that will go unremediated. Further, unknown assets may have been introduced by a malicious actor and may be performing some nefarious activity. New assets of any type may be discovered using a combination of the techniques previously discussed.
Discovered assets that serve a useful business purpose may be brought under proper management and maintained, while assets that serve no purpose may be removed.
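In code, separating shadow assets from managed ones is a straightforward set difference between what the scans discovered and what the asset inventory says should exist – sketched here with hypothetical IP-address identifiers, though the same logic applies to MAC addresses, hostnames or software names.

```python
def triage_assets(discovered, inventory):
    """discovered: set of asset identifiers seen by the scanning techniques.
    inventory: set of identifiers the organisation knows it manages.
    Returns (known, unknown) – the second set is the shadow-asset risk."""
    discovered, inventory = set(discovered), set(inventory)
    return discovered & inventory, discovered - inventory
```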
Organisational Issues with Monitoring
The effectiveness of monitoring can be impacted by political boundaries as well as technical ones. For example, in many organisations, the network group controls the technical infrastructure and the IT group controls the endpoints. In some organisations, the business units themselves, not traditionally associated with IT or networking, may subscribe to cloud services that are completely “off the radar”. Organisational issues such as these leave the security group with little power to enforce monitoring objectives. For example, how does the security group get the IT group to install monitoring agents on the servers and endpoints? Even worse, how does the security group get access to all the log data and user data that’s controlled by other groups? These hurdles crop up all the time – and must be considered when designing or maintaining an effective monitoring system.
Even more concerning is the fact that endpoints and networks are in a constant state of flux under the control of other parties. So what happens when the IT group or network group disables monitoring points that were previously operational? These issues force a requirement to “watch the watchers” – in other words, monitor that the monitoring points are indeed operational.
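“Watching the watchers” can be implemented as a heartbeat freshness check: every monitoring point reports in periodically, and anything silent for too long is flagged. A minimal sketch, with an assumed five-minute threshold; sensor names and timestamps are illustrative.

```python
import time

def stale_sensors(last_seen, now=None, max_age=300):
    """last_seen: dict of sensor name -> unix timestamp of its most recent
    heartbeat. Returns the sensors silent for longer than max_age seconds,
    i.e. monitoring points that may have been disabled by another group."""
    now = time.time() if now is None else now
    return sorted(s for s, t in last_seen.items() if now - t > max_age)
```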
Modernising Your Monitoring
Monitoring, just like the threat environment, has evolved over time. We have discussed some trends that impact how monitoring can be effectively performed and the emerging tools to accomplish it. If you are still sniffing packets with an intrusion detection system at the perimeter, that’s fine, but it’s not enough given the perforation of the perimeter, the emergence of cloud computing and the overall trend towards mobile computing. You need to evaluate how these trends are impacting your infrastructure and instrument accordingly with some of the technologies detailed in this article.
One final point – as more and more monitoring technologies are employed in your environment, the centralisation of the data from the various sensors becomes even more critical. You don’t want to have 10 different consoles to look at. Rather, you need to consolidate the data from the various sources into a single place that can correlate the data and present it effectively in an actionable dashboard format.
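The consolidation step can be as simple as normalising every sensor’s events to a common shape and merging them into one time-ordered stream for a single console. A minimal sketch, assuming each feed already emits records with `ts` (epoch seconds), `source` and `event` fields – field names that are this sketch’s convention, not any particular product’s.

```python
def consolidate(*feeds):
    """Merge event feeds from multiple sensors (IDS, agents, passive taps, ...)
    into one time-ordered list, the raw material for a single dashboard."""
    merged = [event for feed in feeds for event in feed]
    return sorted(merged, key=lambda e: e["ts"])
```

Real deployments layer correlation rules on top of this merged stream; the point here is simply that ten consoles become one once the data shares a timeline and a schema.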