I often run into computer security event monitoring teams that brag about how much information they collect each day or second, or tout how many petabytes of new storage arrays they have recently bought. I usually see such proclamations as a sign that they are doing it wrong. Many times, less information is better.
We used to have a dearth of information about potential security events. Now, it appears we have way too much. Most organizations I’m involved with used to collect no security log information at all, which led to their being hacked for long periods of time without their knowledge.
This problem was well-documented by the Verizon Data Breach Investigations Reports, which each year revealed that the vast majority of breached companies (usually more than 70 percent) actually had the security event log data that would have alerted them to the breach, if they had just looked at and analyzed it. It’s quite embarrassing to acknowledge to customers and shareholders that you had the data that could have prevented or lessened the breach but didn’t care enough to use it.
This led to most regulatory requirements and compliance laws requiring each covered entity to keep and analyze log files. Unfortunately, organizations went from not collecting anything to collecting and aggregating everything they possibly could. They collected so much information that it slowed down their networks and they had to buy ever bigger event message storage arrays. Today, it’s common for security breaches to be lost in the noise of billions and trillions of event messages.
Analyze only what needs to be analyzed
Not every event log message needs to be collected and analyzed. Companies should collect and analyze only messages that could actually indicate a security event. Even if a message might indicate a security event, if your team could not easily confirm it, you shouldn’t collect it. The latter point is especially important to understand.
For example, every Microsoft Windows computer is full of event log messages that record perfectly normal, everyday operational events, such as system time updates or privileged operations. Either could indicate a malicious hacking event. The problem is that every Windows computer generates dozens of these events every day, and most of those computers aren’t undergoing a malicious hacking event.
Most computer security centers pick up these events, along with tens of thousands of other non-malicious security events reported each day from each computer, and file them away because they could “possibly” indicate a security event. The net result is that billions of these events are filed for every similar event that actually needs to be investigated.
How do you alert on just the events that indicate real hacking?
My overall rule is to collect and alert on only those security event log messages, either singly or in aggregate, that could lead to an immediate security response. Leave out the billions of events that would never lead to an immediate response. Regulatory compliance requires that you (or your device) generate many events, but be selective in what you pass along for further analysis. Keep most events stored locally, and centrally aggregate and analyze only what makes sense.
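This rule can be sketched as a simple forwarding filter at the collection point. The sketch below is illustrative only: the event IDs are real Windows Security log IDs (1102 audit log cleared, 4625 failed logon, 4720 account created, 4616 time change, 4672 privileged logon), but which IDs belong in which set is an assumption you would tune to your own environment and response playbooks.

```python
# A minimal "less is more" forwarding filter. Only events that could trigger
# an immediate response are passed to the SIEM; noisy-but-normal events stay
# in the local log. The ID sets here are examples, not recommendations.

ACTIONABLE = {1102, 4625, 4720}  # audit log cleared, failed logon, account created
LOCAL_ONLY = {4616, 4672}        # time change, privileged logon: keep locally

def should_forward(event):
    """Forward to central analysis only events worth an immediate response."""
    return event["id"] in ACTIONABLE

events = [
    {"id": 4616, "host": "ws01"},  # routine time sync: stays local
    {"id": 4672, "host": "ws01"},  # normal privileged logon: stays local
    {"id": 1102, "host": "dc01"},  # audit log cleared: escalate
]
forwarded = [e for e in events if should_forward(e)]
```

In practice the same filter often runs in the forwarding agent's configuration rather than in code, but the decision is the same: classify each event ID before it leaves the machine, not after it lands in the storage array.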
Pick a selective SIEM vendor
Most companies buy security information and event management (SIEM) products and services that do the security message aggregating and analyzing for them. I’m not always a big believer in outsourcing IT security duties, but it makes sense to outsource security event log analysis to a company or service that understands what to collect and analyze better than you do.
The best SIEM vendor you can pick is one that understands that less is more. The Herjavec Group is one such company that recently caught my eye. Started by Robert Herjavec, one of the stars of ABC’s addictive Shark Tank television series, the Herjavec Group lives this philosophy.
Here’s what Ira Goldstein, Herjavec Group’s senior vice president of global technical operations, said about their less-is-more philosophy, “[The data required to manage security for a modern enterprise infrastructure] has to be parsed, correlated, alerted, evaluated, analyzed, investigated, escalated, and remediated fast enough to protect integrity and operations. The only way to make sense of it all is to focus on fewer, more specific use cases that matter, as opposed to a high volume of low fidelity alerts.”
“An effective security operation is driven by discipline, preventing use-case sprawl that causes information overload,” says Goldstein. “Security teams are pushed by audit, compliance, or business stakeholders to create more alarms that lead to a false sense of accomplishment. This is why deception technologies are starting to gain traction. The promise of fewer, higher impact alerts to replace the overwhelming volume of logging and monitoring infrastructures created in the past decade is exciting.
“For anyone with a budget constraint on their security program – which is almost everyone – there are key decisions to be made around coverage,” says Goldstein. “You can’t have all the logs, all the time, from every system. Our approach is to focus on critical assets, of course, but also to target high net worth data sources.”
Those sources include identity and access management (IAM) and privileged identity management (PIM) systems. Adding IAM and PIM data to your base data sources like authentication, perimeter, endpoint, and vulnerability scanning data enables your security operations center (SOC) to produce greater context in alert escalations and higher impact investigations.
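The enrichment step that produces that context can be sketched simply. Everything in this example is hypothetical: the directory contents, the field names, and the lookup are stand-ins for whatever your IAM/PIM systems actually expose.

```python
# Hypothetical sketch: attaching identity context to a raw alert before it
# reaches an analyst, so the SOC can judge impact faster. Not a real IAM API.

IAM_DIRECTORY = {
    "jsmith": {"role": "domain-admin", "privileged": True},
    "alee": {"role": "staff", "privileged": False},
}

def enrich_alert(alert, directory=IAM_DIRECTORY):
    """Merge identity attributes into the alert for higher-impact triage."""
    identity = directory.get(alert.get("user"), {})
    return {
        **alert,
        "role": identity.get("role", "unknown"),
        "privileged": identity.get("privileged", False),
    }

alert = enrich_alert({"user": "jsmith", "event": "failed_logon"})
```

A failed logon from a privileged domain admin and one from an unknown account are very different alerts; carrying that distinction into the escalation is the point of adding IAM and PIM as data sources.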
If you’re going to select a SIEM vendor, make sure it’s one that gets that less is more. Instead of bragging about how many events per day you collect or how big your storage arrays are, tell me how successful you are at detecting malicious events versus the amount of traffic you collect. Size means nothing. Accuracy means everything.