If you are responsible for the information security of an organisation, there are a few very important things to understand.
First, you can't do it alone. If you don't have the support and confidence of those both above and below you in the organisation, you are doomed to fail.
Second, security is risk management. You are the technical interpreter for the organisation, assessing how best to allocate resources against the risks your staff and data face.
Lastly, security need not be complex. The basics are the most important objective, and simple tools can achieve as much as, or more than, the latest shiny complicated widgets. Most security problems start with human error. Keeping things simple helps eliminate much of that risk and makes mistakes easier to discover when they do occur.
Considering that we never have as many resources, human or financial, as we would like, we need to focus on the places where our efforts will have the greatest impact. Threats and attackers change over time, and if you are building a plan for yesterday's problems you will be caught flat-footed.
Let’s look at the two most common methods criminals are using today to compromise organisations.
The easiest way to compromise a system is to take advantage of a logic flaw, or bug, and make the software running on that system execute malicious code instead of the code the user or administrator intended to run. These flaws fall into two categories: known vulnerabilities, for which the author has provided a patch, and unknown or "zero-day" threats.
It seems clear that the plan should be to apply fixes for all known threats as soon as they are available, but we all know that doing this across an entire organisation is significantly more difficult than on a single device.
Clearly not all bugs are created equal, so we must assess them based on risk. Server or workstation? Critical infrastructure or convenience tool? We can also gauge the danger of exploitation using a bug's CVSS score or a third-party assessment.
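To make that triage concrete, here is a minimal sketch of risk-weighted patch prioritisation. The asset categories, weighting factors and CVE records are all hypothetical illustrations, not a standard formula; a real programme would pull CVSS data from a vulnerability scanner or the NVD feed and tune the weights to its own environment.

```python
# Hypothetical risk-weighted patch triage: CVSS base score, scaled by how
# critical the asset is and whether it is exposed to the internet.

# Assumed asset categories and weights -- tune these for your organisation.
CRITICALITY_WEIGHTS = {"critical": 2.0, "important": 1.5, "convenience": 1.0}

def risk_score(vuln):
    """Combine CVSS with asset context into a single comparable number."""
    score = vuln["cvss"] * CRITICALITY_WEIGHTS[vuln["criticality"]]
    if vuln["internet_facing"]:
        score *= 1.5  # exposed services get patched first
    return score

def triage(vulns):
    """Return vulnerabilities ordered from highest to lowest risk."""
    return sorted(vulns, key=risk_score, reverse=True)

# Illustrative records (the CVE IDs are placeholders, not real advisories).
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "criticality": "convenience", "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "criticality": "critical", "internet_facing": True},
    {"id": "CVE-C", "cvss": 5.0, "criticality": "important", "internet_facing": False},
]
ordered = triage(vulns)
```

Note how context changes the answer: the internet-facing critical server with a CVSS 7.5 bug outranks the CVSS 9.8 bug on an internal convenience tool.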
Even after all that, we know we aren’t perfect. Are you aware that one of your developers set up a MongoDB server in the DMZ exposed to the internet? Did you forget about the Windows XP embedded computer that operates the door system? There are lots of reasons failures can happen so it is prudent to have a backup plan.
Exploit mitigation technologies built into Windows, OS X and Linux are improving, but often need to be enabled and configured for their environment. Third-party exploit detection and prevention tools are also increasingly effective, and kept many unpatched machines from being hit during the WannaCry outbreak.
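As one small example of "needs to be enabled and configured", the sketch below audits a single Linux mitigation: address-space layout randomisation (ASLR), exposed through the `kernel.randomize_va_space` sysctl (0 = disabled, 1 = partial, 2 = full). The helper function and its default path are illustrative; a real audit would cover many more settings across platforms.

```python
# Sketch: audit one Linux exploit mitigation (ASLR) via /proc.
# kernel.randomize_va_space values: 0 disabled, 1 partial, 2 full.

ASLR_LEVELS = {
    "0": "disabled (enable it)",
    "1": "partial (stack and mmap only)",
    "2": "full randomisation",
}

def describe_aslr(raw: str) -> str:
    """Map the raw sysctl value to a human-readable status."""
    return ASLR_LEVELS.get(raw.strip(), "unknown value")

def check_aslr(path: str = "/proc/sys/kernel/randomize_va_space") -> str:
    """Read the live setting on a Linux host (hypothetical helper)."""
    try:
        with open(path) as f:
            return describe_aslr(f.read())
    except OSError:
        return "not a Linux host or /proc unavailable"
```

The same pattern, read a setting and flag anything below the hardened baseline, extends naturally to DEP/NX, stack protections and similar mitigations.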
As we continue to improve the quality of our software and get better at building defense-in-depth strategies, it is increasingly difficult to find a single unpatched vulnerability that throws the doors open to fraud and data theft. Often you need a series of exploits chained together, which more often than not is the territory of nation-states, not cyber-thugs.
Enter the human exploit: social engineering. Or, as my mom called it when I was a boy, lying, cheating and tricking people. This problem doesn't have a patch and depends on variables far outside your control.
I’m a fan of the NIST Cybersecurity Framework, which provides a basic process for approaching these tasks. It goes like this:
1. Identify your assets (human and machine)
2. Protect them the best you can
3. Detect when compromise occurs as quickly as you can
4. Respond and contain the damage, remediate, etc.
5. Recover to your original state the best you can and review what went wrong to improve the protect phase
If we apply this to our staff we can reduce the impact of phishing and other social engineering tricks to improve our security posture.
Identify the risks that could lead to compromise of your staff: thumb drives dropped in the parking lot, phone calls to IT support requesting a password reset, malicious "invoice" attachments emailed to the finance department, and so on.
Protect your staff by training them on the risks using real-world examples, preferably ones that have impacted your own organisation, to bring the "realness" factor into play. Anti-phishing training can help, but storytelling is what gets through to most non-technical staff.
Detection often requires humans to help. If your tools could detect the compromise, they likely should have prevented it in the first place. If 20 staff are spear-phished and two of them notice and tell you, you can now determine who the other 18 are, reset their passwords and clean the malware from their PCs, hopefully in record time.
The key is to train staff to report the incident when they detect it, rather than simply deleting it as another scam and moving on with their day. Like using Facebook, reporting needs to be frictionless.
Respond by alerting staff to the threat, how it worked and what to look for next time to avoid compromise. These stories are very powerful and help people feel they are part of the solution rather than part of the problem.
If we can practise these simple steps (much harder than it looks!) we can dramatically improve our security posture and better communicate our strategy to management and staff without having to explain what a web application firewall or next-gen flibber-widget is. If we move forward as a team, the improvement will be obvious to all involved.
Chester Wisniewski is the Principal Research Scientist at Sophos