Hinne Hettema is the team leader of the operational security team at the University of Auckland. He believes there are six critical security services every organisation should have: security architecture and security consulting; security and penetration testing of the deployed environment; monitoring and alerting; incident response; security strategy; and policies.
We spoke to Hettema at AusCERT 2016 to find out a bit more about him, what he sees in the actions of hackers, and what we can do about them.
“I trained as a philosopher and theoretical chemist, and one of the things that strikes me as strange about cyber security is that it is unclear what ‘security’ is,” says Hettema. “Do we have a definition of security? It turns out we don’t really, we have an idea what it feels like without being able to pinpoint it as something that we can systematically think about”.
It’s clear from talking with Hettema that he not only thinks differently from many other practitioners about security, but also sees something wrong with how many people approach cybersecurity. He says they tend to address the challenges through the lens of ethics.
“As a result, the academic papers developed in this area focus on an ethics of information security, and despite some work now being done, I don’t think this has so far been very fruitful. It has not helped us get a new handle on the problem, and from my perspective that is because the scope of the initial question has been too limited. Criminals are not particularly ethical, and we already knew that”.
Hettema’s view is that something is wrong with the system and that we should develop a view on its security from the perspective of social philosophy, in particular social contract theory, where a “person’s moral and/or political obligations are dependent upon a contract or agreement among them to form the society in which they live” (ref: Internet Encyclopedia of Philosophy).
“I am working on some papers in this area but hackers are leaving me little time to also play academic philosopher,” says Hettema.
In Hettema’s observation, once a company has been hacked, it becomes a more likely target for future attacks. He says: “An initial compromise may be picked up because something unusual happens – a strange email, an AV alert, an IDS alert. Then, once an attacker gains a foothold, they change their tooling, and start working with the sort of things most organisations don’t monitor for very well”.
These new techniques include privilege escalation, misuse of administrative tools, unauthorised access, or exploiting the excess access loaded onto most accounts.
Hettema calls this change in tools and techniques by hackers “pivoting”.
Looking at the security posture of many organisations, Hettema says that, over the years, we have built a model for securing our borders.
“The backend – where our business really happens – is a different matter, and is I think where the next security differentiation will come from,” he says.
One of the problems, he says, is that the management of privileged accounts is still poor. Also, he believes we need to do a better job of segmenting networks and take a long-term view of managing permissions as, over time, privileged accounts seem to gather what he calls a “crust” of excess access.
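As an illustration of how that “crust” might be audited, here is a minimal sketch, not something Hettema describes, that compares the permissions granted to a privileged account against those it has actually exercised and flags the excess. The account names, permission labels, and data structures are all hypothetical; in practice the granted set would come from a directory service and the used set from access logs.

```python
# Sketch: flag the "crust" of excess access on privileged accounts.
# All account names and permission labels below are hypothetical examples.

def excess_access(granted, used):
    """Return, per account, the permissions held but not exercised."""
    return {account: perms - used.get(account, set())
            for account, perms in granted.items()}

# Hypothetical inputs: what each account holds vs. what it recently used.
granted = {
    "svc-backup": {"read-fileshare", "write-fileshare", "domain-admin"},
    "alice-admin": {"reset-passwords", "read-fileshare"},
}
used = {
    "svc-backup": {"read-fileshare", "write-fileshare"},
    "alice-admin": {"reset-passwords"},
}

crust = excess_access(granted, used)
for account, perms in sorted(crust.items()):
    if perms:
        print(f"{account}: unused privileges {sorted(perms)}")
```

The point of the sketch is the long-term view Hettema argues for: run such a comparison periodically and the crust becomes visible before an attacker finds it.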
So, what can we do about this? Hettema believes there are several things companies can do to manage internal security better so hackers that bypass the border can’t pivot. Some of the key things are:
Admin access models: There’s an interesting revival of the old Bell-LaPadula and Biba approaches, focused on control rather than reading or writing.
PowerShell configuration: Who can send your servers PowerShell? Through which networks? Signed or unsigned?
Active Directory: Do you reset your krbtgt password?
IDS and detection at the backend: How do you configure an IDS? What should it alert on?
Logs and logging: Do you have a log management server or a data lake solution to know what’s going on?
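The PowerShell, IDS, and logging questions above all come down to the same thing: knowing what normal looks like at the backend and alerting on deviations. A minimal sketch of that idea, assuming log records have already been parsed into dictionaries (the field names and the allow-list of jump hosts are hypothetical, not from Hettema): Windows event ID 4104 records PowerShell script-block execution when script block logging is enabled, so remoting from an unexpected host stands out.

```python
# Sketch: alert on PowerShell script-block events from hosts outside an
# allow-list. Record structure is simplified; real 4104 events carry more
# fields and would be read from a log management server or data lake.

ALLOWED_SOURCES = {"admin-jump-01", "admin-jump-02"}  # hypothetical jump hosts

def suspicious_powershell(events):
    """Yield script-block events that did not originate from an allowed host."""
    for event in events:
        if (event.get("event_id") == 4104
                and event.get("source_host") not in ALLOWED_SOURCES):
            yield event

# Hypothetical parsed log records.
events = [
    {"event_id": 4104, "source_host": "admin-jump-01", "script": "Get-Service"},
    {"event_id": 4104, "source_host": "ws-finance-17", "script": "Invoke-Expression $payload"},
]

for alert in suspicious_powershell(events):
    print(f"ALERT: PowerShell from {alert['source_host']}: {alert['script']}")
```

A workstation in finance sending script blocks to servers is exactly the kind of pivot Hettema describes: legitimate tooling, used from the wrong place.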