David Lewis, the Technical Director of Cyber Security Analysis and Operations, Australian Signals Directorate, had perhaps the most "interesting" presentation title of the Oceania CACS conference.
He chose the fictional "Department of Chickens" so no actual government department would feel it was being outed in his presentation, which covered a "war story" about a real cyberattack against a government agency.
Lewis says the ASD finds out about incidents when its clients self-report, when a partner organisation reports them, or when the ASD uncovers something through a proactive operation - the latter being particularly important when reported incidents aren't coming in.
Understanding an incident, says Lewis, starts with understanding the sensitivity of the victim or the affected systems. They then examine the impact and "success" of the malicious action. Finally, they try to identify the actor conducting the activity. Armed with this triage information, they develop and communicate an initial response. In some cases, that communication will come from the ASD, via the ACSC, to victims of a breach before the victim knows they have been attacked.
Lewis says reconnaissance is a much longer process than many organisations realise. In Lewis' fictitious but fact-based department, the ASD knew of a brute-force attack well before the victim reported it, and knew the source servers of the attack and what was going on.
From a mitigation point of view, Lewis says organisations should conduct their own reconnaissance, restrict access to key systems, and minimise their exposure to the Internet.
The attackers in Lewis' case study used three different attack vectors. All were email-based but delivered different payloads: macros, executable files and HTML applications. And rather than using zero-day exploits, they used older, but unpatched, vulnerabilities.
From a mitigation point of view, he says patching, application whitelisting and putting controls around macro execution are important strategies.
Once the threat actor was inside the system, they used scripted reconnaissance tools, even employing commercial penetration testing tools. At some point, the actors also had access to some user credentials so they attempted to use these on a web-mail server. Interestingly, while there were account lockouts on internal systems, the web-mail server did not have that control.
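The gap Lewis highlights - internal systems enforcing account lockouts while the web-mail server did not - is easy to illustrate. A minimal lockout sketch (the threshold is an illustrative assumption) shows what the web-mail server was missing:

```python
# Minimal account-lockout sketch. Internal systems in Lewis' story enforced
# something like this; the web-mail server did not, letting attackers test
# stolen credentials freely. The max_failures value is an assumed policy.

class LockoutPolicy:
    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = {}   # username -> consecutive failed attempts
        self.locked = set()

    def attempt(self, username, password_ok):
        if username in self.locked:
            return "locked"   # reject even correct passwords once locked
        if password_ok:
            self.failures[username] = 0
            return "ok"
        self.failures[username] = self.failures.get(username, 0) + 1
        if self.failures[username] >= self.max_failures:
            self.locked.add(username)
            return "locked"
        return "failed"
```

The design point is that the control must be consistent across every authentication surface: an attacker with valid usernames will simply find the one service where the counter is absent.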
The aim for the hacker was to gain local administrator access on one system, using a known vulnerability for which a patch was available (CVE-2014-1812). From there, it was a short step to elevating that to domain administrator access. That occurred because a system installed by a third party still had a default password that had been missed in a configuration review.
At this point, the hackers behaved like typical administrators to avoid detection. They moved laterally through systems, depositing malware tailored to specific targets that evaded the end-point protection used by the department. This included the ability to create their own second factor in organisations that use two-factor authentication.
One of the detection and mitigation processes Lewis recommends is logging and reviewing all admin account usage and the use of administrative tools. But, at this point, detecting the bad guys is very difficult.
The threat actors also tried to exploit the department's trust relationships with other departments by sending emails carrying more malware from within the compromised department.
Lewis, on several occasions, noted the importance of identifying anomalous behaviour - something that is very difficult.
The investigation and remediation process, says Lewis, may take as long as a year. They review logs and carry out intrusion forensics using a number of different tools.
Not surprisingly, Lewis says the best plan for dealing with intrusions is to avoid them completely. Prevention is the best cure, he says.