Your secure developer workstation solution is here, finally!

Developer workstations are high-value targets for hackers and often vulnerable. Now you can protect them using concepts borrowed from securing system admin workstations.

For decades, one of the thorniest problems in computer security has been how to better secure developer workstations while still giving developers the elevated permissions, privileges, and freedom they need to get their jobs done. All the proposed solutions missed the mark. Then, as a side effect of trying to better secure admins, we found the answer. Finally, we have the right solution: the secure administrative workstation (SAW). Before I describe the SAW, it’s worth reviewing the challenges of securing developers’ workstations and the associated risks.

There is good reason to fear the security consequences of developers using insecure computers to do their job. Developers are often the specific targets of malicious hackers, and a compromise usually gives an attacker immediate elevated access to the most essential, mission-critical content in an enterprise. Want to take over a company or cause reputational damage pretty quickly? Compromise a developer.

This is because developers often have total control not only of their local computer, but of many other computers and servers, usually across multiple environments. Many companies have completely different, separate environments for production and testing. Testing may even be broken down into multiple testing environments (e.g., test or pre-production).

Developers often have elevated access to all of them. Heck, IT security would be ecstatic if developers simply didn’t use the same logon name and password across multiple environments. If a hacker compromises the top admin accounts (domain admin, forest admin, etc.) in an environment, they own that environment. Compromise a developer’s credentials, however, and you can likely become admin in all environments. It’s a hacker’s nirvana attack scenario.

There are even attack types aimed specifically at developers, like “watering hole” attacks, in which hackers compromise popular developer websites known to be good places to share code and troubleshoot programming issues. For example, four of the largest software development companies in the world were compromised during a single hacker campaign that placed a zero-day Java exploit on an iOS developer website.

Another favorite, though less common, developer-specific attack involves the hackers uploading a piece of code of their own creation for other developers to download and use. The innocuous-looking code often solves a missing piece of the developer puzzle or offers improved administration of a server. Unknown to the subsequent downloaders, the code includes a hidden backdoor, which allows the original creators to gain unauthorized access to the server or application.

One of my favorite angles on these types of attacks is when the trojan horse code contains a simple HTTP link that must, according to the open source license, stay included as a perpetual part of the code no matter who uses it. The HTTP link initially points to some innocent source code license or coder statement. But later on, after the code has been downloaded and installed on thousands of sites, the originators change the innocent link to something more malicious, like an encoded JavaScript worm that is suddenly launched on every visitor to a website that includes the trojan horse code. It’s really genius and difficult to stop if you don’t know about such things. Developers not trained in computer security usually don’t.

Precious source code

One of the biggest reasons developers are attacked is their association with lots of source code. Malicious attackers can download it and look for vulnerabilities they can take advantage of later or, even more insidiously, change the source code to contain a backdoor. Someone once even (unsuccessfully) attempted to insert a backdoor into the Linux kernel.

Just as threatening, trusted inside developers may take source code or databases home or to a competitor. When computer security people start talking about insider threats, they are often most worried about malicious coders with dubious intentions.

The problem is that even though developers agree they need to be more resistant to attack, they are even more upset if you so much as think about taking away that always-on elevated access. Many developers are fine with the idea of being more secure, but openly resist if those protections slow them down by more than a few seconds. I get that. Their livelihood and paycheck depend on them being fast and proficient. Coding is hard enough without having to jump through too many hoops to get the job done.

Traditional solutions

Two main traditional solutions for securing developers have persisted. One gives the developer two workstations: one to do coding work on, and one to do everything else. No one likes working on, carrying, and switching back and forth between two computers.

The other is simply not giving developers admin privileges by default, and making them log onto elevated accounts only when needed. This latter approach failed because developers need admin privileges all the time: they are updating drivers, source code, and servers as a daily part of their job. Because of these challenges, for a long time many companies simply accepted the risk and made security exceptions for developers. But an acceptable solution has arrived.

Your secure developer workstation solution

Over the last few years, the SAW concept has become nearly ubiquitous for better securing an enterprise’s administrators. A SAW is a specialized, security-hardened, locked-down computer that admins are required to use to do anything administrative. At the very least, a SAW is prevented from going to the internet or being contacted from the internet, and normal, higher-risk non-admin activities such as web browsing, email, and file sending are not permitted on it.

Today’s SAWs often go further by preventing any unauthorized program from executing, usually by using a whitelisting application control program, and by requiring two-factor authentication (2FA) or multi-factor authentication (MFA) for all logons.

The main difference between a SAW and yesterday’s traditional recommendation of giving developers two different computers is that both roles can now live on the same computer. SAWs often contain one or more virtual machines that run the higher-risk, non-admin apps and tasks. The admin is forced to run all admin stuff on the highly secured physical computer, and the more open, less trusted “computer” runs as a virtual machine. It’s important that the more trusted and secured computer host the less secure and less trusted virtual machine. Otherwise, the basic tenet of how security trust should flow would be violated.
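To make that concrete, here’s a minimal sketch of provisioning the less trusted guest on a Windows SAW host, assuming Hyper-V is the hypervisor; the VM name, memory size, and switch name are all placeholders:

```python
import subprocess

# Hypothetical sketch: provision the less trusted guest VM on a Hyper-V
# SAW host. The VM name, memory size, and switch name are illustrative.
def create_untrusted_vm(name="untrusted-desktop", memory_gb=4,
                        switch="Internet vSwitch"):
    cmd = (f"New-VM -Name '{name}' -Generation 2 "
           f"-MemoryStartupBytes {memory_gb}GB -SwitchName '{switch}'")
    # Requires an elevated prompt and the Hyper-V PowerShell module.
    subprocess.run(["powershell", "-NoProfile", "-Command", cmd], check=True)

if __name__ == "__main__":
    create_untrusted_vm()
```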

The same concept can be used for a secure developer workstation. The developer logs into the more secure physical computer, and must do so using 2FA/MFA logons. This prevents easy password phishing attacks from stealing developer credentials. The developer SAW must be prevented from going to the internet or being contactable from the internet. The developer should also have to open up a less trusted, completely separate virtual computer to do non-developer stuff or connect to the internet.

Developers might argue that they need free rein on the trusted developer SAW to access websites that help their productivity, but it’s that direct access to the internet that presents the most risk. Everything else pales in comparison. If you give in on this point, nothing else really matters. Really, why do any of it at all? This needs to be the line in the sand.

I wouldn’t even allow connections to pre-authorized, trusted websites (see the watering hole attacks above), or even trusted vendor websites. Once you start allowing any internet connections to the developer SAW, the creep begins: less trusted websites get added to the exception list. In a year or two you’ll end up with dozens to hundreds of exceptions and kill the whole reason for going to a SAW.

One exception: If the developer develops on and for an internet cloud platform such as Azure or AWS, then obviously those exceptions have to be made. 
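To give a rough idea of what that default-deny posture could look like on a Windows SAW, here’s a hedged sketch using the built-in firewall. The address range is a placeholder (a TEST-NET range), standing in for the cloud vendor’s published service ranges:

```python
import subprocess

# Hedged sketch: a "no internet by default" firewall posture for a
# developer SAW, with one narrow outbound exception for the cloud
# platform the developer deploys to. The range below is a placeholder,
# not a real Azure/AWS range.
CLOUD_RANGE = "203.0.113.0/24"

def run(cmd: str) -> None:
    subprocess.run(cmd, shell=True, check=True)

# Default-deny inbound and outbound traffic on every firewall profile.
run("netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound")

# Single, explicit outbound exception for the approved cloud platform.
run('netsh advfirewall firewall add rule name="SAW cloud exception" '
    f"dir=out action=allow remoteip={CLOUD_RANGE}")
```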

A developer SAW can have a few other additions compared to its admin counterpart. One, it is probably wise to include a strict whitelisting application control program on every developer’s workstation. Whitelisting application control programs are great at preventing malicious executions, but developers, as a rule, are constantly creating new drivers and programs. If the developer doesn’t need to run new kernel code or programs, enforce the whitelisting program.

If they constantly create new executable content, see if you can use what’s called a “path” rule or a digital signature rule to allow legitimate execution. For example, on Windows computers, most developers shouldn’t be creating new executable code for the Windows\System32 folder on their own system.
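To illustrate the difference between a hash rule and a path rule, here’s a toy sketch; the approved hash and build directory are made up, and a real deployment would rely on the enforcement engine built into your application control product:

```python
import hashlib
from pathlib import Path

# Toy illustration of whitelisting logic (not a real enforcement engine):
# approve binaries by known-good hash, or by a path rule covering the
# developer's own build output. Both entries below are hypothetical.
APPROVED_HASHES = {"<sha256-of-an-approved-tool>"}
APPROVED_BUILD_DIR = Path(r"C:\Dev\build\out")

def is_allowed(exe: Path) -> bool:
    digest = hashlib.sha256(exe.read_bytes()).hexdigest()
    if digest in APPROVED_HASHES:
        return True  # hash rule: pre-approved, known-good binary
    # Path rule: fresh builds in the developer's own output folder run;
    # new executables dropped into System32 (or anywhere else) do not.
    return APPROVED_BUILD_DIR in exe.resolve().parents
```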

Central tools server

You can also create a central tools server with all the pre-approved developer tools that aren’t pre-approved for use on the default SAW image. Developers can log onto their SAWs, go to the servers they need for the session, and map a drive share to the tools server.
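Mapping that share at the start of a session can be a one-liner; here’s a sketch with a hypothetical server and share name:

```python
import subprocess

# Sketch: map a drive letter to the central, pre-approved tools share at
# session start. The server and share names are hypothetical.
TOOLS_SHARE = r"\\toolsserver\devtools"

def map_tools_drive(letter: str = "T:") -> None:
    # "net use" is the standard Windows command for mapping a network share.
    subprocess.run(["net", "use", letter, TOOLS_SHARE], check=True)

map_tools_drive()
```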

Disable removable media

Another key feature of a secure developer SAW is disabling the ability for anyone to remove source code in an unauthorized manner. This means disabling removable media (or at least requiring encrypted removable media when it is allowed), disabling cut-and-paste actions, and blocking anything else that would allow a developer to move source code or data out of their authorized locations.
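On Windows, for example, the removable media lockdown maps to a single policy setting. Here’s a hedged sketch of flipping it programmatically, though in practice you’d push it via Group Policy to every developer SAW:

```python
import winreg

# Hedged sketch: set the registry value behind the Group Policy setting
# "All Removable Storage classes: Deny all access". Normally pushed via
# GPO rather than scripted per machine; requires admin rights.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\RemovableStorageDevices"

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    # Deny_All = 1 blocks access to all removable storage classes.
    winreg.SetValueEx(key, "Deny_All", 0, winreg.REG_DWORD, 1)
```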

If source code or data must leave the developer’s network or workstation, require that it be done only by exception, or only on particular, well-audited workstations. Enough code has been stolen from the world’s enterprises to provide cautionary examples for everyone.

Still too much extra effort

Believe it or not, many developers (and admins) absolutely think that having to click back and forth between the SAW and the less trusted virtual machine is too high a burden for the added security. I hear the complaints all the time. Just tell them you’ll be glad to give them two separate computers, one for developing and one for everything else, if switching back and forth between a physical computer and a hosted VM is too much. That usually shuts them up.

But suppose management agrees that a few mouse clicks to switch contexts between the secure admin workstation and the less trusted virtual machine is too much extra effort. There are still myriad ways to have trusted and untrusted applications running on the same computer at the same time while keeping them security-separated.

One traditional way (beyond running hosted virtual machines) is running each app remotely on a different hosting server, as is often done in Citrix or Remote Desktop Protocol (RDP) implementations. To the end user it looks like the application icon is sitting on the same desktop, but when they click the icon, it launches the app remotely on a different server.

The problem with these designs is that the initial program launch is often a lot slower than subsequent starts, and in many cases there is still no true security boundary between the applications or on the desktop. Although I don’t know of a single “in-the-wild” remote app exploit, the security experts who have looked at these models do find big flaws. Still, they remain a pretty good solution, until the first real attack in the wild exploits them.

An even better solution is something like Joanna Rutkowska’s Qubes OS. Qubes is a hypervisor-enabled desktop system with a focus on security isolation. It can run other operating systems, each within its own virtual machine instance, and the Qubes administration back end and networking run in their own isolated virtual machines as well. The security-oriented back end makes creating, managing, and operating all the virtual instances easier. Each virtual instance can appear co-mingled in a GUI desktop, although they are completely separated by hypervisor-enforced security boundaries. And it’s free.
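For a flavor of how simple standing up that split can be, here’s a sketch that creates one trusted qube for development and one untrusted qube for browsing; qvm-create is the real Qubes tool, but the template and label choices are illustrative:

```python
import subprocess

# Sketch: create one qube for trusted development work and one for
# untrusted web browsing, each isolated in its own hypervisor-enforced
# virtual machine. Template and label names are illustrative.
for name, label in [("dev-trusted", "green"), ("web-untrusted", "red")]:
    subprocess.run(["qvm-create", "--class", "AppVM",
                    "--template", "fedora-39", "--label", label, name],
                   check=True)
```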

SDL

While we are at it, make sure all your developers are trained in secure development lifecycle (SDL) methods. Most developers still don’t have adequate secure programming training. Most colleges and other educational institutions are still doing a poor job of preparing the world’s programmers to write secure code.

A few hero institutions get it and make sure their graduates are appropriately trained in SDL, but most are either not teaching it at all as part of the curriculum or are teaching things that no longer apply. It’s the rare class that teaches programmers how to stay secure during agile development, how to secure microservices or containers, or how to secure their code in the cloud.

The organizations giving their developers secure workstations and SDL training are going to be at the forefront of their industries in protecting their companies and customers. Now that there are good answers for developers doing their work securely, it’s time to get busy!
