3 Tales of Systems Architecture Dilemmas

Have concerns about potential vulnerabilities in your data systems? We hear from three IT security experts about how they solved the systems architecture problems that were keeping them up at night

There is an old saying that "a problem shared is a problem halved." In security, shared information can be elusive as risk professionals keep their cards close to their chest. But today's challenging business environment puts a premium on finding practical solutions to the tasks every CSO faces.

Hence "Problem Solved," a CSOonline series of mini-case studies demonstrating how one company handled a particular problem. In this first installment, we hear from three IT professionals about three different challenges with systems architecture.

Web interfaces and default passwords: A bad combination

How many of your critical systems and applications have a web interface? According to Phil Dolbow, a principal with CyberDefenses Inc., almost everything has a web interface these days. But despite their prevalence and their potential for damage in the event of a breach, many organizations fail to change the default login credentials when these systems are installed. Dolbow's Texas-based consultancy specializes in information assurance and other facets of IT security, and does the majority of its work with the federal government, primarily the military. Here he outlines a common scenario involving web interfaces that he sees in many client shops, and shares his suggestions for how to solve the problem.

I have seen it all. For the most part, the problems I see are the ones you might guess: a lack of user training, misconfigured systems, a lack of funding for security. One of the biggies I see often is easy to find, easy to fix, and potentially devastating: web interfaces.

When performing assessments we always scan for open web interfaces. These days, almost everything has a web interface: storage area networks (SANs), uninterruptible power supply (UPS) systems, printers, alarm systems, phones, backup systems, servers; the list goes on. Potentially severe issues arise when these web interfaces are enabled with the default credentials in place.

Here are a couple of examples of a problem I have seen many times: Company X has no experience with storage area networks, but due to an ever-expanding need for disk space, it buys one. Since the company has no one who understands the intricacies of SANs, and the vendor insists on performing the install anyway, the vendor installs the system. Typically, vendors leave it to the customer to set user IDs and passwords on the systems they install. The customer rarely follows up on that, and the result is a mission-critical device with default factory credentials. I have seen this exact scenario play out many times on very critical systems. A malicious person could destroy LUNs, erase data, and cause all kinds of problems.

The same scenario applies to uninterruptible power supply systems. Not long ago we were assessing a large government entity that had spared no expense on IT security; it had one of the most secure systems I have ever seen. A few months prior to our assessment, a contractor had replaced all of its UPS systems, including the ones that ran all of the critical servers in its main computer facility. The contractor had connected these UPS systems to the network so that they could be remotely administered and monitored. I have a screenshot in the report to the customer showing us logged into the web interface (with admin rights, using the out-of-the-box credentials) and the mouse cursor hovering over the SHUTDOWN button. That got their attention.

The solution?

1. Perform regular port scans for web servers/interfaces (a minimal scan sketch follows this list).
2. If the web interface is unnecessary, shut down the service.
3. If it is needed:
- Change the credentials
- Use HTTPS if at all possible
- Limit access to the interface to authorized admin workstations only
- Add firewall restrictions
- Monitor logs
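Purely as an illustration of the first step (the subnet and port list here are hypothetical, and this is not the consultancy's actual tooling), a minimal C# connect-sweep for common web-interface ports might look like this:

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    class WebInterfaceSweep
    {
        // Hypothetical port list; adjust for your own environment.
        static readonly int[] WebPorts = { 80, 443, 8080, 8443 };

        static void Main()
        {
            var probes = new List<Task>();
            for (int host = 1; host < 255; host++)
            {
                string ip = "192.168.1." + host;   // example subnet only
                foreach (int port in WebPorts)
                    probes.Add(Probe(ip, port));
            }
            Task.WaitAll(probes.ToArray());
        }

        static async Task Probe(string ip, int port)
        {
            using (var client = new TcpClient())
            {
                try
                {
                    var connect = client.ConnectAsync(ip, port);
                    // Anything answering within two seconds is a web-interface
                    // candidate worth checking for default credentials.
                    if (await Task.WhenAny(connect, Task.Delay(2000)) == connect && client.Connected)
                        Console.WriteLine("Open web interface candidate: {0}:{1}", ip, port);
                }
                catch (SocketException)
                {
                    // Connection refused or host unreachable: nothing listening here.
                }
            }
        }
    }

In practice a dedicated scanner such as Nmap does this job better; the point is simply to enumerate everything answering on web ports and then verify that none of it still accepts factory credentials.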

User access to production systems: Limiting accounts and strengthening password protocols heighten security

Do the system access privileges among your staff put you at risk of a breach? Here, a senior IT manager with a large manufacturing company details how he reconfigured access to production systems to be more limited and auditable.

Most of our IT staff had full access to all of our production systems using their regular 'user accounts.' In a security audit and penetration test, the testers exploited this to end up owning our Windows domain and most of our production database servers.

We've now removed everyone's 'user accounts' from Domain Administrator, DBA/application root accounts and the like. Technical system administrators who need regular access to sensitive systems and data have a separate account for that purpose with a much stronger password, and we audit all use of that account with audit tools and a password vault tool from Cyber Ark.

Many of our application/DBA folks need a good deal less routine access to production systems. For those systems, we have removed ALL routine admin access and replaced it with a select set of "firefighter accounts," which are more generic. These accounts are stored in the password vault and protected by a very strong password. There's a process for entering tickets, obtaining approval and documenting this in our ticketing system. The password vault also requires several levels of approval for highly sensitive items, and it reinforces the ticketing by requiring input of basic ticket numbers and reasons before a password is released. After a password is released, it can be set up to reset automatically after a period of time and/or after the requestor 'checks it back in.'
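The vault product handles the mechanics; purely as an illustration of the checkout gate described above (hypothetical types and names, not Cyber Ark's API), the control flow amounts to something like this:

    using System;

    // Hypothetical interfaces standing in for the vault and ticketing integration.
    interface IApprovalService { bool IsApproved(string accountId, string ticketNumber); }
    interface IVault
    {
        string Release(string accountId, string ticketNumber);
        void ScheduleResetOnCheckIn(string accountId);
    }

    class FirefighterCheckout
    {
        public string RequestPassword(string accountId, string ticketNumber, string reason,
                                      IApprovalService approvals, IVault vault)
        {
            // 1. A ticket number and reason are mandatory before anything is released.
            if (string.IsNullOrEmpty(ticketNumber) || string.IsNullOrEmpty(reason))
                throw new InvalidOperationException("Ticket number and reason are required.");

            // 2. Highly sensitive accounts need one or more explicit approvals.
            if (!approvals.IsApproved(accountId, ticketNumber))
                throw new UnauthorizedAccessException("Checkout not yet approved.");

            // 3. Release the password and schedule an automatic reset when it is
            //    checked back in (or after a fixed time window).
            string password = vault.Release(accountId, ticketNumber);
            vault.ScheduleResetOnCheckIn(accountId);
            return password;
        }
    }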

The password vault has also assumed control over local administrator accounts on servers and PCs, service accounts, database access and application accounts that used to be embedded in systems. These accounts were typically never watched closely, and in the past their passwords could NOT be changed without breaking lots of systems. As we have moved these accounts and passwords under the tool's control, we rotate them regularly without issue. Lastly, we now have a disaster recovery copy of the vault via replication, too. As a result, our DR/BC plans are tighter as well.

One portal, many client databases: A privacy challenge

Can you run a web system as a portal when authentication and data are set up as client-specific? That was the question faced by James Ashbaugh, a Senior .Net Architect with a Midwest-based business management consultancy. The company had several clients running the same system code, with data housed in separate physical boxes. But the clients were all competing organizations in the same industry, so data integrity was of utmost importance, and Ashbaugh was concerned about how the system was configured.

Let me set the stage for you:

- Around 500-1,000 external users connecting via Citrix
- MS network
- Windows 2003 servers for the domain controller, web servers and SQL Servers
- SQL Server 2008

The web portal runs based on client, user roles and client DB data. Microsoft provides a rich, role-based model with ASP.Net, but that model is not designed to run under a portal structure. So, we married what Microsoft provides in ASP.Net security with Active Directory accounts/groups and SQL Server security.
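As a rough sketch of how those pieces meet (illustrative names only, not the production code), the Windows/Active Directory identity establishes who the user is, and the ASP.Net role provider decides what the portal will let them do:

    using System.Web;
    using System.Web.Security;

    public static class PortalAuthorization
    {
        public static bool CanViewItems(HttpContext context)
        {
            // Identity comes from Windows/Active Directory authentication upstream.
            string user = context.User.Identity.Name;

            // The ASP.Net role provider (which can be backed by AD groups or SQL)
            // answers the application-level question.
            return Roles.IsUserInRole(user, "Item Users Group");
        }
    }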

Now, here is the wrinkle: The company had multiple (competing) clients running the same system code, but data was housed in separate physical SQL boxes (with client-specific database names). So, everyone ran the same web application and middle tier, but at the DB level it was client specific (everyone was on the same DB model). Remember, our clients connected to our company via Citrix, so single sign-on was a requirement (even though Citrix handled the DMZ authentication, it did not handle the forms authentication with our web applications). It's important to understand that the clients are in the same industry, so data integrity is a must because of competitive advantage.

So, the challenge was how to authenticate, load and execute the web system as a portal when the authentication and data were set up as client-specific.

This is how we managed it: In Citrix we shared Internet Explorer as an application (but restricted clients from external web navigation via IE security settings and a login script). We did not want a client using our servers as a means to surf the Internet or launch attacks against competitors. The home page of IE was set to our web portal.

Using delegation and impersonation, we took the Citrix-authenticated user and passed the credentials to the portal's loader class. That class kicked off an authentication process against an "authenticate" SQL DB, in which all the client DBs, client users and roles were mapped. Once the loader confirmed this was an active user in the system, it would get the appropriate DB connection information and then pass it along in the user's web session.

Did we encrypt it? Yes. We found that cross-site scripting attacks could be used to capture the session information of other users. Encrypting the connection information in session meant a hacker would need to be on the physical web server and get to the page file or memory block at exactly the time the decryption algorithm was executed. Since this is not very likely, we supported the client requirements for data security at the DB connection level. In the DB, all connection strings and settings are stored in encrypted form using a client-specific private key (the encrypted data comes from the "authenticate" DB).
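A stripped-down sketch of that loader logic is below; the table, column and session key names are invented for illustration, and key management is omitted:

    using System;
    using System.Data.SqlClient;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;

    public class PortalLoader
    {
        // Connection to the shared "authenticate" database (illustrative string).
        private const string AuthDb =
            "Server=AUTHSQL;Database=Authenticate;Integrated Security=SSPI";

        public void Load(HttpContext context, byte[] clientKey, byte[] clientIv)
        {
            // The delegated/impersonated identity passed along from Citrix.
            string user = context.User.Identity.Name;

            // Look up the client database mapped to this user (hypothetical schema).
            string clientConnString;
            using (var conn = new SqlConnection(AuthDb))
            using (var cmd = new SqlCommand(
                "SELECT ClientConnectionString FROM UserClientMap WHERE UserName = @u", conn))
            {
                cmd.Parameters.AddWithValue("@u", user);
                conn.Open();
                object result = cmd.ExecuteScalar();
                if (result == null)
                    throw new UnauthorizedAccessException("Not an active portal user.");
                clientConnString = (string)result;
            }

            // Never drop the connection string into session as plain text;
            // encrypt it with the client-specific key first.
            context.Session["db"] = Encrypt(clientConnString, clientKey, clientIv);
        }

        private static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
        {
            using (var aes = Aes.Create())
            using (var enc = aes.CreateEncryptor(key, iv))
            {
                byte[] data = Encoding.UTF8.GetBytes(plainText);
                return enc.TransformFinalBlock(data, 0, data.Length);
            }
        }
    }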

So, now that the DB connection info was always available for this authenticated user, we had a way to marshal which DB connections were made. Note: We stored users' roles and other security settings in the session too. Once all this front-loaded logic was done, the loader would make a connection to the portal home page and use the DB configuration data from the user's session. The loader would then build out the user-specific home page and content.

Now, when the user began to interact with the system by requesting and updating data, the request would hit "controller" code first. This code was used by the service calls to marshal the execution of all system events. It defined the specific stored procs being called via a configuration file. Then, using that name value, it would call the DB layer and request execution of the stored proc. The stored procs were geared to roles by setting which SQL Server users were allowed to execute them. So, if the user was not in the "Item Users Group," that user would not be allowed to execute the "GetActiveInventoryItems" stored proc. In SQL Server 2008 you can embed C# code within a stored proc, so we marshaled the specific data fields returned based on the mapped user role. The controls on the web page were dynamically built to support the user-specific content being returned, meaning if there was a grid of items but only two columns were allowed for this user, the control would be adjusted to that role-specific data.
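A bare-bones illustration of that controller step follows; the event-to-procedure mapping here is hypothetical (in the real system it lived in a configuration file), and SQL Server's own EXECUTE permissions do the role enforcement:

    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;

    public class EventController
    {
        // Hypothetical event-to-stored-proc mapping, standing in for the
        // configuration file described above.
        private static readonly Dictionary<string, string> EventToProc =
            new Dictionary<string, string>
            {
                { "GetActiveItems", "GetActiveInventoryItems" }
            };

        public DataTable Execute(string eventName, string clientConnectionString)
        {
            string procName = EventToProc[eventName];

            using (var conn = new SqlConnection(clientConnectionString))
            using (var cmd = new SqlCommand(procName, conn))
            using (var adapter = new SqlDataAdapter(cmd))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                // The database enforces the role restriction, e.g. via
                //   GRANT EXECUTE ON GetActiveInventoryItems TO [Item Users Group]
                var table = new DataTable();
                adapter.Fill(table);
                return table;
            }
        }
    }

If the caller's mapped database role has not been granted EXECUTE on the procedure, SQL Server raises a permission error regardless of what the web tier requests.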

If you have a problem or a solution you would like to share, email Senior Editor Joan Goodchild at jgoodchild@cxo.com