Security Expert Gives Operating Systems Poor Security Grade
Examining Security in Proprietary and Open Source
October 14, 2002
Is open source software more secure? To most Linux enthusiasts, the answer is obvious: open source means more people can look for bugs and faster dissemination of bug fixes. But noted security expert Gene Spafford says that this may not necessarily be true. According to the Purdue professor of computer science and co-author of Practical Unix & Internet Security, good security begins with good design, and neither Windows nor Linux has much to brag about in that category. And while you might not agree with Spaf's assessment of the strengths of open source, you have to admit that he knows a thing or two about computer security. He's the director of Purdue's Center for Education and Research in Information Assurance and Security, and has advised a wide variety of organizations on computer security, including CERT, the FBI, the Secret Service, and the Air Force.
LP: You've been a vocal critic of both Windows and Linux's security design. What's the problem with Linux?
Spafford: Windows is awful, but well, so is Linux. Neither presents an environment that your average business user or government user or home user is able to install and use out of the box without worries. And in fact, if you look at your typical Linux distributions, with all of these tools and extra drivers and everything that's thrown on, a lot of that is programmed by people without training, without careful thought, and without careful design.
That's not the argument for the kernel. The kernel is rather tightly controlled by a small group who do have expertise.
In truth, it's the larger collection of things that gets shipped that somebody might want to install, and it's very often those poorly designed or poorly examined add-ons, programs that run with privilege, and server daemons that lead to the problems.
The places where people get into trouble with these systems are the businesses where you've got someone who doesn't have a degree in computing, hasn't got experience in security, and maybe they've been sold on open source. And they need to run a set of servers with back-end extensions, and it's the way these things interact with each other--the extra features--that simply hasn't been thought through from a security point of view. Things like buffer overflows. Those are coming left and right for everybody's system, and that's a problem.
There was a paper recently that came out of Berkeley, where they did a very nice analysis of the different kinds of setuid bits that are present, including the filesystem setuid bit that's in Linux, and found that when programs written for one variety of Unix are ported, the bits don't work the same way. And you can introduce a vulnerability because it doesn't follow the traditional Unix semantics. That's another problem with Linux: people have decided that they're going to change the semantics of system calls, and as a result, someone who has developed experience over time building secure services or reasonably robust services in another environment, when porting it and finding slightly different semantics or different interactions, runs into trouble. How do you build something that's portable?
How many major Linux distributions are there? And they're different. Patching is different. Command arguments are different. They support different peripherals. So when I was saying that Linux was awful, I meant not in the sense that the quality of the kernel is bad, but in terms of even pinning down what Linux is: to be able to know what version to run and that you can depend on it, and that you can port software to it. It just isn't dependable in that way.
LP: But certainly there's movement to unify the Linux APIs, with the UnitedLinux effort and the Free Standards Group. Do you see the community solving this?
Spafford: Maybe, but I remember from ten years ago when there was the attempt to merge the BSD and System V camps, and the Open Group and their standards were formed specifically to counter the Sun/AT&T approach, and we've had this fracture going for some time, and it still hasn't converged. I think it's entirely possible the same thing may happen here. And in either case, the definition of what is and is not part of the distribution is part of the problem, when it ships with all of these other possible peripheral drivers and commands, and some of the documentation doesn't match the code. If you look at the man pages, there are options in the commands that aren't documented or operate differently from what the documentation says, because there isn't as well-coordinated an effort in getting those into place as there should be.
If you look at systems that were actually designed with security in mind, we aren't even close to some of the systems that were rated under the old TCSEC B-level or even A assurance. And the argument that comes back is, "Who would want to use that?" Well maybe nobody on the desktop, and that's the thing. The people who are writing Linux want to use it on their desktop, they want to do development, they want to install things as they become available. In the typical business environment, you don't want to do this.
You want to plug it in, run it, and leave it. If we were to see any of these systems--from Microsoft or the Unix vendors or Linux--produced and evaluated under the Common Criteria, then that might be an eye-opener. Or if people who were interested in building a more reliable system got away from using C, then that would be very interesting. But nobody's doing that. The closest we're seeing is some research work, and a little bit from the appliance vendors.
LP: Bruce Schneier has said that the only way to have security is to have knowledgeable people review code, and open source creates the opportunity for this. So why have you said that open source software is not inherently more secure?
Spafford: I don't completely disagree with Bruce. I believe that if you're going to have secure code, it's got to be designed according to well-accepted principles, it needs to be built by people who understand security, using good tools and proper techniques, and then evaluated--using good methods--by people who know something about security. All of those are important steps in the development process. If you think about that, there is absolutely nothing that says it has to be open source or proprietary. All of those steps can be achieved by anybody, and can be accomplished in a proprietary arena--and have been. There's an awful lot of embedded software; there's an awful lot of previous work that's been done in trusted operating systems that was done in a proprietary environment, where they were willing to expend the resources to buy the tools and to hire the people with the appropriate training.
On the other hand, we've seen code that's been out and has been used in teaching--such as Kerberos 4 or OpenSSL--that has been out there for months or years before very painfully obvious security flaws were found by someone who happened to look at it with the proper mindset.
The zealots would say, "Well there's clearly the benefit of open source; you couldn't do that with proprietary source." And the answer is, "Of course you could. If those same people had access to the proprietary source, they could have found it as well." It has nothing to do with whether or not it's open. It's the methodology and the training of the people who look at it.
Another response is: "Because it's open source, it's easier to fix." Maybe. It depends on where the code's used--if it's used in a certified environment or an embedded application, for instance. And from my standpoint, whether or not I can do all the maintenance on my own car, if I have to go back and install a fix to the brakes every time the car crashes and kills somebody, I don't view that as more secure. Secure means it doesn't need the patches. It's done right the first time. So the people who are saying that their code is more secure and it still needs patches every other week--whether it's proprietary or open source--are playing fast and loose with the semantics of what security means.
LP: But with open source software, at least you're not blocking out people who want to volunteer to fix your code.
Spafford: Right. You're creating an opportunity. Look at what Crispin Cowan's been doing with his Sardonix project: he's got some funding out of DARPA to set up a pool of people to do code reviews, which is OK, except it's a matter of getting enough people with the right kind of training and the right kind of tools. For me, one of the most telling things is: here you have this huge community of open source, but where are all the open source testing tools? Where are all the robust coding tools? There aren't any.
LP: But in terms of quickly identifying the security vulnerabilities and quickly putting out patches to fix them, do you see that as a significant advantage for open source?
Spafford: The problem is that there are people putting code that's been developed that way into mission-critical circumstances, and they've bought into the idea that if everybody looked at the code it must be more secure, and that "Gee, I can get to the patches faster, so that makes it more secure." And in neither case is that really true.
LP: Well I guess that depends on what you mean by "more secure." Is it more secure than Windows?
Spafford: Is it more secure than Windows was three years ago? Yes. Is it more secure than Windows will be two years from now? Hard to say. The folks at Microsoft are working very fast and very hard on better patching, and faster mechanisms. If a company gets to the point where they're able to identify these things and able to get tested, fielded, automatically applied patches out in a matter of minutes or hours, is there still a benefit?