The recent compromise of SolarWinds’ Orion software has led to lots of largely ineffective hand-wringing. I’ve seen more time spent talking about who should be blamed for the incident than about how to mitigate the damage it caused or to reduce the chances of a similar incident happening in the future.
But it has motivated the creation of lots of new initiatives to increase the security of software by adding all sorts of additional processes and oversight to software engineering organizations. These efforts might provide some small, incremental gains in the security of software, but they are probably doomed to do little more than that.
Security of code simply isn’t a priority for some developers, and it’s going to be very hard to change that for much of the software currently in use. And it’s probably the case that even the most careful and thorough software security engineering practices are not going to produce secure commercial software. Here’s why.
The problem with FOSS
Free and open-source software (FOSS) is here to stay. Although most FOSS projects don’t go anywhere, those that do can end up making a huge difference. Lots of the Internet now runs on the FOSS LAMP stack—the Linux operating system, the Apache web server, the MySQL database, and the PHP programming language. Commercial alternatives for each of these exist, but they have yet to gain the level of acceptance and use that their FOSS competitors have. Similarly, FOSS components are part of essentially all commercial software these days.
It isn’t easy for software companies to get their developers to take software security seriously. My experience has been that it takes a significant effort backed by the highest levels of management to get this to happen. But it has happened in some places, and the quality of software has dramatically increased in many cases because of these efforts.
But it seems to be much harder to get FOSS contributors to take security seriously. The Linux Foundation’s 2020 FOSS Contributor Survey suggests that FOSS programmers just aren’t that interested in security. Instead, things such as learning new things and gaining the respect of their peers motivate them. They spend less than 3% of their time responding to security issues. And some of their responses to the survey suggest that it’s going to be very hard to get them to take security more seriously.
Text responses indicated that many respondents had no interest in increasing time and effort on security; it was not simply that they wanted to be proactive. One respondent said, “I find the enterprise of security a soul-withering chore and a subject best left for the lawyers and process freaks. I am an application developer.” Another said, “I find security an insufferably boring procedural hindrance.”
So with a significant amount of the software in use today incorporating FOSS in some way, it seems likely that we’ll be dealing with the security issues that come with it for the foreseeable future.
The problem with software
But the security issues with software aren’t limited to FOSS. Those related to FOSS are probably just harder to manage (and FOSS programmers may be more honest about how they feel about security).
I routinely do things that definitely qualify as insufferably boring procedural hindrances, but I do them because it’s part of my job. I can’t just decide to not do them because I don’t feel like doing them, but that’s an option that FOSS developers have, and some seem to take advantage of this.
It’s hard to fault them. If I weren’t getting paid to do my job, I’d be avoiding insufferable procedural hindrances, too. Unfortunately, it looks as if security might be getting less attention than some people would like because of that.
But it may be impossible to make any form of software reasonably secure, at least in a commercially acceptable way.
Modern software is probably the most complex thing ever created by man. Getting something that complex to work is hard. Try understanding the build process for your typical enterprise software these days and you’ll be surprised that we can even get the stuff to work at all. Getting something that complex to work correctly is even harder. Getting something that complex to work securely may be so hard that it might be impossible in many cases. A study done by the Naval Postgraduate School nicely summed up the situation:
The problem with system security is that it is easy to find flaws, but it is difficult to find all flaws. Thus, if post-development flaw discovery and remediation is chosen as the path to achieving a secure system, then it is difficult to make a statement regarding the completeness of the security mechanism. Similarly, security functions that are added to a pre-existing system require analysis to ensure that they will perform with the level of trustworthiness intended. This analysis will extend to all elements depending on or upon which the security addition depends, as well as all resources shared by the addition, e.g. global data. Furthermore, unless the system has already been rigorously developed, the security analysis is likely to become so complex that starting anew would be more effective.
In other words, it’s probably impossible to make a system secure unless you carefully design it to be secure from the beginning. But commercial systems aren’t made that way. It’s simply too expensive.
NASA’s experience suggests that it is indeed possible to create very reliable and secure software, but it will end up costing more. Way more. Maybe a factor of 20 or so more. So if you think enterprise software is expensive now, imagine how much more expensive it would be if it were developed using the robust system security engineering methodology that NASA uses.
An unfortunate reality
So we know how to make secure software. We just don’t know how to make secure software that’s cheap enough to be sold. And because of that, it seems likely that we’ll be living with software that’s merely expensive instead of prohibitively expensive and that we’ll also be living with the many security issues that will come with that software.
That’s the unfortunate reality of the software business. We can make it better, but we can’t make it perfect. At least not for a price that you’d be willing to pay.
No Security is a monthly column. Recent posts can be found here.