In response to this, I wrote on NewsBites:
"For the moment and for most enterprises "patching" remains mandatory; failing to do so not only puts one at risk but puts one's neighbors at risk. At what point do we decide that the cost of patching is too high? When do we realize that the attack surface of these widely used products is so big, so homogenous, and so porous, that collectively they weaken the entire infrastructure? When do we realize that the architectures (e.g., von Neumann), languages, and development processes that we are using are fundamentally flawed? That hiding these products behind local firewalls and end-to-end application layer encryption is a more efficient strategy? When do we acknowledge that we must fundamentally reform how we build, buy, pay for, and use both hardware and software? At what point do we admit that we cannot patch our way to security?"
A reader responded in part:
I agree with you that the cost of patching does remain high. I agree with you that our languages and development (and testing) processes are flawed. Those complaints are not new and not interesting.
But our architectures, especially von Neumann? Would you lump IPv6 into that category as well? I'm curious why a man of your obvious accomplishments would think of that. Even more interesting would be if you had a better idea. The paradigm of the stored-program computer with instructions and data in the same memory seems unshakable at this point. Everybody has thought of separating memory into instruction space and data space, but that's just another way of getting more parallelism, to make things faster. It doesn't really change how computers work or how we think about them.
So... I'm curious: what do you have in mind?
I answered:
Thanks for your great question. It is good to know that someone is reading what I wrote, much less being provoked by it.
von Neumann was a genius, but he solved a problem that no longer exists, i.e., dear (scarce and expensive) storage. In his day storage was so dear that one did not want to pre-allocate it to data or procedure, as, incidentally, every computing machine prior to von Neumann's proposal had done. In fact, the problem went away almost as soon as he solved it. (By treating a program as data, von Neumann also gave us "address modification.") As early as the introduction of the IBM 360, index registers made it unnecessary, and bad practice, for a program to modify itself. Yet today programs are routinely corrupted by their data.
It is ironic that one can get a degree in Computer Science without ever hearing about, much less studying, an alternative to the von Neumann architecture. Consider the IBM System/38 and its successors, the AS/400 and the iSeries. This architecture is so different that, at least for me, learning about it was literally mind-bending. It is probably older than most readers and has been in continuous use, yet even many of its users do not appreciate how interesting it really is.
These systems employ a single-level store, symbolic-only addressing, and strongly typed objects. (The number of object types still numbers in the low tens.) The operations that one can perform on an object are specified by its type. For example, in these systems "data" objects cannot be executed and "programs" cannot be modified. Thus it is impossible for a program to be contaminated by its data. Programs can be replaced, but not modified; every version has a different fully-qualified name.
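To make the idea concrete, here is a minimal sketch in C of type-enforced objects. It is illustrative only, not the System/38 interface: the object names, the type tags, and the functions execute and modify are all invented for this example. What it shows is the rule described above, namely that data cannot be executed and programs cannot be modified in place.

#include <stdio.h>

/* Hypothetical sketch in the spirit of the System/38 / AS/400 model:
   every object carries a type, and the operations permitted on it
   are determined by that type. */

typedef enum { OBJ_DATA, OBJ_PROGRAM } obj_type;

typedef struct {
    obj_type    type;
    char        name[64];   /* fully-qualified, versioned name */
    const char *contents;   /* payload; immutable for programs  */
} object;

/* Execution is permitted only on program objects. */
int execute(const object *o) {
    if (o->type != OBJ_PROGRAM) {
        fprintf(stderr, "refused: %s is not executable\n", o->name);
        return -1;
    }
    printf("running %s\n", o->name);
    return 0;
}

/* Modification is permitted only on data objects; a "changed" program
   must instead be created as a new object under a new versioned name. */
int modify(object *o, const char *new_contents) {
    if (o->type != OBJ_DATA) {
        fprintf(stderr, "refused: %s is not modifiable\n", o->name);
        return -1;
    }
    o->contents = new_contents;
    return 0;
}

int main(void) {
    object payroll = { OBJ_PROGRAM, "LIB/PAYROLL/V2", "...machine code..." };
    object input   = { OBJ_DATA,    "LIB/TIMECARDS",  "hours worked" };

    execute(&payroll);             /* allowed                              */
    execute(&input);               /* refused: data cannot be executed     */
    modify(&input, "updated");     /* allowed                              */
    modify(&payroll, "patched");   /* refused: programs cannot be modified */
    return 0;
}

In the real machine this enforcement is done beneath the instruction interface presented to programs, so application code cannot simply bypass the check as it could here.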
The von Neumann architecture persists because the market prefers convenience, generality, flexibility, and programmability. At some level we know better, yet we continue to tolerate buffer and stack overflows.
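For readers who have not seen one, the sketch below shows, in C, the kind of flaw at issue: a data write overrunning its buffer and corrupting an adjacent piece of control information. The program and its names are invented for illustration; the point is only that nothing in the architecture or the language stops data from altering what gets executed.

#include <stdio.h>
#include <string.h>

/* Data (a name) and control information (a function pointer) sit
   side by side in the same memory, as the von Neumann model allows. */
struct session {
    char name[16];
    void (*on_close)(void);
};

static void normal_close(void) { puts("session closed normally"); }

int main(void) {
    struct session s;
    s.on_close = normal_close;

    /* No bounds check: input longer than the 16-byte buffer spills
       into on_close. A run-time data value has just changed what
       will be executed. */
    const char *input = "AAAAAAAAAAAAAAAAAAAAAAAA";
    strcpy(s.name, input);

    s.on_close();   /* undefined behavior: control flow now depends on data */
    return 0;
}

Compilers and operating systems now add mitigations such as stack canaries, non-executable data pages, and address randomization, but these are patches on the model rather than a change to it.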
Consider Apple's iOS. There is nothing that one can do from the user interface that will alter the programming of even the application, much less the system code. Each app turns the device into a single-application device. It is not a "programmable" device; the creation of apps for iOS is done in a separate environment. In the early days of iOS there was no app-to-app communication; it was introduced later, and only by means of tightly controlled and controlling APIs. Even device-to-device communication is limited and tightly controlled. For example, I cannot share an app from my device to another. On the other hand, there has never been a successful virus in this huge population of computing and communicating devices. Yes, I can click on a bait message in my browser or e-mail and be misled. However, I do not fear that I will alter the browser, the mail app, or the device code. I need not fear that I will unknowingly install malicious code on my device. One can argue that Android was a reaction to those limitations, not to say features, of iOS.
I expect, almost hope, that my response to the reader raises more questions than it answers.
Perhaps we can continue the dialogue here.