Monday, September 13, 2010

What does it Mean to Say a System is Trusted?

Do not trust any computer that you cannot carry; prefer those that you can put in your pocket.

Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment.
- Robert H. Courtney


That little aphorism of Bob Courtney's has become a habitual touchstone for me. If it has not given me gravitas, it has at least kept me from appearing foolish by opining on the "security" of systems without regard to the threat or what they are being used for. It keeps me from equating the security of the application with that of the system or vice versa. It enables me to use a system for one application that is not suitable for others. It enables me to recognize when the security of a system that has served well is no longer adequate. (Many seem to get by simply saying that no system or application is secure. One can clearly get one's name in the paper by saying that. It is not particularly helpful.)

The client was a property and casualty insurance company. They had some fairly progressive programs under way, but both their IT and security programs were mature and stable. We were called in because they expected to deploy a number of new e-commerce applications on the public network. They wanted a security management system to ensure that these applications would be done conservatively.

The method that we used was to propose a straw man for the management system and then refine it in ever larger meetings. One of the practices that we recommended was that connected applications be done on dedicated hardware; we wanted to be sure that these applications were free from outside interference or contamination. In an early meeting, the client asked that this recommendation be changed to say that these applications be done on "trusted systems." We quickly realized that this was a better way to express what we were trying to say. It included our recommendation but was stated as an objective rather than a specific practice.

Then we discovered that the reason they wanted it restated was that they intended to run the application on their MVS mainframe. "MVS," we said. "You trust MVS?" "No," they said, "We trust our MVS. We have had it for twenty years, we manage it scrupulously, and we trust it." The auditors nodded their heads and then we nodded ours.


Part of the problem is that we came to the question the wrong way. In the early days, computers were serially reused and had no shared resources. Most of the applications were not sensitive. The question simply did not arise. After a decade or so, we began to recognize that there was a small potential for information to leak from job to job because primary storage was not wiped between jobs. Information left in memory by job n might be available to job n+1.

The problem really emerged with true shared-resource computing in the sixties. Even then the problem was tolerable: the systems were operated by a single enterprise, most of the users knew one another, and they shared similar goals and objectives.

By the late sixties, user populations had grown to the high tens or low hundreds, and the modern question was upon us: the potential for information to leak from one user to another. One clear way in which it might happen was the interference of one process with another. The problem now had a name. Research began. While we thought that it was important, computer use was still so sparse that it really was not.

However, these were the days of Grosch's Law, when we believed that shared-resource systems were inevitable and that the scale of sharing would continue to rise forever. We believed that one should always use the biggest computer one could afford. We believed that computers should be scaled to the enterprise. Thus, the problem of data security was framed as that of security in multi-user, multi-application systems. We had framed the question in a way that made it almost impossible to talk about, much less answer. We knew that there was an objective called data security, but the environment in which we wanted to talk about it was so complex that language failed us.
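
Grosch's Law, as it was usually stated, held that computing power grows roughly as the square of the cost. A minimal Python sketch of that arithmetic, with purely illustrative numbers and a made-up constant, shows why we believed one should always buy the biggest machine one could afford:

    # Purely illustrative: Grosch's Law in its usual statement,
    # performance ~ k * cost**2, with an arbitrary constant k.
    def performance(cost, k=1.0):
        """Computing power assumed to grow as the square of the cost."""
        return k * cost ** 2

    for cost in (1, 2, 4, 8):  # arbitrary relative budgets
        perf = performance(cost)
        print(f"cost {cost}: performance {perf:.0f}, per dollar {perf / cost:.0f}")

    # Doubling the budget quadruples the computing delivered, so the unit
    # cost of computing falls as the machine grows -- hence the belief that
    # ever larger shared-resource systems were inevitable.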

It was at about this time, 1968 or 1969, that I first met Dr. Willis Ware of the Rand Corporation. He came to White Plains for an IBM briefing on computer security. One item on the agenda was my masterwork, security for IBM's Advanced Administrative System. This system was intended for 5,000 users and ultimately served several times that. It was a multi-user, multi-application system, but it was operated in a static mode, i.e., programs could not be changed while the system was operating. Users could not program and programmers could not use.

I was justifiably proud of the access control for the system. It was the largest and most complete system of its kind and it worked. The operating system was hidden from the users and the access controls for users to applications ran at the application layer. Dr. Ware listened politely and then dismissed the whole effort as trivial. Years later, when we had become friends, I found that he did not even remember it. He dismissed it on the basis that it did "not address the general case, the one where any user could write and execute a program of his own choice."
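
For readers who have never seen access control done above the operating system, here is a minimal sketch of the idea, not the AAS design itself; the user and application names are hypothetical. The operating system is never exposed; the only question the system asks is whether a given user is entitled to invoke a given application.

    # Hypothetical sketch of application-layer access control.
    ENTITLEMENTS = {
        "clerk01": {"order-entry", "customer-inquiry"},
        "manager07": {"order-entry", "customer-inquiry", "commission-report"},
    }

    def may_run(user, application):
        """Return True only if the user is entitled to invoke the application."""
        return application in ENTITLEMENTS.get(user, set())

    assert may_run("clerk01", "order-entry")
    assert not may_run("clerk01", "commission-report")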

So the question of whether or not a system was secure had to be addressed not only in the context of arbitrary applications and data being shared by an arbitrary number of users, but also with no assumptions about the flexibility or generality reserved to any of those users. One might well conclude that such a question excludes any useful answer, but that did not keep us from trying.

Tomorrow we will look at some of the attempts.