Tuesday, May 10, 2011

On Trusting Systems

The idea of trusted systems is almost as old as shared-resource computing. In fact, it arose in that context. It was all about being sure that data did not leak from one user to another or from one classification to another. Contamination of one user or process by another did not occur to anyone until we realized that preventing it was essential to preventing leakage. The story we told was how Roger Schell installed a Trojan horse in Multics.

Today one issue is preventing leakage from one system to another. Our systems leak, at least in part, because they become contaminated. Contamination requires that the system do something, execute something, a program or a command. Sometimes it does this automatically, as with a worm or virus; sometimes because a user tells it to.

Another is preventing the exploitation of lost or stolen mobile devices. Of course, one way to do that is not to put sensitive data on mobile devices in the first place. On the other hand, there are applications where it would be useful to put sensitive data on a mobile device if one had confidence that the data would be safe were the device lost or stolen.

What does it mean to say that a system is trusted? For a few decades I have cautioned people not to trust systems that they cannot carry and to prefer those they can put in their pockets. That recommendation relied upon the fact that such devices were shallow and simple. Ken Thompson, who received the Turing Award for his work on Unix, asserted that one cannot trust any computer system unless one wrote it oneself. Courtney argued that the question is meaningful only in the context of a specific application and environment. Peter Capek and I argued in a paper for the Irish Computer Society that, Thompson notwithstanding, we do in fact trust our systems.

I would argue that one useful test of trust is predictability: if, for every input to the system, one can predict the output, both when the system is performing correctly and when it is failing, then one has grounds for trust. Of course, the more general, flexible, and functional the system, the more difficult the prediction. The issue is not so much getting correct results as expressing the prediction and reconciling it with the observed behavior.
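
To make the test concrete, here is a minimal sketch, in Swift, of predictability as a checkable property. Everything in it is invented for illustration: "predicted" stands for our model of the system, and "observed" stands for the system under test; neither comes from any real product.

    // A minimal sketch of the predictability test. All names are
    // illustrative: `predicted` is our model of the system and
    // `observed` is the system under test.
    func passesPredictabilityTest(inputs: [Int],
                                  predicted: (Int) -> Int,
                                  observed: (Int) -> Int) -> Bool {
        // Trust survives only if every prediction matches observation.
        return inputs.allSatisfy { predicted($0) == observed($0) }
    }

    // Example: a doubling "system" simple enough to model exactly.
    let trusted = passesPredictabilityTest(inputs: Array(0..<100),
                                           predicted: { $0 * 2 },
                                           observed: { $0 * 2 })
    // trusted == true; any divergence between model and system breaks trust.

The simpler and more closed the system, the easier it is to write the prediction at all.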

My favorite example of a trusted computer is Pac-Man. The owner of the system can trust Pac-Man. The user can also trust Pac-Man. It is single-user. It is single-application. The program is simple and obvious as to its intent. The behavior, use, and content of the machine are predictable. It is not user programmable. It is closed. The file system is hidden from the user. The operating system is hidden from the user. The user cannot insert arbitrary data or cause it to be executed. It is not connected to any network. It does not have any exposed input/output such as a disk drive or a thumb drive. It does not even have a keyboard. As a result, it is stable. It does not get into unusual states. It does not have to be constantly maintained or "patched."

In computer science terms, the arcade game is an "object," an artifact that encapsulates and hides both its data and its methods.
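
In that spirit, here is a minimal Swift sketch of the arcade machine as an object; the names are mine, invented for illustration.

    // The arcade machine as an object: its data and methods are private;
    // the only public surface is the coin slot and the joystick.
    final class ArcadeMachine {
        private var score = 0            // hidden data: no outside read or write
        private var running = false      // hidden state

        private func advanceGame() {     // hidden method: not user-callable
            score += 10
        }

        // The entire public interface.
        func insertCoin() { running = true; score = 0 }
        func moveJoystick() {
            guard running else { return }
            advanceGame()
        }
    }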

Compare Pac-Man to your typical personal computer. Not the owner of the system, not the user, not anyone can predict what it will do, much less that it will not do forbidden things. The question is: how much can we relax the properties of the arcade game before we lose trust? How much must we restrict the personal computer before we obtain the necessary trust?

I trust my iPad and my iPhone. I trust iOS devices in ways that I do not begin to trust Windows systems or even OS X. First, an iOS device is an application-only machine. It is true that it can run multiple applications, but the abstraction, the "app," is of a single application at a time. Apple assures me that data cannot leak from one app to another and that an app cannot contaminate or interfere with another app.

Said another way, an app looks to me like the arcade machine. It is single-user. It is simple and obvious as to its intent. The behavior, use, and content of the app are predictable. It is not user programmable. It is closed. The user cannot insert any arbitrary data, much less cause it to be executed. While the app can see its portion of the file system, it can see only its portion. The file system and the operating system are both hidden from the user. They are hidden beneath an owner-chosen collection of apps, chosen from among a growing population of more than 300,000. Moreover, all of these apps, and changes to them, come to me from one place, a known source, Apple, in tamper-evident packaging.

Unlike Pac-Man, the app can be connected to the Internet, at least as a client, though not as a server. However, the app can access the network only via an Apple-provided application programming interface, or API. Unlike Windows or OS X, the iOS network policy is restrictive rather than permissive; one does not need an add-on firewall.
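
As a sketch of what "client only, via the API" looks like from inside an app, consider the following. It uses Apple's URL loading system as it appears in today's Swift, URLSession (the Objective-C API of 2011 was NSURLConnection), and the URL is illustrative.

    import Foundation

    // An outbound, client-only request through the Apple-provided API.
    // The URL is illustrative.
    let url = URL(string: "https://example.com/data.json")!

    let task = URLSession.shared.dataTask(with: url) { data, _, error in
        if let error = error {
            print("request failed: \(error)")      // the app sees only the API's result
        } else if let data = data {
            print("received \(data.count) bytes")  // the app asked; the server answered
        }
    }
    task.resume()
    // The app reaches the network only through such interfaces;
    // it initiates requests rather than listening as a public server.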

One might suggest that my trust of iOS and my iPhone or iPad relies heavily upon a trust of Apple, a level of trust that few of us have of Microsoft, IBM, or any other vendor. To some extent that is true. However, whatever trust I may place in Apple is corroborated by four years of experience in which not one among tens of millions of users has reported a malicious program, leakage of data from one app to another, or contamination or interference of one app by another. Perhaps not quite as good as Pac-Man, but sufficient for most of my purposes.

Like Pac-Man, and unlike other tablets, my iPhone and iPad do not have SD card slots or USB ports, in part to resist data leakage and device contamination. While there is an SD card reader that can be attached to the proprietary Apple connector, its use is limited to importing photos.

Many of my peers, colleagues, and contemporaries clearly prefer more open devices. They want swappable storage. They do not want to be restricted to a single source of programs. They want to be able to write and execute their own programs. Openness has a value, and some will choose it in preference to trust. To that class of users, Steve says, "Get a Mac; get an Android." However, I will stick my neck out and predict that most new users will prefer trust.

Unlike Pac-Man, the iPhone or iPad can be lost or stolen. While no one wants to lose a $500 device, property loss is a measurable and acceptable risk, one that money will fix. Data loss is a different matter. Therefore, these devices have a passcode that the user must provide to access them. To resist brute-force attacks, if the correct passcode is not provided within ten tries, the data will be erased.
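
What follows is a conceptual model of that policy, sketched in Swift. It is not Apple's implementation, and every name in it is invented for illustration.

    // A conceptual model of wipe-after-ten-failures, not Apple's code.
    struct PasscodeGuard {
        private let passcode: String
        private var failedAttempts = 0
        private let maximumAttempts = 10
        private(set) var dataErased = false

        init(passcode: String) { self.passcode = passcode }

        // Returns true on success. After the tenth consecutive failure
        // the data is erased, so brute force gets at most ten guesses.
        mutating func tryUnlock(with guess: String) -> Bool {
            guard !dataErased else { return false }
            if guess == passcode {
                failedAttempts = 0
                return true
            }
            failedAttempts += 1
            if failedAttempts >= maximumAttempts {
                dataErased = true   // stand-in for destroying the encryption key
            }
            return false
        }
    }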

Increasingly, iOS machines will be chosen by the enterprise. Apple provides features to transfer trust from the user to enterprise management. These include the ability to control which applications can be used, and to restrict the way some, e.g., browsers, are used, by means of encrypted configuration profiles. The enterprise may have requirements that the individual user does not. Therefore, Apple provides features like hardware encryption, remote and local wipe, encrypted backup, two-way SSL, crypto APIs, and VPNs.
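
As one concrete illustration, here is a minimal Swift sketch using the iOS data-protection option, which ties a file's encryption to the device passcode; the file name and contents are invented for illustration.

    import Foundation

    // Hardware-backed file encryption via iOS data protection.
    // `.completeFileProtection` asks that the file be readable
    // only while the device is unlocked.
    let payroll = Data("sensitive enterprise data".utf8)
    let url = URL(fileURLWithPath: NSTemporaryDirectory())
        .appendingPathComponent("payroll.dat")

    do {
        try payroll.write(to: url, options: [.completeFileProtection])
    } catch {
        print("protected write failed: \(error)")
    }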

I have three Windows systems. One is configured as a typical personal computer. It is used only by me, usually as an unprivileged user, so as to resist unintended changes. I use it for many sensitive applications. Therefore, in order to protect myself from leakage of sensitive data, I must hide it from the Internet using a hardware firewall, resist leakage using a software firewall, and resist contamination using antivirus and anti-spam software and services.

One of the other two is configured as a file server. It is not connected to the Internet. It has no applications. The last is configured as an Internet client. It has browsers but no other applications. It has some sensitive metadata used by the browsers but no other data. An unprivileged user cannot change any programs or store arbitrary data. Needless to say, I trust the latter two systems more than the former.

It is not by accident that Windows and Unix are our most popular operating systems. Popularity leads to low unit cost. They are open to an arbitrary number of users, device types, and applications, including legacy applications. This means that they have a mammoth attack surface. In order to trust them we must configure them in such a way as to limit that surface.

These systems share the von Neumann architecture. This means that, by default, processes include the capability to alter their own methods, procedures, and programs. Moreover, outputs are a function not only of inputs but also of the state of the system. This makes predicting, and thus demonstrating trust in, the system difficult. We do have other von Neumann systems that are more trustworthy, including versions of Unix, and alternative architectures like the AS/400.
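
A small Swift sketch of the second point, invented for illustration: the pure function's output can be predicted from its input alone, while the stateful object's output depends on hidden history as well.

    // Output as a function of input alone versus input plus state.
    func pure(_ x: Int) -> Int { x + 1 }   // predictable from the input

    final class Stateful {
        private var history = 0            // hidden state
        func step(_ x: Int) -> Int {
            history += x
            return history                 // output = f(input, state)
        }
    }

    let s = Stateful()
    print(pure(3), pure(3))       // 4 4: same input, same output
    print(s.step(3), s.step(3))   // 3 6: same input, different outputs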

We can have the trust in our systems and devices that the application and the environment require. We can tune trust. However, it is not free. It is achieved at the expense of openness, generality, flexibility, freedom, function, application, and programmability.

Programmability is the ultimate in flexibility. A few years ago I attended a presentation by Fred Cohen in which he pointed out that in a world of "application-only" computers, we would enjoy most, but not all, of the benefits of the general-purpose computer. After thinking about it for a while, I decided that even if we could completely get rid of it, programmability is so valuable that some SOB would just invent it all over again. On the other hand, as computers get smaller and cheaper, we will decide that we do not have to have it everywhere.

Of course, trust is never absolute. I teach at the Naval Postgraduate School. It would be fair to describe NPS as a "trusted system program." However, the course that I teach is specifically about managing a population of untrusted systems. This is the job that most of us really have. We are the ones responsible for recommending the level of trust required by our applications and threat environment and identifying the strategy and architecture to achieve it. That is why we are called professionals and are paid the big bucks.

