Sunday, February 22, 2015

On Trust

Steve Bellovin wrote:

I'm not looking for concrete answers right now. (Some of the work in secure multiparty computation suggests that we need not trust anything, if we're willing to accept a very significant performance penalty.) Rather, I want to know how to think about the problem. Other than the now-conceptual term TCB, which has been redefined as "that stuff we have to trust, even if we don't know what it is", we don't even have the right words. Is there still such a thing? If so, how do we define it when we no longer recognize the perimeter of even a single computer? If not, what should replace it? We can't make our systems Andromedan-proof if we don't know what we need to protect against them.

When I was seventeen I worked as a punched-card operator in the now defunct Jackson Brewing Company. I was absolutely fascinated by the fact that the job-to-job controls always balanced. I even commented on it at the family dinner table. My father responded, "Son, those machines are amazing. They are very accurate and reliable. But check on them." Little could either of us know that I would spend my adult life checking on machines.

At the time the work was being done on the TCB, Ken Thompson delivered his seminal Turing Award lecture, "Reflections on Trusting Trust," in which he asserted that unless one wrote it oneself in a trusted environment, one could not trust it.

Peter Capek and I wrote a response to Thompson in which we pointed out that in fact we do trust. That trust comes from transparency, accountability, affinity, independence, contention, competition, and other sources.

I recall having to make a call on Boeing in the seventies to explain to them that the floating-point divide on the 360/168 was "unchecked." They said, "You do not understand; we are using that computer to ensure that planes fly." I reminded them that the 727 tape drive was also unchecked: when you told it to write a record, it did the very best it could, but it did not know whether or not it had succeeded. The "compensating control" in the application was to back-space the tape, read the record just written, and compare it to what was intended. If one was concerned about a floating-point divide, the remedy was to check it oneself using a floating-point multiply.
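
That remedy amounts to computing the quotient and then multiplying it back to see whether one recovers the dividend. A minimal sketch of such a compensating check, in Python (the function name and tolerance are mine, for illustration only, not anything from those systems):

    import math

    def checked_divide(a, b, rel_tol=1e-12):
        # Do the divide, then "check on the machine": multiply the quotient
        # back by the divisor and compare it with the original dividend.
        q = a / b
        if not math.isclose(q * b, a, rel_tol=rel_tol):
            raise ArithmeticError("divide check failed")
        return q

The point is not the arithmetic but the pattern: the operation is followed by an independent check of its result.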

In the early fifties one checked on the bank by looking at one's monthly statement. Before my promotion to punched-card operator, I was the messenger. Part of my duties included taking the Brewery's passbook to the bank every day to compare it to the records of the bank. As recently as two years ago, I had to log on to the bank every day to ensure that there had been no unauthorized transactions to my account. Today, my bank confirms my balance to me daily by SMS and sends another SMS for each large transaction. American Express sends a "notification" to my iPhone for every charge to my account.

In 1976, for IBM, I published Data Security Controls and Procedures.  It included the following paragraph:

Compare Output with Input

Most techniques for detecting errors are methods of comparing output with the input that generated it. The most common example is proofreading or inspecting the context to indicate whether a character is correct. Another example, which worked well when input was entered into punched cards, is key verification, in which source data is key entered twice. Entries were mechanically compared keystroke by keystroke, and variances were flagged for later reconciliation.

Said another way, while we prefer preventative controls like checked operations and the TCB, ultimately, trust comes late from checking the results.
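
That comparison can be written down in a few lines. Here is a toy sketch of key verification in Python (the function name and the sample data are mine, purely for illustration):

    def key_verify(first_entry, second_entry):
        # Compare the two keyings character by character, as the verifier
        # machine did, and flag each position where they disagree.
        variances = [(i, a, b)
                     for i, (a, b) in enumerate(zip(first_entry, second_entry))
                     if a != b]
        if len(first_entry) != len(second_entry):
            variances.append(("length", len(first_entry), len(second_entry)))
        return variances  # an empty list means the two entries agree

    # Example: the second keying catches a zero mis-keyed as the letter O.
    print(key_verify("1045 BBL PILSNER", "1O45 BBL PILSNER"))  # [(1, '0', 'O')]

Neither keying is trusted by itself; trust comes from the comparison.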

I often think of the world in which today's toddlers will spend their adult lives, the world that we will leave to them. My sense is that they will have grown up in a world in which their toys talked and listened and generally told the truth, but that every now and then one must check with dad.
