Sunday, February 22, 2015

On Trust

Steve Bellovin wrote:

I'm not looking for concrete answers right now. (Some of the work in secure multiparty computation suggests that we need not trust anything, if we're willing to accept a very significant performance penalty.) Rather, I want to know how to think about the problem. Other than the now-conceptual term TCB, which has been redefined as "that stuff we have to trust, even if we don't know what it is", we don't even have the right words. Is there still such a thing? If so, how do we define it when we no longer recognize the perimeter of even a single computer? If not, what should replace it? We can't make our systems Andromedan-proof if we don't know what we need to protect against them.
When I was seventeen I worked as a punched card operator in the now defunct Jackson Brewing Company.  I was absolutely fascinated by the fact that the job-to-job controls always balanced.  I even commented on it at the family dinner table.  My father responded, "Son, those machines are amazing. They are very accurate and reliable.  But check on them."  Little could either of us know that I would spend my adult life checking on machines.

At the time when the work was being done on the TCB, Ken Thompson gave his seminal Turing Award lecture, in which he asserted that unless one wrote it oneself in a trusted environment, one could not trust it.

Peter Capek and I wrote a response to Thompson in which we pointed out that in fact we do trust. That trust comes from transparency, accountability, affinity, independence, contention, competition, and other sources.

I recall having to make a call on Boeing in the seventies to explain to them that the floating point divide on the 360/168 was "unchecked." They said, "You do not understand; we are using that computer to ensure that planes fly."  I reminded them that the 727 tape drive was unchecked, that when you told it to write a record, it did the very best it could but it did not know whether or not it had succeeded.  The "compensating control" in the application was to back-space the tape, read the record just written and compare it to what was intended.  If one was concerned about a floating point divide, the remedy was to check it oneself using a floating point multiply.
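To make the pattern concrete, here is a minimal sketch in Python, purely my own illustration (the function names and the tolerance are assumptions): verify a divide by multiplying the quotient back, and verify a write by reading the record back and comparing it to what was intended.

    import math

    def checked_divide(dividend, divisor, rel_tol=1e-12):
        """Divide, then verify the quotient by multiplying it back against the dividend."""
        quotient = dividend / divisor
        if not math.isclose(quotient * divisor, dividend, rel_tol=rel_tol):
            raise ArithmeticError(f"divide did not verify: {dividend} / {divisor}")
        return quotient

    def checked_write(tape, record):
        """Write a record, then 'back-space,' re-read it, and compare it to what was intended."""
        tape.append(record)
        if tape[-1] != record:
            raise IOError(f"write did not verify: {record!r}")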

In the early fifties one checked on the bank by looking at one's monthly statement.  Before my promotion to punch-card operator, I was the messenger.  Part of my duties included taking the Brewery's pass book to the bank every day to compare it to the records of the bank.  As recently as two years ago, I had to log on to the bank every day to ensure that there had been no unauthorized transactions  to my account.  Today, my bank confirms my balance to me daily by SMS and sends another SMS for each large transaction.  American Express sends a "notification" to my iPhone for every charge to my account.

In 1976, for IBM, I published Data Security Controls and Procedures.  It included the following paragraph:

Compare Output with Input
Most techniques for detecting errors are methods of comparing output with the input that generated it. The most common example is proofreading, or inspecting the context to indicate whether a character is correct. Another example, which worked well when input was entered into punched cards, is key verification, in which source data is key entered twice. Entries were mechanically compared keystroke by keystroke, and variances were flagged for later reconciliation.
Said another way, while we prefer preventative controls like checked operations and the TCB, ultimately trust comes, late, from checking the results.
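Key verification is, at bottom, nothing more than a character-by-character comparison of the two keyings, with the variances flagged for reconciliation. A small illustrative sketch in Python (mine, not from the 1976 text):

    def key_verify(first_keying, second_keying):
        """Compare two keyings of the same source data, keystroke by keystroke.
        Returns the positions that disagree, flagged for later reconciliation."""
        variances = [(pos, a, b)
                     for pos, (a, b) in enumerate(zip(first_keying, second_keying))
                     if a != b]
        if len(first_keying) != len(second_keying):
            variances.append(("length", len(first_keying), len(second_keying)))
        return variances

    # The second operator mis-keyed one column: the letter O keyed as a zero.
    print(key_verify("1957 PAYROLL 0042", "1957 PAYR0LL 0042"))   # [(9, 'O', '0')]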

I often think of the world in which today's toddlers will spend their adult lives, the world that we will leave to them. My sense is that they will have grown up in a world in which their toys talked and listened and generally told the truth, but that every now and then one must check with dad.

Friday, February 20, 2015

Fraud Alerts

Recently Bank Info Security raised the question of whether fraud alerts can be used to garner customer loyalty.  I suggest that this is the wrong question.

In a world in which merchant, bank, and insurance systems are routinely breached by nation states and rogue hackers and in which hundreds of millions of credit card numbers, PINs, social security numbers, e-mail addresses, and dates of birth are freely traded for pennies in both white and black markets, it is hardly a question of "fraud alerts and customer loyalty."

I prefer to do business via proxies like PayPal, Amazon, and Apple Pay, that hide my credit card and bank credentials from the merchant.  However, I use my American Express card exclusively because all transactions to my AmEx account are communicated to me in real-time via the American Express app on my iPhone. Both AmEx and I understand that this is essential to our mutual security. It is not a mere convenience or customer loyalty gimmick.

Kenneth Chenault, CEO of AmEx, speaking at the President's "Cyber Security" summit, urged that the regulation forbidding the use of SMS for this purpose be relaxed.  This regulation, which was intended to discourage nuisance messages, is in fact impeding a necessary use.

Anthem, the victim of the world's largest breach, has offered to pay for fraud protection services for some of its customers, on an opt-in basis. eBay, the victim of the second largest breach, has not even done that. I think we need a law that requires all banks and credit bureaus to provide automatic notice of all activity on their subjects' accounts, on an opt-out basis.  While I am willing to pay for such a service, it really ought to be a cost to those who trade in data about me.

Rogue hackers, data brokers, and the intelligence agencies have all but destroyed the trust on which our commerce is based. Reliance upon periodic statements and late detection of fraud is no longer adequate. "Fraud alerts" are not a marketing feature.  In order to restore some order to our markets, "activity notices" need to become standard.

Saturday, February 7, 2015

Crypto Wars Redux

This morning, while researching another question, I found the following from Aaron Schuman to alt.security, quoting a post to the Risks Forum from me.  While written a quarter of a century ago, it might have been written this morning.
From: schuman@sgi.com (Aaron Schuman)
Newsgroups: alt.security
Subject: Congress to order crypto trapdoor?
Message-ID: <1991Apr11.231215.19779@dragon.wpd.sgi.com>
Date: 11 Apr 91 23:12:15 GMT 
The United States Senate is considering a bill that would require
manufacturers of cryptographic equipment to introduce a trap door,
and to make that trap door accessible to law enforcement officials.
If you feel, as I do, that the risk of abuse far outweighs the
potential benefits, please write to Senators Joseph Biden and Dennis
DeConcini, and to the Senators that represent your state, asking that
they propose a friendly amendment to their bill removing this
requirement.

I don't have exact addresses for Senators Biden and DeConcini, and
I hope someone will post them here, but the Washington DC post office
can deliver letters addressed to
Senator Joseph Biden
United States Senate
Washington, DC 20510

Senator Dennis DeConcini
United States Senate
Washington, DC 20510

------------------------------
RISKS-LIST: RISKS-FORUM Digest  Wednesday 10 April 1991  Volume 11 : Issue 43
Date:  Wed, 10 Apr 91 17:23 EDT
From: WHMurray@DOCKMASTER.NCSC.MIL
Subject:  U.S. Senate 266, Section 2201 (cryptographics)
Senate 266 introduced by Mr. Biden (for himself and Mr. DeConcini)
contains the following section:
SEC. 2201. COOPERATION OF TELECOMMUNICATIONS PROVIDERS WITH LAW ENFORCEMENT
It is the sense of Congress that providers of electronic communications
services and manufacturers of electronic communications service equipment shall
ensure that communications systems permit the government to obtain the plain
text contents of voice, data, and other communications when appropriately
authorized by law.
------------------------------
The referenced language requires that manufacturers build trap-doors
into all cryptographic equipment and that providers of confidential
channels reserve to themselves, their agents, and assigns the ability to
read all traffic. 

Are there readers of this list that believe that it is possible for
manufacturers of crypto gear to include such a mechanism and also to reserve
its use to those "appropriately authorized by law" to employ it?
Are there readers of this list who believe that providers of electronic
communications services can reserve to themselves the ability to read all the
traffic and still keep the traffic "confidential" in any meaningful sense?
Is there anybody out there who would buy crypto gear or confidential services
from vendors who were subject to such a law? 
David Kahn asserts that the sovereign always attempts to reserve the use of
cryptography to himself.  Nonetheless, if this language were to be enacted into
law, it would represent a major departure.  An earlier Senate went to great
pains to assure itself that there were no trapdoors in the DES. Mr. Biden and
Mr. DeConcini want to mandate them.  The historical justification of such
reservation has been "national security;" just when that justification begins
to wane, Mr. Biden wants to use "law enforcement."  Both justifications rest
upon appeals to fear. 
In the United States the people, not the Congress, are sovereign; it should not
be illegal for the people to have access to communications that the government
cannot read.  We should be free from unreasonable search and seizure; we should
be free from self-incrimination.  The government already has powerful tools of
investigation at its disposal; it has demonstrated precious little restraint in
their use. 
Any assertion that all use of any such trap-doors would be only
"when appropriately authorized by law" is absurd on its face.  It is not
humanly possible to construct a mechanism that could meet that
requirement;  any such mechanism would be subject to abuse.
I suggest that you begin to stock up on crypto gear while you can still get it.
Watch the progress of this law carefully.  Begin to identify vendors across the
pond. 
William Hugh Murray, Executive Consultant, Information System Security 21
Locust Avenue, Suite 2D, New Canaan, Connecticut 06840       203 966 4769

We fought this battle once and thought that we won the war.  

My Little Mark

One of my conscious life goals is to "leave my little mark on culture."  I do this mostly through my work.  I often tell my audiences that they are my slate.  This blog is part of my mark and the Internet a place to leave it. I do, record, and distribute much of my work using e-mail.

This morning I was listening to Walter Isaacson, journalist, historian, and cultural commentator, on C-SPAN2's BookTV.  He was bemoaning the disappearance of letter writing and the loss to the historian of this important source.  He noted that most of us now use e-mail for what we used to do with letters but that e-mail is ephemeral, of limited use to the historian.

I wanted to test this assertion so I did a Google search on whmurray@dockmaster.mil, the first public e-mail address that I ever used.  Now this was before the world wide web and long before Google, but sure enough, Google found many messages, not all in the same place. After listing 66 messages, Google said "In order to show you the most relevant results, we have omitted some entries very similar to the 66 already displayed." Few of the items returned seem to point to the origin or destination systems but rather to quotes or citations.  So, while there are more messages than those returned, it is unlikely that all messages from the era, or even most of those with historical interest, survive.

Dockmaster may be a special case, one of historical interest.  It was a domain hosted on a Multics system by the National Computer Security Center, a part of the National Security Agency.  It was used by most of the computer security thought leaders of the era and hosted many productive discussions on the topic. Indeed, it was an example of many of the best ideas on the subject.

Isaacson may be right and we may have lost much of the e-mail. The e-mail I found may be exceptional. Perhaps the content of my message was exceptional; perhaps it was even curated.  I found one message on anus.com, The American Nihilist Underground Society.  (More on this message later.)  However, as storage continues to become cheaper and denser, the potential for e-mail to survive increases.  Thanks to Google, Bing, et al., we will be able to sift the tiny number of messages with historical interest from the remainder.

Most of us are not aware of the significance of what we are saying or doing at the time we say or do it. It is only with the passage of time that the significance becomes apparent.  The Internet in general, and e-mail in particular, amplify our writings.  They have the potential to filter out and preserve that which is important to history.  However, the recording and reporting of history are, of their essence, imperfect.  History will note and report the impact of paper mail yielding to e-mail.

Isaacson did not comment on blogs, another important source for historians, replacing diaries and journals.  Blogs too may prove to be ephemeral but more will be written, some will survive, and historians will be able to find those that do.

I am satisfied that electronic media contribute to "My Little Mark."

Friday, January 9, 2015

Darkness in the City of Light

Last night the Tour Eiffel went dark.

Once more the forces of darkness have donned their black clothing and armor and struck out, not quite blindly.  This time their target was freedom, freedom of speech, a freedom associating France with America.

Je suis Charlie!

Monday, November 24, 2014

Formal Risk Acceptance


The Security Executive's Ultimate Tool

I recently met with seventy-five chief information security officers.  I was reminded that they are staff, not line, executives.  Their authority is limited. They do not own the assets to be protected nor do they have the authority and discretion to allocate resources to that protection.  While they can propose standards and guidelines, they usually do  not have the authority to mandate or enforce them. They can neither reward nor punish.

The real work of protecting assets rests with the managers who are responsible for those assets, for allocating them, for prescribing how and by whom they may be used.  As much as we might wish that it were otherwise, the responsibility for protecting assets cannot be separated from the discretion to use them, from "ownership."

Yet when things inevitably go wrong, when systems are breached, data leaks, or applications fraudulently used, it is likely that the staff executive will be held accountable, not to say lose his job.  There are a number of tools available to the staff executive including persuasion, awareness training, standards, guidelines, measurement, and reporting.  Another, and the subject of this blog, is formal risk acceptance.  It is the staff security executive's measure of last resort.

There are three things that management can do with risk.  They can mitigate it, accept it, or assign it to others through insurance.  Unfortunately, risk acceptance is often "seat of the pants" and without accountability.

Formal risk acceptance is a process in which the risk is documented by staff, usually security staff, and accepted by line management.  The expression of the risk may refer to policy, standards, guidelines, or other expressions of good practice.

Documentation of risk will usually involve some negotiation so that the accepting manager understands the real risk, the description or expression of it, and the alternatives to accepting it. Therefore, this negotiation may involve some reallocation between mitigation and acceptance.  As these negotiations proceed, the manager's understanding of the risk and his options will improve and may result in choices that were not apparent when the negotiation began. The document should also describe and price all alternatives to acceptance that were considered. Note that sometimes a risk is accepted in part because it is believed that it is cheaper to mitigate it late than early.

The manager who accepts the risk must have the authority, discretion, and resources to mitigate the risk if he chooses to do so.  This test is necessary to ensure that the risk is accepted by the right manager or executive.  Said another way, risk should be accepted by a manager or executive who could implement one of the alternatives if he or she preferred.  It should not be accepted as a forced choice.

Risk acceptance decisions have to be revisited periodically.  Therefore, they are finite; they expire.  Often, the risk acceptance is part of a plan to tolerate the risk for a fixed period of time but mitigate it before a time certain in the future, for example in ninety days.  In such cases, the planned date for the mitigation becomes the expiration date.  Where there is no plan, the acceptance should expire after a term set by policy, usually one year.  This ensures that the decision will be reviewed periodically.  Managers should understand that risk acceptance is not the same thing as dismissing or ignoring the risk.

Finally, risk acceptances should expire with the authority of the accepting manager or executive. When a manager's tenure ends, for whatever reason, all risks accepted by that manager must be revisited and re-accepted.  This will usually be by the manager's successor.  However, in the case of reorganization the risk acceptances may be distributed across multiple other managers.

Staff should keep track of all outstanding risk acceptances, ensure that they are revisited on time, measure whether in the aggregate they are increasing or decreasing, and report on them to higher management.
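By way of illustration only, such a register need not be elaborate. Here is a hypothetical sketch in Python; the field names and the one-year default term are my assumptions, not a prescribed format:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class RiskAcceptance:
        description: str               # the risk, expressed against policy, standards, or guidelines
        accepted_by: str               # the line manager with the authority and resources to mitigate
        alternatives_considered: list  # each alternative to acceptance, with its estimated cost
        accepted_on: date
        expires_on: date               # the planned mitigation date, or the policy default term

    def default_expiry(accepted_on, term_days=365):
        """Where there is no mitigation plan, expire after the term set by policy (one year here)."""
        return accepted_on + timedelta(days=term_days)

    def due_for_review(register, today=None):
        """Acceptances at or past expiration; each must be revisited and re-accepted or mitigated."""
        today = today or date.today()
        return [ra for ra in register if ra.expires_on <= today]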

While, as a matter of fact and by default, a manager does accept any risk which he fails to mitigate or assign, some may be reluctant to document the fact.  In such cases, the staff should escalate.  In any case, the risk must be documented and shared with higher management.

Special attention should be given to audit findings.  While some of these may result from oversight, some may result from decisions taken but not documented.  Note that auditors are rarely in a position to assess the risk associated with their findings.  Therefore, risk assessments should be documented for all findings and used in the planning process to decide what to do about them.  Risk acceptances must be documented for any findings that will not be mitigated before the next audit.  Auditors may want to attend to whether the cumulative risk is going up or down.

Monday, September 15, 2014

Q & A About Apple Pay

"Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment."


In that context, what can one say about the security of Apple Pay?

We can say with confidence that Apple Pay is more secure than the alternative widely used payment mechanisms such as cash, mag-stripe cards, or contactless (RFID) debit or credit cards.  Its security is comparable to that of EMV ("chip") cards.


What is necessary to use Apple Pay?

One must have one or more credit card or other bank accounts to charge.  (By default, Apple Pay will use the account registered with the Apple Store.)  One must have the use of an iPhone 6 or iPhone 6 Plus and Touch ID.  Finally, the merchant must have point-of-sale devices with contactless readers.  These readers work with both contactless (RFID) credit cards and mobile computers using Near Field Communication (NFC).


If one loses one's iPhone, can a finder use Apple Pay?

No.  Both possession of the iPhone and the right fingerprint are necessary to use Apple Pay.  Similarly, someone with merely a copy of your fingerprint cannot use it.  Of course, one would still want to remotely disable the iPhone.


If my password is disclosed, I can change it, but I cannot change my fingerprint.

True, but there is no need.  Passwords work only because they are secret.  Fingerprints work because they are difficult to counterfeit; there is no need for secrecy.  In fact, one leaves copies of one's fingerprints all around in the normal course of things.


One can do Apple Pay with Apple Watch.  Does it have a fingerprint reader?  

No. The Apple Watch uses a different but equally effective authentication scheme.  After one puts the Watch on, one enters a four-digit personal identification number (PIN).  This lasts until the sensors on the watch indicate that the watch has been taken off.  Both of these authentication schemes are examples of Two-factor Authentication: iPhone and Touch ID, Watch and PIN.  When used with the Secure Element and a one-time digital token to resist replay, Apple Pay provides Strong Authentication.


What is "NFC?"  

NFC is a low power, low speed, extremely short range digital radio capability.  Its applications include retail payments.  Apple Pay uses NFC to communicate with the register or point-of-sale device.  While NFC is only one alternative communication method, payment systems that use it may be identified as "NFC" systems.  


Is NFC secure?

NFC makes no security claims.  All required security must be built into the application.  While it is low power and short range, NFC includes no other security properties, functions or features.  Apple Pay does not rely upon NFC for security.  The token value that Apple Pay uses NFC to send to the point of sale is a one-time value.  Unlike a credit card number, it is not vulnerable to disclosure or reuse.  
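Apple has not published the internals, so what follows is only a conceptual sketch of how a one-time value resists replay, not a description of Apple Pay's actual protocol; the HMAC construction, the transaction counter, and the names are my assumptions. The point is simply that the value presented at the point of sale is useless if captured and presented a second time.

    import hmac, hashlib

    def one_time_cryptogram(device_key, token_pan, counter, amount_cents):
        """Derive a value unique to this transaction from a key that never leaves the device."""
        message = f"{token_pan}|{counter}|{amount_cents}".encode()
        return hmac.new(device_key, message, hashlib.sha256).hexdigest()

    def issuer_accepts(device_key, token_pan, expected_counter, amount_cents, presented):
        """The issuer recomputes with the counter it expects; a replayed cryptogram no longer matches."""
        expected = one_time_cryptogram(device_key, token_pan, expected_counter, amount_cents)
        return hmac.compare_digest(expected, presented)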


How do I know how much I am being charged?

As with credit card transactions, the amount that you will be charged is displayed on the register. As with credit card transactions, you may be asked to "accept" or confirm the amount to the register.  As with credit card transactions, the register will provide you with a paper receipt.


How do I know that the amount that appears on the register, that I confirm, and that is printed on the receipt is what is actually charged to my account?

By benign design and intent, systems will automatically ensure that the displayed amount and the charged amount are the same.  One can imagine a system designed to cheat, but such systems will be very rare, easily detected, and quickly shut down.  To some degree, this will depend on you.

As with credit cards and checks, some of you will want to reconcile the charges to your account (perhaps using the paper receipt) against what you authorized.  (Some other "wallet" programs immediately confirm the location and amount to the mobile device by SMS.  It remains to be seen whether Apple Pay will do this, but it is likely.)  (Your bank or account holder may also offer transaction confirmation features.  For example, American Express offers its customers the option to have "card not present" transactions confirmed in real time by e-mail.  Incidentally, Apple Pay transactions appear to American Express as "card not present.")


What if the charges to my account are not the same as I authorized?

Errors or fraud will be rare but you will continue to enjoy the same right to dispute charges that you have always had.


Lest you think that these questions are trivial, I heard each of them raised seriously by serious people on TV this week.