Wednesday, July 19, 2017

Open Letter to my Congressman


In my forty years in information security I have come to have many colleagues in the intelligence community.  I find them to be brilliant and noble.  I have also found them to be myopic, artful, and zealous.  I have watched their testimony before both the House and Senate judiciary committees.  While I have been impressed by their testimony, I have been less impressed by the questioning.   The testimony has been carefully rehearsed and very consistent.  Where the questioning has not been sympathetic, it has been inept.  Even those legislators who recognize that the testimony is misleading are prevented by secrecy and decorum from asking the questions that might really inform the citizens or even saying so when a witness lies under oath. 

Here is a short list of questions that I would like the administration to answer under oath.

  • Does GCHQ target American citizens on behalf of the US government?  What did we get for our $152M? 
  • Does the NSA target citizens of the United Kingdom?  Does it do so on behalf of the UK government? 
  • What programs, besides the collection of all telephone call records, does the NSA operate under USA Patriot Act, Section 215?  What programs, other than PRISM, does it operate under the FISA, Section 702?  Are we going to be surprised by more revelations?   
  • NSA has admitted that a query to the call records database implicates not only those connected directly to the "seed" number but all those associated with it to "three hops."  What is the largest number of phone numbers implicated by any single query?  How many subscribers have been implicated by the hundreds of queries made since the inception of the program?  Is it possible that there is any American citizen who has not been swept up in this huge dragnet?
  • Given the density of modern digital storage, e.g., a terabyte in a shirt pocket for $100, what is NSA storing that requires 24 acres of floor space in Utah?  
  • What percentage of the e-mail that crosses our borders does NSA collect?  Store?  Analyze?  Disseminate to other agencies of government?  
  • Given the demonstrations by Edward Snowden and Bradley Manning as to the breadth and depth of their access, how can we rely upon the assurances of NSA  that they can protect us from abuse of the information they collect?  Doesn't the mere collection of all this information invite, not to say guarantee, abuse?
  • Doesn't the mammoth budget ($75B in 2012?) of NSA justify the conclusion that NSA operates on the premise that "Because we can, we must," and without any regard for efficiency?   Are they not spending far more than doing nothing would cost?
  • Does not the Bush "Warrantless Surveillance Program" demonstrate that citizens cannot rely upon bureaucrats and spies to protect us from over-zealous, not to say rogue, politicians?  Are we building capabilities now that will empower politicians of the future? 
  • Does the NSA require a warrant before they target US citizens on behalf of the FBI?  Secret Service? DEA?  MI5?  MI6?  
  • Does the NSA protect American citizens from surveillance by their peers and colleagues in other nations?  
  • Is information passed to the FBI by NSA ever, usually, sufficient for the issuance of a wiretap warrant?  A National Security Letter?  
  • Do the intelligence agencies selectively share intelligence with legislators in order to curry support?
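For a sense of scale on the "three hops" question, here is a back-of-the-envelope sketch.  The figure of one hundred contacts per phone number is my own assumption for illustration, not anything drawn from testimony.

```python
# Illustrative arithmetic for "three hops" contact chaining.
# contacts_per_number is an assumed average, for illustration only.

def implicated(contacts_per_number: int, hops: int) -> int:
    """Upper bound on the numbers swept in by one seed query."""
    total = 0
    frontier = 1  # start from the single "seed" number
    for _ in range(hops):
        frontier *= contacts_per_number  # numbers reached at this hop
        total += frontier
    return total

# 100 + 10,000 + 1,000,000 = 1,010,100 numbers from a single query
print(implicated(100, 3))
```

At a hundred contacts per number, one query can implicate over a million subscribers; hundreds of queries quickly approach the entire subscriber population.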

Wednesday, July 5, 2017

The Coming Quantum Computing Crypto Apocalypse

Modern media, both fact and fiction, loves the Apocalypse and the Dystopian future.  The Quantum Apocalypse is just one example but one close to the subject of this blog.  It posits that the coming revolution called quantum computing will obsolete modern encryption and destroy modern commerce as we have come to know it.  It was the hook for the 1992 movie Sneakers starring Robert Redford, Sidney Poitier, Ben Kingsley, and River Phoenix.

This entry will tell the security professional some useful things about the application of Quantum Mechanics to information technology in general, and Cryptography in particular, that will help equip him for, and enlist him in, the effort to ensure that commerce, and our society that depends upon it, survive.  Keep in mind that the author is not a Physicist or even a cryptographer.  Rather he is an octogenarian, a computer security professional, and an observer of and commentator on the experience that we call modern Cryptography beginning with the Data Encryption Standard.

For a description of Quantum Computing I refer you to Wikipedia.  For our purpose here it suffices to say that it is very fast at solving certain classes of otherwise difficult problems.  One of these problems is finding the factors of the product of two large prime numbers.  In the RSA crypto system, this is the problem one must solve to recover the message knowing the cryptogram and the public key, or to recover the private key knowing the message, the cryptogram, and the public key.
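A toy sketch can make the dependence concrete.  The tiny primes below stand in for real key sizes of hundreds of digits, and trial division stands in for a quantum factoring machine; this is an illustration, not real cryptography.

```python
# Toy RSA: whoever can factor the public modulus n recovers the private key.

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

p, q = 61, 53                      # secret primes
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = modinv(e, (p - 1) * (q - 1))   # private exponent

msg = 65
cipher = pow(msg, e, n)            # encrypt with the public key
assert pow(cipher, d, n) == msg    # decrypt with the private key

# An attacker who can factor n recovers d, and with it every message.
# Trial division works only because n is tiny; a quantum computer
# running Shor's algorithm is the threat at real key sizes.
fp = next(i for i in range(2, n) if n % i == 0)
fq = n // fp
d_recovered = modinv(e, (fp - 1) * (fq - 1))
assert pow(cipher, d_recovered, n) == msg
```

The security of the system rests entirely on the gap between the cost of multiplying the primes and the cost of recovering them.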

This vulnerable algorithm is the one that we rely upon for symmetric key exchange in our infrastructure.  In fact, because it is so computationally intensive, that is the only thing we use it for.

In theory, using quantum computing, one might find the factors almost as fast as one could find the product, while the cryptographic cover time of the system relies upon the fact that the former takes much longer than the latter.  Cryptographers would certainly say that, by definition, at least in theory, the system would be "broken."  However, the security professional would ask about the relative cost of the two operations.  While finding the product can be done by any cheap computer, finding the factors can be done quickly only by much rarer and more expensive "quantum" computers.

Cryptanalysis is one of the applications that has always supported cutting edge computing. One of the "Secrets of ULTRA" was that we invented modern computing in part to break the Enigma system employed by Germany.  ULTRA was incredibly expensive for all that.  While automation made ULTRA effective, it was German key management practices that made it efficient.    On the other hand, the modern computer made commercial and personal cryptography both necessary and cheap.

One can be certain that NSA is supporting QC research and will be using one of the first practical implementations for cryptanalysis.  They will be doing it literally before you know it and exclusively for months to years after that.

Since ULTRA, prudent users of cryptography have assumed that, at some cost, nation states (particularly the "Five Eyes," Russia, China, France, and Israel) can read any message that they wish. However, in part because the cost of reading one message includes the cost of not reading others, they cannot read every message that they wish.

The problem is not that Quantum Computing breaks Cryptography, per se, but that it breaks one system on which we rely.  It is not that we do not have QC resistant crypto but that replacing what we are using with it will take both time and money.  The faster we want to do it, the more expensive it will be.  Efficiency demands that we take our time; effectiveness requires that we not be late.

By some estimates we may be as much as 10 years away from an RSA break but then again, we might be surprised.  One strategy to avoid the consequences of surprise is called "crypto agility."  It implies using cryptography in such a way that we can change the way we do it in order to adapt to changes in the threat environment.

For example, there are key exchange strategies that are not vulnerable to QC.  One such has already been described by the Internet Engineering Task Force (IETF).  It requires a little more data and cycles than RSA but this is more than compensated for by the falling cost of computing.  It has the added advantage that it can be introduced in a non-disruptive manner, beginning with the most sensitive applications.
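Crypto agility can be as simple as naming the key-exchange algorithm in a negotiable parameter rather than hard-coding it, so that a quantum-resistant scheme can be phased in application by application.  A minimal sketch follows; the algorithm names and one-line "implementations" are hypothetical placeholders, not real schemes.

```python
# Sketch of crypto agility: callers express a preference list of algorithm
# names; policy, not code, decides which key exchange is used.

from typing import Callable, Dict, List

KEY_EXCHANGES: Dict[str, Callable[[], bytes]] = {}

def register(name: str):
    def wrap(fn: Callable[[], bytes]):
        KEY_EXCHANGES[name] = fn
        return fn
    return wrap

@register("rsa-2048")       # legacy, quantum-vulnerable (placeholder)
def rsa_exchange() -> bytes:
    return b"session-key-from-rsa"

@register("pq-hybrid")      # stand-in for a QC-resistant scheme (placeholder)
def pq_exchange() -> bytes:
    return b"session-key-from-pq"

def open_session(preferred: List[str]) -> bytes:
    """Use the first supported algorithm in the caller's preference order."""
    for name in preferred:
        if name in KEY_EXCHANGES:
            return KEY_EXCHANGES[name]()
    raise ValueError("no common key-exchange algorithm")

# The most sensitive applications move the resistant scheme to the front
# of the list first; everyone else follows as budgets allow.
key = open_session(["pq-hybrid", "rsa-2048"])
```

The point of the pattern is that when the threat environment changes, one changes a preference list, not every application.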

History informs us that cryptography does not fail catastrophically and that while advances in computing benefit the wholesale cryptanalyst, e.g., nation states, before the commercial cryptographer, in the long run they benefit the cryptographer orders of magnitude more than the cryptanalyst.  In short, there will be a lot of work but no "Quantum Apocalypse."  Watch this space.

Wednesday, May 3, 2017

The Next Great Northeastern Blackout

It has already been more than thirteen years since the last great northeastern blackout.  The mean time between such blackouts is roughly twenty years.  

Blackouts are caused by a number of simultaneous component failures that overwhelm the ability of the system to cope.  While the system copes with most component failures so well that the consumer never even notices, there is an upper bound to the number of concurrent failures that it can tolerate. In the face of these failures the system automatically does an orderly protective shutdown that assures its ability to restart within tens of hours to days.

However, such surprising shutdowns are experienced by the community as a "failure."  One result is finger pointing, blaming, and shaming.  Rather than being seen as a normal and predictable occurrence with a proper and timely response, the Blackout is seen as a "failure" that should have been "prevented."  

These outages are less disruptive than an ice storm.  However, even though they are as natural and as inevitable as weather-related outages, they are not perceived that way.  The public and their political representatives see weather-related outages as unavoidable but inevitable technology failures as one hundred percent preventable.

Security people understand that perfect prevention has infinite cost.  As we increase the mean time between outages, we stop, at about twenty years, far short of infinity.  This is in part because the cost of the next increment exceeds the value and in part because we reach a natural limit. We increase the resistance to failure by adding redundancy and automation. However, we do this at the cost of ever increasing complexity.  There is a point, at about twenty years MTBF, at which increased complexity causes more failures than it prevents.
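The diminishing-returns argument can be sketched numerically.  Assuming independent component failures (a simplification) and invented failure rates, the sketch shows that if each increment of redundancy and automation slightly raises per-component failure odds, the odds of overwhelming the system's tolerance eventually rise rather than fall.

```python
# Probability that more than k of n components fail at once, assuming
# independent failures with per-component probability p. The rates below
# are invented for illustration.

from math import comb

def p_system_fail(n: int, k: int, p: float) -> float:
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1, n + 1))

# Suppose the system rides through 2 concurrent failures, but each round
# of added redundancy/automation nudges per-component failure odds up.
for n, p in [(5, 0.010), (10, 0.012), (20, 0.015)]:
    print(n, p_system_fail(n, 2, p))
```

With the invented numbers, the twenty-component system fails more often than the five-component one: the complexity added to prevent failures becomes their dominant source.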

Much of the inconvenience to the public is a function of surprise.  Since they have come to expect prevention, they are not prepared for outages.  The message should be that we are closer to the next Blackout than to the last.  If you are not surprised and are prepared, you will minimize your own cost and inconvenience.

Sunday, March 26, 2017

Internet Vulnerability

On March 24, 2017 Gregory Michaelidis wrote in Slate on "Why America’s Current Approach to Cybersecurity Is So Dangerous."

He cited an article by Bruce Schneier.

In response, I observed to a number of colleagues, proteges, and students that "One takeaway from this article and the Schneier article that it points to is that we need to reduce our attack surface.  Dramatically.  Perhaps ninety percent.  Think least privilege access at all layers to include application white-listing, safe defaults, end-to-end application layer encryption, and strong authentication."

One colleague responded "I think one reason the cyber attack surface is so large is that the global intel agencies hoard vulnerabilities and exploits..."  Since secret "vulnerabilities and exploits" account for so little of our attack surface, I fear that he missed my point.

While it is true that intelligence agencies enjoy the benefits of our vulnerable systems and are little motivated to reduce the attack surface, the "hoarded vulnerabilities and exploits" are not the attack surface and the intel agencies are not the cause.  

The cause is the IT culture. There is a broad market preference for open networks, systems, and applications. TCP/IP drove the more secure SNA/SDLC from the field. The market prefers Windows and Linux to OS X, Android to iOS, IBM 360 to System 38, MVS to FS, MS-DOS to OS/2, Z Systems to iSeries, Flash to HTML5, von Neumann architecture [Wintel systems] to almost anything else.  

One can get a degree in Computer Science, even in Cyber Security, without ever even hearing about a more secure alternative architecture to von Neumann's [e.g., the IBM iSeries: closed, finite state architecture (operations can take the system only from one valid state to another), a limited set of strongly-typed objects (e.g., data cannot be executed, programs cannot be modified), single level store, symbolic-only addressing, etc.].

We prefer to try to stop leakage at the end user device or the perimeter rather than administer access control at the database or file system. We persist in using replayable passwords in preference to strong authentication, even though they are implicated in almost every breach. We terminate encryption on the OS, or even the perimeter, rather than the application. We deploy user programmable systems where application-only systems would do.  We enable escape mechanisms and run scripts and macros by default.

We have too many overly privileged users with almost no multi-party controls. We discourage shared UIDs and passwords for end users but default to them for the most privileged users, where we most need accountability. We store our most sensitive information in the clear, as file system objects, on the desktop, rather than encrypted, in document management systems, on servers. We keep our most sensitive data and mission critical apps on the same systems where we run our most vulnerable applications, browsing and e-mail. We talk about defense in depth but operate our enterprise networks flat, any to any connectivity and trust, not structured, not architected. It takes us weeks to months just to detect breaches and more time to fix them.  

I can go on and I am sure you can add examples of your own. Not only is the intelligence community not responsible for this practice, they are guilty of it themselves. It was this practice, not secret vulnerabilities, that was exploited by Snowden. It is this culture, not "hoarded vulnerabilities and exploits," that is implicated in the breaches of the past few years. It defies reason that one person acting alone could collect the data that Snowden did without being detected.  

Nation states do what they do; their targets of choice will yield to their overwhelming force. However, we need not make it so easy. We might not be able to resist dragons but we are yielding to bears and brigands. I admit that the culture is defensive and resistant to change but it will not be changed by blaming the other guy. "We have met the enemy and he is us."

Wednesday, January 4, 2017

All "Things" are not the Same

My mentor, Robert H. Courtney, Jr.  was one of the great original thinkers in security.  He taught me a number of useful concepts some of which I have codified and call "Courtney's Laws."  At key inflection points in information technology I find it useful to take them out and consider the problems of the day in their light.  The emergence of what has been called the Internet of Things (IoT) is such an occasion. 

Courtney's First Law cautioned us that "Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment."  This law can be usefully applied to the difficult, not to say intractable, problem of the Internet of things (IoT).  All "things" are not the same and, therefore do not have the same security requirements or solutions.

What Courtney does not address is what we mean by "security."  The security that most seem to think about in this context is resistance to interference with the intended function of the "thing" or appliance.  The examples du jour include interference with the operation of an automobile or a medical device.  However, a greater risk is that the general purpose computer function in the device will be subverted and used for denial of service attacks or brute force attacks against passwords or cryptographic keys.

Key to Courtney's advice are "application" and "environment."  Consider application first.  The security we expect varies greatly with the intended use of the appliance.  We expect different security properties, features, and functions from a car, a pacemaker, a refrigerator, a CCTV camera, a baby monitor, or a "smart"  TV.  This is critical.  Any attempt to treat all these things  the same is doomed to failure.  This is reflected in the tens of different safety standards that the Underwriters Laboratories has for electrical appliances.  Their list includes categories that had not even been invented when the Laboratories were founded at the turn of the last century.

Similarly our requirements vary with the environment in which the device is to be used.  We have different requirements for devices intended to be used in the home, car, airplane, hospital, office, plant, or infrastructure.  Projecting the requirements of any one of these on any other can only result in ineffectiveness and unnecessary cost.  For example, one does not require the same precision, reliability, or resistance to outside interference in a GPS intended for use in an automobile as for one intended for use in an airliner or a cruise ship.  One does not require the same security in a device intended for connection only to private networks as for those intended for direct connection to the public networks.

When I was at IBM, Courtney's First Law became the basis for the security standard for our products.  Product managers were told that the security properties, features, and functions of their product should meet the requirements for the intended application and environment.  The more things one wanted one's product to be used for and the more, or more hostile, the environments that one wanted it to be used in, the more robust the security had to be.  For example, the requirements for a large multi-user system were higher than those for a single user system.  The manager  could assert any claims  or disclaimers that she liked; what she could not do was remain silent.  Just requiring the manager to describe these things made a huge difference.   This was reinforced by requiring her to address this standard in all product plans, reviews, announcements, and marketing materials.  While this standard might not have accomplished magic, it certainly advanced the objective.

Achieving the necessary security for the Internet of things will require a lot of thought, action and, in some cases, invention.  Applying Courtney's First Law is a place to start.  A way to start might be to expect every vendor to speak to the intended application and environment of its product.  For example, a device might be labeled "intended only for home use on a home network; not intended for direct connection to the Internet."  While the baby monitor or doorbell must be able to access the Internet, attackers on the Internet should not be able to access the baby monitor.

Thursday, December 29, 2016

"Women and Children First"

As we approach autonomous cars, some have posed the question "In accident avoidance should the car prefer its passenger or the pedestrian?"  It is posed as a difficult ethical dilemma.  I even heard an engineer suggest that not only does he not want to make the decision but that he would like Congress to make it as a matter of law.  

This is really just another instance of an ethical dilemma that humanity has faced forever.  It has many illustrations but one that has been used in teaching ethics is called the "life boat" problem.  If there is not enough room in the lifeboat for everyone, who gets in?  If there is not enough food and water, who gets preference?

The simple answer is "women and children first."  Human drivers will steer away from a woman with a baby carriage even if they do not have time to evaluate the alternative.  It is built into the culture, all but automatic, but the reason is that it is pro life.  Children are the future of the species.  Women can nurture and reproduce.  Men can sow but they cannot reap.  While the male role takes minutes, the female role takes months.  Life needs more females than males.

The reason that we do not apply this pro life rationale to the autonomous automobile is that we assume that the consideration is beyond its capability.  However, most of what one expects of an autonomous car today was beyond its capability a decade ago.  For the moment, most may not be able to consider all the factors we might like.  For example, they may not recognize age and gender, much less consider them.  Ten years from now, they certainly will.  

In this context it is useful to consider how such systems make a decision.  They identify a number of scenarios, depending upon the time available, assign an outcome and a confidence level to each, and choose statistically.  The kind of ties implied by the strawman dilemma will be vanishingly rare, even more so as the computers become faster and the number of things they can consider increases.  

Compare the autonomous car to the human driver.  In the two tenths of a second that it takes a young adult to recognize and react, the autonomous car will evaluate dozens of possibilities with as many considerations.  Like the human driver, the autonomous car may confront instances when there are simply no good options but the whole reason for using them is that they are less likely than the human driver to overlook the least damaging.  
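The selection procedure described above can be sketched as follows.  The scenarios, harm estimates, and confidence numbers are invented for illustration; a real system would evaluate dozens of candidates in a fraction of a second.

```python
# Sketch: enumerate candidate maneuvers, weight each predicted outcome by
# the confidence in the prediction, and choose the least expected harm.

from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    action: str
    predicted_harm: float  # 0.0 = no harm, 1.0 = worst case
    confidence: float      # trust in the prediction, 0.0 to 1.0

def choose(scenarios: List[Scenario]) -> Scenario:
    # Low confidence pulls the estimate toward the worst case, so the
    # car prefers options whose outcomes it understands well.
    def expected_harm(s: Scenario) -> float:
        return s.confidence * s.predicted_harm + (1.0 - s.confidence) * 1.0
    return min(scenarios, key=expected_harm)

best = choose([
    Scenario("brake hard in lane", predicted_harm=0.30, confidence=0.95),
    Scenario("swerve left", predicted_harm=0.10, confidence=0.50),
    Scenario("swerve right", predicted_harm=0.20, confidence=0.90),
])
print(best.action)  # "swerve right": modest harm, high confidence
```

Note that the "swerve left" option looks best on predicted harm alone but loses because the prediction is uncertain; exact ties between fully evaluated options are the rare case, not the rule.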

Monday, October 31, 2016

Denial of Service Attacks Exploiting the Internet of Things

The recent denial of service attack against the Domain Name System (DNS) provider Dyn, exploiting compromised devices on the Internet, has generated a number of proposed solutions to the infrastructure vulnerability represented by the so-called Internet of Things.  

One of those proposals involved vigilante hacking to remove vulnerable devices.  It is important to call this proposal what it is, because that puts it in the context of a historical and cultural argument suggesting it is probably a bad idea.  That said, let us consider a related alternative.  

"Nice people do not attach weak systems to the public networks." While understandable, ignorance of the weakness is no excuse.  On the other hand, nice people do not interfere with the operation of another's property.  This is both an ethical and legal conflict.  

However, a system should be able to protect itself from any traffic that it can expect to see on any network to which it is attached.  For SOHO networks, where many of these "things"  can be expected to be, this is not a very high hurdle; for the public networks, even enterprise networks, this may be a very high hurdle indeed.  Part of the solution will be to specify the intended network environment of an appliance.  For example, an appliance might be labeled "intended for home use only; must not be connected to public or enterprise networks."  

Then the community might well consider regulations that make it illegal to attach such devices to the public networks.  Sanctions might include fines or disconnection from the networks under a rule that says, "if it can be disabled, that is, if it is not secure, then it may be disabled."

Under such a rule, one might safely, ethically, and legally connect one's light bulbs to one's home network. One might connect one's baby monitor to the home network.  One might even access the baby monitor from a mobile device using a virtual private network (VPN).  However, should a baby monitor be addressable and operable from the public networks, then it would be permissible to shut it down without notice, whether or not it is compromised and whether or not it is interfering with the public networks. 

Note that such regulation is already within the newly expanded power of the FCC.  For the average user, this would barely impact his use.