Monday, October 23, 2017

Security as Infrastructure

When I began in computers it was really fun.  I was hired as a "boy genius" at IBM Research.  We had the best toys.  I had my own IBM 650.  I was paid to take it apart and put it together again.  How great is that?  I got to work with Dr. Arthur Samuel, who was programming the IBM 704 to play checkers.  My colleague, Dick Casey, and I programmed the 650 to play Tic-Tac-Toe.  We had to use it on third shift but we even had a third of an IBM 705 where we installed the first Autocoder in Poughkeepsie.  I drove my transistor radio with a program on the IBM 1401.

That was just the beginning. For sixty years I have had the best toys. I have five PCs, I am on my fifth iPhone, and my fourth iPad.  I carry my sixty years of collected music and photographs, an encyclopedia, a library, and dozens of movies in my pocket.  It just keeps getting better. It is more fun than electric trains.

One of my favorite toys was the IBM Advanced Administrative System, AAS, five IBM 360/65s and a 360/85.  It was so much fun that I often forgot to eat or even go home at night.  However, on AAS one of my responsibilities was to manage the development of the access control system.  It was great fun to do and fun to talk about.  Serious people came to White Plains to hear me.  I was invited to Paris, Vienna, Amsterdam, London, Helsinki, and Stockholm to talk about my fun and games, about how we provided for the confidentiality, integrity, and availability of our wondrous system.  

However, as seems to happen to us all, I grew up, and finally old.  My toys, fun, and games became serious.  Some place along the way, most of the computers in the world were stitched together into a dense fabric, a network, into a world-wide web.  While still entertaining, this fabric had become important.  It supports the government, the military, industry, finance, and commerce.

Without any plan or intent, driven mostly by a deflationary spiral in cost and exploding utility, the fabric had become infrastructure, part of the underlying foundation of civilization.  It had become a peer of water, sewer, energy, finance, transportation, and government.  Moreover, it had become THE infrastructure, the one by which all of the others are governed, managed, and operated.

We build infrastructure to a different standard than toys or anything else not infrastructure.  Infrastructure must not fall of its own weight.  It must not fall under the load of normal use.  It must not even fall under easily anticipated abuse and misuse.  In order to prevent erroneous or malicious operation, the controls for infrastructure are reserved to trained operators and withheld from end users.

No special justification is required for this standard. The Romans built their roads, bridges, and aqueducts such that, with normal maintenance, they would last a thousand years.  And so they have.  The Hoover Dam and the Golden Gate Bridge were built to the same standard.  With normal maintenance, and in the absence of unanticipated events, they will never fail.  (They may be decommissioned but they will not fail.)  No one quibbled with Henry Kaiser over the cost or schedule for the dam.

However, our fabric was not driven by design and intent but by economics.  No technology in history has fallen in price and grown in power as fast as ours.  While we tend to think of it in terms of its state at a point in time, it continues to grow at an exponential rate.  Its importance can hardly be appreciated, much less overstated.

Given the absence of design and intent, it is surprisingly robust and resilient.  While not sufficient for all purposes to which we might wish to put it, it is sufficient for most.  With some compensating design and intent, it can be made sufficiently robust for any application.  

One word on "easily anticipated abuse and misuse."  On September 12, 2001, what could be easily anticipated had changed forever.  

As security people, we are responsible for the safe behavior, use, content, configuration, and operation of infrastructure.  As IT security people, we are responsible for the only international infrastructure, the public networks.  As users, we are responsible for not abusing, misusing, or otherwise weakening it.  

Note that ours is the only infrastructure that, at least by default, contains weak, compromised, or even hostile components and operators.  It is the only one that, by default, has controls intended for the exclusive use of managers and operators right next to those for end users.  Our infrastructure also, by default, connects and exposes the controls of other infrastructure to most of our unprivileged users.  It is our job to compensate for and remediate these conditions.

Our roles, responsibilities, privileges, and special knowledge give us significant leverage over, and responsibility for, the infrastructure of our civilization.  Everything that we do, or fail to do, strengthens or weakens that infrastructure.  That is why we are called professionals and are paid the big bucks.




Friday, October 20, 2017

MasterCard to Eliminate Signatures

MasterCard has announced that in the US and Canada, it will no longer require signatures on credit card transactions.  (PINs will continue to be required on debit card transactions.)   MC says that this will be more convenient for the customer and that it will rely on other (unnamed) mechanisms and processes for security.  Let us look at some.

First, many issuers use computer-aided mechanisms to detect fraudulent use by looking at such clues as location and other patterns of use.  Most of us have had calls from our banks checking on the legitimacy of activity.
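
To make the idea concrete, here is a minimal, rule-based sketch of the kind of check such mechanisms might apply; the thresholds, field names, and clues are illustrative assumptions, not any issuer's actual system.

```python
# Toy, rule-based fraud screen. Real issuer systems are statistical and far
# richer; the thresholds and record layout here are invented for illustration.
from datetime import datetime, timedelta

def looks_suspicious(txn, history):
    """txn and history entries are dicts with 'time', 'amount', 'country'."""
    usual_countries = {t["country"] for t in history}
    typical_amount = max((t["amount"] for t in history), default=0)

    # Clue 1: a country in which the card has never been used before.
    if txn["country"] not in usual_countries:
        return True
    # Clue 2: an amount far outside the established pattern of use.
    if typical_amount and txn["amount"] > 5 * typical_amount:
        return True
    # Clue 3: "impossible travel": use in another country within the hour.
    return any(t["country"] != txn["country"]
               and abs(txn["time"] - t["time"]) < timedelta(hours=1)
               for t in history)

history = [{"time": datetime(2017, 10, 20, 9), "amount": 42.0, "country": "US"}]
print(looks_suspicious(
    {"time": datetime(2017, 10, 20, 9, 30), "amount": 800.0, "country": "RO"},
    history))  # True: the issuer calls or texts the cardholder
```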

In theory, the required signature resists fraudulent use of lost or stolen cards.  In practice, not so much.  Even when clerks reconciled the signature on the check to the one on the card, it was an imperfect mechanism.  In modern systems, where no one really reconciles the signature, the best that the mechanism can do is to permit the consumer to recognize disputed items that he really did sign. However, for the most part, issuers simply accept the word of the consumer that a transaction is fraudulent.  The signature does not come into play. 

The best way to resist the fraudulent use of lost or stolen cards is to check that a proffered card has not been reported lost or stolen.  This works well in the US and Canada, where most transactions take place on line.  In countries where many transactions take place off line, PINs are used. 

American Express CEO Kenneth Chenault told President Obama that AmEx detects many fraudulent transactions within 60 seconds by sending a notification of use to the consumer’s mobile or e-mail in real time.

Bank of America and others resist fraudulent use by permitting the consumer to turn the card on and off using an app.  Again, this works well where most transactions are on line.

Android, Apple, and Samsung Pay resist fraudulent use by simply taking the card out of the transaction and substituting a digital token for the credit card number.  A lost mobile phone resists fraudulent reuse with a PIN for security and biometrics, e.g., facial or fingerprint recognition, for convenience.
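
A minimal sketch of the tokenization idea, assuming a simple vault operated by the network or issuer; the names and flow are illustrative, not those of any particular scheme.

```python
# Toy token vault: the merchant and the acquiring network only ever see the
# token; only the vault can map it back to the real card number (PAN).
# Class and field names are invented for illustration.
import secrets

class TokenVault:
    def __init__(self):
        self._pan_by_token = {}

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)   # random; carries no PAN digits
        self._pan_by_token[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._pan_by_token[token]        # only the vault can do this

vault = TokenVault()
token = vault.tokenize("5105105105105100")      # provisioning the phone
# A breach of the merchant yields only the token, useless as a card number;
# the phone's PIN or biometric gates its use in the first place.
print(token, vault.detokenize(token) == "5105105105105100")
```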

On line merchants have never had the benefit of signatures but can resist fraud by using PayPal or other proxies instead of accepting credit cards at checkout.  Where the merchant cooperates and the consumer uses American Express at checkout, AmEx will prompt the user for a one-time password sent to the user's mobile.  This protects the merchant, the consumer, and AmEx.  All of these resist “card not present” fraud.

Only the brands and issuers really know how necessary and effective signatures and PINs are: they take the risk when they are not required.

The fundamental vulnerability in the retail payment system is the credit card number in the clear on the magnetic stripe.  It remains a risk to merchants and issuers but is only a nuisance to the consumer.

In short, the future is mobile, tokenized, cardless, contactless, signature-less, PIN-less, and secure.

Wednesday, October 18, 2017

The Internet as Infrastructure

Today, when one connects an application, system, or network to the public networks, one is adding to the "system of public works," that is, to the "infrastructure," of the nation and the world.

The standards for building infrastructure, such as bridges, tunnels, and dams, are different from those for other artifacts.  Infrastructure must not fall of its own weight, must not fail in normal use or under normal load, and must resist "easily anticipated abuse and misuse."  A suspension bridge must not fall because a driver falls asleep and an eighteen wheeler goes over the side.

Notice that the abuse and misuse that can be easily anticipated today is much worse than when we began the Internet.  Were it not so, we might have done many things differently.

We call the resultant necessary property of infrastructure resiliency, rather than security, but the properties are related.

For any artifact, there are limits to the complexity, scale, load, and simultaneous component failures that the mechanism can be expected to survive. How many simultaneous sleepy drivers and plunging eighteen wheelers must a bridge be designed to survive?

When those limits are reached, what we want to happen is that the mechanism fail in such a way that damage is limited and the mechanism can be restored to operation as quickly as possible.

The three Great Northeastern Blackouts, of which August 14, 2003 was the latest, are examples. It is interesting that engineers see these blackouts as successes while the public and their surrogates, journalists and politicians, see them as failures.

All three were caused by multiple simultaneous and cascading component failures under conditions of heavy load. In all three cases the system failed in such a way that it was restored to a ninety percent service level in a day. While all three were spectacular and exciting, the damage was not nearly so severe as one might expect from a major ice storm.

This is the way that we would like the public networks to fail. In fact, so far, that is what we have seen. We have had massive local failures of the PSTN where it took days to weeks to restore to a ninety percent service level. Most of these were fire related and local. We have had one that was national and caused by a software change. We recovered from this one in hours.

To date, we have had a number of local failures of the Internet, all man-made (mostly caused by the infamous "cable-seeking backhoes or boat anchors"); most were accidental. We recovered from all of these in days. SQL/Slammer was man-made, malicious, and software related; it caused a noticeable drop in service for hours. However, there was not really a discontinuity of service.

It should be noted that SQL/Slammer was a homogeneous attack.  That is, every instance of it looked the same.  This made it relatively easy to construct and deploy filters that would resist its flow while not interfering with normal traffic.  However, it is fairly easy to visualize a heterogeneous attack that might overwhelm this remedy.
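
Slammer was a single small UDP datagram sent to port 1434, so a filter could key on just that; here is a simplified sketch of such a match (the packet representation is invented for illustration).

```python
# Sketch of a filter against a homogeneous attack such as SQL/Slammer, which
# was a single small UDP datagram to port 1434. Packet fields are simplified.
def drop_slammer(packet: dict) -> bool:
    """Return True if the packet should be dropped."""
    return packet.get("protocol") == "udp" and packet.get("dst_port") == 1434

traffic = [
    {"protocol": "udp", "dst_port": 1434},   # Slammer
    {"protocol": "tcp", "dst_port": 443},    # ordinary web traffic
]
print([p for p in traffic if not drop_slammer(p)])
# A heterogeneous attack, varying protocol, port, and payload from instance
# to instance, offers no such single signature to match on.
```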

So, there is wide-spread concern that there might be a malicious software-based attack that would bring down the entire Internet. To some degree this is angst, an unfocused apprehension rooted in intuition or ignorance.  However, it is shared by many who are knowledgeable.  Their concern is rooted in the (often unidentified and un-enumerated) facts that:

* the Internet evolved; it was not designed and deployed
* switching in the network is software-based
* operation of the components is homogeneous
* operation of network management controls is in-band
* users often have default access to management controls
* the topology is both open and flat
* paths in the network are ad hoc and adaptive
* connection policy is permissive
* most of the nodes in the network are un-trusted and a large number are under malicious control
* access is open and cheap
* identity of both components and users is unreliable
* ownership and management is decentralized
* other

If the impact of these things on the resiliency of the Internet were as obvious prospectively as it is retrospectively, we might have done things differently.  On the other hand, we might not have.  A little discussion is in order.

Unlike the PSTN, the Internet is packet, rather than circuit, switched.  The intent of this was to make the network more resilient in the face of node or link failures.  

The routers and switches may be software running on von Neumann architecture general-purpose computers.  This may make the network more resistant to component failure while making the components more vulnerable to malicious attack.  

We have become accustomed to the idea that software processes are vulnerable to interference or contamination by their data, i.e., the software in the switch can be contaminated by its traffic.  This exposes us to attacks intended to exploit, interfere with, or take control of switches and routers. 

This may be aggravated by the fact that so many routers and switches look the same.  While there are hundreds of products, most of them present controls that are operated via the Border Gateway Protocol (BGP).  An attack that can take control of one might be able to take control of many.   

Even most non-switch nodes in the network look the same, that is, like Windows or Unix (rather than, for example, MVS or OS/400).  These two operating systems are open, historically broken, and have a commitment to backward compatibility that makes them difficult to fix.  Historically they have shipped with unsafe defaults and have been corrupted within minutes of being connected to the Internet.  The result has been that there are millions of corrupt nodes in the Internet that are under the control of malicious actors.

Operation of the routers and switches (and other network nodes) is via the network itself; they can be operated from almost any node in the network.  Many are protected, if at all, only by a password, often weak or even default.  Thus, it might be possible to coordinate the mis-operation of many nodes at the same time.

The Internet is open as to user, attachment, protocol, and application.  The cost of a connection to the Internet is a function of the bandwidth or load but the cost of a relatively fast persistent connection is in the tens of dollars per month, about the same as a dial connection a decade ago.  

While one must demonstrate the ability to pay, usually with a credit card, the credit card may be stolen, and, depending on the provider, the name in which the connection is registered may not have to be the same as that on the credit card.  In short, almost anyone can add a node to the Internet with minimal checks on their identity or bona fides.  There will be bad actors. 

The only thing that is required to add a new protocol or application to the Internet is that at least two nodes agree on it and that it can be composed from IP packets.  Load-intensive protocols and applications for streaming audio and video were added alongside existing ones with no changes to the underlying infrastructure.  We have seen DoS attacks that relied upon minor changes to protocols and their use.

At least in theory, the topology of the Internet is "flat," as opposed to structured or hierarchical.  That is, at least in theory and with few exceptions, any node in the Internet can send a packet to any other node in the Internet.  The time and cost to send a packet between any two nodes chosen at random is roughly the same as for any other pair of nodes.

Said another way, both the time and cost to send a packet are independent of distance.  One implication of this is that attacks are cheap, can originate anywhere, and can attack anything attached. 

Paths in the Internet are determined late, possibly on a packet by packet basis, and adapt to changes in load or control settings.  The intent is that there be so many potential paths between A and B that at least one will always be available and that it will be discovered and used.  While the intent is to make the network resistant to node and link failures, an unintended consequence is that it is difficult to resist the flow of attack traffic. 
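
A toy illustration of that adaptive behavior, over an invented four-node topology: when a link fails, the next search simply finds another route. The topology and the search are simplified assumptions, not how any real router works.

```python
# Toy illustration of adaptive path selection over an invented topology:
# when a link fails, the next search simply finds another route.
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search over an undirected set of links."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for a, b in links:
            nxt = b if a == path[-1] else a if b == path[-1] else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

links = {("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")}
print(find_path(links, "A", "D"))   # one of the available paths, e.g. A-B-D
links.discard(("B", "D"))           # a link fails...
print(find_path(links, "A", "D"))   # ...and traffic adapts: A-C-D
# The same resilience that routes around failure also routes around attempts
# to block the flow of attack traffic.
```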

The original policies of the Internet were promiscuous (as opposed to permissive or restrictive); not only was any packet and flow permitted but there were no controls in place to resist them.  This was essential to its triumph over competitors like SNA and may have been necessary to its success.

While controls have been added as the scale has grown, the policy is still permissive, rather than restrictive, i.e., everything is allowed that is not explicitly forbidden.  

Said another way, all traffic is presumed to be benign until shown otherwise.  Attack traffic can flow freely until identified and restricted.
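
A sketch of the difference, using an invented rule format: under a permissive policy, traffic that matches no rule flows by default; under a restrictive policy, it is blocked until someone decides to allow it.

```python
# Sketch of permissive versus restrictive policy. The rule format is invented
# for illustration; rules are checked in order and the default decides the rest.
def permitted(packet, rules, default_allow):
    for match, allow in rules:
        if match(packet):
            return allow
    return default_allow

rules = [
    (lambda p: p["dst_port"] == 1434, False),   # explicitly forbidden
    (lambda p: p["dst_port"] == 443, True),     # explicitly allowed
]
novel_attack = {"dst_port": 31337}              # traffic no one anticipated

print(permitted(novel_attack, rules, default_allow=True))    # permissive: it flows
print(permitted(novel_attack, rules, default_allow=False))   # restrictive: blocked
```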

Finally, while most of the nodes in the Internet are un-trusted, and we know that many are corrupted and under hostile control, all are given the benefit of the doubt.  To date there has been little effort to identify and eliminate those that have been corrupted.  Therefore there remains a possibility that these corrupt systems can be marshaled in such a way as to deny the use of the network to all users, or to some targeted group.

The Internet is robust, not fragile.  It is resistant to both natural events and accidental man-made ones.  However, to the extent that the above things are, and remain, true, the Internet, and indirectly the nations, economies, institutions, and individuals that rely upon it, are vulnerable to abuse and misuse; concern is justified, if not proportionate.

While these characteristics are pervasive and resistant to change, and while they were often chosen for good reason, they are not fixed or required; they can be changed.  Understanding them, and how they might be changed, is key to making the Internet as resistant to abuse and misuse as it is to component failure or destruction.

This suggests that the network must become both less open, not to say closed, and more structured. The management controls must be protected and taken out of band. The policy must become much more restrictive. We must identify our users and customers and hold them accountable for their traffic.

To bring the Internet to infrastructure standards, we must overcome not only inertia but also culture.  Each of us must exercise our influence on our employers, clients, and vendors to move the Internet to the same standards that we expect of skyscrapers, bridges, tunnels, and dams.  Since there is no one else to do it, we are called professionals and are paid the big bucks.