Friday, February 15, 2019

The Desktop, Our Achilles Heel

During the last three or four years, the number and rate of enterprise-wide breaches have increased dramatically.  Successful attacks have relied upon duping users into clicking on malicious bait objects in e-mail messages and web pages.  The malicious objects capture user credentials and then use them to attack peer systems in the enterprise network, spreading the compromise laterally.  These attacks exploit user gullibility, reusable credentials, the default desktop access control rule of “read/write,” and flat enterprise networks.  Therefore, many security practitioners recommend user training, multi-factor user authentication, and structured networks.  Resistance to all three of these measures is high and their effectiveness limited.  Moreover, they do not address the desktop’s vulnerability to having its procedures modified by its data.  We are left with a high level of risk.

E-mail and browsing are the Achilles Heel of the desktop, and the desktop is the Achilles Heel of the enterprise.  One of these two applications is involved in a large percentage of all breaches.  Note that while Achilles was vulnerable on only one heel, a small attack surface, the enterprise may be vulnerable on many desktops.

One obvious defense would be to isolate these two applications from the systems on which they run, and those systems from the other applications and systems of the enterprise.  Neither of these applications should have the capability to make persistent changes to the procedures of the systems on which they run.

In a world of cheap hardware, one way to do this would be to run these two applications on sacrificial hardware dedicated to them.  In a world of reliable process-to-process isolation, another would be to use that isolation to protect the system on which the applications run from any changes originating in those porous applications.  The first solution is resisted because IT culture sees hardware as expensive, in spite of the fact that its cost halves every two years.  The second is resisted because user culture prefers convenience, generality, flexibility, and “dancing pigs” to security.  As a consequence, most desktops are configured to offer read-write access to most objects, and few provide reliable protective isolation.

It does not have to be this way.  Ten years ago Steve Jobs and Apple introduced us to iOS, with very limited capabilities but with very strong process-to-process isolation and strong protection from anything done at the user interface.  As it has matured, its capabilities have increased.  Controlled application-to-application communication has been introduced while maintaining strong isolation and protection.  Some generality and flexibility have been sacrificed to usability and security, but less than the defenders of the status quo predicted.  Nonetheless, resistance to iOS was so strong that it provoked Android, a more traditional system.

However, iOS has been adopted by a large population of users who enjoy “most, but not all, of the benefits offered by the traditional general purpose system” (Fred Cohen).  At the user application interface, it appears as a single-user, single-application machine.  While it can maintain application state, iOS is resistant to any persistent change to itself from the application or the user.

Said another way, iOS protects itself from its data, its user, and its user’s data.  While the application may be vulnerable to a “bait” attack, the system is not.  Therefore, it is a preferred environment in which to run vulnerable applications like e-mail and browsing, and sensitive applications like banking and healthcare.

Personal computers can be configured with hypervisors to provide strong process-to-process isolation.  They can be configured with the “least privilege” access control rule to resist contamination of procedures by their data.  Said another way, they can be configured such that simply clicking on a bait object is not sufficient to compromise the system.  Indeed, they can even be configured in such a way that, as in iOS, nothing done with the keyboard and mouse is sufficient to compromise the system.
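By way of illustration, the following is a minimal sketch, in Python and for POSIX systems only, of launching a vulnerable viewer under the “least privilege” rule.  The account name “sandbox,” the viewer path, and the file name are assumptions made for the example; a real configuration would rely on a hypervisor or the operating system’s own sandboxing rather than an ad hoc wrapper.

    import os
    import pwd
    import subprocess

    def run_least_privilege(argv):
        # "sandbox" is a hypothetical unprivileged account created for
        # exactly this purpose; it owns nothing the enterprise cares about.
        sandbox = pwd.getpwnam("sandbox")

        def drop_privileges():
            os.setgid(sandbox.pw_gid)   # give up group privilege first
            os.setuid(sandbox.pw_uid)   # then user privilege
            os.umask(0o077)             # anything it creates is private to it

        # The child may read its input but has no right to modify the
        # programs and procedures of the system: "least privilege."
        return subprocess.run(argv, preexec_fn=drop_privileges, check=False)

    # Illustrative invocation; the path and file name are placeholders.
    run_least_privilege(["/usr/bin/viewer", "untrusted-attachment.pdf"])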

This brings us to the “flat enterprise network.”  Traditionally, enterprise networks have been configured for any-to-any connectivity; any node in the network could send a message to any other node.  The latency and bandwidth between any two nodes were roughly the same as the average across all nodes.  Often, and at least by default, they have been operated at a single level of trust.  That is to say, all nodes in the network were assumed to be benign, orderly, and well behaved.  Nodes were not expected to have to protect themselves from traffic that originated on the network or to question the origin address.  It is this configuration that leaves the enterprise vulnerable to lateral compromise with little more than one compromised system or set of user credentials.

The alternative and safer network is referred to as “zero trust.”  All nodes are assumed to be mutually hostile.  Traffic may flow only between specified pairs, e.g., user to application or client to server.  Origin addresses are not trusted but must be authenticated.  Some cost in latency or bandwidth is tolerated for authorization of the connection and mutual authentication of the nodes.  This kind of network is resistant to lateral compromise; a compromised node can attack only the nodes to which it is allowed to send traffic.  Even those nodes will treat it with suspicion and may require evidence as to its identity.
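As an illustration of the policy, consider this minimal sketch in Python.  The pair names are hypothetical; in practice the identities would be established by authenticated credentials such as client certificates, never by origin addresses.

    # A minimal sketch of a zero-trust flow policy: traffic may flow only
    # between explicitly authorized pairs, and identity comes from mutual
    # authentication, never from the origin address.  Names are
    # illustrative only.
    ALLOWED_FLOWS = {
        ("payroll-client", "payroll-server"),
        ("mail-client", "mail-gateway"),
    }

    def may_connect(origin_identity, destination_identity):
        # origin_identity must be established by authentication, e.g., a
        # verified client certificate, not read from a packet header.
        return (origin_identity, destination_identity) in ALLOWED_FLOWS

    assert may_connect("payroll-client", "payroll-server")
    assert not may_connect("payroll-client", "mail-gateway")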

There are a number of ways to restrict the flow of traffic to accord with this policy.  The first and most obvious is to provide links only between authorized nodes; easy for two nodes, and illustrative, but it does not scale even to a small enterprise.  The others simulate this illustration, usually through the use of encryption, e.g., virtual local area networks (VLANs), virtual private networks (VPNs), and software-defined networks (SDNs).  Note that in SDNs, users are included as “nodes.”  Note also that, to be most resistant to attack, connections should be at the application layer.  Applications are the nodes of interest and, in contrast to, for example, operating systems, have the smallest attack surface, i.e., the user interface.
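For example, here is a minimal sketch of mutual authentication at the application layer, using Python’s standard ssl library.  The certificate and key file names are assumptions for the example.

    import socket
    import ssl

    # The server refuses any client that cannot present a certificate
    # signed by the enterprise's own certificate authority; the origin
    # address is never taken as evidence of identity.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.verify_mode = ssl.CERT_REQUIRED              # client must authenticate
    context.load_cert_chain("server-cert.pem", "server-key.pem")
    context.load_verify_locations("enterprise-ca.pem")   # trust only our own CA

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()           # handshake authenticates both ends
            print("authenticated peer:", conn.getpeercert().get("subject"))
            conn.close()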

So, to summarize, the traditional use and configuration of desktops leave the enterprise vulnerable.  While awareness training and strong authentication remain essential practices, they are limited in their effectiveness.  E-mail and browsing should be isolated from mission-critical or otherwise sensitive applications.  The environment should be resistant to persistent changes to programs or procedures from application data, i.e., the least privilege access control rule.  Network traffic should be encrypted end-to-end at the application layer; prefer software-defined networks to VPNs, and VPNs to VLANs.
 




Friday, February 1, 2019

Limitations of Two Factor Authentication

In an opinion piece in the New York Times, Professor Josephine Wolff of the Rochester Institute of Technology describes a “phishing” attack against which two factor authentication might not protect you.  The bait asks you to click on it to go to an application that you are authorized to use.  Clicking on the link takes you to a site that mimics the application.  It mimics the prompts for the user ID, the password, and the one-time password, all three of which it uses to log on to the real application in your name.  Unfortunately, to some readers this may read like a general limitation of two factor authentication rather than a special case.  Some users might conclude that two factor authentication is not worth the inconvenience.

Consider some of the conditions for the success of this attack.  First, the bait must be for an application with which you actually have an account.  Second, the bait must be sufficiently well crafted to convince you that you want to respond.  Third, you must respond, not by going to the application the way you usually do, but by clicking on the bait.  Of course, this is very bad practice.

Even if this man-in-the-middle attack is sufficiently well designed to fool you, it has only stolen a session that you started.  Unlike a simple password, the captured credentials cannot be used to initiate a session on their own.  The attack has not exposed you to fraudulent reuse of your credentials.  It cannot be used to compromise other systems “laterally” within the enterprise.

Well-designed applications will not permit the attacker to turn off the two factor authentication without requiring a second one-time password, and will confirm any such change out of band.

This is not the only possible successful attack against two factor authentication; others depend upon the implementation.  Consider Google’s implementation.  It offers the user five different choices of how to get the one-time password (OTP): in an SMS text message, in an e-mail message, in spoken language over the phone, from a software generator, or from a hardware token (Google Titan).  All of these must ensure that the OTP comes from, and gets to, the right place.
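To make the software generator case concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm of RFC 6238, the scheme behind most authenticator apps, using only Python’s standard library.  The base32 secret shown is a placeholder, not a real key.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, period=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // period        # changes every 30 seconds
        msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                  # dynamic truncation, RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The secret below is a placeholder for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))

Because the code is derived from the current time step and a shared secret, it expires within seconds and cannot be replayed later, which is precisely what distinguishes it from a simple password.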

For example, SMS text and voice over the phone rely upon the legitimate user’s control of the phone number.  E-mail requires that the OTP be sent to the legitimate user’s address.  Attackers have been successful in duping carrier support personnel into pointing the number to a new SIM or phone that they control.  They have also been successful in duping application support personnel into changing the number or e-mail address to which the one-time password is sent.  Good practice requires that any such change be confirmed out of band.  After such a compromise, the user will not get the one-time passwords, or perhaps even the phone calls, that they are expecting.

Even software and hardware tokens rely upon the right token being associated with the legitimate user.  In order to compensate for lost or broken tokens, most applications provide for enrolling new tokens.  An attacker might succeed in duping support personnel into enrolling their token in place of the one held by the legitimate user.

Note that all of these attacks require work and special knowledge.  None of them guarantees success, and none of them scales well.  Those that permit fraudulent reuse also deny the legitimate user access, and so should be obvious.

Two factor authentication using one-time passwords is a special case of “strong authentication,” defined as multiple forms of evidence at least one of which (e.g., the one-time password) is resistant to replay.  Note that security can be increased by using more forms of evidence, though at the expense of convenience.
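As a sketch of what “resistant to replay” means on the verifying side, building on the totp() sketch above: the verifier remembers the last time step at which it accepted a code and refuses any code for that step or an earlier one, so a captured one-time password cannot be used to start a second session.

    # Builds on the totp() sketch above.  A code is accepted at most once;
    # a captured one-time password cannot be replayed to start a session.
    last_accepted_step = {}                         # user -> last accepted time step

    def verify_otp(user, secret_b32, submitted, period=30):
        step = int(time.time()) // period
        if submitted != totp(secret_b32, period):
            return False                            # wrong code
        if last_accepted_step.get(user, -1) >= step:
            return False                            # replay of an already used code
        last_accepted_step[user] = step
        return True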

Strong authentication should be preferred for most applications.  Simple passwords must be used only for trivial applications.  All security mechanisms have limitations that we must understand and compensate for, but that does not make them unusable.  We must not permit the perfect to become the enemy of the good.