Friday, February 15, 2019

The Desktop, our Achilles Heel

During the last three or four years, the number and rate of enterprise-wide breaches have increased dramatically.  Successful attacks have relied upon duping users into clicking on bait malicious objects in e-mail messages and web pages.  The malicious objects capture user credentials and then use them to attack peer systems in the enterprise network, spreading the compromise laterally.  These attacks exploit user gullibility, re-usable credentials, the default desktop access control rule of "read/write," and flat enterprise networks.  Therefore, many security practitioners recommend user training, multi-factor user authentication, and structured networks.  Resistance to all three of these measures is high and their effectiveness limited.  Moreover, they do not address the vulnerability of desktops to having their procedures modified by their data.  We are left with a high level of risk.

E-mail and browsing are the Achilles heel of the desktop, and the desktop is the Achilles heel of the enterprise.  One of these two applications is involved in a large percentage of all breaches.  Note that while Achilles was vulnerable on only one heel, a small attack surface, the enterprise may be vulnerable on many desktops.

One obvious defense would be to isolate these two applications from the system on which they run and those systems from the other applications and systems of the enterprise.  Neither of those applications should have the capability to make persistent changes to the procedures of the systems on which they run.  

In a world of cheap hardware, one way to do this would be to run these two applications on sacrificial hardware dedicated to them.  In a world of reliable process-to-process isolation, another would be to use that isolation to protect the system on which the applications run from any changes originating in those porous applications.  The first solution is resisted because IT culture sees hardware as expensive, this in spite of the fact that its cost halves every two years.  The second is resisted because user culture prefers convenience, generality, flexibility, and "dancing pigs" to security.  As a consequence, most desktops are configured to offer read-write access to most objects, and few provide reliable protective isolation.

It does not have to be this way.  Ten years ago Steve Jobs and Apple introduced us to iOS, with very limited capabilities but with very strong process-to-process isolation and strong protection from anything done at the user interface.  As it has matured, its capabilities have increased.  Controlled application-to-application communication has been introduced while maintaining strong isolation and protection.  Some generality and flexibility have been sacrificed to usability and security, but less than the defenders of the status quo predicted.  Nonetheless, resistance to iOS was so strong that it provoked Android, a more traditional system.

However, iOS has been adopted by a large population of users that enjoy "most, but not all, of the benefits offered by the traditional general purpose system" (Fred Cohen).  At the user application interface, it appears as a single-user, single-application machine.  While it can maintain application state, iOS is resistant to any persistent change to itself from the application or the user.

Said another way, iOS protects itself from its data, its user, and its user's data.  While an application may be vulnerable to a "bait" attack, the system is not.  Therefore, it is a preferred environment in which to run vulnerable applications like e-mail and browsing, and sensitive applications like banking and healthcare.

Personal computers can be configured with hypervisors to provide strong process-to-process isolation.  They can be configured with the "least privilege" access control rule to resist contamination of procedures by their data.  Said another way, they can be configured such that simply clicking on a bait object is not sufficient to compromise the system.  Indeed, they can even be configured in such a way that, as in iOS, nothing done with the keyboard and mouse is sufficient to compromise the system.
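The "least privilege" rule can be sketched as a default-deny access check: an operation is permitted only if it has been explicitly granted.  The following is a minimal illustration, not any particular product's mechanism; the subjects, objects, and rights ("mail_client", "mail_client.bin", and so on) are hypothetical names chosen for the example.

```python
def allowed(subject, action, obj, acl):
    """Least privilege with default deny: permit only what is explicitly granted."""
    return action in acl.get((subject, obj), set())

# Hypothetical policy: the mail client may read and write its own data (state),
# but may only read and execute its procedures -- never write them.
ACL = {
    ("mail_client", "inbox.db"): {"read", "write"},
    ("mail_client", "mail_client.bin"): {"read", "execute"},
}

print(allowed("mail_client", "write", "inbox.db", ACL))         # True
print(allowed("mail_client", "write", "mail_client.bin", ACL))  # False
```

Under this rule, a bait object executing inside the mail client cannot modify the client itself; clicking on it is no longer sufficient to make a persistent change to the system's procedures.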

This brings us to the "flat enterprise network."  Traditionally, enterprise networks have been configured for any-to-any connectivity; any node in the network could send a message to any other node.  The latency and bandwidth between any two nodes was roughly the same as the average across all nodes.  Often, and at least by default, they have been operated at a single level of trust.  That is to say, all nodes in the network were assumed to be benign, orderly, and well behaved.  Nodes were not expected to have to protect themselves from traffic that originated on the network or to question the origin address.  It is this configuration that leaves the enterprise vulnerable to lateral compromise with little more than one compromised system or set of user credentials.

The alternative and safer network is referred to as "zero trust."  All nodes are assumed to be mutually hostile.  Traffic may flow only between specified pairs, e.g., user to application or client to server.  Origin addresses are not trusted but must be authenticated.  Some cost in latency or bandwidth is tolerated for authorization of the connection and mutual authentication of the nodes.  This kind of network is resistant to lateral compromise; a compromised node can attack only the nodes to which it is allowed to send traffic.  Even those nodes will treat it with suspicion and may require evidence as to its identity.
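The forwarding rule at the heart of zero trust can be sketched in a few lines: traffic flows only between explicitly specified pairs, and everything else is refused.  The node names below are illustrative, not drawn from any real deployment.

```python
# Hypothetical allow list of permitted (source, destination) flows.
ALLOWED_FLOWS = {
    ("alice_desktop", "mail_server"),
    ("mail_server", "message_store"),
}

def permit(src, dst):
    """Default deny: a flow is forwarded only if it appears in the allow list."""
    return (src, dst) in ALLOWED_FLOWS

print(permit("alice_desktop", "mail_server"))  # True
print(permit("alice_desktop", "bob_desktop"))  # False -- lateral movement refused
```

Note that this check is only the authorization half of the policy; a zero-trust network pairs it with mutual authentication so that a node cannot simply claim a trusted origin address.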

There are a number of ways to restrict the flow of traffic to accord with this policy.  The first and most obvious is to provide only links between authorized nodes; easy for two nodes, illustrative, but it does not scale even to a small enterprise.  However, the others simulate this illustration, usually through the use of encryption, e.g., virtual local area networks (VLANs), virtual private networks (VPNs), and software defined networks (SDNs).  Note that in SDNs, users are included as "nodes."  Note also that to be most resistant to attack, connections should be at the application layer.  Applications are the nodes of interest and, contrasted with, for example, operating systems, have the smallest attack surface, i.e., the user interface.
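One common building block for such encrypted, mutually authenticated application-layer connections is mutual TLS.  The sketch below shows one way to configure it with Python's standard `ssl` module; it is illustrative only, and the certificate file names are placeholders, not a prescription.

```python
import ssl

def mutual_tls_context(ca_file=None, cert_file=None, key_file=None):
    """Client context for a mutually authenticated, encrypted connection.

    File arguments are hypothetical paths to the enterprise CA certificate
    and this node's own certificate and key.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED   # the peer must present a certificate;
    ctx.check_hostname = True             # the origin address alone is not trusted
    if ca_file:
        ctx.load_verify_locations(ca_file)        # trust only the enterprise CA
    if cert_file:
        ctx.load_cert_chain(cert_file, key_file)  # authenticate ourselves in turn
    return ctx

ctx = mutual_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Because both ends authenticate with certificates rather than addresses, a compromised node cannot reach a server merely by spoofing its origin; it must also hold a credential the policy recognizes.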

So, to summarize, the traditional use and configuration of desktops leave the enterprise vulnerable.  While awareness and strong authentication remain essential practices, they are limited in their effectiveness.  E-mail and browsing should be isolated from mission-critical or otherwise sensitive applications.  The environment should be resistant to persistent changes to programs or procedures from application data, i.e., least privilege access control.  Network traffic should be encrypted end-to-end at the application layer; prefer software defined networks to VPNs, and VPNs to VLANs.


  1. I'm not sure what you mean with software defined networks. Do you mean the client application dynamically could connect to a network it needs when it starts, and disconnect from this network when the app closes, and only this client application can connect to this network when it is open? Similar to a client application starting a VPN connection accessible only to this application? Or do you mean a completely different thing? Brian Krebs said recently that he uses a VirtualBox VM to surf the Web. And Qubes OS uses domain VMs for specific apps. I wish Microsoft had something similar. Can you give some more examples? Thanks.

  2. Sorry. I just saw your question. Think of it as the ability for management to specify the allowed connections within a network. Management could, for example, enforce a policy that desktops that connect to the Internet could not also access administrative ports on file or database servers, thus resisting "lateral" attacks.