Tuesday, July 16, 2019

Privileged Access Management

In its Private Industry Notification (20190423-001), the FBI specifically called out the risk represented by privileged users, those people who configure and manage networks, systems, applications, and users, those with administrator (e.g., "root," "ADMIN") privileges.  These users are problematic because they can both expand their privileges and hide their use.  For example, an administrator might create a phony user ID to hide his activity or to retain access after termination.  The most egregious example might be the abuse by Edward Snowden, who expanded his privileges to exfiltrate a vast trove of documents from the NSA over a period of months without being detected.

At the primitive level, most privileges are associated with a user identifier and a reusable password.  To provide coverage across shifts and absences, these are often known to and used by multiple parties, with a consequent loss of accountability just where we need it most.

However, there are solutions, called Privileged Access Management (PAM) packages, that can be used to provide some automated control and accountability over these users.  These applications work by acting as proxies for the privileged controls, hiding them, controlling access to them, and recording their use.  Instead of connecting directly to the privileged controls, the administrator connects to the proxy which then connects him to the privileged control.  

These packages may provide:

  • hiding of all privileged controls
  • strong authentication of privileged users
  • management control over the granting and withdrawing of privileges
  • logging of all connections, events, uses, and content
  • multi-party controls (two or more people must cooperate)
  • restriction of use to a time of day or shift
  • restriction of use to specified (e.g., supervised) locations (e.g., device, network address, VPN, VLAN)
  • restriction to a single user at a time (checkout/checkin; see the sketch below)
  • others
The PAM becomes the sole process with access to the privileges and uses them on behalf of its user as directed by management or policy.  
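
To make the checkout/checkin pattern concrete, here is a minimal sketch in Python.  It is illustrative only, and all of the names in it are hypothetical; a real PAM would proxy the privileged session rather than ever revealing the secret, and would enforce many more of the controls listed above.

    import logging
    import threading
    from datetime import datetime

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("pam")

    class CredentialVault:
        """Toy PAM-style vault: hides privileged credentials, leases each to
        one user at a time, restricts use to a shift, and logs every event."""

        def __init__(self, credentials, shift=(8, 18)):
            self._credentials = credentials   # {account: secret}, hidden from users
            self._leases = {}                 # account -> user currently holding it
            self._lock = threading.Lock()
            self._shift = shift               # permitted hours (start, end)

        def checkout(self, user, account):
            if not (self._shift[0] <= datetime.now().hour < self._shift[1]):
                log.info("DENY %s -> %s (outside permitted shift)", user, account)
                raise PermissionError("outside permitted shift")
            with self._lock:
                holder = self._leases.get(account)
                if holder:
                    log.info("DENY %s -> %s (checked out by %s)", user, account, holder)
                    raise PermissionError("credential already checked out")
                self._leases[account] = user
            log.info("CHECKOUT %s -> %s", user, account)
            # A real PAM would open the privileged session itself here,
            # never disclosing the secret to the user.
            return self._credentials[account]

        def checkin(self, user, account):
            with self._lock:
                if self._leases.get(account) == user:
                    del self._leases[account]
                    log.info("CHECKIN %s -> %s", user, account)

    vault = CredentialVault({"root@db01": "s3cret"}, shift=(0, 24))  # no shift limit for the demo
    password = vault.checkout("alice", "root@db01")   # logged, exclusive
    vault.checkin("alice", "root@db01")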


If your enterprise, network, system, or application has only one privileged user or administrator, then you have good accountability; whatever was done, that person did it.  However, that applies only to very small enterprises.  Everyone else should be using a Privileged Access Manager.  There are now dozens on the market.  Choosing the right one will require some effort, but the usual sources (e.g., Gartner, Capterra, Solutions Review) will assist you.

Thursday, July 11, 2019

Control of Privileged Insiders

On April 23, 2019 the FBI published a Private Industry Notification (20190423-001).  The document was distributed as a PDF, by e-mail only.  While it is marked “TLP:WHITE,” that is, “may be distributed without restriction,” I could not find it on the web.

The summary read:

The FBI continues to observe U.S. businesses’ reporting significant losses caused by cyber insider threat actors.  These cases often involve former or disgruntled employees exploiting their enhanced privileges—such as unfettered access to company networks and software, remote login credentials, and administrative permissions— to harm companies. Cyber insider threat actors most often are motivated by revenge, but they also conduct attacks to profit financially from stolen information, gain a competitive edge at a new company, engage in extortion, or commit fraud through unauthorized sales and purchases.

I recommend it to the reader.  (Since I cannot find it on the web, here is a link to a private copy.)

There are two kinds of insider risk, accidental and intentional, and three threat sources: benign, dishonest, and disgruntled employees.  Note that the rate of the insider threat is much lower than that of the outsider threat, but the consequences, and therefore the risk, may be much greater.  Outsiders damage the brand, while insiders may bring down the business.

Not only is error by otherwise well-motivated and well-intentioned employees perhaps the biggest source of losses (“The dummies have it, hands down, now and forever.”  --Robert H. Courtney), but it also contributes to the success of attacks by outsiders.  (Think “phishing” and other forms of duping.)  Undetected errors may result in employee temptation and fraud.  The employee makes an error and no one notices.  She repeats it and still no one notices.  She finally concludes that she could do it in her own favor and still no one would notice.  We distinguish between dishonest employees, who want to keep their activities secret, and disgruntled employees, who want you to know that you have been injured.

Management supervision is the most effective of all insider controls.  Effective supervision usually requires that the supervisor could do, or at least appreciate, the job being supervised.  This control often breaks down for privileged IT jobs.  The more sensitive or unique the task to be supervised, the narrower the span of control should be.  While one might be able to supervise a dozen tellers or coders, one might supervise no more than five or six loan officers, system designers, or privileged administrators.

The limitation of supervision is cost; while it is effective, it is also expensive.  Therefore, other more efficient and complementary controls are often substituted for all or part of supervision.  These might include background checks, training, division of responsibility and privileges (so-called multi-party controls), cross-training, job rotation, measurement, mandatory vacations, audit trails and audits, recognition, compensation, and complete and timely separation.

I had been writing and talking on this subject for a few years before I added “please and thank you” to my list of controls.  While equitable compensation is a powerful control, no amount of it can compensate for inadequate recognition.  Many dishonest and most disgruntled employees feel that their contribution to the enterprise has not been appreciated.  Please and thank you go a long way toward maintaining necessary morale.  

The FBI notification gives special attention to IT personnel and, especially, privileged users such as system administrators.  Management often focuses on lower-level employees, like tellers or clerks, doing routine tasks.  Where these engage in fraud, they gain little and are caught early.  It is professionals, managers, and executives who bring down the business.

It is ironic that these highly privileged actors are often inadequately supervised, underpaid, and unaccountable.  We caution against the sharing of user IDs and passwords, but it is privileged IDs and passwords that are most likely to be shared.  Many administrators have so much privilege that they cannot be held accountable; they can escalate their privileges, and the privileges, once granted, cannot be effectively withdrawn.  Think about the privileges that Edward Snowden had to have accumulated to gain access to all the information that he exfiltrated.

One should not grant privileges that one cannot withdraw.  Therefore, privileged users should be required to use hardware-token-based strong authentication.  One should not grant privileges without accountability for their use.  Therefore, when there is more than one privileged user, i.e., in most large enterprises, Privileged Access Management (PAM) controls should be in place.  These controls will be covered in a later post.



Wednesday, April 3, 2019

The Universal Serial Bus

This weekend there was a report out of Mar-a-Lago that a Chinese national had been apprehended while trying to enter the resort carrying a laptop, four mobile phones, and a USB thumb drive containing malicious software.  While thumb drives are an efficient attack vector wherever the attacker has physical access to a computer, we continue to hear reports of people surprised at how easy such attacks are.

It is important to decode, to understand, “USB.”  It stands for “Universal Serial Bus.”  “Universal” refers to the standard; thousands of different devices employ the standard for interoperability.  It is a standard interface, but it is more than that.  It attaches to the bus of the host device as a peer with processors, memory, and other input/output devices.  The standard provides for the device to contain executable software to facilitate its attachment to and interaction with the host device.  Think of it this way: any device attached to the bus is logically an internal, not external, device.

Like many standards, this one is popular, in large part, because it is convenient.  It is an open interface for attaching cameras, scanners, printers, speakers, microphones, head sets, monitors, and storage devices.  As is often, not to say usually, the case, convenience trumps security.  Any control that limited the attachment, i.e., was more secure, would make it less convenient.

It is a privileged form of attachment; no authentication, no cryptography, no control. It is subject only to physical access control. Simply plugging a USB “thumb drive” into the bus of another device is sufficient to alter the fundamental operation of, not to say corrupt, that device in, perhaps, as little as tens of seconds. As we have seen, compromise of a single device on a network may reduce the cost of attack against all other devices on that network.
Since most, not to say all, personal computers expose their bus via the USB standard, it is essential to prevent unauthorized physical access to all such computers.   (Indeed the interface is so ubiquitous and so vulnerable that some security professionals advocate filling the port with superglue.  This measure should be considered for sensitive systems and applications and hostile environments.)  
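
For those on Linux, there is also a software analogue to the glue: the kernel lets one refuse to configure newly attached USB devices until they are explicitly authorized.  Here is a minimal sketch, assuming typical sysfs paths and root privilege; it is illustrative, not a complete defense, since it does nothing about devices attached before it runs.

    import glob

    # Linux exposes an "authorized_default" flag for each USB host
    # controller; writing "0" means newly attached devices are left
    # unconfigured until explicitly authorized by root.
    for ctrl in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
        with open(ctrl, "w") as f:
            f.write("0")

    # A device later deemed trustworthy can be enabled individually,
    # for example by writing "1" to its own "authorized" attribute,
    # e.g., /sys/bus/usb/devices/1-2/authorized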

  

Friday, March 29, 2019

New Paradigm

Just watched the Tom Field, Steve Katz “interview.”

I might have identified one or two new threats or changes to the environment.  Not sure I would do anything different as a result of what I heard.  We need drastic changes to security to address the applications and environments that Steve described.  I used to believe that risk increased in proportion to use, uses, and users, but now it is increasing exponentially.  We are around the knee of the “hockey stick” curve.  Doing the same things harder is not cutting it.

We need strong authentication, adaptive authentication, federated identity, end-to-end application layer encryption (Network Defined Security) (“zero trust”), “least privilege” access control (or at least “read-only” or “execute-only”), multi-party controls for sensitive capabilities, strong accountability and control for privileged users (PAM), and greatly improved proactive threat detection.  We need out-of-band confirmations and alerts for all transactions, many data changes, and some uses; a minimal sketch of such a confirmation follows below.  We need document management systems for intellectual property.  Some enterprises may be doing one or two of these; almost none are doing all of them.
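
To illustrate just one of these, out-of-band confirmation, here is a minimal sketch: a sensitive change is held until a short one-time code, delivered over a second channel, is echoed back.  The delivery function is a hypothetical stand-in for SMS, voice, or push notification.

    import hmac
    import secrets

    def send_out_of_band(user, code):
        # Hypothetical second channel (SMS, voice, push); stubbed for the sketch.
        print(f"(to {user}'s phone) confirmation code: {code}")

    def confirmed_change(user, apply_change):
        """Hold a sensitive change until the user echoes a code sent out of band."""
        code = f"{secrets.randbelow(10**6):06d}"          # six-digit one-time code
        send_out_of_band(user, code)
        response = input("Enter the code you received: ")
        if hmac.compare_digest(code, response.strip()):   # constant-time compare
            apply_change()
        else:
            raise PermissionError("out-of-band confirmation failed")

    confirmed_change("alice", lambda: print("change applied"))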

See my interview with Peter Denning.  https://dl.acm.org/citation.cfm?doid=3314328.3306614

Thursday, March 7, 2019

Interview in the Communications of the ACM

In this article, I argue that there is a significant difference between today’s state of security practice, in which convenience trumps security, and the real requirements.  The current practice leaves us vulnerable to the threat sources and their attack methods that we are seeing.  I make a number of recommendations for changes to the practice.

Friday, February 15, 2019

The Desktop, our Achilles Heel

During the last three or four years the number and rate of enterprise-wide breaches have increased dramatically.  Successful attacks have relied upon duping users into clicking on malicious bait objects in e-mail messages and web pages.  The malicious objects capture user credentials and then use them to attack peer systems in the enterprise network, spreading the compromise laterally.  These attacks exploit user gullibility, reusable credentials, the default desktop access control rule of “read/write,” and flat enterprise networks.  Therefore, many security practitioners recommend user training, multi-factor user authentication, and structured networks.  Resistance to all three of these measures is high and their effectiveness limited.  Moreover, they do not address the vulnerability of the desktops: their procedures can be modified by their data.  We are left with a high level of risk.

E-mail and browsing are the Achilles heel of the desktop, and the desktop is the Achilles heel of the enterprise.  One of these two applications is involved in a large percentage of all breaches.  Note that while Achilles was vulnerable on only one heel, a small attack surface, the enterprise may be vulnerable on many desktops.

One obvious defense would be to isolate these two applications from the system on which they run and those systems from the other applications and systems of the enterprise.  Neither of those applications should have the capability to make persistent changes to the procedures of the systems on which they run.  

In a world of cheap hardware, one way to do this would be to run these two applications on sacrificial hardware dedicated to them.  In a world of reliable process-to-process isolation, another would be to use that isolation to protect the system on which the applications run from any changes originating in those porous applications.  The first solution is resisted because IT culture sees hardware as expensive, in spite of the fact that its cost halves every two years.  The second is resisted because user culture prefers convenience, generality, flexibility, and “dancing pigs” to security.  As a consequence, most desktops are configured to offer read-write access to most objects and few provide reliable protective isolation.

It does not have to be this way.  Ten years ago Steve Jobs and Apple introduced us to iOS, with very limited capabilities but with very strong process-to-process isolation and strong protection from anything done at the user interface.  As it has matured its capabilities have increased.  Controlled application-to-application communication has been introduced while maintaining strong isolation and protection.  Some generality and flexibility have been sacrificed to usability and security but less than the defenders of the status quo predicted.  Nonetheless, resistance to iOS was so strong that it provoked Android, a more traditional system.  

However, iOS has been adopted by a large population of users who enjoy “most, but not all, of the benefits offered by the traditional general purpose system.” (Fred Cohen)  At the user application interface, it appears as a single-user, single-application machine.  While it can maintain application state, iOS is resistant to any persistent change to itself from the application or the user.

Said another way, iOS protects itself from its data, its user, and its user’s data.  While the application may be vulnerable to a “bait” attack, the system is not.  Therefore, it is a preferred environment in which to run vulnerable applications like e-mail and browsing, and sensitive applications like banking and healthcare.

Personal computers can be configured with hypervisors to provide strong process-to-process isolation.  They can be configured with the “least privilege” access control rule to resist contamination of procedures by their data.  Said another way, they can be configured such that simply clicking on a bait object is not sufficient to compromise the system.  Indeed, they can even be configured in such a way that, as in iOS, nothing done with the keyboard and mouse is sufficient to compromise the system.

This brings us to the “flat enterprise network.”  Traditionally, enterprise networks have been configured for any-to-any connectivity; any node in the network could send a message to any other node.  The latency and bandwidth between any two nodes were roughly the same as the average across all nodes.  Often, and at least by default, they have been operated at a single level of trust.  That is to say, all nodes in the network were assumed to be benign, orderly, and well behaved.  Nodes were not expected to have to protect themselves from traffic that originated on the network or to question the origin address.  It is this configuration that leaves the enterprise vulnerable to lateral compromise with little more than one compromised system or set of user credentials.

The alternative and safer network is referred to as “zero trust.”  All nodes are assumed to be mutually hostile.  Traffic may flow only between specified pairs, e.g., user to application or client to server.  Origin addresses are not trusted but must be authenticated.  Some cost in latency or bandwidth is tolerated for authorization of the connection and mutual authentication of the nodes.  This kind of network is resistant to lateral compromise; a compromised node can attack only the nodes to which it is allowed to send traffic.  Even those nodes will treat it with suspicion and may require evidence as to its identity.

There are a number of ways to restrict the flow of traffic to accord with this policy.  The first and most obvious is to provide only links between authorized nodes; easy for two nodes, illustrative, but it does not scale even to a small enterprise.  However, the others simulate this illustration, usually through the use of encryption, e.g., virtual local area networks (VLANs), virtual private networks (VPNs), and software defined networks (SDNs).  Note that in SDNs, users are included as “nodes.”  Note also that, to be most resistant to attack, connections should be at the application layer.  Applications are the nodes of interest and, contrasted with, for example, operating systems, have the smallest attack surface, i.e., the user interface.
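
To make the mutually authenticated, application-layer connection concrete, here is a minimal sketch of such a client in Python.  The host name, port, and certificate file names are assumptions for illustration; the server would be configured analogously, refusing any peer that cannot present a certificate signed by the enterprise authority.

    import socket
    import ssl

    # Mutual TLS at the application layer: the client verifies the server
    # against the enterprise CA and presents its own certificate for the
    # server to verify.  Host, port, and file paths are illustrative.
    context = ssl.create_default_context(
        ssl.Purpose.SERVER_AUTH, cafile="enterprise-ca.pem")
    context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")

    with socket.create_connection(("app.example.internal", 8443)) as sock:
        with context.wrap_socket(sock, server_hostname="app.example.internal") as tls:
            # Only after mutual authentication does application traffic flow.
            tls.sendall(b"hello, application")
            print(tls.recv(1024))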

So, to summarize, the traditional use and configuration of desktops leave the enterprise vulnerable.  While awareness and strong authentication remain essential practices, they are limited in their effectiveness.  E-mail and browsing should be isolated from mission-critical or otherwise sensitive applications.  The environment should be resistant to persistent changes to programs or procedures from application data, i.e., least privilege access control.  Network traffic should be encrypted end-to-end at the application layer; prefer software defined networks to VPNs, and VPNs to VLANs.
 




Friday, February 1, 2019

Limitations of Two-Factor Authentication

In an opinion piece in the New York Times, Professor Josephine Wolff of the Rochester Institute of Technology describes a “phishing” attack against which two-factor authentication might not protect you.  The bait asks you to click on it to go to an application that you are authorized to use.  Clicking on the link takes you to a site that mimics the application.  It mimics the prompts for the user ID, the password, and the one-time password, all three of which it uses to log on to the real application in your name.  Unfortunately, to some readers this may read like a general limitation of two-factor authentication rather than a special case.  Some users might conclude that two-factor authentication is not worth the inconvenience.

Consider some of the conditions for the success of this attack.  First, the bait must be for an application with which you actually have an account.  Second, the bait must be sufficiently well crafted to convince you that you want to respond.  Third, you must respond, not by going to the application the way you usually do, but by clicking on the bait.  Of course, this last is very bad practice.

Even if this man-in-the-middle attack is sufficiently well designed to fool you, it has only stolen a session that you started.  Unlike a simple password, the captured one-time password cannot be used to initiate a session on its own.  The attack has not exposed you to fraudulent reuse of your credentials.  It cannot be used to compromise other systems “laterally” within the enterprise.

Well-designed applications will not permit the attacker to turn off the two-factor authentication without requiring a second one-time password, and will confirm any such change out of band.

This is not the only possible successful attack against two-factor authentication; success depends upon the implementation.  Consider Google’s implementation.  It offers the user five different choices of how to get the one-time password (OTP): in an SMS text message, in an e-mail message, in spoken language over the phone, from a software generator, or from a hardware token (Google Titan).  All of these must ensure that the OTP comes from, and gets to, the right place.

For example, SMS text and voice over the phone rely upon the legitimate user’s control of the phone number.  E-mail requires that the OTP be sent to the legitimate user’s address.  Attackers have been successful in duping carrier support personnel into pointing the number to a new SIM or phone that they control.  They have also been successful in duping application support personnel into changing the number, or the e-mail address, to which the one-time password is sent.  Good practice requires that any such change be confirmed out of band.  After a compromise, the user will not get the one-time passwords, or perhaps even the phone calls, that they are expecting.

Even software and hardware tokens rely upon the right token being associated with the legitimate user.  In order to compensate for lost or broken tokens, most applications provide for enrolling new tokens.  An attacker might succeed in duping support personnel into enrolling their own token in place of the one held by the legitimate user.

Note that all of these attacks require work and special knowledge.  None of them guarantees success; none of them scales well.  Those that permit fraudulent reuse also deny the legitimate user access, and so should be obvious.

Two-factor authentication using one-time passwords is a special case of “strong authentication,” defined as multiple forms of evidence at least one of which (e.g., a one-time password) is resistant to replay.  Note that security can be increased by using more forms of evidence, though at the expense of convenience.
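
To see why the software generator’s output resists replay, consider how such a generator typically works: the token and the server share a secret key, and each computes a short code from that key and the current time, so a captured code is useless once its time step has passed.  Here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238); the shared secret is an illustrative value only.

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32, digits=6, period=30):
        """Time-based one-time password (RFC 6238) over HMAC-SHA1 (RFC 4226)."""
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // period            # current 30-second step
        msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{code % 10**digits:0{digits}d}"

    # Token and server compute the same code from the shared secret;
    # the secret below is illustrative only.
    print(totp("JBSWY3DPEHPK3PXP"))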

Strong authentication should be preferred for most applications.  Simple passwords must be used only for trivial applications.  All security mechanisms have limitations that we must understand and compensate for, but that does not make them unusable.  We must not permit the perfect to become the enemy of the good.