Sunday, March 26, 2017

Internet Vulnerability

On March 24, 2017, Gregory Michaelidis wrote in Slate on "Why America’s Current Approach to Cybersecurity Is So Dangerous."

He cited an article by Bruce Schneier.

In response, I observed to a number of colleagues, proteges, and students that "One takeaway from this article and the Schneier article that it points to is that we need to reduce our attack surface.  Dramatically.  Perhaps ninety percent.  Think least privilege access at all layers to include application white-listing, safe defaults, end-to-end application layer encryption, and strong authentication."

One colleague responded "I think one reason the cyber attack surface is so large is that the global intel agencies hoard vulnerabilities and exploits..."  Since secret "vulnerabilities and exploits" account for so little of our attack surface, I fear that he missed my point.

While it is true that intelligence agencies enjoy the benefits of our vulnerable systems and are little motivated to reduce the attack surface, the "hoarded vulnerabilities and exploits" are not the attack surface and the intel agencies are not the cause.  

The cause is the IT culture. There is a broad market preference for open networks, systems, and applications. TCP/IP drove the more secure SNA/SDLC from the field. The market prefers Windows and Linux to OS X, Android to iOS, IBM 360 to System 38, MVS to FS, MS-DOS to OS/2, Z Systems to iSeries, Flash to HTML5, von Neumann architecture [Wintel systems] to almost anything else.  

One can get a degree in Computer Science, even in Cyber Security, without ever even hearing about a more secure alternative to the von Neumann architecture [e.g., the IBM iSeries: a closed, finite state architecture (operations can take the system only from one valid state to another), a limited set of strongly-typed objects (e.g., data cannot be executed, programs cannot be modified), single level store, and symbolic-only addressing].

We prefer to try to stop leakage at the end user device or the perimeter rather than administer access control at the database or file system. We persist in using replayable passwords in preference to strong authentication, even though they are implicated in almost every breach. We terminate encryption on the OS, or even the perimeter, rather than the application. We deploy user programmable systems where application only systems would do. We enable escape mechanisms and run scripts and macros by default.

We have too many overly privileged users with almost no multi-party controls. We discourage shared UIDs and passwords for end users but default to them for the most privileged users, where we most need accountability. We store our most sensitive information in the clear, as file system objects, on the desktop, rather than encrypted, in document management systems, on servers. We keep our most sensitive data and mission critical apps on the same systems where we run our most vulnerable applications, browsing and e-mail. We talk about defense in depth but operate our enterprise networks flat, with any-to-any connectivity and trust, neither structured nor architected. It takes us weeks to months just to detect breaches and more time to fix them.
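Consider just one of these, terminating encryption below the application. A minimal sketch, in Python, of the alternative: encrypt in the application itself, before the data ever reaches the file system or the wire. The record contents are invented, and the use of the "cryptography" package's Fernet recipe is an illustrative assumption, not a prescription.

    # Sketch: encrypt in the application, before data reaches the OS,
    # the file system, or the network stack.
    # Requires the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    # In practice the key would come from a key-management service;
    # generating it inline is purely illustrative.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"account=12345;balance=100.00"  # invented sensitive record
    token = cipher.encrypt(record)            # all the lower layers ever see
    assert cipher.decrypt(token) == record

Only the ciphertext is written to disk or the wire; a compromise of the perimeter or the operating system then discloses nothing in the clear.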

I can go on and I am sure you can add examples of your own. Not only is the intelligence community not responsible for these practices, it is guilty of them itself. It was these practices, not secret vulnerabilities, that were exploited by Snowden. It is this culture, not "hoarded vulnerabilities and exploits," that is implicated in the breaches of the past few years. It defies reason that one person acting alone could collect the data that Snowden did without being detected.

Nation states do what they do; their targets of choice will yield to their overwhelming force.  
However, we need not make it so easy. We might not be able to resist dragons, but we are yielding to bears and brigands. I admit that the culture is entrenched and resistant to change, but it will not be changed by blaming the other guy. "We have met the enemy and he is us."




Wednesday, January 4, 2017

All "Things" are not the Same

My mentor, Robert H. Courtney, Jr.  was one of the great original thinkers in security.  He taught me a number of useful concepts some of which I have codified and call "Courtney's Laws."  At key inflection points in information technology I find it useful to take them out and consider the problems of the day in their light.  The emergence of what has been called the Internet of Things (IoT) is such an occasion. 

Courtney's First Law cautioned us that "Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment."  This law can be usefully applied to the difficult, not to say intractable, problem of the Internet of Things.  All "things" are not the same and, therefore, do not have the same security requirements or solutions.

What Courtney does not address is what we mean by "security."  The security that most seem to think about in this context is resistance to interference with the intended function of the "thing" or appliance.  The examples du jour include interference with the operation of an automobile or with a medical device.  However, a greater risk is that the general-purpose computer function in the device will be subverted and used for denial of service attacks or brute force attacks against passwords or cryptographic keys.

Key to Courtney's advice are "application" and "environment."  Consider application first.  The security we expect varies greatly with the intended use of the appliance.  We expect different security properties, features, and functions from a car, a pacemaker, a refrigerator, a CCTV camera, a baby monitor, or a "smart"  TV.  This is critical.  Any attempt to treat all these things  the same is doomed to failure.  This is reflected in the tens of different safety standards that the Underwriters Laboratories has for electrical appliances.  Their list includes categories that had not even been invented when the Laboratories were founded at the turn of the last century.

Similarly our requirements vary with the environment in which the device is to be used.  We have different requirements for devices intended to be used in the home, car, airplane, hospital, office, plant, or infrastructure.  Projecting the requirements of any one of these on any other can only result in ineffectiveness and unnecessary cost.  For example, one does not require the same precision, reliability, or resistance to outside interference in a GPS intended for use in an automobile as for one intended for use in an airliner or a cruise ship.  One does not require the same security in a device intended for connection only to private networks as for those intended for direct connection to the public networks.

When I was at IBM, Courtney's First Law became the basis for the security standard for our products.  Product managers were told that the security properties, features, and functions of their product should meet the requirements for the intended application and environment.  The more things one wanted one's product to be used for and the more, or more hostile, the environments that one wanted it to be used in, the more robust the security had to be.  For example, the requirements for a large multi-user system were higher than those for a single user system.  The manager  could assert any claims  or disclaimers that she liked; what she could not do was remain silent.  Just requiring the manager to describe these things made a huge difference.   This was reinforced by requiring her to address this standard in all product plans, reviews, announcements, and marketing materials.  While this standard might not have accomplished magic, it certainly advanced the objective.

Achieving the necessary security for the Internet of things will require a lot of thought, action, and, in some cases, invention.  Applying Courtney's First Law is a place to start.  A way to begin might be to expect all vendors to speak to the intended application and environment of their products.  For example, a device might be labeled "only for home use on a home network; not intended for direct connection to the Internet."  While the baby monitor or doorbell must be able to access the Internet, attackers on the Internet should not be able to access the baby monitor.

Thursday, December 29, 2016

"Women and Children First"

As we approach autonomous cars, some have posed the question "In accident avoidance should the car prefer its passenger or the pedestrian?"  It is posed as a difficult ethical dilemma.  I even heard an engineer suggest that not only does he not want to make the decision but that he would like Congress to make it as a matter of law.  

This is really just another instance of an ethical dilemma that humanity has faced forever.  It has many illustrations, but one that has been used in teaching ethics is called the "lifeboat" problem.  If there is not enough room in the lifeboat for everyone, who gets in?  If there is not enough food and water, who gets preference?

The simple answer is "women and children first."  Human drivers will steer away from a woman with a baby carriage even if they do not have time to evaluate the alternative.  It is built into the culture, all but automatic, but the reason is that it is pro life.  Children are the future of the species.  Women can nurture and reproduce.  Men can sow but they cannot reap.  While the male role takes minutes, the female role takes months.  Life needs more females than males.

The reason that we do not apply this pro life rationale to the autonomous automobile is that we assume that the consideration is beyond its capability.  However, most of what one expects of an autonomous car today was beyond its capability a decade ago.  For the moment most may not be able to consider all the factors we might like.  For example, they may not recognize age and gender, much less consider them.  Ten years from now, they certainly will.

In this context it is useful to consider how such systems make a decision.  They identify a number of scenarios, depending upon the time available, assign an outcome and a confidence level to each, and choose statistically.  The kind of ties implied by the strawman dilemma will be vanishingly rare, even more so as the computers become faster and the number of things they can consider increases.
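In rough outline, and purely as an illustration rather than a description of any vendor's planner, that selection logic might look like the following Python sketch; the maneuvers, outcome scores, and confidence numbers are all invented for the example.

    # Invented sketch of scenario scoring: each candidate maneuver gets
    # an estimated outcome (higher is better) and a confidence in that
    # estimate; the planner picks the best expected value.
    scenarios = [
        {"action": "brake hard",   "outcome": 0.70, "confidence": 0.95},
        {"action": "swerve left",  "outcome": 0.90, "confidence": 0.60},
        {"action": "swerve right", "outcome": 0.40, "confidence": 0.80},
    ]

    def expected_value(s):
        return s["outcome"] * s["confidence"]

    best = max(scenarios, key=expected_value)
    print(best["action"])  # "brake hard" wins with these invented numbers

With real hardware the list would hold dozens of scenarios, re-scored many times per second; exact ties, the premise of the strawman, would almost never occur.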

Compare the autonomous car to the human driver.  In the two tenths of a second that it takes a young adult to recognize and react, the autonomous car will evaluate dozens of possibilities with as many considerations.  Like the human driver, the autonomous car may confront instances when there are simply no good options, but the whole reason for using these cars is that they are less likely than the human driver to overlook the least damaging option.

Monday, October 31, 2016

Denial of Service Attacks Exploiting the Internet of Things

The recent denial of service attack against the Domain Name System provider Dyn, exploiting compromised devices on the Internet, has generated a number of proposed solutions to the infrastructure vulnerability represented by the so-called Internet of things.

One of those proposals involved vigilante hacking to remove vulnerable devices.  It is important to call this proposal what it is, because doing so puts it in the context of a historical and cultural argument that suggests it is probably a bad idea.  That said, let us consider a related alternative.

"Nice people do not attach weak systems to the public networks." While understandable, ignorance of the weakness is no excuse.  On the other hand, nice people do not interfere with the operation of another's property.  This is both an ethical and legal conflict.  

However, a system should be able to protect itself from any traffic that it can expect to see on any network to which it is attached.  For SOHO networks, where many of these "things"  can be expected to be, this is not a very high hurdle; for the public networks, even enterprise networks, this may be a very high hurdle indeed.  Part of the solution will be to specify the intended network environment of an appliance.  For example, an appliance might be labeled "intended for home use only; must not be connected to public or enterprise networks."  
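As a toy illustration of the SOHO case, and only as an assumption about one way a vendor might enforce such a label, a "home network only" appliance could simply refuse any peer that is not on a private network. A minimal Python sketch:

    # Toy sketch: a "home network only" appliance refusing any peer
    # whose address is not in the private (RFC 1918) ranges.
    import ipaddress
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 8080))  # illustrative port
    server.listen()

    while True:
        conn, (peer_ip, peer_port) = server.accept()
        if not ipaddress.ip_address(peer_ip).is_private:
            conn.close()  # not a home-network peer; drop it
            continue
        conn.sendall(b"hello from the appliance\n")
        conn.close()

This does not, by itself, survive a hostile public network, which is exactly the point of the label: the device need only be as strong as the environment for which it is specified.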

Then the community might well consider regulations that make it illegal to attach such devices to the public networks.  Sanctions might include fines or disconnection from the networks under a rule that says, "if it can be disabled, that is, if it is not secure, then it may be disabled."

Under such a rule, one might safely, ethically, and legally connect one's light bulbs to one's home network.  One might connect one's baby monitor to the home network.  One might even access that baby monitor from a mobile device using a virtual private network (VPN).  However, should a baby monitor be addressable and operable from the public networks, then it would be permissible to shut it down without notice, whether or not it is compromised and whether or not it is interfering with the public networks.

Note that such regulation is already within the newly expanded power of the FCC.  For the average user, it would barely impact their use.

Wednesday, June 29, 2016

The Role of Risk Assessment in Digital Security

The very idea of Risk Assessment has always been controversial.  I have been engaged in the controversy for fifty years. My ideas on the subject are well considered if otherwise no better than anyone else's.  I record them here.

I attribute the application of this idea to what was then called Computer Security to my mentors, later colleagues, Robert H. Courtney, Jr. and Robert V. Jacobson.  They did it in an attempt to rationalize decision making in the then nascent field, more specifically the allocation of scarce security resources.  They did it in response to their observation that many, not to say most, security decisions were being made based upon the intuition of the decision maker, and to their belief, a tenet of this blog, that security is a space in which intuition does not serve us well.  They wanted to bring a little reason to the process.

They could not possibly have known that in a mere fifty years the resources applied to this effort would grow to the tens to hundreds of billions of dollars, or that the safety and liberty of the individual, the health of public and private enterprise, the efficiency and resilience of our economy, and the security of nations would turn on how effectively and efficiently we used those resources.

So, at its core, risk assessment is a decision making tool.  It is a tool that we use to answer the question "where to spend the next dollar of our limited resources?"  Courtney's Second Law says one should "Never spend more mitigating a risk than tolerating it will cost you."  We make this decision, with or without tools.  We make it intuitively or we make it rationally, but we do make it.
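A worked example, with invented numbers, shows how the Second Law cashes out as simple arithmetic:

    # Courtney's Second Law as arithmetic; all numbers are invented.
    # Annualized cost of tolerating a risk: rate of occurrence x impact.
    annual_rate = 0.25      # assume one incident every four years
    impact = 200_000        # assumed cost per incident, in dollars

    annualized_loss = annual_rate * impact  # $50,000 a year to tolerate
    mitigation_cost = 80_000                # assumed annual cost of the control

    # Spending $80,000 a year to avoid an expected $50,000 a year of loss
    # violates the Second Law: tolerate, or find a cheaper control.
    assert mitigation_cost > annualized_loss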

At its most elaborate, risk assessment is a very expensive tool requiring significant knowledge, skill, ability, and experience to use, more than most of us enjoy.  It should be used only for expensive decisions, decisions that are expensive to reverse if we get them wrong.  At its simplest, it protects us from making decisions based solely upon the threat, attack, vulnerability, or consequence du jour.  It protects us from intuition, from fear.

All that said, few of us are confronting expensive or difficult decisions, decisions requiring sophisticated decision making tools, risk assessment or otherwise.  We have yet to implement all those measures that we know to be so effective and efficient as to require no further justification.  They are what Peter Tippett calls essential practices.  Anyone can do them with available resources; each is about 0.8 effective, but they work synergistically to achieve an arbitrary level of security.  They fall in that category that we call "no brainers."  All we need is the will.
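The synergy is just the arithmetic of independent failures: if each practice stops about eight attacks in ten, each additional independent practice divides the residual by five. A sketch, assuming independence:

    # If each essential practice is ~0.8 effective and the practices
    # fail independently, n of them leave a residual of 0.2 ** n.
    for n in range(1, 6):
        residual = 0.2 ** n
        print(n, f"combined effectiveness = {1 - residual:.5f}")
    # 1 -> 0.80000, 2 -> 0.96000, 3 -> 0.99200, 4 -> 0.99840, 5 -> 0.99968

Independence is of course an assumption; correlated failures buy less, which is one more reason to prefer diverse measures.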


Monday, April 25, 2016

Compromise of Credit Card Numbers

Recently FireEye published an intelligence report stating that a previously unknown cybercrime group has hacked into numerous organizations in the retail and hospitality sectors to steal an estimated 20 million payment cards, collectively worth an estimated $400 million on the "cybercrime" black market.

To a first approximation, all credit card numbers more than a few months old are public. The market price has dropped to pennies. We are all equally targets of opportunity. That any one of us has not been a victim of fraud is mere chance. The criminals have so many numbers that they simply cannot get to us all.

The brands are at fault for marketing a broken system, one that relies upon the secrecy of credit card numbers but which passes them around and stores them in the clear. Their business model is at risk. They have the technology, EMV, tokenization, and checkout proxies, but the first is too slow for many applications and they are not promoting the other two to merchants or consumers.
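Of those three, tokenization is the easiest to picture: circulate a surrogate value and confine the real number to a vault. The sketch below is schematic, an assumption for illustration, not any brand's implementation.

    # Schematic sketch of tokenization; not any brand's implementation.
    # The merchant stores and passes around only a surrogate token;
    # the real card number lives in a single hardened vault.
    import secrets

    _vault = {}  # token -> real card number; in practice a guarded service

    def tokenize(pan: str) -> str:
        token = secrets.token_urlsafe(16)
        _vault[token] = pan
        return token

    def detokenize(token: str) -> str:
        return _vault[token]  # callable only by the vault operator

    token = tokenize("4111111111111111")  # a well-known test number
    # The merchant's systems, logs, and backups hold only the token;
    # a breach of the merchant discloses nothing usable for fraud.
    assert detokenize(token) == "4111111111111111"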

Issuers take much of the fraud risk. They are attempting, with some short run success, to push this to the merchants.  However, with merchants and consumers, they share in the risk of our broken system.

As the referenced report suggests, brick and mortar merchants, particularly "big box" retailers and hospitality, are finding that both issuers and consumers are blaming them for the disclosure of the numbers. Issuers are charging back fraudulent transactions and suing merchants for the expense of issuing new cards after a breach. Merchants' systems are being penetrated and numbers exfiltrated wholesale. Point of sale devices are being compromised, or even replaced, to capture debit card numbers and PINs. These are used to produce counterfeit cards, some of which are used to purchase gift cards or get cash at ATMs. Merchant brands have been badly damaged by bad publicity surrounding breaches. While most of these merchants can resist compromise, there are more than enough to guarantee that some will fall. Merchants can reduce fraudulent transactions by preferring mobile and EMV cards and by checking cards, signatures, and IDs, but all but the first slow the transaction and inconvenience the customer.

Online merchants are the target of all kinds of "card not present" scams and take the full cost of the fraud. While it will not stop the fraud, the online merchants can both protect themselves and speed up the transaction by not accepting credit cards and using only proxies like PayPal, Visa Checkout, Apple Pay, and Amazon.

While, at least by default, consumers are protected from financial loss from credit card fraud, the system relies heavily upon them to be embarrassed by it.  At least one court has agreed to hear evidence as to whether or not consumers as a class are otherwise damaged when their card numbers are leaked to the black market.

All this is by way of saying that as long as anyone accepts credit card numbers in the clear, we will be vulnerable to their fraudulent use. There are now alternatives, and we need to promote them, not simply tolerate them. Think numberless, cardless, and contactless.

Monday, February 29, 2016

Encryption and National Security versus Liberty

In the 1990s, in what might be called the first battle of the Crypto War, the government classified encryption as a munition and restricted its export.  While opposing export in general, the government was licensing the export of implementations that were restricted to a forty-bit key.  Of course, the 56-bit key was then the norm and, at the time, expensive for the NSA to crack.

IBM had just purchased Lotus Notes and wanted to export it.  In order to get a license, it negotiated an agreement under which it would encrypt 16 bits of the 56-bit message key under a public key provided by the government and attach the result to the message or object.  This meant that while the work factor for anyone else would be 56 bits, for the government it would be only 40 bits.

Viewed today, 40-bit encryption is trivial; twenty years ago it was strong enough that, while the government could read any message that it wanted to, it could not read every message that it wanted to.  Said another way, it would be able to do intelligence, or even investigation, but it still would not be able to engage in mass surveillance.
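The gap between the two work factors is a factor of 2^16, or 65,536. A back-of-the-envelope calculation, assuming a purely illustrative trial rate, makes the point:

    # Exhaustive search of 40-bit vs 56-bit keys, assuming an
    # illustrative rate of one billion key trials per second.
    trials_per_second = 1e9

    for bits in (40, 56):
        seconds = 2 ** bits / trials_per_second
        print(bits, f"{seconds / 86400:,.2f} days")
    # 40 -> about 0.01 days (some 18 minutes); 56 -> about 834 days

At the assumed rate, a 40-bit search is a matter of minutes per message, while a 56-bit search is a matter of years; that asymmetry was the whole design.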

Moreover, we believed that the NSA only collected traffic that crossed our borders, that it could not be used against citizens.  We believed that the government could keep its private key secure.  Of course, after "warrant-less surveillance," the routine breaches of government computers, including those of the NSA, and the exponential growth of computing power over a generation, this all seems very naive.

However, I like to think that it illustrates that it is possible to craft solutions that grant authorized access to the government, with a work factor measured in weeks to months per message, file, device, or key, while presenting all others with a cost of attack measured in decades or even centuries.

It also illustrates the fundamental limitations of any such scheme, as well as those induced by the application and the implementation, limitations that would have to be compensated for.  No such scheme will be foolproof, nor need it be.  Like our other institutions and tools, it need only work well enough for each intended application and environment.