Wednesday, September 25, 2013

Strength of Materials

At the 2012 Colloquium on Information System Security Education in Orlando, I was repeatedly reminded how much computer security education owes to, and has yet to learn from, engineering education.

For example, every engineering student takes a course called strength of materials.  In this course, he learns not only the strength of the materials that he is most likely to use but also how to measure the strength of novel materials.  The student studies how, and in how many different ways, his materials are likely to fail.  He learns how to design in such a way as to compensate for the limitations of his materials.

A computer science or computer security student can get an advanced degree without ever studying the strength of the components that he uses to build his systems.  It may be obvious that not all encryption algorithms are of the same strength, but what about authentication mechanisms, operating systems, database managers, routers, firewalls, and communication protocols?  Is it enough for us simply to know that some are preferred for certain applications?

Recall Courtney's First Law, the one that says that nothing useful can be said about the security of a mechanism except in the context of a specific environment and application.  In this construction, security is analogous to strength, environment to load or stress, and application to the consequence of failure.  Said another way, environment equates to threat or load and application to requirements.

Computer science students are taught that their systems are deterministic, that integrity is binary, that security is either one or zero.  On the other hand, William Thomson, Lord Kelvin, cautioned that unless one can measure something, one cannot recognize its presence or its absence.  W. Edwards Deming taught us that if we cannot measure it, we cannot improve it.

One way to measure the strength of a material is by destructive testing.  The engineer applies work or stress to the material until it breaks and measures the work required to break the material.  Note that different properties of a material may be measured.  The engineer may measure yield, compressive, impact, tensile, fatigue, strain, and deformation strength.  

The strength of a security mechanism can be expressed in terms of the amount of work required to overcome it.  We routinely express the strength of encryption algorithms this way, i.e., as the cost of a brute-force or exhaustive attack, but we fail to do it for authentication mechanisms, where it is equally applicable.  As with engineering materials, security components may be measured for their ability to resist different kinds of attack, for example exhaustive or brute-force, denial-of-service, dictionary, browsing, eavesdropping, spoofing, counterfeiting, and asynchronous attacks.  While some of these attacks should be measured in "cover time," the minimum time to complete an attack, most should be measured in cost to the attacker.
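By way of illustration only, here is a back-of-the-envelope sketch in Python of that kind of work-factor arithmetic, applied both to a key and to two authentication mechanisms.  Every guess rate and compute price in it is an assumption I have invented for the example, not a measurement.

```python
def expected_guesses(space_size: float) -> float:
    """Expected number of trials for an exhaustive attack: half the space."""
    return space_size / 2

def attack_cost_usd(space_size: float, guesses_per_sec: float, usd_per_hour: float) -> float:
    """Rough dollar cost of an exhaustive attack at an assumed guess rate and compute price."""
    seconds = expected_guesses(space_size) / guesses_per_sec
    return (seconds / 3600) * usd_per_hour

# All rates and prices below are invented, for illustration only.
print(f"80-bit key, offline search:   ${attack_cost_usd(2**80, 1e12, 1.00):,.0f}")
print(f"4-digit PIN, online guessing: ${attack_cost_usd(10**4, 10, 1.00):,.2f}")
print(f"8-char lowercase, offline:    ${attack_cost_usd(26**8, 1e9, 1.00):,.2f}")
```

The particular numbers do not matter; what matters is that the same arithmetic we accept for keys can be applied to PINs, passwords, and other authentication mechanisms.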

There are now a number of ways in the literature to measure the cost of attack.  The cost used should consider the value or cost to the attacker of such things as work, access, risk of punishment, special knowledge, and time to success.  Since these are fungible, it helps to express them all in dollars.  Of course, we will never know these with the precision with which we know how much work it takes to fracture steel, but we can measure them well enough to improve our designs.
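A minimal sketch of such a cost-of-attack model follows.  Every input is an assumption chosen for illustration; the point is the fungibility of the components, not the particular numbers.

```python
def cost_of_attack_usd(hours_of_work, hourly_rate, access_cost,
                       special_knowledge_cost, prob_of_punishment,
                       cost_of_punishment, days_to_success, value_of_a_day):
    """Express the attacker's work, access, knowledge, risk, and time in one currency."""
    labor   = hours_of_work * hourly_rate
    risk    = prob_of_punishment * cost_of_punishment   # expected cost of getting caught
    waiting = days_to_success * value_of_a_day           # value of the attacker's time
    return labor + access_cost + special_knowledge_cost + risk + waiting

# Invented inputs, for illustration only.
print(cost_of_attack_usd(hours_of_work=40, hourly_rate=100,
                         access_cost=500, special_knowledge_cost=2000,
                         prob_of_punishment=0.01, cost_of_punishment=250000,
                         days_to_success=30, value_of_a_day=200))   # -> 15000.0
```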

The Trusted Computer System Evaluation Criteria, the TCSEC, can be viewed as an attempt at expressing the strength of a component composed of hardware and software.  While an evaluation speaks to suitability for a threat environment, with a few exceptions it does not speak to the work required to overcome resistance.  One exception is covert channel analysis, where the evaluation is expected to speak to the rate at which data might flow via such a channel.
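That rate can at least be bounded with very simple arithmetic: the signaling rate times the bits carried per signal.  The sketch below uses invented numbers purely to show the shape of the calculation; it ignores noise and error correction.

```python
import math

def covert_channel_bandwidth(symbols_per_second: float, distinguishable_states: int) -> float:
    """Upper bound on leakage rate in bits per second (noise-free assumption)."""
    return symbols_per_second * math.log2(distinguishable_states)

def days_to_exfiltrate(payload_bytes: float, bits_per_second: float) -> float:
    return (payload_bytes * 8) / bits_per_second / 86400

bps = covert_channel_bandwidth(symbols_per_second=10, distinguishable_states=4)  # 20 bits/s
print(days_to_exfiltrate(payload_bytes=1e6, bits_per_second=bps))  # ~4.6 days for a megabyte
```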

Because it is often misused, a caution about the TCSEC is necessary.  The TCSEC uses "divisions."  The division in which a component is evaluated is not a measure of its strength.  Many fragile components are evaluated in Division A, while some of our strongest are in D.  In order to understand the strength of a component, to understand how to use it, one must read the evaluation.

We have two kinds of vulnerabilities in our components, fundamental limitations and implementation-induced flaws.  The former are more easily measured than the latter.  On the other hand, it is the implementation-induced flaws on which we are spending our resources.  We are not developing software as well as we do hardware, not even as well as we know how.

Engineers use their knowledge of the strength and limitations of their materials to make design choices.  They use safety-factor and margin-of-safety metrics to improve their designs.  More recently, engineers at MIT's Draper Laboratory have proposed that "complex systems inhabit a 'gray world' of partial failure."  Olivier de Weck, associate professor of aeronautics and astronautics and engineering systems, explains:

“If you admit ahead of time that the system will spend most of its life in a degraded state, you make different design decisions,” de Weck says. “You can end up with airplanes that look quite different, because you’re really emphasizing robustness over optimality.”

Said another way, systems may be optimized for operation over time rather than at a point in time.  The more difficult it is to determine the state of a system at a point in time, the more applicable this design philosophy.  Thus, we see organizations like NSA designing and operating their systems under the assumption that there are hostile components in them.  

While most of our components are deterministic, none of our systems or applications are; they have multiple people in them, and interact in diverse, complex, and unpredictable ways.  Therefore, designing for degraded state may be more efficient over the life of a system than designing for optimum operation at a point in time.  We should be designing for fault tolerance and resilience.  We should be designing to compensate for the limitations of our materials.  
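The same point can be made with the simplest of reliability arithmetic.  The sketch below, with per-component availabilities that I have assumed for illustration, shows why a design that tolerates degraded components can outperform one that is optimal only when everything works.

```python
from math import comb

def p_system_up(n: int, k: int, p_component_up: float) -> float:
    """Probability that at least k of n identical, independent components are up."""
    return sum(comb(n, i) * p_component_up**i * (1 - p_component_up)**(n - i)
               for i in range(k, n + 1))

# Assumed availabilities, for illustration only.
print(p_system_up(n=1, k=1, p_component_up=0.95))   # one "optimal" component: 0.95
print(p_system_up(n=4, k=2, p_component_up=0.90))   # four weaker components, any two suffice: ~0.996
```

The redundant design is built from weaker parts, yet over its life it spends far less time down; that is the trade de Weck describes between robustness and optimality.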

Of course, I am aware that my audience of information assurance and law enforcement professionals cannot reform computer security education or practice.  I will continue to advance that agenda in other forums.  What I hope for is that you will spend some of your professional development hours, effort, and study on the idea of strength and that it will inform and improve your practice.  It is in part because our education is a work in progress that we are called professionals and are paid the big bucks.



Thursday, September 19, 2013

Simulated Attacks Against RFID Credit Cards


Recently a colleague sent me this scary video illustrating an attack against contactless (RFID) credit cards.

Sigh.  It is bad.  It is not quite as bad as it sounds and only a little bit worse than it looks.

Watch the film again.  Focus on how close the attacker gets to the target.  Here is why.

The problem is not so much how the information is transferred as that it is transferred in the clear, not so much that the credit card number leaks as that credit card numbers are so easy to exploit. 

Said another way, all uses of credit card numbers in the clear leak;  this includes imprinters, compromised point-of-sale devices, gas pumps, and ATMs.   That would not be a problem if no one would accept a credit card number in the clear from an untrusted source.  

A major problem with the video is that it fails to distinguish these RFID cards, which rely on the short range of the signal for security, from EMV cards, which rely upon encryption, or even from chip cards that require contact.
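To make the distinction concrete, here is a toy sketch, emphatically not the EMV protocol itself, of the difference between a card that emits a static number in the clear and one that emits a per-transaction cryptogram (here a simple MAC) computed under a key that never leaves the chip.  The key, card number, and field layout are all invented.

```python
import hmac, hashlib

CARD_KEY = b"secret-key-that-never-leaves-the-chip"   # invented; real cards use per-card derived keys

def static_card_response() -> str:
    """What a cleartext RFID or mag-stripe read yields: replayable as-is, anywhere."""
    return "4111111111111111|12/25"

def dynamic_card_response(amount_cents: int, transaction_counter: int, terminal_nonce: bytes) -> str:
    """A per-transaction cryptogram: useless to an eavesdropper for any other transaction."""
    msg = f"{amount_cents}|{transaction_counter}|".encode() + terminal_nonce
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()[:16]

print(static_card_response())                       # the same every time: easy to exploit
print(dynamic_card_response(2500, 42, b"nonce-1"))  # changes with every transaction
```

The eavesdropping shown in the video only pays off because what leaks is the static, endlessly reusable form.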

While many US merchants are ready for EMV, the issuers have slipped their schedule to 3Q 2015.  My hope is that by that time PayPal, Google Wallet, Square Wallet, or other mobile token-passing systems (are you listening, Apple and Amazon?) will have made them obsolete.

For the moment, we can treat this as a vulnerability but not a problem;  there are easier ways to get credit card numbers.  The continued use of mag-stripe and PIN dwarfs all other problems in the retail payment system.

Bait E-mails

According to reliable intelligence sources (e.g., the Verizon Data Breach Investigations Report), a large percentage of successful attacks against targets both of choice and of opportunity begin with bait messages (so-called "phishing" attacks).

How to recognize a bait message:

It appeals to curiosity, fear, greed, lust, sloth, etc.

It appears to be personal but is addressed to a large number of people.

It has an ambiguous subject, contains a one-liner and a URL, and appears to come from someone you know at Gmail, Hotmail, Yahoo!, etc., but from whom you were not expecting to hear.

It appears to come from PayPal, American Express, Chase, Amazon, or others with whom you do business but does not contain your name or account number. 

It pretends to come from a "security" department asking you to react to activity on your account or profile. 

It asks you to click on a button or URL within the message itself. 

Remember that any of these things may be legitimate but they are all suspicious. 

Remember that bait messages may be very artfully crafted.  They may contain logos, headings, footings, and other copy intended to make them look authentic. 
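For the professionals in the audience, the same indicators can be, and often are, reduced to code.  The toy sketch below scores a message against the list above; the message shape, thresholds, and weights are all invented, and no such simple filter should be trusted on its own.

```python
def suspicion_score(msg: dict) -> int:
    """Count how many of the bait-message indicators above a message trips.

    msg is assumed to look like:
    {"from": "...", "to": ["..."], "subject": "...", "body": "...", "has_my_name": bool}
    """
    score = 0
    freemail = ("gmail.com", "hotmail.com", "yahoo.com")
    if len(msg.get("to", [])) > 20:                        # personal tone, mass addressing
        score += 1
    if msg.get("from", "").split("@")[-1] in freemail and len(msg.get("body", "")) < 200:
        score += 1                                         # one-liner from a freemail account
    if "http" in msg.get("body", "").lower():              # asks you to click a link in the body
        score += 1
    if not msg.get("has_my_name", False):                  # no name or account number
        score += 1
    if "security" in msg.get("subject", "").lower():       # "security department" pretext
        score += 1
    return score

example = {"from": "friend@gmail.com", "to": ["a@x.com"] * 50,
           "subject": "FYI", "body": "check this out http://example.com", "has_my_name": False}
print(suspicion_score(example))   # trips 4 of the 5 indicators
```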

What to do with a suspicious message:

As little as possible.  Mere receipt of a suspicious message is not likely to hurt you.  It is clicking on things in it that will compromise your system. 

While it is not necessary, you may wish to alert the purported sender.  If it appears to come from an individual, return it to them with a subject line that says "Did you send this?" and a body that says, "If not, your e-mail account may be compromised.  Change your password."

If it appears to come from an enterprise, you may wish to forward it as an attachment to them.  Here are some useful addresses for that purpose:

spoof@paypal.com
spoof@ebay.com
spoof@americanexpress.com
stop-spoofing@amazon.com
fraud@abuse.earthlink.net
accountatrisk@chase.com
reportphish@wellsfargo.com
fraud@ups.com
abuse@bankofamerica.com
abuse@capitalone.com
spoof@citicorp.com
spam@uce.gov
abuse@verizon.com
abuse@fedex.com
webmaster@aa.com
emailfraud@keybank.com

If the enterprise being spoofed is not in this list, Google its name with "fraud" and you will likely find the right address.
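Forwarding as an attachment, rather than inline, matters because it preserves the original headers that these fraud teams need.  For those who want to automate it, here is a minimal sketch using Python's standard email library; the file name, addresses, server, and credentials are placeholders to be replaced with your own.

```python
import smtplib
from email import message_from_bytes, policy
from email.message import EmailMessage

# Placeholder file: the suspicious message saved in raw .eml form.
with open("suspicious.eml", "rb") as f:
    suspect = message_from_bytes(f.read(), policy=policy.default)

report = EmailMessage()
report["From"] = "you@example.com"
report["To"] = "spoof@paypal.com"        # pick the address for the purported sender
report["Subject"] = "Suspected phishing message (forwarded as attachment)"
report.set_content("The attached message appears to spoof your brand.")
report.add_attachment(suspect)           # attached as message/rfc822, original headers intact

# Placeholder SMTP server and credentials.
with smtplib.SMTP("smtp.example.com", 587) as s:
    s.starttls()
    s.login("you@example.com", "app-password")
    s.send_message(report)
```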

Wednesday, September 18, 2013

Thoughts on NSA Betrayal

“I trust the minions of the NSA not to commit treason.  I do not trust them not to commit fraud.”  --Robert H. Courtney, Jr. circa 1975

Like Bruce Schneier, I am not surprised about the kind or nature of NSA activity.   However,  I do not see it so much as a betrayal of the Internet as of the public trust.

I am surprised by the scale.  I am disappointed by the scale of silent industry cooperation, whether coerced, immunized, or otherwise.

I recall that, when trying to get an export license for Lotus Notes Messaging, IBM negotiated a (reasonable?) compromise and then went before the RSA conference and disclosed and defended what they had done.  Said another way, if the activity is a legitimate response to a legitimate government need, then it can be done in a transparent and accountable manner.  (Admittedly not without cost.  The Lotus Notes compromise is widely criticized outside the US, styled as capitulation (a Yankee word for surrender), and IBM has lost hundreds of millions in international sales as a result.  The government has lost whatever advantage might have accrued to it from the use of a weakened Lotus Notes instead of stronger options.)

That brings us back to the arguments made against CLIPPER, i.e., back-doors inserted for the legitimate use of the government will inevitably be abused by  government and exploited by rogues.  (The thing that distinguishes Edward Snowden among NSA rogues is that he went public; god only knows what the others are doing.) Back-doors weaken the structure and the necessary trust in it. 

These back-doors have been put in by the same administration that has tried to create commercial advantage for US products by suggesting that the PLA has put back-doors in Chinese products.  Did they really believe that they could booby-trap US products and get away with it?   As with bragging about Stuxnet, as with unilateral recourse to armed force,  they have ceded the moral high ground. 

After 9/11 we loosed the intelligence community, always a dangerous thing to do.  It has been zealous in doing what we asked it to do.  It now has the bit in its teeth, and it is going to be difficult to rein it in.  I believe that Directors Clapper and Alexander are great Americans, motivated by patriotism; we should be grateful for their service.  However, they have been corrupted by the power and secrecy in which they have cloaked themselves.  They have systematically deceived the American people, lied to Congress, subverted the courts, and corrupted American industry.  Like Snowden and others, they appeal for justification to a higher value.  However, they seem to have confused "national security" with their oath to defend the Constitution.  They can now serve best by retiring and making way for reform.

The necessary reform, transparency, and accountability will require new leadership, new leaders who will put the Constitution ahead of "national security."

Thursday, September 12, 2013

Internet Betrayal


On 5 September Bruce Schneier wrote in the Guardian "The US government has betrayed the internet. We need to take it back." 

This article was based upon access to information made available to the Guardian by Edward Snowden about signals intelligence activities of the NSA.   The information suggested that the NSA has systematically compromised cryptographic methods, keys, products, vendors, and systems on which the integrity of the infrastructure and the liberty of our citizens rely.  I was glad to have a reading of these papers by a trusted and eminently qualified colleague. 

While the activities reported were those that I had always expected the Agency to engage in, I was surprised by the extent and scope.  I was not surprised by the secrecy so much as by the deceit.  I was not surprised at what the Agency was doing but I was outraged at the permanent damage to the infrastructure that they were prepared to inflict in pursuit of their goals.  Along with Bruce, I felt betrayed by my government.  I was so angry, I sent a link to Bruce's article to a list of my colleagues.  Since the conclusions seemed obvious, I did not comment.

When one of my colleagues asked me for my thoughts, I sent him my most negative ones.  However, I did include the caveat that I was still ruminating on it and that these comments were still preliminary. 

Now, it is great fun, indeed great sport, to affect righteous anger at the perfidy of our government.  For a day I nursed my anger; in fact, I delighted in it.  However, I woke up in the middle of the night to a realization that I would like to share with you.

While it may be true that, as Bruce has said, “the government has betrayed the Internet,” for every system that the government has compromised, there are a hundred compromised by rogue hackers, and a thousand compromised by their users.  While crypto is our strongest security mechanism, the only one we have that is stronger than we need for it to be, the best that it can do is to bring the security of the middle to the level of the end points.  Crypto will never be stronger than the systems that protect the keys.

The government has not “broken crypto.”  While it may have deceived us, broken faith, it has, in the words of Adi Shamir, only “bypassed” crypto.   While it may have corrupted industry, that corruption has relied upon the silent cooperation of industry.  We have known since the disclosure of the warrant-less surveillance program that government had compromised the major carriers.  The motives of industry seem to include patriotism, greed, apathy, and fear.  Whatever the motives, they are sufficient to the day.

Whatever one may think about the activities of the government(s), it is we, the users and the corporations that we own and run, that have betrayed the Internet.  We do “need to take it back.”

One likes to think that we can expect better behavior of our government than of our adversaries.  (The US Congress has warned us against doing business with Huawei because the Chinese PLA has subverted it.)  However, governments do what governments do; we cannot expect better of our government than of ourselves. 

We have compromised industry, government, and the Internet.  It is time to stop whining and “take back” all. 

This is all about transparency and accountability.  To the extent that NSA's activities are seen now as a "betrayal," it is because they have been cloaked in secrecy.  Secrecy is what government wants for itself; accountability is what government demands of citizens.  However, the inevitable result is a government of men.  A government of law can only exist in the light. 

We must demand increased transparency at all levels of the society, government,  infrastructure, and industry.  Where the use of important controls is obscured by complexity, we must compensate by instrumentation and independent verification.  We must express the requirements for transparency and accountability at least as well as we do those for confidentiality, integrity, and availability and design and operate to satisfy them.  Not easy, not cheap, only necessary, necessary to economic efficiency and freedom.  Stop whining and get on with it.