Thursday, October 22, 2020

Slides from my NYMJCSC Keynote.

 https://tinyurl.com/MurrayNYMJCSC

The tinyurl will be expanded for you, so that you can see the destination before you open the link.

Thursday, October 15, 2020

Prefer Simplicity

"In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away..."

        -- Antoine de Saint-Exupéry (Aviator, Mechanic, Poet, Exemplar), "Wind, Sand and Stars"

“Simplicate and Add Lightness”

        -- Design philosophy of Ed Heinemann, Douglas Aircraft

      (Also attributed to Igor Sikorsky)

“Make everything as simple as possible, but not simpler.”

        -- Albert Einstein

“One should not increase, beyond what is necessary, the number of entities required to explain anything.”

        -- William of Occam

Engineers, indeed even philosophers and poets, have a preference for simple designs.  They eschew gratuitous or unnecessary complexity.  One need hardly point out that IT designers, and those who secure their systems, show no such preference.  Engineers prefer to do nothing rather than do the wrong thing.  They recognize that the greater the complexity of a mechanism, the greater the potential for failure, and the more difficult it is to satisfy oneself that it is performing correctly, or to demonstrate to others that it is doing so.

Simple designs are easier to implement, demonstrate, and maintain. Engineers promote the KISS principle. They call simple designs “elegant.” They recognize that complexity causes errors and masks malice. Engineers manage “parts count” because they know that quality goes up as parts count goes down.

Friday, September 18, 2020

Awareness Training, the Message

It was a long time ago.  I was doing data security market support for IBM.  I thought of my job as helping IBM customers keep their computers safe, use them safely, and use them to protect their contents (from accidental or intentional modification, destruction, or disclosure).  There were a manager and three professionals doing this work.

There was another security group in IBM, much larger, responsible for how IBM's intellectual and other property was protected.  They were piloting their first "security awareness" program.  As a courtesy, they invited me to sit in on one of the pilot sessions.  Their intent was that, if the pilot program was successful, they would make it mandatory for all employees.  

The program was motivated in part by a major breach of intellectual property.  A competitor had called to tell us that someone had just offered them the design of what was to become one of IBM's most profitable products, a new disk drive, code-named WINCHESTER, which was to launch an entire industry.  The design was offered to the competitor for $50,000, a price that included monthly updates for a year.  

Not only was IBM's general management upset; the competitor was equally so.  His position was that he could absorb the full development cost and still compete.  What he could not do was compete with a mutual competitor who had gotten the design for a pittance.  If IBM could not keep its own secrets, he said, then the next time he was offered such a deal, he would take it.

While the perpetrators were caught and the design documents were recovered, the lesson was not lost on IBM management.  They were going to ensure that the problem was fixed.  The pilot program was to be part of the fix.  

I sat through the program not once, but twice.  When I was asked if I had any reaction, I said that the course was a great lesson in how to commit fraud and intellectual property theft, that it was a good description of the problem but was likely to make it worse rather than better.  

When asked what I would recommend, I suggested that the program be shortened perhaps to as little as fifteen minutes and that the message be shortened to what we expected the managers, professionals, and other employees to do.  

IBM actually had a robust system for classifying and handling data.  It defined how data was to be classified, labeled, and handled.  Most of the documents created were public.  Indeed, at the time IBM was one of the largest publishers in the world, second perhaps only to the US Government Printing Office.  Other documents were classified and labeled "For Employee Use Only," and there were two levels of "confidential."  Confidential meant that use of a document was to be on a "need to know" basis.  Each level was defined by the controls intended for that level.  The highest level, "Registered Confidential," was intended for only a very small number of documents, those whose disclosure might affect profitability; its controls included limiting and numbering every copy, keeping copies under lock and key, and logging every use.  
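
To make the structure concrete, here is a minimal sketch of such a scheme.  The level names follow the description above, but the code, fields, and control lists are purely illustrative:

```python
from enum import Enum

# Hypothetical model of a document classification scheme like the one
# described above; the control lists are illustrative only.
class Classification(Enum):
    PUBLIC = 1
    EMPLOYEE_USE_ONLY = 2
    CONFIDENTIAL = 3             # use on a "need to know" basis
    REGISTERED_CONFIDENTIAL = 4  # the "crown jewels"

# Each level is defined by the controls intended for that level.
CONTROLS = {
    Classification.PUBLIC: [],
    Classification.EMPLOYEE_USE_ONLY: [
        "label the document",
        "do not distribute outside the company",
    ],
    Classification.CONFIDENTIAL: [
        "label the document",
        "restrict use to need to know",
    ],
    Classification.REGISTERED_CONFIDENTIAL: [
        "label the document",
        "restrict use to need to know",
        "limit and number every copy",
        "keep copies under lock and key",
        "log every use",
    ],
}

def required_controls(level: Classification) -> list[str]:
    """Return the handling procedures an employee must follow."""
    return CONTROLS[level]

print(required_controls(Classification.CONFIDENTIAL))
```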

What IBM really needed was for every employee to classify and label documents properly, know the procedures for every class that they were likely to handle, and follow the procedures.  That meant that most employees needed to be trained only on public and employee use only, a smaller number on confidential and "need to know," and only a tiny number on how to recognize, classify, and handle the "crown jewels."  

It was much easier to teach this.  Not only did this focus increase the effectiveness of the program but it greatly reduced its cost.  Keep in mind that this was in an era when most information was still stored on cheap paper rather than on expensive computer storage media.  The more sensitive the data, the less likely it was to be in a computer.  It was very different from today where we use little expensive paper, and store the most sensitive data in cheap computer storage.

The lessons for today are very different but the emphasis of awareness training should be the same.  Focus on behavior, what we need for people to do.  Focus on roles and applications.  Leave descriptions of the environment, the threat sources and rates, the attacks, the vulnerabilities, the problem, to the specialists.  Awareness training should be about the solution, not the problem.  



Thursday, July 9, 2020

Real Programmers Cain't do it

I would like to share a story.  It is a story about how to write security programs securely, something that the story suggests "Real Programmers" cannot do.  It is a story that illuminates many of the problems behind the poor quality of modern software.  

It is a long story, longer than one can tell in a single blog entry.  One has a choice of breaking it up into multiple entries or simply providing a link from this blog to the story, the choice I have taken. https://tinyurl.com/Real-Programmers

Patching IV

On last week's "Patch Tuesday," for the second month in a row, Microsoft published fixes for more than a hundred vulnerabilities.  If anything, the number of fixes per month is increasing rather than decreasing.  As I have suggested before, the implication is that there remains a huge reservoir of known and unknown vulnerabilities.  

In response to this, I wrote on NewsBites:

"For the moment and for most enterprises "patching" remains mandatory; failing to do so not only puts one at risk but puts one's neighbors at risk. At what point do we decide that the cost of patching is too high? When do we realize that the attack surface of these widely used products is so big, so homogenous, and so porous, that collectively they weaken the entire infrastructure? When do we realize that the architectures (e.g., von Neumann), languages, and development processes that we are using are fundamentally flawed? That hiding these products behind local firewalls and end-to-end application layer encryption is a more efficient strategy? When do we acknowledge that we must fundamentally reform how we build, buy, pay for, and use both hardware and software? At what point do we admit that we cannot patch our way to security?"

A reader responded in part:

I agree with you that the cost of patching does remain high. I agree with you that our languages and development (and testing) processes are flawed. Those complaints are not new and not interesting.

But our architectures, especially von Neumann? Would you lump IPv6 into that category as well? I'm curious why a man of your obvious accomplishments would think of that. Even more interesting would be if you had a better idea. The paradigm of the stored-program computer with instructions and data in the same memory seems unshakable at this point. Everybody has thought of separating memory into instruction space and data space, but that's just another way of getting more parallelism, to make things faster. It doesn't really change how computers work or how we think about them.


So.... I'm curious: what do you have in mind?

I answered:

Thanks for your great question.  It is good to know that anyone is reading what I wrote, much less being provoked by it. 

von Neumann was a genius but he solved a problem that no longer exists, i.e., dear storage.  In his day storage was so dear that one did not want to preallocate it to data or procedure, as incidentally, every computing machine prior to von Neumann's proposal had done.  In fact, the problem went away almost as soon as he solved it.  (By treating a program as data, von Neumann also gave us "address modification.")   As early as the introduction of the IBM 360, index registers made it unnecessary and bad practice for a program to modify itself.  Yet today programs are routinely corrupted by their data.  
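
The hazard is easy to demonstrate.  Here is a toy sketch of a von Neumann-style store, a model of no real machine, in which instructions and data share one memory, so that a single stray data write rewrites the program itself:

```python
# Toy von Neumann-style machine: instructions and data share one memory.
# Entirely illustrative; it models no real instruction set.
memory = [
    ("LOAD", 4),   # 0: load memory[4] into the accumulator
    ("ADD", 5),    # 1: add memory[5] to the accumulator
    ("STORE", 3),  # 2: bug: stores the result at 3 (code!) instead of 6
    ("HALT", 0),   # 3: should end the run
    10, 32, 0,     # 4..6: the data
]

def run(memory):
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]     # instructions are fetched from the same
        pc += 1                  # store that data is written to
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc    # nothing prevents writing over code
        elif op == "HALT":
            return acc

try:
    run(memory)
except TypeError as error:
    # The stray store overwrote the HALT instruction with the number 42;
    # the fetch at address 3 then tried to execute data.
    print("program corrupted by its own data:", error)
```

A machine with separate instruction and data stores, or with typed, unmodifiable program objects, would refuse the store at address 3 rather than execute the corrupted program.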

It is ironic that one can get a degree in Computer Science without ever hearing about, much less studying, an alternative to the von Neumann architecture.  Consider the IBM System/38 and its successors, the AS/400 and the iSeries.  This architecture is so different that, at least for me, learning about it was literally mind-bending.  It is probably older than most readers and has been in constant use, yet even many of its users do not appreciate how interesting it really is. 

These systems employ a single-level store, symbolic-only addressing, and strongly-typed objects.  (The number of types still numbers in the low tens.)  The operations that one can perform on an object are specified by its type.  For example, in these systems "data" objects cannot be executed and "program" objects cannot be modified.  Thus it is impossible for a program to be contaminated by its data.  Programs can be replaced, but not modified; every version has a different fully-qualified name.  
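
A minimal sketch of the idea, with hypothetical names and in no way the actual System/38 mechanism: each object carries a type, the type determines the permitted operations, and replacement produces a new fully-qualified name rather than modifying the old object:

```python
# Illustrative sketch of a typed-object store in the spirit of the
# description above; not the actual System/38 or AS/400 implementation.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # fully-qualified name -> (type, payload)

    def create(self, name, obj_type, payload):
        if name in self._objects:
            # Objects can be replaced under a new name, never modified.
            raise PermissionError("objects cannot be modified in place")
        self._objects[name] = (obj_type, payload)

    def execute(self, name):
        obj_type, payload = self._objects[name]
        if obj_type != "PROGRAM":
            raise PermissionError("data objects cannot be executed")
        return payload()

store = ObjectStore()
store.create("PAYROLL.DATA", "DATA", [1200, 950])
store.create("PAYROLL.PGM.V1", "PROGRAM", lambda: "payroll run, version 1")
# Replacing the program creates a new fully-qualified name (a new version):
store.create("PAYROLL.PGM.V2", "PROGRAM", lambda: "payroll run, version 2")

print(store.execute("PAYROLL.PGM.V2"))   # allowed: it is a program object
try:
    store.execute("PAYROLL.DATA")        # refused: its type forbids execution
except PermissionError as error:
    print(error)
```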

The von Neumann Architecture persists because the market has a preference for convenience, generality, flexibility, and programmability.  At some level we do know better but we still continue to tolerate buffer and stack overflows. 

Consider Apple's iOS.  There is nothing that one can do from the user interface that will alter the programming of even an application, much less the system code.  Each app turns the device into a single-application device.  It is not a "programmable" device; the creation of apps for iOS is done in a different environment.  In the early days of iOS there was no app-to-app communication; it was introduced late and only by means of tightly controlled and controlling APIs.  Even device-to-device communication is limited and tightly controlled.  For example, I cannot share an app from my device to another.  On the other hand, there has never been a successful virus in this huge population of computing and communicating devices.  Yes, I can click on a bait message in my browser or e-mail and I may be misled.  However, I do not fear that I will alter the browser, the mail app, or the device code.  I need not fear that I will unknowingly install malicious code on my device.  One can argue that Android was a reaction to those limitations, not to say features, of iOS.  

I expect, almost hope, that my response to the reader raises more questions than it answers.  

Perhaps we can continue the dialogue here.


Monday, June 22, 2020

On "Ransomware"

Forty years ago, my friend and colleague, Donn Parker, suggested that "employees" would use cryptography to hide enterprise data from management.  "Employees" because, forty years ago, only one's employees could send a message to one's systems.  I laughed.  It was obvious to me that such an activity would require both "read" and "write" access to the data.  I was so young and naive that I could not foresee a world in which most of those connected to enterprise systems would be outsiders, including sophisticated criminal enterprises.  Most of all, I could not anticipate a world in which "read/write" would be the default access control rule, not only for data but also for programs.  

We are now three years on from the first "ransomware" attacks.  We are still paying.  Indeed, a popular strategy is to pay an insurance underwriter to share some of the risk.  This is a strategy that only the underwriters and the extortioners can like.  While it was an appropriate strategy early on, it is no substitute for resisting and mitigating the attacks as time permits.  Have three years not been enough to address these attacks?  One would be hard pressed to argue that they have not.  

The decision to pay is a business decision.  However, the decision to accept or assign the risk, rather than to resist or mitigate the attack, is a security decision.  It seems clear that our plans for resisting and mitigating are not adequate and that paying the extortion simply encourages more attacks.  

By now every enterprise should have a plan to resist and mitigate, on a timely basis, any such attack.  If an enterprise pays a ransom, then by definition its plan to resist and mitigate has failed.  As always, an efficient plan for resisting attacks will employ layered defense.  It will include strong authentication, "least privilege" access control, and a structured network or end-to-end application-layer encryption.  The measures for mitigating will include early detection, safe backup, and timely recovery of mission-critical applications.  "Safe backup" will include at least three copies of all critical data, two of which are hidden from all users and at least one of which is off-site.  "Timely recovery" will include the ability to restore, not simply a file or two, but all corrupted data and critical applications within hours to days.  (While some enterprises already meet the three-copy requirement, few can recover access to large quantities of data in hours to days, rather than days to weeks.)
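
As a sketch, the "safe backup" rule above can even be checked mechanically; the class and field names here are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical check of the "safe backup" rule described above:
# at least three copies of critical data, at least two hidden from
# all users, and at least one off-site.
@dataclass
class BackupCopy:
    location: str
    offsite: bool
    hidden_from_users: bool  # e.g., offline, air-gapped, or immutable

def safe_backup(copies: list[BackupCopy]) -> bool:
    return (
        len(copies) >= 3
        and sum(c.hidden_from_users for c in copies) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("primary datacenter", offsite=False, hidden_from_users=True),
    BackupCopy("off-site vault", offsite=True, hidden_from_users=True),
    BackupCopy("local snapshot", offsite=False, hidden_from_users=False),
]
print(safe_backup(copies))  # True: three copies, two hidden, one off-site
```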

One last observation.  If there is ransomware on your system, network, or enterprise, you have first been breached.  Hiding your data from you to extort money is only one of the bad things that can result from the breach.  If one is vulnerable to extortion attacks, one is also vulnerable to industrial espionage, sabotage, credential and identity theft, account takeover, and more.  The same measures that resist and mitigate ransomware resist and mitigate all of these other risks.

Ransomware attacks will persist as long as any significant number of victims choose to pay the ransom, as long as the value of a successful attack is greater than its cost.  The implication is that to resist attack one must increase its cost, not simply marginally but perhaps by as much as an order of magnitude.  Failure to do so is at least negligent, probably reckless.  Do, and protect, your job.  

Wednesday, June 10, 2020

On "Patching" III

One cannot patch one's way to a secure system.

The rate of published "fixes" suggests that there is a reservoir of known and unknown vulnerabilities in these popular products (e.g., operating systems, browsers, readers, content managers). No matter how religiously one patches, the products are never whole.

They present an attack surface much larger than the applications for which they are used and cannot be relied upon to resist those attacks.  However, in part because they are standard across enterprises and applications, they are a favored target.  

They should not be exposed to the public networks. Hiding them behind firewalls and end-to-end application layer encryption moves from "good" practice to "essential."

Patching may be mandatory but it is expensive, a cost of using the product.  

On the "Expectation of Privacy"


In assessing the "search and seizure" of personal data by law enforcement, modern courts have applied the test of "reasonable expectation of privacy."  This test implies that if the citizen has used his data in a way that exposes it to others, for example, in a business transaction, then law enforcement may use it against him without restriction.  

The Framers never conceived of this test and might well be surprised by it.  Rather, the tests that they wrote into the Bill of Rights were "reasonable" and "probable cause."  If a search or seizure is "unreasonable," then law enforcement must have a warrant from a court.  The test for the issuance of a warrant is probable cause to believe that a crime has been, not will be, committed.  

These are constitutional tests and they are independent of how the citizen uses his personal information or what his expectations are.  He should not need to do, think, or "expect" anything in order for them to apply.  The tests apply to the behavior of the state, not the expectation of the citizen.  They restrict what the state, the police, may do.  The Bill of Rights places the burden on the state to show that its behavior is lawful, not on the citizen to demonstrate a right or "expectation."  Note that while information obtained in violation of these tests may not be used to convict the citizen of a crime, it is routinely used to investigate, threaten, and coerce, the very things that the Framers feared from a powerful government.  

In some cases, the state, with the tacit consent of the courts, pretends to get a warrant for all searches.  It pretends that it can legitimately collect anything as long as it does not look at it.  However, the Bill of Rights does not limit its tests to searches; they apply also to "seizures."  The government operates a data center in Bluffdale, Utah.  In a world in which one can put a terabyte of data in one's pocket for $100, the government requires 26 acres of floor space to accommodate the data that it collects world-wide on citizens' communications.  It claims that this seizure is not "unreasonable" and that it does not need a warrant unless it "searches," that is, looks at, the data.  By what reasoning can the arbitrary collection of so much data be called "reasonable"?

I am not hopeful that this view will be argued before the courts or that, even if argued, it will change much.  Nonetheless, I had to argue it.  




Friday, May 22, 2020

On "Patching" II

The tolerance of the IT community for poor software quality seems infinite.  The "quality" strategy of major software vendors is to push the cost of quality onto their customers.  The more customers they have, the greater the cost.  Instead of "doing it right the first time," the vendors push out late patches.  From the rate at which they push out patches, one may infer that there is a reservoir of vulnerabilities.  Their customers have had to allocate resources and organize them around "patching."  They are almost grateful for the fixes.  

The market, the collective of buyers, prefers systems that are open, general, flexible, and that have a deceptively low price.  The real cost includes the cost of perpetual patching, the unknown cost of accepting the unknown risk of all the vulnerabilities in the reservoir, along with the risk of an unnecessarily large and public attack surface.  

We do not even measure the cost of their poor quality.  

We should be confronting the vendors with this hidden cost.  We should be comparing them on it.