Wednesday, January 13, 2021

What I tell my family about protecting their identity.

Recently a family member asked me how to respond to a solicitation for "identity protection."  The ad appealed to fear, and the benefits it promised were ambiguous.


Every time we open an account or do business, we expose ourselves to fraud.  About three percent of us will be victims of transaction (e.g., payment card) fraud, but almost one percent of us will be victims of fraud severe enough to cause major financial loss or crippling damage to our reputations.  Therefore, I offer the following advice in order of importance.

  • Use strong (e.g., multi-factor) authentication wherever it is offered.  (Prefer Passkeys for a good balance of security and convenience; see the sketch at the end of this note.)
  • Avoid doing business with those who do not offer it.
  • Prefer purpose-built applications for financial activity.  Avoid the use of browsers.
  • Prefer mobile computers to personal computers for financial activity.
  • Review all account balances and activity on a timely basis.  (For large and active accounts, "review" means online review and "timely" may mean daily.)
  • Sign up for "paperless" options.  (For good security, paperless should be the default, but for reasons of "backwards compatibility" one must usually opt in.)
  • Allow notifications.  (Again, this should be the default.)*
  • Freeze your credit files at all three credit bureaus.  (Freezing and unfreezing is now easy and free, but all three bureaus will take every opportunity to try to sell you "identity protection" for a relatively high annual fee.  All three have suffered major compromises of personal data and are not reliable.)
  • Use complimentary credit monitoring from AAA, American Express, or, where offered, your bank or credit union.
  • Most card issuers now permit you to "lock" your cards, using a mobile app.  Balance this with the convenience of using the card but be sure to lock the card if it is misplaced, lost, or stolen.  
  • When buying online, prefer to pay with such checkout proxies as PayPal, Apple Pay, or Click to Pay.  Avoid entering debit or credit card numbers directly; when you must use a card, prefer credit to debit.
  • When paying at the point of sale, prefer "contactless."  This resists leakage of the Primary Account Number from the magnetic stripe.  Most banks now offer such cards, and both Apple Pay and Google Pay support contactless payment.
  • Do not use the option permitting the merchant to retain debit or credit card information.  Check out as a guest; avoid signing up for accounts.
  • When using debit or credit cards for the convenience of frequent purchases from a merchant (e.g., Amazon), consider the use of a one-time or single-merchant card number from Privacy.com.
  • Consider insurance against financial loss and/or expenses related to identity theft.  Such insurance is not a substitute for any of the measures above; it may be redundant of protections that you already enjoy (from homeowners insurance or from fiduciaries, e.g., https://www.fidelity.com/security/customer-protection-guarantee); it may be expensive; and it is best purchased from insurance sources (e.g., as an optional endorsement to one's homeowners insurance).  See also https://tinyurl.com/FTCreportidenttiyfraud

* While I have been writing this I have received notices of three legitimate transactions.  This assures me that I will get timely notification of fraudulent ones.  
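For the technically curious, here is a minimal sketch of what Passkey enrollment (the first item on the list above) looks like from the browser's side, using the standard WebAuthn API (navigator.credentials.create).  The relying-party name, user handle, and client-side challenge below are illustrative assumptions of mine; in a real deployment the server generates the challenge and verifies the returned credential.

    // Minimal sketch of passkey (WebAuthn) registration in a browser.
    // The relying party, user details, and client-side challenge are
    // illustrative assumptions; a real server must supply the challenge
    // and verify the credential that comes back.
    async function registerPasskey(): Promise<Credential | null> {
      // In practice the challenge comes from the server, never the client.
      const challenge = crypto.getRandomValues(new Uint8Array(32));

      const credential = await navigator.credentials.create({
        publicKey: {
          challenge,
          rp: { id: "example.com", name: "Example Bank" },      // assumed relying party
          user: {
            id: new TextEncoder().encode("user-1234"),          // assumed user handle
            name: "jane@example.com",
            displayName: "Jane Doe",
          },
          pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // -7 = ES256
          authenticatorSelection: {
            residentKey: "required",      // a discoverable credential, i.e., a passkey
            userVerification: "required", // biometric or PIN supplies the second factor
          },
        },
      });

      // Send the credential to the server for verification and storage.
      return credential;
    }

The point of the "residentKey" and "userVerification" options is that the resulting credential is both phishing-resistant (it is bound to the relying party's domain) and multi-factor (possession of the device plus the biometric or PIN).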

Tuesday, January 5, 2021

SolarWinds

By now most should realize that SolarWinds is a compromise on an almost unimaginable scale. It is a crisis.  While there are "indicators of compromise" (IoCs), there are no indicators of all compromises.  While the attackers have concentrated on gathering intelligence on only a small number of target sites, all SolarWinds customers must assume that they are compromised and that there may be multiple backdoors into their systems for which there are no IoCs.  Only a small number of enterprises, perhaps none, have sufficient control over the content of their systems to be sure that they are resistant to such backdoors.

In https://us-cert.cisa.gov/ncas/alerts/aa20-352a DHS/CISA has suggested that some enterprises under some circumstances will have to "rebuild (from scratch) hosts monitored by the SolarWinds Orion monitoring software using trusted sources."  In fact, we may have to rebuild all enterprise systems.  

President Obama's chief of staff, Rahm Emanuel, famously said in 2008, “You never want a serious crisis to go to waste. I mean, it's an opportunity to do things that you think you could not do before.”  It would be tragic, if after rebuilding our systems, we should come away as vulnerable as when we started.  

We should take Rahm's "opportunity" to introduce "zero trust," indeed zero trust on steroids.  One might well start with a Software Defined Network.  One should include mutually suspicious processes, strong authentication at all levels, and "least privilege" access control.  

Rebuilding in months systems that took decades to evolve is a daunting task.  I am reminded of what my father taught me when I was just starting out in IT almost sixty years ago.  "Son," he said, "all hard problems in information technology have one and the same answer: one application at a time."  We can do this.  We should use the crisis to overcome the inertia that has kept us from doing what we all know we should have done a while ago.  We know what to do; all we need is the leadership to do it.

Do not worry about the cost.  Much of what we need to do, we can do with available resources.  For example, we can implement "least privilege" with available tools.  It only requires a change in intent.  In any case, there is always enough money to do that which must be done.  
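As a small, concrete example of that change in intent: a network service need not keep the privileges it starts with.  Here is a minimal sketch in TypeScript for Node.js, assuming a Unix-like host and an unprivileged account named "service" (the account name is my assumption); the service binds its privileged port and then irrevocably drops root.

    // Minimal sketch of "least privilege" with tools already at hand:
    // bind the privileged port first, then irrevocably drop root.
    // The account name "service" is an assumption; substitute whatever
    // unprivileged account your host provides.
    import * as http from "node:http";
    import * as process from "node:process";

    const server = http.createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("hello\n");
    });

    server.listen(80, () => {
      // Drop the group first, then the user; once dropped,
      // root privileges cannot be regained.
      if (process.getuid && process.getuid() === 0) {
        process.setgid?.("service");
        process.setuid?.("service");
      }
      console.log(`serving as uid ${process.getuid?.()}`);
    });

Nothing here requires new tooling; the same idiom is available in every mainstream runtime and operating system, which is the point: least privilege is a decision, not a purchase.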


Thursday, October 22, 2020

Slides from my NYMJCSC Keynote.

 https://tinyurl.com/MurrayNYMJCSC

This TinyURL will be expanded for you, so that you can see the underlying link before you open it.

Thursday, October 15, 2020

Prefer Simplicity

"In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away..."

        -- Antoine de Saint-Exupéry (Aviator, Mechanic, Poet, Exemplar), "Wind, Sand and Stars"

“Simplicate and Add Lightness”

        -- Design philosophy of Ed Heinemann, Douglas Aircraft (also attributed to Igor Sikorsky)

“Make everything as simple as possible, but not simpler.”

        -- Albert Einstein

“One should not increase, beyond what is necessary, the number of entities required to explain anything.”

        -- William of Occam

Engineers, indeed even philosophers and poets, have a preference for simple designs.  They eschew gratuitous or unnecessary complexity.  One need not point out that IT designers, and those who secure their systems, have no such preference.  Engineers try to do nothing in preference to the wrong thing.  They recognize that the greater the complexity of a mechanism, the greater the potential for failure.  They recognize that the more complex a mechanism, the more difficult it is to satisfy oneself that it is performing correctly, or to demonstrate to others that it is doing so.

Simple designs are easier to implement, demonstrate, and maintain. Engineers promote the KISS principle. They call simple designs “elegant.” They recognize that complexity causes errors and masks malice. Engineers manage “parts count” because they know that quality goes up as parts count goes down.

Friday, September 18, 2020

Awareness Training, the Message

It was a long time ago.  I was doing data security market support for IBM.  I thought of my job as helping IBM customers keep their computers safe, use them safely, and use them to protect their contents (from accidental or intentional modification, destruction, or disclosure).  There were a manager and three professionals doing this work.

There was another security group in IBM, much larger, responsible for how IBM's intellectual and other property was protected.  They were piloting their first "security awareness" program.  As a courtesy, they invited me to sit in on one of the pilot sessions.  Their intent was that, if the pilot program was successful, they would make it mandatory for all employees.

The program was motivated in part by a major breach of intellectual property.  A competitor had called to tell us that someone had just offered them the design of what was to become one of IBM's most profitable products, a new disk drive, code named WINCHESTER, which was to launch an entire industry.  The design was offered to the competitor for $50,000, and that price included monthly updates for a year.

Not only was IBM's general management upset; the competitor was equally so.  His position was that he could absorb the full development cost and still compete.  What he could not do was compete with a mutual competitor who had gotten the design for a pittance.  If IBM could not keep its own secrets, he said, the next time he was offered such a deal, he would take it.

While the perpetrators were caught and the design documents were recovered, the lesson was not lost on IBM management.  They were going to ensure that the problem was fixed.  The pilot program was to be part of the fix.  

I sat through the program not once, but twice.  When I was asked if I had any reaction, I said that the course was a great lesson in how to commit fraud and intellectual property theft, that it was a good description of the problem but was likely to make it worse rather than better.  

When asked what I would recommend, I suggested that the program be shortened perhaps to as little as fifteen minutes and that the message be shortened to what we expected the managers, professionals, and other employees to do.  

IBM actually had a robust system for classifying and handling data.  It defined how data was to be classified, labeled, and handled.  Most of the documents created were public.  Indeed, at the time IBM was one of the largest publishers in the world, second perhaps only to the US Government Printing Office.  Other documents were classified and labeled as "For Employee Use Only," along with two levels of "confidential."  Confidential meant that use of a document was to be on a "need to know only" basis.  Each level was defined by the controls that were intended for that level.  The highest level, "Registered Confidential," was intended for only a very small number of documents, those whose disclosure might affect profitability, and where the controls included limiting and numbering every copy, keeping them under lock and key, and logging every use.

What IBM really needed was for every employee to classify and label documents properly, know the procedures for every class that they were likely to handle, and follow the procedures.  That meant that most employees needed to be trained only on public and employee use only, a smaller number on confidential and "need to know," and only a tiny number on how to recognize, classify, and handle the "crown jewels."  

It was much easier to teach this.  Not only did this focus increase the effectiveness of the program but it greatly reduced its cost.  Keep in mind that this was in an era when most information was still stored on cheap paper rather than on expensive computer storage media.  The more sensitive the data, the less likely it was to be in a computer.  It was very different from today where we use little expensive paper, and store the most sensitive data in cheap computer storage.

The lessons for today are very different but the emphasis of awareness training should be the same.  Focus on behavior, what we need for people to do.  Focus on roles and applications.  Leave descriptions of the environment, the threat sources and rates, the attacks, the vulnerabilities, the problem, to the specialists.  Awareness training should be about the solution, not the problem.  



Thursday, July 9, 2020

Real Programmers Cain't do it

I would like to share a story.  It is a story about how to write security programs securely, something that the story suggests "Real Programmers" cannot do.  It is a story that illuminates many of the problems that underlie the quality of modern software.

It is a long story, longer than one can tell in a single blog entry.  One has a choice of breaking it up into multiple entries or simply providing a link from this blog to the story, the choice I have taken. https://tinyurl.com/Real-Programmers

Patching IV

On last week's "Patch Tuesday," for the second month in a row, Microsoft published fixes for more than a hundred vulnerabilities.  If anything, the number of fixes per month is increasing rather than decreasing.  As I have suggested before, the implication is that there remains a huge reservoir of known and unknown vulnerabilities.

In response to this, I wrote on NewsBites:

"For the moment and for most enterprises "patching" remains mandatory; failing to do so not only puts one at risk but puts one's neighbors at risk. At what point do we decide that the cost of patching is too high? When do we realize that the attack surface of these widely used products is so big, so homogenous, and so porous, that collectively they weaken the entire infrastructure? When do we realize that the architectures (e.g., von Neumann), languages, and development processes that we are using are fundamentally flawed? That hiding these products behind local firewalls and end-to-end application layer encryption is a more efficient strategy? When do we acknowledge that we must fundamentally reform how we build, buy, pay for, and use both hardware and software? At what point do we admit that we cannot patch our way to security?"

A reader responded in part:

I agree with you that the cost of patching does remain high. I agree with you that our languages and development (and testing) processes are flawed. Those complaints are not new and not interesting.

But our architectures, especially von Neumann? Would you lump IPv6 into that category as well? I'm curious why a man of your obvious accomplishments would think of that. Even more interesting would be if you had a better idea. The paradigm of the stored-program computer with instructions and data in the same memory seems unshakable at this point. Everybody has thought of separating memory into instruction space and data space, but that's just another way of getting more parallelism, to make things faster. It doesn't really change how computers work or how we think about them.


So.... I'm curious: what do you have in mind?

I answered:

Thanks for your great question.  It is good to know that someone is reading what I write, much less being provoked by it.

von Neumann was a genius, but he solved a problem that no longer exists, i.e., dear storage.  In his day, storage was so dear that one did not want to preallocate it to data or procedure, as, incidentally, every computing machine prior to von Neumann's proposal had done.  In fact, the problem went away almost as soon as he solved it.  (By treating a program as data, von Neumann also gave us "address modification.")  As early as the introduction of the IBM 360, index registers made it unnecessary, and bad practice, for a program to modify itself.  Yet today programs are routinely corrupted by their data.

It is ironic that one can get a degree in Computer Science without ever hearing about, much less studying, an alternative to the von Neumann Architecture.  Consider the IBM System/38 and its successors, the AS/400 and the iSeries.  This architecture is so different that, at least for me, learning about it was literally mind-bending.  This architecture is probably older than most readers and has been in constant use, but even many of its users do not really appreciate how interesting it is.

These systems employ a single-level store, symbolic-only addressing, and strongly-typed objects.  (The number of types is still in the low tens.)  The operations that one can perform on an object are specified by its type.  For example, in these systems "data" objects cannot be executed and "program" objects cannot be modified.  Thus it is impossible for a program to be contaminated by its data.  Programs can be replaced, but not modified; every version will have a different fully-qualified name.
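To make the idea concrete, here is a toy model of such typed objects in TypeScript.  It is my own illustrative sketch of the principle, not the System/38's actual mechanism: the type of an object fixes the operations that may be performed on it.

    // Toy model of a typed-object store: what you can do with an
    // object is fixed by its type. An illustrative sketch of the
    // principle only, not the System/38's actual mechanism.
    type ProgramObject = {
      readonly kind: "program";
      readonly name: string;                 // fully qualified, includes the version
      readonly run: (input: string) => string;
    };

    type DataObject = {
      kind: "data";
      name: string;
      contents: string;                      // mutable, but never executable
    };

    type SystemObject = ProgramObject | DataObject;

    // Only program objects can be executed...
    function execute(obj: SystemObject, input: string): string {
      if (obj.kind !== "program") throw new Error(`${obj.name} is not executable`);
      return obj.run(input);
    }

    // ...and only data objects can be written. A program is replaced,
    // under a new fully qualified name, never patched in place.
    function write(obj: SystemObject, contents: string): void {
      if (obj.kind !== "data") throw new Error(`${obj.name} is not writable`);
      obj.contents = contents;
    }

In such a model a flaw may corrupt a data object, but it cannot turn that data into a program; the contamination of procedure by data that the von Neumann architecture permits is excluded by construction.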

The von Neumann Architecture persists because the market has a preference for convenience, generality, flexibility, and programmability.  At some level we do know better but we still continue to tolerate buffer and stack overflows. 

Consider Apple's iOS.  There is nothing that one can do from the user interface that will alter the programming of even the application, much less the system code.  Each app turns the device into a single-application device.  It is not a "programmable" device; the creation of apps for iOS is done in a different environment.  In the early days of iOS there was no app-to-app communication; it was introduced late and only by means of tightly controlled and controlling APIs.  Even device-to-device communication is limited and tightly controlled.  For example, I cannot share an app from my device to another.  On the other hand, there has never been a successful virus in this huge population of computing and communicating devices.  Yes, I can click on a bait message in my browser or e-mail and I may be misled.  However, I do not fear that I will alter the browser, the mail app, or the device code.  I need not fear that I will unknowingly install malicious code on my device.  One can argue that Android was a reaction to those limitations, not to say features, of iOS.

I expect, almost hope, that my response to the reader raises more questions than it answers.  

Perhaps we can continue the dialogue here.