Monday, June 22, 2020

On "Ransomware"

Forty years ago, my friend and colleague, Donn Parker, suggested that "employees" would use cryptography to hide enterprise data from management.  Employees, because forty years ago only one's employees could send a message to one's systems.  I laughed.  It was obvious to me that such an activity would require both "read" and "write" access to the data.  I was so young and naive that I could not foresee a world in which most of those connected to enterprise systems would be outsiders, including sophisticated criminal enterprises.  Most of all, I could not anticipate a world in which "read/write" would be the default access control rule, not only for data but also for programs.  

We are now three years past the first wave of "ransomware" attacks.  We are still paying.  Indeed, a popular strategy is to pay an insurance underwriter to share some of the risk.  This is a strategy that only the underwriters and the extortioners can like.  While it was an appropriate strategy early on, it is no substitute for resisting and mitigating the attacks as time permits.  Has three years not been enough to address these attacks?  One would be hard-pressed to make that case.  

The decision to pay is a business decision.  However, the decision to accept or assign the risk, rather than to resist or mitigate the attack, is a security decision.  It seems clear that our plans for resisting and mitigating are not adequate and that paying the extortion simply encourages more attacks.  

By now every enterprise should have a plan to resist and mitigate, on a timely basis, any such attack.  If an enterprise pays a ransom, then, by definition, its plan to resist and mitigate has failed.  As always, an efficient plan for resisting attacks will employ layered defense.  It will include strong authentication, "least privilege" access control, and a structured network or end-to-end application layer encryption.  The measures for mitigating will include early detection, safe backup, and timely recovery of mission-critical applications.  "Safe backup" will include at least three copies of all critical data, two of which are hidden from all users and at least one of which is off-site.  "Timely recovery" will include the ability to restore, not simply a file or two, but all corrupted data and critical applications within hours to days.  (While some enterprises already meet the three-copy requirement, few have the capability to recover access to large quantities of data in hours to days, rather than days to weeks.)
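
The three-copy test lends itself to automation.  The following is a minimal sketch in Python; the replica paths are illustrative assumptions, and a real job would walk the entire inventory of critical data rather than a single file.

    import hashlib
    from pathlib import Path

    # Three replicas of critical data: one live, one hidden from
    # ordinary users, one off-site.  These paths are hypothetical.
    REPLICAS = [
        Path("/primary/data"),
        Path("/vault/backup"),
        Path("/mnt/offsite/backup"),
    ]

    def sha256(path: Path) -> str:
        """Digest a file in 1 MB chunks."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def verified(name: str) -> bool:
        """True only if all three replicas exist and their digests agree."""
        digests = set()
        for root in REPLICAS:
            copy = root / name
            if not copy.exists():
                return False
            digests.add(sha256(copy))
        return len(digests) == 1

    print(verified("ledger/2020-06.db"))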

One last observation.  If there is ransomware on your system, network, or enterprise, you have first been breached.  Hiding your data from you to extort money is only one of the bad things that can result from the breach.  If one is vulnerable to extortion attacks, one is also vulnerable to industrial espionage, sabotage, credential and identity theft, account takeover, and more.  The same measures that resist and mitigate ransomware resist and mitigate all of these other risks.

Ransomware attacks will persist as long as any significant number of victims choose to pay the ransom, that is, as long as the value of a successful attack is greater than its cost.  The implication is that to resist attack one must increase the attacker's cost, not simply marginally but perhaps by as much as an order of magnitude.  Failure to do so is at least negligent, probably reckless.  Do, and protect, your job.  

Wednesday, June 10, 2020

On "Patching" III

One cannot patch one's way to a secure system.

The rate of published "fixes" suggests that there is a reservoir of known and unknown vulnerabilities in these popular products (e.g., operating systems, browsers, readers, content managers). No matter how religiously one patches, the products are never whole.

They present an attack surface much larger than the applications for which they are used and cannot be relied upon to resist those attacks.  However, in part because they are standard across enterprises and applications, they are a favored target.  

They should not be exposed to the public networks. Hiding them behind firewalls and end-to-end application layer encryption moves from "good" practice to "essential."
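
For illustration, here is what end-to-end application layer encryption can look like from the application's side, sketched with Python's standard ssl module.  The host name and certificate authority file are assumptions; the point is that the client refuses any connection that is not both encrypted and verified.

    import socket
    import ssl

    # Trust only the enterprise's own certificate authority (illustrative
    # file name); the default context also requires a valid certificate
    # and a matching host name.
    context = ssl.create_default_context(cafile="internal-ca.pem")
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    host = "app.internal.example"   # hypothetical internal service
    with socket.create_connection((host, 8443)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"PING\n")  # application traffic, encrypted end to end
            print(tls.recv(1024))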

Patching may be mandatory, but it is expensive, a cost of using the product.  

On the "Expectation of Privacy"


In assessing the "search and seizure" of personal data by law enforcement, modern courts have applied the test of "reasonable expectation of privacy."  This test implies that if the citizen has used his data in a way that exposes it to others, for example, used it in a business transaction, then law enforcement may use it against him without restriction.  

The Framers never conceived of this test and might well be surprised by it.  Rather, the tests that they wrote into the Bill of Rights were "reasonable" and "probable cause."  If a search or seizure is "unreasonable," then law enforcement must have a warrant from a court.  The test for the issuance of a warrant is probable cause to believe that a crime has been, not will be, committed.  

These are constitutional tests and they are independent of how the citizen uses his personal information or what his expectations are.  He should not need to do, think, or "expect" anything in order for them to apply.  The tests apply to the behavior of the state, not the expectation of the citizen.  They restrict what the state, the police, may do.  The Bill of Rights places the burden on the state to show that its behavior is lawful, not on the citizen to demonstrate a right or "expectation."  Note that while information obtained in violation of these tests may not be used to convict the citizen of a crime, it is routinely used to investigate, threaten, and coerce, the very things that the Framers feared from a powerful government.  

In some cases, the state, with the tacit consent of the courts, pretends to get a warrant for all searches.  It pretends that it can legitimately collect anything as long as it does not look at it.  However, the Bill of Rights does not limit these tests to searches; it applies them to "seizures" as well.  The government operates a data center in Bluffdale, Utah.  In a world in which one can put a terabyte of data in one's pocket for $100, the government requires 26 acres of floor space to accommodate the data that it collects world-wide on citizens' communications.  It claims that this seizure is not "unreasonable" and that it does not need a warrant unless it "searches," or looks at, the data.  By what reasoning can the arbitrary collection of so much data be called "reasonable"?

I am not hopeful that this view will be argued before the courts or that, even if argued, it will change much.  Nonetheless, I had to argue it.  




Friday, May 22, 2020

On "Patching" II

The tolerance of the IT community for poor software quality seems infinite.  The "quality" strategy of major software vendors is to push the cost of quality onto the customers.  The more customers they have, the greater the cost.  Instead of "doing it right the first time," the vendors push out late patches.  From the rate at which they push out patches, one may infer that there is a reservoir of vulnerabilities.  Their customers have had to allocate resources and organize them around "patching."  They are almost grateful for the fixes.  

The market, the collective of buyers, prefers systems that are open, general, flexible, and that have a deceptively low price.  The real cost includes the cost of perpetual patching, the unknown cost of accepting the unknown risk of all the vulnerabilities in the reservoir, along with the risk of an unnecessarily large and public attack surface.  

We do not even measure the cost of their poor quality.  

We should be confronting the vendors with this hidden cost.  We should be comparing them on it.  


Thursday, October 31, 2019

On "Patching"

IT projects are historically, not to say always, late.  There are a number of reasons for this.  We prioritize schedule before quality; it is part of our culture.  We think that schedule is easy to measure. We think that we are on schedule until late in the effort, when quality jumps up and bites us in the ass.  Another reason that we are late is that we fix things in the order of their discovery rather than in the order of their importance.

This is a way of behaving that we replicate in Cybersecurity.  Not only do we fix things in the order of their discovery but we fix them in the order that someone else discovers them.  Microsoft announces forty vulnerabilities, ten critical, on "patch Tuesday."  We drop anything else we might be doing to apply the patches.  

Microsoft was shamed into publishing one or more of the patches.  Google Project Zero discovered the vulnerability and generously gave Microsoft ninety days to fix it under the threat of a public shaming if they failed.  

Ninety days is arbitrary.  It is not based on the ease of exploiting the vulnerability, how widespread it is, how costly it is to fix, what the fix might break, or what other vulnerabilities Microsoft may have on its plate.  It is one size fits all.  Sometimes Microsoft even chooses the shaming over the fix, in part because of what it knows that Google does not and cannot know.  We often patch without even considering whether or not the vulnerability represents a risk to us.  

Again, it is part of our culture.  Of course, as a result of this automatic, lemming-like behavior, we may all be at greater risk than we need to be.  

Whatever our vendors or our peers may be doing, we need to fix things in order of their risk to our enterprise.  We need to resist letting others allocate our scarce resources into "unplanned activity."  We need to put aside fear generated by the breach of our neighbor because of an unapplied patch.  

Know your risk tolerance.  Identify your risks.  Mitigate, accept, and assign them in the order of that risk.  Document risk acceptance.  Plan your work and work your plan.  Prefer mitigation measures that are broad over those that are merely most effective.  Keep in mind that hiding vulnerabilities, for example behind firewalls, is often more efficient than patching them.  At least the mistakes you make will be your own.
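
To make "in the order of that risk" concrete, consider the sketch below.  The scoring weights and names are assumptions, stand-ins for your own risk tolerance; the point is that the sort key is risk to your enterprise, not the date or order of disclosure.

    from dataclasses import dataclass

    @dataclass
    class Vulnerability:
        name: str
        severity: float    # e.g., a CVSS base score, 0 to 10
        exposed: bool      # reachable from the public networks?
        hidden: bool       # already behind a firewall or VPN?

        def risk(self) -> float:
            """Illustrative scoring: exposure dominates raw severity."""
            score = self.severity
            if self.exposed:
                score *= 2.0
            if self.hidden:
                score *= 0.1   # a hidden vulnerability is a small risk
            return score

    queue = [
        Vulnerability("vuln-A", severity=9.8, exposed=False, hidden=True),
        Vulnerability("vuln-B", severity=6.5, exposed=True,  hidden=False),
    ]
    # Patch in order of risk to the enterprise, highest first.
    for v in sorted(queue, key=Vulnerability.risk, reverse=True):
        print(f"{v.name}: {v.risk():.1f}")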

Wednesday, October 23, 2019

FBI Recommends Use of Biometrics

In its Private Industry Notification of 17 September 2019, PIN Number 20190917-001, the FBI encourages the use of biometrics to resist what they see as the limitations of strong authentication.  In fact, what they have observed are effective social-engineering attacks made necessary by the effectiveness of one-time passwords.  Other strong authentication, which might include biometrics, is the solution that I would recommend.

Consider my financial services firm.  They offer me strong authentication based upon a software token installed on my mobile computer.  I downloaded the token from the App Store and gave its identity, 4 letters and 8 digits, to my financial services firm, and they associated that token with my account.  When I log on with my UID and password, I am prompted for a one-time password, six digits, generated by that token, with a life of sixty seconds, and expected by a server used by my financial services firm.  
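
The mechanism described is consistent with the time-based one-time password of RFC 6238.  A minimal sketch follows; the secret is an example value, and my firm's actual parameters are, of course, their own.

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, period: int = 60, digits: int = 6) -> str:
        """RFC 6238 time-based one-time password over HMAC-SHA1."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period          # sixty-second life
        digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits) # six digits

    # Example secret only; a real token's secret never leaves the device.
    print(totp("JBSWY3DPEHPK3PXP"))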

Now, suppose I were to lose the mobile.  I would have to get a new mobile and download a new token.  I would have to associate the replacement token with my account.  In the capability to do that lies a potential vulnerability.  If an attacker were successful in convincing my financial services firm to associate his token with my account, then he might be able to defeat the strong authentication.  Therefore, my financial services firm must be able to resist this "social engineering" attack.  This is where biometrics can play a useful role.

When I call my financial services firm to replace my lost token, or for any other purpose, they may recognize me from my "calling number ID."  They authenticate me by my voice, a biometric, something that only I can do, one that works over the phone.  Yep, they really do; they tell me that that is what they are doing.  While I am a stranger to the agent, the computer recognizes my voice as the one to expect for my phone number.  The agent also asks me for another piece of shared information, a challenge and response, a second factor.  Only then will they honor my request to replace the lost token ID with the new one.  I think that this is an instance of the use of biometrics that would meet the expectation of the FBI.  

Of course, the process does not end there.  My firm e-mails me an out-of-band confirmation that they have changed the token associated with my account.  This gives me the opportunity to recognize a fraudulent change to my token ID.  
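
To make the order of these controls concrete, here is a sketch of the replacement flow.  Every name in it is hypothetical, and the voiceprint comparison stands in for a real speaker-verification service; the point is the sequence: biometric, then challenge-response, then the out-of-band notice.

    from dataclasses import dataclass

    @dataclass
    class Account:
        email: str
        token_id: str
        voiceprint: bytes        # enrolled template (stand-in)
        challenge_answer: str    # shared secret for challenge-response

    def voice_matches(account: Account, call_audio: bytes) -> bool:
        # Stand-in for a real, dynamic speaker-verification service.
        return call_audio == account.voiceprint

    def replace_token(account: Account, call_audio: bytes,
                      answer: str, new_token_id: str) -> bool:
        if not voice_matches(account, call_audio):  # first factor: biometric
            return False
        if answer != account.challenge_answer:      # second factor: shared secret
            return False
        old, account.token_id = account.token_id, new_token_id
        # Out-of-band confirmation gives the customer a chance to
        # recognize a fraudulent change.
        print(f"mail to {account.email}: token {old} replaced by {new_token_id}")
        return True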

Now, the link above points not only to my blog entry on the limitations of one-time passwords but also to the limitations of biometrics.  One needs to understand those limitations in order to use biometrics effectively.  I like the voice implementation used by my financial services firm because it is dynamic and resists replay attacks; replay attacks are one of the limitations of biometrics.  Along with facial recognition, voice is one of two biometrics that both people and machines can recognize reliably.  

(I am sure that you have heard of static facial recognition being duped by a photograph, a limitation, but fooling a four-year-old child in dynamic facial recognition, for example, over Skype or FaceTime, as to the identity of her grandmother might be more difficult.)

While there are alternatives to the use of biometrics, the FBI and I agree that they can be both convenient and secure in some applications and environments.  The FBI recommends them to resist what they see as limitations of multi-factor authentication.  I recommend them as effective and efficient measures for resisting one form of "social engineering."





Sunday, October 20, 2019

EBA Relaxes Requirements for Strong Authentication

"The European Banking Authority (EBA) has issued a new Opinion that provides the European payments industry with an EU-wide additional 15 months to comply with strong customer authentication (SCA) requirements for online ecommerce transactions."

Since there are banks that are already in compliance, the solution for consumers is to do business only with those banks.  

While there is no international law on this, there is good banking practice that is universal.  All banks have an obligation to "know their customers," and to ensure that "transactions are properly authorized."  Passwords that are vulnerable to fraudulent reuse do not meet these standards of good practice.  

In an era when most customers have e-mail, mobile computers, or both, strong authentication is not sufficiently difficult to implement to justify an extension.  This is an example of "regulatory capture."  The authority is derelict.  It is serving banks rather than customers.  Shame.