Friday, October 5, 2018

The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies - Bloomberg

Bloomberg Story

This is a “developing story.” It has the reputation of Bloomberg and its reporters behind it, and the story has been updated to speak to its sources. However, both Apple and Amazon have denied the roles attributed to them. While it originated months ago, the government has been silent. We need to wait for verification from the government. However, we should use the time to think about what to do assuming that the story is verified. What do we do in the face of untrusted and potentially hostile hardware?

It is time to abandon the password for all but trivial applications. Keep in mind that passwords put the user and the system or application at risk.  Steve Jobs and the ubiquitous mobile computer have lowered the cost and improved the convenience of strong authentication enough to overcome all arguments against it.

It is time to abandon the flat network. Secure and trusted communication now trumps ease of any-to-any communication. It is time for end-to-end encryption for all applications. Think TLS, VPNs, VLANs, and physically segmented networks. Software Defined Networks put this within the budget of most enterprises.
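As a minimal sketch of the end-to-end encryption point, here is what an authenticated TLS channel looks like using Python's standard `ssl` module (the host and function name are illustrative; the original names no particular tool):

```python
import socket
import ssl

# A default context requires certificate verification and hostname checking,
# and disables protocol versions known to be weak.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname

def open_tls(host, port=443):
    """Open an authenticated, encrypted TCP channel to host."""
    sock = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(sock, server_hostname=host)
```

The point of the default context is that the secure choice is also the convenient one; an application that refuses to talk except over a verified channel is the software analogue of the segmented network.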

It is time to abandon the convenient but dangerously permissive default access control rule of “read/write/execute” in favor of restrictive “read/execute-only” or even better, “Least privilege.” Least privilege is expensive to administer but it is effective. Our current strategy of “ship low-quality early/patch late” is proving to be ineffective and more expensive in maintenance and breaches than we could ever have imagined.
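A hedged illustration of what “read-only” access looks like in practice, using the Python standard library on a POSIX system (the scratch file and mode are purely illustrative):

```python
import os
import stat
import tempfile

# Illustrative only: create a scratch file and strip its permissions down to
# owner-read-only (0o400) -- no write, no execute, no group or other access.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR)  # least privilege: owner may read, nothing else
mode = stat.S_IMODE(os.stat(path).st_mode)
os.remove(path)
```

The administrative cost of least privilege comes from doing this deliberately for every resource, rather than accepting a permissive default.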

Finally, we must consider abandoning the open and flexible von Neumann Architecture for closed application-only operating environments, something more like iOS or the IBM iSeries with strongly typed objects and APIs, process-to-process isolation, and a trusted computing base (TCB) protected from other processes.

Oh, I almost forgot. We must monitor traffic flows. Automated logging and monitoring of the origin and destination of all traffic moves from “nice to do” to “must do.”
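A minimal application-level sketch of flow logging, assuming only Python's standard `logging` and `socket` modules; real deployments would capture flows at the network layer (e.g., NetFlow or the equivalent), but the record is the same: origin and destination of every connection.

```python
import logging
import socket

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)
log = logging.getLogger("flows")

def logged_connect(host, port):
    """Open a TCP connection and log its origin and destination endpoints."""
    sock = socket.create_connection((host, port))
    src_ip, src_port = sock.getsockname()[:2]
    dst_ip, dst_port = sock.getpeername()[:2]
    log.info("flow src=%s:%d dst=%s:%d", src_ip, src_port, dst_ip, dst_port)
    return sock
```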

These measures are now timely, whatever the facts of the Bloomberg story.  While nothing will completely protect one from using hostile hardware, these measures will raise the cost of attack and reduce the risk.  We know what to do. We described it generations ago. Do we have the will?

Friday, August 17, 2018

Limitations of One Time Passwords

Recently a man sued AT&T because his one time password was sent to the wrong phone, causing him to lose $24M in “cryptocurrency.”  To punish them, he asked for $200M in punitive damages.  This led to headlines talking about the “dangers” of relying upon SMS to deliver one time passwords.

These are not so much “dangers” as they are “limitations.” “Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment.” All security measures have limitations. Perfect security has infinite cost; we must not let it become the enemy of good security.

While one time passwords, whether sent from the server or generated at the client, are orders of magnitude more secure than reusable passwords, they still have limitations.  They must be properly associated with the user or his account.   Like most security measures, and as in this case, this association is vulnerable to social engineering attacks.

Some of you may have tried to register a new SIM or move an existing phone number from one device to another.  You can testify that it can be a pain; the carriers have stringent security procedures in place to resist fraudulent changes to your account. However, they have hundreds of agents handling provisioning requests and they are trained to be customer friendly. In pursuit of this, they can be expected to make mistakes. That is a limitation of using your phone and its number as part of your authentication scheme.

Note that if you do not get a one time password, or a phone call, or even a paper bank statement that you are expecting, you may have been compromised. Note also that the carriers are not the only targets of these “social engineering” attacks. The attackers may try to get your account holder to change the phone number, e-mail, or street address in your account record from yours to theirs. That is why responsible account holders will send you an out-of-band confirmation of all changes to your account record, to both the old and the new address. Even hard tokens may be vulnerable to these attacks because the account holder must be able to respond to lost tokens by allowing you to register a new one. Again, not so much a danger as a limitation. Keep in mind that just twenty years ago, the scam was to request a postal address change.

While I use SMS for Google, Dropbox, PayPal, Amazon, my credit union, and my banks, I use a token for my brokerage and retirement accounts. Note that all of these offer me a choice of SMS or tokens. “Horses for courses.” Rest assured that I would not use SMS for $24M. In fact, I would never put all that in one account.

Even as users, we need to know the limitations of the things that we depend upon for security.  As security professionals, responsible for choosing, applying, and operating these mechanisms, it is mandatory.

Thursday, May 3, 2018

Blockchain Revealed

Many of you must be frustrated following so many links about blockchain without finding out what it is, how it works, or what it is good for.  While it was invented a quarter of a century ago by Haber and Stornetta, it has recently been popularized by its most famous application, Bitcoin.

One might well ask why anyone might choose to write on something where so much has already been written.  The answer is that finding the answer to my three questions has proved to be difficult.  In this blog, I will attempt to answer those three questions but to the extent that I fail, I hope that readers will follow up with questions.

A blockchain is a collection of digital objects related in such a way that changing any one of the objects will be obvious and changing all of them in a non-obvious way is computationally infeasible.   Thus, it is a data integrity mechanism.  It is an example of a zero knowledge proof, i.e., making a demonstration without prior knowledge or pre-arrangement.

The objects are “chained” in such a way that object N includes a hash of object N-1 and its hash is included in object N+1.   The hash of the last, or latest, object in the chain depends upon, is an arithmetic function of, the content of every object in the chain.  Therefore, one can demonstrate that the chain is complete and that no objects have been altered.   We also know that object N-1 existed before N and N before N+1.
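The chaining described above can be sketched in a few lines of Python; the block layout here (a dict holding a payload, the previous hash, and the block's own hash) is an illustrative choice, not a description of any particular blockchain:

```python
import hashlib
import json

def _hash(content):
    # Canonical JSON so the same content always hashes the same way.
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def append(chain, payload):
    """Add a block whose hash covers its payload and the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev_hash": prev,
                  "hash": _hash({"payload": payload, "prev_hash": prev})})

def verify(chain):
    """Recompute every hash from the first block forward; any alteration fails."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != _hash({"payload": block["payload"],
                                   "prev_hash": block["prev_hash"]}):
            return False
        prev = block["hash"]
    return True
```

Because each hash covers the previous one, changing, inserting, or removing any object breaks every hash after it, which is what makes the alteration obvious.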

This works for any set of similar digital data objects.  The objects determine the application.  For a simple example, the objects might be the entries in a journal, or entries in a database.  More complex examples might include a digital record for all the transactions of a digital currency, bills of lading, warehouse receipts, or public records.

Blockchains enable anyone to demonstrate, by recomputing the hashes through the last object, the integrity of the data. Therefore many applications may not require or rely upon a trusted party or access control of the data. On the other hand, demonstrating the integrity of the chain is computationally intensive, so blockchains work best for applications where the number and size of objects are small. Said another way, blockchains may not scale well.

Note that trusted third parties, such as banks, often assume risk and collect fees for their role; disintermediating them can reduce cost.  For example, one might be able to transfer large sums internationally more cheaply using digital currency than by using orders on central or correspondent banks.

Thursday, March 8, 2018

The Use of SMS for Strong Authentication

NIST and others have discouraged the use of SMS for strong authentication.  This is another case of the perfect as the enemy of the good. 

First, strong authentication using a one time password sent via SMS is dramatically more secure than a replayable password. Second, if you get a one-time password when you ask for it, you are safe.

The problem is not so much with SMS as with the (cell) phone number. The risk is that an attacker can either change the number in your account, the one to which the one time password will be sent, to a number other than yours, or get the phone company to reassign your number to their phone. In either case, you will not get the one time password when you ask for it. In the latter case, you will not even get phone calls. Whenever the cell phone number in your profile is changed, you should get an e-mail message asking you if you really made the change.

Carriers have controls in place to resist fraudulent reassignment of numbers to new phones. However, the large number of agents and their desire to be accommodating make them vulnerable to “social engineering” attacks.

The difference in risk between a one-time password sent to your phone and one generated on board is small, particularly when compared to the difference in risk between either and a reusable password.

In certain circumstances, the difference in convenience may be great. I have ten different accounts associated with my cell phone number. If I get a new phone, all my accounts continue to work as they did on the old phone; the number has moved to the new phone. If I used an on-board password generator, not portable to the new phone, I would have to register the new generator with each of the ten accounts. I would have to do that by calling support, authenticating myself, and registering the new generator. Until I had done that, I could not log on to or use the account.

If you think about it, the real risk is in provisioning of the phone number or the registering of the on board generator (e.g., VIP Access, Google Authenticator, RSA SecurID Software Token). 
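For context on the on-board generators named above: Google Authenticator, for example, implements the open standard for time-based one-time passwords, RFC 6238 (RSA SecurID uses a proprietary algorithm). A general sketch, not vendor code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Server and phone share nothing but the secret and the clock, so “registering a new generator” is exactly re-provisioning that shared secret; that provisioning step, not the algorithm, is where the real risk lies.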

Wednesday, February 21, 2018

Law Enforcement vs. Security and Privacy

A recent report quoted the Director of the FBI as complaining that he had more than 7000 mobiles for which he has established probable cause to believe contain evidence of a crime, but that their security is so good that he cannot be sure.  Well, perhaps his emphasis was different than mine but you get the gist.

Of course, a decade ago he did not have any. The modern mobile has given him a rich source of evidence that he never had before. Instead of saying “thank you,” he complains that the source is not even richer than it is. He neglects to say how many mobiles he has opened while finding the few that he cannot. He neglects to address what percentage of those contained useful, much less admissible, evidence of crimes, a number that might give us some idea of the probative value of the contents of the 7000.

What he is really complaining about is that the default security of these devices raises his cost of investigation. He does not even speak to the resistance to crimes against the tens of millions of legitimate devices, users, applications, data, and information that that security provides. Therefore, he cannot even get to the idea that in the absence of such security, there would be fewer devices, users, and applications, much less that his rich source of evidence might not even exist.

He argues that, in order to reduce his cost, the default security of the devices should be reduced.  In spite of all the testimony against this proposition, and the absence of any in its favor, he argues that the purveyors of the mobiles can reduce his cost while maintaining the security against all others.  Without specifying what would satisfy him, he argues that this is simply a small technical problem that the industry can solve any time it wants to.

While the Director talks in terms of “capability” that he does not have, I talk in terms of “cost.” I assert that if one has a cryptogram, the method, and the key, all of which are on the mobile device, then, at some price, one can recover the clear text. Depending upon the design of the device, the cost may be high, but it is finite. The Bureau demonstrated this for us in the San Bernardino case. After asserting that Apple could, but that they could not, they turned to the Israelis, who, for a million dollars, recovered the data. Incidentally, it proved to be worth considerably less; it provided neither evidence nor intelligence. On the other hand, on a wholesale basis, the cost per device would be significantly less.

One problem is that, whatever the cost, the Bureau prefers to transfer it to the purveyor and the user rather than just pay it. It hopes to do this by sowing enough fear, uncertainty, and doubt that a law-and-order Congress will pass coercive legislation forcing the uninvolved and unwilling to become arms of law enforcement. If the purveyor is coerced into reducing the security, i.e., a value, of his product, he will lose sales and profit. Remaining users will lose security and privacy, experience costly breaches, and incur costs for compensating controls.

The net is that, while the Director may not be able to read every mobile for which he has a warrant, he can read most of them. While he knows what he cannot read, he bears the burden of proof that reading it would yield evidence or intelligence; he has the data, so he must share it. We are not talking about cryptography in general but only about the security of mobile devices. We are not talking about capability but cost. Not so much about how much as about who will pay; will we pay by taxation on all or coercion of a few? The Director may have a case, but he has not made it yet.


Tuesday, February 20, 2018

Budget for the Cost of Losses

One idea of security is to minimize the total of the cost of losses and the cost of security measures.  However, it is easier to measure the cost of security measures than that of losses.  This may make it difficult to justify the cost of security measures.
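One conventional way to put a first number on the cost of losses is annualized loss expectancy (ALE): the expected cost of a single incident times its expected frequency per year. The figures below are purely illustrative, not from the post:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized loss expectancy: cost per incident x expected incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Illustrative figures only: a control costing $50k/year that cuts the expected
# incident rate from 0.5/yr to 0.1/yr on a $500k-per-incident exposure.
cost_without_control = ale(500_000, 0.5)         # expected losses alone
cost_with_control = ale(500_000, 0.1) + 50_000   # residual losses plus the control
assert cost_with_control < cost_without_control
```

A loss budget built this way gives the line of business a number to manage, and a basis for deciding whether a proposed security measure lowers the total.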

While historically we have had only anecdotal data about losses, thanks to our rapidly increasing scale, laws requiring disclosure of breaches, and open source intelligence reports like the Verizon Data Breach Investigations Report, we know a great deal more.

I had one Fortune One Hundred client that budgeted for losses at the level of a line of business.  While the first year was little more than a guess, a decade later they have confidence in their numbers and have pushed them to smaller business units.  Just putting the line in the budget has caused the collection of actual data. 

The security staff uses the budget and actual figures to justify the cost of security measures. Performance against budget allows them to assess their risk analysis and management program; losses are inevitable, but are they greater or less than our expectation?

Business unit managers use the numbers to make decisions about security measures and to negotiate with information technology. They manage the cost the same as any other. As with any other expense, the budget tells them the level of losses that higher management has accepted.

Budgeting for the cost of losses makes this expense a peer with other expenses, subject to the same effort and control. It puts the responsibility on the line of business, where it belongs. It moves us one step closer to professional security based on data rather than on intuition.

Wednesday, November 29, 2017

Securability

In 2008 the ACM sponsored a Workshop on the Application of Engineering Principles to Information System Security. Participants were asked to submit brief notes as seed material for the Workshop. Far and away the most useful paper submitted to the workshop was by Amund Hunstad and Jonas Hallberg of the Swedish Defence Research Agency, entitled “Design for securability – Applying engineering principles to the design of security architectures.” This original paper points out “that no system can be designed to be secure, but can include the necessary prerequisites to be secured during operation; the aim is design for securability.” That is to say, it is the securability of the system, not its security, which is the requirement. We found this idea to be elegant, enlightening, and empowering. Like many elegant ideas, once identified it seems patently obvious and so useful as to be brilliant.

One cannot design an airplane to be safe, such that it can never be unsafe, but one can, indeed aeronautical engineers do, design them such that they can be operated safely. Neither IBM nor Microsoft can design a system that is safe for all applications and all environments. They can design one that can be operated safely for some applications and some environments. As the aeronautical engineer cannot design a plane that is proof against “pilot error,” so IBM and Microsoft cannot design a system that is proof against the infamous “user error.” One cannot design a plane that is proof against terrorism or a computer that is proof against brute force attacks.

In the early days we talked about the properties of secure systems, Integrity, Auditability, and Controllability, and we told product managers that the properties, features, and functions of the product must be appropriate for the intended application and environment of the product. 

Integrity speaks to the wholeness, completeness, and appropriateness of the product. One test of Integrity is predictability, that is, the product does what, and only what, is expected. Note that very few modern computer systems meet this test, in large part because they are too complex.

Auditability is that property that provides for relative ease in inspecting, examining, demonstrating, verifying, or proving the behavior and results of a system. The tests for Auditability include accountability and visibility or transparency. The test of accountability is that it must be possible to fix responsibility for every significant event to the level of a single individual. The test of visibility is that a variance from the expected behavior, use, or content of the system must come to the attention of responsible management in such a way as to permit timely and appropriate corrective action.

Controllability is that property of a system that enables management to exercise a directing or restraining influence over the behavior, use, or content of the system. The tests are Granularity and Specificity. The test of granularity requires that the size of the resource to be controlled must be small enough to permit management to achieve the intended level of risk. Specificity requires that management be able to predict the effect of granting any access to any resource, privilege, or capability from the meta-data, e.g., name, properties, of the resource, privilege, or capability.

Note that these properties complement one another; indeed, they are really simply different ways of looking at the property of “securability.” However, they may be achieved at the expense of other desiderata of the system. How to achieve the proper balance is the subject for another day.