Wednesday, November 29, 2017


In 2008 the ACM sponsored a Workshop on the Application of Engineering Principles to Information System Security.  Participants were asked to submit brief notes as seed material for the Workshop.  Far and away the most useful paper submitted to the workshop was by Amund Hunstad and Jonas Hallberg of the Swedish Defence Research Agency entitled “Design for securability – Applying engineering principles to the design of security architectures.” This original paper points out “that no system can be designed to be secure, but can include the necessary prerequisites to be secured during operation; the aim is design for securability.” That is to say, it is the securability of the system, not its security, which is the requirement. We found this idea to be elegant, enlightening, and empowering. Like many elegant ideas, once identified it seems patently obvious and so useful as to be brilliant.

One cannot design an airplane to be safe, such that it can never be unsafe, but one can, indeed aeronautical engineers do, design them such that they can be operated safely.  Neither IBM nor Microsoft can design a system that is safe for all applications and all environments.  They can design one that can be operated safely for some applications and some environments.  As the aeronautical engineer cannot design a plane that is proof against “pilot error,” so IBM and Microsoft cannot design a system that is proof against the infamous “user error.”  One cannot design a plane that is proof against terrorism or a computer that is proof against brute force attacks.

In the early days we talked about the properties of secure systems, Integrity, Auditability, and Controllability, and we told product managers that the properties, features, and functions of the product must be appropriate for the intended application and environment of the product. 

Integrity speaks to the wholeness, completeness, and appropriateness of the product.  One test of Integrity is predictability, that is, the product does what, and only what, is expected.  Note that very few modern computer systems meet this test, in large part because they are too complex.

Auditability is that property that provides for relative ease in inspecting, examining, demonstrating, verifying, or proving the behavior and results of a system.  The tests for Auditability include accountability and visibility or transparency.  The test of accountability is that it must be possible to fix responsibility for every significant event to the level of a single individual.  The test of visibility is that a variance from the expected behavior, use, or content of the system must come to the attention of responsible management in such a way as to permit timely and appropriate corrective action.

Controllability is that property of a system that enables management to exercise a directing or restraining influence over the behavior, use, or content of the system.  The tests are Granularity and Specificity.  The test of granularity requires that the size of the resource to be controlled must be small enough to permit management to achieve the intended level of risk.  Specificity requires that management be able to predict the effect of granting any access to any resource, privilege, or capability from the meta-data, e.g., name, properties, of the resource, privilege, or capability.

Note that these properties complement one another, indeed are really simply different ways of looking at the property of “securability.”  However, they may be achieved at the expense of other desiderata of the system.  How to achieve the proper balance is the subject for another day.

Monday, October 23, 2017

Security as Infrastructure

When I began in computers it was really fun.  I was hired as a "boy genius" at IBM Research.  We had the best toys.  I had my own IBM 650.  I was paid to take it apart and put it together again.  How great is that?  I got to work with Dr. Arthur Samuel, who was programming the IBM 704 to play checkers.  My colleague, Dick Casey, and I programmed the 650 to play Tic-Tac-Toe.  We had to use it on third shift but we even had a third of an IBM 705 where we installed the first Autocoder in Poughkeepsie.  I drove my transistor radio with a program on the IBM 1401.

That was just the beginning. For sixty years I have had the best toys. I have five PCs, I am on my fifth iPhone, and my fourth iPad.  I carry my sixty years of collected music and photographs, an encyclopedia, a library, and dozens of movies in my pocket.  It just keeps getting better. It is more fun than electric trains.

One of my favorite toys was the IBM Advanced Administrative System, AAS, five IBM 360/65s and a 360/85.  It was so much fun that I often forgot to eat or even go home at night.  However, on AAS one of my responsibilities was to manage the development of the access control system.  It was great fun to do and fun to talk about.  Serious people came to White Plains to hear me.  I was invited to Paris, Vienna, Amsterdam, London, Helsinki, and Stockholm to talk about my fun and games, about how we provided for the confidentiality, integrity, and availability of our wondrous system.  

However, as seems to happen to us all, I grew up, and finally old.  My toys, fun, and games became serious.  Some place along the way, most of the computers in the world were stitched together into a dense fabric, a network, into a world-wide web.  While still entertaining, this fabric had become important.  It supports the government, the military, industry, finance, and commerce.

Without any plan or intent, driven mostly by a deflationary spiral in cost and exploding utility, the fabric had become infrastructure, part of the underlying foundation of civilization.  It had become a peer of water, sewer, energy, finance, transportation, and government.  Moreover, it had become THE infrastructure, the one by which all of the others are governed, managed, and operated.

We build infrastructure to a different standard than toys or anything else that is not infrastructure.  Infrastructure must not fall of its own weight.  It must not fall under the load of normal use.  It must not even fall under easily anticipated abuse and misuse.  In order to prevent erroneous or malicious operation, the controls for infrastructure are reserved to trained operators and withheld from end users.

No special justification is required for this standard. The Romans built their roads, bridges, and aqueducts such that, with normal maintenance, they would last a thousand years.  And so they have.  The Hoover Dam and the Golden Gate Bridge were built to the same standard.  With normal maintenance, and in the absence of unanticipated events, they will never fail.  (They may be decommissioned but they will not fail.)  No one quibbled with Henry Kaiser over the cost or schedule for the dam.

However, our fabric was not driven by design and intent but by economics.  No technology in history has fallen in price and grown in power as fast as ours.  While we tend to think of it in terms of its state at a point in time, it continues to grow at an exponential rate.  Its importance can hardly be appreciated, much less over-stated.

Given the absence of design and intent, it is surprisingly robust and resilient.  While not sufficient for all purposes to which we might wish to put it, it is sufficient for most.  With some compensating design and intent, it can be made sufficiently robust for any application.  

One word on "easily anticipated abuse and misuse."  On September 12, 2001, what could be easily anticipated had changed forever.  

As security people, we are responsible for the safe behavior, use, content, configuration, and operation of infrastructure.  As IT security people, we are responsible for the only international infrastructure, the public networks.  As users, we are responsible for not abusing, misusing, or otherwise weakening it.  

Note that ours is the only infrastructure that, at least by default, contains weak, compromised, or even hostile components and operators.  It is the only one that, by default, has controls intended for the exclusive use of managers and operators right next to those for end users.  Our infrastructure also, by default, connects and exposes the controls of other infrastructure to most of our unprivileged users.  It is our job to compensate for and remediate these conditions.

Our roles, responsibilities, privileges, and special knowledge give us significant leverage over, and responsibility for, the infrastructure of our civilization.  Everything that we do, or fail to do, strengthens or weakens that infrastructure.  That is why we are called professionals and are paid the big bucks.

Friday, October 20, 2017

MasterCard to Eliminate Signatures

MasterCard has announced that in the US and Canada, it will no longer require signatures on credit card transactions.  (PINs will continue to be required on debit card transactions.)   MC says that this will be more convenient for the customer and that it will rely on other (unnamed) mechanisms and processes for security.  Let us look at some.

First, many issuers use computer aided mechanisms to detect fraudulent use by looking at such clues as location and other patterns of use.  Most of us have had calls from our banks checking on the legitimacy of activity.
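The pattern-of-use checks described above can be sketched with a toy rule; the distance threshold, the transaction fields, and the single-rule heuristic are illustrative assumptions, not any issuer's actual mechanism.

```python
import math

# Hedged sketch: flag a transaction made suspiciously far from the
# cardholder's usual location.  Threshold and fields are made up.
def distance_km(a, b):
    # Equirectangular approximation of surface distance; coarse but
    # adequate for a fraud heuristic.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371 * math.hypot(x, y)

def suspicious(txn, home, threshold_km=500):
    """True if the transaction location is far from the cardholder's home."""
    return distance_km(txn["location"], home) > threshold_km

# A charge in Paris on a New York cardholder's account would be flagged;
# one across town would not.
```

Real issuers combine many such signals, e.g., merchant category, amount, and velocity of use, and score them statistically; a lone distance rule is only the shape of the idea.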

In theory, the required signature resists fraudulent use of lost or stolen cards.  In practice, not so much.  Even when clerks reconciled the signature on the check to the one on the card, it was an imperfect mechanism.  In modern systems, where no one really reconciles the signature, the best that the mechanism can do is to permit the consumer to recognize disputed items that he really did sign. However, for the most part, issuers simply accept the word of the consumer that a transaction is fraudulent.  The signature does not come into play. 

The best way to resist the fraudulent use of lost or stolen cards is to check that a proffered card has not been reported lost or stolen.  This works well in the US and Canada, where most transactions take place on line.  In countries where many transactions take place off line, PINs are used. 

American Express CEO Kenneth Chenault told President Obama that AmEx detects many fraudulent transactions within 60 seconds by sending a notification of use to the consumer’s mobile or e-mail in real time.

Bank of America and others resist fraudulent use by permitting the consumer to turn the card on and off using an app.  Again, this works well where most transactions are on line.

Android, Apple, and Samsung Pay resist fraudulent use by simply taking the card out of the transaction and substituting a digital token for the credit card number.  Lost mobile phones resist fraudulent reuse with PINs for security and biometrics, e.g. facial and fingerprint recognition, for convenience. 
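The substitution of a token for the card number can be sketched in a few lines; the dictionary here stands in for the payment network's secure token service, and the token format is an assumption for illustration only.

```python
import secrets

class TokenVault:
    """Hedged sketch of tokenization: random tokens map to card numbers.
    In a real deployment the vault lives inside the payment network,
    never with the merchant."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan):
        token = secrets.token_hex(8)   # random; carries no card data
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        return self._vault[token]
```

A merchant breach then yields only tokens, which are useless without the vault; the card number itself never crosses the counter.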

On line merchants have never had the benefit of signatures but can resist fraud by using PayPal or other proxies instead of accepting credit cards at check out.  Where the merchants cooperate and the consumer uses American Express at checkout, AmEx will prompt the user for a one-time-password sent to the user’s mobile.  This protects the merchant, the consumer, and AmEx.  All of these resist “card not present” fraud.

Only the brands and issuers really know how necessary and effective signatures and PINs are: they take the risk when they are not required.

The fundamental vulnerability in the retail payment system is the credit card number in the clear on the magnetic stripe.  It remains a risk to merchants and issuers but is only a nuisance to the consumer.

In short, the future is mobile, tokenized, cordless, contactless, signature-less, PIN-less, and secure.

Wednesday, October 18, 2017

The Internet as Infrastructure

Today, when one connects an application, system, or network to the public networks, one is adding to the "system of public works," that is to "infrastructure," of the nation and the world. 

The standards for building infrastructure, such as bridges, tunnels, and dams, are different from those for other artifacts.  Infrastructure must not fall of its own weight, it should not fail in normal use or under normal load, and must resist "easily anticipated abuse and misuse."  A suspension bridge must not fall because a driver falls asleep and an eighteen wheeler goes over the side.

Notice that the abuse and misuse that can be easily anticipated today are much worse than when we began the Internet.  Were it not so, we might have done many things differently.

We call the resultant necessary property of infrastructure resiliency, rather than security, but the properties are related.

For any artifact, there are limits to the complexity, scale, load, and simultaneous component failures that the mechanism can be expected to survive. How many simultaneous sleepy drivers and plunging eighteen wheelers must a bridge be designed to survive?

When those limits are reached, what we want to happen is that the mechanism fail in such a way that damage is limited and the mechanism can be restored to operation as quickly as possible.

The three Great Northeastern Blackouts, of which August 14, 2003 was the latest, are examples. It is interesting that engineers see these blackouts as successes while the public and their surrogates, journalists and politicians, see them as failures.

All three were caused by multiple simultaneous and cascading component failures under conditions of heavy load. In all three cases the system failed in such a way that it was restored to a ninety percent service level in a day. While all three were spectacular and exciting, the damage was not nearly so severe as one might expect from a major ice storm.

This is the way that we would like the public networks to fail. In fact, so far, that is what we have seen. We have had massive local failures of the PSTN where it took days to weeks to restore to a ninety percent service level. Most of these were fire related and local. We have had one that was national and caused by a software change. We recovered from this one in hours.

To date, we have had a number of local failures of the Internet, all man-made (mostly caused by the infamous "cable-seeking backhoes or boat anchors"); most were accidental. We recovered from all of these in days. SQL/Slammer was man-made, malicious, and software related; it caused a noticeable drop in service for hours. However, there was not really a discontinuity of service.

It should be noted that SQL/Slammer was a homogeneous attack.  That is, every instance of it looked the same.  This made it relatively easy to construct and deploy filters that would resist its flow while not interfering with normal traffic.  However, it is fairly easy to visualize a heterogeneous attack that might overwhelm this remedy.
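The homogeneity point can be made concrete: because every Slammer packet carried the same payload, a single signature match stops the whole flow. The signature bytes below are an illustrative placeholder, not the worm's actual payload.

```python
# Hedged sketch of signature filtering.  SIGNATURE is a made-up stand-in;
# the point is that one pattern matches every instance of a homogeneous
# attack.
SIGNATURE = b"\x04" + b"\x01" * 16

def drop(packet):
    """True if the packet matches the known attack signature."""
    return SIGNATURE in packet
```

Against a heterogeneous attack, whose instances differ from packet to packet, no single such pattern exists, which is exactly the limit of this remedy.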

So, there is wide-spread concern that there might be a malicious software-based attack that would bring down the entire Internet. To some degree this is angst, an unfocused apprehension rooted in intuition or ignorance.  However, it is shared by many who are knowledgeable.  Their concern is rooted in the (often unidentified and un-enumerated) facts that:

* the Internet evolved; it was not designed and deployed
* switching in the network is software-based
* operation of the components is homogeneous
* operation of network management controls is in-band
* users often have default access to management controls
* the topology is both open and flat
* paths in the network are ad hoc and adaptive
* connection policy is permissive
* most of the nodes in the network are un-trusted and a large number are under malicious control
* access is open and cheap
* identity of both components and users is unreliable
* ownership and management is decentralized
* others

If the impact of these things on the resiliency of the Internet were as obvious prospectively as it is retrospectively, we might have done things differently.  On the other hand, we might not have.  A little discussion is in order.

Unlike the PSTN, the Internet is packet, rather than circuit, switched.  The intent of this was to make the network more resilient in the face of node or link failures.  

The routers and switches may be software running on von Neumann architecture general-purpose computers.  This may make the network more resistant to component failure while making the components more vulnerable to malicious attack.  

We have become accustomed to the idea that software processes are vulnerable to interference or contamination by their data, i.e., the software in the switch can be contaminated by its traffic.  This exposes us to attacks intended to exploit, interfere with, or take control of switches and routers. 

This may be aggravated by the fact that so many routers and switches look the same.  While there are hundreds of products, most of them present controls that are operated via the Border Gateway Protocol (BGP).  An attack that can take control of one might be able to take control of many.   

Even most non-switch nodes in the network look the same, that is, like Windows or Unix (rather than, for example, MVS or OS/400).  These two operating systems are open, historically broken, and have a commitment to backward compatibility that makes them difficult to fix.  Historically they have shipped with unsafe defaults and have been corrupted within minutes of being connected to the Internet.  The result has been that there are millions of corrupt nodes in the Internet that are under the control of malicious actors.

Operation of the routers and switches (and other network nodes) is via the network itself; they can be operated from almost any node in the network.  Many are protected, if at all, only by a password, often weak or even default.  Thus, it might be possible to coordinate simultaneous mis-operation of many nodes at the same time.

The Internet is open as to user, attachment, protocol, and application.  The cost of a connection to the Internet is a function of the bandwidth or load but the cost of a relatively fast persistent connection is in the tens of dollars per month, about the same as a dial connection a decade ago.  

While one must demonstrate the ability to pay, usually with a credit card, the credit card may be stolen, and, depending on the provider, the name in which the connection is registered may not have to be the same as that on the credit card.  In short, almost anyone can add a node to the Internet with minimal checks on their identity or bona fides.  There will be bad actors. 

The only thing that is required to add a new protocol or application to the Internet is that at least two nodes agree on it and that it can be composed from IP packets.  Load-intensive protocols and applications for streaming audio and video were added on top of existing protocols and applications with no changes to the underlying infrastructure.  We have seen DoS attacks that relied upon minor changes to protocols and their use.

At least in theory, the topology of the Internet is "flat," as opposed to structured or hierarchical.  That is, at least in theory and with few exceptions, any node in the Internet can send a packet to any other node in the Internet.  The time and cost to send a packet between any two nodes chosen at random is roughly the same as for any other pair of nodes.

Said another way, both the time and cost to send a packet are independent of distance.  One implication of this is that attacks are cheap, can originate anywhere, and can attack anything attached. 

Paths in the Internet are determined late, possibly on a packet by packet basis, and adapt to changes in load or control settings.  The intent is that there be so many potential paths between A and B that at least one will always be available and that it will be discovered and used.  While the intent is to make the network resistant to node and link failures, an unintended consequence is that it is difficult to resist the flow of attack traffic. 
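Both the design intent and its unintended consequence show up in a toy topology (the graph below is made up): a path from A to B survives the failure of an intermediate node, and attack traffic reroutes just as readily as legitimate traffic.

```python
from collections import deque

# Hedged sketch: breadth-first reachability over redundant paths,
# skipping failed nodes.  Real Internet routing is far more elaborate.
def has_path(edges, src, dst, failed=frozenset()):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two disjoint paths A-X-B and A-Y-B: losing X still leaves a route,
# for packets benign and malicious alike.
```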

The original policies of the Internet were promiscuous (as opposed to permissive or restrictive); not only was any packet and flow permitted but there were no controls in place to resist them.  This was essential to its triumph over competitors like SNA and may have been necessary to its success.

While controls have been added as the scale has grown, the policy is still permissive, rather than restrictive, i.e., everything is allowed that is not explicitly forbidden.  

Said another way, all traffic is presumed to be benign until shown otherwise.  Attack traffic can flow freely until identified and restricted.
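The two policies differ in a single line; the traffic labels and lists below are illustrative.

```python
# Hedged sketch of default-allow versus default-deny.
def permissive(traffic, blocklist):
    """Everything flows unless explicitly forbidden (the Internet's default)."""
    return [t for t in traffic if t not in blocklist]

def restrictive(traffic, allowlist):
    """Nothing flows unless explicitly permitted (the infrastructure standard)."""
    return [t for t in traffic if t in allowlist]
```

Under the permissive policy a novel attack flows until someone identifies it and adds it to the blocklist; under the restrictive one it never flows at all.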

Finally, while most of the nodes in the Internet are un-trusted, and we know that many are corrupted and under hostile control, all are given the benefit of the doubt.  To date there has been little effort to identify and eliminate those that have been corrupted.  Therefore there remains a possibility that these corrupt systems can be marshaled in such a way as to deny the use of the network to all, or some targeted group, of users.

The Internet is robust, not fragile.  It is resistant to both natural and accidental artificial events.  However, to the extent that the above things are, and remain, true, the Internet, and indirectly the nations, economies, institutions, and individuals that rely upon it, are vulnerable to abuse and misuse; concern is justified, if not proportionate.

While these characteristics are pervasive and resistant to change, and while they were often chosen for good reason, they are not fixed or required and can be changed.  Understanding them and how they might be changed is key to making the Internet as resistant to abuse and misuse as it is to component failure or destruction.

It suggests that the network must become both less open, not to say, closed, and more structured. The management controls must be protected and taken out of band.  The policy must become much more restrictive.  We must identify our users and customers and hold them accountable for their traffic.

To bring the Internet to infrastructure standards, we must overcome not only inertia but also culture.  Each of us must exercise our influence on our employers, clients, and vendors to move the Internet to the same standards that we expect of skyscrapers, bridges, tunnels, and dams.  Since there is no one else to do it, we are called professionals and are paid the big bucks.

Tuesday, September 26, 2017

Security Should Pay, Not Cost

1. Security -- A Cost of Doing Business
There is a television commercial in the U.S. that shows an automobile mechanic. In one hand
he has some worn piston rings. In the other he holds an oil filter. The mechanic looks from
hand to hand and says, "You can pay me now, or you can pay me later." The point of the
commercial is that friction is inevitable. It is a cost of running an automobile. It is
inescapable. You will pay. The only choice that is open to you is how you pay. You may pay
in a regular and orderly manner, or you may pay in a destructive and unpredictable manner,
but you must pay.

So it is with information protection. It is a cost of doing business. It is unavoidable. You will
pay. The only choice that you have is how you pay. You can pay in a regular, orderly, and
business-like manner, or you can pay in an irregular and unpredictable manner, but you must pay.

Now, I know what you are thinking. You are thinking that those rings came from an
American car, not from a Mercedes or a BMW. You are thinking of all those stories that you
have heard about the little old lady who drove her 500 SEL for a million kilometers without
ever changing the oil, much less the filter. Perhaps you can get away with not changing your
oil filter. Perhaps you will be lucky.

But the mechanic in our advertisement portrays himself as a friend, giving friendly advice. He
is trying to help us understand our choice so that we will make the one that is best for us. He
too has heard the story about the little old lady; he's even seen some of those cars. But he
understands that those cars are the exceptions. While one driver may get away with it, for
most drivers and cars, periodic changes of oil filters is a smart and efficient policy.
While one department or manager within your organization may get away with poor security,
taken across all departments and all managers, security can pay and its absence will put the
health of the business at risk. You are in the role of the friendly mechanic. It is your job to
convince management in general and managers in particular to change their oil filters. 

2. Security Should Pay, Not Cost
Security should pay, it should not cost. Management has a fundamental responsibility to 
conserve and protect the assets and interests of the institution and its constituents. However,
it should spend no more to do that than will contribute to the objectives of the institution, at 
least across the whole institution and across time. Security is a means, not an end. As with 
safety programs, personnel programs, recognition programs, and the like, we have security 
programs because they contribute to the bottom line. 

3. Efficiency 
Some of you, with keen ears for English, might have noted that I said "security can pay." It 
does not necessarily do so. As with anything else, it is possible to pay too much for security. 
Of course, if you do, then it will not pay. Courtney's second law says, "You should spend no 
more to deal with a risk than tolerating it will cost you." 

Security must be efficient. That is, it must be effective without waste. A security measure is 
efficient when it costs less than the alternatives, including the alternative of doing nothing. A 
collection of security measures is efficient over time when the sum of the cost of losses and 
the cost of the measures is at a minimum. Infinite security means infinite cost, and zero 
security means intolerable losses. 
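The minimum described above can be shown with made-up numbers: each option pairs a spend on measures with the expected losses that remain, and the efficient choice minimizes their sum.

```python
# Hedged illustration of the efficiency trade-off; the figures are invented.
def efficient_spend(options):
    """options: (measure_cost, expected_loss) pairs; returns the pair
    whose total cost is least."""
    return min(options, key=lambda o: o[0] + o[1])

options = [(0, 100), (20, 40), (50, 15), (90, 5)]
# Totals are 100, 60, 65, and 95: the efficient point is (20, 40),
# neither zero security (0, 100) nor maximal security (90, 5).
```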

Of course, therein lies part of the problem: this is a difficult number to know. While we can
measure the cost of security measures, the frequency of large losses is low and the period 
long. The cost of frequent, but controllable, losses is often beneath our notice and, when 
noticed, is not seen as related to the cost of security. Therefore, it is difficult to identify the 
value of our day-to-day activity, to convey it to our management, and to motivate our peers and subordinates.

We have a saying in English, "No one promised you a rose garden." No one promised that 
management was easy. If it were easy, it might not pay so well and offer such nice working conditions.

One important form of efficiency is consistency. It is important that security measures result 
in a similar level of security across like parts of the institution and similar resources. We do 
not want to spend a great deal of money to raise the average height of a fence by greatly 
increasing the height of one section while leaving most of it alone. Therefore, efficiency 
requires that like resources receive similar protection. It requires that all resources receive the 
appropriate protection, while reserving expensive measures for only those resources that need them.

4. Efficient Management Systems 
Having said that, we can now begin to identify efficient management systems and efficient measures.

It may be that there are some institutions that are so homogeneous that one level of protection 
would serve for all but a small, easily identified, set of their resources. I have not encountered 
one in my 25 years in this field. It may also be that there is a management system, other than 
classifying resources by their sensitivity or according to the protection measures that they
should get, that will ensure that everything is properly protected but
expensive measures are reserved. Again, I have not encountered one, but wonders never 
cease. In the meantime, I do not expect to see an institution the size of those represented here 
that has an efficient security program that does not require management to classify and label 
information resources. 

5. Efficient Measures 
While we tend to focus our attention on the effectiveness of security measures, efficiency is 
inversely proportional to effectiveness. That is to say, the most effective measures are rarely 
efficient. Either they cost too much, or they have too large a negative impact on our ability to 
accomplish other objectives. The most important factor in efficiency is the breadth of the 
measure. Those measures that are most efficient are those that address the largest set of risks, such as:

• Direction to employees 
• Management supervision 
• Physical security 
• Access control 
• Encryption 
• Data base backup 
• Contingency planning

Tell your people what you expect and what you rely upon them for. When employees fail to 
do what we expect, it is far more often the result of a failure to communicate on our part than 
of any failure of motive or intent on theirs. 

Supervise. Note variances from intent or expectation and take timely corrective action.
Management supervision is the most general, flexible, and effective of all controls. We use 
others only to the extent that they are more efficient. 

Provide a safe environment. The test should be that what is safe for people will generally be 
safe for computers and information. The skills and special knowledge of your people make
them irreplaceable, while property is cheap, and information easily copied. 

Limit access to sensitive and valuable resources. The more valuable the resource, the more 
layers of control and the fewer the number of people with access. 

When you cannot limit access to information, then record it in codes that only the intended 
parties can read. Modern cryptography can be fully automated and arbitrarily strong. It
enables us to protect information independent of the media on which it is recorded or the
environment through which it must pass. We can implement both logical envelopes and 
logical signatures. We can compose these to simulate any control that we have ever been able 
to implement with paper. Using the computer, we can do these things in a manner transparent 
to the user and too cheap to meter. 

Create multiple copies of important data and distribute them over space such that not all 
copies are vulnerable to the same event. Consistent with the needs to keep the copies current 
and confidential, the more the better. 

Use slack time and resource before a disaster to reduce the cost and duration of the outage. 
You will survive and recover from most disasters. The issue is not whether you will survive 
but, rather, how much it will cost and how long it will take to return to normal. Do not
focus on tactics that might fail, or might not apply, but on strategies that must succeed. 

None of these measures is one hundred percent effective against any hazard; all involve some 
residual risk. Therefore, their efficiency does not result from their effectiveness versus their
cost. Rather, it results from the number of hazards that they address. While not completely 
effective against any exposures, they are efficient because they marginally reduce our exposure 
to a large number of risks and vulnerabilities, some of which we cannot even identify in advance.

Now, that is all there is to it. That is half a century of experience in a nutshell. That is 
all you really need to know. But it is only the beginning of what you must do. 

Note that what is good for one security objective may be bad for another. The more copies of 
the data, the lower the likelihood that they will all be destroyed, but the greater the chance 
that one will be disclosed. Security is the act of balancing the cost of security measures 
against the cost of losses. The balance is not stable; it requires the continual application of
experience and judgment. 

6. The Price of Security 
So we see how we should manage and what measures we ought to employ. All of this leaves open
the question of how much we ought to spend. It may be that what I have said so far is all that 
can be confidently said on the issue. On the other hand, my experience suggests that these 
answers are not satisfying. To say that "you should spend less than it would cost to do 
nothing" is unsatisfying if the cost of doing nothing cannot be readily known. 

Most organizations cannot tell you with much confidence how much they spend on security. 
Their books are not set up to measure things so small. Neither can they tell you much about 
the cost of losses; the books are not set up to track things that occur so seldom. 

Early in my career, I used to respond to this question by saying that if you were spending 
more than three percent of your budget on security, then you were not likely to be efficient. 
The longer I am in the business, the lower the number gets. Perhaps it is as little as one-tenth 
of one percent. That is to say, perhaps one employee in a thousand works full-time in security. 

How much you spend may be a measure of intent, but it is not a measure of accomplishment. 
Accomplishment is measured by how well you spend. We maximize our chances of spending 
wisely by spending on the efficient measures. Now it is time to get on with it. 

You can pay me now, or you can pay me later. 

Wednesday, July 19, 2017

Open Letter to my Congressman


In my forty years in information security I have come to have many colleagues in the intelligence community.  I find them to be brilliant and noble.  I have also found them to be myopic, artful, and zealous.  I have watched their testimony before both the House and Senate judiciary committees.  While I have been impressed by their testimony, I have been less impressed by the questioning.   The testimony has been carefully rehearsed and very consistent.  Where the questioning has not been sympathetic, it has been inept.  Even those legislators who recognize that the testimony is misleading are prevented by secrecy and decorum from asking the questions that might really inform the citizens or even saying so when a witness lies under oath. 

Here is a short list of questions that I would like put to the administration, to be answered under oath.

  • Does GCHQ target American citizens on behalf of the US government?  What did we get for our $152M? 
  • Does the NSA target citizens of the United Kingdom?  Does it do so on behalf of the UK government? 
  • What programs, besides the collection of all telephone call records, does the NSA operate under USA Patriot Act, Section 215?  What programs, other than PRISM, does it operate under the FISA, Section 702?  Are we going to be surprised by more revelations?   
  • NSA has admitted that a query to the call records database implicates not only those connected directly to the "seed" number but all those associated with it to "three hops."  What is the largest number of phone numbers implicated by any single query?  How many subscribers have been implicated by the hundreds of queries made since the inception of the program?  Is it possible that there is any American citizen who has not been swept up in this huge dragnet?
  • Given the density of modern digital storage, e.g., a terabyte in a shirt pocket for $100, what is NSA storing that requires 24 acres of floor space in Utah?  
  • What percentage of the e-mail that crosses our borders does NSA collect?  Store?  Analyze?  Disseminate to other agencies of government?  
  • Given the demonstrations by Edward Snowden and Bradley Manning as to the breadth and depth of their access, how can we rely upon the assurances of NSA  that they can protect us from abuse of the information they collect?  Doesn't the mere collection of all this information invite, not to say guarantee, abuse?
  • Doesn't the mammoth budget ($75B in 2012?) of NSA justify the conclusion that NSA operates on the premise that "Because we can, we must," and without any regard for efficiency?   Are they not spending far more than doing nothing would cost?
  • Does not the Bush "Warrantless Surveillance Program" demonstrate that citizens cannot rely upon bureaucrats and spies to protect us from over-zealous, not to say rogue, politicians?  Are we building capabilities now that will empower politicians of the future? 
  • Does the NSA require a warrant before they target US citizens on behalf of the FBI?  Secret Service? DEA?  MI5?  MI6?  
  • Does the NSA protect American citizens from surveillance by their peers and colleagues in other nations?  
  • Is information passed to the FBI by NSA ever, usually, sufficient for the issuance of a wiretap warrant?  A National Security Letter?  
  • Do the intelligence agencies selectively share intelligence with legislators in order to curry support?
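The "three hops" question above is easy to put in perspective with back-of-the-envelope arithmetic. The figure of 100 contacts per subscriber is my own assumption for illustration, not a number from any testimony:

```python
# Fan-out of a "three hop" query: if each phone number is in contact with
# roughly k other numbers, a single seed number implicates on the order of
# k + k**2 + k**3 numbers (ignoring overlap, which only tempers the growth).

def three_hop_reach(contacts_per_number: int) -> int:
    k = contacts_per_number
    return k + k**2 + k**3

# With an assumed 100 contacts per subscriber, one query reaches over a
# million numbers; hundreds of queries plausibly touch most of the country.
print(three_hop_reach(100))  # 1010100
```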

Wednesday, July 5, 2017

The Coming Quantum Computing Crypto Apocalypse

Modern media, both fact and fiction, loves the Apocalypse and the Dystopian future.  The Quantum Apocalypse is just one example but one close to the subject of this blog.  It posits that the coming revolution called quantum computing will obsolete modern encryption and destroy modern commerce as we have come to know it.  It was the hook for the 1992 movie Sneakers starring Robert Redford, Sidney Poitier, Ben Kingsley, and River Phoenix.

This entry will tell the security professional some useful things about the application of Quantum Mechanics to information technology in general, and Cryptography in particular, that will help equip him for, and enlist him in, the effort to ensure that commerce, and our society that depends upon it, survive.  Keep in mind that the author is not a Physicist or even a cryptographer.  Rather he is an octogenarian, a computer security professional, and an observer of and commentator on the experience that we call modern Cryptography beginning with the Data Encryption Standard.

For a description of Quantum Computing I refer you to Wikipedia.  For our purpose here it suffices to say that it is very fast at solving certain classes of otherwise difficult problems.  One of these problems is finding the factors of the product of two prime numbers: the problem that one must solve to find the message, knowing the cryptogram and the public key, or the private key, knowing the message, the cryptogram, and the public key, in the RSA crypto system.

This vulnerable algorithm is the one that we rely upon for symmetric key exchange in our infrastructure.  In fact, because it is so computationally intensive, that is the only thing we use it for.

In theory, using quantum computing, one might find the factors almost as fast as one could find the product, while the cryptographic cover time of the system relies upon the fact that the former takes much longer than the latter.  Cryptographers would certainly say that, by definition, at least in theory, the system would be "broken."  However, the security professional would ask about the relative cost of the two operations.  While the former can be done by any cheap computer, the latter can only be done quickly by much more rare and expensive "quantum" computers.
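The relationship between factoring and the RSA break can be shown with deliberately tiny textbook numbers. This is a toy sketch, not real cryptography: real RSA primes are a thousand or more bits, and the point is only that once the factors are known, everything else is ordinary arithmetic:

```python
# Toy illustration of why factoring breaks RSA. A quantum computer running
# Shor's algorithm would do the factoring step quickly; deriving the private
# exponent from the factors is then cheap classical arithmetic.

p, q = 61, 53                       # the secret primes (textbook-sized)
n = p * q                           # public modulus, 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, easy once p, q known

msg = 42
cipher = pow(msg, e, n)             # encrypt with the public key
recovered = pow(cipher, d, n)       # decrypt with the derived private key
assert recovered == msg             # the attacker now reads the traffic
```

The modular-inverse form of the built-in `pow` requires Python 3.8 or later.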

Cryptanalysis is one of the applications that has always supported cutting edge computing. One of the "Secrets of ULTRA" was that we invented modern computing in part to break the Enigma system employed by Germany.  ULTRA was incredibly expensive for all that.  While automation made ULTRA effective, it was German key management practices that made it efficient.    On the other hand, the modern computer made commercial and personal cryptography both necessary and cheap.

One can be certain that NSA is supporting QC research and will be using one of the first practical implementations for cryptanalysis.  They will be doing it literally before you know it and exclusively for months to years after that.

Since ULTRA, prudent users of cryptography have assumed that, at some cost, nation states (particularly the "Five Eyes," Russia, China, France, and Israel) can read any message that they wish. However, in part because the cost of reading one message includes the cost of not reading others, they cannot read every message that they wish.

The problem is not that Quantum Computing breaks Cryptography, per se, but that it breaks one system on which we rely.  It is not that we do not have QC resistant crypto but that replacing what we are using with it will take both time and money.  The faster we want to do it, the more expensive it will be.  Efficiency demands that we take our time; effectiveness requires that we not be late.

By some estimates we may be as much as 10 years away from an RSA break but then again, we might be surprised.  One strategy to avoid the consequences of surprise is called "crypto agility."  It implies using cryptography in such a way that we can change the way we do it in order to adapt to changes in the threat environment.

For example, there are key exchange strategies that are not vulnerable to QC.  One such has already been described by the Internet Engineering Task Force (IETF).  It requires a little more data and cycles than RSA but this is more than compensated for by the falling cost of computing.  It has the added advantage that it can be introduced in a non-disruptive manner, beginning with the most sensitive applications.
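"Crypto agility" is less a product than a design pattern: tag every exchange with an algorithm identifier and dispatch through a registry, so the algorithm can be replaced without changing its callers. The sketch below is my own illustration of that pattern; the algorithm names and the registry shape are invented, and the bodies are placeholders rather than real key exchanges:

```python
# Minimal sketch of crypto agility: callers name algorithms, a registry
# dispatches, and a quantum-resistant scheme can displace RSA simply by
# being registered and preferred. Names here are illustrative only.

from typing import Callable, Dict, List

KEY_EXCHANGES: Dict[str, Callable[[], bytes]] = {}

def register(alg_id: str):
    def wrap(fn: Callable[[], bytes]) -> Callable[[], bytes]:
        KEY_EXCHANGES[alg_id] = fn
        return fn
    return wrap

@register("rsa-2048")            # today's default, vulnerable to Shor
def rsa_exchange() -> bytes:
    return b"shared-secret-via-rsa"      # placeholder, not real crypto

@register("pq-candidate")        # stand-in for a quantum-resistant scheme
def pq_exchange() -> bytes:
    return b"shared-secret-via-pq"       # placeholder, not real crypto

def negotiate(preferred: List[str]) -> bytes:
    """Use the first supported algorithm; the agility lives in this list."""
    for alg in preferred:
        if alg in KEY_EXCHANGES:
            return KEY_EXCHANGES[alg]()
    raise ValueError("no common key-exchange algorithm")

# The most sensitive applications can migrate first, non-disruptively,
# just by reordering their preference list.
print(negotiate(["pq-candidate", "rsa-2048"]))
```

This is also why the IETF's guidance favors negotiable algorithm identifiers over hard-wired primitives: the change is a configuration, not a rewrite.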

History informs us that cryptography does not fail catastrophically and that while advances in computing benefit the wholesale cryptanalyst, e.g., nation states, before the commercial cryptographer, in the long run they benefit the cryptographer orders of magnitude more than the cryptanalyst.  In short, there will be a lot of work but no "Quantum Apocalypse."  Watch this space.

Wednesday, May 3, 2017

The Next Great Northeastern Blackout

It has already been more than thirteen years since the last great northeastern blackout.  The mean time between such blackouts is roughly twenty years.  

Blackouts are caused by a number of simultaneous component failures that overwhelm the ability of the system to cope.  While the system copes with most component failures so well that the consumer never even notices, there is an upper bound to the number of concurrent failures that it can tolerate.  In the face of these failures the system automatically does an orderly protective shutdown that assures its ability to restart within tens of hours to days.

However, such surprising shutdowns are experienced by the community as a "failure."  One result is finger pointing, blaming, and shaming.  Rather than being seen as a normal and predictable occurrence with a proper and timely response, the Blackout is seen as a "failure" that should have been "prevented."  

These outages are less disruptive than an ice storm.  However, even though they are as natural and as inevitable as weather-related outages, they are not perceived that way.  The public and their political representatives see weather-related outages as unavoidable, but inevitable technology failures as one hundred percent preventable.

Security people understand that perfect prevention has infinite cost, and that as we increase the mean time between outages we stop, at about twenty years, well short of infinity.  This is in part because the cost of the next increment exceeds the value and in part because we reach a natural limit.  We increase the resistance to failure by adding redundancy and automation.  However, we do this at the cost of ever increasing complexity.  There is a point, at about twenty years MTBF, at which increased complexity causes more failures than it prevents.
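The natural limit described above can be illustrated with a toy reliability model of my own invention. The base failure rate and the per-increment complexity cost are arbitrary; the point is only that the curve has a minimum at a finite level of redundancy:

```python
# Toy model of where added redundancy stops paying: each increment of
# redundancy halves the base failure rate but adds a fixed number of
# complexity-induced failures of its own, so total failures are U-shaped.

def failures_per_year(redundancy: int, base: float = 2.0,
                      complexity_cost: float = 0.02) -> float:
    return base / (2 ** redundancy) + complexity_cost * redundancy

# Total failures fall, bottom out, then rise again as complexity dominates.
best = min(range(15), key=failures_per_year)
print(best, round(failures_per_year(best), 5))
```

With these invented parameters the minimum sits at an intermediate redundancy level; past it, each added mechanism causes more failures than it prevents, which is why the grid's MTBF plateaus rather than climbing toward infinity.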

Much of the inconvenience to the public is a function of surprise.  Since they have come to expect prevention, they are not prepared for outages.  The message should be that we are closer to the next Blackout than to the last.  If you are not surprised and are prepared, you will minimize your own cost and inconvenience.

Sunday, March 26, 2017

Internet Vulnerability

On March 24, 2017 Gregory Michaelidis wrote in Slate on "Why America’s Current Approach to Cybersecurity Is So Dangerous."

He cited an article by Bruce Schneier.

In response, I observed to a number of colleagues, proteges, and students that "One takeaway from this article and the Schneier article that it points to is that we need to reduce our attack surface.  Dramatically.  Perhaps ninety percent.  Think least privilege access at all layers to include application white-listing, safe defaults, end-to-end application layer encryption, and strong authentication."

One colleague responded "I think one reason the cyber attack surface is so large is that the global intel agencies hoard vulnerabilities and exploits..."  Since secret "vulnerabilities and exploits" account for so little of our attack surface, I fear that he missed my point.

While it is true that intelligence agencies enjoy the benefits of our vulnerable systems and are little motivated to reduce the attack surface, the "hoarded vulnerabilities and exploits" are not the attack surface and the intel agencies are not the cause.  

The cause is the IT culture. There is a broad market preference for open networks, systems, and applications. TCP/IP drove the more secure SNA/SDLC from the field. The market prefers Windows and Linux to OS X, Android to iOS, IBM 360 to System 38, MVS to FS, MS-DOS to OS/2, Z Systems to iSeries, Flash to HTML5, von Neumann architecture [Wintel systems] to almost anything else.  

One can get a degree in Computer Science, even in Cyber Security, without ever even hearing about a more secure alternative architecture to von Neumann's (e.g., the IBM iSeries: a closed, finite-state architecture in which operations can take the system only from one valid state to another, with a limited set of strongly-typed objects (data cannot be executed, programs cannot be modified), single-level store, symbolic-only addressing, etc.).

We prefer to try and stop leakage at the end user device or the perimeter rather than administer access control at the database or file system. We persist in using replayable passwords in preference to strong authentication, even though they are implicated in almost every breach. We terminate encryption on the OS, or even the perimeter, rather than the application. We deploy user programmable systems where application only systems would do.  We enable escape mechanisms and run scripts and macros by default.

We have too many overly privileged users with almost no multi-party controls. We discourage shared UIDs and passwords for end users but default to them for the most privileged users, where we most need accountability. We store our most sensitive information in the clear, as file system objects, on the desktop, rather than encrypted, in document management systems, on servers. We keep our most sensitive data and mission critical apps on the same systems where we run our most vulnerable applications, browsing and e-mail. We talk about defense in depth but operate our enterprise networks flat, any-to-any connectivity and trust, not structured, not architected. It takes us weeks to months just to detect breaches and more time to fix them.  

I can go on and I am sure you can add examples of your own. Not only is the intelligence community not responsible for this practice, they are guilty of it themselves. It was this practice, not secret vulnerabilities, that was exploited by Snowden. It is this culture, not "hoarded vulnerabilities and exploits," that is implicated in the breaches of the past few years. It defies reason that one person acting alone could collect the data that Snowden did without being detected.  

Nation states do what they do; their targets of choice will yield to their overwhelming force. However, we need not make it so easy. We might not be able to resist dragons but we are yielding to bears and brigands. I admit that the culture is defensive and resistant to change but it will not be changed by blaming the other guy. "We have seen the enemy and he is us."

Wednesday, January 4, 2017

All "Things" are not the Same

My mentor, Robert H. Courtney, Jr.  was one of the great original thinkers in security.  He taught me a number of useful concepts some of which I have codified and call "Courtney's Laws."  At key inflection points in information technology I find it useful to take them out and consider the problems of the day in their light.  The emergence of what has been called the Internet of Things (IoT) is such an occasion. 

Courtney's First Law cautioned us that "Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment."  This law can be usefully applied to the difficult, not to say intractable, problem of the Internet of things (IoT).  All "things" are not the same and, therefore do not have the same security requirements or solutions.

What Courtney does not address is what we mean by "security."  The security that most seem to think about in this context is resistance to interference with the intended function of the "thing" or appliance.  The examples du jour include interference with the operation of an automobile or with a medical device.   However, a greater risk is that the general purpose computer function in the device will be subverted and used for denial of service attacks or brute force attacks against passwords or cryptographic keys.

Key to Courtney's advice are "application" and "environment."  Consider application first.  The security we expect varies greatly with the intended use of the appliance.  We expect different security properties, features, and functions from a car, a pacemaker, a refrigerator, a CCTV camera, a baby monitor, or a "smart" TV.  This is critical.  Any attempt to treat all these things the same is doomed to failure.  This is reflected in the tens of different safety standards that the Underwriters Laboratories has for electrical appliances.  Their list includes categories that had not even been invented when the Laboratories were founded at the turn of the last century.

Similarly our requirements vary with the environment in which the device is to be used.  We have different requirements for devices intended to be used in the home, car, airplane, hospital, office, plant, or infrastructure.  Projecting the requirements of any one of these on any other can only result in ineffectiveness and unnecessary cost.  For example, one does not require the same precision, reliability, or resistance to outside interference in a GPS intended for use in an automobile as for one intended for use in an airliner or a cruise ship.  One does not require the same security in a device intended for connection only to private networks as for those intended for direct connection to the public networks.

When I was at IBM, Courtney's First Law became the basis for the security standard for our products.  Product managers were told that the security properties, features, and functions of their product should meet the requirements for the intended application and environment.  The more things one wanted one's product to be used for and the more, or more hostile, the environments that one wanted it to be used in, the more robust the security had to be.  For example, the requirements for a large multi-user system were higher than those for a single user system.  The manager  could assert any claims  or disclaimers that she liked; what she could not do was remain silent.  Just requiring the manager to describe these things made a huge difference.   This was reinforced by requiring her to address this standard in all product plans, reviews, announcements, and marketing materials.  While this standard might not have accomplished magic, it certainly advanced the objective.

Achieving the necessary security for the Internet of things will require a lot of thought, action and, in some cases, invention.  Applying Courtney's First Law is a place to start.  A way to start might be to expect all vendors to speak to the intended application and environment of his product.  For example, is the device intended "only for home use on a home network; not intended for direct connection to the Internet."  While the baby monitor or doorbell must be able to access the Internet, attackers on the Internet should not be able to access the baby monitor.