Wednesday, December 28, 2011

Security is about Infrastructure

When I began in computers it was really fun. I was hired as a "boy genius" at IBM Research. We had the best toys. I had my own IBM 650. I was paid to take it apart and put it together again. How great is that? I got to work with Dr. Arthur Samuel, who was programming the IBM 704 to play checkers. My colleague, Dick Casey, and I programmed the 650 to play Tic-Tac-Toe. We had to use it on third shift, but we even had a third of an IBM 705, on which we installed the first Autocoder in Poughkeepsie. I drove my transistor radio with a program on the IBM 1401.

That was just the beginning. For fifty years I have had the best toys. I have three PCs and a MacBook Air. I am on my fifth iPhone, and my fourth iPad. I carry my fifty years of collected music and photographs, an encyclopedia, a library, and a dozen movies in my pocket. It just keeps getting better. It is more fun than electric trains.

One of my favorite toys was the IBM Advanced Administrative System, AAS, five IBM 360/65s and a 360/85. It was so much fun that I often forgot to eat or even go home at night. However, on AAS one of my responsibilities was to manage the development of the access control system. It was great fun to do and fun to talk about. Serious people came to White Plains to hear me. I was invited to Paris, Vienna, Amsterdam, London, Helsinki, and Stockholm to talk about my fun and games, about how we provided for the confidentiality, integrity, and availability of our wondrous system.

However, as seems to happen to us all, I grew up, and finally old. My toys, fun, and games became serious. Some place along the way, most of the computers in the world were stitched together into a dense fabric, a network, into a world-wide web. While still entertaining, this fabric had become important. It supports the government, the military, industry, and the economy.

Without any plan or intent, driven mostly by a deflationary spiral in cost and exploding utility, the fabric had become infrastructure, part of the underlying foundation of civilization. It had become a peer of water, sewer, energy, finance, transportation, and government. Moreover, it had become THE infrastructure, the one by which all of the others are governed, managed, and operated.

We build infrastructure to a different standard than toys or anything else not infrastructure. Infrastructure must not fall of its own weight. It must not fall under the load of normal use. It must not even fall under easily anticipated abuse and misuse. In order to prevent erroneous or malicious operation, the controls for infrastructure are reserved to the trained operators and kept from the end users.

No special justification is required for this standard. The Romans built their roads, bridges, and aqueducts such that, with normal maintenance, they would last a thousand years. And so they have. The Hoover Dam and the Golden Gate Bridge were built to the same standard. With normal maintenance, and in the absence of unanticipated events, they will never fail. (They may be decommissioned but they will not fail.) No one quibbled with Henry Kaiser over the cost or schedule for the dam.

However, our fabric was not driven by design and intent but by economics. No technology in history has fallen in price and grown in power as fast as ours. While we tend to think of it in terms of its state at a point in time, it continues to grow at an exponential rate. Its importance can hardly be appreciated, much less over-stated.

Given the absence of design and intent, it is surprisingly robust and resilient. While not sufficient for all purposes to which we might wish to put it, it is sufficient for most. With some compensating design and intent, it can be made sufficiently robust for any application.

One word on "easily anticipated abuse and misuse." On September 12, 2001, what could be easily anticipated had changed forever.

As security people, we are responsible for the safe behavior, use, content, configuration, and operation of infrastructure. As IT security people, we are responsible for the only international infrastructure, the public networks. As users, we are responsible for not abusing, misusing, or otherwise weakening it.

Note that ours is the only infrastructure that, at least by default, contains weak, compromised, or even hostile components and operators. It is the only one that, by default, has controls intended for the exclusive use of managers and operators right next to those for end users. Our infrastructure also, by default, connects and exposes the controls of other infrastructure to most of our unprivileged users. It is our job to compensate for and remediate these conditions.

Our roles, responsibilities, privileges, and special knowledge give us significant leverage over, and responsibility for, the infrastructure of our civilization. Everything that we do, or fail to do, strengthens or weakens that infrastructure. That is why we are called professionals and are paid the big bucks.







Thursday, December 15, 2011

Security is about Efficiency

For the first thirty years I was in the computer security business, I often wondered what I was doing. I didn't have a product or a service. I did not have a customer. Computers were so sparse that they were not even important. Was I making a difference?

Part of me really wanted to go back to project management at which I was better than the average bear. The projects might not have made an existential difference but I knew that I had done them well. Satisfying.

Even today, I get discouraged. When I look at health care and see that safety and privacy are being used as an excuse not to automate health records, I get discouraged. When I look at the payment card industry, I get discouraged. When I look at SCADA, I get discouraged.

When I read about on-line banking being used to rip off another small business, non-profit, or municipality I get angry. I get angrier still when the courts and the regulators permit the banks to escape their fundamental responsibility to ensure that all transactions are properly authorized.

I have the good grace, not to say good sense, to be chagrined when I hear that another enterprise has been completely compromised because a user clicked on an obvious bait message, or even an artfully crafted one.

I am sad when I see that High School Harry Hacker has grown into the organized criminal of the day and is being recruited as a spy by governments all over the world. I am shamed when so-called "security researchers" publish exploits for obscure vulnerabilities rather than work-arounds for those that are being actively exploited. I am shamed when rogue hackers identify themselves as "security consultants" and claim that they are just trying to be helpful, just doing what security people do.

I feel a sense of failure when I see that US government security, the best in the world for decades, has all but fallen apart: that it mis-classifies, under-vets, under-supervises, and over-clears. Under these circumstances WikiLeaks is inevitable. However, WikiLeaks might be tolerable if it were not typical, if the entire government were not such a large source of leaks of sensitive and personal information.

We security people are probably not unique among professionals for holding ourselves to very high expectations and being disappointed with our results.

In order to keep my perspective, sanity, not to mention my self respect, I have put a post-it on my bathroom mirror. I read it several times a day. It says, "We are not about perfection."

That's right. It is not my job to prevent all leaks and losses. It is not my job to make the world safe for democracy, or even the Internet safe for all applications. It is not my job to prevent all the Seven Deadly Sins, the motives for the things that we do wrong. I am not responsible for every unchecked input, much less preventing all the SQL-injection and buffer over-flow attacks that exploit them.

It is not my fault that the banking industry has consistently and persistently ignored my sage advice to confirm all changes of address to the old address and unusual transactions out-of-band, to change from mag-stripe and PIN to smart-cards, and to use strong authentication.

While I have to advocate that all Internet-facing web applications should use the OWASP Enterprise Security API, I am not responsible for most failures to do so. While I am responsible for using every teaching and training hour efficiently, I should not condemn myself for failing to communicate the entire canon in an hour or for failing to rationalize all media coverage and political thought.

Our job is to make the world work better with us in it than it would be without us. Fortunately we have such leverage that that is not very difficult. While we do not make the world perfect, we make an existential difference.

As security professionals, we are expected to know that some losses are cheaper to tolerate than to prevent, some damage cheaper to repair than resist, that no matter what they think they want, no one really wants perfect security. We are expected to know that the cost of security curve is not linear, that to halve one's risk, one must double one's cost, that the better one's security already is, the less efficient the next dollar spent.

Our job is to ensure that all of the systems, applications, networks, and enterprises in our care get the protection that is appropriate to their sensitivity and the environment in which they operate, and that expensive security measures are reserved only for the targets that require them. Said another way, our job includes avoiding the use of inefficient measures. It is more about efficiency than effectiveness. If we prevent a loss or save the cost of a protective measure, in either case, the impact falls right through to the bottom line of the enterprise, the line called profit, the one that measures enterprise efficiency and contributes to the productivity of the economy.

Our job is to ensure that the sum of the cost of losses and the cost of security is at a minimum. That is impossible to know at any given point in time. It is a balancing act. It is not stable; it moves as the threat changes and the cost of technology falls. It takes both measurement and management to approach it over time. However, that is our job and our opportunity. That is how we make the world work better and justify our existence. If it were easy, they would give it to someone else.
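
To make that concrete, here is a toy model. It is a sketch only: the loss curve and the numbers are invented for illustration, and the "double the cost to halve the risk" shape is taken from the paragraphs above, not from any real data.

```python
# A stylized sketch, not a real risk model. "To halve one's risk, one
# must double one's cost" means residual loss falls off as 1/spend.

def expected_loss(spend: float) -> float:
    # Anchored (arbitrarily) so that $10,000 of security spend leaves
    # $1,000,000 of expected loss; doubling the spend halves the loss.
    return 1_000_000.0 * 10_000.0 / spend

def total_cost(spend: float) -> float:
    # The quantity we are trying to minimize: losses plus security.
    return spend + expected_loss(spend)

# Sweep candidate spending levels for the minimum of the sum.
candidates = range(10_000, 1_000_001, 10_000)
best = min(candidates, key=total_cost)
print(f"spend ${best:,} -> total cost ${total_cost(best):,.0f}")
# -> spend $100,000 -> total cost $200,000
```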

Only when we rationalize our expectations of ourselves, communicate those expectations to our employers and clients, and measure ourselves appropriately against them, will we be satisfied with our jobs, appreciated as professionals, and paid the big bucks.








Monday, November 21, 2011

Security Culture for the Cloud

It is difficult to miss the trend toward "outsourcing," to have things done by others that traditionally had been done by employees within the enterprise. This trend is facilitated in part by "The Cloud," the Internet and the incredible range of services, fee and free, that are offered on it.

I used the example of Stanford Hospital and Clinics, which transferred patient information to a collection agency only to have it posted to a public site on the Internet, a gross and egregious violation of the privacy of their patients.

I left you with the idea that our professional objective is to arrive at a state in which all parties understand their roles and responsibilities and carry them out in such a way as to produce the intended results.

I had decided to elaborate on that advice this week. I came up with a list of policy, technical, and legal guidance for use with outsourcing.

I was going to suggest that an enterprise should have a policy that spells out its risk tolerance in general and in regard to the use of outside sources in particular. It might specify which data and applications could be outsourced and which could not. For example, it might specify that the enterprise's intellectual property and personal information should not be outsourced. It might also specify insurance coverage for any risk that exceeds the specified tolerance.

I planned to say that agreements should enumerate the laws, regulations, and contracts to which the parties are subject and all standards that they had adopted. They should also spell out any limitations such as the requirement to disclose information in response to legal service.


I was going to suggest that enterprises should prefer to do business with vendors that were part of such organizations as the Cloud Security Alliance and the Cloud Auditing Data Federation Working Group (CADF). I would have suggested that using enterprises might want to participate in the Cloud Standards Customer Council.

I would have stressed that your contract should provide for audit or for a service auditor report. I would have cautioned you about the limitations of service auditor reports, for example, that they are limited to controls asserted by the auditee and that they are as of the time of the audit.

I had planned to suggest that agreements should be service by service and application by application.

I intended to suggest that agreements should enumerate all existing controls, who is to operate them, and under what conditions; that the agreements should spell out the intended use of the controls as well as what record the use of the controls would produce. Examples of such controls include identification, authentication, access control, encryption, administration, provisioning, confirmations, messages, alerts, alarms, measurements, and reports.

I would have emphasized the importance of provisioning controls in The Cloud and pointed out that compromise of those controls might enable others to use services and charge them to you. I had even planned to stress that all use of such controls should result in automatic out-of-band confirmations. I would have given a caution about error-correction and vendor over-ride controls.

Fortunately, while doing my research, and before I had embarrassed myself with all of this irrelevant advice, I came across a report in the New York Times by Kevin Sack, published October 5, 2011. Here is part of what I learned.

First, there was no evil here, no recklessness, not even gross negligence, just bad judgment all around. To the extent that there was any motive, it was efficiency, just getting the job done. No greed, no lust, not even sloth.

Stanford Hospital and Clinics (SHC) is a 600-bed general hospital. It is not Kaiser Permanente or UPMC but it is a major enterprise in its community.

Multi Specialties Collection Service (MSCS) is a collection agency for medical services in the same market as SHC. It bills about $0.5M per year and employs 5-10 people. One might call the relationship asymmetric, one-sided.

The identity and role of the sender of the information are not public, but accessing and sending it should have required significant management discretion and rare privileges.

The receiver of the information was a contractor to MSCS. He often represented himself as an officer of MSCS and had an MSCS e-mail address. Been there, done that. He decrypted the data, put it in a spreadsheet, and, among other things, gave it to an applicant for a job with him.
While SHC says the information was for "permissible hospital billing support purposes," the consultant says that it was for a "study." In any case, the information was not passed in the normal course of "collections," the service. I believe that both the sending and the receiving of the information probably were outside the agreement between SHC and MSCS.

The actual posting to the public web-site, StudentofFortune.com, was by a job applicant to the consultant. He had given the applicant the spreadsheet to convert it to charts and graphics as a test of skill.

The posting was a violation of the SoF Terms of Use, which require the user to "represent and warrant that (they) (a) own or have sufficient rights to post (their) Contributions, on or through the Site, and (b) will not post Contributions that violate Student of Fortune or any other person's privacy rights, publicity rights, copyrights or contract rights."

Two things seem clear. First, everyone involved has egg on their face except StudentofFortune.com. Their Terms of Use were obvious, concise, plain, and clear. One cannot register for their site without acknowledging and agreeing to them. When the violation was called to their attention they responded on a timely basis. I would gladly testify for or against any of the other parties.

Second, none of the policy, technical, or legal measures that I wanted to recommend would have prevented the breach. If asked in advance, management might well have accepted the risk that so many controls and people would fail at once. However, SHC is now the target of a $20M class-action lawsuit and will almost certainly be penalized by the regulators. MSCS has lost a major client, has closed its web site, and is not answering its phone.

I am not sure that the penalties fit the crime but they sure are getting our attention. However, to the extent that the breach impedes the urgent move to electronic health records, or even the efficient use of cloud resources, perhaps they are proportional.

I like to think that my lists above are useful, if not necessary, but they are clearly not sufficient or even the place to start. No, we are back to management and security 101. There is no substitute for training and supervision.

"Outsourcing" makes this even more important. Note that StudentofFortune.com is typical of free or low-cost collaboration "cloud services" that help our employees get their jobs done and are within the discretion of most of our employees. We are going through a major change in how we organize production and resources. It is being driven by the falling cost of information technology. As this new model matures we need to evolve a culture of personal due care, one in which people automatically ask "should I do it" rather than simply "Is it efficient?" A culture in which people automatically consult with others before they act, a culture of caution.

Security must start with our most effective controls, training and supervision. We should use our other tools only to the extent that they are more efficient. Then we will be called professionals and be paid the big bucks.

Thursday, November 17, 2011

On Resisting Phishing Attacks

At Secure World in St. Louis I heard a presentation on "Cybercrime" by Brian Mize, a Special Federal Officer with the FBI. One of Brian's points was the number of such crimes that begin with a successful, crafted bait e-mail message. Brian reported that more than half of the crimes investigated by the St. Louis Cyber Squad, on which he serves, began with such a message.

While there were many steps in the attacks, they began with bait messages, specifically because they are so efficient. By definition, if one puts bait before a sufficient number of people, someone will take it. The interesting thing is how small that number has to be. In one group of 527 targets, one in ten took the bait.
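
The arithmetic behind that efficiency is just the complement rule. A minimal sketch, using the one-in-ten rate reported above (the other group sizes are mine, for illustration):

```python
# If each target bites with probability p, the chance that at least one
# of n targets bites is 1 - (1 - p)**n, which approaches 1 very quickly.

p = 0.10  # observed rate: one in ten took the bait
for n in (1, 10, 50, 527):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:4d} targets -> P(at least one bite) = {at_least_one:.4f}")
# With 527 targets, the probability is 1.0000 to four decimal places.
```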

The bad news is that only one click by one user may be sufficient to contaminate the entire enterprise. The good news is that almost all attacks against enterprises start in the same way.

The bait of choice no longer appeals to fear, greed, or lust. Rather it appeals to curiosity. Human beings are naturally curious; curiosity has survival value. Mass bait will be of the form "Look what Justin Bieber did." Alternatively, it may exploit the disaster news of the day. However, messages directed to the enterprise, while still appealing to curiosity, are much more artfully crafted. For example, the bait that compromised RSA was a pdf identified as "2011 Recruitment Plan." If this came to you from someone whose name you recognized, would you be suspicious? Would you resist it? Remember when we preferred PDFs to Word documents for safety?

The obvious defense against bait attacks is awareness training. However, as with campaigns like "Just Say No," there are fundamental limits to the effectiveness of such training. We are left with the fact that a successful attack only requires one temporary failure of our training.

I met Brian later and we agreed that we really need an effective and efficient artificial intelligence, AI, for identifying such messages. We both identify and reject one or two bait messages a day that get past our spam filters. If we can identify them, surely Google could.

However, I heard another presentation by Steve Ward, Vice President of Marketing for Invincea, speaking at Data Connectors at Bridgewater's at the end of Fulton Street. He talked about a product that took a different approach. It looked at the second step in the attack. It seems that one bites, i.e., "takes the bait," by clicking on a button. It turns out that almost all of the buttons are URLs. Steve says that even if one cannot stop everyone from biting, one might be able to cut the line just as they bite. Only rare messages are bait, but all bait messages contain URLs.

The URLs link to an executable that corrupts the user's system. It effectively contaminates the network, all machines to which that machine is peer connected. In far too many enterprises, that is the entire enterprise network.

Note that contamination requires user privileges, perhaps ADMIN, at least the ability to create or modify an executable. Part of the problem is that users that do not require such privileges have them by default. On the other hand, we cannot limit all such privileges.

However, Steve Ward points out that controlling the process that parses the URL could prevent the contamination. His product takes an architectural approach: it installs as an application, becomes the parser for all URLs, and interprets them in a virtual machine so as to prevent contamination of the real machine. Even if a privileged user takes the bait, her machine will not be contaminated.
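
I have not seen Invincea's code, so what follows is only a schematic of the architectural idea, not their implementation: install as the system's URL handler and hand every link to a disposable, isolated environment instead of the real browser. The `disposable-browser-vm` command and the function names below are hypothetical stand-ins for whatever virtualization layer a real product would use.

```python
# Schematic sketch only; the command below is hypothetical.
import subprocess

def launch_in_disposable_vm(url: str) -> None:
    # Hypothetical: start a throwaway virtual machine whose only job is
    # to render this one URL, then discard it, contamination and all.
    subprocess.run(["disposable-browser-vm", "--one-shot", url], check=False)

def handle_url(url: str) -> None:
    # Installed in place of the browser as the default handler, so the
    # real machine never parses or executes what the link delivers.
    launch_in_disposable_vm(url)

if __name__ == "__main__":
    handle_url("http://example.com/2011-recruitment-plan.pdf")
```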

Efficient security relies upon layers and redundant measures. We must train users to recognize and resist bait. We must limit their privileges. We must configure their systems to resist contamination. We must layer and compartment the enterprise network to resist the spread of contamination. We must control access to sensitive data. We must monitor, detect and remediate. We must resist exfiltration of our data. Of course, it is because knowing and doing this is difficult that we are called professionals and are paid the big bucks.


Wednesday, November 2, 2011

FBI Proposes Alternate Network

At the recent ISSA Conference in Baltimore, FBI Executive Assistant Director, Shawn Henry, proposed a "new alternate secure Internet," separate from the public Internet, to operate the nation's critical infrastructure. While there is clearly a need for better security, I am going to argue that this proposal reflects a poor understanding of both the problem and of networks.

The government is justifiably concerned about the existential vulnerability that has arisen because of the connection of infrastructure controls, i.e., supervisory control and data acquisition (SCADA), to the public networks. This connection permits at least parts of the infrastructure to be operated from any place in the world. To the extent that the controls are insecure, they can be abused or misused to cause the infrastructure to be mis-operated.

To the extent that the infrastructure itself is fragile, mis-operating it may cause damage that cannot be efficiently remedied. "Experts" have speculated that the infrastructure might be maliciously operated in such a way as to shut down our entire economy for days to weeks. Value and savings might be destroyed. Millions might starve or freeze unless we could rebuild in days what it has taken us decades to create.

However unlikely such an event, to the extent that such a vulnerability is implementation-induced, rather than fundamental, it should not be tolerated. Making controls intended only for the use of a few privileged operators visible to everyone is unnecessary and, in this case, reckless. It is analogous to putting a copy of the controls of the autopilot for an airliner between every two seats.

However, it is specifically because the infrastructure is fragile that the controls are connected to the public networks in the first place. The operators understand that the infrastructure must be "operated;" that its continued service requires that it be monitored, adjusted, "provisioned," and configured to compensate for changes in inputs or load or the inevitable failure of components. While some of this operation is automated, some of it requires timely human intervention.

The operators of these controls have connected them to the public networks on the implicit assumption that the risk of being unable to monitor and operate the controls on a timely basis would be far bigger than the risk of connecting them. Few of them see their connection in the context of all the other connections. They understand that no single connection represents a major risk; they are only just waking up to the realization that the collection constitutes an existential vulnerability.

Part of the problem in the Critical Infrastructure space is the culture. Given the sensitivity of these controls, one would expect them to be hidden behind virtual private networks and strong authentication. For reasons of convenience, for the most part they are not and that is the root of the problem.

In order to provide for around-the-clock, but somewhat sparse, remote monitoring and control, the operators have connected the controls to not just one, but both of the public networks. While this kind of remote operation is good for the enterprise, and may even be strategic, many of the early connections were tactical, more by and for the convenience of the operators than for the enterprise.

In order to improve the chances that they can always connect when necessary, that is, compensate for any network failure, many of the controls are connected both to the public switched telephone network (PSTN) and the Internet. While they use the public wide area networks, they use them to create a limited number of relatively short point-to-point connections, for example, from the operator's home to the plant.

While the public networks permit world-wide any-to-any connectivity, and while the operators might actually monitor and operate their systems from the end of a plane trip, that is the exception, not the rule. The result is that anyone may use the public networks to send a message to any of these controls. They may be able to connect and operate the controls.

In the early days, most of these controls were purpose-built, offered only a limited command interface, and operating them required a lot of special knowledge. Even finding them would have been difficult, much less misusing them. Today, many have already been identified; most of them have graphical user interfaces and require much less special knowledge. Moreover, such intelligence as operator manuals and other documentation may be independently available on the world-wide web.

Behind the controls, there may be operational dependencies between components such that operation of one may influence the behavior of others. For example, shutting down the external power to a nuclear reactor may cause the reactor to shut down. These effects may cascade. The electrical grid is the most inter-dependent of the infrastructures and almost everything else is dependent upon it.

What might a "separate" network look like? How separate might it be? Well, it might be as separate as the two public networks are from one another. For example, it might have a separate "address space." Like these two networks, it might use different signaling, connection setup, and protocols.

On the other hand, the Internet, the digital network, originally piggy-backed for connectivity on the PSTN, the analog network. Today, for reasons of efficiency, all wide area networks share the same glass and copper fabric and most analog traffic is now encapsulated in digital. While much of that fabric is less than a decade old, it has taken us more than a century to achieve near world wide coverage. Surely a new separate network would exploit the existing fabric rather than attempt to replicate it.

For security reasons, it might be desirable for the networks to have different user populations. However, that would mean that a user of the alternate network could not use the public one. Not very likely.

The single public fabric that we use today emerged as a number of public and private networks coalesced around the Arpanet. When I first became an e-mail user, I had a list of tens of gateways and paths from the IBM network to other networks. We would use nested addresses of the form ((foo@foonet)@ibmgatewaytofoonet.com). Sometimes these addresses were two or three layers deep. An X.400 or proprietary address might be nested inside an IP address or vice versa. Routing through these gateways often required a great deal of special knowledge. Gradually those gateways gave way to intelligent routing. X.400 and other forms of addressing gave way to IP addressing.

The Internet is defined, and has evolved, as the collection of all networks that are connected to one another, that communicate in Internet protocols, or that are connected via gateways (think firewalls) that use that protocol. We did not set out to have one network; there was no design or intent. The Internet came about for economic reasons. The value of a network goes up with the number of potential connections. Therefore, the propensity of two networks to connect goes up with the square of their size. The unfortunate corollary to this is that, if we were able to provide a separate network, the users would respond to the economics by connecting them together again.
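
The arithmetic behind that economic pressure is simple: a network of n nodes offers n(n-1)/2 potential connections, so joining two networks of sizes a and b adds a*b connections that neither had alone. A quick sketch (the sizes are invented):

```python
# Potential pairwise connections in a network of n nodes.
def pairs(n: int) -> int:
    return n * (n - 1) // 2

a, b = 1_000, 9_000
print(pairs(a) + pairs(b))                 # two separate networks: 40,995,000
print(pairs(a + b))                        # one merged network:    49,995,000
print(pairs(a + b) - pairs(a) - pairs(b))  # gain from connecting = a*b: 9,000,000
```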

So for a number of cultural, technological, and economic reasons, a completely separate "alternate" network, no matter how desirable, seems unlikely. While still unlikely, a more viable alternative might be one or more virtual private networks (VPNs) exploiting the underlying fabric of the public networks.

Moreover, most of the advantages of such a network or networks can be achieved with much cheaper alternate mechanisms such as strong authentication, end-to-end encryption, and firewalls and other proxies. Even if there were hope for the kind of alternate network envisioned by Director Henry, it would still be our job to apply those mechanisms while we were waiting for it to emerge. It is also necessary to hide all of the information about the infrastructure controls that is gratuitously available to all but needed only by the few.

The status quo is the result of a large number of individual but reversible choices. It is unacceptable. It is our job to fix it. For that we are called professionals and are paid the big bucks.


Tuesday, October 25, 2011

On Understanding Biometrics


A decade or so ago, I had an extended dialogue with David Clark, of the Clark-Wilson Model, about biometrics. He knew that the remedy for a compromised password was to change it. Since he knew that biometrics could not be changed, he could only understand how they worked for about fifteen minutes at a time, about the same length of time that I can understand the derivation of an RSA key-pair.
Unlike passwords, biometrics do not rely upon secrecy; they do not have to be changed simply because they are disclosed. Biometrics work because, and only to the extent that, they are difficult to counterfeit.
We have all heard about the attacker who picked up a latent fingerprint in gelatin and replayed it, or the image scanner that was fooled by a photo. Good biometric systems must be engineered to resist such attacks.  For example, Google has patented a (challenge-response) scheme to resist replay in a facial recognition system by directing the user to assume one of several expressions that the system has been trained to recognize.  
While the fundamental vulnerabilities of passwords are disclosure and replay, the fundamental vulnerabilities of biometrics are replay and counterfeiting. These vulnerabilities are limitations on the effectiveness of the mechanism, rather than fatal flaws. What we must ask of any biometric system is how it resists replay and counterfeiting.
At a recent NY Infragard meeting there was discussion of biometrics that illustrated that this confusion persists. In this case, at least one person seemed to be convinced that the secrecy of the stored reference had to be maintained in order to preserve the integrity of the system.
As with properly stored passwords, one cannot go from the stored reference to what one must enter. We solved the problem of disclosure of the password reference by encrypting it with a one-way transform at password choice time. (In practice, all too many passwords are stored in a reversible form.) At verification time we apply the same transform to the offered password and compare the encrypted versions. By definition of "one-way transform," it is not possible to go from the stored reference to the clear password.
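
For the curious, here is a minimal sketch of that scheme in Python. PBKDF2 stands in for whatever one-way transform a real system would choose; the point is only that we store and compare transforms, never the clear password.

```python
import hashlib, hmac, os

def make_reference(password: str) -> tuple[bytes, bytes]:
    # At password-choice time: store the salt and the one-way transform.
    salt = os.urandom(16)
    ref = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, ref  # safe to store; cannot be run backward

def verify(password: str, salt: bytes, ref: bytes) -> bool:
    # At verification time: apply the same transform and compare.
    offered = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(offered, ref)

salt, ref = make_reference("correct horse")
print(verify("correct horse", salt, ref))  # True
print(verify("wrong guess", salt, ref))    # False
```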
We do not store an image of the face, the password, or a recording of the voice. Instead we store a one-way, but reproducible, transform. To make sure that the transform is reproducible, we may collect multiple samples so that we can test for reproducibility.
However, as with passwords, biometrics are vulnerable to replay attacks. While we cannot discover the biometric from the reference, we might capture an offered instance and replay it over and over.
Like passwords, biometrics may be vulnerable to brute force attacks, but unlike passwords, they are not vulnerable to exhaustive attack, if only because it is impossible to try every possibility. While a password reference can be stored in 8 or 16 bytes, a biometric reference may be hundreds or low thousands of bytes. In an exhaustive attack against a password, each unsuccessful trial reduces the uncertainty about the correct answer, in part because we know the maximum size. This may not be true of a brute force attack against a biometric, where the maximum size may be arbitrary.
For most purposes we use brute force and exhaustive as though they were synonymous but they really are not. In brute force, we submit a sufficient number of trials to succeed in finding a (false) positive. An exhaustive attack is a special case of a brute force attack in which we are trying to find one integer in a known set. While the reference for a biometric may be too large to be exhausted, there are many values that will fit.
This introduces the issue of false positives. There is only one password that will satisfy the transform; it is at least one integer value away from those that do not satisfy. We key in the integer. However, we "sample" a biometric; there will be many biometric samples that will fit. Depending upon the precision of our system, it might even be possible to dupe the system, a false positive. On the other hand, it is also possible for a given sample of a valid biometric to be rejected, a false negative.
Biometric systems can be tuned so that they achieve an arbitrary level of security; we are looking for a transform that minimizes both false positives and false negatives. Unfortunately we reduce one at the expense of increasing the other. That is to say, the less likely it is for the system to permit a false positive, the more likely it is to generate a false reject. We tune the mechanism to achieve an acceptable ratio of one to the other for a particular application and environment.
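
A toy illustration of that tuning, with invented numbers: model the match scores of genuine users and impostors as overlapping distributions and sweep the accept threshold. Real systems measure these rates empirically, but the shape of the trade-off is the same.

```python
import random
random.seed(1)

# Invented score distributions: genuine users score higher on average,
# but the two populations overlap, so no threshold is perfect.
genuine  = [random.gauss(0.70, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.40, 0.10) for _ in range(10_000)]

for t in (0.45, 0.55, 0.65):
    far = sum(s >= t for s in impostor) / len(impostor)  # false accept rate
    frr = sum(s < t for s in genuine) / len(genuine)     # false reject rate
    print(f"threshold {t:.2f}: FAR {far:.3f}  FRR {frr:.3f}")
# Raising the threshold lowers FAR and raises FRR; lowering it does the
# opposite. One is reduced only at the expense of the other.
```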
My preferred biometrics are the visage, i.e., the face, and the voice. These share the advantage that they can be reconciled both by a computer and by non-expert human beings. Infants can recognize their parents, by face and voice, by the age of six months; it has survival value. Many share the experience of recognizing someone, that we have not seen in years, from one or two words, spoken over the telephone.
Until very recently, machines could not reconcile faces as fast as humans, indeed not fast enough for many applications. However, Google now has software that can not only authenticate an individual from an arbitrary image but identify them within seconds.
For most of the time that they have been in use, fingerprints could only be reconciled by an "expert," but we now have computers that can do it even better than the experts. In fact, recent studies using these computers have suggested that even these experts are all too fallible. Nonetheless, non-experts can independently verify fingerprint identification.
Think about DNA. While it discriminates well, it contains so much information that it takes a long time to reconcile and the results cannot be independently verified by amateurs. To some extent we will always be dependent upon instruments and experts.
Since biometrics share with passwords a vulnerability to replay, a password plus a biometric does not qualify as "strong authentication." Therefore, the preferred role of biometrics is either as an identifier or as an additional form of evidence in a system of strong authentication, one in which another mechanism, e.g., a smart token, is used to resist replay.
Because there is only a vanishingly small chance that two samples of a biometric will be identical, any sample that matches one previously submitted could be thrown out as a possible replay.
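
A sketch of that check, assuming the system keeps a record of samples it has already accepted:

```python
import hashlib

seen: set[str] = set()  # digests of previously accepted samples

def accept_sample(sample: bytes) -> bool:
    digest = hashlib.sha256(sample).hexdigest()
    if digest in seen:
        return False  # bit-for-bit identical to a prior sample: likely replay
    seen.add(digest)
    return True       # novel sample; pass it on to the matcher as usual

print(accept_sample(b"scan-data-001"))  # True: first sighting
print(accept_sample(b"scan-data-001"))  # False: identical, so suspect
```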
Part of the special knowledge that identifies us as security professionals, and for which we are paid the big bucks, is that knowledge about the use, strengths, and limitations of biometrics.

Wednesday, October 12, 2011

Who is Responsible for Security?

On September 30, 2011, SANS Institute NewsBites reported the following story:


--European Union to Introduce Liability Rules for Cloud Vendors (September 28 & 29, 2011) The European Union (EU) plans to introduce the "Binding Safe Processor Rules," which would hold vendors of cloud services in the EU liable for data security breaches. Vendors would sign up for what amounts to an accreditation. Consumers are likely to feel safer doing business with a company that is willing to stand behind its services. The rules are an update to the Data Protection Directive. The companies will be required to demonstrate their compliance with certain data protection standards for approval under the rules. Current law holds data owners responsible for data loss.


They cited two sources, SC Magazine and V3.co.UK, The Frontline.

http://www.scmagazine.com.au/News/275173,eu-cloud-vendors-liable-for-breaches.aspx

http://www.v3.co.uk/v3-uk/the-frontline-blog/2112906/eu-rules-allow-cloud-companies-legally-customer


The NewsBites editor added the following comment by me.


[Editor's Note (Murray): The devil is in the details and the rules may be helpful. However, the idea that one can transfer the responsibility for protecting the data from the owner to the custodian by fiat, or any other way, is absurd on its face. The decisions about protecting the data cannot be separated from the decisions about collecting it and using it.]

While I confess I misread the details, they are where the devil is hiding. It turns out that this rule is nothing like it sounds either in its name or in this report. Instead it was sought by Amazon, Google, and others to say that EU enterprises may rely for security of their data upon service providers that are certified by an EU country as complying with these rules and without regard to location. It is a response, in part, to the fact that Europeans will not do business with US service providers because they are subject to the USA Patriot Act. They are concerned that they would be accused of improper reliance. The EU has never been happy with the idea of data on Europeans being stored in the US.

This week NewsBites reported this story:

--Stanford Hospital Pins Breach Responsibility on Third-Party Billing Contractor (October 6, 2011) Stanford Hospital & Clinics says that a data security breach that compromised the personal information of 20,000 patients is the fault of a third-party contractor. One of the patients filed a US $20 million lawsuit against Stanford following the breach disclosure last month. The data were exposed because a spreadsheet handled by a billing contractor somehow was posted to a student homework help website. The compromised information includes names, diagnosis codes and admission and discharge dates.

http://www.computerworld.com/s/article/9220626/Stanford_Hospital_blames_contractor_for_data_breach?taxonomyId=17

Now, I have to tell you, the Hospital tells a really great tale. Mind you, it does not excuse them for the breach. However, it might have confused a jury if they had not attempted to try it out in the media first.

Seems they turned the data over to a collection service, MSCS, in encrypted form. This is allowed under HIPAA rules but requires that they have security of the data as part of their agreement with the service provider.

Needless to say, the collection agency, MSCS, decrypted the data. It converted it to a spreadsheet before turning it over to an "unauthorized" third party. This third party posted it, as an attachment to a request for assistance, to a site called Student of Fortune, where it remained for a year. Student of Fortune is a site where students can solicit assistance with their homework assignments. It seems this third party wanted assistance with a graphical representation of the data in the spreadsheet. It would probably be unfair for one to infer that someone familiar with such a site is a recent student. There must be some truth here. You can't make this stuff up.

It seems clear that there is plenty of blame to go around here. However, the question is not blame but responsibility, ethical, legal, financial, and otherwise.

Public and private enterprises are increasingly relying upon contractors and other enterprises, "partners," to carry out duties and responsibilities that historically have been performed by employees and within the enterprise. Therefore, it is timely to revisit the question of responsibility.

Both of these stories suggest that the responsibility rests with the custodians of the data. The first story suggests that the responsibility can be assigned to the custodian by order of the state or the consent of the custodian. The second suggests that the responsibility moves with the data.

Ultimately, the legal questions raised by these stories will be decided by courts. I can hardly wait. I am a great fan of court records and decisions. While subject to error, they are much more reliable than the statements of the parties.

In Information Assurance, we have traditionally assigned protection duties and responsibilities in terms of roles, i.e., management, staff, owners, custodians, and users. We have argued that, by definition and default, the responsibility to protect the data rests with the "owner," the manager responsible for all the decisions about the data.

For example, the owner makes the decision to collect and store the data. The owner, again by definition, makes the decisions about who can use the data. The owner makes the decision as to the sensitivity of the data, how much to spend on protection and how much risk to accept. The owner's responsibility includes communicating these decisions to custodians and users.

It is difficult to see how this control and discretion can be separated from the responsibility for its exercise.

Our colleague, Bob Johnston, likes to argue that "When entrusted to process, you are obligated to safeguard." However, as a custodian I would respond by asking how much and at whose expense? Clearly a custodian would not want to spend more than the owner would and would expect to be reimbursed or compensated for what he does spend.

What is really at issue here is how we identify and select custodians, describe their duties, compensate them for those duties, what penalties they must pay for breach of those duties, and to whom. Obviously, this begins with negotiations between the owner and the custodian. I will continue to argue, both as matters of definition and practicality, that the responsibility for the results, the success, of those negotiations must start and end with the owner.

As a matter of law and good public policy, we want the responsibility in the same hands as the discretion. The alternative would permit the owner to pick the low cost service provider and then escape responsibility for any consequences. One might call that moral hazard.

Service providers are in the role of custodians of the data. Their duty is to the owner of the data, the party that pays them, not to the subjects of the data. They must be diligent in the execution of the duties that they have agreed to and for which, in part, they are being paid.

Stanford Hospital had a duty to their patients to protect the data. That duty did not diminish when, for their own convenience and efficiency, they decided to give a copy to another party, a party of their choice. That they encrypted it for purposes of transfer did not protect it from that agency, to whom they also gave the key. The agency's duty was to Stanford Health, to protect the data in accordance with their agreement, the provisions of which we are left to guess. While it is unlikely that Stanford Hospital specifically contemplated the possibility that MSCS would give a copy to a contractor, their agreement should have resisted it.

One might argue that as a collection agency, the agency owed a duty to the subjects of the data. However, it would be hard to argue that that duty relieved Stanford Health of its responsibility.

As security staff, some for the owners, some for the custodians, our role is to assist the business managers and lawyers in expressing the security requirements in such a way that all parties understand their duties and are likely to discharge them in a manner that will produce the intended results. Our job does not stop there; we must go on to measure and report the results, note variances from the expected and intended, and recommend corrective action on a timely basis. "Timely" is before, rather than after, any breach. To the extent that this is difficult, we are called professionals and are paid the big bucks.

Thursday, September 15, 2011

The Terrorists Won

I know that is not a popular position and this is not a popular time to take it. I expect to take some flack for saying it. I identify with the little boy that pointed out the naked emperor, but the emperor was not a danger and the little boy had no obligation to say anything.

I have had the Principle of Proportionality on the list to talk about for a while but something always trumped it. This weekend has elevated it.

Terrorism is defined as an attempt to effect political change through fear and intimidation, usually by attacking civilians. When an act of terror produces political change out of proportion to the act, by definition, the terrorists win.

For example, the Blitz was terrorism. Dresden was terrorism. Hiroshima and Nagasaki were terrorism. The IRA bombing of London was terrorism. 9/11, as terrible as it was, barely ranks with the least of these. The Blitz did not effect the intended political change. It did not turn the British people against the war. Dresden did not achieve the capitulation of Nazi Germany. The terrorists did not win.

In response to 9/11, we have fought two major wars at a cost of more than 100 thousand lives, $1T, and our reputation as a moderate and moderating influence in the world. We are locked in those wars to the tune of $2B per week with no honorable way to withdraw. That is called disproportionate. The terrorists won.

We have betrayed our own principles. We have engaged in torture, imprisoned people without charge or trial, and spied on our own citizens. We have denied Habeas Corpus, public trials, a jury of one's peers, and surrendered the Common Law principle of "innocent until proven guilty." That is called disproportionate. The terrorists won.

We are more divided than at any time in this century. We are so divided by party that good policy is no longer politically possible. We are divided by region, religion, and origin. The terrorists would delight.

We now spend $8B a year on TSA. Of all the bad things that can happen when one gets on an airplane, this addresses only the least of them. That is called disproportionate. The terrorists won.

We have created a huge, expensive, and secret bureaucracy. There are 1000 of them for every identifiable terrorist in the world. They have built themselves a headquarters second only to the Pentagon. We did not even notice. Speaking of the Emperor's suit, no politician has the courage to question this budget. We are no more than one election from having this monstrosity, in an excess of caution or zeal, turned against the citizen. That is called disproportionate. The terrorists won.

As I write this, CNN is reporting three stories. One is about the catastrophic flooding of the Susquehanna River, a river that is awesome even when it is not in flood. The second is about the loss of electric power to 5M people in the southwest on a day when temperatures reached 115 degrees Fahrenheit. The third is about a "specific, credible, but uncorroborated," not to mention "secret," threat, linked to Al Qaeda, and involving three "terrorists." That is called disproportionate. The terrorists won.

We have become a fearful and timid people. We are incapacitated by fear. We behave as though terrorism were an existential threat, the equivalent of thermo-nuclear war. It is sad to see the tourist in the airport, justifying the removal of her diaper as "it makes us safe." This is called disproportionate. The terrorists won.

Even when their plots fail, they win. Can you say "No shoes, no belts, no suspenders, no diapers, no liquids, no nail files?" That is called "disproportionate," not to mention "locking the barn after the horse is stolen."

At their most ambitious, the terrorists never imagined that we would afford them such disproportionate leverage. They won big time.

Of course "security" has also won. There are at least ten of us today for every one of us a decade ago. Dozens of new security and intelligence businesses have sprung up along the beltway, mostly on contract to DHS.

Proportionality is the fundamental principle of security. "Do not spend more mitigating a risk than tolerating it will cost you." A fundamental principle of our professional ethics is that we must not give unwarranted comfort or unnecessary alarm to our constituents. While I understand how difficult that balance is, I suggest to you that we have not served our constituents well over the last decade. We have not deserved the right to be called professionals or to be paid the big bucks.

Yes, I did see the photo of Presidents Bush and Obama. I did hear Renee Fleming sing Amazing Grace and the New York Philharmonic play the Resurrection Symphony. I saw the Concert from the Kennedy Center. I know that New York's Bravest are still ready to go into harm's way to protect me. I am hopeful.

However, there will be other terrorist attacks, some successful. Hopefully these will be at the limits of our abilities, but it is simply not possible even to identify, much less deter, all the crazies. Our leaders have already set us up to see these as "failures of security," as justification for even more drastic measures. That is what government does. If what they are doing does not work, they simply do it harder.

It is our professional responsibility to ensure that America sees these attacks as the inevitable price of freedom, as the price of our values, as the price of greatness. Then we will be professionals and deserve the big bucks.