Monday, September 13, 2010

What does it Mean to Say a System is Trusted?

Do not trust any computer that you cannot carry; prefer those that you can put in your pocket.

Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment.
- Robert H. Courtney


That little aphorism of Bob Courtney's has become a habitual touchstone for me. If it has not given me gravitas, it has at least kept me from appearing foolish by opining on the "security" of systems without regard to the threat or what they are being used for. It keeps me from equating the security of the application with that of the system or vice versa. It enables me to use a system for one application that is not suitable for others. It enables me to recognize when the security of a system that has served well is no longer adequate. (Many seem to get by simply saying that no system or application is secure. One can clearly get one's name in the paper by saying that. It is not particularly helpful.)

The client was a property and casualty insurance company. They had some fairly progressive programs under way but both their IT and security programs were mature and stable. We were called in because they expected that they were going to have a number of new e-commerce applications done on the public network. They wanted a security management system to ensure that these applications would be done conservatively.

The method that we used was to propose a straw-man for the management system and then refine it in ever larger meetings. One of the practices that we recommended was that connected applications be done on dedicated hardware; we wanted to be sure that these applications were free from outside interference or contamination. In an early meeting the client asked that this recommendation be changed to say that these applications be done on "trusted systems." We quickly realized that that was a better way to say what we were trying to say. It included our recommendation but was stated as an objective rather than a specific practice.

Then we discovered that the reason that they wanted it restated was because they intended to run the application on their MVS mainframe. "MVS," we said. "You trust MVS?" "No," they said, "We trust our MVS. We have had it for twenty years, we manage it scrupulously, and we trust it." The auditors nodded their heads and then we nodded ours.


Part of the problem is that we came to the question the wrong way. In the early days, computers were serially reused and had no shared resources. Most of the applications were not sensitive. The question simply did not arise. After a decade or so, we began to recognize that there was a small potential for information to leak from job to job because of the failure to wipe primary storage between jobs. Information left in memory by job n might be available to job n+1.

The problem really emerged with true shared-resource computing in the sixties. Even here the problem was tolerable. The systems were operated by a single enterprise, most of the users knew one another, and they shared similar goals and objectives.

By the late sixties, user populations had begun to be numbered in the high tens to low hundreds, and the modern question was upon us: the potential for information to leak from one user to another. One clear method by which it might happen was the interference of one process with another. The problem now had a name. Research began. While we thought that it was important, computer use was still so sparse that, in practice, it was not.

However, these were the days of Grosch's Law, when we believed that shared-resource systems were inevitable and the scale of sharing would continue to rise forever. We believed that one should always use the biggest computer one could afford. We believed that computers should be scaled to the enterprise. Thus, the problem of data security was framed as that of security in multi-user, multi-application systems. We had framed the question in a way that made it almost impossible to talk about, much less answer. We knew that there was an objective called data security, but the environment in which we wanted to talk about it was so complex that language failed us.

It was at about this time, 1968 or 1969, that I first met Dr. Willis Ware of the Rand Corporation. He came to White Plains for an IBM briefing on computer security. One item on the agenda was my master work, security for IBM's Advanced Administrative System. This system was intended for 5000 users and ultimately served several times that number. It was a multi-user, multi-application system, but it was operated in a static mode, i.e., programs could not be changed while the system was operating. Users could not program and programmers could not use.

I was justifiably proud of the access control for the system. It was the largest and most complete system of its kind and it worked. The operating system was hidden from the users and the access controls for users to applications ran at the application layer. Dr. Ware listened politely and then dismissed the whole effort as trivial. Years later, when we had become friends, I found that he did not even remember it. He dismissed it on the basis that it did "not address the general case, the one where any user could write and execute a program of his own choice."

So the question of whether a system was secure had to be addressed not only in the context of the sharing of arbitrary applications and data by an arbitrary number of users, but also with no assumptions about the flexibility or generality reserved to any of those users. One might well conclude that such a question excludes any useful answer, but that did not keep us from trying.

Tomorrow we will look at some of the attempts.

Wednesday, August 25, 2010

Are you a target of "Advanced Persistent Threat" sources or attacks?

"Advanced Persistent Threat" (APT) is a term of art. It was coined by the USAF to label an attack pattern that they had identified and that they thought was emanating from a nation state. It came into the security jargon when it was used to describe an extended and resourceful attack reported by Google.

These attacks are "advanced" in the sense that they are coordinated and multi-phased. The phases run from target selection and vulnerability identification, through domain contamination and information exfiltration, to intelligence analysis and exploitation.

These attacks are also advanced in the sense that there are knowledge, skills, and abilities specific to each phase; no single individual is likely to be expert in all phases. One guy crafts the bait while another selects the malicious code. The attacks are advanced in that the threat source brings together the necessary experts and coordinates their activity across phases and time.

The attack is persistent in the sense that it continues through all the necessary phases, and the threat source is persistent in the sense that it will invest whatever time and resources are necessary for success.

While the term really refers to an attack, rather than a threat, to the extent that the attack has a rate and a source, it implies a "threat."

Is this something that you need to worry about? Is your enterprise a target?

The short answer is that if you are a Fortune Five Hundred enterprise with intellectual property, you are probably a target of choice of one or more nation states. If you are a financial services company or a payment card industry service provider, you are a target of choice for organized and resourceful criminal enterprises.

This is not to say that the rest of us might not be targets of opportunity for these threat sources, but only that their attacks against us are not persistent or continuing. Individuals may be "victims" of payment card fraud but it is the enterprise that is the "target."

It would be nice if one could detect such attacks early. Then one could at least determine whether or not one was currently under attack. However, the attacks usually begin with low-intensity activities such as vulnerability probes or the distribution of bait messages. While intensive probes are easy to recognize, the same probes spread across enough time may not be obvious. If bait messages were easy to detect, they would not work at all; in fact, they will be as artfully crafted as necessary to work, and there will be a "sufficient" number of them that one or more victims will take the bait. Only after the bait has been taken are the other phases of the attack triggered. While it is somewhat easier to automate the detection of these later phases, by then some data may already have leaked and some systems may already have been compromised.
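
To make the "low and slow" point concrete, here is a minimal sketch of the kind of aggregation that detection requires; the event format, the window, and the threshold are illustrative assumptions, not a description of any particular product.

    # A minimal sketch of "low and slow" probe detection: the same per-source
    # count that flags a fast scan within the hour will also flag a patient
    # attacker, provided events are aggregated over a long enough window.
    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(days=30)   # long window: catches probes spread over weeks
    THRESHOLD = 100               # probe events tolerated per source per window

    def slow_scan_suspects(events, now):
        """events: iterable of (timestamp, source_ip) probe records."""
        counts = defaultdict(int)
        for ts, src in events:
            if now - ts <= WINDOW:
                counts[src] += 1
        return [src for src, n in counts.items() if n >= THRESHOLD]

    if __name__ == "__main__":
        now = datetime(2010, 8, 25)
        # one probe every six hours for a month never trips an hourly alarm,
        # but accumulates well past the threshold over the full window
        patient = [(now - timedelta(hours=6 * i), "203.0.113.7") for i in range(120)]
        print(slow_scan_suspects(patient, now))   # ['203.0.113.7']

The hard part, of course, is not the counting but deciding what to count; bait messages do not announce themselves this way.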

Note that while the compromise of your intellectual property may be a threat to the health and continuity of your enterprise, the consequences may not be limited to your enterprise. They may include damage to the vitality and growth of our economy and, perhaps, even to "homeland security." In this light, "best efforts" or "hit and miss" security is not good enough.

"Defense in depth" must be the order of the day; push your defenses up and out and your resources in and down. We can no longer afford an enterprise architecture that relies primarily on perimeter protection such that one person clicking on a bait message compromises the entire defense.

Tuesday, August 3, 2010

Electro-magnetic Emanations

During my last years at IBM, Wim van Eck published his paper about reading screens using TV receiving equipment. The press loved it. There were TV shows on the BBC demonstrating reading screens at a show and reading a document from outside Scotland Yard.

Van Eck's experiment was based in part on the following:

· The screens of the day were character only,
· they were CRTs,
· the CRTs were noisy, and
· the noise mimicked standard broadcast TV signals.

Van Eck simply cobbled together antennas, amplifiers, and receivers and displayed the signals on a standard TV screen.

I decided to see if I could replicate Van Eck’s results. I purchased from him a replica of his experimental rig and gave it to two engineers, one senior and one junior, in the Raleigh lab next to the plant that manufactured 3270 terminals. They assured me that it would be a piece of cake to reproduce the experiment.

It proved to be much more difficult than they anticipated. On one trip, they did manage to show me a screen that lit up like the one that they were trying to read at a distance of two meters. It was clear that the image on the destination screen was related to the one on the origin screen, but the content was less than readable. As often happens with engineers, these two lost interest in the effort after they were satisfied that, given enough time and resources, they could replicate the results, but long before they had actually done so.

In the more general case, when estimating the cost of attack, engineers often discount the value of their own special knowledge and skills. They think, "Everyone knows (or can do) that." They also tend to think that if an attack is feasible, it will be used.

These are the esoteric attacks from which Mission Impossible is crafted. In fact, one can expect an attack to be used only if it is efficient. The set of cases in the world in which such an attack is both suitable for the intended application and environment and cheaper than all alternatives is vanishingly small.

The leakage of information via electromagnetic signals is a vulnerability without a threat, a non-problem. Not all vulnerabilities are problems, and not all problems are the same size.

Of course, today the cost of attack is even higher. Screens are bit-mapped graphics, not character-only. They are LCD, not CRT. Their emanations do not mimic broadcast TV signals. While they still leak, they are much quieter than those of a generation ago. Unless your applications are very sensitive, your adversary a nation state, and the rest of your security so good that this is your weak link, spend your security resources elsewhere. Remember that Mission Impossible style attacks are undertaken only against those targets that are very sensitive and that have very good security.

Saturday, July 17, 2010

"Data leaks! Get over it."

On the Real Risk of Thumb-drives

The first disk drive that I ever saw was the size and weight of a refrigerator and gave off as much heat. It would hold one megabyte. It was so expensive that it was far more likely to be used for tables than for files or databases. At the same time, the storage medium of choice was punched paper, cards or tape. A gigabyte in punched cards would fill a railroad box car.

The first hard drive that I bought was 10MB and cost me $3,000 at the IBM employee price. I thought I would never use it up. One can now buy a terabyte in a cigar box for $115 (I kid you not!) and for $50 one can buy 320GB that will fit in one's shirt pocket.

This week I bought an 8GB micro-SDHC card. It is the size of my fingernail. I paid $18 plus $4 shipping and handling, although it could have been sent first class mail for less than $1. A great portion of the cost is in the transaction, not the materials, nor even the technology.

I thought that the SD card, the size of a postage stamp, was as small as a storage device would ever get; anything smaller is hard to label or keep track of. However, the devices in which the storage is used are getting smaller, and thus the microSD.

About every decade or so, as storage gets smaller, denser, and cheaper, managers begin to worry that its very existence will encourage data theft. One could carry a 2400-foot reel of tape in one's overcoat or send out half a dozen in the waste paper basket. Multiple diskettes could be carried in a shirt pocket. Said another way, it has been a long time since the weight or the volume of the data was a deterrent to its theft.

However, we are going through the panic again. This time it is "USB drives." For example, a recent press release said "Lumension’s 2008 Annual Report and Threat Predictions for 2009 finds removable media as “the leading cause of data breaches…."

Dr. Peter Tippett reports, "It is endless talk among very large company CIO’s and CSO/CISOs that I speak with every week.. I think the driver is that everyone has a small case that happened in their shop, or that they heard about among their peers.... Then they have a “wouldn’t it be horrible if” worst case scenario they dream up relative to their own data.. And voila! It is the worst thing."

On the other hand, in the 500 cases that Verizon reports on in its Data Breach Report, there were no cases in which thumbdrives (or other small portable media) played more than an incidental role. In no case did it appear necessary to the success of the breach, much less was it "causal."

Even DoD leadership has been panicked by ‘thumb drives.’ Rather than control access to the data, they are trying to resist the technology. They no longer permit, at least as a matter of policy, portable digital media inside secure computing facilities, only paper. In some commands they do not permit the use of thumbdrives on (user owned) laptops attached to their networks. Anyone else see the irony here?

Now we all understand the limits of such controls. Modern storage is now so dense that one can conceal and carry an entire database inside any body cavity. (Yes, in certain extreme instances, authorities do search body cavities; this is usually law enforcement, not security, and in no case is it routine.) One can no more resist leakage by resisting media, digital or analog, than one can resist the use of computers, networks, or, for that matter, paper. The economics are simply against it. We pay extra for small and dense.

The way to resist data leakage is to restrict access to the sensitive, proprietary, or personally identifiable information near the source (e.g., at the database server) and hold people accountable for its use. It is difficult to do, but it is orders of magnitude more efficient than chasing the tiny new medium du jour. It is far easier to control what data is copied than to control where it is copied or what happens to the copy. Data access control is media independent. Said another way, it works for all media, including the network, now and in the future, not just the one that is topical.

When I was a small boy and first went out to play without supervision, my mother said, "Son, never ever take thumbdrives from strangers." When I got a little older, my daddy said, "Son, never ever put your thumbdrive in a strange machine." I assume that someone cautioned my sister not to let anyone put their thumbdrive in her machine.

The real risk of portable media is not data leakage but system contamination.

Wednesday, May 19, 2010

Encryption by Default

A recent survey was reported as follows:

IDG News Service - Employees at many U.S. government agencies are using unsecure methods, including personal e-mail accounts, to transfer large files, often in violation of agency policy, according to a survey.


Pasted from <www.computerworld.com/s/article/9176889/Survey_Gov_t_agencies_use_unsafe_methods_to_transfer_files?taxonomyId=17>

Stephen Northcutt, writing as an editor of SANS Newsbites, observes:

I agree that too many people use insecure means to move data; disagree the root cause is no access to encryption.

A lot of people have access to encryption for email at work and yet consistently send data in the clear. We discuss this in the class I author and teach, and I think we as a community are becoming numb to the dangers we face from the Internet. Pretty Good Privacy (PGP) has been around almost 20 years now. In the early days, when you went to conferences, they had PGP signing parties and almost all the security professionals I interacted with had PGP and a key. Now, almost nobody seems to use it outside of FIRST, AV Research and similar enclaves...(stephen@sans.edu).


In another context this week I was reminded of a lesson I learned a long time ago, "One must make the desired behavior at least marginally easier than the wrong behavior." Almost by definition, "harder to do it right" is too hard.

Twenty years ago we were very concerned that user credentials would be compromised in the network. Today, with activity more than a thousand times what it was twenty years ago, credentials are compromised at the end points, not in the network. The reason is that for data in motion we use encryption. We use SSL. Thanks to Netscape, we use it by default.

When we say our prayers at night we should say, "Thanks for Netscape." Netscape understood that encryption in the World Wide Web was essential, like brakes on a car, not optional. They made it standard, not a separately priced feature. It was included in the function and price of the server. Thinking back on my time at IBM, I have often thought that had IBM invented SSL, they might well have priced it as an option and it would have failed. The way we price things often influences how we think of them and how we use them.

Even though the software is not separately priced, SSL has to be turned on and, at the level of its current default use, it has a significant cost. Nonetheless, we use it pervasively and users have come to expect it. We use it by default. If either party expects it, the other party can hardly avoid it.

Note that the problem addressed by the survey is identified as "file transfer," much of which is not even done in the network but on portable media, on what we used to call the "sneaker net." Much of it is ad hoc, with no standard procedures. Management has not told employees how to transfer data, much less how to do it securely.

The data leaks in dozens of ways. It leaks when users make gratuitous copies and then lose them. It leaks when backup copies fall off the back of the truck. It leaks when hackers compromise servers. It leaks through the user interface of FTP servers and in other ways too numerous to enumerate. The user does not even contemplate most of these leakage modes and believes that the ones that he does contemplate are too rare to worry about.

Stephen Northcutt points out that PGP can be used to resist most of these leaks. Even simpler tools like passwords on .doc and .pdf files would resist many of them. PKZip and sftp are powerful tools to help us. However, most of these solutions require user involvement and a high level of user knowledge, not to mention judgment and initiative.

The solution to the problem includes making the encryption of all data easier than not encrypting it, making the encryption of data at rest the default, not the exception. It includes providing encryption by default across enterprises. It includes resisting gratuitous copies at the end points, even where the use requires that the data be in the clear. It includes management direction and automated procedures to implement that direction.
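
For illustration only, the following sketch shows what "encryption of data at rest by default" can look like at the file level; it assumes the third-party Python cryptography package, and the single local key file is a stand-in for real, managed key handling.

    # A minimal sketch of encryption-at-rest by default: the only write path
    # encrypts, so even a gratuitous copy of the stored file is ciphertext.
    # Assumes "pip install cryptography"; the local key file is a placeholder
    # for proper enterprise key management.
    from pathlib import Path
    from cryptography.fernet import Fernet

    KEY_FILE = Path("store.key")

    def _key() -> bytes:
        if not KEY_FILE.exists():
            KEY_FILE.write_bytes(Fernet.generate_key())
        return KEY_FILE.read_bytes()

    def put(path: str, data: bytes) -> None:
        """Write path: data is always encrypted before it reaches the disk."""
        Path(path).write_bytes(Fernet(_key()).encrypt(data))

    def get(path: str) -> bytes:
        """Read path: decryption is transparent to the caller."""
        return Fernet(_key()).decrypt(Path(path).read_bytes())

    if __name__ == "__main__":
        put("note.bin", b"policy numbers and claim data")
        print(get("note.bin"))                      # plaintext for the caller
        print(Path("note.bin").read_bytes()[:24])   # ciphertext on the disk

The point is not the particular library but the shape: when the default path encrypts and decrypts transparently, doing it the right way is no harder than doing it the wrong way.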

A tall order you say? Suppose I told you that encryption by default is routine, automagic, in many enterprise and government domains and even across domains? True. Just for an example, Lotus Notes protects files and databases at rest, by default, using encryption. Even if one makes a gratuitous copy of the file on one's laptop or thumb-drive, it is encrypted. Notes provides for automatic safe exchange across domains. It provides for automatic key management that is transparent to the users. Obtaining copies of these files and databases in the clear requires both privileges and work. In this environment, it is easier to do it the right way. Indeed, it is so easy that many, not to say most, users do not even know that it is happening.

Though I believe that it is under-sold and under-appreciated, I am not here to sell Lotus Notes. I use it merely as an example of "encryption by default." I believe that encryption by default should be the standard in all government agencies and most private enterprises, and that we have at least one successful model of how to achieve it.

Wednesday, May 12, 2010

Security in "The Cloud"

Plus ça change, plus c'est la même chose. (The more things change, the more they stay the same.)

When T.V. Learson was leading IBM, he was asked by a customer whether his IT should be centralized or decentralized. Learson responded that whatever way he was currently organized he should change it. Said another way, "What goes around, comes around."

In the early days of shared resource computing, the computer and most of the data resources were owned by the enterprise. "Data security," as we called it then, meant that what the enterprise said, it intended, and what it intended, it did. We tried to help them think about it by suggesting the properties of the data that the enterprise most wanted to conserve. In some proportion of one to the others, the enterprise wanted the data to exhibit confidentiality, integrity, and availability.

To the extent that Grosch's Law described the economics, i.e., efficiency increased with scale, the economics favored centralization. Similarly, protection and control were also centralized. The risk was information leakage. The control of interest was Data Access Control, usually implemented as an optional process of the operating system.

In some cases use was metered and cost allocated but often cost was simply absorbed by the enterprise. This was in part because the meters and metrics of cost and value were immature. Metering and cost allocation were expensive and often had perverse effects on usage and uses.

At some point, Grosch's Law gave way to Moore's Law. Efficiency began to favor the small. When the scale of computing changed, it was not so much that data in the glass house moved to departmental and personal systems, although copies clearly did, as that data in departmental paper files got sucked into the departmental and personal systems, increasing the number of electronic records. At the same time, all computers were being connected to the Internet, making them and their data more vulnerable to attack by outsiders.

At about the same time as the scale was changing, we went from talking about "data security" to "information assurance," reflecting a shift in priority from confidentiality to integrity. Protection and control moved from centralized to distributed. The risk shifted to system contamination with malicious code. While we still used data access control, we relied more upon control of access to systems and applications. Other controls of interest included anti-virus, firewalls, and cryptography.

At this writing, we are discussing what security means in "cloud" computing. The name, cloud, for this style of computing comes from the cloud symbol that we used in network diagrams to represent that which was not known or was beneath the level of abstraction at which we were working.

NIST defines cloud computing as "a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources" (e.g., networks, servers, storage, applications, software, and other services) "that can be rapidly provisioned and released with minimal management effort or service provider interaction."

However useful one may find the definition, some examples may help to appreciate the concept. The earliest emergent examples really define the cloud. Perhaps the most important of these is the Domain Name Service (DNS). E-mail and the World Wide Web are also on the list. Note that these are collaborative services, instantiated by the cooperation of many edge processes. For most users, their cost is included in the cost of their connection.

An early example is Hotmail, an advertising supported personal e-mail service. A more recent competitor to Hotmail is gmail from Google. As a personal service gmail is ad supported but Google also offers a service to "outsource" corporate e-mail. Instead of operating its own e-mail servers, an enterprise contracts with Google.

E-mail is an example of an application-level service. Dropbox is an example of a private file service in the cloud. Carbonite is an example of a backup service. IBM and EMC offer segment-level storage backup; indeed, they will operate an enterprise's entire storage network for it. Amazon offers a complete web storefront. Think about almost anything that is hidden behind a standard service interface; it is available as a service in the cloud.

Those of us who were around in the days of "shared resource computing" think "what goes around…." In this analysis, the "cloud" is simply another shared-resource computer. After all, it looks the same to the end users. At some level or another, cloud services protect what they offer from contamination, leakage, or loss.

However, it is really not quite, indeed nowhere near, that simple. The cloud is really not just another computer or another use. Rather it is an abstraction, a model for looking at computers and computing. It is on the same list as serial re-use, time-sharing, host-guest, and client-server. However, unlike these, cloud computing is not designed and implemented top-down but emerges from the bottom up.

The computing resources may include any combination of connectivity, computing capacity, instantiated processes, servers, storage, and services, including software (SaaS) and application services. While the resources are rapidly, and usually automagically, allocated and provisioned, use is metered and cost is allocated.

Security in the cloud turns not only on the axis of centralization vs. decentralization but also on one of scale, and on another of organization. Let's think about the last first.

In the cloud, the services are used by multiple users or organizations but owned by none of them. While most of the data may belong to the users, the hardware, software, and many of the controls are owned and operated by another enterprise, the service provider.

Each organization's interest in the security of the data is different. For example, the owner of the data may rank confidentiality, integrity, and availability, in that order, and might prefer that the data disappear before it leaks. The service provider, on the other hand, ranks availability, integrity, and confidentiality, and would prefer that the data leak rather than that he be unable to deliver it when it is asked for. One can easily imagine a scenario in which the service provider has so many copies of the data that he cannot erase them all on demand, perhaps not ever.

Users of the T-Mobile smart phone, the Sidekick, were offered a service to back up the names, phone numbers, calendars, to-do lists, and other data that they had stored in their phones. This service was implemented by an enterprise ironically named Danger. However, it was offered to the user by T-Mobile under the T-Mobile brand. That there was a second enterprise involved was not apparent to most users.

Danger had a server crash. The service was clearly down, and users' data was at least temporarily unavailable, perhaps lost altogether. To complicate matters, Danger was in the process of being acquired by Microsoft.

The story had a happy ending. In less than a week, Microsoft/Danger recovered the data and made it available on a new server. However, it illustrates another aspect of the cloud that impacts security; that is, you may not know with whom you are doing business or upon whom they rely.


The abstraction, The Cloud, hides the fact that it, the cloud, is a mechanism for combining, composing, and connecting (other cloud) resources to provide services with those properties, i.e., on-demand, easily and rapidly provisioned, that are described in the NIST definition of the cloud. A cloud application may reside in a cloud virtual machine, using cloud connectivity, cloud storage, cloud data, and even other cloud applications. Each of these resources may be offered by a different vendor, and components may be added or subtracted on the fly. The service level agreements (SLAs) for these resources are probably "best efforts," the default service level for most information technology.

This is potentially a security nightmare for both buyers and sellers. Of course, a proper understanding of the problem is an essential step to a solution. More on this later.

Sunday, March 28, 2010

Rockefeller-Snowe and Security Credentials

Legislation working its way through Congress may impose requirements for credentials on information assurance practitioners and professionals.

Two editors of SANS Newsbites responded as follows:

[Editor's Note (Pescatore): Since software engineering is still an oxymoron, there really are no meaningful software developer or IT system architect certifications. So, trying to say IT security professionals need certification will be good for the companies that will sell such certifications but really does not make sense from the point of any improvement of security.
(Paller): Cisco and NSA and SANS are compiling the available body of knowledge on what works and what doesn't work in security engineering.
They will be doing a workshop in June for people who will be hiring security engineers and architects.


I responded to them:

I agree with John that “software engineering” is an oxymoron. I argue that the application of engineering principles to software is very beneficial but very rare.

I agree with Alan that those same principles can be usefully applied to security and I commend his and any efforts to encourage it.

However, it seems to me that the certification requirements in the Rockefeller-Snowe Bill are more akin to the certification of security professionals that we have been engaged in for the last twenty years.

I would not be so dismissive of these programs as John is. Whatever else has resulted from these programs, they have had a huge impact on the documentation and spread of security principles and other knowledge. While this may be more arguable, they have also encouraged the professionalism of the practice of security.

It seems reasonable to me that agreement on the principles should come before certification or licensing. However, practice precedes either and continues even in their absence. Thousands of years of practice of engineering preceded its codification and licensing. Since we do not have the freedom to wait, we should encourage all three activities in parallel.


A dialogue between John Pescatore and myself follows:

John: Hi, Bill – I blogged on this in a bit more detail at , where I summarized:

That’s not to say there is no value in security certification as one element in evaluating security personnel.

Bill: The justification of legal requirements for minimum credentials in a professional practice goes way beyond evaluating individual members of the practice.

John: But turning it into a requirement tends to make it set the height of the bar just at that level – that would not be a good thing.

Bill: Perhaps. Would you argue that the requirements for a medical license or a CPA are static? The federal government has been requiring credentials for aviation since 1917. Would you argue that the "height of the bar" is still at the 1917 level? I can testify from my own knowledge that the requirements for the CISSP are not static. When I qualified, the program tested only for knowledge. Today one is tested for different knowledge as well as the skill to apply that knowledge.


John: My major issue is that there are no federal requirements for IT architect certification or software developer certification or database administrator certification. That is because certifications in those fields are largely meaningless, because software is *not* an engineering discipline yet. This is why GASP and the like, and the Security System Engineering Capability Maturity model and the like (I had involvement in the early 1990s with that one) really didn’t go anywhere. The Brits have had several certification programs that really haven’t done much to advance the state of the practice, either.

Bill: Agreed.

John: So, to have a federal information security certification requirement really is not going to be meaningful. It will just turn into a boon for certification programs.

Bill: I might agree that the field is not sufficiently mature for a federal requirement or even that the federal government should be involved in any credentialing program. On the other hand, their credentialing program in aviation has been very successful. Security operation of IT is at least as mature as the operation of airplanes in 1917.

I do not agree that credentialing programs benefit only the programs. The practice of engineering and medicine were both dramatically advanced by the credentialing programs that established minimum entry requirements to their practice.

John: Requiring training and education and job experience is so, so, so much more valuable in this kind of thing than requiring certification. This is pretty standard advice I give at Gartner to clients trying to evaluate security consulting personnel.

Bill: I grant you that certification is not sufficient for evaluation of professionals without granting that there are no benefits to minimum standards. Whether or not those benefits are sufficient to justify their requirement at law is another issue. Licensing of professional engineers was a reaction to infrastructure failure. Licensing of physicians and lawyers was a reaction by the competent minority to rampant incompetence. I do not argue that the practice of security is a peer of these professions. I do argue that they were advanced by their requirements for credentials.

I think that the inclusion of credentialing in Rockefeller-Snowe is a reaction to the public perception that we are building infrastructure and that our efforts are simply not good enough. I am not sure that the remedy will be effective, much less that it is justified, but I am satisfied that it will not make things worse. It certainly will not "set the bar" at today's level.

Monday, March 22, 2010

It works!

That's right. Security works. Your enterprise security program works.

Consider the following question. What part of the attacks that hit your perimeter does it resist?

a) None of them
b) Few of them
c) Enough of them
d) Most of them
e) All of them

Most of you said "c" or "d." We call that "working."

A few of you may argue that only "e" can be called working, to which I respond, "Be careful what you ask for, you might get it."

Do you know how to resist even more attack traffic? If that were the only objective, of course you do. You do not do it because resisting attack traffic is not the only objective. Even resisting "most attacks" involves at least slowing, if not rejecting, some legitimate traffic. It may also involve tolerating a resourceful attack.

Said another way, within the tough choices that face you, the security perimeter is doing what you intend. We call that "working."

Security is a hard problem; there are no perfect solutions. It requires the exercise of informed judgment. That is why we are called professionals and are paid the big bucks. Such as we are and given the conditions that we face, we do the best we can. That is called "working."

Wednesday, February 24, 2010

The Worst Case Scenario

At the direction of the board of directors, the IT staff of a national property and casualty insurance company developed a backup and recovery contingency plan (as contrasted with a business continuity plan). They found themselves in a bind between the board, who said the plan cost too much, and the auditors, who said that it was inadequate.

Many of us have been there and I was called in to assist, i.e. to "consult." I was not terribly surprised by what I found. It seems that every time the staff thought that they had a plan the auditors would identify another case in which it would not work. The staff would add a new capability to address the new case.

The board tended to look less at the capabilities than at the total cost. Admittedly, the board of a property and casualty company looks at cost a little differently than might a bank or a manufacturer. The insurers ask themselves, how much insurance must I write to cover that cost? How much coverage would I offer for that amount if it were paid to me as a premium? How much coverage could I buy for that much money? They could not even judge the capability in the plan, but they "knew" that its cost was too high.

Of course, the problem was in the failure to properly identify the objectives of the plan. Allowing the auditors to hypothesize cases clearly was not working. No matter the plan, they were always clever enough to come up with a new case in which it would not work.

A plan that can deal with the "worst case" has infinite cost. What case then? What case must the backup and recovery plan of a national property and casualty insurer deal with?

We concluded that such a company would have to recover from any disaster that both it and the majority of its policy holders survived. Certainly it has an obligation to recover from the destruction of its own premises. It must survive a community disaster like an earthquake. It must survive a regional disaster like Katrina. Of course, these are far short of the "worst case," short of thermo-nuclear war or the end of the world. Of course, the scope of the event was not the only thing that had to be agreed upon but also the expected rate.

Finally, IT had to agree with the business as to the mean-time-to-recovery and the point of recovery for each application. The faster one wants the application back, the more one can expect to pay. The closer one wants to recover to the point of failure, e.g. close of business on the day before the event, the more one can expect to pay. More on these on another day.
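
A back-of-the-envelope sketch makes the recovery-point trade-off visible; every number below is hypothetical and stands in for figures that the business and IT would have to agree on.

    # A toy model of the recovery-point trade-off: the more often you take
    # backups, the less work you stand to lose, but the more you pay.
    # All figures are hypothetical, for illustration only.
    def annual_backup_cost(interval_hours: float, cost_per_backup: float = 200.0) -> float:
        return (365 * 24 / interval_hours) * cost_per_backup

    def expected_annual_loss(interval_hours: float, value_per_hour: float = 1000.0,
                             events_per_year: float = 0.1) -> float:
        # on average, half an interval of work is lost per event
        return events_per_year * (interval_hours / 2) * value_per_hour

    if __name__ == "__main__":
        for hours in (1, 4, 12, 24, 168):
            cost = annual_backup_cost(hours)
            loss = expected_annual_loss(hours)
            print(f"backup every {hours:>3}h: cost ${cost:>12,.0f}, "
                  f"expected loss ${loss:>8,.0f}, total ${cost + loss:>12,.0f}")

Whatever the real numbers, the shape is the same: chasing a recovery point of minutes costs far more than one of hours, and it is the business, not IT, that has to decide where on that curve to sit.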

While these things are difficult to agree upon, such agreements are essential to an effective and efficient plan. They are necessary to being able to satisfy both the auditors and the directors.

Thursday, February 11, 2010

You may enjoy this.

https://financialcryptography.com/mt/archives/001223.html

Bears, Brigands, and Dragons

A Fable

In medieval times, the populace was terrified of dragons. Everyone knew someone who knew someone who had seen one. Many knew someone who knew someone who had lost a relative to dragons.

However, when they built the castle, usually in stages over decades, getting stronger with time, they always stopped long before the castle became dragon-resistant, much less dragon-proof. After all, dragons are awesome creatures; they are very strong and they fly. How high would the walls of the castle have to be to keep the dragon from just flying over?

So, they built their walls to resist bears and brigands. They fully intended to get around to resisting dragons but it was so expensive that somehow it never got done. After all, bears and brigands were much more numerous than dragons.

In modern terms, we would call the dragon strategy risk acceptance. This is sort of like our strategy for greater than Richter 7.0 seismic events and greater than Saffir-Simpson Category V storms. Needless to say, the watch mounted the walls every day, with their bows and arrows, ready to repel the dragons, but they never saw any.

Every twenty years or so we have a massive power blackout, embracing multiple states and tens of millions of homes and lasting for several days. It is usually the result of the simultaneous failure of a highly unlikely number of components. The media and the politicians scream, "There be dragons. Why weren't you prepared?" The industry says mea culpa and promises to do better next time.

Actually, they do do better the next time. They raise the walls. They replace older components with new ones that have a longer mean-time-to-failure. They add redundancy so that they are better able to tolerate component failures, and they automate the response to those failures. Of course, all of this adds cost. Long before the mean-time-to-failure of the system reaches infinity, they stop.

In fact, in about twenty years, their best efforts will be overwhelmed once more. The knee of the curve that plots mean-time-to-massive-failure against cost seems to be at about twenty years. I have now lived through three such blackouts and hope to live to see a fourth. Mitigating it will be expensive but not as expensive as preventing it.

No matter how high we build the walls, the damned dragons just fly over.

Tuesday, February 9, 2010

"Effective" Security

Nothing useful can be said about the effectiveness of a security mechanism except in the context of a specific application and environment. -- Robert H. Courtney

Perfect security has infinite cost.


Security people often reject novel security mechanisms "because they know how to break them." That is to say, they are not effective. On the other hand, they may continue to rely heavily on other mechanisms, like passwords, that they also know how to break. Most of this is simply habit. It does not really have anything to do with effectiveness.

Effectiveness has nothing to do with whether or not something can be broken. Anything and everything created by man can be broken by man; the real issue is the cost. No mechanism provides perfect security. (Indeed, the last thing anyone wants is perfect security; think about it.)

A security mechanism can be said to be effective if the cost of attack is higher than the value of success. The issue is not whether or not a mechanism can be broken but how much it costs to break it.

Since we may not know all the failure modes of a mechanism, we never really know the minimum cost of breaking the mechanism. On the other hand, we often know the maximum cost. The maximum cost of breaking an encryption mechanism is never higher than the cost of an exhaustive attack against the key. Similarly, the maximum cost of breaking a password is the cost of a "brute force" attack. Of course, the cost of attack against a well chosen password can be arbitrarily high. Note that the cost of an attack to the attacker is measured in terms of the resources available to him, their reusability, and how he values them. For example, he may value as cheap special knowledge that he already possesses and that is easily re-used, while he values as dear knowledge that he does not have and which would have limited application once obtained.
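
As a rough illustration of that upper bound, one can put numbers on an exhaustive attack; the guess rate below is an assumption standing in for whatever resources the attacker brings.

    # A minimal sketch of the "maximum cost" argument: the work required for
    # an exhaustive attack bounds the cost of breaking a key or password from
    # above. The guess rate is an illustrative assumption.
    def expected_guesses(alphabet_size: int, length: int) -> float:
        # on average, an exhaustive attack succeeds halfway through the space
        return (alphabet_size ** length) / 2

    def years_to_break(alphabet_size: int, length: int,
                       guesses_per_second: float = 1e9) -> float:
        seconds = expected_guesses(alphabet_size, length) / guesses_per_second
        return seconds / (3600 * 24 * 365)

    if __name__ == "__main__":
        print(f"8-char lowercase password : {years_to_break(26, 8):.6f} years")
        print(f"14-char letters and digits: {years_to_break(62, 14):.3e} years")
        print(f"128-bit key               : {years_to_break(2, 128):.3e} years")

The point of the arithmetic is the one made above: the cost of attack against a well-chosen password or key can be made arbitrarily high, even though we never know the attacker's cheapest path.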

A security mechanism can be said to be effective if it behaves as expected in the intended application and environment. One of the possible expectations might be that it would resist a certain percentage, e.g., 80%, of attacks. Another might be that it would take more than a certain amount of time to break it.

As with most things in security, we need not know the effectiveness of a measure with a high degree of accuracy or precision for the abstraction of effectiveness to be useful to us.

Sunday, February 7, 2010

"Advanced Persistent Threat"

Courtney's First Law says that, "Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment." The "environment" is all about threat, natural and artificial. For the first three decades, while we talked a lot about the man-made threat, the threat that mattered was from "The" environment, that is, from nature, mostly fire and water but also earthquake.

While the natural threat has not changed much, the risk has changed. The risk was governed in part by scale. In the early days, the consequences were related to the fact that computers were scarce, large, expensive, and we thought that we were very dependent on some of their applications. In a world in which computers are a commodity, small, and cheap, the risk is not to the property but to the information, the loss of confidentiality, integrity, or availability. Man, not nature, has become the threat of interest.

In 2006 the US Air Force began to use the term "Advanced Persistent Threat" to describe the role of nation states in attacking users of the Internet. The expression has surfaced in both the industry and popular press during the past two weeks.

The use of words is how we "think about security." Expressions like this one influence what and how we think about security. If the expressions are not carefully crafted, they may distort or mislead. If we are to use them, we should examine them carefully.

Of course a nation state is not a threat; threats have rate. Rather a nation state, like organized crime, is a threat source; threats have rate and source. Persistent can clearly modify a threat source. One must assume that nation states are persistent.

It is hard to see how "advanced" can modify either threat or persistent. In context, it clearly modifies the attack method. Fundamental attack methods have not changed since I wrote about them in a side-bar for an article in IEEE Spectrum in the early seventies. What has changed is the implementation, both the art and the craft.

Nation states and organized crime may exploit vulnerabilities that are not widely known but what is significant about the methods in these attacks is how they are used in steps and stages, from target selection, to exploitation of the product.

For example, while Operation Aurora used other elements after the bait was taken, getting someone to take the bait was the key to success. While crafting of the bait included forging the origin address, and while resistance to this could have been automated, we need to be more skilled at recognizing bait. Today, it is all too easy to get someone to "click" on the bait and that is often sufficient to compromise a system or domain. Apparently, the higher up the "food chain" one is, the easier. Similarly, the higher up the food chain the origin appears to be, the more likely the target is to take the bait.

I am reminded of my South African colleague who said that his demonstration bait message had a subject line of "big teats." He argued that its attraction was gender dependent, but its appeal was to both genders. One gender wanted them while the other wanted to look at them.

The key word is "persistent." Right now that means fishing every day and throwing out a lot of bait. History suggests that artfully crafted bait, sufficiently replicated and spread, will work. Of course, the key word in all of this is "sufficiently," and "sufficiently" implies brute force. Since the adversary is not going away, one must recognize bait and force early, while there is still time to mitigate or resist it. One must decrease the size of the domain that can be compromised by a single "click."

Every "large" enterprise is a target but surprisingly so are some small ones. We will save this discussion for another day.

Friday, February 5, 2010

The Total Cost of Security

In the context of this curve, which illustrates that the total cost of security is equal to the sum of the cost of security measures plus the cost of the losses after those measures, Fred asks,

"But is it obvious where you should operate? Is lowest cost necessarily the best?"


The best answer to the question is "close to the middle"; at either extreme, the cost of error goes up exponentially. Ideally, one wants to plan and operate at the minimum, but that is not knowable in any real sense. That is why one does risk assessments and makes other attempts to estimate the annualized cost of losses and the value of security measures.


One implication of the curve is that at the far right, it is very expensive to achieve small reductions in losses. Security measures increase dramatically in cost for small reductions in already small losses. Said another way, as the cost of losses approaches zero, the cost of security approaches infinity.


Note that in the middle, the sum curve tends to be fairly flat. That says that any place in the middle is "OK." There is little danger that one will overspend on security; long before spending becomes inefficient, other limits on available resources will kick in. The real danger is in underspending. The cost of underspending is not so obvious; indeed, one may underspend for several years and "get away with it." It may only become obvious across several years that it was inefficient.
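
The shape of that curve is easy to sketch; the loss curve below, in which expected losses fall off exponentially with spending, is an illustrative assumption rather than data, but it shows the flat middle and the steep left edge.

    # A minimal sketch of the total-cost curve: total = spending on measures
    # plus the expected losses that remain after that spending. The loss
    # function here is an illustrative assumption, not measured data.
    import math

    def expected_losses(spend: float, baseline: float = 1_000_000.0,
                        effectiveness: float = 1 / 150_000.0) -> float:
        return baseline * math.exp(-effectiveness * spend)

    def total_cost(spend: float) -> float:
        return spend + expected_losses(spend)

    if __name__ == "__main__":
        for spend in range(0, 1_000_001, 100_000):
            print(f"spend ${spend:>9,}: total ${total_cost(spend):>12,.0f}")
        # the minimum is broad: a wide band of spending levels yields nearly
        # the same total, while spending nothing is clearly the costly error

With these made-up numbers the minimum sits somewhere near the middle, and the totals on either side of it differ by little; it is the far left of the curve that punishes.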


Note that spending on security is balanced by risk. The tolerance for risk is different for different enterprises. For example, small new enterprises are inherently more risky than large mature ones; it is not efficient to pursue low security risk in the face of high business risk. Within an enterprise, risk tolerance may be different in different periods. In some periods, management may tolerate a higher level of risk in an attempt to move net income from one period to another.


The curve can be used to illustrate Courtney’s Second Law, “Do not spend more mitigating a problem than tolerating it will cost you.” However, it is an abstraction. The two curves do not have the same time scale. In any period the cost of security is more predictable than the cost of losses. We plan and measure the cost of security mechanisms annually while the cost of losses may only be known with confidence across decades. On the other hand, one can estimate the cost of losses well enough to avoid gross errors, or, in the vernacular, “Close enough for government work.”


Donn Parker warns about "risk assessment": it is a blunt tool that can cost more than making the decision wrong will cost. He argues for what he calls "baseline controls" and what Peter Tippett calls essential practices. In combination these low-cost controls are very effective and so efficient as to require little justification. This is a subject for another day.


Actually, taken across a large enough enterprise, one can measure losses pretty accurately. However, I have only encountered one enterprise, Nortel, that does it. They have a budget for losses; it is not meaningful at the departmental level, but it works pretty well at the business unit level. In the first year that they did it, the variance between budget and actual was pretty high. However, after a few years of experience, the variance was much more within a normal range.


In the long run, the cost of security simply is what it is. It is unavoidable. In the words of the mechanic in the Pennzoil ad, "You can pay me now, or you can pay me later"; the implication is that one cannot escape this cost. The advantage of the cost of security over the cost of losses is that it is both knowable and predictable. As long as one avoids gross over- or under-spending, one is likely to be within the efficient range.