Tuesday, March 13, 2012

Essential Security Practices

I got the idea for essential practices from my friend and colleague, Dr. Peter Tippett.

He pointed out that one of the things that senior management looks to the security staff for is advice on where to spend the next security dollar. We each have a way of answering this. Some of us do it by the "seat of the pants:" those of you who are pilots have been taught how unreliable that is.

Some advocate 'best security practices,' but because of cost and other constraints we quit long before we get there.

Some employ formal risk assessment. However, few of us have the knowledge, skills, and abilities necessary to employ this expensive measure, one that is much better at justifying expenditure than at telling us what to spend on.

Peter proposed essential practices as a method for answering this question. Let me explain what is meant by this expression.

I teach at the Naval Postgraduate School where my students are warriors. Since they understand the concept of "force protection," I use it to teach computer security.

In force protection, "first you dig a hole." This rule is so important that Caesar is said to have sacked generals who permitted the troops to eat or sleep before they dug a hole.

In modern warfare we call this hole a fox-hole. Fox-holes are about 0.8 effective. If the artillery shell or the grenade lands in the fox-hole, it does not help. However, if either is a "near-miss" then the hole offers protection.

Now most of you have seen the movie, Patton. You know that General Patton required that even the cooks and the doctors wear their steel pots. Again, a steel pot will not protect you from a direct hit but it improves the effectiveness of the fox-hole.

So, there are two pieces of equipment that all the troops carry into the field. The first is the steel pot and the second is the entrenching tool. Each costs about $15. The helmet liner costs more.

What about body armor? About 0.8 effective, complements the hole and the steel pot. However, it costs not tens of dollars but hundreds. In the first Iraq War we heard stories of Mom hocking the family homestead to pay for it. However, the price has now fallen by half or more. Does it now qualify as "essential?"

So, anyone can dig a hole with available resources. It is about 0.8 effective. It complements the steel pot.

There are analogous IT security measures that:

  • can be done by anyone
  • with available resources
  • are about 0.8 effective
  • and which complement one another

For example, if a fox-hole is a primitive fortification, then an IT analogy might be a free software firewall like the one provided by Microsoft in Windows. We accept that it is limited in its effectiveness. Anti-virus software might be analogous to the steel pot. A hardware firewall that costs hundreds of dollars might, like the body armor, be questionable. However, one that costs tens of dollars and protects a SOHO network, an application server, or even an enterprise desktop probably qualifies.

In the rest of this chapter we will identify a number of qualifying IT security measures. Some will be fairly obvious, at least once they have been identified. A few will be surprising and one or two will be controversial. You will find that many will be measures that you already have in place, but each of you will find one or two on the list that you have overlooked. There will be some items on the list that you have considered and rejected; I will ask you to reconsider them.

Here is one of the obvious ones, changing default passwords. This one is so obvious that one really should not have to call it out. However, if one reads the Verizon Data Breach Incident Report (VDBIR), one finds that there must be a very large number that have not been changed. Surely it meets the definition.

User and management awareness training may or may not qualify, depending upon how one values the time of the users and managers and how well one uses it. On the other hand, many recent breaches have relied, at least in part, on duping users.

Formal risk acceptance makes the cut. It is used when, for business reasons, management elects to accept, rather than mitigate, a risk. The risk acceptance document is written by the security staff and signed by line management. The security staff describes the risk and the reason for accepting it and a business executive, with sufficient resources and authority to mitigate the risk if he wished, signs it. Its life is the shorter of a specified duration, the tenure of the signing executive, or one year.
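The expiry rule above, the shorter of a specified duration, the signing executive's tenure, or one year, can be sketched in a few lines of Python. The dates and durations are, of course, illustrative:

```python
from datetime import date, timedelta

def acceptance_expiry(signed: date, specified: timedelta, tenure_end: date) -> date:
    """A risk acceptance expires at the earliest of: the specified duration,
    the end of the signing executive's tenure, or one year from signing."""
    one_year = signed + timedelta(days=365)
    return min(signed + specified, tenure_end, one_year)

# A two-year specified duration is capped at one year.
print(acceptance_expiry(date(2012, 3, 13), timedelta(days=730), date(2014, 1, 1)))
# 2013-03-13
```

The point of encoding the rule is that no acceptance can quietly outlive either its justification or its signer.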

Explicit assignment of security roles and responsibilities and supervision to ensure that they are carried out is both effective and efficient. When people fail to do what we expect, it is far more often the result of our failure to communicate the expectation than it is a failure of motive on their part. It is essential both in the sense of our definition and in the sense of necessary.

Unique user identifiers make our list. Again, while the motivations for shared IDs have all but vanished, the Verizon Data Breach Incident Report suggests that they persist and contribute to breaches. It is ironic that the most frequently shared IDs are privileged ones (can you say "root?"), the ones where accountability is the most important.

Similarly, we should be using strong passwords, not so much because we are seeing a lot of brute force attacks, but because accountability requires that we be able to take the possibility that the password has been compromised off the table.

Anti-virus and personal firewalls would appear here on the list. We used them as examples above. However, there are a few more things to say about them.

The personal hardware firewall changes the policy of the personal system from permissive to restrictive. It hides personal system vulnerabilities, some of which will never be identified, much less fixed, from the Internet. Of course, say firewall in most enterprises and the network managers go ballistic; they are certain that any firewall will break applications.

However, I give away personal firewalls as hospitality gifts. If I am your guest for the weekend, I will bring and install a Linksys. I have yet to break an application. Indeed firewalls and applications are increasingly aware of one another and configure themselves as necessary.

But ranking with AV and firewalls, I put backup. It is important because our storage is fragile and our data friable. In addition, it is a measure that protects us against those threats that we can not anticipate. We use it because computers do nothing quite so well as they make cheap, dense, portable copies of data.

Having decided to make backup copies of our data, we should encrypt the backup copies. The more copies of the data, the more persistent it is but the greater the probability that it will leak. We have also seen tens of cases in which we are unable to account for a clear-text backup copy. Most of the software that we use for backup has an encryption feature. Backup software with an encryption feature is not that much more expensive than that without it.

Next is a practice that is clearly on the list but often overlooked, time-out to a lock-word protected screen-saver. The lock-word need only be long enough to resist a guessing attack lasting minutes to tens of minutes. This simple measure greatly reduces risk on machines in public or employee-only spaces. It does not impose a burden even in secure spaces. It is an option on most popular operating systems. However, the control is not always easy to find. For example, on Windows 7 it is under "personalization." In Mac OS X, it is the second option in System Preferences.
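How long must such a lock-word be? A back-of-envelope sketch suggests: not very. The guess rate below is an assumption for illustration, not a measurement; an attacker at the keyboard might manage about one guess per second.

```python
def min_length(alphabet_size: int, guesses: float) -> int:
    """Smallest length n such that alphabet_size**n exceeds the guess budget."""
    n = 1
    while alphabet_size ** n <= guesses:
        n += 1
    return n

# One guess per second for thirty minutes is 1800 guesses; even a
# three-character lowercase lock-word (26**3 = 17576) outlasts that budget.
print(min_length(26, 30 * 60))  # 3
```

The arithmetic makes the point in the text concrete: the lock-word need not be a "strong password," only strong enough for the minutes the attacker has.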


A practice that is not quite so obvious is end-to-end encryption. By end-to-end, I mean client to application. I am going to argue that anyone can do it and, thanks to Netscape and Cisco/Linksys, with available resources. Yes, SSL, SSH, and IPSec are ubiquitous. All remote desktop servers include it. It is the essence of virtual private networks, VPNs, and virtual local area networks, VLANs. If a server does not support one of these, one need only put a proxy in front of it.
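As a small illustration of "available resources," the TLS machinery (the protocol Netscape introduced as SSL) ships in the standard library of most modern languages. The sketch below uses Python's `ssl` module; an actual connection would then wrap a socket with `context.wrap_socket(sock, server_hostname=...)`.

```python
import ssl

# A default TLS context already does the right things: it verifies the
# server's certificate, checks the hostname, and refuses protocol
# versions known to be broken.
context = ssl.create_default_context()

assert context.check_hostname                      # server identity is verified
assert context.verify_mode == ssl.CERT_REQUIRED    # unverified peers are refused
print("end-to-end encryption is a few lines away")
```

No special hardware, no purchase order: client-to-application encryption really is within anyone's reach.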

A related practice is to terminate VPNs on the application, not on the perimeter and not on the operating system. On the client-side, the VPN is hidden under the application name or icon. On the server-side, the application hides network and operating system vulnerabilities from the client and the encryption hides application vulnerabilities from all but authenticated users. The encryption hides the client from enterprise network users.


By policy and practice, store sensitive data, including books of account, payment card information (PCI), personally identifiable information (PII), and intellectual property (IP), on enterprise servers. Resist, with policy and controls, the creation of arbitrary copies on desktops and portable devices. Yes, I know that the users claim that they need those copies in order to get their jobs done. However, that is a service level issue. Given the speed and coverage of modern networks, this is clearly less necessary than it was last year and will be even less so next year.


Prefer application-only access to data; the need to look up an associate's e-mail address or phone number should not confer the privilege to copy the enterprise directory.


Note that if the sensitive data is on the servers, then that is where the control ought to be. First there will be fewer control points and that will reduce administrative effort. Perhaps more important, access controls on the servers are much more reliable, more resistant to bypass, than those on clients.


If one is going to use Wi-Fi, and almost everyone does, one should encrypt the air side. WPA2, the standard protocol, IEEE 802.11i-2004, is supported in all modern Wi-Fi equipment, but any is better than none. The burden is in distributing the key among the using devices. The size of this burden is a function of the number of devices using the wireless network. As in any encryption scheme, the key is most vulnerable when it is being distributed. All key distribution protocols leak, but none leak nearly so much as Wi-Fi without encryption. [This use is so fundamental that the RIAA and MPAA have advocated making it a criminal offense not to use it, but that is a subject for another day.]

Dial in only using VPNs and via ISPs, never direct to the enterprise. Said another way, get rid of dial-in. The Verizon Data Breach Incident Report shows that a significant number of breaches exploit dial ports installed by or for the convenience of support personnel. That is in part because operating dial-in securely requires knowledge, skills, abilities, and resources that are beyond all but a few specialized enterprises. Yes, I understand that many of your users work from locations with no broadband access; for them dial-in may be necessary. However, that does not say that it is necessary for you to operate it. Users should dial-in to an ISP and access your applications and systems via the Internet.

Those VPNs, indeed all VPNs, should terminate on the applications, not on the perimeter and not on operating systems. Terminating the encryption on the application simplifies its use and improves its effectiveness.

Patch broadly in preference to early. Said another way, convert patching from an unplanned to a planned activity. It turns out to be dramatically more efficient to patch most, or even many, of your systems in ninety, or even one hundred and eighty, days than a few in hours or days.

Patch wisely. In a year in which 2200 vulnerabilities were reported, roughly one in a hundred were exploited. Said another way, not all vulnerabilities are problems, not all problems are the same size. Would you not rather have patched the twenty than the 2200?
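The triage implied above can be sketched in a few lines. The identifiers below are placeholders ("CVE-XXXX"), and the one-in-a-hundred exploitation rate is taken from the figure in the text, not from any particular data set:

```python
# Hypothetical: ~2200 reported vulnerabilities, of which roughly one in a
# hundred is actually exploited. Patch the exploited few first.
reported = [f"CVE-XXXX-{n:04d}" for n in range(2200)]
exploited = set(reported[::100])   # every hundredth, for illustration

urgent = [cve for cve in reported if cve in exploited]
routine = [cve for cve in reported if cve not in exploited]

print(len(urgent), len(routine))   # 22 2178
```

Two lists instead of one: the short list gets patched in hours or days, the long one on the planned, broad schedule.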

Lock down your systems. Hide the operating system and any other capability to install or modify programs. I can hear the moans from here. Most of you are professionals or para-professionals; many of you exploit such capabilities and some of you even need them. However, no executives and few managers need these privileges. Few administrative or clerical users need them. Change the default and dramatically reduce your vulnerability. Microsoft provides powerful tools for locking down systems and administering locked-down systems. Few of us use them well.

Finally, layer your defenses. As your resources grow in value and your adversaries in power, push your defenses out and push your valuables down in the ground. Between the Internet and the crown jewels, there should be several, say four, layers. Tunnels, VPNs, that bypass those layers, should lead to limited and contained privileges and capabilities.

Now that is my list. Peter’s list might be somewhat different. However, the Jesuits who taught me in prep school also taught me that, for the sake of completeness, all such lists should end with “other.”

Many of these practices you may have already implemented. One or two may not apply to you. There may be other practices that qualify as essential that I have not identified. Make your own list. Order it. Work your list. Most of these things can be done in six months or so. Focus on essential practices before moving on to more expensive measures that require more expensive justification processes. In the meantime, cover those measures with risk acceptances.

These measures are so fundamental, so efficient, so “essential” that we must get them done before we even take time to consider other measures. I will argue that measures that are 0.8 effective and cost tens of dollars per seat are so efficient as to qualify as "essential." Because efficient can be defined as cheaper than all of the alternatives, including that of doing nothing, these measures qualify as efficient by two definitions.

As with all controls, there will be exceptions to these rules but the exceptions should be manageable and we ought to be managing them down in both scope and number.


I understand that "anyone can do them with available resources" does not translate to "easy." Depending upon the size of the enterprise and the style of its management, instituting even essential practices can still be very difficult.

The use of "Essential Practices" is a method. It is a method for allocating our scarce resources. It is a method for answering the management question, "Where should we spend our next security dollar?" It is a method that characterizes us as professionals, ensures our efficiency, and earns us the big bucks.

Classification and Labeling of Data

In the early days, much of computer security research was aimed at developing computers that could be relied upon to enforce the DoD scheme for restricting access to data "classified" in the national security interest. Out of this research emerged the Bell-LaPadula model, the Trusted Computer System Evaluation Criteria (TCSEC), and rules-based access control. These assumed that data is "classified," that is, labeled as to its sensitivity. The research assumed that data is "born classified," without paying much attention to who says so or why.


Data is not really born classified. Someone has to decide. Classification is an economic decision. While some people think that one is simply making a statement about an inherent property of the data, what one is really doing is making a statement about how one believes the data should be protected and at what cost. The class maps the data to a set of methods and procedures to be used to protect it. We call this mapping "policy." Since these methods have a cost, by assigning a class or a label, one says that this is how much the enterprise or the authority is prepared to spend to protect this data.

The label can be thought of as a code that the author/classifier uses to communicate to the users of the data how he, the author/classifier, wants it to be protected. For example, when one labels an object "confidential," one communicates to all custodians and users of the data that it is only to be seen by those with "need to know." When one labels it "top secret," one asserts that, among other measures, the data should be locked up when not in use.
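This mapping from label to measures is, at bottom, a lookup. The sketch below is illustrative only; the labels come from the text, but the measures attached to each are assumptions standing in for whatever the enterprise policy actually prescribes:

```python
# "Policy" as a mapping: the label a classifier assigns selects the
# measures the enterprise is prepared to pay for.
policy = {
    "public":       [],
    "internal":     ["access control"],
    "confidential": ["access control", "need to know"],
    "top secret":   ["access control", "need to know", "lock up when not in use"],
}

def required_measures(label):
    """Return the protective measures that a given label commits us to."""
    return policy[label]

print(required_measures("confidential"))
# ['access control', 'need to know']
```

Writing the policy down this way makes the economic point visible: each step up the ladder adds measures, and therefore cost.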

We choose our label or class based upon the "sensitivity," a term of art, of the data. Sensitivity is a function of context and association. A single bit of data is sensitive only if one knows what it signifies. A social security number, standing by itself, may not be sensitive, but the binding of a social security number to a name starts to be sensitive. As one associates date and place of birth, the names of parents, address, credit scores, and credit card numbers, sensitivity increases.


Sensitivity increases along an axis from raw data, to organized and analyzed data, to conclusions or intelligence derived, to plans of action based upon the intelligence. Thus for most business enterprises, competitive intelligence, product plans, and business plans tend to be very sensitive.


The sensitivity of data is a function of its timeliness. Reuters charges a premium for data that it plans to give away for free in fifteen minutes. At the other extreme, the identity of spies remains sensitive for the life of their grandchildren. The Secrets of Ultra remained sensitive until the inventions of modern cryptography made them obsolete; we kept them secret for another twenty years just to be on the safe side. While there are exceptions, the sensitivity of data tends to decrease with age. The sensitivity of Thomas Eagleton's mental health records was decaying along a predictable curve until he ran for the US Senate. It spiked again when he was chosen to run for Vice President of the United States.


Similarly, the sensitivity also tends to increase with quantity. One credit card number is sensitive but the sensitivity of a set of such numbers goes up with the number in the set.


So to recap, the sensitivity of data is a function of context or association, age, organization and analysis, and quantity.


Today we have default sets. In the private sector these include intellectual property (IP), personally identifiable information (PII), and payment card information (PCI) that must be protected from disclosure, and the books of account that must be protected from manipulation or contamination. In law enforcement we have investigations in progress and wants and warrants. In intelligence we must protect not only the conclusions and recommendations but also the sources and methods by which they were developed. Sources and methods are among the most sensitive data we have because compromise may cost lives.


Classification decisions must be made by those who know the most about the data. For business functional data, such as accounts receivable and the payroll, that is usually the manager of the function. By default, it is the person who creates the object. We often refer to this individual as the "owner," because the role usually includes the authority and discretion to say who, and in what circumstances, can see or modify the data.


Because the decision requires judgment and experience, it is normally reserved to executives, managers, and professionals. Our job as staff is to ensure that the process of creating an object includes the step of labeling it, then to note variances and ensure that they are corrected. However, it is often difficult to identify the responsible individual.


In most organizations, the authority to classify information includes the authority to re-classify. An exception is the US National Security system, where once classified, the data must go through a rigorous declassification process applied by specialists.


In business the label should include the identity of the classifying manager or authority and a date whereon the classification expires or must be extended. Enterprise policy may require the former and limit the latter.


The objective of the system should be to ensure that all data gets the protection that it warrants but that expensive measures are reserved for only the data that needs it. Under-classification may result in leakage or contamination. Over-classification is inefficient.


An issue is binding the classification label to the data object. The integrity of the system relies upon the label being tamper resistant, or at least tamper evident. In ink-based systems, the paper itself binds the data and the label together. Because of the mutability of electronic data, the label is part of the meta-data and the bind may not be as reliable. Closed systems like the AS/400 or Lotus Notes help. Encryption can be used to bind the label to the object in such a way that tampering with it will be evident. The integrity of the bind is checked at open time and flagged if the content and the label do not agree.
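One standard way to make such a bind tamper-evident is a keyed message authentication code over the label and the content together. A minimal sketch, assuming the key is held by the trusted system that opens the object:

```python
import hashlib
import hmac

KEY = b"example key held by the opening system"   # illustrative only

def seal(label: bytes, content: bytes) -> bytes:
    """Bind label to content with an HMAC; the tag travels with the object."""
    return hmac.new(KEY, label + b"\x00" + content, hashlib.sha256).digest()

def check(label: bytes, content: bytes, tag: bytes) -> bool:
    """At open time, verify that neither label nor content has been altered."""
    return hmac.compare_digest(seal(label, content), tag)

tag = seal(b"confidential", b"the payroll")
print(check(b"confidential", b"the payroll", tag))   # True
print(check(b"internal", b"the payroll", tag))       # False: label tampered
```

Swapping the label for a less restrictive one, or editing the content, invalidates the tag, which is exactly the tamper evidence the text calls for.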


An effective classification and labeling system has to be baked into the culture of the enterprise or organization; policy, methods, and procedures must all support it. All members of the organization have to expect labels and must know the measures for all classes that they are likely to see in their roles. Moreover, those who create data objects must know how to classify them. Both of these things require constant reinforcement and training. Supervisors, managers, and executives must note variances between the sensitivity of the object and the label assigned and take timely corrective action.


Fifty years of electronic computing has resulted in an explosion of data, if not intelligence. Efficient, even effective information assurance requires not only that we identify the sensitivity of our data, that is, classify it, but also then communicate that judgment to all users of the data, that is, label it. We have seen at RSA and Lockheed-Martin what comes from trying to protect all data the same.


Luckily, the computer has given us powerful tools, for example, uniform and consistent processes, user identification and authentication, rules-based access control, and encryption to protect the information. In some cases, simply applying the proper label may go a long way toward ensuring that it is protected appropriately.


Many organizations have nominal classification and labeling programs that are not effective. When a client tells me that they have a program, I make a quick test. I ask a couple of executives if they would notice a mis-classified document if it hit their desk, and, if so, what they would do about it. I get unsatisfactory answers more often than not.


Because such a program must be woven into the warp and woof of the fabric of the enterprise, creating one is difficult, time consuming, and high maintenance. Of course, that is why those of us who implement classification schemes are called professionals and are paid the big bucks.

Security Architecture Principles

I would like to recommend to you some principles for security architecture, how to design security into our enterprises, networks, systems, and applications. First I have to say a little about architecture in general and security architecture in particular.

Architecture is part of design. It is that part of design that deals with:

* appearance, how things look,
* function, what they do,
* location, where things sit relative to other things, and
* materials, what they are built with.

I like to use the analogy of residential architecture because that is what most of us think about when we think architecture. The design describes what the house will look like, what kinds of rooms it will have, where they will sit relative to one another and what materials the house will be built with.

Note that the design of the house includes multiple views and documents, including a model, elevation, or perspective, a floor plan, and a list of materials and their specifications. Note that most of the materials can be specified by reference to existing standards but novel materials have to be described in detail.

Security architecture deals with how security looks to the user, security functions, such as user identification and authentication, and access control, where the functions are located and how they interact, and what components, both off-the-shelf and purpose built, they are built with.

My mentor, Harry DeMaio, likes to say that "security architecture is derivative of and dependent upon IT architecture" but then goes on to point out that if that architecture is not rigorous and well documented then that must be done before the security design can proceed. My own experience is that when I ask management about their IT architecture, they give me a list of components but cannot tell me how they are used or how they fit together, let alone show me where these things are documented. Of course the implication is that much of the design is ad hoc, not to say that it "grew like Topsy." Thus our work often starts with documenting "what is" so that we can refer to it.

We speak of "expressing" the design. Like the design of the house, our design involves multiple views and levels of detail, multiple documents and models. We can think of it as top down, one page at a time, carefully indexed and cross referenced. As the designer of a building or house uses blue-prints (still called that though they are no longer blue) we use network diagrams, stack diagrams, and access path schematics. As the building designer uses pictures, we may use screen shots, as he uses models, we may use working prototypes.

Design is an iterative process that begins with identification and documentation of requirements. Think about the residential architect. He needs to know neighborhood, family size, life style, taste, budget, etc. We need to know industry, business strategy, environment, including natural and man-made hazards, scale, required behavior, forbidden behavior, permissible and impermissible failure modes, applications, risk tolerance, cost to overcome resistance, i.e. strength, organization, users, etc. Identifying and expressing security requirements is a topic for another day.


To identify all of these requirements, like the residential architect, we may use our experience, structured interviews, questionnaires, document reviews, and "strawman" documents, sample deliverables that illustrate design options and choices.

For security design these choices may include economy of logon, single point of administration, end-to-end user-to-application encryption, layered networks, limited reliance on operating systems, granular access close to the data, certificate-based credentials, and failure to the logon prompt (rather than to the operating system prompt).

So much for context. Let's get to the principles. First practices, then appearance, function, materials, and, finally one or two ideas on sources.

The most important practice, perhaps the most important design principle is "do it right." Do it right the first time; build for the ages as contrasted to iteratively assess and fix. Design top down by functional decomposition and iterative refinement. Plan to implement by composition from the bottom up. Prefer broad solutions, for example backup, that work across the enterprise, across multiple applications, and that address multiple risks. Prefer end-to-end encryption-based solutions that are independent of the underlying network.

Appearance. Provide a consistent presentation and interface that is predictable, intuitive, and obvious as to intent and effect. Design for ease of safe use, such that doing the right thing is easier than doing the wrong thing. Hide and encapsulate necessary special knowledge such as encryption keys. Using the system safely should require as little user effort and special knowledge as possible. Prefer simplicity, hide complexity. Design around roles and responsibilities so that users see only that which is relevant to what they are supposed to be doing. All other things equal, security should look the same across the enterprise, i.e., across applications, systems, and platforms. Correct behavior learned in one context should not cause errors in another.

Place controls close to the resources, e.g., on the server, but place operation of the control close to the special knowledge required for its safe operation. For example the control for enrolling users should be in the hands of the user's manager while that for granting access to a resource should be in the hands of the owner or manager of the resource. Place control where the effect of its operation can be observed. The operator of a control should be able to observe the effect of its operation. Prefer localized function and data; the fewer things that must operate correctly to produce the safe result the better. Distribute functions and data only as required for reliability, availability, and performance. Separate controls from their use and user. For example, the operating system of a server should be hidden from users of the server.

Include an audit trail in your design. An audit trail should enable you to know the current state of the system, how it got that way, and what it looked like in the past. It should enable one to fix accountability for all use, content, and behavior of the system to the level of a single individual.

Localize security function. Prefer single services for naming of resources, users, and groups, authenticating users, and controlling access across a domain, e.g. network or organization. Similarly, prefer a single service for encrypting data and managing keys.

Structure and layer networks, for example into public, i.e., peer with the Internet and the PSTN; private, i.e., user and application; trusted, i.e., trusted-system to trusted-system; and control, i.e., privileged operator to system controls. Isolate the layers from one another using firewalls, proxies, and encryption, e.g., vLANs and VPNs. Prefer short segments and limited segment-to-segment traffic, i.e., a balance between the two.

Prefer robust materials from trusted sources in tamper-evident packaging. Prefer materials that have been evaluated by third parties against industry or government standards. Prefer single-use to multi-use components. Prefer hardware to software for process-to-process isolation. Prefer virtual-machine to virtual-machine isolation to operating-system-process to operating-system-process isolation. Said another way, place more reliance on hypervisors than operating systems. Prefer industry-standard, public, and mature encryption mechanisms and products, e.g., SSL, SSH, and IPSec; avoid rolling your own.

Finally, we come back to "do it right." Use the right material for the application. Use materials as they are intended to be used. Do not bend to fit. Do not break interfaces, that is, do not exploit knowledge that you may have about what is behind the interface.

Now, you might think that is a lot to absorb at one time. On the other hand, once enumerated, the principles are pretty straightforward. Many of them you abide by much of the time without thinking about them. For those of us who participate in the design of organizations, networks, applications, or products, they can go a long way to earning us the warrant of professional and the big bucks.