Monday, November 21, 2011

Security Culture for the Cloud

It is difficult to miss the trend toward "outsourcing": having others do what traditionally had been done by employees within the enterprise. This trend is facilitated in part by "The Cloud," the Internet and the incredible range of services, for fee and free, that are offered on it.

I used the example of Stanford Hospital and Clinics, which transferred patient information to a collection agency only to have it posted to a public site on the Internet, a gross and egregious violation of the privacy of its patients.

I left you with the idea that our professional objective is to arrive at a state in which all parties understand their roles and responsibilities and carry them out in such a way as to produce the intended results.

I had decided to elaborate on that advice this week. I came up with a list of policy, technical, and legal guidance for use with outsourcing.

I was going to suggest that enterprises should have a policy that spells out their risk tolerance in general and in regard to the use of outside sources in particular. It might specify which data and applications could be outsourced and which could not. For example, it might specify that the enterprise's intellectual property and personal information should not be outsourced. It might also specify insurance coverage for any risk that exceeds the specified tolerance.

I planned to say that agreements should enumerate the laws, regulations, and contracts to which the parties are subject and all standards that they had adopted. They should also spell out any limitations such as the requirement to disclose information in response to legal service.


I was going to suggest that enterprises should prefer to do business with vendors that were part of such organizations as the Cloud Security Alliance and the Cloud Auditing Data Federation Working Group (CADF). I would have suggested that enterprises using such services might want to participate in the Cloud Standards Customer Council.

I would have stressed that your contract should provide for audit or for a service auditor report. I would have cautioned you about the limitations of service auditor reports, for example, that they are limited to controls asserted by the auditee and that they are as of the time of the audit.

I had planned to suggest that agreements should be service by service and application by application.

I intended to suggest that agreements should enumerate all existing controls, who is to operate them, and under what conditions; that they should spell out the intended use of the controls as well as what record that use would produce. Examples of such controls include identification, authentication, access control, encryption, administration, provisioning, confirmations, messages, alerts, alarms, measurements, and reports.
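Such an enumeration could even be captured in machine-readable form so that gaps are easy to spot. The sketch below is purely illustrative; the field names and values are my own assumptions, not any standard schema:

```python
# Illustrative sketch of a per-application control inventory for an
# outsourcing agreement. Field names and values are hypothetical.
CONTROLS = [
    {"control": "authentication", "operator": "vendor",
     "condition": "all interactive access", "record": "auth log entry"},
    {"control": "encryption", "operator": "customer",
     "condition": "data at rest and in transit", "record": "key-use audit"},
    {"control": "provisioning", "operator": "vendor",
     "condition": "customer-approved change only", "record": "out-of-band confirmation"},
]

def unassigned(controls):
    """Return controls with no named operator -- gaps the agreement must close."""
    return [c["control"] for c in controls if not c.get("operator")]

print(unassigned(CONTROLS))  # an empty list means every control has an owner
```

The point of the exercise is not the format but the discipline: every control named, every operator named, every record named.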

I would have emphasized the importance of provisioning controls in The Cloud and pointed out that compromise of those controls might enable others to use services and charge them to you. I had even planned to stress that all use of such controls should result in automatic out-of-band confirmations. I would have given a caution about error-correction and vendor over-ride controls.
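The out-of-band confirmation idea can be sketched in a few lines. Everything here is a stand-in: `send_sms` represents any channel (SMS, pager, voice) separate from the one used to issue the command, and the provisioning action is invented for the example:

```python
# Sketch: wrap every provisioning action so it automatically emits an
# out-of-band confirmation. send_sms is a placeholder for a real
# channel separate from the one on which the command was issued.
import functools

def send_sms(number, text):
    print(f"to {number}: {text}")  # placeholder for a real out-of-band send

def confirmed_out_of_band(number):
    def wrap(action):
        @functools.wraps(action)
        def inner(*args, **kwargs):
            result = action(*args, **kwargs)
            send_sms(number, f"provisioning action {action.__name__} executed")
            return result
        return inner
    return wrap

@confirmed_out_of_band("+1-555-0100")
def add_instance(name):
    return f"instance {name} created"

print(add_instance("web-7"))
```

Because the confirmation is automatic and travels on a different channel, an intruder who has hijacked the provisioning interface cannot easily suppress it.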

Fortunately, while doing my research, and before I had embarrassed myself with all of this irrelevant advice, I came across a report in the New York Times by Kevin Sack, published October 5, 2011. Here is part of what I learned.

First, there was no evil here, no recklessness, not even gross negligence, just bad judgment all around. To the extent that there was any motive, it was efficiency, just getting the job done. No greed, no lust, not even sloth.

Stanford Hospital and Clinics (SHC) is a 600-bed general hospital. It is not Kaiser-Permanente or UPMC but it is a major enterprise in its community.

Multi Specialties Collection Service (MSCS) is a collection agency for medical services in the same market as SHC. It bills about $0.5M per year and employs 5-10 people. One might call the relationship asymmetric, one-sided.

The identity and role of the sender of the information are not public, but accessing and sending it should have required significant management discretion and rare privileges.

The receiver of the information was a contractor to MSCS. He often represented himself as an officer of MSCS and had an MSCS e-mail address. Been there, done that. He decrypted the data, put it in a spreadsheet, and, among other things, gave it to an applicant for a job with him.
While SHC says the information was for "permissible hospital billing support purposes," the consultant says that it was for a "study." In any case, the information was not passed in the normal course of "collections," the service. I believe that both the sending and receiving of the information probably was outside the agreement between SHC and MSCS.

The actual posting to the public web-site, StudentofFortune.com, was by a job applicant to the consultant. He had given the applicant the spreadsheet to convert it to charts and graphics as a test of skill.

The posting was a violation of the SoF Terms of Use which require the user to "represent and warrant that (they) (a) own or have sufficient rights to post (their) Contributions, on or through the Site, and (b) will not post Contributions that violate Student of Fortune or any other person's privacy rights, publicity rights, copyrights or contract rights."

Two things seem clear. First, everyone involved has egg on their face except StudentofFortune.com. Their Terms of Use were obvious, concise, plain, and clear. One cannot register for their site without acknowledging and agreeing to them. When the violation was called to their attention they responded on a timely basis. I would gladly testify for or against any of the other parties.

Second, none of the policy, technical, or legal measures that I wanted to recommend would have prevented the breach. If asked in advance, management might well have accepted the risk that so many controls and people would fail at once. However, SHC is now the target of a $20M class-action lawsuit and will almost certainly be penalized by the regulators. MSCS has lost a major client, has closed its web site, and is not answering its phone.

I am not sure that the penalties fit the crime but they sure are getting our attention. However, to the extent that the breach impedes the urgent move to electronic health records, or even the efficient use of cloud resources, perhaps they are proportional.

I like to think that my lists above are useful, if not necessary, but they are clearly not sufficient or even the place to start. No, we are back to management and security 101. There is no substitute for training and supervision.

"Outsourcing" makes this even more important. Note that StudentofFortune.com is typical of free or low-cost collaboration "cloud services" that help our employees get their jobs done and are within the discretion of most of our employees. We are going through a major change in how we organize production and resources. It is being driven by the falling cost of information technology. As this new model matures we need to evolve a culture of personal due care, one in which people automatically ask "should I do it" rather than simply "Is it efficient?" A culture in which people automatically consult with others before they act, a culture of caution.

Security must start with our most effective controls, training and supervision. We should use our other tools only to the extent that they are more efficient. Then we will be called professionals and be paid the big bucks.

Thursday, November 17, 2011

On Resisting Phishing Attacks

At Secure World in St. Louis I heard a presentation on "Cybercrime" by Brian Mize, a Special Federal Officer with the FBI. One of Brian's points was the number of such crimes that begin with a successfully crafted bait e-mail message. Brian reported that more than half of crimes investigated by the St. Louis Cyber Squad, on which he serves, began with such a message.

While there were many steps in the attacks, they began with bait messages, specifically because they are so efficient. By definition, if one puts bait before a sufficient number of people, someone will take it. The interesting thing is how small that number has to be. In one group of 527 targets, one in ten took the bait.

The bad news is that only one click by one user may be sufficient to contaminate the entire enterprise. The good news is that almost all attacks against enterprises start in the same way.

The bait of choice no longer appeals to fear, greed, or lust. Rather it appeals to curiosity. Human beings are naturally curious; curiosity has survival value. Mass bait will be of the form "Look what Justin Bieber did." Alternately it may exploit the disaster news of the day. However, messages directed to the enterprise, while still appealing to curiosity, are much more artfully crafted. For example, the bait that compromised RSA was a PDF identified as "2011 Recruitment Plan." If this came to you from someone whose name you recognized, would you be suspicious? Would you resist it? Remember when we preferred PDFs to Word documents for safety?

The obvious defense against bait attacks is awareness training. However, as with campaigns like "Just Say No," there are fundamental limits to the effectiveness of such training. We are left with the fact that a successful attack only requires one temporary failure of our training.

I met Brian later and we agreed that we really need an effective and efficient artificial intelligence, AI, for identifying such messages. We both identify and reject one or two bait messages a day that get past our spam filters. If we can identify them, surely Google could.
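Pending such an AI, even a crude heuristic conveys the idea of what such a classifier would look for. The signals and weights below are toy assumptions invented for illustration, nothing like a production filter:

```python
# Toy illustration of scoring a message for bait-like features.
# The patterns and weights are invented for the example.
import re

BAIT_SIGNALS = [
    (r"https?://\S+", 1),                         # embedded link
    (r"\.(pdf|doc|xls)\b", 1),                    # attachment-like reference
    (r"(?i)recruitment|plan|invoice|urgent", 2),  # curiosity/urgency lures
]

def bait_score(message):
    """Sum the weights of all signals present in the message."""
    return sum(weight for pattern, weight in BAIT_SIGNALS
               if re.search(pattern, message))

msg = "Please review the 2011 Recruitment Plan.pdf at http://example.com/x"
print(bait_score(msg))  # trips all three signals
```

A real classifier would of course learn its features rather than hard-code them, but the shape of the problem, scoring curiosity lures plus clickable payloads, is the same.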

However, I heard another presentation by Steve Ward, Vice President of Marketing for Invincea, speaking at Data Connectors at Bridgewater's at the end of Fulton Street. He talked about a product that took a different approach. It looked at the second step in the attack. It seems that one bites, i.e., "takes the bait," by clicking on a button. It turns out that almost all of the buttons are URLs. Steve says that even if one cannot stop everyone from biting, one might be able to cut the line before the hook sets. Only rare messages are bait, but almost all bait messages work through URLs.

The URLs link to an executable that corrupts the user's system. It effectively contaminates the network, all machines to which that machine is peer connected. In far too many enterprises, that is the entire enterprise network.

Note that contamination requires user privileges, perhaps ADMIN, at least the ability to create or modify an executable. Part of the problem is that users that do not require such privileges have them by default. On the other hand, we cannot limit all such privileges.

However, Steve Ward points out that controlling the process that parses the URL could prevent the contamination. His product takes an architectural approach: it installs as an application, becomes the parser for all URLs, and interprets them in a virtual machine so as to prevent contamination of the real machine. Even if a privileged user takes the bait, her machine will not be contaminated.
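The architecture can be caricatured in a few lines. To be clear, this is my own sketch of the idea, not Invincea's implementation; `open_in_sandbox` is a stand-in for launching the fetch in a disposable virtual machine:

```python
# Sketch of interposing on URL handling: every URL the user clicks is
# routed through a single handler that opens it in isolation, so
# nothing reaches the host machine directly.

opened = []  # record of intercepted URLs, for illustration

def open_in_sandbox(url):
    # In the real architecture this would render the content inside a
    # throwaway virtual machine; here we just record the interception.
    opened.append(url)
    return f"sandboxed: {url}"

def handle_url(url):
    # The interposed parser registered as the system's URL handler.
    return open_in_sandbox(url)

print(handle_url("http://example.com/recruitment-plan.pdf"))
```

The design choice worth noting is the choke point: by becoming the one parser for all URLs, the control does not depend on the user making a good decision.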

Efficient security relies upon layers and redundant measures. We must train users to recognize and resist bait. We must limit their privileges. We must configure their systems to resist contamination. We must layer and compartment the enterprise network to resist the spread of contamination. We must control access to sensitive data. We must monitor, detect and remediate. We must resist exfiltration of our data. Of course, it is because knowing and doing this is difficult that we are called professionals and are paid the big bucks.


Wednesday, November 2, 2011

FBI Proposes Alternate Network

At the recent ISSA Conference in Baltimore, FBI Executive Assistant Director, Shawn Henry, proposed a "new alternate secure Internet," separate from the public Internet, to operate the nation's critical infrastructure. While there is clearly a need for better security, I am going to argue that this proposal reflects a poor understanding of both the problem and of networks.

The government is justifiably concerned about the existential vulnerability that has arisen because of the connection of infrastructure controls, i.e., supervisory control and data acquisition (SCADA), to the public networks. This connection permits at least parts of the infrastructure to be operated from any place in the world. To the extent that the controls are insecure, they can be abused or misused to cause the infrastructure to be mis-operated.

To the extent that the infrastructure itself is fragile, mis-operating it may cause damage that cannot be efficiently remedied. "Experts" have speculated that the infrastructure might be maliciously operated in such a way as to shut down our entire economy for days to weeks. Value and savings might be destroyed. Millions might starve or freeze unless we could rebuild in days what it has taken us decades to create.

However unlikely such an event, to the extent that such a vulnerability is implementation induced, rather than fundamental, it should not be tolerated. Making controls intended only for the use of a few privileged operators visible to everyone is unnecessary and, in this case, reckless. It is analogous to putting a copy of the control of the autopilot for an airliner between every two seats.

However, it is specifically because the infrastructure is fragile that the controls are connected to the public networks in the first place. The operators understand that the infrastructure must be "operated;" that its continued service requires that it be monitored, adjusted, "provisioned," and configured to compensate for changes in inputs or load or the inevitable failure of components. While some of this operation is automated, some of it requires timely human intervention.

The operators of these controls have connected them to the public networks on the implicit assumption that the risk of being unable to monitor and operate the controls on a timely basis would be far bigger than the risk of connecting them. Few of them see their connection in the context of all the other connections. They understand that no single connection would represent a major risk; they are only just waking up to the realization that the collection constitutes an existential vulnerability.

Part of the problem in the Critical Infrastructure space is the culture. Given the sensitivity of these controls, one would expect them to be hidden behind virtual private networks and strong authentication. For reasons of convenience, for the most part they are not and that is the root of the problem.

In order to provide for around the clock, but somewhat sparse, remote monitoring and control, the operators have connected the controls to, not just one, but to both of the public networks. While this kind of remote operation is good for the enterprise, and may even be strategic, many of the early connections were tactical, more by and for the convenience of the operators than for the enterprise.

In order to improve the chances that they can always connect when necessary, that is, compensate for any network failure, many of the controls are connected both to the public switched telephone network (PSTN) and the Internet. While they use the public wide area networks, they use them to create a limited number of relatively short point-to-point connections, for example, from the operator's home to the plant.

While the public networks permit world-wide any-to-any connectivity, and while the operators might actually monitor and operate their systems from the end of a plane trip, that is the exception, not the rule. The result is that anyone may use the public networks to send a message to any of these controls, and may be able to connect to and operate them.

In the early days, most of these controls were purpose-built, offered only a limited command interface, and operating them required a lot of special knowledge. Even finding them would have been difficult, much less misusing them. Today, many have already been identified; most of them have graphical user interfaces and require much less special knowledge. Moreover, such intelligence as operator manuals and other documentation may be independently available on the world-wide web.

Behind the controls, there may be operational dependencies between components such that operation of one may influence the behavior of others. For example, shutting down the external power to a nuclear reactor may cause the reactor to shut down. These effects may cascade. The electrical grid is the most inter-dependent of the infrastructures and almost everything else is dependent upon it.

What might a "separate" network look like? How separate might it be? Well, it might be as separate as the two public networks are from one another. For example, it might have a separate "address space." Like these two networks, it might use different signaling, connection setup, and protocols.

On the other hand, the Internet, the digital network, originally piggy-backed for connectivity on the PSTN, the analog network. Today, for reasons of efficiency, all wide area networks share the same glass and copper fabric and most analog traffic is now encapsulated in digital. While much of that fabric is less than a decade old, it has taken us more than a century to achieve near world wide coverage. Surely a new separate network would exploit the existing fabric rather than attempt to replicate it.

For security reasons, it might be desirable for the networks to have different user populations. However, that would mean that a user of the alternate network could not use the public one. Not very likely.

The single public fabric that we use today emerged as a number of public and private networks coalesced around the Arpanet. When I first became an e-mail user, I had a list of tens of gateways and paths from the IBM network to other networks. We would use nested addresses of the form ((foo@foonet)@ibmgatewaytofoonet.com). Sometimes these addresses were two or three layers deep. An X.400 or proprietary address might be nested inside an IP address or vice versa. Routing through these gateways often required a great deal of special knowledge. Gradually those gateways gave way to intelligent routing. X.400 and other forms of addressing gave way to IP addressing.

The Internet is defined, and has evolved, as the collection of all networks that are connected to one another, that communicate in Internet protocols, or that are connected via gateways, think firewalls, that use that protocol. We did not set out to have one network; there was no design or intent. The Internet came about for economic reasons. The value of a network goes up with the number of potential connections. Therefore, the propensity of two networks to connect goes up with the square of their size. The unfortunate corollary to this is that, if we were able to provide a separate network, the users would respond to the economics by connecting them together again.
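The economics can be made concrete. Among n endpoints there are n(n-1)/2 potential pairwise connections, so merging two networks of sizes a and b creates a*b potential connections that neither network had alone; that gain is the pressure to reconnect:

```python
def pairs(n):
    """Potential pairwise connections among n endpoints: n(n-1)/2."""
    return n * (n - 1) // 2

# Two separate networks of 1,000 endpoints each...
a, b = 1000, 1000

# ...gain a * b new potential connections when joined.
gained = pairs(a + b) - pairs(a) - pairs(b)
print(gained)  # 1,000,000 new potential connections from merging
```

With the gain growing as the product of the two populations, the larger the "separate" network becomes, the stronger the economic pull to bridge it back to the public one.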

So for a number of cultural, technological, and economic reasons, a completely separate "alternate" network, no matter how desirable, seems unlikely. While still unlikely, a more viable alternative might be one or more virtual private networks (VPNs) exploiting the underlying fabric of the public networks.

Moreover, most of the advantages of such a network or networks can be achieved with much cheaper alternate mechanisms such as strong authentication, end-to-end encryption, and firewalls and other proxies. Even if there were hope for the kind of alternate network envisioned by Director Henry, it would still be our job to apply those mechanisms while we were waiting for it to emerge. It is also necessary to hide all of the information about the infrastructure controls that is gratuitously available to all but needed only by the few.
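To make "strong authentication" concrete, one standard mechanism is the time-based one-time password of RFC 6238, which a control and its few authorized operators can compute from a shared secret using nothing beyond a standard library. The sketch below is a minimal illustration, not a hardened implementation:

```python
# Minimal sketch of a time-based one-time password (TOTP, RFC 6238)
# using only the Python standard library. A remote party without the
# shared secret cannot produce a valid code within the time window.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6):
    """Compute the TOTP code for time t (default: now)."""
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

# Operator and control derive the same short-lived code independently.
print(totp(b"shared-secret"))
```

Combined with end-to-end encryption and proxies in front of the controls, this delivers most of what a "separate network" promises, at a fraction of the cost.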

The status quo is the result of a large number of individual but reversible choices. It is unacceptable. It is our job to fix it. For that we are called professionals and are paid the big bucks.