Thursday, December 12, 2013

CO-TRAVELER and Secret Government

Last week The Washington Post reported on a program that the NSA calls CO-TRAVELER.  Under this program, the NSA collects and stores geo-location "meta-data" from cellular service providers.  "The NSA claims that Executive Order 12333 allows the agency to collect cellphone location data, generating up to five billion records every day."

When I saw the report, I forwarded the link to two colleagues retired from the NSA.  One of them told me that the NSA did not create this capability but was only exploiting data that the carriers collected so that they could provide the emergency services (fire, rescue, police) with the geographic origin of 911 calls.  It may well be that NSA did not create the capability but they did create CO-TRAVELER, the program.  I seem to recall that the privacy advocates (e.g., ACLU, EFF, EPIC) speculated on precisely this "misuse" when this application to 911 was first discussed.

Note that the NSA does not use this capability in the same way that the emergency services do.  They are not so much interested in the origin of calls in real time.  Rather, they are interested in who may be geographically associated with a target individual in the past.  As in the so-called 215 program, the NSA collects all the data and stores it for an undisclosed period of time.  As in 215, the NSA asserts that they simply collect all this data on speculation, "on the come,"  that they hardly ever query it, that they are not looking at associations in general but only for associations to specific target individuals, and that there are controls in place to resist misuse and abuse.

(I am not the only security professional to assert that the mere existence of such a capability all but guarantees abuse and misuse.  Edward Snowden has demonstrated the limitations of these controls.  The only difference between Snowden and the other rogues in NSA is that he went public.)

This capability is not the problem.  The program is not the problem.  Surveillance is not the problem, at least not yet.  Secrecy and deceit are the problem.  This is one more example of the class of programs the very existence of which the administration repeatedly denied last spring.  While one can make a case for classifying sources and methods, secret government programs are antithetical to the Rule of Law.  That is a distinction that appears to have been lost on President Obama and Directors Clapper and Alexander.  

Wednesday, October 16, 2013

The Bank Secrecy Act

Recently BankInfoSecurity e-News reported that "A U.S. District Court in North Carolina last month dismissed its case against $2 billion CommunityOne Bank after the bank agreed to pay $400,000 toward restitution to victims of a third-party ponzi scheme that operated through accounts maintained at the bank."  The bank was being prosecuted under the so-called Bank Secrecy Act.

Orwell warned us that laws would be misnamed in order to make them seem less repressive. One might be led to believe that a law called the "Bank Secrecy Act" would punish banks for violating the confidence of their customers. Not so. Instead, what it does is immunize the banks from liability when, and only when, they snitch on their customers and, as in this case, punish them severely when they fail to do so.

The regulators would have you believe that this is in the interest of resisting the funding of terrorists, or even as in this case, fraud. Again, not so. While it may occasionally do one of those things, it is really in the interest of tax collection.

Privacy is dead. Your bank is not on your side. Now you understand the popularity of a thinly traded and volatile currency like Bitcoin.

Monday, October 14, 2013

The Price of Liberty

I recently received the following from a colleague.

"In the past few weeks some spirited discussions have sprung up amongst crypto spirits.

The consensus seems we can't even agree on how to secure, or implement, our existing Internet crypto architecture, let alone design and deploy one against Sovereign surveillance."

I was prompted to write:

The broader the application of crypto, the less effective but the more threatening to the governing class.

Crypto cannot protect us from the surveillance state.  We need both law and transparency to do that.  However, the pervasive use of crypto increases the state's cost and decreases its efficiency.

The state can read any message that it wants to; it cannot read every message that it wants to.  Every message that it reads, every person that it watches, is at the expense of another that it does not.  However, as the price of technology continues to fall, they will automatically increase their scope.  They will do so without any motive or intent but only because that is what bureaucrats do.  They believe that whatever they can do, they must.

That the effectiveness of crypto is limited in resisting state surveillance, is no reason not to use it.

All that said, I am not very sanguine.  The American people are fearful and their leaders weak.  The idea of a zero-risk society, based upon cheap technology, is too tempting for either to resist.

"The price of Liberty is eternal vigilance." The battle goes on, never fully lost or won.  Never surrender.

Wednesday, September 25, 2013

Strength of Materials

At the 2012 Colloquium on Information System Security Education in Orlando I was repeatedly reminded how much computer security education owes to, and has yet to learn from, engineering education.

For example, every engineering student takes a course called strength of materials.  In this course, he learns not only the strength of those materials that he is most likely to use but how to measure the strength of novel materials.  The student studies how, and in how many different ways, his materials are likely to fail.  He learns how to design in such a way as to compensate for the limitations of his materials.

A computer science or computer security student can get an advanced degree without ever studying the strength of the components that he uses to build his systems.  It may be obvious that all encryption algorithms are not of the same strength but how about authentication mechanisms, operating systems, database managers, routers, firewalls, and communication protocols?   Is it enough for us to simply know that some are preferred for certain applications?

Recall Courtney's First Law, the one that says that nothing useful can be said about the security of a mechanism except in the context of a specific environment and application.  In this construction, security is analogous to strength, environment to load or stress, and application to the consequence of failure.  Said another way, environment equates to threat or load, and application to requirements.

Computer science students are taught that their systems are deterministic, that integrity is binary, that security is either one or zero.  On the other hand, William Thomson, Lord Kelvin, cautioned that unless one can measure something, one cannot recognize its presence or its absence.  W. Edwards Deming taught us that if we cannot measure it, we cannot improve it.

One way to measure the strength of a material is by destructive testing.  The engineer applies work or stress to the material until it breaks and measures the work required to break the material.  Note that different properties of a material may be measured.  The engineer may measure yield, compressive, impact, tensile, fatigue, strain, and deformation strength.  

The strength of a security mechanism can be expressed in terms of the amount of work required to overcome it.  We routinely express the strength of encryption algorithms this way, i.e., as the cost of a brute-force or exhaustive attack, but fail to do it for authentication mechanisms, where it is equally applicable.  As with engineering materials, security components may be measured for their ability to resist different kinds of attacks, for example exhaustive or brute-force, denial of service, dictionary, browsing, eavesdropping, spoofing, counterfeiting, and asynchronous attacks.  While some of these attacks should be measured in "cover time," the minimum time to complete an attack, most should be measured in cost to the attacker.

There are now a number of ways in the literature for measuring the cost of attack.  The cost used should consider the value or cost to the attacker of such things as work, access, risk of punishment, special knowledge, and time to success.  Since these are fungible, it helps to express them all in dollars.  Of course, we will never know these with the precision with which we know how much work it takes to fracture steel, but we can measure them well enough to improve our designs.
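The arithmetic of such a dollarized cost of attack can be sketched as follows.  The rates, probabilities, and parameter sizes below are illustrative assumptions, chosen only to show how work, time, and risk of punishment can be combined on one fungible scale.

```python
# Sketch: dollarize the expected cost of an exhaustive attack so that
# work, time, and risk of punishment can be compared on one scale.
# All rates and probabilities below are illustrative assumptions.

def brute_force_cost(space_bits, guesses_per_second, dollars_per_hour,
                     risk_of_punishment=0.0, penalty_dollars=0.0):
    """Expected attacker cost, in dollars, of an exhaustive attack."""
    expected_guesses = 2 ** (space_bits - 1)      # on average, half the space
    hours = expected_guesses / guesses_per_second / 3600
    work_cost = hours * dollars_per_hour
    expected_penalty = risk_of_punishment * penalty_dollars
    return work_cost + expected_penalty

# A 4-digit PIN (about 13.3 bits) vs. a random 8-character password
# (roughly 52 bits), at an assumed 1,000 guesses per second and $50 per
# hour of attacker effort.
pin_cost = brute_force_cost(13.3, 1000, 50)
password_cost = brute_force_cost(52, 1000, 50)
```

Such a model will never be precise, but even rough dollar figures let one compare the strength of an authentication mechanism against that of an encryption algorithm, which is the point of the exercise.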

The Trusted Computer System Evaluation Criteria, The TCSEC, can be viewed as an attempt at expressing the strength of a component composed of hardware and software.  While an evaluation speaks to suitability for a threat environment, with a few exceptions, it does not speak to the work required to overcome resistance.  One exception is in Covert Channel analysis, where the evaluation is expected to speak to the rate at which data might flow via such a channel.  

Because it is often misused, a caution about the TCSEC is necessary.  The TCSEC uses "divisions."  The division in which a component is evaluated is not a measure of its strength.  Many fragile components are evaluated in Division A, while some of our strongest are in D.  In order to understand the strength of a component, to understand how to use it, one must read the evaluation.

We have two kinds of vulnerabilities in our components, fundamental limitations and implementation-induced flaws.  The former are more easily measured than the latter.  On the other hand, it is the implementation-induced flaws that we are spending our resources on.  We are not developing software as well as we do hardware, or even as well as we know how.

The engineers use their knowledge of the strength and limitations of their materials to make design choices.  They use safety factor and margin of safety metrics to improve their designs.  More recently, engineers at MIT's Draper Laboratory have proposed that "complex systems inhabit a 'gray world' of partial failure." Olivier de Weck,  associate professor of aeronautics and astronautics and engineering systems says,

“If you admit ahead of time that the system will spend most of its life in a degraded state, you make different design decisions,” de Weck says. “You can end up with airplanes that look quite different, because you’re really emphasizing robustness over optimality.”

Said another way, systems may be optimized for operation over time rather than at a point in time.  The more difficult it is to determine the state of a system at a point in time, the more applicable this design philosophy.  Thus, we see organizations like NSA designing and operating their systems under the assumption that there are hostile components in them.  

While most of our components are deterministic, none of our systems or applications are; they have multiple people in them, and interact in diverse, complex, and unpredictable ways.  Therefore, designing for degraded state may be more efficient over the life of a system than designing for optimum operation at a point in time.  We should be designing for fault tolerance and resilience.  We should be designing to compensate for the limitations of our materials.  

Of course, I am aware that my audience of information assurance and law enforcement professionals cannot reform computer security education or practice.  I will continue to advance that agenda in other forums.  What I hope for is that you will spend some of your professional development hours, effort, and study on the idea of strength and that it will inform and improve your practice.  It is in part because our education is a work in progress that we are called professionals and are paid the big bucks.

Thursday, September 19, 2013

Simulated Attacks Against RFID Credit Cards

Recently a colleague sent me this scary video illustrating an attack against contact-less (RFID) credit cards.  

Sigh.  It is bad.  It is not quite as bad as it sounds and only a little bit worse than it looks.

Watch the film again.  Focus on how close the attacker gets to the target.  Here is why.

The problem is not so much how the information is transferred as that it is transferred in the clear, not so much that the credit card number leaks as that credit card numbers are so easy to exploit. 

Said another way, all uses of credit card numbers in the clear leak;  this includes imprinters, compromised point-of-sale devices, gas pumps, and ATMs.   That would not be a problem if no one would accept a credit card number in the clear from an untrusted source.  

A major problem with the video is that it fails to distinguish these RFID cards, which rely on the short range of the signal for security, from EMV cards that rely upon encryption, or even chip cards that require contact.

While many US merchants are ready for EMV, the issuers have slipped their schedule to 3Q 2015.  My hope is that by that time PayPal, Google Wallet, Square Wallet, or other (are you listening Apple and Amazon?) mobile computer token-passing systems, will have made them obsolete.  

For the moment, we can treat this as a vulnerability but not a problem;  there are easier ways to get credit card numbers.  The continued use of mag-stripe and PIN dwarfs all other problems in the retail payment system.

Bait E-mails

According to reliable intelligence sources (e.g., the Verizon Data Breach Investigations Report), a large percentage of successful attacks against targets both of choice and of opportunity begin with bait messages (so-called "phishing" attacks).

How to recognize a bait message:

It appeals to curiosity, fear, greed, lust, sloth, etc.

It appears to be personal but is addressed to a large number of people.

It has an ambiguous subject, contains a one-liner and a URL, and appears to come from someone you know at gmail, hotmail, Yahoo!, etc., but from whom you were not expecting to hear.

It appears to come from PayPal, American Express, Chase, Amazon, or others with whom you do business but does not contain your name or account number. 

It pretends to come from a "security" department asking you to react to activity to your account or profile. 

It asks you to click on a button or URL within the message itself. 

Remember that any of these things may be legitimate but they are all suspicious. 

Remember that bait messages may be very artfully crafted.  They may contain logos, headings, footings, and other copy intended to make them look authentic. 
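The recognition heuristics above can be sketched as a simple scoring filter.  The keywords, weights, and threshold here are illustrative assumptions, not a tested detector; a real filter would need far richer features.

```python
# Sketch: score a message against the bait-message heuristics above.
# Keywords, weights, and the threshold are illustrative assumptions.

SUSPICIOUS_SENDER_DOMAINS = ("gmail.com", "hotmail.com", "yahoo.com")

def bait_score(sender, subject, body, addressee_count=1):
    score = 0
    if addressee_count > 20:                       # "personal" but mass-addressed
        score += 2
    if sender.rsplit("@", 1)[-1] in SUSPICIOUS_SENDER_DOMAINS and len(body) < 120:
        score += 1                                 # one-liner from a webmail account
    if "http" in body and len(body.split()) < 25:
        score += 2                                 # little more than a URL
    if "security" in sender or "verify your account" in body.lower():
        score += 2                                 # purported "security" department
    if not subject.strip():
        score += 1                                 # ambiguous or empty subject
    return score

def looks_like_bait(sender, subject, body, addressee_count=1):
    return bait_score(sender, subject, body, addressee_count) >= 3
```

As the text notes, any one of these signals may be legitimate; it is the accumulation of them that makes a message suspicious, which is why a score, rather than any single test, is the right shape for the heuristic.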

What to do with a suspicious message:

As little as possible.  Mere receipt of a suspicious message is not likely to hurt you.  It is clicking on things in it that will compromise your system. 

While it is not necessary, you may wish to alert the purported sender.  If it appears to come from an individual, return it to them with a subject line that says "Did you send this?" and a body that says, "If not, your e-mail account may be compromised.  Change your password."

If it appears to come from an enterprise, you may wish to forward it as an attachment to them.  Here are some useful addresses for that purpose:

If your victim is not in this list, Google their name with "fraud" and you will likely find it.

Wednesday, September 18, 2013

Thoughts on NSA Betrayal

“I trust the minions of the NSA not to commit treason.  I do not trust them not to commit fraud.”  --Robert H. Courtney, Jr. circa 1975

Like Bruce Schneier, I am not surprised about the kind or nature of NSA activity.   However,  I do not see it so much as a betrayal of the Internet as of the public trust.

I am surprised by the scale.  I am disappointed by the scale of silent, industry cooperation, whether coerced, immunized, or otherwise.

I recall that, when trying to get an export license for Lotus Notes Messaging, IBM negotiated a (reasonable?) compromise and then went before the RSA conference and disclosed and defended what they had done.  Said another way, if the activity is a legitimate response to a legitimate government need, then it can be done in a transparent and accountable manner.  (Admittedly not without cost.  The Lotus Notes compromise is widely criticized outside the US, styled as capitulation (a Yankee word for surrender), and IBM has lost hundreds of millions in international sales as a result.  The government has lost whatever advantage might have accrued to it from the use of a weakened Lotus Notes instead of stronger options.)

That brings us back to the arguments made against CLIPPER, i.e., back-doors inserted for the legitimate use of the government will inevitably be abused by  government and exploited by rogues.  (The thing that distinguishes Edward Snowden among NSA rogues is that he went public; god only knows what the others are doing.) Back-doors weaken the structure and the necessary trust in it. 

These back-doors have been put in by the same administration that has tried to create commercial advantage for US products by suggesting that the PLA has put back-doors in Chinese products.  Did they really believe that they could booby-trap US products and get away with it?   As with bragging about Stuxnet, as with unilateral recourse to armed force,  they have ceded the moral high ground. 

After 9/11 we loosed the intelligence community, always a dangerous thing to do.  It has been zealous in doing what we asked it to do.  It now has the bit in its teeth; it is going to be difficult to rein it in.  I believe that Directors Clapper and Alexander are great Americans, motivated by patriotism; we should be grateful for their service.  However, they have been corrupted by the power and secrecy in which they have cloaked themselves.  They have systematically deceived the American people, lied to Congress, subverted the courts, and corrupted American industry.  Like Snowden and others, they appeal for justification to a higher value.  However, they seem to have confused "national security" with their oath to defend the Constitution.  They can now serve best by retiring and making way for reform.

The necessary reform, transparency, and accountability will require new leadership, new leaders who will put the Constitution ahead of "national security."

Thursday, September 12, 2013

Internet Betrayal

On 5 September Bruce Schneier wrote in the Guardian "The US government has betrayed the internet. We need to take it back." 

This article was based upon access to information made available to the Guardian by Edward Snowden about signals intelligence activities of the NSA.   The information suggested that the NSA has systematically compromised cryptographic methods, keys, products, vendors, and systems on which the integrity of the infrastructure and the liberty of our citizens rely.  I was glad to have a reading of these papers by a trusted and eminently qualified colleague. 

While the activities reported were those that I had always expected the Agency to engage in, I was surprised by the extent and scope.  I was not surprised by the secrecy so much as by the deceit.  I was not surprised at what the Agency was doing but I was outraged at the permanent damage to the infrastructure that they were prepared to inflict in pursuit of their goals.  Along with Bruce, I felt betrayed by my government.  I was so angry, I sent a link to Bruce's article to a list of my colleagues.  Since the conclusions seemed obvious, I did not comment.

When one of my colleagues asked me for my thoughts, I sent him my most negative ones.  However, I did include the caveat that I was still ruminating on it and that these comments were still preliminary. 

Now, it is great fun, indeed great sport, to affect righteous anger at the perfidy of our government.  For a day I nursed my anger; in fact, I delighted in it.  However, I woke up in the middle of the night to a realization that I would like to share with you.

While it may be true that, as Bruce has said, “the government has betrayed the Internet,” for every system that the government has compromised, there are a hundred compromised by rogue hackers, and a thousand compromised by their users.  While crypto is our strongest security mechanism, the only one we have that is stronger than we need for it to be, the best that it can do is to bring the security of the middle to the level of the end points.  Crypto will never be stronger than the systems that protect the keys.

The government has not “broken crypto.”  While it may have deceived us, broken faith, it has, in the words of Adi Shamir, only “bypassed” crypto.   While it may have corrupted industry, that corruption has relied upon the silent cooperation of industry.  We have known since the disclosure of the warrant-less surveillance program that government had compromised the major carriers.  The motives of industry seem to include patriotism, greed, apathy, and fear.  Whatever the motives, they are sufficient to the day.

Whatever one may think about the activities of the government(s), it is we, the users and the corporations that we own and run, that have betrayed the Internet.  We do “need to take it back.”

One likes to think that we can expect better behavior of our government than of our adversaries.  (The US Congress has warned us against doing business with Huawei because Chinese PLA has subverted them.)  However, governments do what governments do; we cannot expect better of our government than of ourselves. 

We have compromised industry, government, and the Internet.  It is time to stop whining and “take back” all. 

This is all about transparency and accountability.  To the extent that NSA's activities are seen now as a "betrayal," it is because they have been cloaked in secrecy.  Secrecy is what government wants for itself; accountability is what government demands of citizens.  However, the inevitable result is a government of men.  A government of law can only exist in the light.

We must demand increased transparency at all levels of the society, government,  infrastructure, and industry.  Where the use of important controls is obscured by complexity, we must compensate by instrumentation and independent verification.  We must express the requirements for transparency and accountability at least as well as we do those for confidentiality, integrity, and availability and design and operate to satisfy them.  Not easy, not cheap, only necessary, necessary to economic efficiency and freedom.  Stop whining and get on with it. 

Monday, August 26, 2013

On the Ethics of Hacking

My favorite definition of "hacker" comes from golf: i.e., a determined, persistent, and self-taught amateur; an autodidact.  I have never quite trusted them to "call a stroke" on themselves.  

Similarly, I have always been cautious about computer hackers. Too many of them get off on the power.  Hacking is addictive.  Many hackers seem to see the Internet as a playpen where they are engaged in a game of "Gotcha," a game where they score points by exposing or embarrassing others.

I came of age in the fifties, when computers were scarce and dear, when it took a team of us to get them to do anything useful, when one's access was a function of the trust one had earned by contributing to the team.

Last week we had another case of a self-described "ethical hacker" being convicted of a crime.  The man was a member of his country's parliament.  I am sure he thought of himself as a "good guy."  I have met few people in my life, no matter how corrupt, who did not self-identify as good guys.  Most hackers think of themselves as good guys and many good guys think of themselves as hackers. How then to stay out of jail?  I have a few suggestions.

As a judge, here are some of the questions I might ask to distinguish between so called "ethical" and criminal hacking.

Was the hacker engaged in gainful employment?  If no one is willing to pay for an activity,  it is at least questionable.  Professionals do not work without pay. It is unfair competition with other professionals.  It takes food from the mouths of one's children.

Was the activity authorized by the owner of the network, system, application, or data?  Is there a record of this agreement, a letter or a contract?  Does it spell out the content of, and the limits on, the activity?  Such a letter might keep one out of jail.

Was the activity covert?  Is there anyone from whom the hacker might wish to conceal it?  Did it involve fraud or deceit?  If the activity were discovered while it was ongoing, would it surprise, embarrass, or frighten anyone?  Would it trigger alarms?  If one is shamed by one's activity, one has already judged it.

Was the hacker accountable?  Supervised?  Was he acting as part of a team or working with at least one colleague who could act as a check on, or vouch for the legitimacy of, what was done?  Was a record kept?  Was it attested to by two or more parties?  Acts authorized by one's employer are not always ethical; we settled that at Nuremberg.  Nor are all unauthorized acts necessarily unethical.  However, unilateral activity is always at least questionable.

Was data disclosed to anyone not already authorized to see it? While unauthorized disclosure might not be criminal, it is probably not ethical.

Was anything broken?  Did networks, systems, or applications stop working?  Was there a loss of availability?  Was there a loss of data integrity or confidentiality?  Trust?  Reputation?  Did the target, not to say victim, have to reallocate resources to remediation?  Did the target incur liability to others? 
Was there any threat or coercion?  Were the results of the activity used to get someone to do something that they might not otherwise do, or to do something earlier rather than later?  Coercion is rarely ethical and often criminal.

Now, there are special cases.  Almost every hacker claims one or more of these as justification for otherwise anti-social, not to say sociopathic, behavior.   For example, some hackers lie.  Their rationale is that they have an authorization and are being paid to do it, usually by someone else for whom the lie would be unethical.  They claim that they must lie because rogue hackers do "social engineering," and they must test the ability of the target organization to resist it.  As with most ethical dilemmas, one has to decide for oneself.  However, I already know that social engineering works against most organizations.  I do not need to engage in it to satisfy myself on that issue.  

Many rogue hackers excuse their activity as "research."  However, much of it is outside the tradition of science.   Labeling an activity as "science" does not excuse otherwise unethical behavior.  Science is conservative; it does not make things worse.  It does not increase vulnerability, instability, or risk.  While there are destructive experiments, they break things that belong to the scientist, not the property of others.  That the scientist does not own, and cannot buy, a network of his own, does not justify experiments that break the public networks or the private networks of others. 

Another very special case is national security, espionage and sabotage, activity that would fail many of the tests above.  Since few of us are engaged in such activity and fewer still will be called to account for it, can we agree to leave this case for another day? 

Some claim "civil disobedience," appealing to a "higher good," admitting otherwise questionable, even criminal, behavior but claiming that it is justified.  Perhaps.   However the burden of proof and responsibility is on them.  

Professionals are confronted with difficult ethical dilemmas; it goes with the territory.  Even with scrupulous ethical tests, one professional presumes when he undertakes to judge another.  However, it also goes with the territory that we must be scrupulous in our own behavior.  We may not be slipshod in our own behavior or make cheap excuses for ourselves.

It is a mark of the immaturity of our profession that we must deal with so much questionable behavior and so many pretenders.  Physicians have separated themselves from "quacks" and lawyers from "shysters."  Not only have we not separated ourselves from our amateurs, the hackers; we may secretly admire and excuse them.  Indeed, I systematically qualify "hacker" with "rogue" so that I do not have to listen to my colleagues defend questionable activity.  I look forward to the day when hacker is on the same list as quack, shyster, shrink, and other pretenders.


Thursday, August 22, 2013

Security with Persistent Threat but no Perimeter and no Edge

Whether one focuses on the consumerization of technology, "bring your own device to work," the "Advanced Persistent Threat," or merely the exponential growth in use, uses, and users of information technology, we really have reached a tipping point.  Our approach to information assurance is no longer working.  We cannot discern "the edge."  We cannot control the user device.  While the network is spreading and flattening, the perimeter is crumbling.  As the base of the hierarchy of authority, privilege, capability, and control is spreading, the altitude is shrinking.  The compromise of one system can compromise the entire network.  A compromise of a single user can compromise the entire enterprise.  We cannot afford, for all of our data, the protection indicated for the most sensitive.

Our traditional tools, user identification and authentication, access control, encryption, and firewalls are not scaling well.

My purpose is not so much to tell you what to do as to change the way you think about what you do.  I hope to change your view of your tools  and methods and the materials to which you apply them.

First and foremost, you must identify and segregate the data you most want to protect.  This will include, but may not be limited to, the books of account, intellectual property, and personally identifiable data.  You cannot protect all your data to the level that is required by these.  "Classification" of data is essential to effective and efficient security.

Prefer closed systems for this sensitive data.  Think AS/400 and Lotus Notes but you can close any system.  While most of our systems will continue to be open, these will always be vulnerable to contamination and leakage and not reliable for your most sensitive data.  Lotus Notes is both closed and distributed.  Trusted clients are available for many popular edge devices.

Consider single application client systems for sensitive network applications.  In an era of cheap hardware, it can be efficient to use different systems for on-line banking on the one hand, and web browsing or e-mail on the other.

Prefer object-oriented formats and databases to flat files for all sensitive data.  This should include Enterprise Content Management or Document Management systems, for example, Lotus Notes, SharePoint, or netdocuments.   The common practice of storing documents as file system objects is not appropriate for intellectual property or other sensitive documents.

Control access as close to the data source as possible,  i.e., at the server, not on the edge device.  Control access at every layer, edge device, application, network, database, and file.  Do not rely upon one layer for exclusive control of access to another.  For example, do not rely exclusively upon the application to control all access to the database.  The application controls should mediate what data is passed to the user but database controls should be used to mediate what data is to be passed to the application.
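The layering rule above can be sketched in miniature: the application mediates what a user sees, while the database independently mediates what the application may read, so neither layer relies exclusively on the other.  The names, grants, and tables here are hypothetical.

```python
# Sketch: access mediated at two layers, application and database,
# so that neither relies exclusively on the other.
# Application names, users, and tables are hypothetical.

class Database:
    """Database-layer control: which application may read which table."""
    GRANTS = {("payroll_app", "salaries"), ("payroll_app", "employees")}

    def read(self, app_id, table):
        if (app_id, table) not in self.GRANTS:
            raise PermissionError(f"{app_id} may not read {table}")
        return f"rows from {table}"

class PayrollApp:
    """Application-layer control: which user may see which data."""
    def __init__(self, db):
        self.db = db
        self.authorized_users = {"hr_clerk"}

    def salary_report(self, user):
        if user not in self.authorized_users:
            raise PermissionError(f"{user} may not run the salary report")
        # Even an authorized user reaches the table only through the app,
        # and the app itself must hold a database-layer grant.
        return self.db.read("payroll_app", "salaries")
```

A rogue application, or a rogue user, each fails at its own layer; compromising one layer does not by itself open the data.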

Prefer application-only access, not file system, not database management systems, not device.  Prefer purpose-built application clients; think "apps."  Said another way, the application should be the only way for a user to access the data.  The user should not be able to bypass the application and access the data by other methods or tools.

Prefer end-to-end encryption, that is, from a known edge client to the application, not to the network, not to an operating system.  Said another way, when a user opens a VPN, he should see an application, not an operating system command line, not a desktop.  While there are many ways to accomplish this, for existing applications an easy way is to hard-wire an encrypting proxy, a router, in front of the application.  The cost of such a router will range from tens of dollars to low tens of thousands, depending upon the load.  A limitation of this control is that what appears to be the edge device may be acting as a proxy for some other device.  While we can know that data passes to a known device, we cannot know that it stops there.
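For new applications, mutual TLS gives a similar end-to-end property in software: the server refuses any client that cannot prove possession of a key provisioned into a known edge device.  A minimal sketch using Python's standard ssl module; the function name and file paths are placeholders of mine, not anything from the post:

```python
import ssl

def make_mtls_server_context(certfile=None, keyfile=None, client_ca=None):
    """Build a server-side TLS context that requires a client certificate.

    The file paths are placeholders; in practice the client CA would
    certify only known, trusted edge clients.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile and keyfile:
        ctx.load_cert_chain(certfile, keyfile)   # the application's own identity
    if client_ca:
        ctx.load_verify_locations(client_ca)     # the CA for trusted clients
    # Reject any client that does not present a verifiable certificate.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

Note that, as the paragraph above says, this authenticates the device at the far end of the tunnel; it cannot tell you what sits behind that device.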

Prefer strong authentication for sensitive data; consider the edge device identity, for example, EIN or MAC address, as one form of evidence.  Consider one-time passwords, delivered out-of-band to the user or generated by a token, to resist replay.  Check out Google two-factor authentication as an example.  (It takes advantage of the fact that the typical hand-held computer ("smartphone") has addresses in both public networks, the telephone network and the Internet.  Thus, when I want to log on to Google, I am prompted for my e-mail address and my password.  However, these are not sufficient; knowing them would not enable you to impersonate me.  I am then prompted for a six-digit number, a one-time password, that Google has sent, in text or spoken language, to a telephone number of my choice, provided to Google by me at enrollment time.)  Consider the user's hand-held or other edge device as the token.  Both RSA and VeriSign offer complete solutions for this.
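The token-based alternative to Google's out-of-band code is usually an HMAC-based one-time password, the HOTP/TOTP scheme of RFCs 4226 and 6238 that underlies authenticator apps and many commercial tokens.  A sketch using only the standard library; the secret shown is the RFC 4226 test key, not a real one:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238: HOTP with the counter derived from the clock."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test secret; counters 0 and 1 yield 755224 and 287082.
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the counter (or clock) advances, an intercepted code is useless for replay a few moments later.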

Control the data rate at the source; prefer one record or page at a time.  One may not be able to prevent the user from "screen scraping" and reconstructing the document or database at the edge but one can resist it, the stick.
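Metering at the source is commonly implemented as a token bucket: each record served spends a token, tokens refill at the permitted rate, and a burst allowance caps how much can be pulled at once.  A minimal sketch; the class and parameter names are illustrative:

```python
import time

class TokenBucket:
    """Allow at most `rate` records per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill in proportion to elapsed time, then spend if we can.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One page per request: a legitimate reader never notices the limit,
# but a scraper reconstructing the whole database is slowed to a crawl.
bucket = TokenBucket(rate=1.0, capacity=5.0)
```

This resists, rather than prevents, wholesale copying at the edge, which is exactly the modest claim made above.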

Provide a high level of service, the carrot.  You can make any control or restriction at least tolerable provided that you couch it in a sufficiently high level of service.  Remember that most leakage is of gratuitous copies.  These copies are made to trade off cheap local storage against scarce bandwidth and/or high network latency.  The faster you can deliver data from the source, the fewer copies will be made at the edge.  

Involve multiple people in privileged access and control.  System administrators and other privileged users have become favored targets and means for compromising the enterprise.  Tools and methods are available to exercise control over them and to provide the necessary accountability.  These range from sudo at the Unix system level to Cyber-Ark at the network or enterprise level.

These measures focus on the data rather than the technology.  They address malice, for example, commercial espionage sponsored by states or organized crime, and errors and omissions, for example, leakage or opportunistic contamination at the edge.  They address outsiders, the largest source of attack, and insiders, the largest source of risk.

They are not for those who look for or believe in "magic bullets."  They may be too expensive for all the data in the enterprise but efficient, and perhaps even mandatory, for the most sensitive data.  It is for drawing the line between the classes of data and applying the appropriate measures that we are called professionals and are paid the big bucks. 

Tuesday, August 13, 2013

Consumerization of Information Technology

One of the things that I try to bring to this blog is historical perspective.  I argue for the importance of history, that if we do not know where we came from, we cannot appreciate where we are, much less where we are going.  I have been here longer than the average bear.  I can see things across time that are difficult to appreciate at a point in time.

When I was selling computers for IBM, chief executive officers did not have the discretion to buy a computer.  It was an economic decision for the enterprise comparable to that of building a new plant or committing to a new product.  It was a board level decision.  While the CEO could say "no," he could not unilaterally say "yes."

As the scale of the technology has changed, as its price has fallen and its efficiency has exploded, the decision making has moved.  For almost a generation, we matched the scale of the computer to that of the enterprise.  Each enterprise had one computer, the most powerful that it could afford.  The decision was made in the executive suite.

By the time that the "minicomputer" came on the market, the decision had fallen to the level of the department.  We did not consciously make a decision to do that.  It was simply a reflection of the scale, price, and efficiency.  However, until very recently, most computers used in the enterprise were still purchased, owned, and managed, not to say controlled,  by the enterprise.

Recently we passed a tipping point;  most computers are now purchased, owned, and to the extent that they are managed, by individuals, by consumers.  We buy them at Wal-Mart and Costco, next to groceries, diapers, paper towels, and bottled water.  Because they are so cheap and so powerful, they are used for things that we could not have imagined as recently as a decade ago.  They are driving other devices, particularly single purpose devices, from the market.

As I sit here, there are seven computers within 5 feet of me and nine screens within 9 feet.  They are all connected and interoperable.  Moreover, to a first order approximation, they are connected to, and will inter-operate with, any and every computer in the world.  These do not count the application-only computers like my cable box, Sling-box, and "Smart-TV;"  they all "boot" so I assume that they are "computers."

As I sit here, I am waiting for one great niece to decide between a Kindle Fire and an iPad and am replacing an iPhone for another who dropped hers in the toilet at the mall.  The discretion, the decision making power, has now fallen to the children.  Remember?  The decision is made one level below the guy who signs the order, the check or the credit card?  I only pay, the kids decide.  Their decisions impact the enterprise and the infrastructure, those things that you and I are expected to control and protect.

Infants use computers.  I choose the term "use" advisedly.  They use them for their "work," at their age indistinguishable from "play," learning to master their environment.  They project the capability of one computer as requirements on another.  They "swipe" across TV screens and even magazine pages.  Seven-year-olds write critical reviews of applications, and teen-agers know more about computers than the information technology elites of a generation ago; different things perhaps, but more.  They have been called "digital natives" but digital savages might be more accurate.

There are some things that are beneath their level of notice.  For the most part they are agnostic as to where an application runs and its data is stored.  They are oblivious as to what we used to call "speeds and feeds."

It is almost impossible to remember that the first iPhone came out only six years ago and that about all it could do was phone calls, e-mail, and browsing.  Oops, I forgot; play music.  Apple and Google now have a couple of major announcements and ship dates a year.  Just to keep up!  Teens track the features in new versions of iOS the way my generation tracked new car models.  By the time that YOU have figured out the security implications of one new product, another has shipped.

I remember when I had to keep a list of e-mail gateways and use embedded addresses to get from one domain to another.  No longer; the address space has flattened.  Now I keep a list, shorter, but still a list, of application proxies to get me around fire-walls and other security restrictions.  When I complained that the Naval Postgraduate School blocked my access to AOL Instant Messenger, two students quietly gave me the addresses of two different proxies.  Proxies now come plug-n-play-in-a-box or simply run as services in the Internet.

One niece and nephew go to a very traditional school, elite, but so traditional that they are still expected to carry fifty pounds of paper in and out of school every day.  They can take their iPhones, but cannot use them, and iPads and MacBooks must still be left at home.  So, they use Dropbox, Evernote, and thumb-drives.  No matter what controls or road-blocks we throw in their way, they will get around them.  Savages.  Not (yet) civilized.

The good news is that there are only two popular operating systems for the most popular consumer products, right?  iOS and Android?  All you have to know about, right?  The bad news is that there are dozens of versions of Android, all different, most open.  

There is more bad news.  RIM has not gone away.  Windows Mobile has hardly gotten here.  PlayStations and Xboxes are becoming richer and more open.  Even PlayStation Portables and DS Lites are being opened some.  Proxies and servers are popping up everywhere to expand their capabilities even further.

As I write this on Evernote, I am using the Windows Evernote client on my Dell, but I am using the screen and key-board on my MacBook Air.  In order to find the IP address of the Windows system across the room, the MacBook goes to an addressability server in the Internet, perhaps thousands of miles away, where the Dell has published its address.

The devices at the edge are becoming smaller, cheaper, more diverse, and more powerful, at an exponential rate.  Now it is not news that one can buy gigabytes on a chip the size of one's pinky nail for dollars per gig, or that one can buy a terabyte to fit in one's shirt pocket for under $100.

All of this is by way of saying that you cannot prevent contamination and leakage at the edge.  You no longer own or control the edge.  You cannot even see it.  It has been a battle to see it since the edge began to include PCs but it is now clearly a lost cause.  Technology changes so rapidly that it is often obsolete before we can figure out how to control it.  Trying to achieve adequate protection by using or controlling the edge technology has probably been the wrong strategy all along.

In a future blog I will suggest an alternate approach.

Tuesday, February 26, 2013

Security of Enterprise Mobile Apps

A colleague invited my attention to this article.  I was engaged by this headline:


"The best way to keep mobile apps safe is to secure the services they connect to."

Perhaps.  In any case, this is a good treatise on the security of client-server applications.

However, the quote seems to suggest that the risk is that client mobile apps are being contaminated by connecting to rogue services.  In fact, the risk to the enterprise is more likely that rogue or compromised apps on mobile devices will leak sensitive data into the network.  Even that risk ranks after the risk to the user that rogue apps will incur charges; this is one way that rogue apps are being monetized.

Therefore, the issue for the enterprise is not protecting the client app from the server, or any server, but protecting the application and its data on the server.  The best way to do that is to ensure that the server will only accept connections from known and trusted clients.  Said another way, use crypto to authenticate the code in the app to ensure that it is the code that you think that it is; then use crypto to authenticate the client application and bind it to the server end-to-end. 
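One simple way to bind a known client application to the server is a keyed MAC over each request, with the key provisioned only into the trusted app.  This is a sketch of the idea, not the author's design; the key and request bodies are invented:

```python
import hashlib
import hmac

# Placeholder key: in practice it is provisioned into the trusted client at
# build or enrollment time, not shipped in a public app-store binary as-is.
SHARED_KEY = b"provisioned-into-the-trusted-client"

def sign_request(key: bytes, body: bytes) -> str:
    """Client side: tag the request so the server can verify its origin."""
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def server_accepts(key: bytes, body: bytes, tag: str) -> bool:
    """Server side: recompute the tag; compare in constant time."""
    return hmac.compare_digest(sign_request(key, body), tag)

tag = sign_request(SHARED_KEY, b"GET /records/1")
print(server_accepts(SHARED_KEY, b"GET /records/1", tag))   # True
print(server_accepts(SHARED_KEY, b"GET /records/2", tag))   # False
```

A per-client key, or a client certificate as in mutual TLS, extends the same idea to telling the server which trusted client is connecting, not merely that some trusted client is.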

The owner (not necessarily the user) of the mobile device must get the client app from trusted sources, e.g., iTunes, the enterprise itself, and protect it from contamination or compromise from other apps.  (If the enterprise does not yet know how to protect its servers, this discussion  is premature.)  Again, trusted apps from trusted sources via trusted transport or packaging.  (This assumes that the enterprise has a sufficiently well-controlled development process that it can produce application programs that do what, and only what, it intends.) 

To protect against any unacceptable residual risk of a rogue application on the mobile device, one should prefer a mobile device operating system, e.g., iOS, that provides good process-to-process isolation.  For highly sensitive applications one should use a mobile device dedicated to that application.  Hardware is cheap.  This is a cost of high security and must be balanced against the risk or sensitivity of the application.  (One should not use a shared device and then whine about its operating system.)