Friday, August 22, 2014

Managing Insider Risk

"Outsiders damage the brand; insiders bring down the business."

"We use automated controls over insiders only to the extent that they are more efficient than management supervision; under no circumstances are they a substitute for supervision."

Management of insider risk is not for the indolent, ignorant, or incompetent.  It requires diligence, special knowledge, and skill.  Here are some ideas that you may find useful.

Focus controls on the early detection and correction of errors.  Not only will such controls resist malice, but they will also reduce the temptation that arises when employees make errors and realize that those errors go undetected.

Focus controls on executives, officers, and managers rather than on clerks and tellers.  History suggests that we often focus on those likely to steal little and be caught early, rather than on those able to destroy the business and likely to be caught late.

Ensure that supervisors have the necessary knowledge, skills, and abilities to perform and assess the duties of subordinates.  Historic losses from insider errors or malice have involved employees whose superiors did not understand what they did.

Structure duties and roles such that one person, simply performing his assigned duties, without doing anything heroic or exercising extraordinary judgment, acts as a control over others.  This arrangement detects errors and omissions, and discourages and detects malicious acts.

Separate origination from approval, record creation from maintenance, and custody of assets from the records about those assets.  These rules are as old as double-entry bookkeeping and originate with the same little monks.

Require the cooperation of two or more people to exercise extraordinary privileges or capabilities.  No one should have been able to do what Edward Snowden appears to have done.
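In code, such a two-person rule can be as simple as refusing to act without two distinct approvers.  Here is a minimal sketch in Python; the action and the approver names are hypothetical, not from any particular system:

    # Sketch: two-person control over an extraordinary capability.
    # The action and approver names are illustrative only.
    def execute_privileged(action, approvals):
        """Run the action only if two distinct people have approved."""
        approvers = {a.strip().lower() for a in approvals}
        if len(approvers) < 2:
            raise PermissionError("two distinct approvers required")
        return action()

    # The requester alone cannot trigger the action.
    execute_privileged(lambda: print("master key released"),
                       approvals=["j.smith", "m.jones"])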

Consider the rule of "least possible privilege" when granting access and authorizing capabilities.  Said another way, employees should have only those privileges and capabilities necessary to carry out their assignments. Guard against the accretion of privileges as employees move from role to role through their careers.
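One way to guard against accretion is to compare, periodically, what each employee holds against what the current role actually requires.  A small Python sketch, with hypothetical roles and privileges:

    # Sketch: detect privilege accretion after a change of role.
    # Roles, privileges, and the employee are illustrative only.
    ROLE_PRIVILEGES = {
        "teller":  {"read_accounts", "post_deposit"},
        "auditor": {"read_accounts", "read_logs"},
    }

    def excess_privileges(current_role, held_privileges):
        """Privileges held beyond what the current role requires."""
        return set(held_privileges) - ROLE_PRIVILEGES[current_role]

    # A former teller, now an auditor, who kept an old grant:
    print(excess_privileges("auditor",
                            {"read_accounts", "post_deposit", "read_logs"}))
    # -> {'post_deposit'}: a candidate for revocation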

Use automatic alerts and alarms. Direct them to those best able to recognize the need for corrective action and with the authority to take it. Distribute them such that each person has to deal with only a few a day. Require that individuals make a record of the disposition of every alert and alarm.
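A sketch of such routing and disposition in Python; the alert types, owners, and log are hypothetical stand-ins for whatever the installation actually uses:

    # Sketch: route each alert to a named owner and require a
    # recorded disposition before it may be closed.
    import datetime

    ROUTING = {"wire_over_limit":    "ops.manager",
               "failed_login_burst": "security.analyst"}
    DISPOSITION_LOG = []  # in practice, an append-only audit record

    def raise_alert(kind, detail):
        return {"kind": kind, "detail": detail,
                "owner": ROUTING[kind], "disposed": False}

    def dispose(alert, note):
        alert["disposed"] = True
        DISPOSITION_LOG.append((datetime.datetime.utcnow().isoformat(),
                                alert["owner"], alert["kind"], note))

    a = raise_alert("wire_over_limit", "acct 1234, $2.1M")
    dispose(a, "confirmed with account officer; legitimate")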

Instruct all employees to bring all anomalies and variances from expectation to the attention of at least two people, including one manager and a member of the audit or security staff.  Be sure to treat all such reports, and reporters, with respect; dismissing them will discourage future reporting.

Measure and report on performance; changes in performance are suspicious.  However, "If the numbers are too good to be true, they are not true."   Massive frauds, including Barings Bank, Enron, and Equity Funding, all began with glowing revenue numbers.  Management fraud has resulted from attempts to keep beating earlier numbers.

Rotate employees in assignments and enforce mandatory vacations; continuity is often necessary to mask malicious activity.  Officers who come into the office when they are supposed to be on vacation should be viewed as suspicious rather than diligent.

Compensate employees in a manner that is consistent with the amount of economic discretion that they exercise.  Underpaying is corrupting.

Use invoices, statements, confirmations and other communications to and from customers, suppliers, investors, and taxing authorities to control insider risk.  While these controls operate late, and may be seen by the media as relying upon chance, they are legitimate, effective, and efficient; management is entitled to rely upon them.  Automatic transaction confirmations, i.e., confirmations not under the control of the originator, sent by e-mail or SMS are both timely and cheap.
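A sketch of such a confirmation, fired by the system of record rather than by the originator; the gateway function and the account data are hypothetical:

    # Sketch: a confirmation the originator cannot suppress or redirect.
    def send_sms(number, text):          # stand-in for a real SMS gateway
        print(f"SMS to {number}: {text}")

    def post_transaction(account, amount):
        # ... record the transaction in the system of record ...
        # The confirmation fires unconditionally, addressed from data
        # on file rather than data supplied with the transaction.
        send_sms(account["owner_phone"],
                 f"${amount:,.2f} posted to account {account['number']}")

    post_transaction({"number": "1234", "owner_phone": "+15551234567"},
                     2500.00)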

Say "please" and "thank you." With few exceptions, unhappy insiders believe that their contribution is not recognized or appreciated by management.

Revoke all access, privileges, and capabilities immediately upon termination or separation.  Of course, this requires that one keep track of what they are.
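Keeping track is easier if every grant passes through one ledger.  A minimal Python sketch, with hypothetical systems and privileges:

    # Sketch: a grant ledger that makes revocation at separation complete.
    GRANTS = {}  # employee -> set of (system, privilege)

    def grant(employee, system, privilege):
        GRANTS.setdefault(employee, set()).add((system, privilege))
        # ... call the target system's own provisioning interface here ...

    def revoke_all(employee):
        for system, privilege in GRANTS.pop(employee, set()):
            print(f"revoking {privilege} on {system}")  # stand-in for the real call

    grant("j.smith", "payroll", "approver")
    grant("j.smith", "vpn", "login")
    revoke_all("j.smith")   # nothing is forgotten because nothing was untracked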


Monday, August 4, 2014

Defensive Ethical Hacking

In 2006 Eric McCarty pleaded guilty to a SQL injection attack against a database at the University of Southern California.  The prosecutor and the court rejected McCarty's defense that he was a "security consultant" just doing what such consultants do.  His defense counsel claimed that he had acted responsibly by giving the records of only seven people to a reporter.  By pleading guilty, McCarty avoided jail and served only six months of house arrest.

Several years earlier, while working on a gap analysis at a major media conglomerate, I became aware of a penetration test by a competitor that ran amok.  It seems that, after successfully penetrating file servers, the consultant arbitrarily extended the test to include an AS/400 on the client's network, triggering multiple alarms and involving the FBI.

These are only two examples of so-called "ethical" hacking that went awry.  Without addressing the issue of whether "ethical" is a matter of motive or behavior, I have always had a set of defensive rules that I have imposed upon myself, my clients, and my associates that are intended to, among other things, keep me out of courtrooms and jails.

The first of these rules is that I do not engage in covert or clandestine activities.  My client, including all his personnel, must know about and acknowledge all the activities in which I am to engage.

I do not engage in fraud, deception, or other forms of social engineering, not even for money.  I already know that these attacks will work; they have worked throughout human history.  I do not need to embarrass the client or his people to demonstrate that I am a proficient liar.

I do not work without a contract or letter of agreement.  Such a letter is part of my authority to do what I do.  It also demonstrates that both the client and I understand the extent and limitations of that authority.

I do not work for free.  There is little better proof that I was engaged by the client to do what I did than his check.  McCarty had no letter of agreement, much less a check.  Out of respect for my professional colleagues, I do pro bono work only for bona fide non-profits.  I price my work at my normal rates and require that the beneficiary acknowledge my contribution with a receipt.

I do not work alone.  I prefer to work with the client's people; failing that, I work with my associates.  Not only are my collaborators potential witnesses for the defense, they act as an ethical check on my behavior.  One is far less likely to cross an ethical line with another watching.

I do not share the client's data with others not expressly authorized by the client to see it; not even with the authorities.  If the state wants my client's information it must get it from him, not me.  Short of torture, it will not get it from me.  (I do not contract or commit to resist torture; even if I knew my own capacity to resist it, I would not know how to price it.)

Not all my clients, or even my associates, like all of these rules all the time.  A client may think that disclosing all of my activities to his employees in advance defeats his purpose.  There are those in my profession who deceive client personnel for the purpose of discovering vulnerabilities or demonstrating naivete.  If a client wants that done, he should engage those professionals.  Some of my associates may feel that such activities are effective or that always working with others is inefficient.

I will not knowingly or willingly engage in any behavior, such that if I were caught in the act of that behavior it might embarrass or alarm me, my associates, the client, or the client's people.

These rules may increase my cost of service or even reduce my potential revenue.  However, they are both defensive and conservative.  They act early to help me avoid ethical conflicts and assist me late in resolving such ethical dilemmas as may arise in the course of an engagement.

They have served me well.  They might have saved McCarty from conviction.  I commend them to you.

Sunday, August 3, 2014

Please do not say "Two Factor"

Thirty years ago I wrote a list for my staff to address what I thought was sloppy and problematic use of special language.  It was of the form "Please do not say _______ when you really mean _______."  I cannot even remember many of the entries but one was "Please do not say 'privacy' when you really mean 'confidentiality.'" Another was "Do not say 'secure' when you mean 'protected.'"  While the distinctions may seem small, they are nonetheless useful.

In the spirit of that list, I would like to suggest that one should not say "two-factor," or "multi-factor" authentication when what one really intends is "strong authentication."  Strong Authentication is defined as "at least two kinds of evidence, at least one of which is resistant to replay."  Thus, all strong authentication is two-factor but not all two-factor authentication is strong.

For example, a password plus a biometric is clearly two-factor but might not be strong.   The combination is more resistant to brute force attacks than a password alone but might be no stronger against a record-and-replay attack than the password alone. We rarely see brute force attacks any more, but credential replay attacks are a major problem.  If all one wants to do is resist brute force, adding bits to the password is likely to be more efficient than adding a biometric.
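The arithmetic favors the password.  Each character drawn at random from a sixty-two character alphabet adds about six bits; a short Python illustration, where the numbers rather than any particular product are the point:

    # Sketch: what adding characters buys against brute force.
    import math

    charset = 62                            # upper, lower, digits
    bits_per_char = math.log2(charset)      # ~5.95 bits per character
    for extra in (1, 2, 4):
        print(f"{extra} extra char(s): +{extra * bits_per_char:.1f} bits, "
              f"work x{charset ** extra:,}")
    # Four extra characters multiply brute-force work by ~14.8 million,
    # and cost nothing against replay -- which is the real problem.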

If one accepts that record-and-replay attacks are the greater problem, then one wants a second factor that resists replay, something like a one-time password (OTP), whether token-based or sent out-of-band to a phone or mobile computer.
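For the curious, the mechanism is simple enough to sketch.  A time-based OTP per RFC 6238 (over RFC 4226) derives a short code from a shared secret and the clock, so that a recorded code expires within seconds.  The secret below is illustrative only:

    # Sketch: a time-based one-time password (TOTP).
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = int(time.time() // interval)   # changes every 30 seconds
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                  # RFC 4226 dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # a replayed code dies with the interval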

The term "two factor" enjoys so much currency that it suggests that any second form of evidence is as good as any other.  The irony is that RSA, the vendor of one of the original and most popular OTP tokens, is one of the sources of that currency.  However, when they spoke of two factors, the first factor was the OTP; the second was a PIN used to resist fraudulent use of a lost or stolen token.

One popular "second factor" with banks is challenge-response based upon shared secrets.  The secrets are established at enrollment time.  One popular method is to ask the user to select a number of questions from a list and record his own answers to them.  Questions may be similar to "What was the name of your first pet, school, or playmate?"  "In what hospital or city were you born?"  "What were the names of your grandparents?"  "What was the mascot of your high school?"  Answers should be easy for the subject to remember but not obvious, except perhaps to an intimate.  At authentication time one question is chosen at random.  This method can actually be resistant to replay, provided that the set of questions is large enough relative to how often they are used.
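Enrollment and challenge selection are easy to sketch.  The storage and matching details below are simplified illustrations, not any bank's actual scheme:

    # Sketch: shared-secret challenge-response.
    import hashlib, secrets

    def enroll(answers):
        """answers: {question: answer}; store only salted hashes."""
        book = {}
        for q, a in answers.items():
            salt = secrets.token_hex(8)
            digest = hashlib.sha256((salt + a.lower().strip()).encode()).hexdigest()
            book[q] = (salt, digest)
        return book

    def challenge(book):
        return secrets.choice(list(book))    # one question, chosen at random

    def verify(book, question, answer):
        salt, digest = book[question]
        return hashlib.sha256((salt + answer.lower().strip()).encode()).hexdigest() == digest

    book = enroll({"First pet?": "Rex", "City of birth?": "Omaha"})
    q = challenge(book)
    print(q, verify(book, q, "Rex" if "pet" in q else "Omaha"))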

One bank started using this method only for large transactions, those above a threshold value.  However, they figured that if it was good for large transactions, it would be better for all, and lowered the threshold to zero.  Because the set of questions was not large enough for this kind of use, all the answers for some accounts were soon compromised.
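The arithmetic of that failure is straightforward.  With q enrolled questions and one asked at random per transaction, the chance that a given question has not yet been seen by an eavesdropper after n transactions is (1 - 1/q)^n.  A quick illustration, with assumed numbers:

    # Sketch: exposure of a small question set to an eavesdropper.
    q = 5                                  # enrolled questions (assumed)
    for n in (10, 20, 50):
        unseen = (1 - 1 / q) ** n
        print(f"after {n} uses: {unseen:.3%} chance a given answer is still unseen")
    # after 50 uses: ~0.001% -- with daily use, the whole set leaks
    # within weeks, which is roughly what the bank discovered.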

The Verizon Data Breach Investigations Report (DBIR) demonstrates that the use of strong authentication would have resisted many of the breaches reported upon.  Because it is so powerful, we should be encouraging its use by all available means.  Those means should include distinguishing between it and mere multi-factor authentication.

Good Security Practice for Programmers

This is one of a series of posts on "Good Data Processing Security Practices."  The context for the series can be found here.  The following practices and controls are for enterprise development programmers, the individuals who produce the computer programs on which enterprise managers wish to rely.  Like other posts in this series, this post suggests useful separations of duties to ensure quality and fix accountability.


An individual programmer should not both specify and write a procedure.

He should not both write and test a procedure.

He should not both create and maintain a procedure.

He should not name the procedures that he writes.  (Program names are analogous to account numbers, which are normally assigned as part of the approval by management or by a designee separate from the originator.)

He should not both write and execute a procedure (exception: data local to himself, as in testing or personal computing).

He should not both program and maintain the program library (exception: where they do all maintenance to that library).

Programmers should have personal copies of specifications, data definitions, source code, test data, test results, load modules, and object modules.  All transfers between the programmer's personal libraries and project or production libraries should be controlled by someone else.

The above represents the ideal.  Because of limitations of scale, it may not be realizable in all installations.  However, under no circumstances should one person specify, write, test, name, maintain, and execute the same program.
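A sketch of the last of those separations enforced in software; the gate, the names, and the library are illustrative, not a description of any particular change-control product:

    # Sketch: the author of a program may not approve its promotion.
    def promote(program, author, approver, production_library):
        if approver == author:
            raise PermissionError("author may not approve his own promotion")
        # The production name is assigned at approval, not by the author.
        production_library[program["name"]] = program
        return f"{program['name']} promoted by {approver}"

    prod = {}
    print(promote({"name": "AP1234", "source": "..."},
                  author="lee", approver="kim", production_library=prod))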

On Nation States and the Limits of Anonymity - Tor

As a general rule, society has a preference for accountability.  For this reason, governments discourage anonymity.  Among the exceptions to this rule is citizen communications in resistance to government.  In this context, governments in general, and police states in particular, abhor anonymity.

Tor (formerly TOR ("The Onion Router")) is a tool for providing anonymity in the Internet.  It uses thousands of contributed routers, communicating using nested encryption, along a randomly selected path, such that when the communication finally appears in the clear, it cannot be traced back to its origin.  It raises the general problem of attribution in the Internet to a whole new level.  Its uses range from hiding browsing activity from routine state surveillance to hiding criminal or revolutionary communications.  
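The layering is easy to illustrate.  The sketch below uses Fernet from the third-party "cryptography" package as a stand-in for Tor's actual circuit cryptography; it shows the nesting, not the real protocol:

    # Sketch: nested ("onion") encryption across three relays.
    from cryptography.fernet import Fernet

    hop_keys = [Fernet.generate_key() for _ in range(3)]   # three relays

    # The sender wraps the message once per hop, exit key innermost.
    onion = b"meet at noon"
    for key in reversed(hop_keys):
        onion = Fernet(key).encrypt(onion)

    # Each relay peels exactly one layer; only the exit sees cleartext,
    # and no single relay knows both the origin and the destination.
    for key in hop_keys:
        onion = Fernet(key).decrypt(onion)
    print(onion)   # b'meet at noon'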

The following news item recently appeared:

--Russian Government Seeking Technology to Break Tor Anonymity (July 25 & 28, 2014)
The Russian government is offering a 3.9 million rubles (US $109,500) contract for a technology that can be used to identify Tor users. Tor was initially developed by the US Naval Research Laboratory and DARPA, but is now developed by The Tor Project, a non-profit organization. Tor is used by journalists and others who need to keep their identities hidden for their own safety; it is also used by criminals for the same purposes. The entrance fee for the competition is 195,000 rubles (US $5,500).


In my role as a member of the editorial board of SANS Newsbites, I made the observation that:

"In his most recent novel, Richard Clarke implied that NSA had targeted and broken TOR."

A reader responded in part:

"...more out of curiosity, didn’t the NSA have trouble cracking TOR, and at best, could only identify ingress and egress points?  As told by Team CYMRU.org, anyway."

Now you have a context for this post.  I responded to him as follows:


Thanks for your note.  It allows me to know that the comment did what I had hoped it would do, i.e., raise questions.

I was deliberately vague and cited a questionable authority.

My working hypothesis, the advice I give my clients, is that nation states, at least wealthy ones, can read any message that they want to, though rarely in near real time.  However, they cannot read every message that they want to.  Incidentally, that is why they store every cryptogram they see.  Decryption is expensive but storage is cheap.  The cost of decryption is falling, but not nearly as fast as that of storage.

When applied to Tor and anonymity, my assumption is similar.  I assume that nation states can identify the origin of any message that they want to, again, probably not in near real time.  However, they cannot identify the source of every message that they want to.   Again, that is why they require acres of storage.   Like breaking ciphers, breaking Tor is expensive.  However, given their resources and determination, it would be foolish to bet one’s life that they cannot do it.   They know the protocol better than anyone and they own some of the routers.

If you think about it, your question implies a point in time.  However, my guidance assumes that what they cannot do today, they will be able to do tomorrow.  Cheap storage buys them time.  It took them fifty years to crack Venona but they never gave up.

As with crypto, the resistance of Tor to nation states depends in part upon how much it is used.  The more they have to deal with, the less efficient they are.  Therefore, one wants to encourage its use while discouraging anyone from betting their life on it.

The net is that Tor is adequate to provide individual privacy.  It is probably adequate for most political discourse, at least in democratic states.  It becomes problematic when fomenting revolution or disclosing state secrets in authoritarian, or even wealthy but vindictive, countries.