Monday, September 15, 2014

Q & A About Apple Pay

"Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment."


In that context, what can one say about the security of Apple Pay?

We can say with confidence that Apple Pay is more secure than the widely used alternative payment mechanisms such as cash, mag-stripe cards, or contactless (RFID) debit or credit cards.  Its security is comparable to that of EMV ("chip") cards.


What is necessary to use Apple Pay?

One must have one or more credit card or other bank accounts to charge.  (By default, Apple Pay will use the account registered with the Apple Store.)  One must have use of an iPhone 6 or iPhone 6 Plus and Touch ID.  Finally, the merchant must have point-of-sale devices with contactless readers.  These readers work with both contactless (RFID) credit cards and mobile computers using Near Field Communication (NFC).


If one loses one's iPhone, can a finder use Apple Pay?

No.  Both possession of the iPhone and the right fingerprint are necessary to use Apple Pay.  Similarly, someone with merely a copy of your fingerprint cannot use it.  Of course, one would still want to remotely disable the iPhone.


If my password is disclosed, I can change it, but I cannot change my fingerprint.

True but there is no need.  Passwords work only because they are secret.  Fingerprints work because they are difficult to counterfeit; no need for secrecy.  In fact one leaves copies of one's fingerprints all around in the normal course of things.


One can do Apple Pay with Apple Watch.  Does it have a fingerprint reader?  

No. The Apple Watch uses a different but equally effective authentication scheme.  After one puts the Watch on, one enters a four-digit personal identification number (PIN).  This lasts until the sensors on the Watch indicate that it has been taken off.  Both of these authentication schemes are examples of two-factor authentication: iPhone and Touch ID, Watch and PIN.  When used with the Secure Element and the one-time digital token to resist replay, Apple Pay provides strong authentication.


What is "NFC?"  

NFC is a low power, low speed, extremely short range digital radio capability.  Its applications include retail payments.  Apple Pay uses NFC to communicate with the register or point-of-sale device.  While NFC is only one alternative communication method, payment systems that use it may be identified as "NFC" systems.  


Is NFC secure?

NFC makes no security claims.  All required security must be built into the application.  While it is low power and short range, NFC includes no other security properties, functions or features.  Apple Pay does not rely upon NFC for security.  The token value that Apple Pay uses NFC to send to the point of sale is a one-time value.  Unlike a credit card number, it is not vulnerable to disclosure or reuse.  
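The replay-resistance of a one-time token can be sketched in a few lines.  This is an illustrative model only; the TokenIssuer class and its methods are invented for the sketch and are not Apple's actual tokenization protocol, which involves the card networks and hardware-backed key storage.

```python
import secrets

class TokenIssuer:
    """Issues single-use payment tokens; a replayed token is rejected."""

    def __init__(self):
        self._outstanding = set()

    def issue(self):
        token = secrets.token_hex(16)   # unpredictable one-time value
        self._outstanding.add(token)
        return token

    def redeem(self, token):
        # A token validates exactly once; a recorded copy is worthless.
        if token in self._outstanding:
            self._outstanding.remove(token)
            return True
        return False

issuer = TokenIssuer()
t = issuer.issue()
print(issuer.redeem(t))   # True  - first presentation clears
print(issuer.redeem(t))   # False - the replayed copy is rejected
```

Contrast this with a credit card number, which validates every time it is presented; that is exactly the property that makes disclosure of a card number dangerous and disclosure of a spent token harmless.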


How do I know how much I am being charged?

As with credit card transactions, the amount that you will be charged is displayed on the register. As with credit card transactions, you may be asked to "accept" or confirm the amount at the register.  As with credit card transactions, the register will provide you with a paper receipt.


How do I know that the amount that appears on the register, that I confirm, and that is printed on the receipt is what is actually charged to my account?

By benign design and intent, systems will automatically ensure that the displayed amount and the charged amount are the same.  One can imagine a system designed to cheat but these will be very rare, easily detected, and quickly shut  down.  To some degree, this will depend on you.  

As with credit cards and checks, it falls to you to reconcile the charges to your account (perhaps using the paper receipt) against what you authorized.  (Some other "wallet" programs immediately confirm the location and amount to the mobile device by SMS.  It remains to be seen whether Apple Pay will do this, but it is likely.)   (Your bank or account holder may also offer transaction confirmation features.  For example, American Express offers its customers the option to have "card not present" transactions confirmed in real time by e-mail.  Incidentally, Apple Pay transactions look to American Express like "card not present" transactions.)


What if the charges to my account are not the same as I authorized?

Errors or fraud will be rare but you will continue to enjoy the same right to dispute charges that you have always had.


Lest you think that these questions are trivial, I heard each of them raised seriously by serious people on TV this week.

Monday, September 8, 2014

"Come Back with a Warrant."

Recently, in recognition of my routine contribution, the Electronic Frontier Foundation (EFF) sent me a little sheet of stickers highlighting their areas of interest and action.  Since advocacy of the Fourth Amendment to the US Constitution is one of my pursuits, I particularly liked the one that said "Come Back with a Warrant."  I inferred that, as good custodians of the private information of others, when asked for that information by government, our default response should be "Come back with a warrant."

As one who has had occasion to draft rules and regulations, if not law, I have always stood in awe of those who crafted our Constitution.  It is a model of brevity, clarity, and balance.  While tortured by events and progress, it has served us well.  Not only is the Fourth Amendment not an exception to this observation, it is an example of it.  Having recently thrown off the yoke of tyranny, the Authors were exquisitely sensitive to the potential for abuse of the power of the state.  In the Fourth Amendment the Authors sought to place a limit on the magisterial police powers of their awesome creature.

They stipulated that the people have a right to be secure in their "persons, houses, papers, and effects" from "searches and seizures."  In consideration of police necessity, the Authors qualified the searches and seizures that they were addressing as "unreasonable," leaving open the possibility of reasonable ones, specifically including those where the state had a "warrant" of a specific character.

In recent times, in response to threats real and imagined, the state (Congress, courts, and executive) has dramatically limited the right of the people to be secure in "persons, houses, papers, and effects."  Congress has passed laws, such as the USA Patriot Act, granting massive exceptions to the requirements for warrants in the name of "counter-terrorism."  Secret courts have permitted seizures so massive as to defy the wildest definitions of reasonable.  The Executive has engaged in secret programs of "warrantless surveillance" and officially lied to the American people about their existence.  They have systematically parsed every word in the Amendment, specifically including "unreasonable," "seizure," "papers," and even "their," so as to eviscerate the protection that the Amendment was intended to afford.

For example, it is hard to imagine a definition of seizure that does not include "taking from another under force of law."  However, for their own convenience this administration and the Departments of Defense and Justice secretly agreed among themselves to a definition under which such an act did not constitute seizure as long as one promised not to look at what one had "taken."  Having gotten a secret court to agree to this definition, the act was now not only "legal" but also, at least by this arguable definition, constitutional.  Such "weasel wording" might be laughable in another context.

So, where should we take our stand?  I propose that we stand with the EFF, that we adopt enterprise policy that, at least by default, we expect a warrant.  We should not wait until we are served with a National Security Letter, which may even say that we may not consult counsel, but we should proactively adopt and direct counsel to implement a policy that we expect a warrant and will resist deficient orders.

I am willing to grant the government access to almost anything for which they have a warrant.  Some even say I have given up.  However, even a capricious warrant offers us fundamental protections.  First, unlike some other orders, it is never unilateral.  Two people, usually with different motives, must cooperate before there can be a warrant.  An investigator must at least have the consent of a magistrate.

Second, a warrant requires probable cause, not merely "articulable suspicion."  It requires that an investigator not only present the court with "probable cause" but do so under oath, subject to penalties for perjury.  The investigator may not simply make an assertion.

Finally, while it may be broad, a warrant must be limited in its scope.  The Amendment requires that it specify the "place to be searched, and the persons or things to be seized."  As the custodians of the personal data of others, we should at least assert that the warrant should specify the data to be searched, the arguments to be used, and the functions that are responsive.  We should be prepared to challenge warrants that we believe to be overly broad, but even if we fail, the specifications will be a matter of record.

The Authors of the Amendment gave the state the, admittedly carefully limited, warrant as an exception to the right of the people to be secure from searches and seizures.  Even those who do not agree with me that warrants should be required have to concede that they are just not that hard to get.  Let's expect them to bring one.

Friday, August 22, 2014

Managing Insider Risk

"Outsiders damage the brand; insiders bring down the business."

"We use automated controls over insiders only to the extent that they are more efficient than management supervision; under no circumstances are they a substitute for supervision."

Management of insider risk is not for the indolent, ignorant, or incompetent.  It requires diligence, special knowledge, and skill.  Here are some ideas that you may find useful.

Focus controls on the early detection and correction of errors.  Not only will such controls also resist malice, but they reduce the temptation that results when employees make errors and recognize that the errors go undetected.

Focus controls on executives, officers and managers rather than clerks and tellers.  History suggests that we often focus on those likely to steal little and be caught early rather than those able to destroy the business but be caught late.

Ensure that supervisors have the necessary knowledge, skills, and abilities to perform and assess the duties of subordinates.  Historic losses from insider errors or malice have involved employees whose superiors did not understand what they did.

Structure duties and roles such that one person, simply performing his assigned duties, without doing anything heroic or exercising extraordinary judgement, acts as a control over others.  This arrangement detects errors and omissions, and discourages and detects malicious acts.

Separate origination from approval, record creation from maintenance, and custody of assets from the records about those assets.  These rules are as old as double-entry bookkeeping and originate with the same little monks.

Require the cooperation of two or more people to exercise extraordinary privileges or capabilities.  No one should have been able to do what Edward Snowden appears to have done.
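The two-person rule can be expressed as a simple gate in software.  This is a minimal sketch with an invented function name and approver names; real implementations would also authenticate the approvers and log the decision.

```python
def authorize(action, approvals, min_approvers=2):
    """Grant an extraordinary action only when at least `min_approvers`
    distinct people have independently approved it (two-person rule)."""
    distinct = set(approvals)
    if len(distinct) < min_approvers:
        raise PermissionError(
            f"{action!r} needs {min_approvers} distinct approvers, "
            f"got {len(distinct)}")
    return True

print(authorize("export key material", ["alice", "bob"]))  # True

try:
    # The same person approving twice does not satisfy the rule.
    authorize("export key material", ["alice", "alice"])
except PermissionError as e:
    print(e)
```

Note that the check is on distinct people, not on the count of approvals; a control that merely counts clicks can be defeated by one person clicking twice.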

Consider the rule of "least possible privilege" when granting access and authorizing capabilities.  Said another way, employees should have only those privileges and capabilities necessary to carry out their assignments. Guard against the accretion of privileges as employees move from role to role through their careers.

Use automatic alerts and alarms. Distribute them to those best able to recognize the need for, and with the authority to take, the necessary corrective action. Distribute them such that one person has to deal with only a few a day. Require that individuals make a record of the disposition of all alerts and alarms.

Instruct all employees to report all anomalies and variances from expectation to the attention of at least two people, including one manager and a member of the audit or security staff.  Be sure to treat all such reports and reporters with respect; dismissing them will discourage future reporting.

Measure and report on performance; changes in performance are suspicious.  However, "If the numbers are too good to be true, they are not true."   Massive frauds, including Barings Bank, Enron, and Equity Funding, all began with glowing revenue numbers.  Management fraud has resulted from attempts to keep beating earlier numbers.

Rotate employees in assignments and enforce mandatory vacations; continuity is often necessary to mask malicious activity.  Officers who come into the office when they are supposed to be on vacation should be viewed as suspicious rather than diligent.

Compensate employees in a manner that is consistent with the amount of economic discretion that they exercise.  Under paying is corrupting.

Use invoices, statements, confirmations and other communications to and from customers, suppliers, investors, and taxing authorities to control insider risk.  While these controls operate late, and may be seen by the media as relying upon chance, they are legitimate, effective, and efficient; management is entitled to rely upon them.  Automatic, i.e., not under the control of the originator, transaction confirmations sent by e-mail or SMS are both timely and cheap.

Say "please" and "thank you." With few exceptions, unhappy insiders believe that their contribution is not recognized or appreciated by management.

Revoke all access, privileges, and capabilities immediately upon termination or separation.  Of course, this requires that one keep track of what they are.


Monday, August 4, 2014

Defensive Ethical Hacking

In 2006 Eric McCarty pleaded guilty to a SQL injection attack into a database at the University of Southern California.  The prosecutor and the court rejected McCarty's defense that he was a "security consultant" just doing what such consultants do.  His defense counsel claimed that he had acted responsibly by only giving the records of seven people to a reporter.  By pleading guilty, McCarty avoided jail and served only six months house arrest.

Several years earlier, while working on a gap analysis at a major media conglomerate, I became aware of a penetration test by a competitor that ran amok.  It seems that after successfully penetrating file servers, the consultant arbitrarily extended the test to include an AS/400 on the client's network, triggering multiple alarms and involving the FBI.

These are only two examples of so-called "ethical" hacking that went awry.  Without addressing the issue of whether "ethical" is a matter of motive or behavior, I have always had a set of defensive rules that I have imposed upon myself, my clients, and my associates that are intended to, among other things, keep me out of courtrooms and jails.

The first of these rules is that I do not engage in covert or clandestine activities.  My client, including all his personnel, must know about and acknowledge, all the activities in which I am to engage.

I do not engage in fraud, deception, or other forms of social engineering, not even for money.  I already know that these attacks will work; they have worked throughout human history.  I do not need to embarrass the client or his people to demonstrate that I am a proficient liar.

I do not work without a contract or letter of agreement.  Such a letter is part of my authority to do what I do.  It also demonstrates that both the client and I understand the extent and limitations of that authority.

I do not work for free.  There is little better proof that I was engaged by the client to do what I did than his check.  McCarty had no letter of agreement, much less a check.  Out of respect for my professional colleagues, I do pro bono work only for bona fide non-profits.  I price my work at my normal rates and require that the beneficiary acknowledge my contribution with a receipt.

I do not work alone.  I prefer to work with the client's people; failing that, I work with my associates.  Not only are my collaborators potential witnesses for the defense, they act as an ethical check on my behavior.  One is far less likely to cross an ethical line with another watching.

I do not share the client's data with others not expressly authorized by the client to see it; not even with the authorities.  If the state wants my client's information it must get it from him, not me.  Short of torture, it will not get it from me.  (I do not contract or commit to resist torture; even if I knew my own capacity to resist it, I would not know how to price it.)

Not all my clients or even my associates like all of these rules all the time.  A client may think that disclosing all of my activities to his employees in advance defeats his purpose.  There are those in my profession who deceive client personnel for the purpose of discovering vulnerabilities or demonstrating naivete.  If the client wants that done, he should engage those professionals.  Some of my associates may feel that such activities are effective or that always working with others is inefficient.

I will not knowingly or willingly engage in any behavior, such that if I were caught in the act of that behavior it might embarrass or alarm me, my associates, the client, or the client's people.

These rules may increase my cost of service or even reduce my potential revenue.  However, they are both defensive and conservative.  They act early to help me avoid ethical conflicts and assist me late in resolving such ethical dilemmas as may arise in the  course of an engagement.

They have served me well.  They might have saved McCarty from conviction.  I commend them to you.

Sunday, August 3, 2014

Please do not say "Two Factor"

Thirty years ago I wrote a list for my staff to address what I thought was sloppy and problematic use of special language.  It was of the form "Please do not say _______ when you really mean _______."  I cannot even remember many of the entries, but one was "Please do not say 'privacy' when you really mean 'confidentiality.'" Another was "Do not say 'secure' when you mean 'protected.'"  While the distinctions may seem small, they are nonetheless useful.

In the spirit of that list, I would like to suggest that one should not say "two-factor," or "multi-factor" authentication when what one really intends is "strong authentication."  Strong Authentication is defined as "at least two kinds of evidence, at least one of which is resistant to replay."  Thus, all strong authentication is two-factor but not all two-factor authentication is strong.

For example, a password and a biometric is clearly two-factor but might not be strong.   It is more resistant to brute force attacks than a password alone but might be no stronger against a record and replay attack than the password alone. We are no longer seeing brute force attacks but credential replay attacks are a major problem.  If all one wants to do is resist brute force, adding bits to the password is likely to be more efficient than adding a biometric.

If one accepts that record and replay attacks are the greater problem, then one wants a second factor that resists replay, something like a one time password (OTP), whether token-based or sent out-of-band to a phone or mobile computer.
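The difference can be made concrete with a sketch.  A static password validates a recorded copy forever, while a counter-based one-time password rejects a replayed value.  The OTP here is a simplified HOTP-style construction for illustration only; the real scheme is specified in RFC 4226, and the server-side state handling is deliberately minimal.

```python
import hashlib
import hmac
import secrets

# Static password: anything recorded once replays forever.
password = "correct horse"

def check_password(submitted):
    return hmac.compare_digest(submitted, password)

recorded = password                      # attacker records one login...
print(check_password(recorded))          # True - ...and replays it at will

# Counter-based OTP (simplified HOTP-style): each value is good for
# exactly one counter step.
secret = secrets.token_bytes(32)

def hotp(counter):
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    return int.from_bytes(mac[:4], "big") % 10**6

server_counter = 0

def check_otp(value):
    global server_counter
    ok = value == hotp(server_counter)
    if ok:
        server_counter += 1              # this value can never validate again
    return ok

code = hotp(0)
print(check_otp(code))   # True  - fresh code accepted
print(check_otp(code))   # False - the same code, replayed, is rejected
```

The password check succeeds every time the recorded value is replayed; the OTP check succeeds once and then fails, which is precisely the property "resistant to replay" in the definition of strong authentication.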

The use of "two factor" enjoys so much currency that it suggests that any second form of evidence is the same as any other.  The irony is that RSA, the vendor of one of the original and most popular OTP tokens, is one of the sources of that currency.  However, when they spoke of two factors, the first factor was the OTP.  The second factor was a PIN, used to resist fraudulent use of a lost or stolen token.

One popular "second factor" with banks is challenge-response based upon shared secrets.  The secret is established at enrollment time.  One popular method is to ask the user to select a number of questions from a list and record his own answers to those questions.  Questions may be similar to "what was the name of your first pet, school, or playmate?"  "In what hospital or city were you born?"  "What were the names of your grandparents?"  "The mascot of your high school?"  Answers should be easy for the subject to remember but not obvious except perhaps to an intimate.  At authentication time one question is chosen at random.  Actually this method can be resistant to replay provided that the set of questions is large enough relative to how often they are used. 

One bank started using this method only for large transactions, those above a threshold value.  However, they figured if it was good for large transactions, wouldn't it be better for all?  They lowered the threshold to zero.  Because the size of the set of questions was not large enough for this kind of use, all the answers for some accounts were soon compromised.  
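The bank's mistake can be quantified.  Assuming challenges are drawn uniformly at random from the pool, the expected fraction of questions an eavesdropper has observed after k challenges is 1 - (1 - 1/n)^k for a pool of n questions.  The figures below come from that model, not from the incident itself.

```python
def expected_exposed_fraction(pool_size, observations):
    """Expected fraction of a challenge-question pool an eavesdropper
    has seen after observing `observations` random challenges."""
    return 1 - (1 - 1 / pool_size) ** observations

# A small pool used on every transaction is captured quickly; the same
# pool used only above a threshold survives far longer.
for n in (5, 50):
    for k in (10, 50, 200):
        frac = expected_exposed_fraction(n, k)
        print(f"pool={n:2d} after {k:3d} challenges: {frac:.0%} exposed")
```

With a pool of five questions, an eavesdropper who sees fifty challenges has, in expectation, captured essentially the whole pool; lowering the threshold to zero multiplied the number of observed challenges and spent the pool.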

The Verizon Data Breach Investigations Report (DBIR) demonstrates that the use of strong authentication would have resisted many of the breaches reported upon.  Because it is so powerful, we should be encouraging its use by all available means.  These means should include distinguishing between it and mere multi-factor authentication.


Good Security Practice for Programmers

This is one of a series of posts on "Good Data Processing Security Practices."  The context for the series can be found here.  The following practices and controls are for enterprise development programmers, the individuals who produce the computer programs on which enterprise managers wish to rely.  Like other posts in this series, this post suggests useful separations of duties to ensure quality and fix accountability.


An individual programmer should not both specify and write a procedure.

Should not both write and test a procedure.

Should not both create and maintain a procedure.

Should not name procedures that he writes.  (Program names are analogous to account numbers, which are normally assigned as part of the approval by management or a designee separate from the originator.)

Should not both write and execute a procedure (exception: data local to himself, as in testing or personal computing).

Should not both program and maintain the program library (exception: they do all maintenance to that library).

Programmers should have personal copies of specifications, data definitions, source code, test data, test results, load modules, and object modules.  All transfers between the programmer's personal libraries and project or production libraries should be controlled by someone else.

The above represents the ideal.  Because of limitations of scale, it may not be realizable in all installations.  However, under no circumstances should one person specify, write, test, name, maintain, and execute the same program.
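These prohibited pairings can be checked mechanically against a staffing plan.  A minimal sketch, with an illustrative conflict matrix and invented names; a real system would draw the assignments from the change-management records rather than a hand-built dictionary.

```python
# Pairs of duties that must not be held by the same person.
CONFLICTS = {
    ("specify", "write"),
    ("write", "test"),
    ("create", "maintain"),
    ("write", "name"),
    ("write", "execute"),
}

def violations(assignments):
    """assignments maps duty -> person.  Return the conflicting duty
    pairs that are held by a single person."""
    return [(a, b) for (a, b) in CONFLICTS
            if assignments.get(a) and assignments.get(a) == assignments.get(b)]

ok = {"specify": "ann", "write": "bob", "test": "cho", "name": "dee",
      "create": "bob", "maintain": "ann", "execute": "eve"}
bad = dict(ok, test="bob")        # bob both writes and tests

print(violations(ok))    # []
print(violations(bad))   # [('write', 'test')]
```

Such a check is cheap enough to run on every change, which makes the "ideal" above easier to approximate even in small installations.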

On Nation States and the Limits of Anonymity - Tor

As a general rule, society has a preference for accountability.  For this reason, governments discourage anonymity.  Among the exceptions to this rule is citizen communications in resistance to government.  In this context, governments in general, and police states in particular, abhor anonymity.

Tor (formerly TOR ("The Onion Router")) is a tool for providing anonymity in the Internet.  It uses thousands of contributed routers, communicating using nested encryption, along a randomly selected path, such that when the communication finally appears in the clear, it cannot be traced back to its origin.  It raises the general problem of attribution in the Internet to a whole new level.  Its uses range from hiding browsing activity from routine state surveillance to hiding criminal or revolutionary communications.  
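The nested encryption can be illustrated with a toy layered cipher.  The SHA-256 keystream below is a stand-in for demonstration only; real Tor uses AES in counter mode with keys negotiated per circuit, and this sketch omits routing, padding, and key exchange entirely.

```python
import hashlib

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream cipher (SHA-256 in counter mode).  Encryption
    and decryption are the same operation.  Illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

path_keys = [b"entry-key", b"middle-key", b"exit-key"]  # one key per relay

# The sender wraps the message in one layer per relay, innermost layer
# belonging to the last hop.
message = b"meet at dawn"
cell = message
for key in reversed(path_keys):
    cell = stream_xor(key, cell)

# Each relay peels exactly one layer.  Only the exit sees the plaintext,
# and no single relay learns both the origin and the content.
for key in path_keys:
    cell = stream_xor(key, cell)

print(cell)   # b'meet at dawn'
```

The anonymity property comes from this division of knowledge: the entry relay knows who you are but not what you said, the exit relay knows what was said but not who said it, and the middle relay knows neither.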

The following news item recently appeared:

 --Russian Government Seeking Technology to Break Tor Anonymity (July 25 & 28, 2014) 
The Russian government is offering a 3.9 million rubles (US $109,500) contract for a technology that can be used to identify Tor users. Tor was initially developed by the US Naval Research Laboratory and DARPA, but is now developed by The Tor Project, a non-profit organization. Tor is used by journalists and others who need to keep their identities hidden for their own safety; it is also used by criminals for the same purposes. The entrance fee for the competition is 195,000 rubles (US $5,500).


In my role as a member of the editorial board of SANS Newsbites, I made the observation that:

"In his most recent novel, Richard Clarke implied that NSA had targeted and broken TOR."

A reader responded in part:

"...more out of curiosity, didn’t the NSA have trouble cracking TOR, and at best, could only identify ingress and egress points?  As told by Team CYMRU.org, anyway."

Now you have a context for this post.  I responded to him as follows:


Thanks for your note.  It allows me to know that the comment did what I had hoped it would do, i.e., raise questions.

I was deliberately vague and cited a questionable authority.

My working hypothesis, the advice I give my clients, is that nation states, at least wealthy ones, can read any message that they want to, though rarely in near real time.  However, they cannot read every message that they want to.  Incidentally, that is why they store every cryptogram they see.  Decryption is expensive but storage is cheap.  The cost of decryption is falling but not nearly as fast as that of storage.

When applied to Tor and anonymity, my assumption is similar.  I assume that nation states can identify the origin of any message that they want to, again, probably not in near real time.  However, they cannot identify the source of every message that they want to.   Again, that is why they require acres of storage.   Like breaking ciphers, breaking Tor is expensive.  However, given their resources and determination, it would be foolish to bet one’s life that they cannot do it.   They know the protocol better than anyone and they own some of the routers.

If you think about it, your question implies a point in time.  However, my guidance assumes that what they cannot do today, they will be able to do tomorrow.  Cheap storage buys them time.  It took them fifty years to crack Venona but they never gave up.

As with crypto, the resistance of Tor to nation states depends in part upon how much it is used.  The more they have to deal with, the less efficient they are.  Therefore, one wants to encourage its use while discouraging anyone from betting their life on it.

The net is that Tor is adequate to provide individual privacy.  It is probably adequate for most political discourse, at least in democratic states.  It becomes problematic when fomenting revolution or disclosing state secrets in authoritarian, or even wealthy but vindictive, countries.