A decade or so ago, I had an extended dialogue with David Clark, of the Clark-Wilson Model, about biometrics. He knew that the remedy for a compromised password was to change it. Since he knew that biometrics could not be changed, he could only understand how they worked for about fifteen minutes at a time, about the same length of time that I can understand the derivation of an RSA key-pair.
Unlike passwords, biometrics do not rely upon secrecy; they do not have to be changed simply because they are disclosed. Biometrics work because, and only to the extent that, they are difficult to counterfeit.
We have all heard about the attacker who picked up a latent fingerprint in gelatin and replayed it, or the image scanner that was fooled by a photo. Good biometric systems must be engineered to resist such attacks.
While the fundamental vulnerabilities of passwords are replay and disclosure, the fundamental vulnerabilities of biometrics are replay and counterfeiting. These vulnerabilities are limitations on the effectiveness of the mechanism, rather than fatal flaws. What we must ask of any implementation is how it resists replay and counterfeiting.
At a recent NY Infragard meeting, a discussion of biometrics illustrated that this confusion persists. At least one person seemed convinced that the secrecy of the stored reference had to be maintained in order to preserve the integrity of the system.
This is a slightly different kind of confusion from what we know about passwords, but the secrecy of the stored reference is no more essential to a biometric system than it is to a password system.
As with passwords, one cannot go from the stored reference to what one must enter. We solved the problem of disclosure of the password reference by encrypting it with a one-way transform at password choice time. At verification time we apply the same transform to the offered password and compare the encrypted versions. By definition of "one-way transform," it is not possible to go from the stored reference to the clear password.
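The scheme described above can be sketched in a few lines of Python. The salted PBKDF2 transform and the function names here are my own illustration, standing in for whatever one-way transform a given system actually uses:

```python
import hashlib
import hmac
import os

def make_reference(password: str) -> tuple[bytes, bytes]:
    """At password-choice time: store only a salted one-way transform."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, reference: bytes) -> bool:
    """At verification time: apply the same transform to the offered
    password and compare the transformed versions."""
    offered = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(offered, reference)

salt, ref = make_reference("correct horse")
assert verify("correct horse", salt, ref)       # right password accepted
assert not verify("wrong guess", salt, ref)     # wrong password rejected
```

Nothing in `ref` lets an attacker recover the clear password; disclosure of the stored reference alone does not break the system.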
We do not store an image of the face, the password, or a recording of the voice. Instead, we store a one-way, but reproducible, transform. To make sure that the transform is reproducible, we may collect multiple samples so that we can test for reproducibility.
However, as with passwords, biometrics are vulnerable to replay attacks. While we cannot discover the biometric from the reference, we might capture an offered instance and replay it over and over.
Like passwords, biometrics may be vulnerable to brute-force attacks, but unlike passwords, they are not vulnerable to exhaustive attack, if only because it is impossible to try every possibility. While a password reference can be stored in 8 or 16 bytes, a biometric reference may run to hundreds or low thousands of bytes. In an exhaustive attack against a password, each unsuccessful trial reduces the uncertainty about the correct answer; this is not true of a brute-force attack against a biometric.
For most purposes we use "brute force" and "exhaustive" as though they were synonymous, but they really are not. In a brute-force attack, we submit a sufficient number of trials to succeed in finding a (false) positive. An exhaustive attack is a special case of a brute-force attack in which we are trying to find one integer in a known set. The reference for a biometric is too large to be exhausted, but there are many samples that will fit.
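To put rough numbers on the size argument, take an 8-byte password reference and a purely illustrative 500-byte biometric reference; the spaces an exhaustive attack would have to cover differ absurdly:

```python
# Search-space sizes for an exhaustive attack (illustrative figures only).
password_space = 2 ** (8 * 8)      # every value of an 8-byte reference
biometric_space = 2 ** (500 * 8)   # every value of a 500-byte reference

print(f"password space:  about {password_space:.2e} values")
print(f"biometric space: a number with {len(str(biometric_space))} digits")
```

An 8-byte space is about 1.8 x 10^19 values, large but enumerable in principle; the 500-byte space is a number with more than a thousand digits, far beyond any conceivable enumeration.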
This introduces the issue of false positives. There is only one password that will satisfy the transform; it is at least one integer value away from those that do not satisfy it. We key in the integer. However, we "sample" a biometric; there will be many biometric samples that will fit. Depending upon the precision of our system, it might even be possible to dupe the system with a sample from the wrong person, a false positive. On the other hand, it is also possible for a given sample of a valid biometric to be rejected, a false negative.
Biometric systems can be tuned to achieve an arbitrary level of security; we are looking for a transform that minimizes both false positives and false negatives. Unfortunately, we reduce one at the expense of increasing the other. That is to say, the less likely it is for the system to permit a false positive, the more likely it is to generate a false reject. We tune the mechanism to achieve an acceptable ratio of one to the other for a particular application and environment.
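The tradeoff can be illustrated with toy match scores; the numbers below are invented, but the shape of the result is general. A matcher produces a similarity score, and moving the acceptance threshold trades false accepts against false rejects:

```python
# Toy match scores (0..1): higher means the sample looks more like the reference.
genuine = [0.91, 0.84, 0.88, 0.79, 0.95]   # samples from the true user
impostor = [0.35, 0.62, 0.55, 0.71, 0.40]  # samples from other people

def rates(threshold):
    """False-accept and false-reject rates at a given threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

for t in (0.5, 0.7, 0.8):
    far, frr = rates(t)
    print(f"threshold={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
```

Raising the threshold from 0.5 to 0.8 drives the false-accept rate from 0.6 down to 0.0, but at the cost of beginning to reject the true user; picking the operating point is exactly the tuning decision described above.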
My preferred biometrics are the visage, i.e., the face, and the voice. These share the advantage that they can be reconciled both by a computer and by non-expert human beings. Infants can recognize their parents, by face and voice, by the age of six months; it has survival value. Many of us share the experience of recognizing someone we have not seen in years from one or two words spoken over the telephone.
Until very recently, machines could not reconcile faces as fast as humans, indeed not fast enough for many applications. However, Google now has software that can not only authenticate an individual from an arbitrary image but identify them within seconds.
For most of the time that they have been in use, fingerprints could only be reconciled by an "expert," but we now have computers that can do it even better than the experts. In fact, recent studies using these computers have suggested that even the experts are all too fallible. Nonetheless, non-experts can independently verify a fingerprint identification.
Think about DNA. While it discriminates well, it contains so much information that it takes a long time to reconcile and the results cannot be independently verified by amateurs. To some extent we will always be dependent upon instruments and experts.
Since biometrics share with passwords a vulnerability to replay, a password plus a biometric does not qualify as "strong authentication." Therefore, the preferred role of biometrics is either as an identifier or as an additional form of evidence in a system of strong authentication, one in which another mechanism, e.g., a smart token, is used to resist replay.
Because there is only a vanishingly small chance that two live samples of a biometric will be bit-for-bit identical, any sample that exactly matches one previously submitted can be thrown out as a probable replay.
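That check is easy to sketch. The hash-set approach below is my own illustration, not a description of any particular product: keep a digest of every accepted sample and treat an exact repeat as a probable replay:

```python
import hashlib

seen_samples: set[str] = set()

def is_replay(sample_bytes: bytes) -> bool:
    """Return True if this exact sample was presented before.
    Two live biometric captures are essentially never bit-identical,
    so an exact match is strong evidence of a replayed capture."""
    digest = hashlib.sha256(sample_bytes).hexdigest()
    if digest in seen_samples:
        return True
    seen_samples.add(digest)
    return False

assert is_replay(b"scan-0001") is False  # first presentation: accept
assert is_replay(b"scan-0001") is True   # identical bits: likely replay
```

Storing only digests keeps the history compact and, consistent with the rest of this discussion, avoids retaining the samples themselves.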
Part of the special knowledge that identifies us as security professionals, and for which we are paid the big bucks, is precisely this knowledge of the use, strengths, and limitations of biometrics.