What is the “Error Rate” of fingerprint identification?

“What is the error rate of latent print comparison?”

The following explanation is the way I handle that question in court.

I begin by asking the attorney whether, by “error rate,” they mean the number of erroneous identifications compared to the number of correct identifications made over an extended period of time. If we are talking about error in the sense of convicting the wrong person, we should not lump in the rate of erroneous exclusions or the rate of erroneous decisions about whether a latent was suitable for comparison in the first place.

Once we have pinned down that the “error rate” we are talking about is exclusively that of erroneous identifications, I then analyze the question, “What is the error rate of fingerprint identification?” The use of the term “error rate” implies that there is, in fact, one error rate that applies across all latent print comparisons and all examiners.

That inference is incorrect. In all fields of human endeavor, there are some practitioners who are more competent than others, whether we’re talking about medical doctors, sports team coaches, race car drivers, or latent print examiners. Among latent print examiners, there are some who have great ability and are conservative in reaching and expressing a conclusion. Those examiners may go a whole career without making an erroneous identification. There are other, less talented or less conservative examiners, who make multiple erroneous identifications in their career.

Likewise, not all latent prints are equally susceptible to being identified erroneously. Some latent prints have a large amount of detail and are relatively free of distortion. Other latent prints have a minimal number of features and are smeared and badly distorted. Obviously, the clear, undistorted print with an abundance of features is less apt to be identified erroneously than the badly smeared print with a minimum number of features.

Then there is the question of close AFIS non-matches. Finding a close non-match in a comparison with a suspect identified by an investigator through normal investigative means is much less likely than finding a close non-match with an AFIS search of a large database. Especially when you are searching a distorted latent print with a minimum number of features, the danger of an erroneous identification is much greater. Just ask Brandon Mayfield.

Stephen Meagher testified in the Byron Mitchell case, the first Daubert challenge to fingerprints, that the error rate is 1 in 11 million. That figure has been soundly debunked.

Glenn Langenburg and Kasey Wertheim did a study in the early 2000s, across several comparison classes they were teaching, and came up with about 1 erroneous identification for every 3,500 correct identifications. But that study was strongly biased in favor of producing erroneous identifications.

In the FBI/Noblis “Black Box” study published in 2011, the researchers found that about 1 in 1,000 of the identifications claimed by examiners was in error.

Some sources cite anecdotal cases – Brandon Mayfield, Shirley McKie, and Lana Canen, to name a few. But anecdotal cases, by themselves, cannot establish an error rate.

So what is the error rate? It depends on the skill of the examiner, the quality of the latent print, and the quantity of matching features present. No single error rate can be applied across the board. The highest error rate in the studies mentioned was 1 in 1,000, which is probably higher than in real casework. The lowest, the figure presented by Meagher, is probably far too low. If there is a single error rate, it lies somewhere between 1 in 1,000 and 1 in 11,000,000.
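To make those figures easier to compare, here is a minimal Python sketch that puts the three rates cited above on a common scale (percentage and errors per million identifications). The rates are taken directly from the studies and testimony mentioned in this article; the script adds no new data, and the labels are just shorthand for illustration.

```python
# Error rates cited above, expressed as fractions.
# Labels are informal shorthand, not official study titles.
rates = {
    "Langenburg/Wertheim training study (~1 in 3,500)": 1 / 3_500,
    "FBI/Noblis Black Box study (~1 in 1,000)": 1 / 1_000,
    "Meagher testimony (1 in 11,000,000)": 1 / 11_000_000,
}

for label, rate in rates.items():
    # Show each rate as a percentage and as errors per million identifications.
    print(f"{label}: {rate:.4%}, or about {rate * 1_000_000:.2f} per million IDs")
```

Seen this way, the highest and lowest published figures differ by a factor of 11,000, which is the point of the paragraph above: the studies do not converge on a single rate.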

But the most important thing to understand is that no overall “error rate” can ever be assumed to apply to a specific identification. It depends on the skill of the examiner and the quality of the latent print.