What the Hell is Science?

For a couple of years, I debated regularly with Jeff Salyards about science. Jeff was Director of the Defense Forensic Science Center and I was the Latent Print Instructor in the Training Division. Jeff and I met for lunch about once a month for these debates. Jeff took the position that if numbers cannot be used to describe a conclusion, then whatever that conclusion is, it is not science. I took the position that the physical matching disciplines (fingerprints, firearms, toolmarks, footwear impressions, fracture matching, etc.) cannot be defined by numbers but are nonetheless good science.

In this post and over the next few weeks, I will discuss the topic of science. How do we define it? Shane Turnidge posted on LinkedIn his observation that in Canada, many latent print examiners do not understand what they do, nor can they explain how they reach a conclusion. I replied with similar observations of forensic scientists, especially fingerprint examiners, in the US. A large part of the problem may be that many young forensic scientists simply follow Thomas Kuhn’s description of normal science without giving thought to understanding the processes they are following.

But even for those who try to understand science, the problem is that there is no “one size fits all” version of science, no universal definition that includes all true sciences and excludes all pseudosciences, or even excludes all blatantly unscientific pursuits. The question is one of demarcation. How do you explain what science is while, at the same time, excluding all unscientific categories?

Many of us paraphrase Supreme Court Justice Potter Stewart, who in 1964 famously said that he could not define obscenity, “but I know it when I see it.” And so we say of science: I cannot give a concise definition of good science, but I know it when I see it.

I have been dismayed in recent years during interviews of latent print examiner applicants with master’s degrees in forensic science. I will ask them to define science. Every applicant I have asked has fumbled the answer. Some can only fall back on “I know it when I see it.” A small minority mention hypothesis testing. But when I ask in follow-up whether they can cite the influence on fingerprints of Francis Galton, Edmond Locard, Karl Popper, Carl Hempel, Thomas Kuhn, David Ashbaugh, and Thomas Bayes, all I get are blank looks. Most often, the applicants have heard the name Galton and maybe Ashbaugh, but they have no recognition, much less knowledge, when it comes to the others. A few can cite Ashbaugh’s ACE-V to explain conclusions. I have to wonder what the universities are teaching in their MFS degree programs. Never heard of Locard, Popper, Kuhn, Bayes, or the others?

Sir Francis Galton wrote the first book on fingerprints in 1892. He used a crude statistical model: placing a grid over a fingerprint and comparing the patterns of grid blocks in which features occurred. Crude as it was, it was our first statistical model. Twelve matching grid blocks were sufficient to conclude that two prints came from the same finger.

Edmond Locard stated his Tripartite Rule in 1914. Twelve or more “points” establish positive identification. Eight to twelve points may be sufficient, depending on their number and clarity along with other finer features. Fewer than eight points might be presumptive, but not conclusive. Locard also commented on the power of sweat pores to contribute to a conclusion of identification. See Dusty Clark’s page: http://www.latent-prints.com/Locard.htm
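Locard’s rule is simple enough to express as a decision function. Here is a minimal sketch in Python; the function name, the boolean simplification of “clarity plus finer features,” and the treatment of exactly twelve points as conclusive are my own assumptions, not Locard’s.

```python
def locard_tripartite_rule(points: int, clear_with_finer_detail: bool = False) -> str:
    """A sketch of Locard's 1914 Tripartite Rule as a decision function.

    points: number of matching ridge characteristics ("points").
    clear_with_finer_detail: my boolean simplification of Locard's
        qualitative criteria (clarity plus finer features such as pores).
    """
    if points >= 12:
        return "positive identification"
    if points >= 8:
        # Eight to twelve points may be sufficient, depending on their
        # clarity and on finer features.
        return "identification" if clear_with_finer_detail else "inconclusive"
    # Fewer than eight points: presumptive at best, never conclusive.
    return "presumptive only"

print(locard_tripartite_rule(10, clear_with_finer_detail=True))  # identification
```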

Karl Popper was the most influential scientific thinker of the Twentieth Century. Popper wrote on the demarcation between science and pseudoscience. He defined science as providing a theory or conclusion that is falsifiable: if a scientist gets it wrong, other scientists can prove it is wrong. That is not true of pseudoscience. See https://www.simplypsychology.org/Karl-Popper.html

Carl Hempel proposed the “Covering Law Model” in 1942 to define science. It could be stated as the use of validated natural laws to provide explanations or make predictions.

Thomas Kuhn – Kuhn was a physicist and historian of science. In “The Structure of Scientific Revolutions” (considered by some scientists to be the most important book of the Twentieth Century), Kuhn defined “normal science” as work done by a group of practitioners using consensus dogma to solve small puzzles. A fingerprint comparison using ACE-V is a perfect example of normal science, ACE-V being the dogma examiners follow. Kuhn then defined a “Scientific Revolution” as a situation in which the old method fails to answer new questions and a crisis arises. New practitioners come up with a new method. The old guys don’t accept it and there is tension between the old guys and the new practitioners until enough of the old guys retire or die and the new people outnumber them. Then the science changes, new dogmas are accepted, and the revolution or “Paradigm Shift” is complete.

David Ashbaugh – When Ashbaugh described ACE-V in his landmark paper “Ridgeology” (published in JFI in 1991), he articulated the dogma for fingerprint examination as defined by Kuhn. This set the standard for a couple of decades as the way to explain latent print comparisons and identifications.

The transition we are undergoing from a positive identification made with ACE-V to calculating a likelihood ratio using Bayes’ Theorem is a classic Kuhnian paradigm shift.

Paul Feyerabend argued in the mid-Twentieth Century that in science there are no universal methodological rules, no single definition of science, no one scientific method that applies across the board to all sciences. Feyerabend said progress occurs in science when scientists break the rules. He argued that each field of science should be defined in its own terms and not according to the pattern of other sciences. To try to impose the rules of one science on all the other sciences hampers progress.

One of the issues we debated for years in SWGFAST was the question of what the scientific method of fingerprint identification is. David Ashbaugh gave us the term ACE to describe the examination method and ACE-V to describe the laboratory process, but neither can legitimately be called a scientific method.

In trying to devise a coherent, logical model for fingerprint comparison that could be described as scientific, I came up with the following five-step process in 2000 (a rough sketch in code follows the list):

1.    Examination of the unknown (latent print) to determine whether there is sufficient detail to be identifiable, to determine the pattern type and finger or palm area, and to select a target to search for in the known impressions.
2.    Arbitrarily define the hypothesis as identification and a “counter hypothesis” as exclusion (not the same as a null hypothesis, which would include “inconclusive” for whatever reason along with exclusion). We must prove one and disprove the other.
3.    Experimentation, which is the side-by-side comparison, starting with the target and expanding outward to locate more features.
4.    Formation of a tentative conclusion, which is the very first instant that the examiner experiences the “Ah Ha!” feeling that it is an identification.
5.    Continued comparison past the “Ah Ha!” moment. No examiner stops at that first “Ah Ha!”; we always look for a few more points. We have all had prints for which we said, “If I could just find one more point!” We had the “Ah Ha!” moment, but we couldn’t get past it and reported the latent as inconclusive or not suitable. This final step establishes a relationship of “reliable predictability” between the unknown and the known: once you are confident that finding a point in one of the prints will allow you to reliably predict that you will find the same point in the other print, you become comfortable with the identification. If you cannot establish that relationship, you do not claim an identification.
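For readers who think in code, here is a minimal sketch of the five steps as a workflow, assuming prints are represented as sets of feature labels. The thresholds and the set arithmetic are invented stand-ins for examiner judgment, not working comparison software.

```python
def compare(latent: set[str], known: set[str],
            suitability_min: int = 5, ident_min: int = 8) -> str:
    """A toy outline of the five-step process; all thresholds are invented."""
    # Step 1: examine the unknown -- is there sufficient detail to proceed?
    if len(latent) < suitability_min:
        return "not suitable"
    # Step 2: hypotheses -- identification vs. exclusion; we must prove
    # one and disprove the other (reflected in the returns below).
    # Step 3: experiment -- side-by-side comparison, expanding outward
    # from a target group of features.
    if latent - known:  # features claimed in the latent but absent in the known
        return "exclusion"
    corresponding = latent & known
    # Step 4: tentative conclusion -- the first "Ah Ha!" moment.
    # Step 5: keep comparing until finding a point in one print reliably
    # predicts finding the same point in the other.
    if len(corresponding) >= ident_min:
        return "identification"
    return "inconclusive"

latent = {f"point_{i}" for i in range(9)}
print(compare(latent, latent | {"point_99"}))  # identification
```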

This fits with Carl Hempel’s description of science, i.e., the use of validated natural laws to provide explanations or make predictions. The “validated natural laws” we use are 1) uniqueness, arising from the formation of friction ridge skin on the fetus, and 2) persistency of features over time, ensured by the structure of friction ridge skin.

A paradigm shift occurs when the old way of doing things fails to answer a new question and a crisis arises in a field of science. New people find a way to answer the new question. The old timers resist the change, but eventually, if the new method works better, the change takes place.

In latent prints, the new question is that of error rates and probabilities. New scientists are experimenting with Bayes’ Theorem to derive a likelihood ratio for a friction ridge skin (FRS) ident. Leading the charge are scientists like Christophe Champod and Henry Swofford.
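For readers unfamiliar with the framework, the core of Bayes’ Theorem in its odds form fits in a few lines. The numbers below are invented purely for illustration; real values would come from validated feature-frequency models.

```python
# Odds form of Bayes' Theorem: posterior odds = LR x prior odds, where
# LR = P(evidence | same source) / P(evidence | different source).

def likelihood_ratio(p_e_given_same: float, p_e_given_diff: float) -> float:
    return p_e_given_same / p_e_given_diff

# Invented numbers for illustration only:
lr = likelihood_ratio(p_e_given_same=0.95, p_e_given_diff=0.00001)
print(f"LR = {lr:,.0f}")  # the agreement is ~95,000 times more probable
                          # if the prints share a source

# The examiner reports the LR; combining it with prior odds is the fact
# finder's job. With prior odds of 1 to 1,000:
posterior_odds = lr * (1 / 1000)
print(f"posterior odds = {posterior_odds:.0f} to 1")
```

Note that under this framework the examiner never reports “identification,” only the strength of the evidence.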

Software programs are being developed to calculate likelihood ratios, but there are problems. First, many people still accept the principle of biological uniqueness, and if a thing is truly unique, then you cannot calculate a probability of it being repeated. But true uniqueness is not the real problem; our inability to distinguish uniqueness is. Another problem is that current software programs only consider Level 2 (L2) details. Level 3 (L3) is completely ignored. An experienced latent print examiner’s brain uses L3 in the comparison, even if only subconsciously.

PiAnoS4 and FRStat are two computer software programs that calculate LRs for FRS idents. These programs calculate a distortion factor between a latent print and the known print to which it has been identified. This distortion factor, along with the number of L2 details, yields a likelihood ratio for the ident.
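To make the idea concrete, here is a purely hypothetical toy calculation. It is NOT the actual PiAnoS or FRStat algorithm; both constants and the functional form are invented just to show the general shape of turning a feature count and a distortion measure into an LR.

```python
import math

def toy_lr(n_l2_details: int, distortion: float) -> float:
    """A made-up toy, NOT the PiAnoS or FRStat model: each matching L2
    detail multiplies the LR by an invented rarity factor, discounted
    as measured distortion grows."""
    per_detail_factor = 20.0          # invented rarity factor per detail
    discount = math.exp(-distortion)  # invented distortion penalty
    return per_detail_factor ** (n_l2_details * discount)

print(f"{toy_lr(n_l2_details=10, distortion=0.3):.3g}")  # ~4.3e+09
```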

Moving from a positive identification to a likelihood ratio is the shift we are undergoing now. The algorithms will continue to be refined as we get closer to an accurate model. An acceptable model to calculate a likelihood ratio will eventually emerge and become the new rule under which latent print examination is conducted. This is the way science progresses.

Max Planck famously wrote in “Scientific Autobiography and Other Papers” (1949): “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

The paradigm shift will be complete when a suitable, validated model emerges and those who embrace likelihood ratios outnumber the old guys.

For more on science, the history of science, and the philosophy of science, I recommend two lecture series:

  1. The Great Courses, “Philosophy of Science” by Jeffrey L. Kasser, Ph.D. (see https://www.thegreatcourses.com/courses/philosophy-of-science)
  2. YouTube lecture series “An Introduction to the History and Philosophy of Science” by SisyphusRedeemed (see https://www.youtube.com/playlist?list=PL67E2553770A6E39E)