Monday, October 29, 2012
Why Fingerprints Aren’t the Proof We Thought They Were
Fingerprint matching is a vital investigative tool. But despite its legendary aura of infallibility, courtroom claims of fingerprints’ uniqueness are slowly receding.
September 20, 2012 • By Sue Russell
http://www.psmag.com/legal-affairs/why-fingerprints-arent-proof-47079/
Scientist Nancy Knight documented snowflakes in 1988 while studying cirrus clouds for the National Center for Atmospheric Research. During a Wisconsin snowstorm, she found two identical sets of snow crystals – identical under a microscope, at least – giving the lie to the old belief that no two snowflakes are alike. That aura of uniqueness also surrounds the arches, loops, and whorls at the tips of our fingers, and to this day most fingerprint examiners remain steadfast that no two fingerprints are exactly alike.
“Fingerprint examiners typically testify in the language of absolute certainty,” professor Jennifer Mnookin of the University of California, Los Angeles, has written. But like many other claims made for forensic science, the assertion that fingerprints are unique lacks a solid scientific basis and is now viewed with new caution.
“The language of certainty that examiners are forced to use hides a great deal of uncertainty,” as the U.K.’s Lord Justice Leveson put it when addressing the Forensic Science Society.
Or as Penn State Dickinson School of Law professor David Kaye observed in a 2010 news release, “Fingerprint examiners and other forensic experts often testify with ‘100% confidence’ or to a ‘scientific certainty’ that a defendant is the source of a latent fingerprint or that a bullet came from a particular gun. It is time for criminalists to provide more scientifically defensible – and legally palatable – testimony.”
Meanwhile, “It’s a match!” – the term so beloved of TV shows – is being phased out of the courtroom lexicon, right along with the notion of definitive fingerprint matches that rule out everyone else on the planet.
Righting What’s Wrong in Criminal Justice
Wrongful convictions stem from the belated entrance of scientific rigor into the field of forensics, systemic problems, and the ubiquitous ‘human factor.’ In the coming weeks, a series of stories by crime author Sue Russell looks at why convictions go wrong, at the common reluctance to rectify error, and at innovations to better safeguard justice.
The word “match” certainly troubles former NYPD street cop Nick Petraco, a veteran hair and fiber expert. Petraco, assistant professor at John Jay College of Criminal Justice at the City University of New York and a forensic consultant to the NYPD, thinks it is inherently misleading because the public is so accustomed to interpreting anything called a match – paint colors or fabric swatches, for example – to mean identical. And concerns go beyond fingerprints.
“Basically it’s a word that most people associate with ‘identical,’ meaning that it’s the exact same and that there’s no other alternative,” he says. “So if you say in relationship to a hair examination ‘the hair matched,’ to them that means that it’s from that guy’s head or from that guy’s body. But there’s no scientific basis for that, in reality, especially in cases involving things like hairs and fibers and paint chips.”
Petraco favors words or descriptions that convey that two pieces of evidence are consistent with one another, or concordant. “Consistent means that they have the same properties and they could be from that location…but not necessarily,” he says.
Short of the impossible – comparing the prints of the entire world’s population, past and present – we can never know or say for sure if fingerprints are unique. More to the point, for the purposes of linking a fingerprint to a suspect or crime scene, the goal of uniqueness – or of “individualization,” as identifying something to a unique, specific source is known – may not much matter.
A 2009 National Academy of Sciences report on the shortcomings of forensic science recommended more research focused on probabilities – figuring out the statistical likelihood that a fingerprint could be pronounced a match erroneously.
Probabilities – as reflected in the astronomical statistical odds that DNA could belong to a party other than the defendant (remember O.J.?) – have been a mainstay of presenting that science as evidence to jurors.
A 2011 UCLA Law Review article, “The Need for a Research Culture in the Forensic Sciences,” whose 13 authors include Mnookin and Kaye, addressed this, raising questions such as: “How frequently might a portion of two fingerprints – or striation marks on bullets, or toolmarks, or handwriting specimens – share any given degree of similarity even if they derive from different sources?”
Meanwhile, since no scientific proof exists to “individualize” the results of fingerprint matching to the point where all other possible sources can be excluded, and since the language of certainty has the potential to mislead jurors, a new vocabulary can’t hurt. Many experts would rather see judges make sure that jurors understand what they are really being told: that there are many similarities between a crime scene latent print and a suspect’s print, and no explainable differences. That more subtle explanation could go a long way toward demythologizing fingerprint evidence for jurors.
So could presenting the statistical odds that a latent fingerprint (a smudged, partial, or degraded print found at a crime scene) does indeed belong to a defendant – if solid research is produced and eventually becomes acceptable to courts. For now, that is possible only in the limited universe of an Agatha Christie mystery in which, say, 12 potential suspects are confined to a moving train, not in the world at large.
But new research underway is taking on the fingerprint match probability problem. And preliminary results published in February in Significance, the magazine of the American Statistical Association and the UK’s Royal Statistical Society, give a taste of what may one day be a new fingerprint evidence reality. The researchers’ report announced the creation of a statistical model that will allow fingerprint evidence to be quantified so it can be accorded appropriate weight in courtrooms.
Cedric Neumann, Pennsylvania State University assistant professor of forensic science and statistics, is its lead author. “It is unthinkable that such valuable evidence should not be reported, effectively hidden from courts on a regular basis,” he said in a statement. “Such is the importance of this wealth of data, we have devised a reliable statistical model to enable the courts to evaluate fingerprint evidence within a framework similar to that which underpins DNA evidence.”
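To make that idea concrete, here is a minimal sketch of the likelihood-ratio reasoning such a framework rests on. The function and every number in it are illustrative assumptions for this article, not figures or code from Neumann’s model, which relies on far more sophisticated statistics.

```python
# Minimal likelihood-ratio sketch (hypothetical numbers, not from Neumann's model).
# LR = P(observed degree of similarity | the suspect left the print) /
#      P(observed degree of similarity | someone else left the print)

def likelihood_ratio(p_if_same_source: float, p_if_different_source: float) -> float:
    """Weight of the evidence, expressed as a likelihood ratio."""
    return p_if_same_source / p_if_different_source

# Hypothetical values: a partial latent print shows a level of agreement an examiner
# would expect 80% of the time if the suspect made it, but that a randomly chosen
# person's print would show only once in 100,000 comparisons.
lr = likelihood_ratio(0.80, 1e-5)
print(f"Likelihood ratio: {lr:,.0f}")  # ~80,000: strong support, but not certainty
```

Framed this way, jurors would hear how much more probable the observed agreement is under one explanation than under the other, rather than a bare claim of identity – essentially the way DNA evidence is already reported.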
Neumann is not suggesting anything close to overnight change here. The goals are long term and much testing remains to be done, both of the model itself and, if that passes muster, of the legal system’s reaction to it. Courtroom battles surrounded DNA in its infancy and are certain to converge on any attempt to introduce a statistical approach to fingerprint evidence.
“It remains to be seen how future legal battles play out,” Neumann and his fellow researchers wrote, “but we see models such as this one as a powerful platform for change.” The full study is due to be published later this year.
Meanwhile, the FBI’s fingerprint database, the Integrated Automated Fingerprint Identification System, grows by thousands daily, with more than 70 million subjects’ prints in its criminal master file alone. Some experts, like Mnookin and cognitive neuroscientist Itiel Dror, ask if the database’s very size may make accidental matches more likely.
As Dror said in 2009, “It enables you to look at millions of prints very quickly, make quite good comparisons; it enables you to resolve cold cases and is very good in many ways. However, that same powerful technology that helps solve crimes has also increased the likelihood of finding very similar fingerprints by accidental coincidence.”
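The arithmetic behind that concern is simple, as the rough sketch below shows. The per-comparison probability is an invented placeholder, not a measured error rate, and the database figure simply echoes the size of the criminal master file cited above.

```python
# Rough sketch of how database size inflates the chance of a coincidental near-match.
# The per-comparison probability is a made-up placeholder, not a measured figure.

def chance_of_coincidental_hit(per_comparison_probability: float, database_size: int) -> float:
    """Probability that at least one unrelated print in the database
    looks deceptively similar to the latent print being searched."""
    return 1 - (1 - per_comparison_probability) ** database_size

p = 1e-7          # hypothetical chance that any single unrelated print appears to "match"
n = 70_000_000    # roughly the size of the criminal master file mentioned above

print(f"Chance of at least one look-alike: {chance_of_coincidental_hit(p, n):.1%}")  # about 99.9%
```

The true rates are unknowable without the kind of research the 2009 report called for, but the shape of the calculation explains why a cold hit from an enormous database is weaker evidence than it sounds.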
In February, the National Institute of Standards and Technology and the Department of Justice’s National Institute of Justice issued a report – three years in the making – on improving fingerprint analysis. After examining all manner of factors that can affect the process, the 34-member Expert Working Group on Human Factors in Latent Print Analysis offered 34 recommendations, including urging management at forensic science provider facilities “to foster a culture in which it is understood that some human error is inevitable and that openness about errors leads to improvements in practice.”
In February 2009, the day after the release of the National Academy of Sciences’ sweeping report on forensics, Robert J. Garrett, then-president of the International Association for Identification, also sounded a cautionary note. “It is suggested,” he wrote, “that members not assert 100% infallibility (zero error rate) when addressing the reliability of fingerprint comparisons.”
Researchers Mark Page, Jane Taylor, and Matt Blenkin, in their March 2011 paper “Uniqueness in the Forensic Identification Sciences – Fact or Fiction?,” contend that “the forensic examiner should therefore not use terms such as ‘unique’ or even ‘individualization’ as they are not the issues the forensic practitioners should concern themselves with; these are issues for the judge and jury. To claim that a fingerprint or bitemark has identified someone ‘to the exclusion of all others,’ or has been ‘individualized’ to only this person, usurps the jury’s role as the rightful assessor of the evidence...”
“It’s a match!” claims have also been notoriously derailed in a couple of high-profile cases. In March, a report finally vindicated Scottish police officer Shirley McKie, who was accused of leaving a fingerprint in the home of a female murder victim. McKie canvassed neighbors in the case but always swore that she never entered the victim’s home – the crime scene.
In the most notorious mistaken print case of all, Oregon resident Brandon Mayfield was erroneously fingered by several top FBI analysts as the Madrid train bomber. Fortunately for Mayfield, Spanish authorities tied the prints to their correct owner, Ouhnane Daoud. Ultimately, the FBI was compelled to admit that its bias and “circular reasoning” had led it to target an innocent man.
But the relevance and importance of probability research become clear when you consider that such similar prints were located in the first place only because of the FBI database’s power to search through tens of millions of images. Reason enough for a vocabulary rethink.