Police, prosecutors, judges and juries all have a vested interest in determining when someone is lying. The same can be said of diplomats, entrepreneurs and investors. And, you know, parents.
We have all seen polygraph machines on TV, though their results are not admissible in court. They are quite unreliable: roughly ten percent of liars pass and twenty percent of truth tellers fail. Either way, the test misidentifies a substantial share of the people it examines. Remember those numbers.
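Note that those two error rates apply to different populations, liars and truth tellers, so they cannot simply be added into one overall failure rate; combining them requires an assumption about how many subjects are actually lying. A quick sketch in Python, with a purely illustrative 50/50 split:

```python
# Polygraph error rates quoted above
false_negative = 0.10  # liars who pass
false_positive = 0.20  # truth tellers who fail

# Assumption for illustration only: half the subjects are lying.
p_liar = 0.5

# The overall misclassification rate is a base-rate-weighted
# average of the two error rates, not their sum.
overall_error = p_liar * false_negative + (1 - p_liar) * false_positive
print(f"{overall_error:.0%}")  # prints "15%"
```

Under this (assumed) even split, the polygraph is wrong about 15 percent of the time; the exact figure shifts with the true proportion of liars in the pool.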
Now Anne Eisenberg reports for the New York Times about software that listens for lies. No clumsy machine need apply; this emerging field studies what scientists call “emotional speech,” the language of deception, friendship and anger.
Before we dip our toes too deeply in these waters, let us remember that lying is an ingrained part of human life. In a landmark study from the late ’90s, researchers found that college students reported telling two lies a day, in roughly one out of three of their social interactions. Community members told one lie a day, in roughly one out of five social interactions. The range of lies told is noteworthy. There were:
- Lies about opinions.
- Lies about achievements.
- Lies about events, people or possessions.
- Lies told to elicit a particular emotional response.
- Lies told to protect the liars’ interests.
- Outright lies, exaggerations and subtle lies designed to mislead.
Let us also acknowledge that not all “lies” are “bad.” We accept it as gospel that one should probably not answer in the affirmative when asked if a particular garment makes a person “look fat.” Better a neutral “it looks okay” in such situations. Chances are that’s enough to elicit a garment switch.
There is a third, more profound problem, however. The human brain is a “fabrication machine.” An experiment by neuroscientist David Eagleman illustrates the point. Participants were asked to choose between two randomly chosen cards, labeled A and B. They had no way of knowing which was the better choice, though they were rewarded somewhere between a penny and a dollar for the correct one. “What the participants didn’t know was that the reward in each round was based on a formula that incorporated the history of their previous forty choices — far too difficult for the brain to detect and analyze.”
That, however, did not stop the participants from coming up with explanations.
“[T]heir conscious minds, unable to assign the task to a well-oiled zombie system, desperately sought a narrative. The participants weren’t lying; they were giving the best explanation they could… Minds seek patterns. In a term introduced by science writer Michael Shermer, they are driven toward ‘patternicity’ — the attempt to find structure in meaningless data.” [David Eagleman, Incognito: The Secret Lives of the Brain]
Of course, we have to acknowledge a distinction here. Everyday lies, and the brain's in-filling of missing narratives, are rarely the serious deceptions designed to hide crimes and other harmful acts. That said, these new techniques, while improvements, fall far short of the accuracy necessary to make them reliable instruments in mission-critical situations. The algorithms developed by the computational linguist Dan Jurafsky, for example, can identify a liar 70 percent of the time, compared to human assessments of the same data in the 57 percent range.
Sounds impressive, until you realize his algorithms are wrong 30 percent of the time. Ouch. That puts them in the same unreliable territory as the polygraph.
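A 70-percent-accurate detector looks even worse once base rates enter the picture: in most screening settings liars are a minority, so many of the people it flags are truth tellers. A hedged sketch of the arithmetic, assuming (purely for illustration; the source does not break the figure down this way) a 10 percent base rate of lying and 70 percent accuracy on both liars and truth tellers:

```python
# Assumptions for illustration only: the 70% accuracy figure applies
# symmetrically to both classes, and 10% of subjects are lying.
accuracy = 0.70
p_liar = 0.10

# Probability a subject gets flagged as a liar:
# true positives plus false positives.
p_flagged = p_liar * accuracy + (1 - p_liar) * (1 - accuracy)

# Bayes' rule: probability a flagged subject is actually lying.
p_liar_given_flag = (p_liar * accuracy) / p_flagged
print(f"{p_liar_given_flag:.0%}")  # prints "21%"
```

Under these assumptions, only about one flagged subject in five is actually lying; the other four are truth tellers caught by the 30 percent error rate.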
There is a lesson here, somewhere.
In the software world, for example, a predictive feature that is correct 70 percent of the time is not good enough; users will stop trusting the feature very quickly, because they care more about the 30 percent error rate. In the U.S. criminal justice system, with its presumption of innocence, getting it wrong 30 percent of the time is, one would hope, unacceptable. In a 2010 Stanford Law Review article, “The Substance of False Confessions,” Brandon L. Garrett notes that “[p]ostconviction DNA testing has now exonerated over 250 convicts, more than forty of whom falsely confessed to rapes and murders.” It is hard to believe that a new technology with a 30 percent failure rate is going to make things… ah… better.
The research will go on. The technology will improve. Triangulation with other evidence will, as always, prove decisive. But if it sounds too good to be true? It is.