Kleinberg, B; Verschuere, B; (2021) How humans impair automated deception detection performance. Acta Psychologica, 213, Article 103250. 10.1016/j.actpsy.2020.103250.
Abstract
Background: Deception detection is a prevalent problem for security practitioners. With a need for more large-scale approaches, automated methods using machine learning have gained traction. However, detection performance still comes with considerable error rates. Findings from different domains suggest that hybrid human-machine integrations could offer a viable path in detection tasks.

Method: We collected a corpus of truthful and deceptive answers about participants' autobiographical intentions (n = 1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful or deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition).

Results: The data suggest that in neither of the hybrid conditions did human judgment add a meaningful contribution. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back to chance level. The hybrid-adjust condition did not improve deception detection performance. The decision-making strategies of humans suggest that the truth bias - the tendency to assume the other is telling the truth - could explain the detrimental effect.

Conclusions: The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system. All data are available at https://osf.io/45z7e/.
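The hybrid-adjust condition described above can be read as a simple post-hoc fusion rule: the machine learning classifier emits a credibility score, and the human judge may shift that score, but only within a fixed boundary around the model's output. The sketch below illustrates that idea only; it is not the authors' implementation, and the 0-100 credibility scale, the `adjust_bound` parameter, and the 50-point decision threshold are illustrative assumptions.

```python
# Minimal sketch of a hybrid-adjust decision rule (illustrative; not the
# study's actual pipeline). Assumptions: credibility scores on a 0-100
# scale, a symmetric adjustment boundary, and a 50-point decision cutoff.

def hybrid_adjust(model_score: float,
                  human_score: float,
                  adjust_bound: float = 10.0) -> float:
    """Clamp the human judge's credibility rating to lie within
    +/- adjust_bound of the machine learning score."""
    low = max(0.0, model_score - adjust_bound)
    high = min(100.0, model_score + adjust_bound)
    return min(max(human_score, low), high)

def classify(score: float, threshold: float = 50.0) -> str:
    """Map a credibility score to a binary truthful/deceptive label."""
    return "truthful" if score >= threshold else "deceptive"

if __name__ == "__main__":
    model_score = 62.0   # classifier leans towards "truthful"
    human_score = 35.0   # judge believes the statement is deceptive
    fused = hybrid_adjust(model_score, human_score, adjust_bound=10.0)
    print(fused)             # 52.0: the human adjustment is capped at the boundary
    print(classify(fused))   # "truthful": the bounded input cannot flip the label
    # In the hybrid-overrule condition the judge's label would simply replace
    # the model's, which in the study pushed accuracy back toward chance.
```

Under such a rule, a bounded adjustment limits how far a truth-biased human judgment can drag the classifier's output, whereas a full overrule lets the human decision dominate entirely; the study's results contrast these two forms of human involvement.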
Type: | Article |
---|---|
Title: | How humans impair automated deception detection performance |
Open access status: | An open access version is available from UCL Discovery |
DOI: | 10.1016/j.actpsy.2020.103250 |
Publisher version: | https://doi.org/10.1016/j.actpsy.2020.103250 |
Language: | English |
Additional information: | Copyright © 2020 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). |
Keywords: | Deception detection, Machine learning, Decision-making, Truth bias, Deceptive intentions |
UCL classification: | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Security and Crime Science |
URI: | https://discovery-pp.ucl.ac.uk/id/eprint/10120129 |