This article focuses on tools that have been proposed to leverage similarity scores to assess the probative value of forensic findings.
For several decades, legal and scientific scholars have argued that conclusions from forensic examinations should be supported by statistical data and reported within a probabilistic framework. Multiple models have been proposed to quantify and express the probative value of forensic evidence. Unfortunately, the use of statistics to perform inferences in forensic science adds a layer of complexity that most forensic scientists, court officers and lay individuals are not equipped to handle. Many applications of statistics to forensic science rely on ad-hoc strategies and are not scientifically sound. The opacity of the technical jargon used to describe probabilistic models and their results, and the complexity of the techniques involved, make it very difficult for the untrained user to separate the wheat from the chaff. The current article introduces a series of papers intended to help forensic scientists and lawyers recognize limitations and issues in tools proposed to interpret the results of forensic examinations. The family of tools reviewed here is called 'score-based likelihood ratios'. The article presents the fundamental concepts on which these tools are built and describes some specific members of this family. They are compared to the Bayes factor through an intuitive geometrical approach and through simulations. Finally, the article discusses their validation and their potential usefulness as a decision-making tool in forensic science. (publisher abstract modified)
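To make the central concept concrete, the following is a minimal sketch of how a score-based likelihood ratio is typically computed: the density of an observed similarity score is estimated under a same-source model and a different-source model, and their ratio is reported. The score distributions below are invented for illustration and are not taken from the article; real casework would use scores from validated reference databases.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical similarity scores (illustrative only): same-source
# comparisons tend to produce higher scores than different-source ones.
same_source_scores = rng.normal(loc=0.8, scale=0.10, size=1000)
diff_source_scores = rng.normal(loc=0.3, scale=0.15, size=1000)

# Estimate the two score densities with kernel density estimation.
f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

def score_based_lr(score: float) -> float:
    """Score-based likelihood ratio: density of the observed score
    under the same-source proposition divided by its density under
    the different-source proposition."""
    return float(f_same(score)[0] / f_diff(score)[0])

# A high score supports the same-source proposition (SLR > 1);
# a low score supports the different-source proposition (SLR < 1).
print(score_based_lr(0.75))
print(score_based_lr(0.35))
```

Unlike the Bayes factor, which conditions on the full evidence, this quantity conditions only on a one-dimensional summary score, which is one source of the limitations the article examines.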