This thesis proposes solutions for three causes of brittleness in forensic evaluation models that compare evidence found at a crime scene with evidence found on a suspect.
This thesis describes the design, implementation, and evaluation of CLIFF, a forensic evaluation tool for measurements of trace evidence. The author identifies three causes of brittleness in forensic evaluation models: (1) a small error in data collection can lead to a spurious outcome from a forensic model; (2) the model depends on statistical assumptions, for example, that the distributions of refractive indices of glass collected at a crime scene or from a suspect follow a normal distribution; and (3) the model requires parameters measured in surveys to calculate the frequency of occurrence of trace evidence in a population, a value used in models that follow the Bayesian approach.

The author's goal is to present solutions for these three causes of brittleness and to provide one method for reducing brittleness. CLIFF avoids all three causes; it also quantifies and reduces brittleness. The author introduces a novel approach to quantifying brittleness and applies prototype learning to reduce brittleness in CLIFF. On a dataset of infrared spectra of the clear-coat layer from a range of cars, the performance analysis showed strong results, with nearly 100% of the validation set matched to the correct target. Prototype learning was applied successfully, reducing brittleness while keeping results on the validation sets statistically indistinguishable.
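As an illustration of the second cause, the sketch below (not from the thesis; the measurements are hypothetical) shows how the normality assumption behind a parametric comparison of glass refractive indices might be checked before trusting a model built on it:

```python
# Minimal sketch (not from the thesis): checking the normality assumption
# behind parametric comparisons of glass refractive index measurements.
# The data below are hypothetical, for illustration only.
import numpy as np
from scipy import stats

# Hypothetical refractive index measurements from crime-scene glass fragments.
scene_ri = np.array([1.5184, 1.5186, 1.5185, 1.5183, 1.5187, 1.5184])

# Shapiro-Wilk test: a low p-value is evidence against normality,
# suggesting a model that assumes it may be brittle for this sample.
statistic, p_value = stats.shapiro(scene_ri)
print(f"Shapiro-Wilk W = {statistic:.4f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Normality assumption questionable; a parametric model may be brittle here.")
else:
    print("No evidence against normality at the 5% level.")
```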
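The Bayesian approach referenced in the third cause is commonly expressed as a likelihood ratio; a standard textbook formulation, not quoted from the thesis, is

\[
\mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)}
\]

where E is the evidence (for example, matching refractive indices), H_p is the hypothesis that the recovered and control fragments share a source, and H_d is the hypothesis that they do not. Under H_d, the denominator is typically approximated by the survey-derived frequency of occurrence of the matching characteristics in the relevant population, which is the parameter the author flags as a source of brittleness.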
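Prototype learning admits many variants; one minimal form is a nearest-class-mean classifier. The following Python sketch (illustrative only, on synthetic spectra; it is not CLIFF's implementation) shows the idea:

```python
# Minimal sketch of prototype learning as a nearest-prototype classifier.
# Illustrative only: synthetic "spectra" stand in for real infrared data,
# and this is not CLIFF's actual implementation.
import numpy as np

rng = np.random.default_rng(0)

def fit_prototypes(X, y):
    """Compute one prototype (the class mean) per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(prototypes, X):
    """Assign each sample to the class of its nearest prototype."""
    labels = list(prototypes)
    # Distance from every sample to every prototype.
    dists = np.stack(
        [np.linalg.norm(X - prototypes[l], axis=1) for l in labels], axis=1
    )
    return np.array(labels)[dists.argmin(axis=1)]

# Synthetic data: 3 "car classes", each with a characteristic spectrum plus noise.
n_features, n_per_class = 50, 20
class_means = rng.normal(size=(3, n_features))
X = np.vstack(
    [m + 0.1 * rng.normal(size=(n_per_class, n_features)) for m in class_means]
)
y = np.repeat(np.arange(3), n_per_class)

prototypes = fit_prototypes(X, y)
accuracy = (predict(prototypes, X) == y).mean()
print(f"Training accuracy: {accuracy:.2%}")
```

Collapsing many stored exemplars into a few prototypes is one plausible way such a model becomes less sensitive to individual measurement errors, consistent with the brittleness reduction reported above.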