Award Information
Description of original award (Fiscal Year 2019, $216,518)
The widespread use of handheld smartphones and other recording devices has made it increasingly likely that user-generated recordings (UGRs) will be presented as evidence in a criminal investigation. Increasingly, there may be multiple concurrent recordings of the same incident from different positions around the scene. When multiple UGRs are available, the set of recordings may provide information useful to the forensic investigation, such as spatial information about the location of perpetrators, the position and orientation of firearms, and, if speech utterances are present, a potential means to increase intelligibility. However, UGRs generally start and stop at different times, differ in technical format specifications, and seldom have sufficiently reliable time stamp information for exact time synchronization of each recording. Thus, the proposed research will (a) study the analytical and practical constraints, then develop a reliable automatic means to combine and synchronize multiple UGRs, and (b) consider the means to authenticate UGRs obtained from bystanders and private sources outside the customary chain of custody, to reduce the likelihood of forged audio evidence.
This proposal encompasses two phases, each with an Applied Research Goal.
Phase 1: We research and implement the means for scientific and reliable comparison and synchronization of audio recordings of the same incident captured concurrently by multiple unsynchronized recording devices at the scene. This work involves digital audio signal processing techniques that are emerging from the audio engineering literature. The major deliverable from Phase 1 is a proposed methodology for processing multiple recordings to identify concurrency and the most likely synchronization point, then performing desired processing for sound source localization and reduction of incoherent background noise.
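The abstract does not name a specific alignment method; a common starting point for identifying the most likely synchronization point between two overlapping recordings is cross-correlation. The sketch below is illustrative only: the function name, toy signals, and brute-force search are assumptions, and a practical tool would first resample and filter the audio and use FFT-based correlation for efficiency.

```python
# Illustrative sketch: estimating the time offset between two overlapping
# recordings by brute-force cross-correlation over candidate lags.
# Names and signals are hypothetical, not part of the proposed methodology.

def best_offset(ref, other, max_lag):
    """Return the lag (in samples) of `other` relative to `ref` that
    maximizes the cross-correlation score."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, x in enumerate(ref):
            j = i + lag
            if 0 <= j < len(other):
                score += x * other[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy "recordings" of the same impulse; device B started 3 samples later,
# so the event appears 3 samples earlier in B's timeline.
a = [0, 0, 0, 1.0, 0.5, 0.25, 0, 0]
b = [1.0, 0.5, 0.25, 0, 0, 0, 0, 0]
print(best_offset(a, b, 5))  # → -3
```

The recovered lag would then be used to place each UGR on a common timeline before any sound source localization or noise-reduction processing.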
Phase 2: We design and implement in software a signal integrity monitor that compares the multiple user-generated audio recordings and identifies instances of inconsistency in amplitude and temporal pattern that could indicate an altered or otherwise inauthentic recording. The major deliverable from Phase 2 is a proposed methodology for performing inter-recording consistency verification, identifying one or more suspected edits made to an original recording by segment deletion, segment insertion, or additive mixing of forged material.
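One simple form the inter-recording consistency check described above could take, sketched here under stated assumptions (the frame length, similarity measure, and threshold are all hypothetical choices, not the project's actual design): after two recordings are time-aligned, compare them frame by frame and flag frames whose normalized similarity falls below a threshold, which could indicate a deleted, inserted, or overdubbed segment.

```python
# Illustrative sketch of inter-recording consistency verification on two
# already-aligned recordings. All names and parameters are hypothetical.

def frame_similarity(x, y):
    """Normalized correlation of two equal-length frames (0 if silent)."""
    num = sum(a * b for a, b in zip(x, y))
    den = (sum(a * a for a in x) * sum(b * b for b in y)) ** 0.5
    return num / den if den else 0.0

def flag_inconsistent_frames(rec1, rec2, frame_len=4, threshold=0.9):
    """Return indices of frames where the aligned recordings disagree,
    i.e. candidate locations of an edit or forged insertion."""
    flags = []
    n_frames = min(len(rec1), len(rec2)) // frame_len
    for k in range(n_frames):
        s, e = k * frame_len, (k + 1) * frame_len
        if frame_similarity(rec1[s:e], rec2[s:e]) < threshold:
            flags.append(k)
    return flags

# Toy example: identical signals except frame 1 of rec2 was "replaced".
rec1 = [0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0, 0.1, 0.2]
rec2 = [0.1, 0.2, 0.3, 0.4, -0.5, 0.9, -0.2, 0.1, 0.1, 0.0, 0.1, 0.2]
print(flag_inconsistent_frames(rec1, rec2))  # → [1]
```

A real monitor would of course need to distinguish genuine edits from benign inter-device differences (gain, acoustics, codec artifacts), which is part of what the proposed research would characterize.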
Results will be published in scientific journals and trade publications, and the software and methodology will be made available for public dissemination.
Note: This project contains a research and/or development component, as defined in applicable law, and complies with Part 200 Uniform Requirements - 2 CFR 200.210(a)(14).
CA/NCF