The authors present an approach for combining multiple noisy, low-density 3D face models obtained from uncalibrated video into a higher-resolution 3D model.
The approach first generates ten 3D face models (containing a few hundred vertices each) of each subject using 136 frames of video data in which the subject's face moves within approximately 15 degrees of frontal. By aligning, resampling, and merging these models, the authors produce a new, improved 3D face model containing over 50,000 points. An ICP face matcher employing the entire face achieved a 75% rank-one recognition rate, which falls within the range of performance documented for whole-face 3D matchers [2] that use more sophisticated laser scanners for data acquisition. The simplicity of the hardware requirements reduces cost and complexity, and may enable the use of "other people's video" for 3D face modeling and recognition.
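The align-resample-merge step and the ICP matcher described above can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal example built on the Open3D library's general-purpose point-to-point ICP, assuming each low-resolution face model is available as a point cloud. The correspondence threshold, voxel size, and function names are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the authors' code): Open3D point-to-point ICP is
# used both to fuse several low-resolution face point clouds into one denser
# model and to score a probe model against a gallery model.
import copy
import open3d as o3d


def align_and_merge(models, max_corr_dist=5.0, voxel_size=0.5):
    """Register each low-resolution face point cloud to the first one with
    ICP, pool the aligned points, and resample them to an even density.
    The distance threshold and voxel size are placeholder values."""
    reference = models[0]
    merged = copy.deepcopy(reference)
    for cloud in models[1:]:
        probe = copy.deepcopy(cloud)
        reg = o3d.pipelines.registration.registration_icp(
            probe, reference, max_corr_dist,
            estimation_method=o3d.pipelines.registration
                .TransformationEstimationPointToPoint())
        probe.transform(reg.transformation)  # move into the reference frame
        merged += probe                      # concatenate the point sets
    # Resampling via voxel downsampling evens out the pooled point density.
    return merged.voxel_down_sample(voxel_size)


def icp_match_score(probe, gallery, max_corr_dist=5.0):
    """Whole-face ICP matching: lower residual RMSE means a better match."""
    reg = o3d.pipelines.registration.registration_icp(
        probe, gallery, max_corr_dist,
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return reg.inlier_rmse
```

Under these assumptions, rank-one recognition would reduce to computing `icp_match_score` between a merged probe model and each gallery model and reporting the identity with the smallest residual.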