With the emergence of new applications centered on the sharing of image data, questions arise concerning the privacy of the people visible in the scene.
In most of these applications, knowledge of the identity of people in the image is not required. This makes the case for image de-identification: the removal of identifying information from images prior to sharing the data. Privacy protection methods are well established for field-structured data; however, work on images is still limited. In this chapter, the authors review previously proposed naïve and formal face de-identification methods. They then describe a novel framework for the de-identification of face images using multi-factor models, which unify linear, bilinear, and quadratic data models. In experiments on a large expression-variant face database, they show that the new algorithm is able to protect privacy while preserving data utility. The new model extends directly to image sequences, which is demonstrated on examples from a medical face database. (Publisher abstract provided)
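The chapter contrasts naïve de-identification methods with formal, model-based ones. As a point of reference, a typical naïve method is pixelation, which replaces each block of a face image with its average intensity. The sketch below is illustrative only and is not the chapter's multi-factor algorithm; the function name, block size, and toy image are assumptions for the example.

```python
import numpy as np

def pixelate(face: np.ndarray, block: int = 8) -> np.ndarray:
    """Naive de-identification: replace each block x block patch
    of a grayscale face image with its mean intensity.

    Illustrative baseline only, not the chapter's multi-factor model.
    """
    h, w = face.shape
    out = face.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block]
            patch[...] = patch.mean()  # flatten detail within the block
    return out.astype(face.dtype)

# Toy 16x16 "face" with a simple intensity gradient.
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
pix = pixelate(img, block=8)
```

Naïve methods like this degrade the whole image uniformly, which is why they tend to trade away data utility (e.g., facial expression) along with identity; the multi-factor framework described in the chapter is designed to remove identity while retaining such utility.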