In this paper the authors show, through extensive experiments, that pixelation and blurring of facial images provide very poor privacy protection while significantly distorting the data. They then introduce a novel framework for de-identifying facial images.
Advances in camera and computing hardware in recent years have made it increasingly simple to capture and store extensive amounts of video data. This, among other things, creates ample opportunities for the sharing of video sequences. To protect the privacy of subjects visible in the scene, automated methods to de-identify the images, particularly the face region, are necessary. Most privacy protection schemes currently used in practice rely on ad-hoc methods such as pixelation or blurring of the face. The authors' algorithm, on the other hand, combines a model-based face image parameterization with a formal privacy protection model. In experiments on two large-scale data sets they demonstrate privacy protection and preservation of data utility. (Publisher abstract provided)
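To make the ad-hoc baseline concrete, the pixelation the abstract refers to is typically implemented by replacing each tile of the face region with its average value. The following is a minimal grayscale sketch (the function name, tile size, and list-of-lists image representation are illustrative assumptions, not the authors' code); the paper's point is that such obfuscation degrades image quality while still leaving faces recognizable to automated matching.

```python
def pixelate(img, block=2):
    """Pixelate a grayscale image (list of rows of ints) by replacing
    each block x block tile with the tile's integer mean value."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy so the input is untouched
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            tile = [out[y][x] for y in ys for x in xs]
            mean = sum(tile) // len(tile)
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

# toy 4x4 "face" region: a simple intensity gradient
face = [[r * 4 + c for c in range(4)] for r in range(4)]
coarse = pixelate(face, block=2)
```

Blurring works analogously with a smoothing kernel instead of block averaging; in either case the transformation is fixed and low-dimensional, which is why the authors find it offers weak protection against recognition.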