Presentation attack detection in biometrics

This work addresses the problem of presentation attacks against iris recognition systems. Iris recognition systems attempt to recognize individuals based on their iris patterns, typically acquired in the near-infrared spectrum. However, an adversarial user can attempt to circumvent the system by presenting a deliberately modified or fake iris pattern. These are called presentation attacks (PAs). Examples of PAs include (1) presenting a printed image of another person's iris, (2) presenting a fake eye, (3) displaying an eye image on a Kindle screen, or (4) wearing cosmetic contact lenses to mask one's own iris pattern. To detect such attacks, we develop a deep convolutional neural network (CNN) that determines whether an input eye image corresponds to a presentation attack. By sampling patches from the images, the proposed CNN is able to extract discriminative features for effective presentation attack detection. Upon testing our algorithms on several image datasets of real and fake eyes, we observed True Detection Rates as high as 100% at a False Detection Rate of 0.2% in both intra-dataset and cross-dataset experiments.
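The patch-based pipeline described above can be illustrated with a minimal sketch: patches are sampled from the eye image, each patch is scored, and the per-patch scores are fused into a single decision. Note this is an assumption-laden illustration, not the papers' implementation: the patch size, stride, mean-score fusion, and the placeholder `score_patch` function (standing in for the trained CNN) are all illustrative choices.

```python
import numpy as np

def extract_patches(image, patch_size=32, stride=32):
    """Sample non-overlapping square patches from a 2D grayscale eye image.

    Returns an array of shape (num_patches, patch_size, patch_size).
    Patch size and stride here are illustrative, not the papers' values.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def score_patch(patch):
    """Placeholder for a per-patch CNN score in [0, 1] (1 = attack).

    In the actual system this would be the trained CNN's output; here we
    use a trivial intensity-based stand-in purely to make the sketch run.
    """
    return float(patch.mean() > 0.5)

def detect_presentation_attack(image, threshold=0.5):
    """Fuse per-patch scores by averaging and threshold the mean.

    Mean-score fusion and the threshold value are assumptions for this
    sketch; other fusion rules (e.g., majority vote) are equally plausible.
    """
    patches = extract_patches(image)
    scores = [score_patch(p) for p in patches]
    return float(np.mean(scores)) >= threshold
```

For example, a 96×128 image with 32×32 non-overlapping patches yields 3×4 = 12 patches, and the final decision is made on the average of their 12 scores rather than on any single patch.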

C. Chen and A. Ross, "Exploring the Use of IrisCodes for Presentation Attack Detection," Proc. of 9th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Los Angeles, USA), October 2018.

S. Hoffman, R. Sharma, and A. Ross, "Convolutional Neural Networks for Iris Presentation Attack Detection: Toward Cross-Dataset and Cross-Sensor Generalization," Proc. of IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), (Salt Lake City, USA), June 2018.

C. Chen and A. Ross, "A Multi-Task Convolutional Neural Network for Joint Iris Detection and Presentation Attack Detection," Proc. of IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW), (Lake Tahoe, USA), March 2018.