Researchers at New York University have developed a neural network system that they say can deliver a kind of master key for fingerprint authentication.
As The Guardian reports, the research team’s “DeepMasterPrints” can spoof roughly one in five fingerprints in an authentication system that otherwise has a False Acceptance Rate of one in a thousand. That’s because the DeepMasterPrints are composed of highly common fingerprint features, and are designed to fool the many fingerprint identification systems that rely on matching only partial fingerprints.
The researchers compare this means of spoofing to a dictionary attack, a common method of cracking passwords in which an automated system runs through a list of common words until it succeeds.
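To make the analogy concrete, here is a minimal Python sketch of that dictionary-style approach applied to fingerprints. It is illustration only: match_score is a hypothetical stand-in for a real partial-print matcher, and the acceptance threshold, candidate list, and enrollment data are all assumptions, not the researchers’ setup.

```python
# Dictionary-attack analogy, illustration only. `match_score` is a
# hypothetical placeholder for a real partial-fingerprint matcher;
# the threshold and toy data below are assumptions.
import random

random.seed(0)
THRESHOLD = 0.95  # assumed acceptance threshold for the matcher

def match_score(candidate, template):
    """Hypothetical matcher: similarity score in [0, 1)."""
    return random.random()  # placeholder for a real comparison

enrolled = [f"user_{i}" for i in range(1000)]         # toy enrolled templates
dictionary = [f"master_print_{i}" for i in range(5)]  # candidate "master keys"

# A user is spoofed if any candidate print clears the matcher's threshold.
spoofed = sum(
    any(match_score(c, t) >= THRESHOLD for c in dictionary)
    for t in enrolled
)
print(f"fraction of enrolled users spoofed: {spoofed / len(enrolled):.3f}")
```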
The technology appears to be the product of research undertaken last year in a joint effort between New York University’s Tandon School of Engineering and Michigan State University’s College of Engineering, which explored the security vulnerabilities of partial fingerprint matching systems. Four of the researchers behind DeepMasterPrints hail from NYU; a fifth, Arun Ross, is affiliated with Michigan State University, which has become an important hub of biometrics research. The system was developed using a generative adversarial network, a machine learning architecture in which two deep neural networks are pitted against each other: a generator produces synthetic fingerprint images while a discriminator judges how realistic they are, allowing the spoofing system to refine its output over time until it yields the highly effective DeepMasterPrints.
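For readers curious about the mechanics, the sketch below shows a minimal, generic GAN training loop in PyTorch. It illustrates the adversarial setup described above and is not the DeepMasterPrints implementation: the toy data, network sizes, and hyperparameters are assumptions made for the example.

```python
# Minimal generic GAN loop in PyTorch, illustration only; not the
# researchers' code. Toy 16x16 "fingerprint" patches, network sizes,
# and hyperparameters are all assumptions.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 16 * 16  # assumed latent size and patch size

generator = nn.Sequential(         # maps random noise to a fake patch
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(     # outputs a real-vs-fake logit
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.rand(512, IMG_DIM) * 2 - 1  # stand-in for real print patches

for step in range(200):
    # Train the discriminator to separate real patches from generated ones.
    real = real_data[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, LATENT_DIM)).detach()
    loss_d = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator to make the discriminator call its output real.
    fake = generator(torch.randn(32, LATENT_DIM))
    loss_g = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```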
While the system has not yet been applied to any nefarious ends in the real world, it points to real security vulnerabilities in fingerprint biometrics, and it underscores the importance of innovations in multimodal authentication and liveness detection, which could thwart this kind of spoofing if it is ever attempted in an attack.
Source: The Guardian