Korean researchers have developed a sophisticated acoustic measurement component for mobile devices that can be used for biometric authentication and other voice data processing applications.
The research team is based at the Korea Advanced Institute of Science and Technology (KAIST) and was led by Professor Keon Jae Lee. They have developed a flexible piezoelectric acoustic sensor inspired by the human ear, with what they describe in a statement as a “multi-resonant ultrathin piezoelectric membrane mimicking the basilar membrane of the human cochlea.” This approach, the researchers explain, was inspired by the human ear’s ability to detect distant voices.
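The basic principle can be pictured as a bank of resonators, each tuned to a different frequency band, so that a single incoming sound is split into several channels, much as different regions of the basilar membrane respond to different pitches. As a rough software analogy only (the channel count, band edges, and filter design below are illustrative assumptions, not a description of KAIST's hardware), a multi-channel decomposition might look like this in Python:

```python
# Illustrative sketch: a software analogue of a multi-resonant acoustic sensor,
# splitting one signal into frequency channels. All parameters are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

def cochlea_like_filterbank(signal, sample_rate, n_channels=8,
                            f_low=100.0, f_high=7000.0):
    """Split a 1-D signal into log-spaced band-pass channels."""
    edges = np.logspace(np.log10(f_low), np.log10(f_high), n_channels + 1)
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # One 4th-order Butterworth band-pass per "resonant" channel.
        sos = butter(4, [lo, hi], btype="bandpass", fs=sample_rate, output="sos")
        channels.append(sosfilt(sos, signal))
    return np.stack(channels)          # shape: (n_channels, n_samples)

# Example: one second of a synthetic tone mixing 440 Hz and 2 kHz at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
bands = cochlea_like_filterbank(audio, fs)
print(bands.shape)                     # (8, 16000)
```

In this analogy, each channel carries only part of the spectrum; a multi-resonant sensor provides that kind of richer, per-band signal directly in hardware rather than from a single microphone output.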
Previous acoustic sensors based on this approach were too large for integration into smartphones, but the researchers say they were able to develop a “mobile-sized” sensor through their use of ultra-thin piezoelectric membranes.
Having developed the technology, the researchers then “successfully demonstrated the miniaturized acoustic sensor mounted in commercial smartphones and AI speakers for machine learning-based biometric authentication and voice processing,” according to their announcement.
The researchers asserted that their solution demonstrated “highly accurate and far-distant speaker identification” using a small amount of data for machine learning. While they have not specified exactly how accurate their voice recognition technology was, they did say that its error rate was reduced by 56 percent after being trained on 150 datasets, and by 75 percent when trained on 2,800 datasets, in comparison to the performance of a MEMS condenser device.
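To make the “machine learning-based” part of the claim more concrete, a minimal, purely hypothetical speaker-identification sketch is shown below. The synthetic data, feature layout, and model choice are placeholders for illustration and say nothing about the team’s actual pipeline or their reported error-rate figures:

```python
# Hypothetical sketch of speaker identification on multi-channel acoustic
# features. The data here is randomly generated stand-in material, not
# recordings from the KAIST sensor or any real study.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_speakers, clips_per_speaker = 5, 40
n_features = 8 * 16   # e.g. 8 channels x 16 summary statistics (assumed layout)

# Fabricated per-clip feature vectors, one cluster per speaker.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(clips_per_speaker, n_features))
               for i in range(n_speakers)])
y = np.repeat(np.arange(n_speakers), clips_per_speaker)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# A simple classifier trained on the multi-channel features.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The intuition behind the researchers’ comparison is that richer per-channel input lets a classifier like this reach a given error rate with fewer training examples than a single-channel microphone signal would require.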
The research team has founded a startup, Fronics Inc., aimed at commercializing their technology. For his part, Professor Lee has high hopes for the technology’s prospects, gesturing to a secretive Alphabet project aimed at dramatically enhancing people’s hearing abilities through sensor-driven technology.
“Google has been targeting the ‘Wolverine Project’ on far-distant voice separation from multi-users for next-generation AI user interfaces,” he said. “I expect that our multi-channel resonant acoustic sensor with abundant voice information is the best fit for this application.”
–
(Originally posted on FindBiometrics)