Researchers at the University of Eastern Finland are warning that voice recognition systems can be vulnerable to skilled voice impersonators.
Their findings are outlined in a new study, “Acoustical and perceptual study of voice disguise by age modification in speaker verification,” published by Elsevier. Rather than attacking automated speaker verification systems with technology, the researchers tested whether human mimicry could fool them: two professional impersonators attempted to imitate eight Finnish public figures.
In a summary of the study, the researchers note that these impersonators “were able to fool automatic systems and listeners in mimicking some speakers.” Meanwhile, additional participants who deliberately disguised their voices further degraded the automated systems’ performance, particularly when they altered their voices to sound like children.
While this might raise alarm bells given the rise of consumer devices designed to recognize their owners’ voices, there is at least one important caveat: the research was conducted not on specific voice assistants such as Apple’s Siri or the Google Assistant, but on the statistical speaker models commonly used in automated speaker verification systems. In other words, the weaknesses were found in generic speaker verification technology rather than in any shipping product. Nevertheless, the research may go some way toward explaining why companies like Apple and Google don’t yet rely on voice recognition for user authentication, even though their devices can recognize users’ voices.
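For readers curious what a “statistical speaker model” looks like in practice, below is a minimal, purely illustrative sketch of one classic approach (GMM-UBM scoring), in which a trial utterance is scored against both an enrolled speaker’s model and a background model. This is not the study’s actual system; the features, model sizes, and threshold here are placeholder assumptions.

```python
# Illustrative GMM-UBM speaker verification sketch (not the study's system).
# Feature values are random placeholders standing in for acoustic features such as MFCCs.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder feature matrices: rows = audio frames, columns = coefficients.
background_features = rng.normal(0.0, 1.0, size=(2000, 13))  # pooled from many speakers
target_features = rng.normal(0.5, 1.0, size=(500, 13))       # enrolled target speaker
trial_features = rng.normal(0.5, 1.0, size=(200, 13))         # utterance to verify

# Universal background model (UBM) and target speaker model.
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(background_features)

target_model = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
target_model.fit(target_features)

# Verification score: average log-likelihood of the trial utterance under the
# target model minus its log-likelihood under the background model.
llr = target_model.score(trial_features) - ubm.score(trial_features)

THRESHOLD = 0.0  # hypothetical operating point
decision = "accept" if llr > THRESHOLD else "reject"
print(f"log-likelihood ratio: {llr:.3f} -> {decision}")
```

The point of the sketch is simply that verification reduces to a statistical score compared against a threshold, which is why a sufficiently convincing impersonation, or a deliberate disguise that shifts the score, can push the system toward the wrong decision.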
Sources: University of Eastern Finland, ScienceDirect