An international research team has uncovered insights that manufacturers can use to improve the security and privacy of voice assistants. The team investigated the “fake wake” phenomenon across eight voice assistants operating in English and Chinese.
Voice assistants actively listen to their environment for their “wake words,” such as “Alexa” or “OK Google.” Fake wake occurs when a voice assistant falsely detects its wake word in other speech, for example in background TV programs or conversations; the triggering phrases are known as “fuzzy words.” Attackers can exploit these misrecognized words to activate voice assistants without the user noticing.
The team, led by Prof. Wenyuan Xu and Dr. Yanjiao Chen, was the first to generate fuzzy words automatically instead of collecting them from audio material. The fuzzy words were derived from a known initial wake word, such as “Alexa.” The researchers had no access to the model that recognizes wake words or to the vocabulary on which the voice assistant is built. They also investigated why incorrect wake words are accepted.
They first identified the features that most often led to fuzzy words being accepted and found that the phonetic similarity of a word is the determining factor. The examined voice assistants could even be activated by false words that differed significantly from the real wake words, whereas surrounding noise, the volume of the words, and the gender of the speaker had very little impact.
Genetic algorithms combined with machine learning allowed the team to generate more than 960 custom fuzzy words in English and Chinese that activated the wake-word detectors of the voice assistants. This demonstrates the seriousness of the fake wake phenomenon while also providing deeper insight into its causes.
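The article does not give the team’s exact algorithm, but the general idea of a genetic search against a black-box wake-word detector can be sketched as follows. Everything here is illustrative: `detector_score` is a hypothetical stand-in for querying a real detector (simulated with simple character overlap against a target wake word), and the population sizes and mutation rate are arbitrary.

```python
import random

# Hypothetical stand-in for the black-box wake-word detector: in a real
# attack this would be a query to the device; here we simulate its
# confidence with character overlap against the target wake word.
TARGET = "alexa"

def detector_score(word: str) -> float:
    """Fraction of positions matching the target wake word."""
    matches = sum(a == b for a, b in zip(word, TARGET))
    return matches / max(len(word), len(TARGET))

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def mutate(word: str, rate: float = 0.3) -> str:
    # Randomly replace characters to explore nearby candidates.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in word)

def crossover(a: str, b: str) -> str:
    # Splice two parent candidates at a random cut point.
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve_fuzzy_words(pop_size: int = 30, generations: int = 40,
                       threshold: float = 0.8) -> list[str]:
    random.seed(0)
    population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half, breed the rest from it.
        population.sort(key=detector_score, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    # Fuzzy words: candidates that score above the detector's acceptance
    # threshold yet are not the real wake word.
    return [w for w in population
            if detector_score(w) >= threshold and w != TARGET]

fuzzy = evolve_fuzzy_words()
print(fuzzy[:5])
```

The key property this sketch shares with the study’s setting is that the search needs only the detector’s accept/reject signal, not its internals or vocabulary.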
Retraining the voice assistant’s wake-word detector with fuzzy words can reduce the impact of the fake wake phenomenon, because it teaches the detector to distinguish between real and fake wake words. Voice assistant manufacturers can benefit greatly from this research: they can retrain existing models to be more accurate and less susceptible to fake-wake attacks, increasing both security and privacy.
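A toy illustration of why retraining with fuzzy words helps, under an assumed simplified model: suppose the detector accepts any word whose similarity to the wake word exceeds a learned threshold. Treating the discovered fuzzy words as hard negative examples lets us tighten that threshold so genuine utterances still pass while the fuzzy words no longer do. All names and the similarity measure are hypothetical.

```python
# Assumed simplified detector: accept a word if its similarity to the
# wake word exceeds a learned threshold.
def similarity(word: str, wake: str = "alexa") -> float:
    matches = sum(a == b for a, b in zip(word, wake))
    return matches / max(len(word), len(wake))

def retrain_threshold(positives, hard_negatives, margin=1e-3):
    """Pick the tightest threshold that rejects every known fuzzy word
    while still accepting every genuine utterance."""
    hi_neg = max(similarity(w) for w in hard_negatives)
    lo_pos = min(similarity(w) for w in positives)
    assert lo_pos > hi_neg, "positives and negatives overlap"
    return hi_neg + margin

positives = ["alexa"]                      # genuine wake-word utterances
fuzzy_negatives = ["aleka", "alexo", "elexa"]  # discovered fuzzy words

old_threshold = 0.5
new_threshold = retrain_threshold(positives, fuzzy_negatives)

for w in fuzzy_negatives:
    assert similarity(w) > old_threshold   # fooled the original detector
    assert similarity(w) < new_threshold   # rejected after retraining
```

A real detector is a neural model rather than a threshold, but the mechanism is analogous: adding fuzzy words as labeled negatives during retraining moves the decision boundary away from them.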