AI-enabled spectacles listen to silent speech
What is EchoSpeech?
The principal author of EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing is a PhD student of information science. EchoSpeech is a low-power wearable interface that requires only a few minutes of user training data before it recognises commands, and the recognition itself runs on a smartphone.
Who will use it?
This silent speech technology could serve as an effective input for a voice synthesiser for people who are unable to vocalise sound, with the potential to restore patients’ voices. In its current form, EchoSpeech could be used to converse with others via smartphone in places where speaking aloud is inconvenient or inappropriate, such as a busy restaurant or a quiet library.
How does it look and work?
Equipped with a pair of microphones and speakers the size of pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face to track mouth movements. A deep learning algorithm then analyses the resulting echo profiles in real time, achieving 95% accuracy.
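The article does not describe the exact signal design, but active acoustic sensing of this kind typically emits a short near-ultrasonic sweep and cross-correlates the microphone recording with it, so that correlation peaks mark echo delays from nearby surfaces such as the moving mouth. Below is a minimal, hypothetical sketch of that idea; the chirp parameters and helper names are assumptions for illustration, not EchoSpeech's actual implementation:

```python
import numpy as np

def chirp(fs, duration, f0, f1):
    """Linear frequency sweep used as the transmitted probe signal."""
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration)))

def echo_profile(tx, rx):
    """Cross-correlate the received audio with the transmitted chirp.
    Peak positions correspond to echo delays, i.e. round-trip distances
    to reflecting surfaces."""
    corr = np.correlate(rx, tx, mode="valid")
    return corr / (np.linalg.norm(tx) * np.linalg.norm(rx) + 1e-12)

fs = 48_000                               # typical audio sampling rate
tx = chirp(fs, 0.005, 16_000, 20_000)     # hypothetical 5 ms near-ultrasonic sweep

# Simulate a received signal: an attenuated echo arriving after 1 ms
delay = int(0.001 * fs)
rx = np.zeros(delay + len(tx) + 200)
rx[delay:delay + len(tx)] = 0.3 * tx

profile = echo_profile(tx, rx)
print(np.argmax(profile) == delay)        # prints True: strongest echo at the simulated delay
```

A sequence of such profiles over time forms a 2-D input that a neural network can classify into silent-speech commands, which is consistent with the real-time deep learning analysis the article describes.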
As an acoustic sensing technology, EchoSpeech eliminates the need for wearable video cameras. Because audio data is far smaller than image or video data, it requires less bandwidth to analyse and can be streamed to a smartphone in real time over Bluetooth, according to François Guimbretière, professor of information science. Furthermore, because the data is processed locally on your smartphone rather than sent to the cloud, “privacy-sensitive information never leaves your control.”