Spoken Communication from Neural Signals
The Cognitive Systems Lab (CSL) at the University of Bremen conducts research on brain-computer interfaces (BCIs) that translate the neural signals underlying speech processes in the brain directly into audible speech, or into a textual representation that can be read by people or interpreted as commands by computers. With this research, the CSL team is closing the gap between spoken communication interfaces and BCIs.
Neurological disorders can lead to the loss of the ability to speak. Affected people fall silent: without additional assistance they are completely unable to communicate with other people or interact with their environment. Speech prostheses that transform neural signals directly into speech or text therefore offer great potential as alternative communication modalities. For many years, the CSL team has carried out research and development in this field with its American partners. Jointly they have advanced the state of the art with several innovative methods and technologies, described below.
Methodologies
The methodologies developed by the CSL rely on brain activity associated with speech, leveraging the fact that spoken communication is initiated and continuously processed in the brain. Specifically, these speech processes manifest as electric potentials, arising from the interaction of many underlying neurons, which can be recorded with electrodes. Analyzing such brain activity with appropriate machine learning methods yields insights into spoken communication down to the fine-grained level of sound production. The identified sound segments can be presented as text or directly synthesized into audible speech.
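To make this decoding idea concrete, the following is a minimal sketch and not the CSL pipeline: it assumes ECoG-like recordings as a NumPy array sampled at 1 kHz, extracts high-gamma band power as neural features, and maps them to spectral speech targets with a simple linear model. The sampling rate, band limits, window size, and Ridge decoder are all illustrative assumptions.

```python
# Illustrative sketch only -- not the CSL pipeline. Assumes ECoG-like
# recordings as a (samples x channels) NumPy array at 1 kHz; band limits,
# window size, and the linear decoder are example choices.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import Ridge

FS = 1000                 # assumed neural sampling rate in Hz
HIGH_GAMMA = (70, 170)    # band often used as a correlate of local activity

def high_gamma_power(signals, fs=FS, band=HIGH_GAMMA, win_s=0.05):
    """Band-pass each channel, then average signal power in short windows."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signals, axis=0)
    win = int(win_s * fs)
    n_win = filtered.shape[0] // win
    power = filtered[: n_win * win] ** 2
    return power.reshape(n_win, win, -1).mean(axis=1)   # (windows, channels)

# Map neural features to per-window acoustic targets (e.g., spectrogram
# frames) with a simple linear model; dummy data stands in for recordings.
rng = np.random.default_rng(0)
X = high_gamma_power(rng.standard_normal((10 * FS, 64)))  # 10 s, 64 channels
y = rng.standard_normal((X.shape[0], 40))                 # 40 spectral bins
decoder = Ridge(alpha=1.0).fit(X, y)
reconstructed_frames = decoder.predict(X)
```

Each row of reconstructed_frames could then be passed to a vocoder for audible output, or classified into sound segments for a textual representation.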
The machine learning methods mentioned above learn from brain signals donated by voluntary participants. The recruitment of participants, the design of the studies, and the collection of brain signal data are carried out by our American cooperation partners. The data collection relies on an invasive technology in which electrodes are placed on the cortex or inserted into the brain as depth electrodes. The required surgical procedure is planned for the volunteer patients for medical reasons, because they suffer from severe epilepsy; the positioning of the electrodes in the brain is therefore arranged according to each patient's clinical needs. Through direct interpretation of neural signals into audible speech, researchers hope to develop neural speech prostheses that will one day enable humans to communicate using only their thoughts.
Scientific Contributions
Scientific research results and contributions from the CSL team cover not only the offline reconstruction of speech from recorded data, but also live synthesis using a real-time capable decoder that instantly converts brain activity into speech, played back as acoustic feedback via loudspeaker. This human-centric Brain-to-Speech principle directly involves the user in a closed loop: users can listen to the system's interpretation of their own neural signals.
To solve these challenges, algorithms were implemented and evaluated that generate speech with maximal audio quality while minimizing delay. This was achieved by engineering the signal processing, extracting useful features, and hand-crafting real-time decoding techniques that provide continuous feedback to the user, either as audible speech for direct human-to-human communication or as a textual representation that machines can interpret as commands.
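As an illustration of the closed-loop principle, the sketch below, written under stated assumptions and not the actual CSL decoder, processes neural data in short 50 ms blocks and writes the decoded audio to the sound card immediately, so the feedback delay stays close to the block length. The acquisition function read_block and the decoder decode_block are hypothetical stand-ins; playback uses the sounddevice package.

```python
# Closed-loop sketch under stated assumptions -- not the CSL decoder.
# read_block() and decode_block() are hypothetical stand-ins.
import numpy as np
import sounddevice as sd

FS_NEURAL = 1000   # assumed neural sampling rate (Hz)
FS_AUDIO = 16000   # assumed audio output rate (Hz)
BLOCK_S = 0.05     # 50 ms blocks keep the feedback delay low

def read_block():
    """Hypothetical acquisition call; returns random dummy samples here."""
    return np.random.randn(int(BLOCK_S * FS_NEURAL), 64)

def decode_block(features):
    """Placeholder decoder: a trained model would map features to audio."""
    n = int(BLOCK_S * FS_AUDIO)
    return np.zeros((n, 1), dtype=np.float32)   # silence as a stand-in

stream = sd.OutputStream(samplerate=FS_AUDIO, channels=1, dtype="float32")
stream.start()
try:
    for _ in range(200):                        # ~10 s demo loop
        block = read_block()                    # (samples, channels)
        features = (block ** 2).mean(axis=0)    # trivial stand-in features
        audio = decode_block(features)
        stream.write(audio)                     # immediate acoustic feedback
finally:
    stream.stop()
    stream.close()
```

In a real system the feature extraction and decoder would be the trained components described above; the block size trades audio quality against feedback latency.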
Collaboration Partner
We work closely with the ASPEN Lab at Virginia Commonwealth University, which is directed by Professor Dean Krusienski.
Contact
Prof. Tanja Schultz
Cognitive Systems Lab, University of Bremen
Phone: +49 (0) 421 218 64270
tanja.schultz@uni-bremen.de
Miguel Angrick, M. Sc.
Cognitive Systems Lab, University of Bremen
Phone: +49 (0) 421 218 64265
miguel.angrick@uni-bremen.de
Relevant Publications:
We have listed our publications on spoken communication from neural signals below, sorted by publication date. For an introduction to the field, the following articles are particularly suitable: Christian Herff and Tanja Schultz, "Automatic Speech Recognition from Neural Signals: A Focused Review", Frontiers in Neuroscience, volume 10, 2016; and Tanja Schultz et al., "Biosignal-based Spoken Communication: A Survey", IEEE/ACM Transactions on Audio, Speech, and Language Processing, volume 25, 2017.