Brain Computer Interface for Speech Synthesis Based on Multilayer Differential Neural Networks

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

This manuscript proposes the design of a speech synthesis algorithm based on measured electroencephalographic (EEG) signals previously classified by a class of neural network with continuous dynamics. A novel multilayer differential neural network (MDNN) classifies a database containing the EEG studies of 20 volunteers. The database consists of input-output pairs corresponding to EEG signals and the word imagined by the volunteer. The suggested MDNN estimates the unknown relationship between the information instances and proposes the most likely word that the user wants to say. The proposed MDNN classifies with over 95% accuracy a set of words obtained from the suggested EEG study, in which the users watch four different geometric figures on a screen.
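The classification task described above can be sketched in code. The following is a minimal, hypothetical stand-in for the paper's MDNN: a discrete-time softmax classifier trained on synthetic "EEG feature" vectors labeled with one of four words, mirroring the four geometric figures shown to volunteers. The feature dimension, word vocabulary, and training details are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
WORDS = ["circle", "square", "triangle", "star"]  # assumed vocabulary
n_features, n_classes = 16, len(WORDS)

# Synthetic dataset: each class clusters around its own mean feature vector,
# playing the role of per-word EEG signatures.
means = rng.normal(0.0, 2.0, size=(n_classes, n_features))
X = np.vstack([means[c] + rng.normal(0.0, 0.5, (50, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 50)

# One-layer softmax classifier trained by gradient descent on cross-entropy.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)      # gradient of mean cross-entropy loss
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

# Predict the most likely imagined word for each sample.
pred = np.argmax(X @ W + b, axis=1)
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On well-separated synthetic clusters like these, such a classifier easily exceeds the 95% level reported in the abstract; real EEG data is far noisier and is why the paper employs a richer continuous-dynamics network.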

Original language: English
Pages (from-to): 126-140
Number of pages: 15
Journal: Cybernetics and Systems
Volume: 53
Issue number: 1
DOIs
State: Published - 2022

Keywords

  • EEG signal analysis
  • EEG signal processing
  • Speech synthesis
  • multilayer differential neural networks
  • pattern classifier
