Sub Auditory Speech Recognition Based on EMG/EPG Signals

Abstract—Sub-vocal electromyogram/electropalatogram (EMG/EPG) signal classification is demonstrated as a method for silent speech recognition. Recorded electrode signals from the larynx and sublingual areas below the jaw are noise filtered and transformed into features using complex dual quad tree wavelet transforms. Feature sets for six sub-vocally pronounced words are trained using a trust region scaled conjugate gradient neural network. Real-time signals for previously unseen patterns are classified into categories suitable for primitive control of graphic objects. Feature construction, recognition accuracy, and an approach for extending the technique to a variety of real-world application areas are presented.


A. Data Acquisition
Three subjects aged 55, 35, and 24 were recorded while sub-auditorially pronouncing six English words: stop, go, left, right, alpha, and omega. These six words were selected to form a control set for a small graphic model of a Mars Rover.
Alpha and omega were chosen as general control words to represent faster/slower or up/down as appropriate for the particular simulated task.
EMG and EPG signal data was collected for each of the subjects using two pairs of self-adhesive Ag/Ag-Cl electrodes. They were located on the left and right anterior area of the throat, approximately 0.25 cm back from the chin cleft and 1-1/2 cm from the right and left side of the larynx (Figure 1). Initial results indicated that as few as one electrode pair, located diagonally between the cleft of the chin and the larynx, would suffice for small sets of discrete word recognition. Signal grounding required an additional electrode attached to the right wrist. When acquiring data using the wet electrodes, each electrode pair was connected to a commercial Neuroscan signal recorder which recorded the EMG responses sampled at 2000 Hz. A 60 Hz notch filter was used to remove ambient interference.
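As a concrete illustration of the preprocessing described above, the 60 Hz notch filter at a 2000 Hz sampling rate can be sketched in Python with SciPy. The quality factor and the choice of zero-phase filtering are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy import signal

FS = 2000.0        # sampling rate reported in the paper (Hz)
NOTCH_FREQ = 60.0  # ambient mains interference (Hz)
Q = 30.0           # notch quality factor (assumed)

def remove_mains_interference(emg: np.ndarray) -> np.ndarray:
    """Remove 60 Hz ambient interference from a raw EMG trace."""
    b, a = signal.iirnotch(NOTCH_FREQ, Q, fs=FS)
    # filtfilt applies the filter forward and backward (zero phase),
    # so EMG onsets are not shifted in time
    return signal.filtfilt(b, a, emg)

# Example: one second of synthetic EMG-like noise plus 60 Hz hum
t = np.arange(0, 1.0, 1.0 / FS)
raw = 0.1 * np.random.randn(t.size) + np.sin(2 * np.pi * 60.0 * t)
clean = remove_mains_interference(raw)
```

Zero-phase filtering avoids shifting signal onsets, which matters when the traces are later segmented into word-length blocks.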

Fig. 1: Electrode placement and recording

Fig. 2 shows two typical blocked EMG signals for the words left and omega.

Index Terms—EMG, Sub Acoustic Speech, Wavelet, Neural Network
and Shane Agabon (e-mail: cjorgensen@mail.arc.nasa.gov). Diana Lee is with SAIC Corporation, NASA Ames Research Center (e-mail: ddlee@mail.arc.nasa.gov). Shane Agabon is with QSS Corporation, NASA Ames Research Center (e-mail: sagabon@mail.arc.nasa.gov).

Here k is the translation parameter and j the dilation/compression parameter of the wavelet expansion function; in our case the filters were Daubechies filters. The samples must be evenly spaced. In effect, two parallel fully decimated trees are constructed so that the filters in one tree provide delays that are half a sample different from those in the other tree. In the linear-phase case this requires odd-length filters in one tree and even-length filters in the other. The impulse response of the filters then looks like the real and
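The fully decimated filter-bank idea behind the wavelet features can be sketched with a single tree built from the Daubechies-4 filters. This is a deliberate simplification: the paper's dual-tree transform runs a second tree whose filters are offset by half a sample, and the subband-energy features below are an assumption for illustration, not the paper's exact feature set.

```python
import numpy as np

# Daubechies-4 scaling (lowpass) filter coefficients
SQ3 = np.sqrt(3.0)
H = np.array([1 + SQ3, 3 + SQ3, 3 - SQ3, 1 - SQ3]) / (4.0 * np.sqrt(2.0))
# Quadrature-mirror highpass filter: g[n] = (-1)^n * h[L-1-n]
G = H[::-1] * np.array([1.0, -1.0, 1.0, -1.0])

def dwt_level(x: np.ndarray):
    """One level of the fully decimated DWT: filter, then downsample by 2."""
    approx = np.convolve(x, H[::-1])[::2]   # lowpass branch
    detail = np.convolve(x, G[::-1])[::2]   # highpass branch
    return approx, detail

def subband_energies(x: np.ndarray, levels: int = 4) -> np.ndarray:
    """Per-subband energies as illustrative classifier features."""
    feats = []
    for _ in range(levels):
        x, detail = dwt_level(x)
        feats.append(np.sum(detail ** 2))
    feats.append(np.sum(x ** 2))            # final approximation band
    return np.array(feats)

block = np.random.randn(2000)    # one 1-second EMG block at 2000 Hz
feats = subband_energies(block)  # 5 features for levels=4
```

Because the Daubechies filters are orthonormal, the energy of the input block is preserved across the lowpass and highpass branches at each level, so the subband energies partition the signal's power across frequency bands.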

TABLE 3
(QUASAR) to develop electric potential free-space sensors that do not require resistive, or even good capacitive, coupling to the user. The sensor design provides a high input