Concentrating on Word Sounds Helps Reading Instruction and Intervention

(Posted March 13, 2015)

Article by Bert Gambini, 1-29-2015
Excerpts from Brook Phillips, WDE Vision Outreach Services Consultant

A neuroimaging study by a University at Buffalo psychologist suggests that phonics, a method of learning to read through knowledge of word sounds, shouldn't be overlooked in favor of whole-language techniques that focus on visually memorizing word patterns. The finding could help improve the diagnosis and treatment of common reading disorders such as dyslexia.

A better reader is someone whose visual processing is more sensitive to auditory information, according to the study's results. Barring injury, all parts of the brain are working at all times, contrary to the myth that it functions at only a fraction of its capacity. However, different parts of the brain are specialized for different types of activities, which trigger some regions to work harder than others.

In reading, the Visual Word Form Area (VWFA) becomes excited when it encounters familiar letter combinations. Think of a bottom-up process as a flow of information that begins with the visual system: neurons detect basic features in words, such as line orientation, and that flow eventually leads to word recognition. A top-down process means that some other information, such as knowledge of word sounds, enters that flow of visual recognition.

To find evidence of this top-down input, researchers presented word pairs to subjects between the ages of 8 and 13 with a wide range of reading abilities. The subjects had to determine whether the words rhymed while an MRI scanner monitored their brain activity. The experiment presented the word pairs under three conditions: subjects first read the word pairs (visual-only), then heard the word pairs (auditory-only), and finally combined sight and sound, hearing the first word but reading the second (audio-visual). The MRI scanner showed which parts of the brain were most active during each condition by displaying a three-dimensional representation of the brain, made up of what look like a series of cubes, called voxels.

To make sense of the results across the conditions, researchers take the sum of the auditory-only and visual-only signals and compare it to the strength of the audio-visual signal. This helps them distinguish multisensory neurons, which are excited by combined audio-visual information, from collections of heterogeneous unisensory neurons, a mix of visual-only and auditory-only neurons that each respond to one modality or the other. As you learn to read, your brain makes more use of top-down information about the sounds of letter combinations in order to recognize them as parts of words.
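The comparison described above can be sketched in code. This is only an illustrative toy, not the study's actual analysis: the voxel values below are randomly generated, and the simple "audio-visual response greater than the sum of the unisensory responses" test stands in for the statistical methods real neuroimaging analyses use.

```python
import random

random.seed(0)
n_voxels = 6

# Simulated mean activation per voxel under each condition (arbitrary
# units, invented for illustration; these are not study data).
auditory_only = [random.uniform(0.5, 1.5) for _ in range(n_voxels)]
visual_only = [random.uniform(0.5, 1.5) for _ in range(n_voxels)]
audio_visual = [a + v + random.gauss(0, 0.3)
                for a, v in zip(auditory_only, visual_only)]

# A voxel whose audio-visual response exceeds the sum of its unisensory
# responses is a candidate for containing multisensory neurons; one that
# merely matches the sum is consistent with a mix of unisensory neurons.
superadditive = [av > a + v
                 for a, v, av in zip(auditory_only, visual_only, audio_visual)]

for i, (a, v, av, s) in enumerate(
        zip(auditory_only, visual_only, audio_visual, superadditive)):
    kind = "multisensory candidate" if s else "unisensory mix"
    print(f"voxel {i}: A={a:.2f} V={v:.2f} AV={av:.2f} -> {kind}")
```

The key design point is that the audio-visual condition is judged against the sum of the two unisensory conditions, not against either one alone, which is what separates genuinely multisensory responses from a mixture of single-modality neurons.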

To read the complete article, go to: http://medicalxpress.com/news/2015-01-word-intervention.html.