Language: The brain’s ability to process native speech sounds

It’s approximately 6 o’clock in the evening when the telephone rings. As I pick up the receiver, my 4-year-old nephew greets me. With great excitement, he demonstrates his ability to say words he’s learned in his intensive French program. I proudly congratulate him, trying to mask my tone of jealousy at how he pronounces complex French sounds. I wonder to myself: why is it that after four years of high school French, I still can’t roll my tongue when I say “Merci”?

Speech sounds, otherwise known as phonemes, vary across languages. The ability to pronounce these phonemes is first acquired through an infant’s attention to a parent’s voice and facial expressions. Over the years, researchers have been interested in studying an infant’s ability to differentiate the speech sounds of their native language from non-native speech sounds. Studies have shown that between 6 and 8 months of age, babies can separate the speech sounds they experience every day from non-native speech sounds heard less commonly (Intartaglia et al., 2016). Research has also shown that at this stage of life, babies can differentiate speech sounds from foreign languages just as well as babies native to those languages. Between 10 and 12 months of age, however, infants lose this impressive ability: the child becomes attentive to native phonemes and increasingly insensitive to phonemes from non-native languages. This narrowing supports the development of speech (Intartaglia et al., 2016). Intartaglia et al. (2016) conducted a study focused on this lost ability in adults to separate speech sounds from non-native languages. The aim of their study was to determine whether the brain’s successful encoding of native speech sounds is based exclusively on language experience.

Brain responses to a phoneme stimulus were recorded for 26 native speakers of American English and 35 native speakers of French. Participants’ ages ranged from 18 to 36 years, with an approximate gender ratio of 2:1, females to males. The phoneme stimuli used in this experiment were [ʁy] (the native French syllable “ru”) and [ðæ] (the native English syllable “thae”). These speech sounds were chosen because each consonant–vowel pairing does not exist in the opposing language. The stimuli were presented through sound-limiting headphones, delivered exclusively to the right ear at different frequencies (pitches). Auditory brainstem responses were gathered by placing electrodes on the participant’s scalp.

The results of this experiment supported the hypothesis that brain responses show increased sensitivity to speech stimuli through language experience. Two major findings stood out. First, brain responses from English-speaking participants showed a stronger reaction to the fundamental frequency (pitch) of the stimuli than those from French-speaking participants. Although neither language is a tone language, English relies more heavily than French on the specific placement of pitch in pronunciation to relay an accurate message. One example of frequency variation in English is raising the pitch at the end of a sentence to cue a question. This language experience of the English-speaking participants was believed to have contributed to their brain’s sensitivity to fundamental frequency. Second, participants showed more accurate brain responses to speech sounds in their native language than to non-native sounds. This result shows that language experience strengthens brain responses to native phonemes and can support more accurate prediction of upcoming words and syllables (Intartaglia et al., 2016).

In conclusion, this study provided further evidence of the declining ability of adults to distinguish non-native speech sounds. Bilingual development should therefore be encouraged during infancy, to take advantage of the brain’s impressive early flexibility toward foreign speech sounds.
Source: Intartaglia, B., White-Schwoch, T., Meunier, C., Roman, S., Kraus, N., & Schön, D. (2016). Native language shapes automatic neural processing of speech. Neuropsychologia, 89, 57–65.