How do our brains hear words?

Our brains are like all-purpose computers, able to process all kinds of information.

This could be identifying a bird as it flies through the sky, telling one tea from another by scent, or any number of other things, but one type of information that is especially important for living in society is language.

When you think about it closely, language is just a series of sounds. In principle it should be no different from a clashing cymbal, a tooting horn, or the whistling wind, so why can our brains single out language as something that carries meaning?

Signifiant and signifié: Language = form + meaning?

The field that digs deeply into what language is is known as linguistics. If you read books about linguistics, the terms signifiant and signifié often come up.

The concepts of signifiant and signifié were introduced by Ferdinand de Saussure, the father of modern linguistics, and they express the idea that a word essentially comprises two elements.

One is the sound sequence, or form, of the word (the signifiant), such as d-o-g for the word “dog”; the other is the meaning that the word “dog” carries (the signifié).

It may seem obvious, but language functions only once form and meaning are combined. Even if we can perfectly detect the sounds of a foreign language (watashiwanihongogahanasemasu), we cannot tell what is being said without knowledge of that language (私は日本語が話せます。/watashi wa nihongo ga hanasemasu./I can speak Japanese.).

In other words, to understand language, the brain needs mechanisms to process both the form (signifiant) and the meaning (signifié) of sounds.
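Saussure's two-part sign can be caricatured as a simple data structure. Here is a minimal sketch in Python, using nothing beyond the article's dog example; the class, field, and function names are my own, purely for illustration:

```python
# Illustrative only: modelling Saussure's sign as a (form, meaning) pair.
from dataclasses import dataclass

@dataclass
class Sign:
    signifiant: str  # the form: the sound/letter sequence
    signifie: str    # the meaning that the form points to

# A tiny "mental lexicon" linking known forms to meanings.
lexicon = {
    "dog": Sign(signifiant="d-o-g", signifie="a domesticated canine"),
}

def understand(form: str) -> str:
    """Hearing a form yields a meaning only if the form is in our lexicon."""
    sign = lexicon.get(form)
    return sign.signifie if sign is not None else "(unintelligible sound)"

print(understand("dog"))                           # form paired with meaning: understood
print(understand("watashiwanihongogahanasemasu"))  # form detected perfectly, but no entry
```

The point of the sketch is that detecting the form perfectly (the second call) is not enough; without an entry pairing that form with a meaning, the input remains mere sound.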

In that case, how are the forms and meanings of words processed in the brain?

Ventral and dorsal pathways in the visual and auditory senses

I talked above about the form and meaning of words, but this kind of mechanism is by no means limited to words.

Information received through the eyes—visual information—is thought to consist of two broad kinds of information.

For example, let’s imagine that a baseball is coming towards you.

To respond appropriately to the baseball, you must process two pieces of information:

  • Where is the ball now?
  • What is that thing flying towards me?

You can’t catch a ball if you don’t know where it is coming from or going to, and if the approaching object is a shard of glass rather than a ball, catching it would cause serious injury.

To summarize: to deal with a visual object, the brain must process both spatial information (where it is) and meaning information (what it is). A dorsal stream running from the visual cortex to the parietal lobe handles the former (spatial information processing), and a ventral stream running from the visual cortex to the temporal lobe handles the latter (meaning information processing).
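The two-stream architecture can be caricatured in code: the same input is routed to two independent analyses whose outputs are only useful in combination. This is a loose sketch under my own naming, not a model of actual neural processing:

```python
# Loose caricature of the dual-stream idea: one input, two parallel analyses.

def dorsal_stream(obj):
    """'Where' pathway: extract spatial information for guiding action."""
    return {"position": obj["position"], "velocity": obj["velocity"]}

def ventral_stream(obj):
    """'What' pathway: extract the object's identity (its meaning)."""
    return obj["label"]

incoming = {"position": (3.0, 1.5), "velocity": (-4.0, 0.2), "label": "baseball"}

where = dorsal_stream(incoming)  # needed to intercept the object at all
what = ventral_stream(incoming)  # needed to decide how to respond

# Only the combination supports an appropriate response.
action = "catch" if what == "baseball" else "dodge"  # e.g. a shard of glass
```

Knowing only where (without what) risks catching the glass; knowing only what (without where) means the ball hits you.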

The auditory system is believed to have streams equivalent to these dorsal and ventral streams, with the dorsal stream processing the form of sounds and the ventral stream processing their meaning.

However, the meaning of sounds can refer to a range of things. The whistle of a boiling kettle or an ambulance siren are comparatively simple sounds, but the words and sounds of humans talking can carry huge amounts of information and take innumerable patterns.

That being the case, does the brain have mechanisms specialized to processing the meaning of language?

The posterior part of the left superior temporal sulcus and sound processing

The paper I discuss today investigates auditory cognitive processing in humans in detail.

In the experiment, subjects were played four types of sounds:

  • ordinary speech;
  • speech distorted with noise (which is harder to hear than ordinary speech and sounds like a harsh whisper);
  • unintelligible sounds (with a tempo and intonation the same as ordinary speech); and
  • unintelligible sounds distorted with noise (which sounds like a harsh whisper),

and their brain activity was investigated using PET.
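As described, the four stimuli cross two factors: whether the sound has the form of speech, and whether it is distorted with noise. Comparing conditions along one factor while the other is balanced lets the analysis separate responses to speech from responses to mere acoustic distortion. A sketch of that logic (the condition labels and factor names here are mine, not the paper's):

```python
# The four stimulus types, encoded as a 2x2 factorial design.
# Factor names and condition labels are mine; the paper's terminology may differ.
conditions = {
    "ordinary speech":                {"speech": True,  "noisy": False},
    "noise-distorted speech":         {"speech": True,  "noisy": True},
    "unintelligible sounds":          {"speech": False, "noisy": False},
    "noise-distorted unintelligible": {"speech": False, "noisy": True},
}

# Contrasting speech vs non-speech, with noise balanced across both sides,
# isolates brain responses tied to speech rather than to distortion alone.
speech_like = sorted(n for n, f in conditions.items() if f["speech"])
noisy = sorted(n for n, f in conditions.items() if f["noisy"])
```

Because noise appears once on each side of the speech/non-speech contrast, any activity that differs between the two sides cannot be explained by the noise distortion itself.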

The results showed that the left superior temporal sulcus is strongly involved in sound processing, and in particular that its posterior region is strongly involved in speech processing.

The paper does not mention it, but some people on the autism spectrum find talking on the phone very difficult, and I wondered whether this could stem from a difference in the brain's speech-processing mechanism compared with people without the condition.

Reference URL: Identification of a pathway for intelligible speech in the left temporal lobe
