Read these sentences aloud:

I never said she stole my money.

I never said she stole my money.

I never said she stole my money.

Emphasizing any one of the words over the others makes the string of words mean something completely different. "Pitch change" — the vocal quality we use to emphasize words — is a crucial part of human communication, whether spoken or sung.

Recent research from Dr. Edward Chang's lab at the University of California, San Francisco's epilepsy center has narrowed down which part of the brain controls our ability to regulate the pitch of our voices when we speak or sing — the part that enables us to differentiate between the utterances "Let's eat, Grandma" and "Let's eat Grandma."

Scientists already knew, more or less, what parts of the brain are engaged in speech, says Chang, a professor of neurological surgery. What the new research has allowed, he says, is a better understanding of the neural code of pitch and its variations — how information about pitch is represented in the brain.

Chang's team was able to study these neural codes with the help of a particular group of study volunteers: epilepsy patients.

Chang treats people whose seizures can't be medically controlled; these patients need surgery to stop the misfiring neurons. He places electrodes in each patient's brain to help guide the scalpel during surgery.

And, with the patients' permission, the recordings made while the electrodes are in place also allow Chang and his colleagues to study the way the brain seamlessly orchestrates movements of muscles in the tongue, lips, jaw and larynx to produce natural speech.

In a study published in Neuron last month, epilepsy patients were asked to say aloud more than 400 different sentences designed to cover many of the movement patterns seen in American English. While they spoke, researchers recorded both their speech and the neural activity in the sensorimotor cortex region of their brains.

The result was a map showing that neurons work in clusters to create words, coordinating the patterns of movement of nearly 100 different muscles.

There's not one cluster of neurons that controls the entire tongue, for example, says Josh Chartier, a bioengineering graduate student in Chang's lab and a co-author of the study. Instead, a single cluster consisting of tens of thousands of neurons in one part of the brain controls a movement pattern in the lips, tongue and larynx.

"If we think about the sensory motor cortex as a whole as the major command center, it's giving commands down to the vocal tract of how to move," he says.

Each area that controls a movement is like a chord played on a piano, he says, and when all these chords come together, music is made. In a similar way, when different areas of the sensorimotor cortex are activated, fluid speech is produced.

A second study, published in June in the journal Cell and co-authored by recent UCSF doctoral graduate Benjamin Dichter, went even further, discerning the way pitch is controlled by regulating tension in the vocal folds.

Dichter says they were able to identify neural populations that encode pitch in the dorsal laryngeal motor cortex, a part of the brain that controls movement of the larynx.

Researchers asked participants to read the sentence "I never said she stole my money" multiple times, emphasizing a different word in the sentence each time. They also had participants listen to pitch patterns, and then repeat them.

Each time the speakers emphasized a word, brain cells in the dorsal laryngeal motor cortex became active — especially when the speaker's pitch reached its peak. What's more, right before the speaker vocalized a pitch change while singing, the researchers also picked up neural activity in that area.

And when some of these brain sites were stimulated via the electrodes, larynx muscles would flex; some patients even made involuntary vocalizations.

The higher the pitch, the more activity there was in this area, Dichter says. The region fired even when people weren't speaking, but just listening to recordings of themselves speak. That suggests this part of the brain functions when we hear a change in vocal pitch, as well as when we change the pitch ourselves.

The next major step, Chang says, is to harness this information to improve devices for people who struggle with speech.

Emily Myers, an associate professor in the departments of speech, language and hearing sciences at the University of Connecticut, who was not involved in the study, says this new research could be especially useful to people with aprosodia, a neurological condition that some researchers have described as "a disruption in the expression or comprehension of the changes in pitch, loudness, rate, or rhythm that convey a speaker's emotional intent."

Someone with this disorder, Myers says, will typically have a very flat voice, such as sometimes happens in people who have Parkinson's disease, or after a stroke.

"Maybe, down the road, knowing how these areas of the brain are responsible in controlling the melody of your voice might set the stage for things like cortical arrays of electrodes that could be implanted that could help people regain that melody if they've lost it," she says.

An interesting next step in the research, Chang says, would be to investigate the way that people's brains work when speaking a language other than English, such as Mandarin — where varying the intonation can significantly change the meaning of a word.

"In Mandarin, every syllable has its own pitch tone and trajectory," Chang explains. "We'd like to understand how those differences occur as people across different languages speak and understand pitch changes."

Copyright 2018 NPR. To see more, visit http://www.npr.org/.
