Exploring ways into brain's 'music room'

Experts say music sensitivity may be more fundamental to the human brain than is speech perception

Whether to enliven a commute, relax in the evening or drown out the buzz of a neighbour’s recreational drone, Americans listen to music nearly four hours a day.

In international surveys, people consistently rank music as one of life’s supreme sources of pleasure and emotional power. We marry to music, graduate to music, mourn to music.

Every culture ever studied has been found to make music, and among the oldest artistic objects known are slender flutes carved from mammoth bone some 43,000 years ago — 24,000 years before the cave paintings of Lascaux.

Given the antiquity, universality and deep popularity of music, researchers had long assumed that the human brain must be equipped with some sort of music room, a distinctive piece of cortical architecture dedicated to detecting and interpreting the dulcet signals of song.

Yet for years, scientists failed to find any clear evidence of a music-specific domain through conventional brain-scanning technology, and the quest to understand the neural basis of a quintessential human passion foundered.

Now, researchers at the Massachusetts Institute of Technology have devised a radical new approach to brain imaging that reveals what past studies had missed.
By mathematically analysing scans of the auditory cortex and grouping clusters of brain cells with similar activation patterns, the scientists have identified neural pathways that react almost exclusively to the sound of music — any music.

It may be Bach, bluegrass, hip-hop, big band, sitar or Julie Andrews. A listener may relish the sampled genre or revile it. No matter. When a musical passage is played, a distinct set of neurons tucked inside a furrow of a listener’s auditory cortex will fire in response. Other sounds, by contrast – a dog barking, car skidding, toilet flushing – leave the musical circuits unmoved.

Nancy Kanwisher and Josh H McDermott, professors of neuroscience at MIT, and their postdoctoral colleague Sam Norman-Haignere reported their results in the journal Neuron. The findings offer researchers a new tool for exploring the contours of human musicality.

“Why do we have music?” Kanwisher said. “Why do we enjoy it so much and want to dance when we hear it? How early in development can we see this sensitivity to music, and is it tunable with experience? These are the really cool first-order questions we can begin to address.”

McDermott said the new method could be used to computationally dissect any scans from a functional magnetic resonance imaging device, or fMRI — the trendy workhorse of contemporary neuroscience — and so may end up divulging other hidden gems of cortical specialisation. As proof of principle, the researchers showed that their analytical protocol had detected a second neural pathway in the brain for which scientists already had evidence — this one tuned to the sounds of human speech.

Importantly, the MIT team demonstrated that the speech and music circuits are in different parts of the brain’s sprawling auditory cortex, where all sound signals are interpreted, and that each is largely deaf to the other’s sonic cues, although there is some overlap when it comes to responding to songs with lyrics.

The new paper “takes a very innovative approach and is of great importance,” said Josef Rauschecker, director of the Laboratory of Integrative Neuroscience and Cognition at Georgetown University. “The idea that the brain gives specialised treatment to music recognition, that it regards music as fundamental a category as speech, is very exciting to me.”

In fact, Rauschecker said, music sensitivity may be more fundamental to the human brain than is speech perception. “There are theories that music is older than speech or language. Some even argue that speech evolved from music.”

And though the survival value that music held for our ancestors may not be as immediately obvious as the power to recognise words, Rauschecker added, “music works as a group cohesive. Music-making with other people in your tribe is a very ancient, human thing to do.”

Elizabeth Hellmuth Margulis, the director of the Music Cognition Lab at the University of Arkansas, said that when earlier neuroscientists failed to find any anatomically distinct music centre in the brain, they came up with any number of rationales to explain the results.

“The story was, oh, what’s special about music perception is how it recruits areas from all over the brain, how it draws on the motor system, speech circuitry, social understanding, and brings it all together,” she said.

Some dismissed music as “auditory cheesecake,” a pastime that co-opted other essential communicative urges. “This paper says, no, when you peer below the cruder level seen with some methodologies, you find very specific circuitry that responds to music over speech.”

Kanwisher’s lab is widely recognised for its pioneering work on human vision, and for the discovery that key portions of the visual cortex are primed to instantly recognise a few highly meaningful objects in the environment, like faces and human body parts.

Soundscaping the world

The researchers wondered if the auditory system might be similarly organised to make sense of the soundscape through a categorical screen. If so, what would the salient categories be? What are the aural equivalents of a human face or a human leg — sounds or sound elements so essential the brain assigns a bit of gray matter to the task of detecting them?

To address the question, McDermott, a former club and radio disc jockey, and Norman-Haignere, an accomplished classical guitarist, began gathering a library of everyday sounds — music, speech, laughter, weeping, whispering, tires squealing, flags flapping, dishes clattering, flames crackling, wind chimes tinkling.

They put the lengthy list up for a vote on the Amazon Mechanical Turk crowdsourcing service to determine which of their candidate sounds were most easily recognised and frequently heard. That mass survey yielded a set of 165 distinctive and readily identifiable sound clips of two seconds each. The researchers then scanned the brains of 10 volunteers (none of them musicians) as they listened to multiple rounds of the 165 sound clips.

Focusing on the brain’s auditory region — located, appropriately enough, in the temporal lobes right above the ears — the scientists analysed voxels, or three-dimensional pixels, of the images mathematically to detect similar patterns of neuronal excitement or quietude.
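The grouping step described above can be illustrated with a toy sketch. The actual study used a more sophisticated voxel-decomposition method, but the core idea — cluster voxels whose response profiles across the 165 sound clips are highly correlated, and treat each cluster’s mean profile as a candidate response pattern — can be shown with a simplified greedy clustering in plain Python. All names, the correlation threshold, and the toy data here are illustrative, not from the paper.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length response profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def group_voxels(responses, threshold=0.9):
    """Greedily cluster voxels whose sound-response profiles correlate strongly.

    responses: list of per-voxel response vectors (one value per sound clip).
    Returns (groups, profiles): voxel-index groups and each group's mean profile.
    """
    groups, profiles = [], []
    for i, resp in enumerate(responses):
        for g, prof in zip(groups, profiles):
            if pearson(resp, prof) >= threshold:
                g.append(i)
                # Update the group's running mean profile in place.
                k = len(g)
                for j, v in enumerate(resp):
                    prof[j] += (v - prof[j]) / k
                break
        else:
            groups.append([i])
            profiles.append(list(resp))
    return groups, profiles

# Toy data: four voxels' responses to [music, music, speech, speech] clips.
responses = [
    [1.0, 0.9, 0.1, 0.2],  # music-selective voxel
    [0.9, 1.0, 0.2, 0.1],  # music-selective voxel
    [0.1, 0.2, 1.0, 0.9],  # speech-selective voxel
    [0.2, 0.1, 0.9, 1.0],  # speech-selective voxel
]
groups, profiles = group_voxels(responses)
```

On this toy input the voxels fall into two groups — one responding to the music clips, one to the speech clips — mirroring, in miniature, how category-selective patterns can emerge from raw voxel responses without telling the algorithm what music or speech is.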

The computations generated six basic response patterns, six ways the brain categorised incoming noise. But what did the categories correspond to? Matching sound clips to activation patterns, the researchers determined that four of the patterns were linked to general physical properties of sound, like pitch and frequency.

The fifth traced the brain’s perception of speech, and for the sixth the data turned operatic, disclosing a neuronal hot spot in the major crevice, or sulcus, of the auditory cortex that attended to every music clip researchers had played.

“The sound of a solo drummer, whistling, pop songs, rap, almost everything that has a musical quality to it, melodic or rhythmic, would activate it,” Dr Norman-Haignere said. “That’s one reason the result surprised us. The signals of speech are so much more homogeneous.”

Researchers have yet to determine exactly which acoustic features of music stimulate its dedicated pathway. “It’s difficult to come up with a dictionary definition,” McDermott said. “I tend to think music is best defined by example.”

Justice Potter Stewart of the Supreme Court likewise said of pornography that he knew it when he saw it. Maybe music is a kind of cheesecake after all. The neuroscience of music is just getting started, and our brains can’t help but stay tuned.
