New methods to understand how the brain responds to sounds – including singing

New research has identified neurons in the brain that ‘light up’ to the sound of singing, but do not respond to any other type of music.

Image credit: Max Pixel, CC0 Public Domain

Samuel Norman-Haignere, Ph.D., assistant professor of Neuroscience and of Biostatistics and Computational Biology with the Del Monte Institute for Neuroscience at the University of Rochester, is first author of the paper in Current Biology that details these findings. “The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” Norman-Haignere said.

The singing-specific area of the brain is located in the temporal lobe, near regions that are selective for speech and music. Researchers worked with epilepsy patients who had electrodes implanted in their brains (electrocorticography, or ECoG) to localize seizure-related activity as part of their clinical care. ECoG provides more precise measurements of the brain’s electrical activity than noninvasive methods such as fMRI.

“This higher precision made it possible to pinpoint this subpopulation of neurons that responds to song. This finding, along with prior findings from our group, gives a bird’s-eye view of the organization of the human auditory cortex and suggests that there are different neural populations that selectively respond to particular categories, including speech, music, and singing,” he said.

In previous research, the team used fMRI to scan the brains of participants as they listened to different types of speech and music. Norman-Haignere combined the fMRI data from that earlier study with the new ECoG recordings to map the locations of the song-selective neural populations identified in the ECoG study.

This way of combining ECoG and fMRI is a “significant methodological advance,” according to Josh McDermott, Ph.D., of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM) and co-senior author on the study. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

Understanding when and where the auditory cortex responds to different sounds is integral to understanding the relationship neurons have to speech and music. Recent research published in Nature Human Behaviour offers a novel method for measuring the timescale over which different brain regions integrate information.

“We want to understand the time window that different neurons are processing. If a neuron is looking at a 100-millisecond window, that suggests it might be analyzing phonemes, but definitely not whole sentences,” said Norman-Haignere, who is also first author of that paper. “Prior to this research, we didn’t have a general-purpose method to estimate neural integration times.”

Understanding this timing will allow researchers to better map how information is processed across different regions of the brain. “Understanding how information is coded in different areas of the brain is necessary so we can build models that better replicate what is happening in the brain. Each step in this work brings us closer to understanding how to link these representations to perception.”

Source: University of Rochester