Music, Lyrics & Brain Structures

While I’ve been digging into the idea of music as a mnemonic device for activities of daily living, one thing has struck me as quite odd in the literature: with so much variability in the findings, there must be methodological mistakes, oversights, or misunderstandings somewhere in the research. There has to be some confounding factor wreaking havoc on the results.

Well, a new paper I happened to catch from McCarty et al. (2023), entitled Intraoperative cortical localization of music and language reveals signatures of structural complexity in posterior temporal cortex, may hold a few clues.

One Person’s Direct Brain Recordings Are Another’s Possible Epiphany

McCarty et al. (2023) had the unique opportunity to take direct brain recordings during an awake craniotomy on a musician. They found several interesting details that help map out music and language processing in the brain. The TL;DR version, however, is that music and language share the same regions until the melodies or phrases increase in complexity. The posterior superior temporal gyrus (pSTG) was involved in the perception and production of speech and music. (Neuroscience News has a nice summary and a link to the paper for anyone interested.)

The authors also note:

“…pMTG [posterior middle temporal gyrus] activity was modulated by musical complexity, while pSTG activity was modulated by syntactic complexity. This points to shared resources for music and language comprehension, but distinct neural signatures for the processing of domain-specific structural features.”

McCarty et al. (2023)

So, as you can imagine, that caught my attention.

Cortical Competition and Cognitive Decline in Alzheimer’s

Across the many papers I’ve been reading over the last year or so, a few exciting things routinely jump out at me:

  1. Alzheimer’s patients often lose speech production, motor control, and language before losing their music-related memories and abilities.
  2. For those who enjoy it and know it well, music triggers various processes and memories, and not only for listeners: it also cues movement sequences for musicians and dancers.
  3. In experiments with sung and spoken lyrics, Alzheimer’s patients show improved recall for phrases that were sung during encoding, particularly after a lengthy delay. (In some instances, they have also demonstrated savings in subsequent learning conditions.)
  4. Most studies on music as a mnemonic rely on word or language tasks.

Language and music are undoubtedly intertwined. Trébuchon et al. released a paper in a similar vein in 2021, entitled Functional Topography of Auditory Areas Derived From the Combination of Electrophysiological Recordings and Cortical Electrical Stimulation. Here’s what stuck out to me:

“…the posterior part of left STS [posterior superior temporal sulcus] seems involved in more high-level language processes required in naming and reading tasks, because its stimulation did not induce positive auditory symptoms but naming or reading deficits (e.g., delayed responses, phonological errors, or semantics errors). The reading deficit included grapheme decoding, comprehension deficit and grapheme to phoneme deficit.”

Trébuchon et al. (2021)

When testing Heschl’s gyrus (HG) nearby, Trébuchon et al. noticed that:

“…This functional asymmetry is a plausible neurophysiological substrate of the greater sensitivity of the left auditory cortex to short sound segments and brief speech features (Jamison et al., 2006; Obleser et al., 2008) and of the greater sensitivity of the right auditory cortex to slower acoustic fluctuations and longer steady speech signals such as vowels and syllables (Boemio et al., 2005; Abrams et al., 2008)…the enhanced sensitivity to the temporal acoustic characteristics of sounds that is only present in BA 41 and BA 42 reflects information processes needed for tagging further phonetic processing which likely take[s] place in BA 22 (Morillon et al., 2012; Giroud et al., 2020).”

Trébuchon et al. (2021)

Now, BA 22? That’s the pSTG region I mentioned earlier. Interesting, right? Furthermore, Trébuchon et al. note:

“…HG and PT should be tested with a repetition or repetition and designation task; Spt [located posteriorly to HG in the left Sylvian fissure] should be tested with repetition and repetitive motor tasks, STS should be preferentially tested with naming and reading tasks.”

Trébuchon et al. (2021)

Putting all of this together, the issue within the research on music as a memory device could be the choice of tasks.

Repetitive Language Tasks Could Be the Problem with the Research

For the sake of reasoning this out, let’s assume that both of these studies are “proof” of how brains process music, sound, and language. If that is the case, the messy findings in the literature over the last 30 years make perfect sense.

First, these experiments use music and lyrics of varying complexity and length. Some use memory lists (scoring the number of correctly recalled words), while others use lyrics, familiar tunes, and novel melodies. All of these would engage different structures and processes, and therefore a meta-analysis or summary of the research so far would be asinine. The best option is to examine each of these studies on its own.
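To make that concrete, here’s a minimal sketch of how heterogeneity is typically quantified in a meta-analysis (Cochran’s Q and I²). Every number below is invented purely for illustration; none of it comes from the actual literature:

```python
# A minimal sketch of why pooling heterogeneous studies is misleading.
# The effect sizes below are made up for illustration; they are NOT
# taken from any real study of sung vs. spoken recall.

# Hypothetical standardized mean differences (sung minus spoken recall)
# and their variances, one per imaginary study.
effects = [0.80, 0.10, -0.05, 0.65, 0.20]
variances = [0.04, 0.02, 0.05, 0.03, 0.02]

weights = [1 / v for v in variances]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled effect.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: share of total variability due to between-study heterogeneity
# rather than chance.
i_squared = max(0.0, (q - df) / q) * 100

print(f"Pooled effect: {pooled:.2f}")
print(f"Cochran's Q: {q:.1f} on {df} df")
print(f"I^2: {i_squared:.0f}% (roughly 75%+ is considered high)")
```

With designs this different, a high I² would tell us that most of the between-study variability reflects genuine differences in what was measured, not sampling error, which is exactly why pooling them is hazardous.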

Second, healthy controls have little to no impairment in these music and language processing areas. They may, therefore, see no improvement in their memory when lyrics, words, or instructions are sung to them.

In Alzheimer’s, however, we know that impairment varies. But as I mentioned earlier, language and motor impairments often seem to occur before impairments in music and sound processing. Now, thinking back to some of my cognitive neuroscience courses, I remember several plasticity studies: the three-eyed frog experiments by Martha Constantine-Paton (Constantine-Paton & Law, 1978), the interocular rivalry experiments of Tong and Engel (2001), and the rapid somatosensory reorganization observed after finger amputation by Weiss et al. (2000).

Research suggests that the auditory cortex follows the same principles of plasticity and reorganization as the visual and motor systems (Irvine, 2018). So it is plausible that music capabilities take over the function of damaged language areas, improving memory and processing when lyrics are sung. This improvement would be particularly pronounced with repeated exposure and learning.

My conclusion? These biological processes confound the repetitive word and lyric tasks because the two systems are intricately interlinked. Teasing them apart, particularly when cognitive regions may be damaged in unknown ways, seems exceptionally difficult. We may find a statistically significant improvement, but we would have no idea why without knowing which structures and systems the tasks engage, when, and how they have changed in each individual brain. That’s a mighty big, complicated task.
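To illustrate the worry, here’s a toy simulation with entirely hypothetical numbers: each simulated participant’s sung-versus-spoken recall benefit is driven by an unmeasured variable (how intact their music-processing network is). The group-level effect looks like a clean “singing works” result, but splitting by the hidden variable shows the effect rides on preserved music processing:

```python
# A toy simulation (entirely hypothetical numbers) of the confound
# described above: a "sung" recall benefit that is really driven by an
# unmeasured variable, i.e. how intact each participant's
# music-processing network is.
import random

random.seed(42)
N = 40

participants = []
for _ in range(N):
    music_integrity = random.uniform(0.0, 1.0)  # unmeasured in the study
    spoken_recall = random.gauss(10, 2)         # baseline recall (words)
    # The sung benefit scales with the hidden variable, plus noise.
    sung_recall = spoken_recall + 4 * music_integrity + random.gauss(0, 1)
    participants.append((music_integrity, spoken_recall, sung_recall))

mean_benefit = sum(s - sp for _, sp, s in participants) / N
print(f"Mean sung-vs-spoken benefit: {mean_benefit:.2f} words")

# Split by the hidden variable: the apparent "effect of singing" is
# really an effect of preserved music processing.
low = [s - sp for m, sp, s in participants if m < 0.5]
high = [s - sp for m, sp, s in participants if m >= 0.5]
print(f"Benefit, low music integrity:  {sum(low) / len(low):.2f}")
print(f"Benefit, high music integrity: {sum(high) / len(high):.2f}")
```

In other words, a study could honestly report a significant sung-condition benefit while the real driver, individual differences in spared music networks, goes completely unmeasured.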

These issues seem like important things to note in my research. And perhaps going back through the studies on lyrical and word tasks would be a worthwhile endeavour, to see whether these errors can be inferred from each one’s results.

References

Abrams, D. A., Nicol, T., Zecker, S., & Kraus, N. (2008). Right-hemisphere auditory cortex is dominant for coding syllable patterns in speech. Journal of Neuroscience, 28, 3958–3965. https://doi.org/10.1523/JNEUROSCI.0187-08.2008

Boemio, A., Fromm, S., Braun, A., & Poeppel, D. (2005). Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nature Neuroscience, 8, 389–395. https://doi.org/10.1038/nn1409

Constantine-Paton, M., & Law, M. I. (1978). Eye-Specific Termination Bands in Tecta of Three-Eyed Frogs. Science, 202(4368), 639–641. https://doi.org/10.1126/science.309179

Giroud, J., Trébuchon, A., Schön, D., Marquis, P., Liegeois-Chauvel, C., Poeppel, D., et al. (2020). Asymmetric sampling in human auditory cortex reveals spectral processing hierarchy. PLoS Biology, 18, e3000207. https://doi.org/10.1371/journal.pbio.3000207

Irvine, D. R. F. (2018). Plasticity in the auditory system. Hearing Research, 362, 61–73. https://doi.org/10.1016/j.heares.2017.10.011

Jamison, H. L., Watkins, K. E., Bishop, D. V. M., & Matthews, P. M. (2006). Hemispheric specialization for processing auditory nonspeech stimuli. Cerebral Cortex, 16, 1266–1275. https://doi.org/10.1093/cercor/bhj068

McCarty, M. J., Murphy, E., Scherschligt, X., Woolnough, O., Morse, C. W., Snyder, K., Mahon, B. Z., & Tandon, N. (2023). Intraoperative cortical localization of music and language reveals signatures of structural complexity in posterior temporal cortex. iScience, 26(7), 107223. https://doi.org/10.1016/j.isci.2023.107223

Morillon, B., Liégeois-Chauvel, C., Arnal, L. H., Bénar, C. G., & Giraud, A.-L. (2012). Asymmetric function of theta and gamma activity in syllable processing: An intra-cortical study. Frontiers in Psychology, 3, 248. https://doi.org/10.3389/fpsyg.2012.00248

Neuroscience News. (2023, July 8). Brain’s Melody and Prose: How Music and Language Affect Different Regions. https://neurosciencenews.com/music-language-brain-23597/

Obleser, J., Eisner, F., & Kotz, S. A. (2008). Bilateral speech comprehension reflects differential sensitivity to spectral and temporal features. Journal of Neuroscience, 28, 8116–8123. https://doi.org/10.1523/JNEUROSCI.1290-08.2008

Tong, F., & Engel, S. A. (2001). Interocular rivalry revealed in the human cortical blind-spot representation. Nature, 411(6834), Article 6834. https://doi.org/10.1038/35075583

Trébuchon, A., Alario, F.-X., & Liégeois-Chauvel, C. (2021). Functional topography of auditory areas derived from the combination of electrophysiological recordings and cortical electrical stimulation. Frontiers in Human Neuroscience, 15, 702773. https://doi.org/10.3389/fnhum.2021.702773

Weiss, T., Miltner, W. H., Huonker, R., Friedel, R., Schmidt, I., & Taub, E. (2000). Rapid functional plasticity of the somatosensory cortex after finger amputation. Experimental Brain Research, 134(2), 199–203. https://doi.org/10.1007/s002210000456

Featured Image by Ri_Ya/Pixabay
