Music Cognition Lab

                                            MCL: Research Overview


The Minor Third Communicates Sadness in Speech and Music

Project Leader: Meagan Curtis

Musical intervals are associated with emotions, but current theories have not adequately explained the origin of these associations. For instance, the minor third is associated with sadness in Western cultures, yet why this link exists remains a matter of debate. This line of research explores the intriguing possibility that the associations between intervals and emotions are also present in the prosody of human vocal expressions. Bi-syllabic speech samples conveying happiness, anger, pleasantness, and sadness were recorded by nine actresses, and the samples were rated for perceived emotion. Acoustic analyses of the prosodic contours revealed that, in the sad speech samples, the relationship between the two salient pitches tended to approximate a minor third, consistent with the emotional associations in the domain of music. Other patterns were observed for the other emotions. Regression analysis of the emotional ratings revealed that the minor third was the most reliable acoustic cue for identifying sadness. The results suggest that there are correspondences across domains in the use of pitch contours to encode and decode emotion, supporting the theory that human vocal expressions and music share an acoustic code for communicating emotion.
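Classifying the interval between two salient pitches rests on the standard equal-temperament relationship between frequency ratio and semitone distance (12 * log2(f2/f1)); a minor third spans 3 semitones. A minimal sketch of that computation follows — the function name and the example frequencies are illustrative, not values taken from the study:

```python
import math

def interval_in_semitones(f1, f2):
    """Distance between two pitches in equal-tempered semitones.

    A minor third corresponds to 3 semitones, i.e. a frequency
    ratio of 2 ** (3 / 12), roughly 1.189.
    """
    return 12 * math.log2(f2 / f1)

# Hypothetical example: 220 Hz (A3) up to ~261.63 Hz (C4)
# is an ascending minor third.
print(round(interval_in_semitones(220.0, 261.63), 2))  # ≈ 3.0
```

In practice the two salient pitches of an utterance rarely form an exact interval, so an analysis like the one described would compare the measured semitone distance against the nearest interval category.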


Expecting the Unexpected: Cross-Modal Priming of Low-Probability Stimuli

Project Leader: Meagan Curtis

Performance on a visual odd-ball discrimination task was influenced by whether the visual stimulus was preceded by a high-probability or low-probability auditory stimulus. Relative to hearing a low-probability sine wave, hearing a high-probability sine wave speeded correct responses to the high-probability visual stimulus and slowed correct responses to the low-probability visual stimulus. These results demonstrate that encountering a high-probability stimulus in one domain can lead one to expect high-probability stimuli in other domains and, conversely, that encountering a low-probability stimulus in one domain can lead one to expect low-probability stimuli in other domains. It is possible that the brain circuits implicated in monitoring probability use the level of predictability in one domain to calibrate the expected level of predictability generally.


Culture-Specific Tonal Knowledge Drives Judgments

Project Leader: Meagan Curtis

Knowledge of a tonal system can drive a listener's expectations for future musical events. The rules of a tonal system are learned implicitly and effortlessly by listeners who have had adequate exposure to the tonal system. Most of the research that has examined tonal expectations has been conducted on Western subjects listening to Western tonal music. We examined whether the tonal rules of Western music are used to generate musical expectations when listening to music of an unfamiliar tonal system. We compared the musical expectations of Western subjects when they were listening to Western tonal melodies to their expectations when they were listening to Indian tonal melodies.


Tonal Violations Interact with Lexical Processing: Evidence from Cross-modal Priming

Project Leader: Meagan Curtis

The goal of our investigations was to determine whether musical expectancy violations can be informative across modalities, alerting the perceptual systems to anticipate other low-probability stimuli. Expectancy can be easily modulated using musical stimuli: once a tonal context has been established, listeners expect subsequent chords and notes to adhere to the established tonality, and a chord that departs from that tonality produces an expectancy violation. Participants heard chord progressions in which the target chord either adhered to or violated the established tonality. A visual discrimination task was presented simultaneously with the target chord. The visual stimuli consisted of familiar and novel items: words and nonwords.


Music Influences the Processing of Syntax

Project Leader: Meagan Curtis

Language and music are both rule-governed systems for combining discrete units (words and pitches) into longer, hierarchically structured sequences. In language, these rules are known as syntax. Music can also be described as having syntax, which is implicitly learned and constrains a listener's expectations for future musical events. The processing of syntactic incongruities in music has been shown to evoke ERP responses that are statistically indistinguishable from those evoked by linguistic syntactic incongruities. Broca's area and its right-hemisphere homologue have been implicated in the processing of both linguistic syntax and musical syntax. Aniruddh Patel has offered a "shared syntactic integration resource hypothesis," proposing that these frontal areas may support the syntactic computations common to language and music. If syntactic computations in both domains rely on the same cortical areas, the simultaneous processing of language and music may produce interactions between the domain-specific syntactic computations. Garden path sentences offer an opportunity to test the influence of musical parsing on linguistic parsing.

490 Boston Ave, Medford, MA 02155