Building Meaning From Language
Tufts University Initiative on Emerging Trends in Psychology
June 14 - June 19, 2007
Tufts University, Medford, MA
Abstracts
What are words?
Daniel C. Dennett
Curiously enough, philosophers who devote their careers to 'linguistic
philosophy' and 'analytic metaphysics' often take words for granted, as if
they were bits of the world's furniture just as unproblematic as tables and
chairs, raindrops and sunrises. Yet when asked whether words are "in their
ontology", some philosophers find themselves swallowing hard and saying
that, strictly speaking, there are no such things as words! If words exist
(and I am sure they do!), what, exactly, are they? And how did there come to
be so many of them?
Fine Tuning for Meaning: Behavioral and Neural Imaging Experiments
Morton Ann Gernsbacher
Over fifteen years ago, we began asking research participants to read a
sentence and then judge quickly whether a test word was related to the
overall meaning of the sentence that they had just read. Some test words
were related to the final word of the sentence but unrelated to the
sentence's overall meaning, for example, the test word "ace" and the
sentence, "He dug with the spade." When the test words were presented
immediately after participants read the sentences, participants were slower
to correctly reject those test words as not being related to the sentence
than they were to correctly reject test words that were completely unrelated
to the sentences. Thus, participants experienced interference. However, when
the test words were presented after a brief delay, this interference was
attenuated. We interpreted this pattern of immediate interference followed
by attenuation as manifesting the action of a cognitive mechanism of
suppression. We have explored the basis of this suppression mechanism
behaviorally with numerous participant groups (e.g., less-skilled readers,
elderly readers, readers with small working memory spans), and we have used
event-related functional magnetic resonance imaging to identify its neural
basis. These data will be presented to support the argument that fine tuning
for meaning requires attenuating inappropriate information (i.e.,
suppression) as well as activating appropriate information.
Premotor cortex, action control, and language
Arthur Glenberg
To control action effectively, the brain has evolved to solve a number of
thorny problems: learning complex action sequences with hierarchical
structure, timing movements exquisitely (e.g., in tennis, piano playing, and
walking) when sensory feedback may be too slow to help, and determining just
what information in the sensory array might be useful. Interestingly,
similar problems arise in learning and using language. Might the brain use
mechanisms of action control to learn, produce, and comprehend language?
Recent findings of mirror neurons tuned for action and speech recognition in
premotor cortex (Broca's area in particular) suggest a positive answer. In
this talk, I will illustrate how a formal theory of action control,
Wolpert's HMOSAIC model, can be modified to account for basic facts in
language. Then, I will discuss the results of several projects testing
theoretically derived claims regarding language acquisition, how
manipulating the motor system affects language comprehension, and how
manipulating language comprehension affects the motor system.
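To make the analogy concrete, here is a minimal Python sketch of the core
MOSAIC idea behind Wolpert's model: paired forward (predictor) and inverse
(controller) modules compete to explain sensory feedback, and soft
"responsibility" signals gate how much each controller contributes to the
next command. The linear dynamics, gains, and variance term below are
illustrative toy values, not the model presented in the talk.

    import numpy as np

    class Module:
        """One paired predictor (forward model) and controller (inverse model)."""

        def __init__(self, dynamics_gain, control_gain):
            self.dynamics_gain = dynamics_gain
            self.control_gain = control_gain

        def predict(self, state, command):
            # Forward model: predicted next state under this module's dynamics.
            return self.dynamics_gain * (state + command)

        def control(self, state, target):
            # Inverse model: command intended to drive the state toward the target.
            return self.control_gain * (target - state)

    def mosaic_step(modules, responsibilities, state, command, observed_next, target):
        # 1. Each forward model predicts the consequence of the last command.
        sq_errors = np.array([(m.predict(state, command) - observed_next) ** 2
                              for m in modules])
        # 2. Responsibilities: prior * likelihood of the observation, normalized.
        #    Modules that predict well take charge (soft competition).
        weights = responsibilities * np.exp(-sq_errors / 0.1)
        responsibilities = weights / weights.sum()
        # 3. Next command: responsibility-weighted blend of the controllers.
        next_command = sum(r * m.control(observed_next, target)
                           for r, m in zip(responsibilities, modules))
        return responsibilities, next_command

    # Two hypothetical motor contexts (say, moving a heavy vs. a light object):
    modules = [Module(dynamics_gain=0.5, control_gain=0.8),
               Module(dynamics_gain=1.5, control_gain=0.3)]
    resp = np.array([0.5, 0.5])
    resp, u = mosaic_step(modules, resp, state=0.0, command=0.8,
                          observed_next=0.4, target=1.0)
    print(resp, u)  # the "heavy object" module wins responsibility

On the analogy explored in the talk, the same predict-compare-gate machinery
that selects among motor contexts could, in principle, select among competing
linguistic interpretations.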
Building Linguistic Meaning
Ray Jackendoff
What does a theory of sentence and discourse meaning have to account for?
- Word and construction meanings are stored in
long-term memory, in association with phonological and syntactic
information. Word and construction meanings can contain variables that
stipulate combinatorial potential.
- Sentence and discourse meanings are built up
online in working memory, in part by instantiating variables in word
meanings (see the sketch following this list).
- Because of the structured nature of semantic
combinatoriality, semantic working memory cannot consist simply of the
parts of long-term memory that are activated.
- Word, sentence, and discourse meanings involve the
interaction of at least two kinds of combinatorial structures: a
quasi-algebraic Conceptual Structure, which encodes categorial
and function-argument information, and a quasi-geometric/topological
Spatial Structure, which encodes details of shape and spatial
configuration.
- Conceptual Structure itself is organized into a
number of discrete but interacting tiers, including propositional
(function-argument) structure, referential structure, and information
(topic/focus) structure.
- Both Conceptual Structure and Spatial Structure
interact not only with language but also with perception and action.
This is what allows us to talk about what we see and act in response to
instructions.
- Rules of inference (including heuristics) are
defined over Conceptual Structure and Spatial Structure.
- Word meanings have internal combinatorial
structure that enables them to trigger patterns of inference.
- Word meanings are not just sets of necessary and
sufficient conditions. They also involve various sorts of violable
constraints that show up in nonstereotypical situations.
- Building sentence and discourse meaning from word
meanings involves more than just combining word meanings. Many aspects
of meaning (often called pragmatic) are not represented by any words in
the sentence but arise out of the necessity to combine word meanings in
semantically well-formed and/or situationally appropriate fashion.
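The following schematic Python sketch (my notation, not Jackendoff's)
illustrates the first two points above: a word meaning stored in long-term
memory with typed variables that stipulate its combinatorial potential, and
a piece of sentence meaning built online by instantiating those variables.

    from dataclasses import dataclass

    @dataclass
    class Variable:
        name: str
        semantic_category: str   # restriction on what may instantiate it

    @dataclass
    class WordMeaning:
        predicate: str
        variables: list          # combinatorial potential, stored long-term

    # Long-term memory entry for "drink": DRINK(x: ANIMATE, y: LIQUID)
    DRINK = WordMeaning("DRINK", [Variable("x", "ANIMATE"),
                                  Variable("y", "LIQUID")])

    def instantiate(word_meaning, fillers):
        """Build a piece of sentence meaning by binding fillers to variables,
        checking each filler against the variable's semantic category."""
        bindings = {}
        for var, (filler, category) in zip(word_meaning.variables, fillers):
            if category != var.semantic_category:
                raise ValueError(f"{filler} cannot instantiate {var.name}: "
                                 f"needs {var.semantic_category}, got {category}")
            bindings[var.name] = filler
        return (word_meaning.predicate, bindings)

    # "The child drank the juice" -> ('DRINK', {'x': 'child', 'y': 'juice'})
    print(instantiate(DRINK, [("child", "ANIMATE"), ("juice", "LIQUID")]))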
Two levels of verb meaning: Neuroimaging and neuropsychological evidence
David Kemmerer
For over 20 years, research in linguistics has supported the existence of
two levels of verb meaning. The first level consists of an austere
representation, sometimes called the "event structure template," that (a) is
common to all the verbs in a given class (e.g., "manner of motion" verbs),
(b) is composed primarily of simple predicates and variables for arguments,
and (c) strongly constrains the range of morphological and syntactic
constructions that are possible. The second level reflects the uniqueness of
every verb and has been dubbed the "constant" because it captures
idiosyncratic semantic features that (a) distinguish each verb in a given
class from all the others (e.g., stroll vs. strut vs. stagger), (b) are
often modality-specific in format, and (c) are grammatically irrelevant. I
present evidence from two neuroscientific approaches -- specifically,
functional neuroimaging studies with normal subjects, and neuropsychological
studies with brain-damaged subjects -- that begin to reveal how these two
levels of verb meaning are implemented in the brain. This research suggests
that "event structure templates" depend on cortical structures in the
classic left perisylvian language system, whereas "constants" depend on
cortical structures in anatomically distributed sensorimotor systems,
including regions involved in vision and action.
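As a rough illustration of the two levels, consider the following Python
sketch; the template notation and the feature values for the constants are
invented for exposition and are not Kemmerer's analysis.

    # Level 1: the "event structure template" shared by every
    # manner-of-motion verb; roughly [ x ACT<MANNER> ].
    MANNER_OF_MOTION_TEMPLATE = {
        "predicate": "ACT",      # simple predicate shared by the class
        "arguments": ["x"],      # variable for the single argument (the mover)
        "manner": None,          # slot filled by each verb's constant
    }

    # Level 2: the "constant" distinguishes each verb in the class;
    # these feature descriptions are illustrative only.
    CONSTANTS = {
        "stroll":  "leisurely, relaxed gait",
        "strut":   "stiff, self-important gait",
        "stagger": "unsteady, lurching gait",
    }

    def verb_meaning(verb):
        # A verb's meaning = the class-wide template plus its own constant.
        meaning = dict(MANNER_OF_MOTION_TEMPLATE)
        meaning["manner"] = CONSTANTS[verb]
        return meaning

    # All three verbs share the template (hence the same grammatical
    # behavior); only the constant differs.
    for v in ("stroll", "strut", "stagger"):
        print(v, "->", verb_meaning(v))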
Statistical Semantics
Walter Kintsch
Statistical semantics attempts to infer semantic knowledge from the
analysis of linguistic corpora. For example, Latent Semantic Analysis (Landauer
& Dumais, 1997; Landauer et al., 2007) constructs a high-dimensional map of
meaning that allows the ready computation of similarities between word
meanings as well as text meanings. I briefly describe LSA as well as several
related methods and then focus on two limitations of such systems.
First, semantic representations are typically generated from data that
consist only of word co-occurrences in documents, neglecting information
about word order, syntax, and discourse structure. I describe ways to
include word order and syntactic information in the construction of
corpus-based semantic representations; specifically, dependency grammar will
be used to guide the construction of semantic representations and
comparisons. Second, statistical semantics is based solely upon verbal information,
whereas human semantics integrates perception and action with the symbolic
aspects of meaning. A map of meaning that considers only its verbal basis
can nevertheless be useful, in that language mirrors real-world phenomena.
Furthermore, it is argued that meaning, while clearly based on perception
and action, transcends this basis and includes a symbolic level, which we
attempt to model by statistical semantics.
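For readers unfamiliar with LSA, the following minimal Python sketch shows
the standard pipeline: a term-by-document count matrix reduced by truncated
singular value decomposition, with similarity computed as the cosine between
reduced vectors. The toy corpus and the number of retained dimensions are
illustrative, and the log-entropy weighting used in full LSA is omitted.

    import numpy as np

    corpus = [
        "the doctor treated the patient",
        "the nurse helped the doctor",
        "the pianist played the piano",
        "the piano concert pleased the audience",
    ]
    vocab = sorted({w for doc in corpus for w in doc.split()})
    # Term-by-document co-occurrence counts.
    counts = np.array([[doc.split().count(w) for doc in corpus] for w in vocab])

    # Truncated SVD: keep k dimensions of the term-by-document matrix.
    k = 2
    U, s, Vt = np.linalg.svd(counts.astype(float), full_matrices=False)
    word_vectors = U[:, :k] * s[:k]      # one k-dimensional vector per word

    def similarity(w1, w2):
        """Cosine similarity between two word vectors in the LSA space."""
        a = word_vectors[vocab.index(w1)]
        b = word_vectors[vocab.index(w2)]
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    print(similarity("doctor", "nurse"))  # related words land near each other
    print(similarity("doctor", "piano"))  # unrelated words lie farther apart

Text meanings can then be compared by summing the vectors of their words,
which is how LSA scales from word similarity to text similarity.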
The Neural Basis of Comprehension: Temporo-Spatial Evidence from
Event-Related Potentials and Functional Magnetic Resonance Imaging
Gina Kuperberg, MD PhD
ERP and fMRI findings converge to suggest that, within simple, active
English sentences, semantic violations between verbs (denoting actions) and
their subjects (denoting the Agents carrying out these actions) evoke a
neural response that is more similar to that evoked by morphosyntactic
violations between verbs and their arguments than to that evoked by
violations arising only at the level of our real-world semantic knowledge.
On the basis of
these data, I will suggest that normal language comprehension proceeds along
at least two dissociable but highly interactive neural processing streams:
an associative semantic memory-based mechanism that is based mainly on
accessing the frequency of co-occurrence of words or events, as stored
within semantic memory, and a combinatorial mechanism in which structure is
assigned to a sentence not only on the basis of morphosyntactic rules, but
also on the basis of certain action-relevant (thematic) semantic
constraints.
Based on ERP data, I will suggest that the semantic memory-based analysis
operates as a first-pass mechanism, primarily between 300 and 500 msec, and
that a morphosyntactic and thematic-semantic combinatorial analysis around a
verb begins within this time window, at least partially in parallel with
semantic memory-based processing. Any conflicts between the representations
output by the semantic memory-based and combinatorial streams lead to
continued or second-pass combinatorial analysis, operating between 500 and
900 msec. This may serve as a double check to ensure that we effectively
make sense of incoming information.
Based on fMRI data, I will suggest that the semantic memory-based analysis
is reliant on activity within the left anterior inferior frontal cortex
that, together with temporal cortices, acts to retrieve information about
the likelihood of events in the real world. In contrast, both
morphosyntactic and thematic-semantic combinatorial analyses around a verb
appear to engage a common frontal/inferior parietal/basal ganglia network,
known to mediate the execution and comprehension of goal-directed action.
Finally, based on both ERP and fMRI studies examining visual actions
depicted within short, silent movie clips, I will suggest that these two
processing streams may generalize beyond the language system and may also be
engaged in relating people, objects, and actions during real-world event
comprehension. I will conclude by briefly considering the implications of
this model of language and real-world visual comprehension for understanding
the neurocognitive basis of neuropsychiatric disorders such as
schizophrenia.
Meaning and structure: Influences of real-world events on language
comprehension
Ken McRae
A significant proportion of everyday utterances concern real-world
events. Thus, people's knowledge of everyday events, including their common
participants, is an important component of sentence comprehension. Our
original research on this topic focused on verb-specific thematic role
conceptual knowledge as an important basis for expectancy generation in
language with respect to both upcoming fillers of thematic roles, and
upcoming structure. That is, as is common in the literature, we considered
structural and semantic expectancy generation in sentence processing as
being driven primarily by the verb in the aggregate. However, it has become
apparent that the empirical phenomena demand a richer, less verb-centered
approach in three ways. First, rather than being verb-specific, the
evidence demands an event-specific explanation. This is particularly
pertinent to situations in which verbs have multiple senses and thus can
refer to multiple classes of real-world events, which is the case with many
verbs, at least in English. Second, other sentential elements can influence
event-based expectations. We have focused primarily on various types of
thematic role fillers (i.e., agents, patients, instruments). Third,
extra-sentential context can bias language comprehenders to a range of event
spaces. I will present experimental results that provide evidence for this
richer event-based view of language comprehension.
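One hypothetical way to render the event-based point computationally:
expectations are conditioned on the event class cued jointly by the verb and
other sentential elements, not on the verb in the aggregate. The Python
sketch below uses invented plausibility values purely for illustration.

    # Hypothetical event knowledge: the same verb yields different patient
    # expectations depending on the agent, because verb + agent jointly pick
    # out different classes of real-world events. Values are invented.
    EVENT_KNOWLEDGE = {
        ("checked", "mechanic"): {"brakes": 0.80, "teeth": 0.05, "spelling": 0.15},
        ("checked", "dentist"):  {"brakes": 0.05, "teeth": 0.80, "spelling": 0.15},
        ("checked", "editor"):   {"brakes": 0.05, "teeth": 0.05, "spelling": 0.90},
    }

    def expected_patients(verb, agent):
        """Expectancies come from the event cued jointly by verb and agent,
        not from the verb alone."""
        return EVENT_KNOWLEDGE.get((verb, agent), {})

    print(expected_patients("checked", "mechanic"))  # brakes most expected
    print(expected_patients("checked", "dentist"))   # teeth most expected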
Polysemy and Coercion
James Pustejovsky
Recently, there has emerged a new appreciation of the complexity at play
in the interpretation of polysemy. Two classes of parameters have been
broadly identified as contributing to the interpretation of polysemous
expressions: more complex lexical representations, and a means of
incorporating local context compositionally. In this talk, I formalize this
distinction as that of inherent versus selectional polysemy, and demonstrate
that polysemy cannot be modeled adequately without enriching the
compositional mechanisms available to the language. In particular, lexically
driven operations of coercion and type selection provide for contextualized
interpretations of expressions, which would otherwise not exhibit polysemy.
I contrast this with the view that it is not possible to maintain a
distinction between semantic and pragmatic ambiguity. I will argue that a
strong distinction between pragmatic and semantic modes of interpretation
can be maintained, and is in fact desirable, if we wish to model the
complexity of contributing factors in compositionality in language.
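As a schematic illustration of lexically driven coercion in the spirit of
the Generative Lexicon, the Python sketch below reinterprets an
entity-denoting noun as an event via a stored quale when an aspectual verb
demands an event-typed complement; the entries and qualia values are
illustrative, not the formal treatment given in the talk.

    # Aspectual verbs like "begin" and "enjoy" select an EVENT-typed
    # complement.
    SELECTS = {"begin": "EVENT", "enjoy": "EVENT"}

    # Noun entries: a surface type plus qualia that license coercion.
    NOUNS = {
        "book":   {"type": "PHYS_OBJ",
                   "telic": "read(x, book)",       # what it is for
                   "agentive": "write(x, book)"},  # how it comes about
        "coffee": {"type": "PHYS_OBJ",
                   "telic": "drink(x, coffee)"},
    }

    def compose(verb, noun):
        """Compose verb + object; coerce the object to an event via its telic
        quale when the verb's type requirement is not met directly."""
        required = SELECTS[verb]
        entry = NOUNS[noun]
        if entry["type"] == required:
            return f"{verb}({noun})"
        if required == "EVENT" and "telic" in entry:
            # Coercion: reinterpret the object as the event in its qualia.
            return f"{verb}({entry['telic']})"
        raise TypeError(f"{verb} cannot compose with {noun}")

    print(compose("begin", "book"))    # begin(read(x, book))
    print(compose("enjoy", "coffee"))  # enjoy(drink(x, coffee))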
Visual World Studies of Language Processing
Michael Tanenhaus
In the Visual World Paradigm (VWP), participants' eye movements are
measured as they follow instructions to perform actions in a
circumscribed visual world. This approach allows investigators to
examine how language is interpreted in the context of perception and
context-specific goal-directed action, and how language, vision and
action interact. I'll review the logic of the VWP, including how it
combines the 'language-as-product' and 'language-as-action' traditions,
focusing on the effects of action-specific affordances, intentions and
interlocutors' joint goals on real-time syntactic processing,
reference-resolution and spoken word recognition. I'll then review
in-progress work with Kate Pirog and Dick Aslin that combines artificial
languages and the VWP with fMRI to examine activation of the
motion-sensitive area V5 during spoken word recognition.
Meaning, Argument Structure, and Parsing: Building Meaning from
Language Using Lexically Stored Syntactic Representations
Matthew Traxler
A long-running debate in psycholinguistics pits autonomous syntax against
lexically-driven structure-building processes. According to autonomous
syntax accounts (e.g., Chomsky, 1965; Frazier, 1979, 1987; Pinker, 1997),
syntactic structures are built on the basis of abstract, word-category
representations by a mechanism that operates independently of other levels
of representation. Lexicalist accounts (e.g., Boland & Boehm-Jernigan, 1998;
MacDonald et al., 1994; Trueswell et al., 1993; Vosse & Kempen, 2000)
suggest instead that elements of syntactic structure are tied to individual
entries in the mental lexicon. Determining how words in sentences relate to
one another (i.e., parsing the sentence) starts with accessing individual
word representations, activating argument-structures and syntactic frames
associated with those representations, and applying multiple sources of
constraint to choose one structure from among competing alternatives.
Syntactic priming experiments can be used to test these theories.
Syntactic priming occurs when a prime sentence affects processing of a
subsequent target sentence because the two sentences share elements of
syntactic structure. According to autonomous syntax accounts, priming should
occur whether two sentences have overlapping words or not, as long as the
two sentences have the same syntactic structure. Lexicalist accounts predict
that priming effects should be larger when specific words are repeated
across the prime and target sentences. A series of eye-tracking and ERP
experiments has established the following:
- Priming occurs in comprehension.
- Priming occurs because of facilitated syntactic processes (rather
than facilitated semantic processes).
- It depends on overlapping lexical material in sentences involving
argument relations.
- It occurs independent of lexical overlap in sentences involving
adjunct relations.
- It does not depend on readers predicting the upcoming structure.
The overall pattern of results in comprehension is most consistent with
the argument structure hypothesis (Boland & Boehm-Jernigan, 1998; Boland &
Blodgett, 2006) and lexically mediated parsing (Traxler & Tooley, 2007).
Comprehending with Language
Rolf Zwaan
Language comprehension has long been understood as the comprehension
of language--first as the recovery of the syntactic and semantic
structure of the linguistic input, and later as the construction of a
situation model based on the linguistic input and background knowledge. I
will argue that language comprehension is better understood as comprehension
with language. That is, language comprehension is a special form of
event and action comprehension. There are similarities between how we
understand an action that we observe (e.g., someone pouring himself a cup of
coffee) and an action we hear or read about (e.g., He poured himself a
cup of coffee). Specifically, there is overlap in the brain systems that
are involved (e.g., the area of premotor cortex that controls movement of
the right hand). In both cases, comprehension appears to involve a mental
simulation of the actions and events. However, comprehension with language
is special because the mental simulation is modulated not directly by the
observed actions and events, but indirectly, via language. Thus, the key to
developing a theory of language comprehension is to examine how language
modulates mental simulations of actions and events. I will discuss recent
empirical findings that speak to this issue.