Welcome to the Spatial Cognition Laboratory
Recent research has demonstrated that readers mentally simulate the perceptual and motoric elements of a discourse. Much of this work assumes that when readers comprehend an event they mentally simulate the described actions from the actor's perspective. That is, readers place themselves in a protagonist's shoes, simulating what he or she sees and does. The present work tests this assumption by presenting readers with simple event sentences using one of three subject pronouns: first-person (e.g., I am slicing the tomato), second-person (e.g., You are slicing the tomato), or third-person (e.g., He is slicing the tomato). Immediately after reading each description, participants perform a picture verification task: they judge whether a displayed picture matches the event described in the sentence. These pictures are presented from either an internal perspective (as pictured above) or an external perspective, and we measure response times for correct verifications. Preliminary results suggest that readers only truly 'embody' a text when they are directly addressed as the subject of an action sentence (i.e., You are slicing the tomato); following second-person sentences, readers verify event images faster from the internal than from the external perspective. With the pronouns 'I' and 'He', however, readers imagine the described actions from an onlooker's perspective; following first- and third-person sentences, readers verify event images faster from the external than from the internal perspective. This work extends recent research on embodied cognition, situated conceptualization, and immersed experience by demonstrating that taking an actor's perspective is the exception, rather than the rule, during language comprehension.
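The sentence-picture verification paradigm described above can be sketched as a minimal trial loop. This is an illustrative sketch only: the stimulus entries, field names, and the stand-in `respond` callback are hypothetical placeholders, not the lab's actual materials or software.

```python
import time

# Hypothetical stimulus set: each trial pairs a pronoun condition with a
# picture perspective; "match" marks whether the picture depicts the event.
TRIALS = [
    {"sentence": "You are slicing the tomato.", "pronoun": "second",
     "perspective": "internal", "match": True},
    {"sentence": "He is slicing the tomato.", "pronoun": "third",
     "perspective": "external", "match": True},
]

def run_trial(trial, respond):
    """Present the sentence, then the picture; time the yes/no response.

    In a real experiment the sentence and picture would appear on screen
    and 'respond' would be a participant's keypress; here it is a callback.
    """
    start = time.monotonic()
    answer = respond(trial)          # True = "picture matches sentence"
    rt = time.monotonic() - start    # response time in seconds
    correct = (answer == trial["match"])
    return {"pronoun": trial["pronoun"],
            "perspective": trial["perspective"],
            "rt": rt, "correct": correct}

# As in the study, only correct-response RTs enter the analysis.
results = [run_trial(t, lambda tr: tr["match"]) for t in TRIALS]
correct_rts = [r["rt"] for r in results if r["correct"]]
```

The design question is then whether mean RT in the internal-perspective cells is lower than in the external-perspective cells for second-person sentences, and the reverse for first- and third-person sentences.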
Mood and Arousal Influences on Global and Local Spatial Attention and Memory
This project investigates how mood and arousal induction influence attentional patterns while people study spatial information such as maps and spatial descriptions, and the resulting memory. We examine this issue by looking at reading times, eye movements over maps, and people's ability to solve problems involving local and global features of a learned environment. This project extends work by Justin Storbeck and Jerry Clore (U. Virginia), Yves Corson and Nadege Verrier (U. Nantes, France), and several other researchers interested in emotional influences on memory.
Recent work in our lab suggests that psychological arousal resulting from viewing strongly valenced images can produce a global focus on neutral spatial information (see above). The present U.S. Army-funded work manipulates caffeine consumption and measures its effects on psychological and physiological arousal and on global and local attention and memory. We are specifically investigating performance on spatial and non-spatial, verbal and non-verbal cognitive tasks. Hypotheses are withheld until participant recruitment is complete.
Temporal and Spatial Shifts in Discourse: Cost or Benefit?
Johnna Swartz, Jessica Emerson, Holly Taylor, Tad Brunye, and Tali Ditman
This honors thesis project investigates the online (e.g., reading time) and offline (e.g., memory) effects of encountering temporal continuities or shifts in narrative discourse. We are interested in whether any identified cognitive "costs" of encountering temporal shifts are driven by lexical items (i.e., using the term before versus after) or by the necessary conceptual reordering. Participants read a series of sentences describing event-laden scenarios with embedded time shifts (e.g., before she called her friend, she ate dinner), then perform serial recall and a sequence verification task. This project extends previous work by researchers such as Rolf Zwaan (Erasmus U., Rotterdam) by using common spatial/temporal shift terms (before and after) and testing memory for events and their ordering.
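The "conceptual reordering" at issue can be made concrete: with before, the order in which events are mentioned is the reverse of the order in which they occurred, whereas with after the two orders coincide. A minimal sketch of that mapping (the function name and clause strings are illustrative, not the study's stimuli):

```python
def chronological(connective, first_clause, second_clause):
    """Return the two described events in the order they occurred.

    "Before A, B": B happened first, so mention order must be reversed.
    "After A, B":  A happened first, so mention order matches event order.
    """
    if connective == "before":
        return [second_clause, first_clause]
    if connective == "after":
        return [first_clause, second_clause]
    raise ValueError(f"unknown connective: {connective}")

# "Before she called her friend, she ate dinner" -> eating came first.
order = chronological("before", "she called her friend", "she ate dinner")
```

If temporal-shift costs are purely lexical, before and after sentences should differ regardless of this reordering; if they are conceptual, the cost should track the mismatch between mention order and event order.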
Goals and Top-down Control of Information Gathering from Maps
One rarely studies a map without some kind of goal; typically, people study maps to find a route from one place to another, or to learn about the structure of an environment. This project investigates the influence of goals on eye movements while studying maps, and on the resulting spatial memory. Participants are given one of two goals: study the map to learn the routes through the environment and the landmarks passed along the way, or study the map to learn the overall layout of the environment and the relationships between landmarks in terms of canonical coordinates (N, S, E, W). We then monitor eye movements while participants study maps to assess the extent to which goals exert a top-down influence on spatial information gathering. Finally, we test the nature of their spatial memory. This project extends work by Holly Taylor (Tufts U.), Susan Naylor, and Marieke van Asselen (Utrecht U.).
Levels of Detail in Descriptions and Depictions of Geographic Space
This NSF-funded collaborative research investigates the perceptual and mnemonic impact of systematically varying the level of spatial and verbal detail in descriptions and depictions of campus environments. Participants learn about a college campus either by reading a description or by studying a map, with spatial details (landmark locations) and verbal details (labels) either available or unavailable. Their memory for the information is then tested by drawing sketch maps. Ultimately, the results of this work inform the design of handheld and in-vehicle GPS devices. This project is in collaboration with Mike Worboys of the University of Maine Spatial Information Science and Engineering Department. See the Publications page for a recent manuscript on this topic.
Multimedia Learning: Formats for Procedural Knowledge Acquisition
Tad Brunye, Holly Taylor, and David Rapp
This master's thesis project examined format influences on learning assembly procedures and the working memory mechanisms involved. Participants learned a total of 18 procedural assembly sequences in one of a variety of formats: pictures only, text only, or one of several multimedia types (e.g., redundant, non-redundant, interleaved). They then completed serial recall, format recall, and order verification tasks. This project extends previous work by researchers such as Richard E. Mayer (UCSB), Valerie Gyselinck (U. of Padua), Alan Baddeley (U. of York), and Jeffrey Zacks (Washington U.). See the Publications page for recent manuscripts on this topic.
Orientation Specificity in Memory for Layouts
Melissa Pergakis, Elyse Rosenberg, Holly Taylor, and Tad Brunye
This honors thesis project examines orientation specificity in people's memory for small-scale spatial layouts. Participants learn one of the two object arrays pictured on the left, then complete memory tasks testing their knowledge of allocentric (bird's-eye view) and egocentric (first-person view) information. This project extends previous work by researchers such as Tim McNamara (Vanderbilt) and Amy Shelton (Johns Hopkins) by manipulating the availability of global reference frames and testing for allocentric memory.
Spatial Updating in Nested Environments
This doctoral project investigates spatial updating in nested environments and the resulting mental representations. Participants learn one nested environment (a room or a campus) from either map study or navigation, and then update their location within the first-learned environment. They are then tested on their memory for the non-updated environment, comparing performance when switching between environments versus staying in one. This project extends previous work by researchers such as Ranxiao Wang (U. Illinois) and Weimin Mou (Chinese Academy of Sciences) by examining how perspective affects spatial updating.
Working Memory in Spatial Mental Model Formation and Retrieval
This doctoral project investigates the working memory mechanisms responsible for the formation and retrieval of mental models built from spatial descriptions. Participants learn about a town or convention center environment from either a survey (bird's-eye) or route (first-person) perspective, and are then tested on their ability to draw maps of, and form inferences about, the learned environments. The contribution of each working memory system is elucidated through a variety of selective interference tasks targeting particular systems during either encoding or retrieval. This project extends previous work by researchers such as Holly Taylor (Tufts U.), Barbara Tversky (Stanford U.), Valerie Gyselinck (U. Paris), and Alan Baddeley (U. of York). See the Publications page for recent manuscripts on this topic.
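The selective-interference logic above amounts to a factorial design: learning perspective crossed with which working memory system is loaded and with the phase at which the load is applied. A sketch of that condition grid; the specific interference labels ("verbal", "visuospatial", "none") are assumed stand-ins for whatever secondary tasks the study actually uses:

```python
from itertools import product

# Factors paraphrased from the project description; the interference
# task labels are assumptions, not the study's actual task names.
perspectives = ["survey", "route"]
interference = ["verbal", "visuospatial", "none"]
phases = ["encoding", "retrieval"]

conditions = [
    {"perspective": p, "interference": i, "phase": ph}
    for p, i, ph in product(perspectives, interference, phases)
]
# 2 perspectives x 3 interference types x 2 phases = 12 cells
```

Selective impairment in, say, the visuospatial-interference cells but not the verbal ones would implicate the visuospatial system in that phase of mental model use.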