"Colour, Bit by Bit"
How does colour perception develop in the infant or child? Despite a very long history of philosophical and scientific interest in the nature of colour perception, this question remains puzzling. Prima facie, there seem to be only three options: (a) infants come into the world with the ability to see colours; (b) infants come into the world seeing ‘in black and white’ and one day (one night?) suddenly acquire this ability; or (c) they acquire colour vision ‘bit by bit’, just as they do other complex perceptual concepts. The first suggestion runs counter to all evidence, historical and current. Even Darwin was puzzled by his children’s complete inability to learn colour terms until their fourth year of life and feared that they were congenitally colour blind. The second option, (b), seems even less likely: surely we would see some behavioural consequences of such an extraordinary developmental event. This leaves us with (c), the gradual acquisition of colour perception. This option is most in line with current research in developmental psychology and neuroscience, but it is no less puzzling for that. Does the child’s black-and-white world slowly grow more saturated day by day, or do the colours appear one category at a time, from blue to red? This talk presents one way to resolve the puzzle, an answer drawn from contemporary colour neurophysiology and psychophysics.
Please join us for a concert and reception to celebrate Ray Jackendoff's retirement, March 10th, 2017, 7pm, Granoff Music Center.
Krys Dolega, Center for Cognitive Studies Visiting Fellow, will give a lecture: "Predictive Processing and the Illusory Richness of Experience," March 17th, 3:30pm, Miner 225
Enoch Lambert, Center for Cognitive Studies Postdoc, will give a lecture: "Transformative-ish Phenomena," March 31st, 3:30pm, Miner 225
ABSTRACT: Transformative change is on the minds of many philosophers and psychologists, thanks to work by L. A. Paul. But what is it, exactly? I raise some challenges to Paul's categories for making sense of psychological transformation and consider implications for its scientific and philosophical study.
ABSTRACT: Traditional computing machines must provide total predictability or else halt unconditionally. With such 'hardware determinism' guaranteed, software is constructed with no internal error-checking or redundancy, producing impressive behavior quickly, but also systemic fragility, brittleness, and unsecurability. Biological 'hardware', by contrast, offers no such guarantee, leading to computational architectures and 'software' strategies that emphasize robustness and distributed agency rather than efficiency and centralized authority. We are developing the Movable Feast Machine, an indefinitely scalable computer architecture that abandons hardware determinism and accepts merely best-effort performance from hardware and software both. We demonstrate simple software designed for best-effort conditions, employing lifelike strategies such as growth, healing, opportunistic reproduction, and collaborative action. In addition to presenting routes towards vast growth in computational scale and robustness, postdeterministic digital design offers expanded computing concepts with potential applications in cognitive science, organizational and social action theory, and computational thinking.
BIO: Dave Ackley is an associate professor of Computer Science at the University of New Mexico, with degrees from Tufts (A'79) and Carnegie Mellon. Always connecting life and computation, over four decades he has contributed to areas ranging from neural networks and machine learning, to evolutionary algorithms and artificial life, to biological approaches to computer security and architecture.
Cognitive Science colloquium (CBS--Cognitive and Brain Science) lecture, “Morphological schemas: Theoretical and psycholinguistic issues”
Monday, November 7th, 2016, 3pm, Cohen Auditorium
Evidence from both linguistic theory and psycholinguistics argues that the lexicon contains many composite items, stored as such with their internal structure. Moreover, one of the tenets of the Parallel Architecture (Jackendoff 2002) is that there is no divide between lexicon and grammar: “rules of grammar” are stored in the lexicon in the form of schemas that contain variables. Hence the traditional overarching focus on how rules construct novel utterances must give way to a shared focus, in which the relationships among lexical items are equally if not more important. Schemas come in two types. A nonproductive schema captures regularities among its listed instances, but it resists extension to new instances. A productive schema also captures regularities among listed instances, but in addition can be used freely to create new utterances online; it is this function that corresponds most closely to traditional rules. An important question arises, however: Does the theory (or the brain) need nonproductive schemas? Or can the subregularities encoded in nonproductive schemas be captured by simpler associative principles, as advocated both by the connectionists and by Pinker? I will suggest reasons why nonproductive schemas might be helpful in acquisition, in organizing storage, and in lexical access.
Ray Jackendoff at MIT, March 3rd, 2016: Title: "Morphology in the Mental Lexicon." Place: MIT, 32D-461 (Stata Center). Time: Thursday 3/3, 12:30-1:50