I. Strategies of simplification
The field of Artificial Intelligence has produced so many new concepts--or at least vivid and more structured versions of old concepts--that it would be surprising if none of them turned out to be of value to students of animal behavior. Which will be most valuable? I will resist the temptation to engage in either prophecy or salesmanship; instead of attempting to answer the question: "How might Artificial Intelligence inform the study of animal behavior?" I will concentrate on the obverse: "How might the study of animal behavior inform research in Artificial Intelligence?"
I take it we all agree that in the end we want to be able to describe and explain the design and operation of animal nervous systems at many different levels, from the neurochemical to the psychological and even the phenomenological (where appropriate!), and we want to understand how and why these designs have evolved, and how and why they are modulated in individual organisms. AI research, like all other varieties of research on this huge topic, must make drastic oversimplifications in order to make even apparent progress. There are many strategies of simplification, of which these five, while ubiquitous in all areas of mind/brain research, are particularly popular in AI:
(1) Ignore both learning and development; attempt to model the "mature competence" first, postponing questions about how it could arise.
(2) Isolate a particular subcomponent or sub-sub-component, ignoring almost all problems about how it might be attached to the larger system.
(3) Limit the domain of operation of the modeled system or subsystem to a tiny corner of the real domain--try to solve a "toy problem", hoping that subsequent scaling-up will be a straightforward extrapolation.
(4) Bridge various gaps in one's model with frankly unrealistic or even deliberately "miraculous" stopgaps--"oracles", or what I have called "cognitive wheels" (Dennett, 1984). (In the neurosciences, one posits what I have called "wonder tissue" to bridge these gaps.)
(5) Avoid the complexities of real-time, real-world coordination by ignoring robotics and specializing in what I call "bed-ridden" systems: systems that address the sorts of problems that can be presented via a narrow "verbal" channel, and whose solutions can be similarly narrowly conveyed to the world. (Dennett, 1980)
(The whole paper is now available in Daniel Dennett, Brainchildren, Essays on Designing Minds, MIT Press and Penguin, 1998.)