FINAL DRAFT for Lisbon Conference on Cognitive Science, May 1998



Daniel C. Dennett

Center for Cognitive Studies

Tufts University



Things About Things





Perhaps we can all agree that in order for intelligent activity to be produced by embodied nervous systems, those nervous systems have to have things in them that are about other things in the following minimal sense: there is information about these other things not just present but usable by the nervous system in its modulation of behavior. (There is information about the climatic history of a tree in its growth rings--the information is present, but not usable by the tree.) The disagreements set in when we start trying to characterize what these things-about-things are--are they "just" competences or dispositions embodied somehow (e.g., in connectionist networks) in the brain, or are they more properly mental representations, such as sentences in a language of thought, images, icons, maps, or other data structures? And if they are "symbols", how are they "grounded"? What, more specifically, is the analysis of the aboutness that these things must have? Is it genuine intentionality or mere as if intentionality? These oft-debated questions are, I think, the wrong questions to be concentrating on at this time, even if, "in the end", they make sense and deserve answers. These questions have thrived in the distorting context provided by two ubiquitous idealizing assumptions that we should try setting aside: an assumption about how to capture content and an assumption about how to isolate the vehicles of content from the "outside" world.



A Thing about Redheads



The first is the assumption that any such aboutness can be (and perhaps must be) captured in terms of propositions, or intensions--sometimes called concepts. What would an alternative claim be? Consider an old example of mine:

Suppose, for instance, that Pat says that Mike "has a thing about redheads." What Pat means, roughly, is that Mike has a stereotype of a redhead which is rather derogatory and which influences Mike's expectations about and interactions with redheads. It's not just that he's prejudiced against redheads, but that he has a rather idiosyncratic and particular thing about redheads. And Pat might be right--more right than he knew! It could turn out that Mike does have a thing, a bit of cognitive machinery, that is about redheads in the sense that it systematically comes into play whenever the topic is redheads or a redhead, and that adjusts various parameters of the cognitive machinery, making flattering hypotheses about redheads less likely to be entertained, or confirmed, making relatively aggressive behavior vis-à-vis redheads closer to implementation than otherwise it would be, and so forth. Such a thing about redheads could be very complex in its operation or quite simple, and in either case its role could elude characterization in the format:



Mike believes that: (x)(x is a redhead . . . )



no matter how deviously we piled on the exclusion clauses, qualifiers, probability operations, and other explicit adjusters of content. The contribution of Mike's thing about redheads could be perfectly determinate and also undeniably contentful and yet no linguification of it could be more than a mnemonic label for its role. In such a case we could say, as there is often reason to do, that various beliefs are implicit in the system. ("Beyond Belief," in The Intentional Stance, p. 148)



But if we do insist on recasting our description of the content in terms of implicit beliefs, this actually masks the functional structure of the things that are doing the work, and hence invites us to ask the wrong questions about how they work. Suppose we could "capture the content" of such a component by perfecting the expression of some sentence-implicitly-endorsed (and whether or not this might be "possible in principle," it is typically not remotely feasible). Still, our imagined triumph would not get us one step closer to understanding how the component accomplished this. After all, our model for such an activity is the interpretation of data structures in computer programs, and the effect of such user-friendly interpretations ("this is how you tell the computer to treat what follows as a comment, not an instruction to be obeyed") is that they direct the user/interpreter's attention away from the grubby details of performance by providing a somewhat distorted (and hyped up) sense of what the computer "understands". Computer programmers know enough not to devote labor to rendering the intentional interpretations of their products "precise" because they appreciate that these are mnemonic labels, not specifications of content that can be used the way a chemist uses formulae to describe molecules. By missing this trick, philosophers have created fantasy worlds of propositional activities marshaled to accomplish reference, recognition, expectation-generation, and so forth. What is somewhat odd is that these same philosophers have also largely ignored the areas of Artificial Intelligence that actually do take such content specifications seriously: the GOFAI worlds of expert systems, inference engines, and the techniques of resolution theorem-proving and the like. Presumably they can see at a glance that whatever these researchers are doing, their products are not remotely likely to serve as realistic models of cognitive processes in living minds.
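
The contrast can be made concrete with a toy sketch in Python (offered purely as an illustration, not as a serious cognitive model; the class, parameter names, and numbers below are arbitrary placeholders). Its only point is that the second component does its determinate, contentful work without any sentence of the form "Mike believes that (x)(x is a redhead . . . )" being stored or consulted anywhere.

    # Toy contrast, not a cognitive model: an explicitly stored "belief" versus
    # a thing-about-redheads that merely nudges processing parameters.

    # Propositional style: the content is captured in a quotable sentence,
    # which functions here as little more than a mnemonic label.
    explicit_beliefs = ["Mike believes that all redheads are quick-tempered"]

    class ThingAboutRedheads:
        """A bit of machinery that comes into play whenever the topic is
        redheads, adjusting parameters rather than asserting anything."""
        def adjust(self, params, topic):
            if "redhead" in topic:
                # flattering hypotheses become less likely to be entertained...
                params["hypothesis_bias"] -= 0.3
                # ...and aggressive behavior moves closer to implementation
                params["aggression_threshold"] -= 0.2
            return params

    params = {"hypothesis_bias": 0.0, "aggression_threshold": 1.0}
    params = ThingAboutRedheads().adjust(params, "meeting a redhead")
    print(params)
    # Nothing sentence-shaped is stored or consulted in the second case, yet its
    # contribution to processing is perfectly determinate and contentful.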



A thing-about-redheads is not an axiomatized redhead-theory grafted into a large data base. We do not yet know how much can be done by a host of things-about-things of this ilk because we have not yet studied them directly, except in very simple models--such as the insectoid subsumption architectures of Rodney Brooks and his colleagues. One of the chief theoretical interests of Brooks' Cog project is that it is pushing these profoundly non-propositional models of contentful structures into territory that is recognizable as human psychology. Let's see how they work, how they interact, and how much work they can do before we take on the task of linguifying their competences as a set of propositions-believed.





Transducers, Effectors, and Media



The second ubiquitous assumption is that we can think of a nervous system as an information network tied to the realities of the body at various restricted places: transducer or input nodes and effector or output nodes. In a computer, there is a neat boundary between the "outside" world and the information channels. A computer can have internal transducers too, such as a temperature transducer that informs it when it is getting too hot, or a transducer that warns it of irregularities in its power supply, but these count as input devices since they extract information from the (internal) environment and put it into the common medium of information-processing. It would be theoretically tidy if we could identify the same segregation of information channels from "outside" events in a body with a nervous system, so that all interactions happened at identifiable transducers and effectors. The division of labor this permits is often very illuminating. In modern machines it is often possible to isolate the control system from the system that is controlled, so that control systems can be readily interchanged with no loss of function. The familiar remote controllers of electronic appliances are obvious examples, and so are electronic ignition systems (replacing the old mechanical linkages) and other computer-chip-based devices in automobiles. And up to a point, the same freedom from particular media is a feature of animal nervous systems, whose parts can be quite clearly segregated into the peripheral transducers and effectors, and the intervening transmission pathways, which are all in the common medium of impulse trains in the axons of neurons.



At millions of points, the control system has to interface with the bodily parts being controlled, as well as with the environmental events that must be detected for control to be well-informed. In order to detect light, you need something photosensitive, something that will respond swiftly and reliably to photons, amplifying their sub-atomic arrival into larger-scale events that can trigger still further events. In order to identify and disable an antigen, for instance, you need an antibody that has the right chemical composition. Nothing else will do the job. It would be theoretically neat if we could segregate these points of crucial contact with the physics and chemistry of bodies, thereby leaving the rest of the control system, the "information-processing proper," to be embodied in whatever medium you like. After all, the power of information theory (and automata theory) is that they are entirely neutral about the media in which the information is carried, processed, stored. You can make computer signals out of anything--electrons or photons or slips of paper being passed among thousands of people in ballrooms. The very same algorithm or program can be executed in these vastly different media, and achieve the very same effects, if hooked up at the edges to the right equipment.



As I say, it would be theoretically elegant if we could carry out (even if only in our imagination) a complete segregation. In theory, every information-processing system is tied at both ends, you might say, to transducers and effectors whose physical composition is forced by the jobs that have to be done by them, but in between, everything is accomplished by medium-neutral processes. In theory, we could declare that what a mind is is just the control system of a body, and if we then declared the transducers and effectors to be just outside the mind proper--to be part of the body, instead--we could crisply declare that a mind can in principle be made out of anything, anything at all that had the requisite speed and reliability of information-handling.



This important theoretical idea sometimes leads to serious confusions, however. The most seductive confusion is what I call the myth of double transduction: first the nervous system transduces light, sound, temperature, and so forth into neural signals (trains of impulses in nerve fibers) and second, in some special central place, it transduces these trains of impulses into some other medium, the medium of consciousness! This is, in effect, what Descartes thought, and he declared the pineal gland, right in the center of the brain, to be the locus of that second transduction. While nobody today takes Descartes' model of the second transduction seriously, the idea that such a second transduction must somewhere occur (however distributed in the brain's inscrutable corridors) is still a powerfully attractive, and powerfully distorting, subliminal idea. After all (one is tempted to argue) the neuronal impulse trains in the visual pathways for seeing something green, or red, are practically indistinguishable from the neuronal impulse trains in the auditory pathways for hearing the sound of a trumpet, or a voice. These are mere transmission events, it seems, that need to be "decoded" into their respective visual and auditory events, in much the way a television set transduces some of the electromagnetic radiation it receives into sounds and some into pictures. How could it not be the case that these silent, colorless events are transduced into the bright, noisy world of conscious phenomenology? This rhetorical question invites us to endorse the myth of double transduction in one form or another, but we must decline the invitation. As is so often the case, the secret to breaking the spell of an ancient puzzle is to take a rhetorical question, like this one, and decide to answer it. How could it not be the case? That is what we must see.



What is the literal truth in the case of the control systems for ships, automobiles, oil refineries and other complex human artifacts doesn't stand up so well when we try to apply it to animals, not because minds, unlike other control systems, have to be made of particular materials in order to generate that special aura or buzz or whatever, but because minds have to interface with historically pre-existing control systems. Minds evolved as new, faster control systems in creatures that were already lavishly equipped with highly distributed control systems (such as their hormonal systems), so their minds had to be built on top of, and in deep collaboration with, these earlier systems.(1)



This distribution of responsibility throughout the body, this interpenetration of old and new media, makes the imagined segregation more misleading than useful. But still one can appreciate its allure. It has been tempting to argue that the observed dependencies on particular chemicals, and particular physical structures, are just historical accidents, part of an evolutionary legacy that might have been otherwise. True cognitive science (it has been claimed) ought to ignore these historical particularities and analyze the fundamental logical structure of the information-processing operations executed, independent of the hardware.



The Walking Encyclopedia



This chain of reasoning led to the creation of a curious intellectual artifact, or family of artifacts, that I call The Walking Encyclopedia. In America, almost every schoolyard has one student picked out by his classmates as the Walking Encyclopedia--the scholarly little fellow who knows it all, who answers all the teacher's questions, who can be counted on to know the capital cities of all the countries of the world, the periodic table of chemical elements, the dates of all the Kings of France, and the scores of all the World Cup matches played during the last decade. His head is packed full of facts, which he can call up at a moment's notice to amaze or annoy his companions. Although admired by some, the Walking Encyclopedia is sometimes seen to be curiously misusing the gifts he was born with. I want to take this bit of folkloric wisdom and put it to a slightly different use: to poke fun at a vision of how a mind works.

According to this vision, a person, a living human body, is composed of a collection of transducers and effectors intervening between a mind and the world. A mind, then, is the control system of a vessel called a body; the mind is material--this is not dualism, in spite of what some of its ideological foes have declared--but its material details may be safely ignored, except at the interfaces--the overcoat of transducers and effectors. Here is a picture of the Walking Encyclopedia.



[figure 1 about here]



In this picture--there are many variations--we see that just inboard of the transducers are the perceptual analysis boxes that accept their input, and yield their output to what Jerry Fodor has called the "central arena of belief-fixation" (The Modularity of Mind, 1983). Just inboard of the effectors are the action-directing systems, which get their input from the planning department(s), interacting with the encyclopedia proper, the storehouse of world knowledge, via the central arena of belief-fixation. This crucial part of the system, which we might call the thinker, or perhaps the cognition chamber, updates, tends, searches, and--in general--exploits and manages the encyclopedia. Logic is the module that governs the thinker's activities, and Noam Chomsky's LAD, the Language Acquisition Device, with its Lexicon by its side, serves as a special-purpose, somewhat insulated module for language entry and exit.



This is the generic vision of traditional cognitive science. For several decades, controversy has raged about the right way to draw the connecting boxes that compose the flow charts--the "boxology"--but little attention has been devoted to the overcoat. That is not to say that perception, for instance, was ignored--far from it. But people who were concerned with the optics of vision, or the acoustics of audition, or the physics of the muscles that control the eye, or the vocal tract, were seen as working on the periphery of cognitive science. Moreover, those who concerned themselves with the physics or chemistry of the activities of the central nervous system were seen to be analogous to electrical engineers (as contrasted with computer scientists).



We must not let this caricature get out of hand. Boxologists have typically been quite careful to insist that the interacting boxes in such flow diagrams are not supposed to be anatomically distinct subregions of the brain, separate organs or tissues "dedicated" (as one says in computer science) to the tasks inscribed in the boxes, but rather a sort of logical decomposition of the task into its fundamental components, which could then be executed by "virtual machines" whose neuroanatomical identification could be as inscrutable and gerrymandered as you like--just as the subroutines that compose a complex software application have no reserved home in the computer's hardware but get shunted around by the operating system as circumstances dictate.



The motivation for this vision is not hard to find. Most computer scientists don't really have to know anything much about electricity or silicon; they can concentrate on the higher, more abstract software levels of design. It takes both kinds of experts to build a computer: the concrete details of the hardware are best left to those who needn't concern themselves with algorithms or higher level virtual machines, while voltages and heat-dispersion are ignorable by the software types. It would be elegant, as I said, if this division of labor worked in cognitive science as well as it does in computer science, and a version of it does have an important role to play in our efforts to reverse-engineer the human mind, but the fundamental insight has been misapplied. It is not that we have yet to find the right boxology; it is that this whole vision of what the proper functioning parts of the mind are is wrong. The right questions to ask are not:



How does the Thinker organize its search strategies?



or



Isn't the Lexicon really a part of the World Knowledge storehouse?



or

Do facts about the background have to pass through Belief Fixation in order to influence Planning, or is there a more direct route from World Knowledge?



These questions, and their kin, tend to ignore the all-important question of how subsystems could come into existence, and be maintained, in the highly idiosyncratic environment of a mammalian brain. They tend to presuppose that the brain is constructed of functional subsystems that are themselves designed to perform in just such an organization--an organization roughly like that of a firm, with a clear chain of command and reporting, and each sub-unit with a clear job description. We human beings do indeed often construct such artificial systems--virtual machines--in our own minds, but the way they come to be implemented in the brain is not how the brain itself came to be organized. The right questions to ask are about how else we might conceptualize the proper parts of a person.



Evolution embodies information in every part of every organism. A whale's baleen embodies information about the food it eats, and the liquid medium in which it finds its food. A bird's wing embodies information about the medium in which it does its work. A chameleon's skin, more dramatically, carries information about its current environment. An animal's viscera and hormonal systems embody a great deal of information about the world in which its ancestors have lived. This information doesn't have to be copied into the brain at all. It doesn't have to be "represented" in "data structures" in the nervous system. It can be exploited by the nervous system, however, which is designed to rely on, or exploit, the information in the hormonal systems just as it is designed to rely on, or exploit, the information embodied in the limbs and eyes. So there is wisdom, particularly about preferences, embodied in the rest of the body. By using the old bodily systems as a sort of sounding board, or reactive audience, or critic, the central nervous system can be guided--sometimes nudged, sometimes slammed--into wise policies. Put it to the vote of the body, in effect.(2)



Let us consider briefly just one aspect of how the body can contribute to the wise governance of a mind without its contribution being a data structure or a premise or a rule of grammar or a principle. When young children first encounter the world, their capacity for attending is problematic. They alternate between attention-capture--a state of being transfixed by some object of attention from which they are unable to deflect their attention until externally distracted by some more powerful and enticing signal--and wandering attention, attention skipping about too freely, too readily distracted. These contrasting modes are the effects of imbalances between two opponent processes, roughly captured under the headings of boredom and interest. These emotional states--or proto-emotional states, in the infant--play a heavy role in protecting the infant's cognitive systems from debilitating mismatches: when confronted with a problem of pattern-recognition that is just too difficult, given the current immature state of the system, boredom ensues, and the infant turns off, as we say. Or turns away, in random search of a task more commensurate with the current state of its epistemically hungry specialists. When a nice fit is discovered, interest or enthusiasm changes the balance, focussing attention and excluding, temporarily, the distractors.(3)



I suppose this sort of meta-control might in theory have been accomplished by some centralized executive monitor of system-match and system-mismatch, but in fact, it seems to be accomplished as a byproduct of more ancient, and more visceral, reactions to frustration. The moral of this story may not strike one as news until one reflects that nobody in traditional Artificial Intelligence or cognitive science would ever have suggested that it might be important to build a capacity for boredom or enthusiasm into the control structure of an artificially intelligent agent.(4) We are now beginning to see, in many different ways, how crippled a mind can be without a full complement of emotional susceptibilities.(5)



Things that go Bump in the Head



But let me make the point in a deeper and more general context. We have just seen an example of an important type of phenomenon: the elevation of a byproduct of an existing process into a functioning component of a more sophisticated process. This is one of the royal roads of evolution.(6) The traditional engineering perspective on all the supposed subsystems of the mind--the modules and other boxes--has been to suppose that their intercommunications (when they talk to each other in one way or another) were not noisy. That is, although there was plenty of designed intercommunication, there was no leakage. The models never supposed that one box might have imposed on it the ruckus caused by a nearby activity in another box. By this tidy assumption, all such models forego a tremendously important source of raw material for both learning and development. Or to put it in a slogan, such over-designed systems sweep away all opportunities for opportunism. What has heretofore been mere noise can be turned, on occasion, into signal. But if there is no noise--if the insulation between the chambers is too perfect--this can never happen. A good design principle to pursue, then, if you are trying to design a system that can improve itself indefinitely, is to equip all processes, at all levels, with "extraneous" byproducts. Let them make noises, cast shadows, or exude strange odors into the neighborhood; these broadcast effects willy-nilly carry information about the processes occurring inside. In nature, these broadcast byproducts come as a matter of course, and have to be positively shielded when they create too many problems; in the world of computer simulations, however, they are traditionally shunned--and would have to be willfully added as gratuitous excess effects, according to the common wisdom. But they provide the only sources of raw material for shaping into novel functionality.



It has been recognized for some time that randomness has its uses. For instance, sheer random noise can be useful in preventing the premature equilibrium of dynamical systems--it keeps them jiggling away, wandering instead of settling, until some better state can be found. This has become a common theme in discussions of these hot topics, but my point is somewhat different: My point is not that systems should make random noise--though this does have its uses, as just noted--but that systems should have squeaky joints, in effect, wherever there is a pattern of meaningful activity. The noise is not random from that system's point of view, but also not useful to it. A neighboring system may learn to "overhear" these activities, however, thereby exploiting them, turning what had heretofore been noise into new functionality.
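
The principle can be sketched in a few lines of Python (again purely illustrative; the Worker, the Neighbor, the thresholding rule, and the noise levels are invented assumptions, not a model of any real system): one subsystem leaks a byproduct that correlates with its meaningful activity, and a neighboring subsystem, simply by tracking that correlation, turns the leak into a usable signal.

    import random

    class Worker:
        """A subsystem whose meaningful activity leaks a correlated byproduct."""
        def step(self):
            busy = random.random() < 0.5            # meaningful internal activity
            squeak = busy + random.gauss(0.0, 0.2)  # leaky byproduct, not a designed output
            outcome = 1 if busy else 0              # what eventually matters to the neighbor
            return squeak, outcome

    class Neighbor:
        """Learns a crude threshold on the overheard squeak to predict the outcome."""
        def __init__(self):
            self.samples = []

        def overhear(self, squeak, outcome):
            self.samples.append((squeak, outcome))

        def predict(self, squeak):
            if not self.samples:
                return 0
            mean = sum(s for s, _ in self.samples) / len(self.samples)
            return 1 if squeak > mean else 0

    worker, neighbor = Worker(), Neighbor()
    hits = 0
    for _ in range(1000):
        squeak, outcome = worker.step()
        hits += neighbor.predict(squeak) == outcome
        neighbor.overhear(squeak, outcome)
    print("accuracy from overheard 'noise':", hits / 1000)  # well above chance

The squeak is noise from the Worker's point of view; it becomes signal only from the Neighbor's.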



This design desideratum highlights a shortcoming in most cognitive models: the absence of such noise. In a real hotel, the fact that the guests in one room can overhear the conversations in an adjacent room is a problem that requires substantial investment (in soundproofing) to overcome. In a virtual hotel, just the opposite is true: nobody will ever overhear anything from an "adjacent" phenomenon unless this is specifically provided for (a substantial investment). There is even a generic name for what must be provided: "collision detection". In the real world, collisions are automatically "detected"; when things impinge on each other they engage in multifarious interaction without any further ado; in virtual worlds, all such interactions have to be provided for, and most cognitive models thriftily leave these out--a false economy that is only now beginning to be recognized.



Efficient, effective evolution depends on having an abundant supply of raw material available to shape into new functional structures. This raw material has to come from somewhere, and either has paid for itself in earlier economies, or is a coincidental accompaniment of features that have paid for themselves up till then. Once one elevates this requirement to the importance it deserves, the task of designing (or reverse engineering) intelligent minds takes on a new dimension, a historical, opportunistic dimension. This is just one aspect of the importance of maintaining an evolutionary perspective on all questions about the design of a mind. After all, our minds had to evolve from simpler minds, and this brute historical fact puts some important constraints on what to look for in our own designs. Moreover, since learning in the individual must be, at bottom, an evolutionary process conducted on a different spatio-temporal scale, the same moral should be heeded by anybody trying to model the sorts of learning that go beyond the sort of parameter-tuning that is exhibited by self-training neural nets whose input and output nodes have significances assigned outside the model.





Conclusions



Cognitive science, like any other science, cannot proceed efficiently without large helpings of oversimplification, but the choices that have more or less defined the field are now beginning to look like false friends. I have tried to suggest some ways in which several of the traditional enabling assumptions of cognitive science--assumptions about which idealized (over-)simplifications will let us get on with the research--have sent us on wild goose chases. The "content capture" assumption has promoted the mis-motivated goal of explicit expression of content in lieu of the better goal of explicit models of functions that are only indirectly describable by content-labels. The "isolated vehicles" assumption has enabled the creation of many models, but these models have tended to be too "quiet," too clean for their own good. If we set these assumptions aside, we will have to take on others, for the world of cognition is too complicated to study in all its embodied particularity. There are good new candidates, however, for simple things about things now on offer. Let's give them a ride and see where we get.(7)



1. The previous 6 paragraphs are drawn, with some revisions and additions, from Kinds of Minds.

2. The preceding two paragraphs are from Kinds of Minds.

3. Cynthia Ferrell, discussion at the American Association for Artificial Intelligence Symposium on Embodied Cognition and Action, MIT, November 1996.

4. Consider a sort of problem that often arises for learning or problem-solving programs whose task can be characterized as "hill-climbing"--finding the global summit in a problem landscape pocked with lower, local maxima. Such systems have characteristic weaknesses in certain terrains, such as those with a high, steep, knife-edge "ridge" whose summit very gently slopes, say, east to the global summit. Whether to go east or west on the ridge is something that is "visible" to the myopic hill-climbing program only when it is perched right on the knife-edge; at every other location on the slopes, its direction of maximum slope (up the "fall line," as a skier would say) is roughly perpendicular to the desired direction, so such a system tends to go into an interminable round of overshooting, back and forth over the knife-edge, oblivious to the futility of its search. Trapped in such an environment, an otherwise powerful system becomes a liability. What one wants in such a situation, as Geoffrey Hinton has put it, is for the system to be capable of "noticing" that it has entered into such a repetitive loop, and resetting itself on a different course. Instead of building an eye to oversee this job, however, one can just let boredom ensue.
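
The scenario can be made concrete with a schematic Python sketch (purely illustrative; the terrain, the step size, and the "patience" parameter are invented, and this is not Hinton's or anyone else's actual program). The landscape is a steep, knife-edged ridge along y = 0 whose crest rises very gently toward larger x; a myopic climber that always follows the fall line shuttles back and forth across the crest, and a crude boredom check notices the repetition and sets it off on a different course.

    def f(x, y):
        # steep ridge along y = 0 whose crest rises very gently toward larger x
        return 0.01 * x - 5.0 * abs(y)

    def climb(steps=2000, step=0.4, patience=8):
        x, y = 0.0, 3.0
        recent = []
        for _ in range(steps):
            # myopic fall-line move: the slope in y (5.0) dwarfs the slope in x
            # (0.01), so the "steepest" move always overshoots the crest
            dx, dy = 0.0, (-step if y > 0 else step)
            if len(recent) >= patience and len(set(recent)) <= 2:
                # boredom: the same few positions keep recurring with no gain,
                # so abandon the habitual move and try a sidestep along the crest
                dx, dy = max([(step, 0.0), (-step, 0.0)],
                             key=lambda d: f(x + d[0], y + d[1]))
                recent.clear()
            x, y = x + dx, y + dy
            recent.append((round(x, 3), round(y, 3)))
            recent[:] = recent[-patience:]
        return x, y, f(x, y)

    # With the boredom check the climber creeps east along the crest; without
    # it, x never changes and the climber oscillates across y = 0 forever.
    print(climb())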

5. Antonio Damasio's recent book Descartes' Error (New York: Grosset/Putnam, 1994) is a particularly effective expression of the new-found appreciation of the role of emotions in the control of successful cognition. To be fair to poor old Descartes, however, we should note that even he saw--at least dimly--the importance of this union of body and mind:



By means of these feelings of pain, hunger, thirst, and so on, nature also teaches that I am present to my body not merely in the way a seaman is present to his ship, but that I am tightly joined and, so to speak, mingled together with it, so much so that I make up one single thing with it. (Meditation Six)



6. In what follows I owe many insights to Lynn Stein's concept of "post-modular cognitive robotics" and to Eric Dedieu's "Contingency as a Motor for Robot Development," AAAI Symposium on Embodied Cognition and Action, MIT, November 1996.

7. I want to thank Chris Westbury and Rick Griffin for comments on an earlier draft.