Philosophical controversy about the mind has flourished in the thin air of our ignorance about the brain. The humble toad, it now seems, may provide our first instance of a creature whose whole brain is within the reach of our scientific understanding. What will happen to the traditional philosophical issues as our theoretical and factual ignorance recedes? Discussion of the issues explored in the target article is, as Ewert says, "often too theoretical, sometimes philosophical and even [as if that weren't bad enough?--DCD] emotion-laden." The research reported by Ewert has interesting philosophical implications, as he probably recognizes, but he wisely leaves the philosophy to the philosophers. Being one, I would like to draw some of the conclusions he eschews.
First, just to keep enthusiasm in check, we should remind ourselves that while toads are "smart" enough, or "psychological" enough--unlike Aplysia, for instance--to raise (at least for discussion, prior to dismissal) the fascinating questions, they are still remarkably stupid, and the research program depends critically on their stupidity. Insofar as the experimental results are unequivocal, it is thanks to the limits of plasticity and discrimination that have been uncovered. Toads, we learn, are born with hard-wired prey-capture routines with only modest degrees of plasticity. They will go on striking at dummy targets indefinitely, and their standards for prey-recognition, while serviceable in their niches, are gratifyingly crude. Imagine how we would react to the discovery that some human beings were so oblivious that we could substitute cardboard silhouettes on wheels for real people in their environment without seriously disrupting their behavior. "Social" psychology would be a lot easier using these people as subjects, but we would no doubt have the same feeling with regard to them that we now have--thanks to the researches of Ewert and his colleagues--with regard to toads: when you look closely, you find that there's nobody home.
I take it that most readers, after digesting the details of the target article, are ready to relinquish whatever remaining convictions they may have had that residing behind the toad's bulging eyes, watching and waiting and deciding when to strike, was a middletoad, a conscious self, the something that must be there if there is to be something it is like to be a toad (Nagel 1974). Why? What is it about Ewert's model, and (more importantly) its supporting research, that promotes this opinion of the toad as a "mere" automaton, a zombie in spite of its biological commonality with us?
Dualists may be tempted to say that the mere fact that Ewert can explain the toad's behavior without ever adverting to any transactions between the toad's brain and some non-physical mind, would be sufficient (if confirmed) to establish that toads are mere automata, however "clever" they appear to the uninitiated. The objections to this view do not need rehearsing here. Modern-day materialists, who have no doubt that in principle there can be a purely physical, mechanistic account of even a self-conscious, human mind, may still feel that if Ewert's account is right in outline, toads lack something crucial for consciousness. I would like to draw attention to a confusion that may affect even subtle materialists in their attempts to say what is missing in Ewert's toads.
Ewert does not claim that his model is complete. Note that he attempts to account for only one of the four F's, feeding, alluding only in passing to the interfaces still to be described with the others: fleeing, fighting, and reproduction. What is important is that within one quarter of the toad's control agenda, the model claims to show us how the corner is turned at what Ewert calls the "sensorimotor interface": how afferents are eventually "interpreted" to the point where (normally) appropriate efferents are generated.
Tradition would interpose the Mind or the Self at that sensorimotor interface, contemplating and comprehending the input, and figuring out the wisest output, given that input. But that particularly wise homunculus--or bufunculus--has been brusquely dismissed in Ewert's model, replaced by nothing but a few generally hard-wired AND-gates. Even though there are neural messages with meanings passing through the sensorimotor interface--Ewert speaks somewhat misleadingly of a "sensorimotor code"--they are not decoded by any salient intervening entity; they have their apposite effects by being already wired up by Mother Nature to trigger the appropriate "commands".
For instance, the convergence of a T5.2 firing, meaning "stimulus recognized as prey, n degrees outside the fixation area," with activity in T4, meaning "stimulus moving somewhere in the visual field," yields the command to orient; a T5.2' firing, meaning "stimulus recognized as prey," concurrent with a T1.2 firing, meaning "stimulus close to toad," yields the command to fixate, and so forth.
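The wiring just described can be caricatured in a few lines of code. This is a toy sketch only: the neuron labels come from the passage above, while the boolean simplification and every name in the code are illustrative inventions, not Ewert's.

```python
from typing import Optional

# A deliberately crude caricature of the AND-gate logic at the
# sensorimotor interface. Neuron labels (T5.2, T4, T1.2, T5.2')
# follow the commentary; everything else here is hypothetical.

def select_command(t5_2: bool, t4: bool,
                   t5_2_prime: bool, t1_2: bool) -> Optional[str]:
    """Map hard-wired conjunctions of afferent 'messages' to motor commands."""
    if t5_2 and t4:            # "prey, n degrees outside fixation area" AND "stimulus moving"
        return "orient"
    if t5_2_prime and t1_2:    # "prey recognized" AND "stimulus close to toad"
        return "fixate"
    return None                # no conjunction satisfied: no command issued

# Example: prey detected off-center while something moves in the visual field.
print(select_command(True, True, False, False))  # -> orient
```

The point of the caricature is that nothing inside the function "comprehends" the meanings of its inputs; the mapping from message to command is exhausted by the conjunctions themselves.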
Consider the grounds offered for these renderings of the meanings of types of neuronal firing. The particular form of words chosen by Ewert is dictated not by any isolation of "syntactic" features of some neuronal "language of thought" as candidates for word-for-word translation into English, but rather by an appreciation of the "semantic" contribution of those events to the ongoing control problem of the toad, expressed (roughly) from the toad's narcissistic point of view (Akins 1986). As Ewert notes, "prey categorization is approximate," and moreover, his decision to elevate the semantics of an event type such as T5.2' to "stimulus recognized as prey" is governed by pragmatic considerations (Dennett 1987). As opposed to what? As opposed to presumably objective, information-theoretic criteria of the sort envisaged by Dretske (1981), Fodor (1980), and others. The objective informational parameters (e.g., reliable covariation with the shape and motion properties of the sort distinguished in Ewert's Figure 4) set outer limits, at best, on the potential narcissistic-information-bearing properties Ewert is prepared to assign to the events. The support he offers for going beyond the (Dretskean) information given when fixing his renderings of meaning includes his recognition of the appropriateness (under normal conditions) of the normal continuations of those events, as modulated by "goal-related information . . . result[ing] from intrinsic processes". But these continuations are normal precisely because that's the way toads are wired, not because something in the toad "recognizes" or "analyzes" the meaning of those neuronal events as the result of some sort of perception-and-parsing process.
In other words, the "decision-making" relating those messages to those commands does not occur in real time in the toad, but occurred eons ago in the course of the design process that created the wiring diagram that now implements those policies. Their rough wisdom is endorsed, probabilistically, by the very existence of the toad as a surviving descendant, but, it seems, the toad itself has no intelligent contribution to make to its own survival. (Here is where the mistaken line of thought begins.) The toad, one gathers, is none the wiser for having messages with these meanings zapping around in its brain. It doesn't appreciate or comprehend or figure out what the observing neuroethologist does. It can get through life reasonably well without benefit of comprehension, thanks to its felicitously arranged inner wiring--and that, it seems, is what makes toads zombies.
It is this line of thought that makes the idea of a language of thought (Fodor 1975) so treacherous. There is a strong temptation to suppose that in a truly intelligent, intentional, conscious creature (unlike the poor toad), the meaning-bearing events streaming in from the sense organs would have to be couched in some parsable, comprehensible language, so that the middletoad (or middleman or self-module) would have some way of recognizing and appreciating their meanings. But this just postpones the problem of meaning, and makes it even more mysterious by setting an impossible task of comprehension and inference for the inner module (Dennett 1978). The mistake lies in confusing the toad itself (the whole toad, from whose self-centered perspective all content-ascription must proceed) with some imagined inner middletoad, which, indeed, as a proper part of the toad's nervous system, would be none the wiser for having events coursing through it with various meanings. But the whole toad is the wiser; its capacity to pick its way through its world with some adroitness is explained by the presence of events in its brain having the sort of significance Ewert assigns to them. This conclusion is hard to accept if one is still enthralled with the common vision of meaning being present in items (in the mind or in the brain) as if it were candy in a box, to be sent from A to B, there to be unpacked and enjoyed (comprehended) by the (intelligent) receiver. The mistake is to concentrate all the intelligence in the imagined middletoad, instead of spreading it through the whole system. This involves offloading inexpressible portions of meaning into the appropriate hard-wiring, and into the acquired interactive effects entrained by experience, in the manner Ewert's model illustrates very well. Once the intelligence is seen as distributed, no inner thing remains to be a candidate for the Mind or Self, but this does not in itself rule out consciousness.
The real reason, finally, why we may be inclined to give up on the toad's inner life is not because when we look closely at the inside, we find nothing but wise wiring--that's all we are ever going to find in us, too--but because of what we find on the outside, in the ethology: toads don't catch on to very much. If toads were much harder to fool, and learned more, and in particular if they could learn that they were being manipulated, and come to take evasive or counteractive steps of some kind, our sense that "somebody was home" would survive. Such cognitive prowess as this would take much more internal machinery than the toad has, but other creatures have it, and we can know in advance that when we look closely at its details (if we can master them at all), we will find no concentration of insight in an inner module, but just much more wise wiring.[1]
Dennett, D. (1978). "Current Issues in the Philosophy of Mind." American Philosophical Quarterly 15, pp. 249-61.
Dennett, D. (1987). The Intentional Stance. Cambridge, MA: The MIT Press/A Bradford Book.
Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: The MIT Press/A Bradford Book.
Fodor, J. (1975). The Language of Thought. Scranton, PA: Crowell.
Fodor, J. (1980). "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology." Behavioral and Brain Sciences 3, pp. 63-109.
Nagel, T. (1974). "What Is It Like to Be a Bat?" Philosophical Review 83, pp. 435-50.
1. I am indebted to Kathleen Akins for some of the ideas in this commentary, and for guidance in interpreting the target article.