October 7, 1996
for Contemporary British and American Philosophy and Philosophers (in Chinese), ed. Ouyang Kang, Dept. of Philosophy, Wuhan University, Wuhan, Hubei 430072, P.R. China
An Overview of my Work in Philosophy
Daniel C. Dennett
In my opinion, the two main topics in the philosophy of mind are content and consciousness, and they have received about equal attention from me. As the title of my first book, Content and Consciousness (1969) suggested, that is the order in which they must be addressed: first, a theory of content or intentionality--a phenomenon more fundamental than consciousness--and then, building on that foundation, a theory of consciousness. Over the years I have found myself recapitulating this basic structure twice, partly in order to respond to various philosophical objections, but more importantly, because my research on foundational issues in cognitive science led me into different aspects of the problems. The articles in the first half of Brainstorms (1978a) composed in effect a more detailed theory of content, and the articles in the second half were concerned with specific problems of consciousness. The second recapitulation devoted a separate volume to each half: The Intentional Stance (1987a) is all and only about content; Consciousness Explained (1991a) presupposes the theory of content in that volume and builds an expanded theory of consciousness. My more recent books, Darwin's Dangerous Idea (1995) and Kinds of Minds (1996), extend the scope of my earlier work, bringing out the evolutionary foundations of both the theory of intentional systems and the theory of consciousness. A summary of both of these in their current versions follows a review of how I got there.
1. Beginnings and Sources
Although quite a few philosophers agree that content and consciousness are the two main issues confronting the philosophy of mind, many--perhaps most--follow tradition in favoring the opposite order: consciousness, they think, is the fundamental phenomenon, upon which all intentionality ultimately depends. This difference of perspective is fundamental, infecting the intuitions with which all theorizing must begin, and it is thus the source of some of the deepest and most persistent disagreements in the field.
It is clear to me how I came by my renegade vision of the order of dependence: as a graduate student at Oxford (1963-65), I developed a deep distrust of the methods I saw other philosophers employing. That was the heyday of ordinary language philosophy, and "theories of mind" were debated on the basis of a lean diet of conceptual analysis--as if one could develop a theory of horses on the basis of nothing other than a careful investigation of the meaning of the ordinary word "horse". I decided that I had to supplement (and maybe even adjust!) the fruits of ordinary language analysis with an attempt to figure out how the brain could possibly accomplish the mind's work. I knew next to nothing about the relevant science, but I had always been fascinated with how things worked--clocks, engines, magic tricks. (In fact, had I not been raised in a dyed-in-the-wool "arts and humanities" academic family, I probably would have become an engineer, but this option would never have occurred to anyone in our family.) So I began educating myself, always with an eye to the curious question of how the mechanical responses of "stupid" neurons could be knit into a fabric of activity that actually discriminated meanings. Somehow it had to be possible, I assumed, since it was obvious to me that dualism was a last resort, to be postponed indefinitely. While most philosophers of mind would have agreed with me that on general materialist principles it had to be possible, they thought the details were philosophically irrelevant. I was convinced that the facts about how it was actually done would provide insights into the perennial philosophical puzzles.
So from the outset I worked from the "third-person point of view" of science, and took my task to be building--or rather sketching the outlines of--a physical structure that could be seen to accomplish the puzzling legerdemain of the mind. At the time, no one else in philosophy was attempting to build that structure, so it was a rather lonely enterprise, and most of the illumination and encouragement I could find came from the work of a few visionaries in science and engineering: Warren McCulloch, Donald MacKay, Donald Hebb, Ross Ashby, Allen Newell, Herbert Simon, and J. Z. Young come to mind. Miller, Galanter and Pribram's 1960 classic, Plans and the Structure of Behavior, was a dimly understood but much appreciated beacon, and Michael Arbib's 1964 primer, Brains, Machines and Mathematics, was very helpful in clearing away some of the fog.
Given my lack of formal training in any science, this was a dubious enterprise, but I was usually forgiven my naiveté by those who helped me into their disciplines. At the time I considered myself driven by (indeed defined by) my disagreements with my philosophical mentors, Quine and Ryle--I took myself to be a Wittgensteinian, and I deplored Ryle's dismissal of psychology. Hilary Putnam's (1960, 1962, 1963, 1964, 1967a,b) great series of papers on minds and machines tantalized me by keeping one or two steps ahead of me, and were just about the only contemporary work in philosophy of mind I took seriously. In retrospect, however, it is clear to me that my deep agreement with both Quine and Ryle about the nature of philosophy--so deep as to be utterly unexamined and tacit--was the primary source of such intellectual security as I had.
I soon discovered that my speculative forays into possible brain mechanisms always wandered to the same place: when mechanical push came to shove, a brain was always going to do what it was caused to do by current, local, mechanical circumstances, whatever it ought to do, whatever a God's-eye view might reveal about the actual meanings of its current states. But over the long haul, brains could be designed--by evolutionary processes--to do the right thing (from the point of view of meaning) with high reliability. My conclusion was that the only thing brains could do was to approximate the responsivity to meanings that we presuppose in our everyday mentalistic discourse. This found its first published expression in (1969) section 9, "Function and Content," and it remains the foundation of everything I have done since then. As I put it in (1978a), brains are syntactic engines that can mimic the competence of semantic engines. (See also the thought experiment in (1978b)--a forerunner of Searle's Chinese Room--about being locked in the control room of a giant robot, the conclusion of which was that "the job of getting the input information interpreted correctly is thus not a matter of getting the information translated into a particular internal code unless getting the information into that code is ipso facto getting it into functional position to govern the behavioral repertoire of the whole organism." [p258])
Note how this point forces the order of dependence of consciousness on intentionality. The appreciation of meanings--their discrimination and delectation--is central to our vision of consciousness, but this conviction that I, on the inside, deal directly with meanings turns out to be something rather like a benign "user illusion." What Descartes thought was most certain--his immediate introspective grasp of the items of consciousness--turns out to be not even quite true, but rather a metaphorical byproduct of the way our brains do their work of approximating the behavior of a semantic engine. This vision tied in beautifully with a doctrine of Quine's that I had actually vehemently resisted as an undergraduate: the indeterminacy of radical translation. As I will try to explain sketchily below, I could now see why, as Quine famously insisted, indeterminacy was "of a piece with" Brentano's thesis of the irreducibility of the intentional, and why those irreducible intentional contexts were unavoidably a "dramatic idiom" rather than an expression of unvarnished truth. I could also see how to re-interpret the two philosophical works on intentionality that had had the most influence on me, Anscombe's Intention (1957) and Taylor's The Explanation of Behaviour (1964).
If your initial allegiance is to the physical sciences and the third person point of view, the idea that the best route to content is via a "heuristic overlay" from the outside, rather than "from the inside," can seem not just intuitively acceptable, but inevitable, satisfying, natural. If on the other hand your starting point is the traditional philosophical allegiance to the mind and the deliverances of introspection, this vision can seem outrageous. Perhaps the clearest view of this watershed of intuitions can be obtained from an evolutionary perspective. There was a time, before life on earth, when there was neither intentionality nor consciousness, but eventually replication got under way and simple organisms emerged. Suppose we ask of them: Were they conscious? Did their states exhibit intentionality? It all depends on what these key terms are taken to mean, of course, but underneath the strategic decisions one might make about pre-emptive definition of terms lies a fundamental difference of outlook. One family of intuitions is comfortable declaring that while these earliest ancestors were unconscious automata, not metaphysically different from thermostats or simple robotic toys, some of their states can nevertheless be semantically evaluated. These organisms were, in my terms, rudimentary intentional systems, and somewhere in the intervening ascent of complexity, a special subset of intentional systems has emerged: the subset of conscious beings. According to this vision, then, the intentionality of our unconscious ancestors was as real as intentionality ever gets; it was just rudimentary. It is on this foundation of unconscious intentionality that the higher-order complexities developed that have culminated in what we call consciousness. The other family of intuitions declares that if these early organisms were mere unconscious automata, then their so-called intentionality was not the real thing. 
Some philosophers of this persuasion are tempted to insist that the earliest living organisms were conscious--they were alive, after all--and hence their rudimentary intentionality was genuine, while others suppose that somewhere higher on the scale of complexity, real consciousness, and hence real intentionality, emerges. There is widespread agreement in this camp, in any case, that although a robot might be what I have called an intentional system, and even a higher-order intentional system, it could not be conscious, and so it could have no genuine intentionality at all.
In my first book, I attempted to cut through this difference in intuitions by proposing a division of the concept of consciousness into awareness1, the fancy sort of consciousness that we human beings enjoy thanks to our capacity to make introspective (verbal) reports, and awareness2, the mere capacity for appropriate responsivity to stimuli, a capacity enjoyed by honeybees and thermostats alike. The tactic did not work for many thinkers, who continued to harbor the hunch that I was leaving something out; there was, they thought, a special sort of sensitivity--we might call it animal consciousness--that no thermostat or fancy robot could enjoy, but that all language-less mammals and birds (and perhaps all fish, reptiles, insects, mollusks, . . . ) shared. The more one learns about how simple organisms actually work, however, the more dubious this hunch about a special, organic sort of sensation becomes. It amounts, in the end, to some sort of latter-day vitalism. But to those who refuse to look at the science, it is a traditional idea that is about as comfortable today as it was in the 17th century, when many were horrified by Descartes's claims about the mechanicity of (non-human) animals. In any event, definitional gambits are ineffective against it, so in later work I dropped the tactic and the nomenclature of "aware1" and "aware2"--but not the underlying intuitions.
My accounts of content and consciousness have subsequently been revised in rather minor ways and elaborated in rather major ways. Two themes that figured heavily in (1969, chapters 3 and 4) lay dormant in my work through the 70's and early 80's, but were never abandoned, and are now re-emerging: the theme of learning as evolution in the brain and the theme of content being anchored in distributed patterns of individually ambiguous nodes in networks of neurons. The truth is that while I can fairly claim to have seen the beauty, and indeed the inevitability, of these ideas in 1969 (see also 1974), and to have sketched out some of their philosophical implications quite accurately, I simply couldn't see how to push them further in the scientific domain, and had to wait for others--not philosophers--to discover these ideas for themselves and push them in the new directions that have so properly captured recent philosophical attention. My own recent discussions of these two themes are to be found in (1986, 1987b, 1991a, 1991b, 1991c, 1992a).
2. Content: Patterns Visible from the Intentional Stance
My theory of content is functionalist: all attributions of content are founded on an appreciation of the functional roles of the items in question in the biological economy of the organism (or the engineering economy of the robot). This is a specifically "teleological" notion of function (not the notion of a mathematical function or of a mere "causal role", as suggested by David Lewis and others). It is the concept of function that is ubiquitous in engineering, in the design of artifacts, but also in biology. (It is only slowly dawning on philosophers of science that biology is not a science like physics, in which one should strive to find "laws of nature", but a species of engineering: the analysis, by "reverse engineering," of the found artifacts of nature--which are composed of thousands of deliciously complicated gadgets, yoked together opportunistically but elegantly into robust, self-protective systems.) These themes were all present in (1969), but they were clarified in (1971) when I introduced the idea that an intentional system was, by definition, anything that was amenable to analysis by a certain tactic, which I called the intentional stance. This is the tactic of interpreting an entity by adopting the presupposition that it is an approximation of the ideal of an optimally designed (i.e., rational) self-regarding agent. No attempt is made to confirm or disconfirm this presupposition, nor is it necessary to try to specify, in advance of specific analyses, wherein consists rationality. Rather, the presupposition provides leverage for generating specific predictions of behavior, via defeasible hypotheses about the content of the control states of the entity.
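The predictive leverage of the intentional stance can be caricatured in a few lines of code. The sketch below is only a toy of my own devising (the function names, data structures, and scoring scheme are illustrative assumptions, not anything drawn from the theory itself): we presuppose rationality by having the interpreted "agent" choose whichever available action its beliefs say best serves its desires.

```python
# A toy of intentional-stance prediction (illustrative only; the names and
# structures here are invented for this example, not part of the theory).
# We predict an entity's behavior by presupposing it is rational: it will
# choose whatever available action its beliefs say best serves its desires.

def predict_action(beliefs, desires, actions):
    """Pick the action whose believed outcome scores highest against the desires."""
    def desirability(action):
        outcome = beliefs[action]  # what the agent believes the action yields
        return sum(desires.get(feature, 0) for feature in outcome)
    return max(actions, key=desirability)

# A thermostat-like "agent", interpreted from the intentional stance:
beliefs = {
    "turn_heater_on":  {"room_warms"},
    "turn_heater_off": {"room_cools"},
}
desires = {"room_warms": 1}  # it "wants" the room warmed to its set point
actions = list(beliefs)

print(predict_action(beliefs, desires, actions))  # -> turn_heater_on
```

Note that nothing here confirms or disconfirms the rationality presupposition; the presupposition simply generates a defeasible prediction, which is all the stance promises.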
My initial analysis of the intentional stance and its relation to the design stance and physical stance was addressed to a traditional philosophical issue--the problem of free will and the task of reconciling mechanism and responsibility (1973). The details, however, grew out of my reflections on practices and attitudes I observed to be ubiquitous in Artificial Intelligence. Both Allen Newell (1982) and David Marr (1982) arrived at essentially the same breakdown of stances in their own reflections on the foundations of cognitive science. The concept of intentional systems (and particularly, higher order intentional systems) has been successfully exploited in clinical and developmental psychology, ethology, and other domains of cognitive science, where the main metaphysical implications of the theory have not been the focus of attention. Philosophers, focussing on these implications, have been reluctant to endorse them.
In particular, I have held that since any attributions of function necessarily invoke optimality or rationality assumptions, the attributions of intentionality that depend on them are interpretations of the phenomena--a "heuristic overlay" (1969), describing an inescapably idealized "real pattern" (1991d). Like such abstracta as centers of gravity and parallelograms of force, the beliefs and desires posited by the highest stance have no independent and concrete existence, and since this is the case, there would be no deeper facts that could settle the issue if--most improbably--rival intentional interpretations arose that did equally well at rationalizing the history of behavior of an entity. Some philosophers had thought to counter Quine's thesis of the indeterminacy of radical translation by positing a language of thought or internal system of mental representations that could render determinate whatever was left unsettled by peripheral, "behaviorist" investigations, but the central implication of the intentional stance is that no such internal foundation can be found: indeterminacy carries all the way in, as the thesis of the indeterminacy of radical interpretation of mental states and processes.
The fact that cases of radical indeterminacy, though possible in principle, are vanishingly unlikely ever to confront us is small solace, apparently. This idea is deeply counterintuitive to many philosophers, who have hankered for more "realistic" doctrines. I have tried to show that the option of "realism" misconstrues the issues on two different fronts.
(1) realism about the entities purportedly described by our everyday mentalistic discourse--what I dubbed folk-psychology (1981)--such as beliefs, desires, pains, the self;
(2) realism about content itself--the idea that there have to be events or entities that really have intentionality (as opposed to the events and entities that only behave as if they had intentionality).
Against (1), I have wielded various arguments, analogies, parables. Consider what we should tell the benighted community of people who speak of "having fatigues" where we speak of being tired, exhausted, etc. (1978a) They want us to tell them what fatigues are, what bodily states or events they are identical with, and so forth. This is a confusion that calls for diplomacy, not philosophical discovery; the choice between an "eliminative materialism" and an "identity theory" of fatigues is not a matter of which "ism" is right, but of which way of speaking is most apt to wean these people from a misbegotten feature of their conceptual scheme.
Against (2), my attack has been more indirect. I view the philosophers' demand for answers to questions about what the content really is in various problem cases as a variety of misplaced essentialism, a common philosophical mistake. Philosophers often maneuver themselves into a position from which they can see only two alternatives: infinite regress versus some sort of "intrinsic" foundation--a prime mover of one sort or another. For instance, it has seemed obvious that for some things to be valuable as means, other things must be intrinsically valuable--ends in themselves--otherwise we'd be stuck with a vicious regress of things valuable only as means. This is undeniable if interpreted mildly, but there is a temptation to inflate "intrinsic value" into something that can then be contrasted to "mere" value-from-a-point-of-view. It has seemed similarly obvious that although some intentionality is "derived" (the aboutness of the pencil marks composing a shopping list is derived from the intentions of the person whose list it is), unless some intentionality is original and underived, there could be no derived intentionality. This is undeniable--until an attempt is made to inflate original intentionality into something that is foundational and independent of any interpretative scheme.
There is always another alternative, which naturalistic philosophers should look on with favor: a finite regress that peters out without marked foundations or thresholds or essences. Here is an easily avoided paradox: every mammal has a mammal for a mother--but this implies an infinite genealogy of mammals, which cannot be the case. The solution is not to search for an essence of mammalhood that would permit us in principle to identify the Prime Mammal, but rather to tolerate a finite regress that connects mammals to their non-mammalian ancestors by a sequence that can only be partitioned arbitrarily. The reality of today's mammals is secure without foundations.
The best known instance of this theme in my work is the idea that the way to explain the miraculous-seeming powers of an intelligent intentional system is to decompose it into hierarchically structured teams of ever more stupid intentional systems, ultimately discharging all intelligence-debts in a fabric of stupid mechanisms (1971, 1974, 1978a, 1991a). Lycan (1981) has called this view homuncular functionalism. One may be tempted to ask: are the subpersonal components real intentional systems? At what point in the diminution of prowess as we descend to simple neurons does real intentionality disappear? Don't ask. The reasons for regarding an individual neuron (or a thermostat) as an intentional system are unimpressive, but not zero, and the security of our intentional attributions at the highest levels does not depend on our identifying a lowest level of real intentionality. Another exploitation of the same idea is found in Elbow Room (1984): at what point in evolutionary history did real reason-appreciators, real selves, make their appearance? Don't ask--for the same reason. Here is yet another, more fundamental, version: at what point in the early days of evolution can we speak of genuine function, genuine selection-for and not mere fortuitous preservation of entities that happen to have some self-replicative capacity? Don't ask. Many of the most interesting and important features of our world have emerged, gradually, from a world that initially lacked them--function, intentionality, consciousness, morality, value--and it is a fool's errand to try to identify a first or most-simple instance of the "real" thing. It is for the same reason a mistake to suppose that real differences in the world must exist to answer all the questions our systems of content attribution permit us to ask. Tom says he has an older brother living in Cleveland and that he is an only child (1975b). What does he really believe? Could he really believe that he had a brother if he also believed he was an only child? What is the real content of his mental state? There is no reason to suppose there is a principled answer.
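Homuncular decomposition has an obvious software analogue, familiar from ordinary program design. The following sketch is my own illustration (none of its names come from the cited works): an apparently "intelligent" recognizer is built from committees of ever stupider components, bottoming out in mechanisms so stupid they can do nothing but compare one character with another.

```python
# A toy of homuncular decomposition: intelligence-debts are discharged
# downward through teams of dumber and dumber homunculi.

def char_matcher(expected):
    """The stupidest homunculus: can only say whether one character matches."""
    return lambda ch: ch == expected

def word_matcher(word):
    """A slightly smarter homunculus: a team of char matchers."""
    team = [char_matcher(c) for c in word]
    return lambda s: len(s) == len(team) and all(m(c) for m, c in zip(team, s))

def recognizer(vocabulary):
    """The 'intelligent' top level: a committee of word matchers."""
    committee = [word_matcher(w) for w in vocabulary]
    return lambda s: any(m(s) for m in committee)

knows_greetings = recognizer(["hello", "hi"])
print(knows_greetings("hello"))  # -> True
print(knows_greetings("horse"))  # -> False
```

Asking at which level in this hierarchy "real" recognition occurs has no principled answer, which is just the point: the competence of the whole is secure without any such foundation.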
From this perspective, most of the large and well-regarded literature on propositional attitudes (especially the debates over wide versus narrow content, "de re versus de dicto" attributions, and what Pierre believes about London) appears to be scholastic, concerned at best with questions whose answers have no bearing on cognitive science, and at worst, with details in one of history's most slowly unwinding unintended reductio ad absurdum arguments. By and large the disagreements explored in that literature cannot even be given an initial expression unless one takes on the assumptions I have argued are fundamentally unsound (see especially 1975b, 1978a, 1982, 1987b, 1991d): strong realism about content, and its constant companion, the idea of a language of thought, a system of mental representation that is decomposable into elements rather like terms, and larger elements rather like sentences. The illusion that this is plausible, or even inevitable, is particularly fostered by the philosophers' normal tactic of working from examples of "believing-that-p" that focus attention on mental states that are directly or indirectly language-infected, such as believing that the shortest spy is a spy, or believing that snow is white. (Do polar bears believe that snow is white? In the way we do?) There are such states--in language-using human beings--but they are not exemplary or foundational states of belief; needing a term for them, I call them opinions ("How to Change your Mind," in 1978a; see also 1991c). Opinions play a large, perhaps even decisive, role in our concept of a person, but they are not paradigms of the sort of cognitive element to which one can assign content in the first instance. 
If one starts, as one should, with the cognitive states and events occurring in non-human animals, and uses these as the foundation on which to build theories of human cognition, the language-infected states are more readily seen to be derived, less directly implicated in the explanation of behavior, and the chief but illicit source of plausibility of the doctrine of a language of thought. Postulating a language of thought is in any event a postponement of the central problems of content ascription, not a necessary first step. (Although a few philosophers--especially Millikan, Robert Stalnaker, Stephen White--have agreed with me about large parts of this sweeping criticism, they have sought less radical accommodations with the prevailing literature.)
3. Consciousness as a Virtual Machine
My theory of consciousness has undergone more revisions over the years than my theory of content. In (1969) the theory concentrated on the role of language in constituting the peculiar but definitive characteristics of human consciousness, and while I continue to argue for a crucial role of natural language in generating the central features of consciousness (our kind), my first version overstated the case in several regards. For instance, I went slightly too far in my dismissal of mental imagery (see the corrections in 1978a, 1991a), and I went slightly too fast--but not too far!--in my treatment of color vision, which was unconvincing at the time, even though it made all the right moves, as recent philosophical work on color has confirmed, in my opinion. But my biggest mistake in 1969 was positing a watershed somewhere in the brain, the "awareness line," with the following property: revisions of content that occurred prior to crossing the awareness line changed the content of consciousness; later revisions (or errors) counted as post-experiential tamperings; all adjustments of content, veridical or not, could be located, in principle, on one side or the other of this postulated line. The first breach of this intuitive but ultimately indefensible doctrine occurred in (1975a), in which I argued that the distinction between proper and improper entry into memory (and thence into introspective report, for instance) could not be sustained in close quarters. Related arguments appeared in "Two Approaches to Mental Imagery" (in 1978a) and "Quining Qualia" (1988), but only in (1991a, Dennett and Kinsbourne, 1992) was an alternative positive model of consciousness sketched in any detail, the Multiple Drafts model.
The best way to understand this model is in contrast to the traditional model, which I call the Cartesian Theater. The fundamental work done by any observer can be characterized as confronting something "given" and taking it--responding to it with one interpretive judgment or another. This corner must be turned somehow and somewhere in any model of consciousness. On the traditional view, all the taking is deferred until the raw given, the raw materials of stimulation, have been processed in various ways and sent to central headquarters. Once each bit is "finished" it can enter consciousness and be appreciated for the first time. As C. S. Sherrington (1934) put it:
The mental action lies buried in the brain, and in that part most deeply recessed from outside world that is furthest from input and output.
In the Multiple Drafts model, this single unified taking is broken up in cerebral space and real time; the judgmental tasks are fragmented into many distributed moments of micro-taking (Dennett and Kinsbourne, 1992). Since there is no place where "it all comes together," no line the crossing of which is definitive of the end of pre-conscious processing and the beginning of conscious appreciation, many of the familiar philosophical assumptions about human phenomenology turn out to be simply wrong, in spite of their traditional obviousness.
For instance, from the perspective provided by this model one can see more clearly the incoherence of the absolutist assumptions that make qualia seem like a good theoretical idea. Qualia are properties that philosophers purport to identify by ostension and example ("You know: the smell of the coffee, or the way that shade of red looks to you."), but then they are characterized as properties that are (obviously) independent of all one's reactive dispositions, properties that are properties of particular conscious states, rather than the causes or effects of such properties. But this requires the identification, in principle, of a privileged point in the causal chains between sense organs and behavior which is subsequent to all pre-experiential adjustments and antecedent to all post-experiential reactions. No such privileged locus can be coherently defined. It follows from the Multiple Drafts model that "inverted spectrum" and "absent qualia" thought experiments, like the thought experiments encountered in the propositional attitude literature (Twin Earth, what Pierre believes, beliefs about the shortest spy), are fundamentally misbegotten, and for a similar reason: the "common sense" assumption of "realism" with regard to the mental items in question--beliefs, in the first instance, qualia, in the second--is too strong.
These points have been further elaborated by me in a series of responses to the critiques of my work that have appeared in the wake of Consciousness Explained. The most extensive critical survey of my work to date is the double issue of Philosophical Topics, 1994, with seventeen essays by philosophers and a monograph-length response by me (1994). But see also Dahlbom (1993) and Dennett (1993a, 1993b, 1993c, 1993d).
4. Functionalism Reconsidered
Some philosophers of mind think of the field as a sort of intellectual game, the object of which is to define an "ism" and defend it against all objections. I have turned my back on that enterprise, but since taxonomy is inescapable, I hereby reluctantly characterize my overall theory of both content and consciousness as a variety of functionalism. It is not, however, a simple variety, and I have recently clarified my own variety by showing the excesses of simpler versions. What follows is excerpted from (1996).
One of the fundamental assumptions shared by many modern theories of mind is known as functionalism. The basic idea is well known in everyday life and has many proverbial expressions, such as handsome is as handsome does. What makes something a mind (or a belief, or a pain, or a fear) is not what it is made of, but what it can do. We appreciate this principle as uncontroversial in other areas, especially in our assessment of artifacts. What makes something a spark plug is that it can be plugged into a particular situation and deliver a spark when called upon. That's all that matters; its color or material or internal complexity can vary ad lib, and so can its shape, as long as its shape permits it to meet the specific dimensions of its functional role. In the world of living things, functionalism is widely appreciated: a heart is something for pumping blood, and an artificial heart, or a pig's heart, may do just about as well, and hence can be substituted for a diseased heart in a human body. There are more than a hundred chemically different varieties of the valuable protein lysozyme. What makes them all instances of lysozyme is what makes them valuable: what they can do. They are interchangeable, for almost all intents and purposes. In the standard jargon of functionalism, these functionally defined entities admit multiple realizations. Why couldn't artificial minds, like artificial hearts, be made real--realized--out of almost anything? Once we figure out what minds do, (what pains do, what beliefs do, and so on), we ought to be able to make minds (or mind parts) out of alternative materials that have those competences. And it has seemed obvious to many theorists--myself included--that what minds do is process information; minds are the control systems of bodies, and in order to execute their appointed duties they need to gather, discriminate, store, transform, and otherwise process information about the control tasks they perform. So far, so good. 
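The multiple-realization point translates directly into the programmer's notion of an interface, or duck typing. The example below is my own analogy (the class and function names are invented for illustration): the "body" demands only that its heart fill the functional role, and any realization that does so, whatever it is made of, is interchangeable with any other.

```python
# Multiple realization sketched as duck typing: what makes something a
# "heart" for the circulate() routine below is only its functional role --
# that it responds to pump() -- not the material that realizes it.

class BiologicalHeart:
    def pump(self):
        return "blood moved by muscle tissue"

class ArtificialHeart:
    def pump(self):
        return "blood moved by titanium and plastic"

def circulate(heart):
    # The "body" cares only about the functional role, not the realization.
    return heart.pump()

for heart in (BiologicalHeart(), ArtificialHeart()):
    print(circulate(heart))
```

The two classes share no code and no common ancestor; their interchangeability is purely functional, which is the sense in which functionally defined entities "admit multiple realizations."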
Functionalism, here as elsewhere, promises to make life easier for the theorist by abstracting away from some of the messy particularities of performance and focusing on the work that is actually getting done. But it's almost standard for functionalists to oversimplify their conception of this task, making life too easy for the theorist.
It's tempting to think of a nervous system (either an autonomic nervous system or its later companion, a central nervous system) as an information network tied at various restricted places--transducer or input nodes and effector or output nodes--to the realities of the body. A transducer is any device that takes information in one medium (a change in the concentration of oxygen in the blood, a dimming of the ambient light, a rise in temperature) and translates it into another medium. A photoelectric cell transduces light, in the form of impinging photons, into an electronic signal in the form of electrons streaming through a wire. A microphone transduces sound waves into signals in the same electronic medium. A bimetallic spring in a thermostat transduces changes in ambient temperature into bending of the spring (and that, in turn, is typically translated into the transmission of an electronic signal down a wire to turn a heater on or off). The rods and cones in the retina of the eye are the transducers of light into the medium of nerve signals; the eardrum transduces sound waves into vibrations, which eventually get transduced (by the hair cells on the basilar membrane) into the same medium of nerve signals. There are temperature transducers distributed throughout the body, and motion transducers (in the inner ear), and a host of other transducers of other information. An effector is any device that can be directed by some signal in some medium to make something happen in another "medium" (to bend an arm, close a pore, secrete a fluid, make a noise).
In a computer, there is a nice neat boundary between the "outside" world and the information channels. The input devices, such as the keys on the keyboard, the mouse, the microphone, the television camera, all transduce information into a common medium--the electronic medium in which "bits" are transmitted, stored, transformed. A computer can have internal transducers too, such as a temperature transducer that "informs" the computer that it is overheating, or a transducer that warns it of irregularities in its power supply, but these count as input devices, since they extract information from the (internal) environment and put it in the common medium of information processing.
It would be theoretically clean if we could insulate information channels from "outside" events in a body's nervous system, so that all the important interactions happened at identifiable transducers and effectors. The division of labor this would permit is often very illuminating. Consider a ship with a steering wheel located at some great distance from the rudder it controls. You can connect the wheel to the rudder with ropes, or with gears and bicycle chains, wires and pulleys, or with a hydraulic system of high-pressure hoses filled with oil (or water or whiskey!). In one way or another, these systems transmit to the rudder the energy that the helmsman supplies when turning the wheel. Or you can connect the wheel to the rudder with nothing but a few thin wires, through which electronic signals pass. You don't have to transduce the energy, just the information about how the helmsman wants the rudder to turn. You can transduce this information from the steering wheel into a signal at one end and put the energy in locally, at the other end, with an effector--a motor of some kind. (You can also add "feedback" messages, which are transduced at the motor-rudder end and sent up to control the resistance-to-turning of the wheel, so that the helmsman can sense the pressure of the water on the rudder as it turns. This feedback is standard, these days, in power steering in automobiles, but was dangerously missing in the early days of power steering.)
If you opt for this sort of system--a pure signaling system that transmits information and almost no energy--then it really makes no difference at all whether the signals are electrons passing through a wire or photons passing through a glass fiber or radio waves passing through empty space. In all these cases, what matters is that the information not be lost or distorted because of the time lags between the turning of the wheel and the turning of the rudder. This is also a key requirement in the energy-transmitting systems--the systems using mechanical linkages such as chains or wires or hoses. That's why elastic bands are not as good as unstretchable cables, even though the information eventually gets there, and why incompressible oil is better than air in a hydraulic system.
In modern machines, it is often possible in this way to isolate the control system from the system that is controlled, so that control systems can be readily interchanged with no loss of function. The familiar remote controllers of electronic appliances are obvious examples of this, and so are electronic ignition systems (replacing the old mechanical linkages) and other computer-chip-based devices in automobiles. And up to a point, the same freedom from particular media is a feature of animal nervous systems, whose parts can be quite clearly segregated into the peripheral transducers and effectors, and the intermediary transmission pathways. One way of going deaf, for instance, is to lose your auditory nerve to cancer. The sound-sensitive parts of the ear are still intact, but the transmission of the results of their work to the rest of the brain has been disrupted. This destroyed avenue can now be replaced by a prosthetic link, a tiny cable made of a different material (wire, just as in a standard computer), and since the interfaces at both ends of the cable can be matched to the requirements of the existing healthy materials, the signals can get through. Hearing is restored. It doesn't matter at all what the medium of transmission is, just as long as the information gets through without loss or distortion.
This important theoretical idea sometimes leads to serious confusions, however. The most seductive confusion could be called the Myth of Double Transduction: first, the nervous system transduces light, sound, temperature, and so forth into neural signals (trains of impulses in nerve fibers) and second, in some special central place, it transduces these trains of impulses into some other medium, the medium of consciousness! That's what Descartes thought, and he suggested that the pineal gland, right in the center of the brain, was the place where this second transduction took place--into the mysterious, nonphysical medium of the mind. Today almost no one working on the mind thinks there is any such nonphysical medium. Strangely enough, though, the idea of a second transduction into some special physical or material medium, in some yet-to-be-identified place in the brain, continues to beguile unwary theorists. It is as if they saw--or thought they saw--that since peripheral activity in the nervous system was mere sensitivity, there had to be some more central place where the sentience was created. After all, a live eyeball, disconnected from the rest of the brain, cannot see, has no conscious visual experience, so that must happen later, when the mysterious x is added to mere sensitivity to yield sentience.
The reasons for the persistent attractiveness of this idea are not hard to find. One is tempted to think that mere nerve impulses couldn't be the stuff of consciousness--that they need translation, somehow, into something else. Otherwise, the nervous system would be like a telephone system without anybody home to answer the phone, or a television network without any viewers--or a ship without a helmsman. It seems as if there has to be some central Agent or Boss or Audience, to take in (to transduce) all the information and appreciate it, and then "steer the ship".
The idea that the network itself--by virtue of its intricate structure, and hence powers of transformation, and hence capacity for controlling the body--could assume the role of the inner Boss and thus harbor consciousness, seems preposterous. Initially. But some version of this claim is the materialist's best hope. Here is where the very complications that ruin the story of the nervous system as a pure information-processing system can be brought in to help our imaginations, by distributing a portion of the huge task of "appreciation" back into the body.
It has always been clear that wherever you have transducers and effectors, an information system's "media neutrality," or multiple realizability, disappears. In order to detect light, for instance, you need something photosensitive--something that will respond swiftly and reliably to photons, amplifying their subatomic arrival into larger-scale events that can trigger still further events. (Rhodopsin is one such photosensitive substance, and this protein has been the material of choice in all natural eyes, from ants to fish to eagles to people. Artificial eyes might use some other photosensitive element, but not just anything will do.) In order to identify and disable an antigen, you need an antibody that has the right shape, since the identification is by the lock-and-key method. This limits the choice of antibody building materials to molecules that can fold up into these shapes, and this severely restricts the molecules' chemical composition--though not completely (as the example of lysozyme varieties shows). In theory, every information-processing system is tied at both ends, you might say, to transducers and effectors whose physical composition is dictated by the jobs they have to do; in between, everything can be accomplished by media-neutral processes.
The control systems for ships, automobiles, oil refineries, and other complex human artifacts are media-neutral, as long as the media used can do the job in the available time. The neural control systems for animals, however, are not really media-neutral--not because they have to be made of particular materials in order to generate that special aura or buzz or whatever, but because they evolved as the control systems of organisms that already were lavishly equipped with highly distributed control systems, and the new systems had to be built on top of, and in deep collaboration with, these earlier systems, creating an astronomically high number of points of transduction. We can occasionally ignore these ubiquitous interpenetrations of different media--as, for instance, when we replace a single nerve highway like the auditory nerve with a prosthetic substitute--but only in a fantastic thought experiment could we ignore these interpenetrations in general.
For example: The molecular keys needed to unlock the locks that control every transaction between nerve cells are glutamate molecules, and dopamine molecules, and norepinephrine molecules (among others), but "in principle" all the locks could be changed--that is, replaced with a chemically different system. After all, the function of the chemical depends on its fit with the lock, and hence on the subsequent effects triggered by the arrival of this turn-on message, and not on anything else. But the distribution of responsibility throughout the body makes this changing of the locks practically impossible. Too much of the information processing--and hence information storage--is already embedded in these particular materials. And that's another good reason why, when you make a mind, the materials matter. So there are two good reasons why the materials matter: speed, and the ubiquity of transducers and effectors throughout the nervous system. I don't think there are any other good reasons.
These considerations lend support to the intuitively appealing claim often advanced by critics of functionalism: it really does matter what you make a mind out of. You couldn't make a sentient mind out of silicon chips, or wire and glass, or beer cans tied together with string. Are these reasons for abandoning functionalism? Not at all. In fact, they depend on the basic insight of functionalism for their force.
The only reason minds depend on the chemical composition of their mechanisms or media is that in order to do the things these mechanisms must do, they have to be made, as a matter of biohistorical fact, from substances compatible with the preexisting bodies they control. Functionalism is opposed to vitalism and other forms of mysticism about the "intrinsic properties" of various substances. There is no more anger or fear in adrenaline than there is silliness in a bottle of whiskey. These substances per se are as irrelevant to the mental as gasoline or carbon dioxide. It is only when their abilities to function as components of larger functional systems depend on their internal composition that their so-called "intrinsic nature" matters.
The fact that your nervous system, unlike the control system of a modern ship, is not an insulated, media-neutral control system--the fact that it "effects" and "transduces" at almost every juncture--forces us to think about the functions of its parts in a more complicated (and realistic) way. This recognition makes life slightly more difficult for functionalist philosophers of mind. A thousand philosophical thought experiments (including my own story, "Where am I?") have exploited the intuition that I am not my body, but my body's . . . owner. In a heart transplant operation, you want to be the recipient, not the donor, but in a brain transplant operation, you want to be the donor--you go with the brain, not the body. In principle (as many philosophers have argued), I might even trade in my current brain for another, by replacing the medium while preserving only the message. I could travel by teleportation, for instance, as long as the information was perfectly preserved. In principle, yes--but only because one would be transmitting information about the whole body, not just the nervous system. One cannot tear me apart from my body leaving a nice clean edge, as philosophers have often supposed. My body contains as much of me, the values and talents and memories and dispositions that make me who I am, as my nervous system does.
The intermediate ontological position I recommend--I call it "mild realism"--might be viewed as my attempt at a friendly amendment to Ryle's (1949) tantalizing but unpersuasive claims about category mistakes and different senses of "exist" (see especially 1969, chapter 1, and 1991d), but it also owes a large debt to Quine's arguments about the place of intentional idioms in science, and the role of science as ontological arbiter. My contribution was seeing how to put the views of my mentors together, but I also went beyond them by paying attention to the actual details of the sciences of the mind--and asking new philosophical questions about those details, questions that engaged the scientists as well as the philosophers. One of the advantages of being a philosopher among scientists, I have discovered, is that even a modest display of empirical knowledge (or just curiosity) comes as such a pleasant surprise that one is treated to a first-class education, enthusiastically and patiently laid on. Over the years, I have been tutored by many of the leading theoreticians in artificial intelligence, psychology and neuroscience, in hundreds of hours of discussion. I doubt that any other philosopher has been so privileged. This base camp in the sciences has permitted me to launch a variety of differently posed arguments, drawing on overlooked considerations. These arguments do not simply add another round to the cycle of debate in philosophy, but have some hope of dislodging the traditional intuitions with which philosophers previously had to start. For instance, from this vantage point one can see the importance of evolutionary models (1969, 1974, 1978a, 1983, 1984a, 1990b, 1991f, 1995) and concomitantly, the perspective of cognitive science as reverse engineering (1989, 1991, 1992a, 1995), which goes a long way to overcoming the conservative mindset of pure philosophy.
The idea that a mind could be a contraption composed of hundreds or thousands of gadgets takes us a big step away from the overly familiar mind on which most philosophers have concentrated.
My writing style also owes a debt to Quine and Ryle. No sentence from Quine or Ryle is ever dull, and their work always exhibits the importance of addressing an audience of non-philosophers, even when they knew that philosophers would be perhaps 95% of their actual and sought-for audience. They also both embodied a healthy skepticism about the traditional methods and presuppositions of our so-called discipline, a skepticism I have inherited, and I have attempted to follow their example in my own writing. I have also been self-conscious about philosophical methods and their fruits, and have presented my reflections in various meta-level digressions, in particular about the role of intuition pumps in philosophy (1980, 1984a, 1991a), and about the besetting foible of philosophers: mistaking failures of imagination for insights into necessity.
By the standards of much of the mainstream literature in the field, I am definitely an impure philosopher of mind: I refuse to conduct my investigations by the traditional method of definition and formal argument, and insist that philosophers must gird their imaginations with the relevant science if they want their intuitions to be granted any authority. (For my most trenchant observations on this score, see "Get Real," 1994.) Moreover, on both main topics, content and consciousness, I maintain radical positions; if I am right, much of the work at the presumed cutting edge of the field is beyond salvage. I thus cut myself off from some of the controversies that capture the imaginations of others in the field, but the philosophical problems that arise directly in non-philosophical research in cognitive science strike me as much more interesting, challenging, and substantive. So I concentrate on them: the frame problem of Artificial Intelligence (1984b, 1991e), problems about mental imagery and "filling in" (1992b), the binding problem in neuroscience and the problem of temporal anomalies (1991a, Dennett and Kinsbourne, 1992). These are real, as opposed to artifactual, problems of mental representation, but they are still largely conceptual problems--of just the sort philosophers love to tackle, and are actually well-equipped to solve. Cognitive science has reached the stage where scientists have to be amateur philosophers to continue their work. They don't do all that badly on their own, but they can use our help, and welcome it, if we approach in the spirit of partnership that was familiar to Descartes and Leibniz, and even to Kant, but which has since largely faded from the tradition.
Anscombe, G. E. M., 1957, Intention, Oxford: Blackwell.
Dahlbom, Bo, ed., 1993, Dennett and his Critics: Demystifying Mind, Oxford: Blackwell.
Dennett, D. C., 1969, Content and Consciousness, London: Routledge & Kegan Paul.
- 1971, "Intentional Systems," J.Phil., 68, pp. 87-106.
- 1973, "Mechanism and Responsibility," in T. Honderich, ed., Essays on Freedom of Action, London: Routledge & Kegan Paul.
- 1974, "Why the Law of Effect Will Not Go Away," Journal of the Theory of Social Behaviour, 5, pp.169-87.
- 1975, "Are Dreams Experiences?" Phil. Review, 73, pp. 151-71.
- 1975b, "Brain Writing and Mind Reading," in K. Gunderson, ed., Language, Mind, and Meaning, Minnesota Studies in Philosophy of Science, 7, Minneapolis: Univ. Minn. Press.
- 1978a, Brainstorms: Philosophical Essays on Mind and Psychology, Montgomery, VT: Bradford.
- 1978b, "Current Issues in the Philosophy of Mind," American Philosophical Quarterly, pp. 249-61.
- 1980, "The Milk of Human Intentionality," Behavioral and Brain Sciences, 3, pp.428-30.
- 1982, "Beyond Belief" in A. Woodfield, ed., Thought and Object, Oxford: Oxford Univ. Press.
- 1983, "Intentional Systems in Cognitive Ethology: the 'Panglossian Paradigm' Defended," Behavioral and Brain Sciences, 6, pp.343-390.
- 1984a, Elbow Room: The Varieties of Free Will Worth Wanting, Cambridge, MA: MIT Press/A Bradford Book.
- 1984b "Cognitive Wheels: the Frame Problem of AI," in C. Hookway, ed., Minds, Machines and Evolution, Cambridge: Cambridge Univ. Press.
- 1986, "The Logical Geography of Computational Approaches: A View from the East Pole," in R. Harnish and M. Brand, eds., The Representation of Knowledge and Belief, Tucson: University of Arizona Press.
- 1987a, The Intentional Stance, Cambridge, MA: MIT Press/A Bradford Book.
- 1987b, "Evolution, Error and Intentionality," in 1987a.
- 1988, "Quining Qualia," in A. Marcel and E. Bisiach, eds., Consciousness in Contemporary Science, Oxford: Oxford Univ. Press.
- 1990a, "Memes and the Exploitation of Imagination," Journal of Aesthetics and Art Criticism, 48, pp.127-135.
- 1990b, "The Interpretation of Texts, People and Other Artifacts," Philosophy and Phenomenological Research, 50, pp. 177-194.
- 1991a, Consciousness Explained, Boston: Little Brown.
- 1991b, "Mother Nature versus the Walking Encyclopedia," in W. Ramsey, S. Stich and D. Rumelhart, eds., Philosophy and Connectionist Theory, Hillsdale, NJ: Erlbaum.
- 1991c, "Two Contrasts: Folk Craft versus Folk Science and Belief versus Opinion," in J. Greenwood, ed., The Future of Folk Psychology: Intentionality and Cognitive Science, Cambridge: Cambridge Univ. Press.
- 1991d, "Real Patterns," J. Phil., 88, pp. 27-51.
- 1991e, "Producing Future by Telling Stories," in K. M. Ford and Z. Pylyshyn, eds., The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence, Norwood, NJ: Ablex.
- 1991f, "Ways of Establishing Harmony," in B. McLaughlin, ed., Dretske and his Critics, Oxford: Blackwell, pp. 119-30.
- 1992a, "Cognitive Science as Reverse Engineering: Several Senses of 'Top-Down' and 'Bottom-Up'" in D. Prawitz, B. Skyrms, and D. Westerstahl, eds., Proc. of the 9th International Congress of Logic, Methodology and Philosophy of Science, North-Holland.
- 1992b, "Filling In versus Finding Out: a Ubiquitous Confusion in Cognitive Science," in H. Pick, P. Van den Broek, and D. Knill, eds., Cognition: Conceptual and Methodological Issues, Washington, DC: American Psychological Association.
- 1993a "Living on the Edge," (reply to seven essays on Consciousness Explained), Inquiry, 36, March 1993.
- 1993b "Caveat Emptor" (reply to Mangan, Toribio, Baars and McGovern), Consciousness and Cognition, 2, (1), 48-57, Mar. 1993.
- 1993c "The Message is: There is no Medium" (reply to Jackson, Rosenthal, Shoemaker & Tye), Philosophy & Phenomenological Research, 53, (4), 889-931, Dec. 1993.
- 1993d "Back from the Drawing Board" (reply to critics) in Dennett and his Critics: Demystifying Mind, Bo Dahlbom, ed., Oxford: Blackwell, 1993.
- 1994, "Get Real" (reply to my critics), Philosophical Topics, 22, pp. 505-68.
- 1995, Darwin's Dangerous Idea: Evolution and the Meanings of Life, New York: Simon & Schuster.
- 1996, Kinds of Minds, New York: Basic Books, and London: Weidenfeld & Nicolson (part of the Science Masters series).
Dennett, D. C., and Kinsbourne, M., 1992, "Time and the Observer: the Where and When of Consciousness in the Brain," Behavioral and Brain Sciences, June.
Lycan, W. G., 1981, "Form, Function, and Feel," J.Phil., 78, pp. 24-49.
Marr, D., 1982, Vision, San Francisco: Freeman.
Miller, G., Galanter, E., and Pribram, K., 1960, Plans and the Structure of Behavior, New York: Holt, Rinehart and Winston.
Newell, A., 1982, "The Knowledge Level," Artificial Intelligence, 18, pp.81-132.
Putnam, H. 1960, "Minds and Machines," in S. Hook, ed., Dimensions of Mind, NYU Press.
-1962, "Dreaming and 'Depth Grammar'," in R. Butler, ed., Analytical Philosophy First Series, Oxford: Blackwell.
-1963, "Brains and Behavior," in R. Butler, ed., Analytical Philosophy Second Series, Oxford, Blackwell.
-1964, "Robots: Machines or Artificially Created Life?" J.Phil., 61, pp.668-91.
-1967a, "The Mental Life of Some Machines," in H. Castaneda, ed., Intentionality, Minds and Perception, Detroit, Wayne State Univ. Press.
-1967b, "The Nature of Mental States," in Capitan and Merrill, eds., Art, Mind and Religion, Pittsburgh: Univ. of Pitt. Press.
Quine, W. V. O., 1960, Word and Object, Cambridge, MA: MIT Press.
Ryle, G., 1949, The Concept of Mind, London: Hutchinson.
Sherrington, C. S., 1934, The Brain and Its Mechanism, London: ??
Taylor, C., 1964, The Explanation of Behaviour, London: Routledge & Kegan Paul.