A substantially revised version of this paper is now in print. This material has been published in Journal of Consciousness Studies, Volume 12, Number 12, December 2005, pp. 3-25, the only definitive repository of the content that has been certified and accepted after peer review. Copyright and all rights therein are retained by Imprint Academic. This material may not be copied or reposted without explicit permission.

What RoboDennett Still Doesn’t Know[1]

 

I. Introduction

 

Mary, the color-deprived neuroscientist, embodies perhaps the best known form of the knowledge argument against physicalism.[2] She is a better-than-world-class[3] neuroscientist. Living in an entirely black-and-white environment, she has learnt all the physical facts about human color vision. She is supposed to be enough like us to have the sort of experiences that we have, but also clever enough to know (and understand) all the pertinent facts about color vision, and to be able to work out all the relevant consequences of the facts which she knows.

 

The key premise of this form of the knowledge argument is that when Mary is finally released from her black and white captivity and shown colored objects, she will learn something: namely, what it is actually like to see in color. Indeed, in Frank Jackson’s original paper, he takes it to be “just obvious” that Mary will “learn something about the world and our visual experience of it”[4] on her release.

 

The following, then, is a simple version of Jackson’s original Knowledge Argument (all premises refer to Mary’s pre-release status):

 

a)      Mary knows all the physical facts about color vision

b)     Mary will learn something about what it is like to see in color on her release

Presumed corollary:

Mary does not know all the facts about color vision

c)      Physicalism requires that if Mary knows all the physical facts then she knows all the facts

Conclusion:

      Physicalism is false

 

Premise b) both implies and is implied by what I will call ‘the Mary intuition’. That is, the intuition that Mary, in the circumstances described, will still learn something on first seeing a colored object (equivalently, that there is something that Mary, in the circumstances described, does not yet know, namely what it is like to see in color). When I talk about belief in the Mary intuition, I will mean the belief that premise b) can be true in the circumstances given in premise a).

 

Jackson himself has presented a clarified form of his argument somewhat along the above lines.[5] However, Paul Churchland has argued persuasively[6] that every possible form of Jackson’s argument requires some equivalent of premise c) above, which only appears to go through because of equivocation on two different senses of the word “knows”.

 

The knowledge argument, qua argument against physicalism, fails, on Churchland’s account, not because Mary learns nothing on her release, but rather because she comes to “know”, in a new way, something which she already “knew” as a set of propositional facts. The physical nature of this ‘new’ type of knowledge is something which Churchland addresses in detail, and which I will discuss further below.

 

If we buy into Churchland’s reading of Jackson’s argument then it is trivially (or at least, no longer interestingly) false. We can consistently believe in both physicalism and the Mary intuition. The intuition that Mary will learn something on her release (equivalently, that there is something she does not know, before her release) can be perfectly compatible with physicalism, on Churchland’s view, because both types of knowledge (the type of knowledge she gains, and the type of knowledge she has before her release) are physically definable, and because possession of either type of knowledge does not necessarily imply possession of the other.

 

If we accept Churchland’s arguments, can we consider interesting discussion on the knowledge argument closed? Unfortunately not, for the above, apparently straightforward, physicalist position on the logical status of the knowledge argument remains radically different to the position held by both Daniel Dennett (who is, of course, another die-hard physicalist) and Frank Jackson (now a recent convert to belief in physicalism).

 

The following quotes should make Jackson’s current position clear. He feels that “after the strength of the case for physicalism has been properly absorbed”,[7] one is forced to recognize that there must be a mistake somewhere in the knowledge argument. He is “reluctantly”[8] led to conclude that “The redness of our reds can be deduced in principle from enough [information] about the physical nature of our world despite the manifest appearance to the contrary that the knowledge argument trades on.”[9] Quite simply, Jackson still sees no problem with the logical validity of the knowledge argument. Instead, he now feels that the evidence against its conclusion is so overwhelming that the argument must serve as a reductio of its own “obvious” premise b), that is, of the Mary intuition.

 

Dennett’s very similar position is made clear in his recent paper on the subject, “What RoboMary Knows”.[10] For Dennett, “most people’s unexamined intuitions imply dualism” (for which, in context, read “the Mary intuition is incompatible with physicalism”). The explicit objective of Dennett’s new paper is to demonstrate, in detail, exactly why the Mary intuition is an anti-physicalist confusion. He intends the central portion of his paper as “a positive account that just might convince a few philosophers that they really can imagine [the falsity of the Mary intuition] after all.”

 

But why should Dennett believe that “most people’s unexamined intuitions imply dualism”? Or that philosophers need to understand how the Mary intuition can be false in order to understand how physicalism can be true? It seems that Dennett believes that the truth of the Mary intuition would present a serious problem for physicalism for precisely the same reason as Jackson does: Dennett also believes that there is some logically valid form of the knowledge argument implying a fundamental incompatibility between the Mary intuition and physicalism.

 

Can we make any sense of these two quite different positions, each of which is held by highly respectable, avowedly physicalist philosophers, and each of which is entirely opposed to the other concerning the physical possibility of the truth of the Mary intuition and concerning the logical status of the knowledge argument? I believe we can. I will argue that both Daniel Dennett and Frank Jackson are in fact using yet another variation on the knowledge argument (in Jackson’s case, across a change of position about the status of the second premise; in Dennett’s case, having always believed that the second premise must be false), as follows:

 

1)      Mary knows, as propositional facts, all the physical facts about color vision that can be known as propositional facts (and she can reason about these propositional facts as perfectly as may be required by us for any subsequent steps in this argument)

Corollary: Mary has full mastery of all possible propositional knowledge concerning color vision

2)      Mary, even if she is as described in premise 1), is still unable to come to know what it is like to see in color without having actually experienced a colored object

Corollary: Mary (pre-release) is not able to come into possession of all possible types of knowledge concerning color vision

3)      If physicalism is true, full mastery of all the propositional knowledge of a physical situation should necessarily imply the ability to attain all possible types of knowledge of that situation

Conclusion:

Physicalism is false

 

Again, and henceforth, by ‘the Mary intuition’, I mean the intuition that premise 2) can be true even under the circumstances described in premise 1).

 

Now, as we will see, Dennett believes that the situation described in premise 1) can be made into a usable thought experiment; he also denies the conclusion of the above argument; and he feels that it is necessary to deny premise 2) in order to deny this conclusion. I think, then (and this will be discussed in detail below) that Dennett must believe in something very similar to premise 3), for the undesirable conclusion simply does not follow from the truth of premises 1) and 2) alone.

 

The premise-for-premise parallel with the simpler form of the knowledge argument given earlier is obvious. Therefore, perhaps the simplest resolution of the differences between the two incompatible positions I have outlined would be to claim that premise 3) also equivocates on two senses of “knows” (as, Churchland has claimed, do all forms of its parallel, premise c)). In this case, Churchland’s position would be correct without reservation, and Dennett and Jackson would both simply have failed to appreciate the force of his arguments. I will argue that the correct situation is more complex than that, and more interesting.

 

The explicit aim of Dennett’s new paper is to show that Mary will necessarily be able to come to know what it is like to see in color, if she knows all the physical facts about color vision. There is a key logical point here, on which the present discussion hinges. If there are some physical beings for whom knowing all the facts about color vision is sufficient for coming to ‘know what it is like’, that is one thing. If all physical beings who know all the facts about color vision must necessarily be able to come to know what it is like, that is quite another. If the latter situation is the case, then Dennett and Jackson are right, the Mary intuition is incompatible with physicalism, and something very similar to premise 3) is true. But if only the former situation is the case, then belief in the Mary intuition is not necessarily an anti-physicalist confusion after all; rather, it can be seen as no more than a belief that Mary has one type of physically possible cognitive architecture rather than another.

 

The current paper is an attempt to demonstrate that the first of the two situations described above is in fact the case: out of the range of cognitive architectures which are physically possible in our world, some are compatible with the Mary intuition and some are not. Dennett’s new paper is an attempt to demonstrate that the alternative situation given above is the case: that no cognitive architecture which is physically possible in our world is compatible with the Mary intuition.

 

I believe we can establish that Dennett’s line of reasoning is flawed, but the flaw is not as simple as an equivocation on “knows”. Rather it goes to the heart of functionalism, and hinges on whether or not Dennett is correct to claim that there is no “fact of the matter”[11] about what subjective experience consists in.

 

II. The Blue Banana Alternative

 

Dennett’s previous major position statement on the knowledge argument occurred in his book “Consciousness Explained”.[12] There, he first outlined in print what he believes to be a perfectly legitimate alternative ending to the Mary story. Instead of experiencing “surprise and delight”[13] on being released from her room and first seeing colored objects, something quite different happens. Mary’s captors decide to trick her, and the first colored object they allow her to see is a blue banana. (Dennett doesn’t explicitly state as much, but presumably Mary’s captors are expecting Mary to say to herself something like, “Ah, so that is what yellow looks like!”) However, Mary isn’t fooled for a moment; she takes one look at the blue banana and says, “Hey! You tried to trick me! Bananas are yellow, but this one is blue!” and further, “I was not in the slightest surprised by my experience of blue (what surprised me was that you would try such a second-rate trick on me).”

 

Dennett states that both students and professional philosophers have had considerable problems with his alternative ending to the story[14] . What is he saying? Is he seriously trying to claim that Mary has “figured out” what it is like to see in color without ever having seen anything colored? That is, of course, exactly what he is trying to claim. And he is not just stating that Mary will know enough about her own physical reactions to color to be able to recognize them when they first occur, and so work out what color she has seen. He is, rather, taking the following much stronger position: that knowing as much about your own reactions in advance of the fact as Mary does is logically equivalent to knowing what it is like to see color in advance of the fact.

 

He explicitly states that he knows of no “distinction … between knowing ‘what one would say and how one would react’ and knowing ‘what it is like’. If there is such a distinction, it has not yet been articulated and defended, by [anyone], so far as I know.”[15]

 

To many, of course (even to those who hold to the truth of some form of physicalism) this current, clear and explicit statement of position by Dennett will itself seem extreme. This is why he has felt compelled to return to the fray, and to “attempt to convince a few philosophers” that his position might be correct after all.

 

III. Introducing RoboMary

 

Dennett’s chosen weapon for his final attack on the knowledge argument is RoboMary, a perfected robot neuroscientist. Dennett uses RoboMary because he needs to discuss the physical details of her behavior and thought processes at a level of detail not currently available to human neuroscience. Using RoboMary he hopes to show, by analogy, how a human-like Mary could also come to know “what it is like” in advance of the experience.

 

I am happy with this approach, and agree with Dennett that a physicalist account of what is really going on in the Mary thought experiment will require a discussion of the physical details of the ‘agent’ under discussion. As Dennett says:

 

“If materialism is true, it should be possible (‘in principle!’) to build a material thing – call it a robot brain – that does what a brain does, and hence instantiates the same theory of experience that we do. Those who rule out my scenario as irrelevant from the outset are not arguing for the falsity of materialism; they are assuming it”.

 

Dennett wants to make sure that RoboMary is a well constructed and well labeled “intuition pump”. He succeeds admirably. In fact, once I have summarized here Dennett’s key “knobs” and “settings” for RoboMary, she will make an ideal subject on which to attempt some “cooperative reverse-engineering” of my own.

 

There are two major models of RoboMary, either of which, it is argued, can come to know what it is like to see in color in advance of the experience. As Dennett outlines these two versions of RoboMary he considers and refutes many possible objections to his account. On many, indeed most, of these points I am fully in agreement with Dennett. Therefore I will only give an outline of the key facts about RoboMary here. With the caveat, then, that if a particular objection to RoboMary isn’t addressed here, it probably is addressed in Dennett’s original paper, I will proceed.

 

IV. Unlocked RoboMary

 

The basic RoboMary model is (for reasons presumably lost in the mists of sci-fi time) a standard Mark 19 robot. The easiest thing to do will be to quote directly the key points from Dennett’s story about her (omitting, as just discussed, the several objections to this story that Dennett has already successfully addressed).

 

“1. RoboMary is a standard Mark 19 robot, except that she was brought on line without color vision; her video cameras are black and white, but everything else in her hardware is equipped for color vision, which is standard in the Mark 19.

 

“2. While waiting for a pair of color cameras to replace her black-and-white cameras, RoboMary learns everything she can about the color vision of Mark19s. She even brings colored objects into her prison cell along with normally color-sighted Mark 19s and compares their responses – internal and external – to hers.

 

“3. She learns all about the million-shade color-coding system that is shared by all Mark19s.

 

“4. Using her vast knowledge, she writes some code that enables her to colorize the input from her black and white cameras (à la Ted Turner's cable network) according to voluminous data she gathers about what colors things in the world are, and how Mark19s normally encode these. So now when she looks with her black-and-white cameras at a ripe banana, she can first see it in black and white, as pale gray, and then imagine it as yellow (or any other color) by just engaging her colorizing prosthesis, which can swiftly look up the standard ripe-banana color-number-profile and digitally insert it in each frame in all the right pixels. After a while, she decides to leave the prosthesis turned on all the time, automatically imagining the colors of things as they come into focus in her black and white camera eyes.

 

“5. She wonders if the ersatz coloring scheme she's installed in herself is high fidelity. So during her research and development phase, she checks the numbers in her registers (the registers that transiently store the information about the colors of the things in front of her cameras) with the numbers in the same registers of other Mark 19s looking at the same objects with their color camera eyes, and makes adjustments when necessary, gradually building up a good version of normal Mark 19 color vision.

 

“6. The big day arrives. When she finally gets her color cameras installed, and disables her colorizing software, and opens her eyes, she notices . . . . nothing. In fact, she has to check to make sure she has the color cameras installed. She has learned nothing. She already knew exactly what it would be like for her to see colors just the way other Mark 19s do.”

 

For what it is worth, I buy into this story. There don’t seem to me to be any interesting reasons why RoboMary can’t do what Dennett claims, above, that she can do. And if she can indeed do the above then she would indeed come to know what it is like to see in color in advance of the experience. But an objection that Dennett considers concerning his step 4 is the crucial one, in terms of relating the story of unlocked RoboMary to the story of Mary. The question is, is unlocked RoboMary cheating or not when she writes directly to her color coding registers? Perhaps, as Dennett himself says, RoboMary’s colorizing system is simply the “robot version … of trans-cranial magnetic stimulation”: cheating in the sense of using a non-surprising way of coming to know what it is like, which doesn’t truly involve deducing what it is like from the facts one knows. Or perhaps we should accept that “RoboMary is entitled to use her imagination, and that is just what she is doing after all, no hardware additions are involved”.
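Dennett’s step 4 is concrete enough to sketch. The following toy Python rendering is my own illustration, not Dennett’s code: the object labels, the color-number values and the frame format are all invented for the example; only the look-up-and-overwrite idea comes from the story.

    # Toy sketch of RoboMary's colorizing prosthesis (my illustration, not
    # Dennett's). A frame is a grid of (object_label, gray_value) pixels; the
    # prosthesis looks up each object's standard color number and inserts it.

    STANDARD_COLOR_PROFILE = {      # invented entries in the million-shade scheme
        "ripe_banana": 457_214,     # the "standard ripe-banana color-number-profile"
        "tomato": 910_338,
    }

    def colorize(frame):
        """Replace each labeled pixel's gray value with the looked-up color number."""
        return [
            [(label, STANDARD_COLOR_PROFILE.get(label, gray))
             for (label, gray) in row]
            for row in frame
        ]

    gray_frame = [[("ripe_banana", 128), ("background", 40)]]
    print(colorize(gray_frame))
    # [[('ripe_banana', 457214), ('background', 40)]]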

 

Dennett is happy to vary this setting in both directions. For reasons related to the above point about imagination, my understanding is that Dennett thinks there is no truly principled reason to rule out even this unlocked version of RoboMary as a counter-example to the Mary intuition. (I will argue below that there is, in fact, a principled reason to rule out unlocked RoboMary’s route to coming to know what it is like as cheating.) Nevertheless Dennett is happy to take on board this objection, and to consider next a much more challenging version of the RoboMary story.

 

V. Locked RoboMary

 

As Dennett says, unlocked RoboMary was “Too Easy! Now let’s turn the knob and consider the way RoboMary must proceed if she is prohibited from tampering with her color-experience registers.” The use of a robot instead of a human in the thought experiment once again pays dividends. As Dennett says, we have no idea how “Mary could be crisply rendered incapable of using her knowledge to put her own brain into the relevant imaginative and experiential states”, but we can easily describe something equivalent for RoboMary. We can put a software system in place which automatically converts all the color values in RoboMary’s visual array to black and white (or rather, grayscale) values before any further processing takes place. Now let’s put unbreakable software security on this system. Suddenly RoboMary really can’t “imagine” herself into any normal color vision state. She can’t even create color ‘phosphenes’ (one objection to the original Mary story) by any robot equivalent of rubbing her eyes. The only way her color registers can ever come to contain any usable color values is for the software security system to be disabled which, let us assume, requires a hardware change and so can be treated as unambiguous cheating.
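Since this lock does real work in the argument, a minimal sketch may help fix ideas. Everything below (the register names, the luminance weights) is my own invention; the point is only that color is destroyed before any downstream processing, with no software path around the conversion.

    # Toy sketch of locked RoboMary's color lock (my invention, not Dennett's).

    def _to_gray(r, g, b):
        # standard luminance weights; the exact numbers are not the point
        return 0.299 * r + 0.587 * g + 0.114 * b

    class LockedVisualArray:
        """Everything downstream of the cameras reads grayscale only."""
        def __init__(self):
            self._registers = []

        def ingest(self, rgb_frame):
            # the lock: color is wiped before any further processing
            self._registers = [_to_gray(r, g, b) for (r, g, b) in rgb_frame]

        def read(self):
            return list(self._registers)   # downstream systems get gray only

    eye = LockedVisualArray()
    eye.ingest([(255, 40, 40), (250, 220, 30)])   # a red pixel, a yellow pixel
    print(eye.read())                             # two bare gray values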

 

Surely then there is no way for RoboMary to deduce what it is like to see in color, is there? Oh yes there is, says Dennett:

 

“This doesn't faze her for a minute, however. Using a few terabytes of spare (undedicated) RAM, she builds a model of herself and from the outside, just as she would if she were building a model of some other being’s color vision, she figures out just how she would react in every possible color situation.”

 

This is pure heterophenomenology. For Dennett, there can be no distinction between the full facts about “what one would say and how one would react” and the full facts about “what it is like”. Thus, if RoboMary can indeed build such a model, she can indeed come to know what it is like. QED.

 

But the preceding is a reconstructed abbreviation of Dennett’s argument. Let’s follow the actual details of the story which Dennett gives. Rather than mix and match direct and indirect quotation, I will paraphrase this section of Dennett’s argument. Imagine, says Dennett, a situation in which (locked) RoboMary is shown a ripe tomato. She can see it and touch it and find out all about its bulginess and softness. She can also consult an encyclopedia to find out exactly what shade of red it would be, if only her color registers were unlocked. RoboMary will react in various ways to this stimulus, resulting in some complex, internal, gray tomato experiencing state, state A. But at the same time, she can feed into her internal model of herself the true red color values that she knows she would have seen if her color vision equipment was normal for Mark 19s. So her model will go into a different complex state, a red-tomato-experiencing state, state B. This should be fine: the model RoboMary doesn’t have to be ‘locked’, just because RoboMary is. She knows all about how she would work if she was not locked, and so she should be able to build and operate an unlocked model just as Dennett describes. So now, returning to direct quotation, locked RoboMary compares state A with state B and:

 

“being such a clever, indefatigable and nearly omniscient being makes all the necessary adjustments and puts herself into state B.”

 

Dennett is at pains to point out that state B really isn’t an illicit state in the sense in which direct tampering with color registers is an illicit state. State B is the state that RoboMary would have gone into if she had had the color experience, even though she hasn’t in fact had it: she isn’t making herself experience color (cheating); she is making herself be as she would be if she had experienced color (not cheating).[16]
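To keep the moving parts of this story straight, here is a toy rendering of the state A / state B comparison. Every detail (the state representation, the ‘adjustment’ step) is invented for the illustration; only the shape of the procedure is Dennett’s.

    # Toy rendering of locked RoboMary's procedure (all details invented).
    # 'state' is just a dict standing in for a complex internal configuration.

    def react(color_values, locked):
        """How a Mark 19 responds to a tomato, given its register contents."""
        percept = "gray" if locked else color_values   # the lock wipes color
        return {"percept": percept, "judgement": "tomato, bulgy, soft"}

    tomato_red = "shade 910338"                # looked up in the encyclopedia

    state_A = react(tomato_red, locked=True)   # locked RoboMary herself
    state_B = react(tomato_red, locked=False)  # her unlocked self-model

    # the "necessary adjustments": whatever differs between A and B
    adjustments = {k: v for k, v in state_B.items() if state_A[k] != v}
    print(adjustments)                         # {'percept': 'shade 910338'}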

 

I agree with Dennett that we ought to accept that if RoboMary can find such a state, and put herself into it, then she has not cheated. But I don’t buy into the locked RoboMary story as an analogy to Mary. Both the locked and the unlocked RoboMarys embody abilities which are different, in interesting ways, to what we should allow in any architecture which is plausibly related to human cognitive architecture, as Mary’s own architecture is certainly supposed to be.

 

VI. The Churchland-Lewis Account

 

To obtain the details of human cognitive architecture on which I wish to draw, I will now briefly recall two very well known accounts of how it might be that a consistently defined and completely physical Mary could come to know all the facts about color vision and still not know what it is like to see in color.

Paul Churchland and David Lewis were two of the first authors to present an “ability” or “knowledge how vs. knowledge that” response to Frank Jackson’s knowledge argument.

 

Lewis’ distinction between these two forms of knowledge occurs in a postscript[17] to an earlier paper.[18] In his postscript Lewis states that “The most formidable challenge to any sort of materialism and functionalism comes from the friend of phenomenal qualia.” Lewis details the nature of this perceived challenge by presenting his own version of the knowledge argument which parallels Jackson’s, using the taste of Vegemite instead of the visual experience of color. He concludes:

 

“We dare not grant that there is a sort of information we overlook; or, in other words, that there are possibilities exactly alike in the respects we know of, yet different in some other way. That would be defeat. Neither can we credibly claim that lessons in physics, physiology, … could teach the inexperienced what it is like to taste Vegemite.”

 

That is to say, of course, that a) epiphenomenal, or otherwise non-physical, qualia must be rejected, but nevertheless that b) as far as Lewis is concerned the Mary intuition (in this case, the Vegemite intuition) is correct: someone who has not tasted Vegemite cannot know what it is like, however much they know of the physical facts.

 

Lewis concludes that the proper resolution must lie in the realization that: “knowing what it’s like is not the possession of information at all”, rather it is the “possession of abilities … to recognize, … to imagine, … to predict one’s behavior by means of imaginative experiments”.

 

He goes on to flesh out the kind of thing he is thinking about:

 

“Imagine a smart data bank. It can be told things, it can store the information it is given, it can reason with it, it can answer questions on the basis of its stored information. Now imagine a pattern-recognizing device that works as follows. When exposed to a pattern it makes a sort of template, which it then applies to patterns presented to it in future. Now imagine one device with both faculties… There is no reason to think that any such device must have a third faculty: a faculty of making templates for patterns it has never been exposed to… If it has a full description about a pattern but no template for it, it lacks an ability but it doesn’t lack information. (Rather, it lacks information in usable form.) When it is shown the pattern it makes a template and gains abilities, but it gains no information.”

 

“We might”, Lewis suggests, “be rather like that.”

 

Indeed we might.
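Lewis’s combined device is easy to render concretely. The sketch below is my own, with all interfaces invented; the point to notice is that it has exactly the two faculties Lewis describes and, pointedly, no third: nothing in it maps a description onto a template.

    # Toy version of Lewis's two-faculty device (my own rendering).

    class LewisDevice:
        """Faculty 1: a smart data bank over propositions. Faculty 2: a
        template-maker that works only on patterns actually presented."""
        def __init__(self):
            self.facts = set()       # propositional store
            self.templates = {}      # made by exposure, and only by exposure

        def tell(self, fact):                 # it can be told things
            self.facts.add(fact)

        def knows(self, fact):                # it can answer questions
            return fact in self.facts

        def expose(self, name, pattern):      # exposure makes a template
            self.templates[name] = pattern

        def recognizes(self, name, pattern):  # templates confer recognition
            return self.templates.get(name) == pattern

    d = LewisDevice()
    d.tell("Vegemite tastes salty, yeasty and bitter")          # full description
    print(d.knows("Vegemite tastes salty, yeasty and bitter"))  # True
    print(d.recognizes("vegemite-taste", "pattern-V"))          # False: no template yet
    d.expose("vegemite-taste", "pattern-V")                     # a first taste
    print(d.recognizes("vegemite-taste", "pattern-V"))          # True: ability gained

There is, deliberately, no method that builds a template from a description: given a full description but no exposure, the device lacks an ability without lacking information.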

 

The details of Churchland’s account occur in his second response to the knowledge argument.[19] Though considerably more detailed than Lewis’, Churchland’s is an account of essentially the same distinction. As Churchland says, “modern cognitive neurobiology already provides us with a plausible account of what the difference is” between “knowledge by description” and “knowledge by acquaintance”. He points out that in all trichromatic creatures “color information is coded as a pattern of spiking frequencies [within] the optic nerve”. This “massive cable of axons” projects to the “lateral geniculate nucleus (LGN)” and thence “to V1, V2, and ultimately to V4, which area appears to be especially devoted to the processing and representation of color.” The model of visual information processing that Churchland then appeals to is one which assumes that the “representation of familiar colors … consist[s] in a specific configuration of weighted synaptic connections meeting the millions of neurons that make up area V4.” This “configuration of synaptic weights partitions the [abstract] activation-space of the neurons in area V4 … into a structured set of subspaces, one for each prototypical color.” New patterns of input from the eye can then be categorized accordingly. “In such a pigeonholing, it … appears, does visual recognition of a color consist.”

 

Churchland concludes:

 

“This distributed representation is not remotely propositional or discursive, but it is entirely real. All trichromatic animals have one, even those without any linguistic capacity. It apparently makes possible the many abilities we expect from color-competent creatures: discriminations, recognition, imagination, and so on. Such a representation is presumably what a person with Mary’s upbringing would lack, or possess only in … incomplete form. There is thus more than just a clutch of abilities missing in Mary: there is a complex representation, a processing framework that deserves to be called cognitive… There is indeed something she ‘does not know.’ Jackson’s premise … is thus true on … wholly materialist assumptions.”

 

In order to provide the details in the above account Churchland admits that he has “momentarily” put “caution and qualification … aside”. In other words, Churchland believed, in 1989, that the state of neuroscientific knowledge was such as to allow us to know perfectly well that there was such a story to be told about the human brain, but to have to guess at many of the details. The precise details of Churchland’s account do not, I hope, matter because this overview of the situation remains accurate: we still know that there is such a story to be told and we still have to guess at many of the relevant details.
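Churchland’s picture – an activation space partitioned by synaptic weights into prototype subspaces – can be caricatured as a nearest-prototype classifier. The vectors and prototype points below are invented for the illustration; only the shape of the story is Churchland’s.

    # Caricature of Churchland's V4 story (prototype values invented).
    # A color input arrives as a vector of spiking frequencies; the learned
    # 'partition' is here just nearest-prototype lookup in activation space.

    import math

    PROTOTYPES = {                  # one invented point per prototypical color
        "red":    (0.9, 0.1, 0.1),
        "yellow": (0.8, 0.8, 0.1),
        "blue":   (0.1, 0.2, 0.9),
    }

    def classify(activation):
        """Pigeonhole an activation vector into the nearest prototype's subspace."""
        return min(PROTOTYPES, key=lambda c: math.dist(activation, PROTOTYPES[c]))

    print(classify((0.85, 0.75, 0.2)))   # 'yellow'

Nothing in this little classifier is “remotely propositional”: the ‘knowledge’ sits in the prototype points, the analogue of Churchland’s synaptic weights, and that configuration is just the sort of thing a person with Mary’s upbringing would lack or possess only in incomplete form.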

 

VII. RoboDennett

 

Now it seems that the key feature of Mary’s ‘architecture’, which both Churchland and Lewis require, and which means that there really is an objective and a subjective distinction between knowing what it is like and knowing everything about what one would say and do, for Mary, is the following:

 

Mary’s propositional reasoning about color as experienced is built on, and cannot operate without, certain lower level recognitional capacities. (Q1)

 

I further suggest, as a directly related definition of the state of “knowing what it is like”, the following:

 

The state of “knowing what it is like” is the functional state of having one’s higher-level, more abstract or propositional reasoning capabilities directly and transparently subserved by lower level recognitional capacities. (Q2)

 

The above is meant as a preliminary proposal for a functional definition of the state of knowing what it is like. It is not a complete definition: how are we meant to distinguish ‘more abstract’ from ‘less abstract’ cognitive capacities? What, exactly, does a ‘propositional reasoning’ capacity look like? And in what does the proposed ‘direct and transparent subservience’ relationship consist?

 

The first two questions are wide open questions in the philosophy of mind and I do not plan to answer them here. I do propose the following (incomplete) answer to the third question:

 

The ‘transparency’ relationship of Q2 captures a subjective fact (which equates to an objective constraint on Mary’s architecture): we, and Mary, cannot tell by introspection how it is that our more abstract thought is subserved by our less abstract thought. (Q3)

 

I will introduce two further, useful state definitions:

 

The state of knowing exactly what the state of knowing what it is like consists in. (S1)

 

The state of knowing what it is like. (S2)

 

In both cases the state of knowing what it is like is to be read as the state defined by Q2.
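As a crude, entirely illustrative rendering of the Q1-Q3 architecture (Q1-Q3 constrain an architecture; they do not specify one), imagine a propositional module calling a recognizer whose workings it cannot inspect:

    # Crude rendering of Q1-Q3 (my own; all names and numbers invented).

    class _Recognizer:
        """Low-level recognitional capacity; stands in for V4-style weights."""
        def __init__(self, threshold):
            self.__threshold = threshold   # hidden: a stand-in for Q3's opacity

        def categorize(self, stimulus):
            return "red" if stimulus > self.__threshold else "not-red"

    class Reasoner:
        """Higher-level propositional module, subserved by the recognizer (Q1)."""
        def __init__(self, recognizer):
            self._recognizer = recognizer

        def judge(self, stimulus):
            # Propositional thought gets the verdict, never the mechanism:
            # introspection here can report *that* the object looks red,
            # not *how* the categorization was made (Q2/Q3).
            verdict = self._recognizer.categorize(stimulus)
            return f"that object is {verdict}"

    mary = Reasoner(_Recognizer(threshold=0.5))
    print(mary.judge(0.8))   # 'that object is red'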

 

Using the above definitions I will define a robot who is, I think, human enough to defeat Dennett’s intuitions about Mary, even while being powerful enough to know as much about himself as RoboMary does. I will name him RoboDennett.

 

RoboDennett’s architecture is very similar to RoboMary’s. Both robots have color coding registers. Both have a propositional reasoning module. Indeed both have the “few terabytes of spare (undedicated) RAM” and associated processing power needed to build detailed models of themselves.

 

It is crucial for my argument that RoboDennett’s architecture be compatible with Q1-Q3. However (and although Dennett was at no particular pains to make it so), I believe that RoboMary’s architecture is also compatible with Q1-Q3. How then, can I possibly differentiate between the two robots? By the use of an additional requirement, which, I will argue, is emphatically not compatible with Dennett’s description of RoboMary, but which I will make true of RoboDennett by definition:

 

In reasoning about RoboDennett, no additional abilities shall be assumed other than those required by Q1-Q3 combined with any necessary consequences of asymptotically increasing his reasoning ability towards infinity.[20] (Q4)

 

Q4 requires careful handling. For instance, there may be some ability A which is implied by all possible approaches to arbitrarily increasing some human-like (Q1-Q3 style) system’s reasoning ability. In that case A should explicitly be allowed for RoboDennett.  On the other hand, there may be some additional ability, B, which is only implied by a subset of the possible approaches to arbitrarily increasing such a system’s reasoning ability. In that case, B is excluded by Q4 as an ability for RoboDennett. But more complex cases exist. It may be that ability C is implied by yet another class of techniques which could be used to perfect RoboDennett. If C is not implied by all such techniques, then it should not be allowed for RoboDennett either. But if B and C between them exhaust all the possible techniques for perfecting RoboDennett, then we must allow that RoboDennett has either B or C. Such cases of overlapping and non-overlapping possible and necessary abilities can, of course, be extended to arbitrary complexity. I am not planning to hang my argument on these particular subtleties, but it is as well to be clear about what Q4 requires.
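The bookkeeping Q4 demands is simple to state set-theoretically. In the toy sketch below (the technique and ability labels are invented), an ability may be assumed outright only if every perfecting technique confers it, while a disjunction of abilities may be assumed if the corresponding techniques jointly exhaust the options.

    # Illustrative bookkeeping for Q4 (labels invented). Each 'technique' for
    # perfecting RoboDennett confers a set of extra abilities.

    techniques = {
        "technique_1": {"A", "B"},
        "technique_2": {"A", "C"},
    }

    # A may be assumed outright: every technique confers it.
    allowed = set.intersection(*techniques.values())
    print(allowed)                       # {'A'}

    # B alone and C alone are excluded, but every technique confers B or C,
    # so Q4 licenses the disjunction 'B or C'.
    disjunction_ok = all({"B", "C"} & conferred for conferred in techniques.values())
    print(disjunction_ok)                # True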

 

So suppose that we do choose to regard RoboMary as compatible with Q1-Q3 (something allowed, but not required, by Dennett’s definition of her), and suppose further that RoboDennett’s architecture is compatible with Q1-Q4 (which is true by definition). In that case, RoboMary tells us about what might possibly be true of a human-style reasoner as she becomes more and more capable in her reasoning about color vision. RoboDennett, on the other hand, tells us about what must necessarily be true of such an agent. And the two cases differ.

 

VIII. RoboDennett and Unlocked RoboMary

 

Let’s first of all consider RoboDennett as he attempts to follow the route to “knowing what it is like” taken by unlocked RoboMary. I agree with Dennett that we can dismiss this as an interesting case. In fact, we can use RoboDennett to dismiss the equivalent case more firmly than Dennett himself dismisses it for unlocked RoboMary. To Dennett, it was not clear that there was any principled reason for stating that unlocked RoboMary was going beyond her legitimate powers of imagination in directly manipulating her color registers. But, using RoboDennett, we can obtain such a principled reason. If RoboDennett’s color discrimination circuitry works as it works in us (roughly as per Paul Churchland’s model) then (using Q4) we need an argument as to why increasing RoboDennett’s propositional reasoning ability should eventually endow him with the ability, which we do not have, of being able to alter the discrimination weights in his low level circuitry. I do not have a proof that there is no such argument, so I shall have to be content with having shifted the burden of proof. But if, as seems to be the case, there is no logical reason why greatly increasing RoboDennett’s reasoning abilities necessarily allows him to directly modify his color registers, then that is a principled enough reason to state that for RoboMary to directly manipulate her color registers in this way is unequivocal ‘cheating’.

 

IX. RoboDennett and Locked RoboMary

 

What, then, of locked RoboMary? Why should I claim that RoboDennett cannot follow her route to coming to know what it is like?

 

The key point about locked RoboMary is that she is able to generate a very accurate model of herself. An analogy will be useful here. Let’s first of all discuss the case where RoboDennett (or we ourselves) needs to predict the operation of a pocket calculator.

 

Is it plausible that one could have a complete, accurate model of a pocket calculator, and yet still require a pocket calculator to work out sums? Well of course it is, if you’re a mere human. Even if your understanding of the calculator is, in some sense, just right, you’re likely to make errors if you try to derive conclusions about the operation of the calculator in complex cases. And even if you don’t make errors, the real calculator will still be much faster than you.

 

But to think that RoboDennett would still need a calculator, once he had put his mind to understanding one, is indeed to make precisely the mistake that Dennett accuses us all of making with regard to Mary. For RoboDennett is much better than us (as, indeed, is RoboMary, and presumably Mary too). Once he has put his mind to understanding a pocket calculator, it will be immediately obvious to him what the result would be of calculating sin(37π/5)^6 (for instance).[21] That is to say, these agents are good. Very good. And, crucially, they are all supposed to be equally good even at the vastly more complex task of understanding themselves.
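The calculation in question is quickly checked (modulo floating point):

    import math

    # sin(37π/5) = sin(7.4π) = -sin(0.4π) ≈ -0.9511; the even power makes
    # the result positive.
    result = math.sin(37 * math.pi / 5) ** 6
    print(f"{result:.4f}")   # 0.7400 -- the 'approximately 0.74' of note 21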

 

Are we still within the bounds of sense here? Is it possible to make any meaningful statements about an agent who is supposed to be a) in some relevant way, human-like, but b) to know as much, and be as good at using that knowledge, as Mary, RoboMary or RoboDennett are supposed to be? Yes, I believe so, although we will have to steer very carefully in these waters.

 

Sticking, for a moment, to RoboDennett: in the example of the calculator, above, his understanding of the calculator becomes good enough for him to do away with the actual calculator if two crucial conditions obtain:

 

  (i) his understanding is so good that it is functionally isomorphic to (the relevant level of organization of) the calculator itself
  (ii) he can operate this functionally isomorphic understanding at least as fast as the calculator itself

 

A recent paper by Adams and Aizawa[22] offers the opinion that “Philosophers these days seem not to appreciate that isomorphism is a relatively weak relation”. I wish to claim that, on the contrary, isomorphism is an exceedingly strong relation. Something physical which is fully, counterfactually, functionally isomorphic to a particular definition of a calculator is, in a good sense (quite the best sense, in fact) a calculator. I take it that I am with Dennett on this.

 

But now we are getting close to the heart of my disagreement with Dennett. Let’s consider some further objections (ones which Dennett does not consider) to the notion that RoboMary could have as good an understanding of herself as we have currently allowed RoboDennett to have of a pocket calculator.

 

IX.1. First Additional Objection to Locked RoboMary

 

The issue of timing (criterion (ii), above) allows me to raise one claim which I wish to make: Dennett’s discussion of locked RoboMary skirts dangerously (but in the end, not irrevocably) close to outright logical contradiction, in a way that Dennett does not discuss.

 

Is it really plausible to allow that RoboMary can simulate anything she needs to, about herself, as fast as she likes? On pain of logical contradiction, I believe not. Again, I will initially appeal to an analogy.

 

Imagine that Mary has fully understood the principles of an efficient algorithm for factoring numbers which are the products of pairs of large prime numbers. Now there is a mathematical theory to the effect that the difficulty of factoring the products of pairs of large primes increases exponentially with the size (in bits, say) of the product to be factored. So let’s assume, for the sake of argument, that Mary takes a very short, but finite, time to work out the factors of any 100-bit product of two primes. Now if Mary is given a 200-bit number to factor, it won’t take her twice as long to work out the answer, it will take her on the order of 2^100 times as long. In general, however fast Mary is at factoring any particular number, it would be easy to generate a larger number which she will not be able to factor in the lifetime of the universe.[23]
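In symbols: writing T(n) for the time Mary needs to factor an n-bit product, the exponential-difficulty assumption just stated is T(n) ≈ c·2^n for some constant c, whence

    \[
      \frac{T(200)}{T(100)} \;\approx\; \frac{c \cdot 2^{200}}{c \cdot 2^{100}} \;=\; 2^{100}.
    \]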

 

The above is by way of softening up our intuitions. It shows that, on pain of physical impossibility, there must be physical limits on what RoboMary can process (shades of RoboDennett, though I have not yet established that there is any key ability that RoboMary is relying on which RoboDennett would not be allowed). Now let’s imagine RoboMary processing a model of herself. Can this really be a perfect model, which operates, as required, at least as fast as RoboMary herself? Here’s an argument that says not.

 

RoboMary’s model of herself does not have to parallel what RoboMary does. Indeed, even in Dennett’s story, it does not – RoboMary’s model of herself is set up to respond as she would without color locking. This can presumably result, over time, in a completely different state in the modeled RoboMary as compared to the actual RoboMary, at least for as long as the actual RoboMary chooses to continue the simulation. Moreover, because the model is so complex (as complex as RoboMary herself) the modeled RoboMary should itself be able to model RoboMary. And the RoboMary modeled by this model should also be able to model RoboMary. And so on, ad infinitum. Moreover, each one of this apparently infinite series of models could be behaving in arbitrarily different ways to the others. Now I do have a clear intuition about this. It is that the above situation is not possible in any real physical system (that is, in Mary, RoboMary or RoboDennett). I am far from sure how I should argue for this intuition against anybody who denies it, and who claims that the above situation is physically possible. Certainly, if one is prepared to accept that cognition is information processing, and that it requires physical resources to achieve that processing, then one seems to be led, in the above scenario, to the conclusion that infinite physical resources are present in a finite being. Similarly, if what it is to be (a model of) RoboMary is to be some physical process, then it is far from clear how an infinite number of arbitrarily different RoboMary-processes could occupy a finite volume of physical matter, whatever the nature of the process.[24] The above therefore seems to be a reductio ad absurdum of the idea that a material Mary can have a perfect model of herself.
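The regress can even be made to collapse before one’s eyes. In the toy below (everything invented), a ‘perfect’ self-model must contain the part of the agent that builds self-models, and the construction exhausts any finite resource:

    # Toy illustration of the self-model regress (everything invented).

    import sys
    sys.setrecursionlimit(10_000)

    class Agent:
        def __init__(self, depth=0):
            self.depth = depth
            # a perfect model reproduces every part of the agent --
            # including the part that builds a perfect model of itself
            self.model = Agent(depth + 1)

    try:
        robomary = Agent()
    except RecursionError:
        print("finite machine: the tower of self-models cannot be completed")

Dennett’s “spare (undedicated) RAM” move, discussed next, amounts to breaking this recursion by exempting the modeling area itself from what gets modeled.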

 

However, I suspect that Dennett is one step ahead of us here. He suggests that RoboMary should use some “spare (undedicated) RAM” to build a model of herself. So let’s allow that RoboMarys are indeed brought online with enough undedicated memory (and undedicated processing power) to be able to build a perfect functional model of themselves which can contain everything except the additional, undedicated processing area itself. I think that this is probably a valid route out of the above circularity. Of course, by the above line of reasoning, RoboMary can only work out the behavior of a model of herself which hasn’t built a model of itself. This might introduce some additional complications in comparing her ‘state A’ and ‘state B’, as per Dennett’s story: she now has to work out which differences between her state and her model’s state are due to the differences she wants to know about (of actual vs. modeled color hardware), and which are due to the fact that her own thoughts are being modified by interaction with an internal model (there must be some interaction), whereas the model’s thoughts are not. I do not wish to pursue this line of reasoning as a live objection to RoboMary. While one might try an argument to the effect that RoboMary would not be able to unpick this tangle without re-introducing the circularities above, I am not sure that this is the case. And I am not sure that it matters.

 

IX.2. Second Additional Objection to Locked RoboMary

 

Let’s concede, as we perhaps ought given the considerations above, that locked RoboMary can build an accurate enough model of herself for the purposes Dennett requires. What has she then achieved? What (if anything) would she have to do with this model in order to “figure out” what it is like to see in color? Can she do what is necessary? And, finally, if she can, can RoboDennett?

 

The first thing to note is that even in Dennett’s account, RoboMary actually has to do something with the results of her model in order to come to know what it is like (there is no automatic implication of S2, given S1). Specifically, RoboMary has to compare her own state, state A, with the state of the model, state B, and then she “makes … the necessary adjustments” to put herself into state B. Let’s examine those “necessary adjustments”.

 

The objection to be considered here is again based on assuming that RoboMary is rather like us: compatible with Q1-Q3, and thus with a state S2 of knowing what it is like to see in color which is more or less as it is in Churchland’s model. In that case, to come to know what it is like to see in color, she has to put her color recognition system into the right state to ‘transparently subserve’ (see Q2, Q3) processing in her rational reasoning system. If she can do this, then she can be said to know about color as experienced; if she cannot, then she can only be said to know about color propositionally. (This is what Q2 claims.)

 

So now, surely, we can just state that locked RoboMary (never mind RoboDennett) simply cannot put her low level recognitional system into the state required to subserve her high level processing because she can’t, by Dennett’s own definition, alter her low level color processing circuitry at all.

 

This form of the objection is defeasible. This is because RoboMary may well have enough control over the operation of her propositional processing circuitry to be able to re-organize things so that some part of that system (her high-level, ‘rational’ reasoning system) ends up subserving the functions of the remainder of the system in exactly the way that her color processing circuitry would normally subserve the entire system, if she were in S2.

 

I intend to accept that if RoboMary can create this bona fide, causally efficacious,[25] S2-style state, then she really will know what it is like, though we can briefly consider a couple of additional objections here which Dennett does not raise.

 

Firstly, the state doing the low-level ‘subserving’ in RoboMary’s ersatz S2 cannot contain the information that simply isn’t there due to RoboMary’s lack of color vision. Thus, while this simulation of her low level visual circuitry is switched on, she will sometimes see bananas as yellow when they are actually the right kind of blue in the wrong kind of lighting; there simply isn’t enough information in a grayscale picture of a banana to determine whether it is really yellow. But I don’t think that this matters. Her rational thought is subserved by something, in ways that correspond often enough and correctly enough to the real S2 for her to be able to recognize these as errors as soon as her real color vision is switched on.

 

There is another, slightly stronger objection in this area: what if the first colored object RoboMary is shown is indeed a blue banana in exactly the wrong shade of lighting (a banana that could not have been distinguished from a yellow banana using just black and white cameras)? In her initial attempts to understand her new information feed, she has nothing to go on except her previous (necessarily imperfect) attempts to calibrate her false color system. Thus, I think, we have to concede that RoboMary could indeed be fooled by her first colored object. The reason why I don’t think this objection is crucial is that I don’t think that it legitimates us in claiming that RoboMary did not know what it was like to see in color (in sense Q2) before the experience. It would rather seem to be the case that she did know what it was like to see in color, but was sometimes wrong about what colors she was seeing.

 

IX.3. Final Objection to Locked RoboMary

 

I don’t think any of the above objections go to the heart of the problem with locked RoboMary. She needs to do something physical to come to know what it is like to see in color in advance of the experience, and she can do what she needs to. The crucial difference between locked RoboMary and RoboDennett, of course, is that to do what she does RoboMary requires abilities not allowed in RoboDennett because of Q4.

 

The things RoboMary has to do (the state changes she has to make) to come to know what it is like are fairly substantial re-arrangements of her causal architecture. I am not claiming that RoboMary cannot make these changes for some principled physical reason. I don’t believe I need to. I agree with Dennett (and Churchland[26]) that if physicalism is true, there must be some agents, such as RoboMary, who can use a perfected S1 to attain S2.

 

But conceding this is not the same as conceding Dennett’s position. Most of the things which Dennett is trying to argue for in his paper (in particular, the anti-physicalist nature of the Mary intuition) require that there be no physically definable agents who can genuinely attain S1 and still be simply unable to attain S2. But I see no reason to concede that a perfected rational reasoning system must be such as to allow its possessor to have as much control over its own reasoning as RoboMary has over hers. There seems to be nothing in the architecture which supports RoboDennett’s propositional thought (Q1-Q3, which he shares with us) which requires that, even if he can build a perfect model of himself, doing so would necessarily allow him to put himself (and not just the model) into state S2 (the state defined by Q2).

 

X. Conclusion

 

If the pocket calculator analogy as suggested in section VIII is close to the mark, then it is no accident that the process which Dennett’s RoboMary has to use to come to know what it is like to see in color involves creating an internal simulation of herself.

 

Underlying Dennett’s position is the belief that a perfect objective understanding of oneself can necessarily always be used as a perfect simulation of oneself, just as in the case of the pocket calculator. Dennett also believes that there is no difference between knowing all the facts about “what one would say and how one would react” and knowing all the facts about “what it is like”. Hence he is led to believe in the third premise of the form of the knowledge argument given in section I.

 

There is indeed an equivocation in this line of reasoning, but I don’t think that it is an equivocation on “knows”. I think that it is a mistake about what functionalism requires, for it is simply not the case that the state of knowing what ‘knowing what it is like’ consists in either implies or is implied by the state of ‘knowing what it is like’.

 

Spelt out in more detail, I accept that a sufficiently able robot can necessarily use its ‘perfect’ understanding of itself to create a bona fide functional state of knowing what it is like. This is because (under a strongly functionalist position with which I am not arguing) such a robot is necessarily able to create an internal model of itself which can be in such a state. But the state is a state of the model, not a state of the agent in which the model runs. There is a genuine distinction here precisely because there is a genuine, objective, functional fact of the matter about what the subjective state of knowing what it is like consists in. A preliminary definition of this ‘fact of the matter’, adequate for our purposes, and based on (though arguably extending) earlier accounts given by Churchland and Lewis, is given above (as Q2 of section VI).

 

Thus, we can conclude that there is no valid third premise for the knowledge argument and that the ‘Mary intuition’ (the intuition that Mary really will learn something on her release) remains compatible with physicalism. This is because a well-defined, near-perfect, but human-like reasoner, such as RoboDennett, still cannot come to know what it is like to see in color solely on the basis of the facts of color vision, even on a strictly functionalist account.

 

MICHAEL BEATON   

Centre for Research in Cognitive Science

University of Sussex


 



[1] With particular thanks to Steve Torrance, Simon McGregor and Rowan Lovett, and additional thanks to Marek McGann, Rob Clowes, Chrisantha Fernando, Hanneke De Jaegher, the members of the e-Intentionality research group and the members of the Philosophy of Cognitive Science MA class of 2002-2003 at Sussex University for fruitful discussions of the ideas presented here.

[2] The original papers describing Mary are Jackson, F. (1982). "Epiphenomenal Qualia." Philosophical Quarterly 32(127): 127-136, and Jackson, F. (1986). "What Mary Didn't Know." Journal of Philosophy 83(5): 291-295.

[3] Though perhaps not perfect, of which more later.

[4] Jackson, F. (1982). "Epiphenomenal Qualia." Philosophical Quarterly 32(127): 127-136. (Page 130)

[5] Jackson, F. (1986). "What Mary Didn't Know." Journal of Philosophy 83(5): 291-295. (Page 293)

[6] Churchland, P. M. (1985). "Reduction, qualia and the direct introspection of brain states." Ibid. 82: 8-28, Churchland, P. M. (1989). Knowing Qualia: A Reply to Jackson. On The Contrary. P. M. Churchland and P. S. Churchland. Cambridge, MA, MIT Press: 143-153, Churchland, P. M. (1998). Postscript to Knowing Qualia. On the Contrary. P. M. Churchland and P. S. Churchland. Cambridge MA, MIT Press: 153-157.

[7] Jackson, F. (1998). PREFACE. Mind, Method, and Conditionals, Routledge: vii-viii. (Page vii)

[8] Ibid. (Page vii)

[9] Jackson, F. (1998). Postscript on Qualia. Mind, Method, and Conditionals, Routledge: 76-79. (Pages 76-77)

[10] Dennett, D. C. (Forthcoming). What RoboMary Knows. Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness. T. Alter, OUP. Quotes from Dennett are from this paper except where otherwise indicated. Currently available online at http://ase.tufts.edu/cogstud/papers/RoboMaryfinal.htm.

[11] Dennett, D. C. (1988). Quining Qualia. Consciousness in Modern Science. A. Marcel and E. Bisiach, Oxford University Press; Dennett, D. C. (1991). Consciousness Explained. Boston, MA, Little, Brown & Co.; etc.

[12] Dennett, D. C. (1991). Consciousness Explained. Boston, MA, Little, Brown & Co. (Pages 398-401)

[13] Graham, G. and T. Horgan (2000). "Mary Mary, Quite Contrary." Philosophical Studies 99: 59-87. (Page 72)

[14] Dennett, D. C. (Forthcoming). What RoboMary Knows. Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness. T. Alter, OUP.

[15] Ibid. (Footnote 3)

[16] Dennett draws an instructive analogy here with Swamp Mary (another character whom Dennett introduces, whilst suppressing his “gag reflex” and “giggle reflex”). You may be happy to infer the details for yourself, or you may wish to refer to Dennett’s paper, but I think that his point goes through.

[17] Lewis, D. (1983). Postscript to "Mad Pain and Martian Pain". Philosophical Papers: Volume I, OUP: 130-2.

[18] Lewis, D. (1980). Mad Pain and Martian Pain. Readings in the Philosophy of Psychology: Volume I. N. Block. Cambridge, MA, Harvard University Press.

[19] Churchland, P. M. (1989). Knowing Qualia: A Reply to Jackson. On The Contrary. P. M. Churchland and P. S. Churchland. Cambridge, MA, MIT Press: 143-153. (Pages 145-147)

[20] I intend to draw an analogy, which I believe holds well, with the mathematical device of taking the limit of a function of x, say f(x), as x tends towards infinity. In addition to the caveats on the use of Q4 noted in the main text, one should perhaps note that there are caveats to be borne in mind when using this mathematical formalism. In particular, one must take account of how the limiting value of f(x) was obtained if one wishes to combine it with any further calculations which depend on the value of x. I do not believe that I have made any analogous mistakes here, though I await information to the contrary.

[21] The answer is approximately 0.74, and I don’t happen to know how many decimal places were on the calculator RoboDennett was thinking about.

[22] Adams, F. and K. Aizawa (2001). "The Bounds of Cognition." Philosophical Psychology 14(1): 43-64. (Page 58)

[23] At least, the hope is that this is true: this widely believed but unproven theory (not theorem) is at the heart of all modern, computer based cryptography.

[24] It has been pointed out to me that the ultimate limit on the number of different physical processes which can occupy a finite space is determined only by the ultimate grain of the fundamental theory of matter, which we do not now, and may never, have. If there is in fact no such fundamental theory, which is certainly an open theoretical possibility, then there would be no such limit. I am still not sure that this represents a valid argument against the point I am making, except in the case that the principles on which the cognitive agent in question operates are not, finally, understandable by us. In that case the basis on which we are here attempting to understand the consequences of physicalism – the discussion of comprehensible cognitive architectures – collapses; though one could still, perhaps, attempt the response that this incomprehensible architecture should not be comprehensible to the agent itself, either, in which case the notion of a perfect simulation of an agent by itself, created using its own understanding of itself, would remain untenable, though for an unexpected reason.

[25] Cf. Chrisley, R. (1994). "Why Everything Doesn't Realize Every Computation." Minds and Machines 4(4); Chalmers, D. (1994). "On Implementing a Computation." Minds and Machines 4(4).

[26] Churchland, P. M. (1985). "Reduction, qualia and the direct introspection of brain states." Journal of Philosophy 82: 8-28. (Page 27)