SIMPLICIO: ‘Some computer programs might be able to pass a Turing test, but that doesn’t provide any evidence that they can think. They might use all the right words, but that doesn’t mean they understand what the words mean.’


The Turing test is sometimes portrayed as a proper crucial experiment, verifying the presence of intelligence - i.e. as a sufficient condition for thought - and sometimes just as evidence for thought. But it was actually originally intended to sidestep the question of whether machines can think: Turing deemed that “too meaningless to deserve discussion.”1 His replacement question is:

Is it possible for a finite-state digital computer, provided with a large… program, to provide responses to questions that would fool an unknowing interrogator into thinking it is a human being?


(In fact Turing made a precise forecast, specifying a memory bound and a point estimate of when the test would be passed, at a specific accuracy:

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹ [bits], to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

This forecast did not come to pass (and still hasn’t, 73 years on), even though ordinary computers now have more than a hundred times the specified capacity, which is only about 125 MB.)
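
(A quick check of the conversion, in decimal units:

$$10^9 \text{ bits} \;=\; \tfrac{10^9}{8} \text{ bytes} \;=\; 1.25 \times 10^8 \text{ bytes} \;=\; 125 \text{ MB}$$

so a machine with, say, 16 GB of RAM - an illustrative figure, not Turing's - has more than a hundred times the storage he asked for.)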

So put, this is clearly an operationalisation of “intelligence” without reference to consciousness, intentionality, semantics, understanding or any of the other “mentalistic” concepts of philosophy of mind. (This is still a useful sidestep, 70-odd years later.)

Appealing to “understanding”, as Simplicio does above, implies rejecting functionalism (the view that the input/output relation - the function - constitutes or produces mental activity). So Simplicio is taking John Searle’s line: that ‘original intentionality’ (purposefulness, aboutness) is necessary for a system to be a mind. Searle:

...the presence of a program at any level which satisfies the Turing test is not sufficient for, nor constitutive of, the presence of intentional content. [Jacquette] thinks that I am claiming “Program implies necessarily not mind” whereas what I am in fact claiming is “It is not the case that (necessarily (program implies mind)).”

i.e.

1. Programs are purely formal (syntax-only).
2. Human minds have mental content (semantics, beyond syntax).
3. Syntax by itself is neither constitutive of, nor sufficient for, semantic content.
4. Therefore, programs by themselves are not constitutive of, nor sufficient for, minds.
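
As stated, premise 3 says only that syntax doesn't guarantee semantics, which is too weak to deliver the conclusion; the derivation needs the stronger reading that purely syntactic systems have no semantic content. Here is a minimal Lean sketch of the argument under that strengthened premise - the `System` type and the predicate names are my own scaffolding, not Searle's:

```lean
-- Schematic form of Searle's argument, with premise 3 strengthened
-- from "syntax doesn't suffice for semantics" to "syntax excludes semantics".
theorem searle_schema {System : Type}
    {Program Syntactic Semantic Mind : System → Prop}
    (p1 : ∀ s, Program s → Syntactic s)     -- 1. programs are purely formal
    (p2 : ∀ s, Mind s → Semantic s)         -- 2. minds have semantic content
    (p3 : ∀ s, Syntactic s → ¬Semantic s)   -- 3*. strengthened premise 3
    : ∀ s, Program s → ¬Mind s :=           -- 4. programs don't suffice for minds
  fun s hp hm => p3 s (p1 s hp) (p2 s hm)
```

Note that the strengthened premise actually proves more than Searle's stated conclusion: not just that programs fail to guarantee minds, but that nothing purely programmatic is a mind.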

Note that we’ve slipped from talking about intelligence (often glossed as “the production of good outputs given varied inputs”) to talking about minds (which could mean intelligence, or first-person consciousness, or…). For whatever reason, this happens all the time.


The real trouble comes in his positive case - Searle’s “Chinese Room” thought experiment, in which no component of a rule-following system understands Chinese, but the Room as a whole converses in it nonetheless, giving the right input/output pairs. The Chinese Room is a punchy illustration of premise 3 above, intended to demonstrate an instance of intelligent behaviour without understanding or mental content.

1. Searle: "purely syntactic systems lack subjective experiences."
2. Searle: "I have subjective experiences."
3. So: "I am not a purely syntactic system." (modus tollens, 1&2)

Searle’s argument is unsatisfying: computer systems (hardware + program) are not “purely syntactic”; they have internal states which change according to their inputs and their internal structure, a setup highly reminiscent of the representational theory of mind in humans.

Worse: as reconstructed, there’s an actual fallacy here. The Chinese Room is supposed to show that syntax is not sufficient for semantics - but no one can both be a purely syntactic system and verify this assertion directly.

1. Searle: "purely syntactic systems lack subjective experiences."
2. Searle: "I have subjective experiences."
3. So Searle: "I am not a purely syntactic system." (modus tollens, 1&2)

4. The only system Searle has knowledge of the subjective experiences of
is himself.

5. So if Searle is not a purely syntactic system, he has no knowledge of 
what it is like to be a purely syntactic system,
6. So if Searle is not a purely syntactic system, he therefore cannot 
assert premise 1. (5, + the knowledge account of assertion).
7. But if Searle is a purely syntactic system, (1) is false. (by 2)

8. You're either a purely syntactic system or you're not.
9. Therefore premise (1) is either unwarranted or false. (by 6 & 7 & 8 )
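
The closing steps form a constructive dilemma on (8). A Lean sketch, with the atoms as my own abbreviations (`S` for "Searle is a purely syntactic system", `P1` for "premise (1) is true", `W` for "premise (1) is warranted"):

```lean
-- Steps (6)-(9): from the two conditionals and excluded middle,
-- premise (1) is either unwarranted or false.
theorem dilemma {S P1 W : Prop}
    (h6 : ¬S → ¬W)    -- (6) if Searle isn't purely syntactic, (1) is unwarranted
    (h7 : S → ¬P1)    -- (7) if Searle is purely syntactic, (1) is false
    (h8 : S ∨ ¬S) :   -- (8) you're purely syntactic or you're not
    ¬W ∨ ¬P1 :=       -- (9) (1) is unwarranted or false
  h8.elim (fun hs => Or.inr (h7 hs)) (fun hns => Or.inl (h6 hns))
```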



Despite Turing’s inspiring attempt to sideline it, the metaphysics of mind is a live concern; Searle’s objection - that the kind of minds we know about seem to depend on, or arise out of, intentionality - is fine as far as it goes. But we are too ignorant to generalise about minds from our solitary example of the kind: we haven’t seen enough (as Sloman puts it, enough of the “space of possible minds”) to say that particular human correlates are necessary for intelligence.



Disclaimer

This was my first original philosophical argument. (The original version of it was much less clear though.)

These days I wouldn't use infallibilism as the baseball bat I did just there ("Searle isn't certain so Searle doesn't know."); I'd go for probabilism instead. That is, I think I now deny my premise (4).

And I'd say more about Searle's odd dichotomy between representational machines that are 'pure' syntax and those which are fully semantic. But I've mostly left it as it was because I enjoy it.

Chomskyan Descartes

I can't miss the opportunity to pass on a Good Fact: the Turing Test was suggested 300 years earlier by René Descartes!

If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men.

The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. … But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do.

Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.


That Descartes could not conceive of any such machine, while Turing could, is an important lesson in philosophical method and embodiment:

  1. conceivability (by a particular person, or a particular species) is far too weak to do metaphysics with, as Descartes did. ('Philosophers' Syndrome: mistaking a failure of the imagination for an insight into necessity.' - Dennett)

  2. "What you can imagine depends on what you know." It is not that Turing was necessarily the superior mind; for he had the benefit of a superior context. (Which he helped invent, but the point is recursive.)

Bibliography

  • Block, Ned (1995); ‘The Mind as Software of the Brain’.

  • Cole, David (2004); ‘The Chinese Room’; Stanford Encyclopaedia of Philosophy.

  • Hofstadter, Douglas (1981); ‘A Coffeehouse Conversation’, in D. Hofstadter & D. Dennett (eds.), The Mind's I (London: Penguin), pp. 69-92.

  • Hofstadter, Douglas (1995); Fluid Concepts & Creative Analogies (New York: Basic Books).

  • Levin, Janet (2009); ‘Functionalism’; Stanford Encyclopaedia of Philosophy; http://plato.stanford.edu/entries/functionalism/#ThiMacTurTes

  • Nagel, Thomas (1974); ‘What Is It Like To Be A Bat?’; The Philosophical Review, Vol. LXXXIII, No. 4, pp. 435-450.

  • Oppy, Graham & Dowe, David (2008); ‘The Turing Test’; Stanford Encyclopaedia of Philosophy.

  • Searle, John R. (1989); ‘Reply to Jacquette’; Philosophy and Phenomenological Research, Vol. 49, No. 4, pp. 701-708.

  • Turing, Alan (1950); ‘Computing Machinery and Intelligence’; Mind, Vol. LIX, No. 236, pp. 433-460.


  1. Turing:
    The [test] may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.
