SIMPLICIO: ‘Some computer programs might be able to pass a Turing test, but that doesn’t provide any evidence that they can think. They might use all the right words, but that doesn’t mean they understand what the words mean.’


The Turing test is sometimes portrayed as a crucial experiment verifying the presence of intelligence - i.e. passing it being a sufficient condition for thought - and sometimes merely as evidence for thought. But it was originally intended to sidestep the question ‘Can machines think?’, which Turing deemed “too meaningless to deserve discussion.”1 His replacement question is:

Is it possible for a finite-state digital computer, provided with a large… program, to provide responses to questions that would fool an unknowing interrogator into thinking it is a human being?


(In fact Turing made a precise forecast, specifying a memory bound, a date, and the accuracy with which the test would be passed:

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹ [bits], to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

This forecast did not come to pass (and still hasn’t, 73 years on), despite ordinary computers now having more than a hundred times the specified storage of ~125 MB.)
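Checking the storage figure, assuming decimal megabytes:

$$10^9 \text{ bits} = \frac{10^9}{8} \text{ bytes} = 1.25 \times 10^8 \text{ bytes} = 125 \text{ MB}$$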


So put, this is clearly an operationalisation of “intelligence” without reference to consciousness, intentionality, semantics, understanding, or any of the other “mentalistic” concepts of philosophy of mind. (This is still a useful sidestep, more than 70 years later.)

Appealing to “understanding”, as Simplicio did above, implies rejecting functionalism - the view that the input/output relation, the function computed, constitutes or produces mental activity. So Simplicio is taking John Searle’s line: that ‘original intentionality’ (purposefulness, aboutness) is necessary for a system to be a mind. Searle:

...the presence of a program at any level which satisfies the Turing test is not sufficient for, nor constitutive of, the presence of intentional content. [Jacquette] thinks that I am claiming “Program implies necessarily not mind” whereas what I am in fact claiming is “It is not the case that (necessarily (program implies mind)).”

i.e.

1. Programs are purely formal (syntax-only).
2. Human minds have mental content (semantics, beyond syntax).
3. Syntax by itself is neither constitutive of, nor sufficient for, semantic content.
4. Therefore, programs by themselves are neither constitutive of, nor sufficient for, minds.
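The reconstruction is at least valid. Here is a minimal sketch in Lean 4 (my encoding, not Searle’s formalism: the names are placeholders, and premises 1 and 3 are packaged together as the existence of a program-running system without semantics):

```lean
-- A sketch of the reconstruction above (my encoding, not Searle's own
-- formalism): premises 1 and 3 are packaged as the Chinese Room itself,
-- a system that runs the right program yet has no semantic content.
theorem searle_sketch {System : Type}
    (RunsProgram Semantic Mind : System → Prop)
    -- premises 1 & 3: some program-running system lacks semantics
    (chinese_room : ∃ s, RunsProgram s ∧ ¬ Semantic s)
    -- premise 2: every mind has semantic content
    (minds_have_semantics : ∀ s, Mind s → Semantic s) :
    -- conclusion 4: running the program is not sufficient for mindedness
    ¬ (∀ s, RunsProgram s → Mind s) :=
  fun program_suffices =>
    chinese_room.elim fun s hs =>
      hs.2 (minds_have_semantics s (program_suffices s hs.1))
```

All the philosophical work is in whether the hypotheses deserve assent; the inference itself is trivial.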

Note that we’ve slipped from talking about intelligence (often glossed as “the production of good outputs given varied inputs”) to talking about minds (which could mean intelligence, or first-person consciousness, or…). For whatever reason, this happens all the time.


The real trouble comes in his positive case - Searle’s “Chinese Room” thought experiment (in which no component of a rule-following system understands Chinese, but the Room nonetheless produces fluent Chinese responses, giving the right input/output pairs). The Chinese Room is a punchy illustration of premise 3 above, intended to exhibit an instance of intelligent behaviour without understanding or mental content.
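Here’s a toy sketch of the point (mine, not Searle’s): the rule-book can be nothing but a lookup over uninterpreted symbols. The phrases and fallback below are illustrative placeholders.

```python
# A toy rule-book: pure symbol manipulation, no understanding anywhere.
# The phrases and rules are illustrative placeholders, not Searle's own
# example; any "understanding" lives with whoever wrote the rules.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",   # "Do you speak Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    """Return whatever output the rule-book pairs with the input.

    Nothing here depends on what the symbols mean: strip the comments
    and it is shape-matching all the way down.
    """
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Say that again, please."

print(room("你好吗？"))  # right input/output pair; zero understanding
```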

1. Searle: "purely syntactic systems lack subjective experiences."
2. Searle: "I have subjective experiences."
3. So: "I am not a purely syntactic system." (modus tollens, 1&2)

This is unsatisfying: computer systems (hardware + program) are not “purely syntactic”; they have internal states that change according to inputs plus internal structure, a setup highly reminiscent of the representational theory of mind in humans.
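For contrast with the stateless rule-book above, a minimal sketch of such a system (illustrative names; no particular cognitive architecture is implied):

```python
# A minimal stateful system: its responses depend on internal state
# which the inputs themselves reshape, unlike the pure lookup above.

class StatefulAgent:
    def __init__(self) -> None:
        self.memory: list[str] = []  # crude internal representation of history

    def respond(self, observation: str) -> str:
        self.memory.append(observation)  # internal state altered by input
        if len(self.memory) > 1:
            # output depends on input *and* on prior internal structure
            return f"That reminds me of {self.memory[-2]!r}."
        return "Noted."

agent = StatefulAgent()
print(agent.respond("red"))   # "Noted."
print(agent.respond("blue"))  # "That reminds me of 'red'."
```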

Worse: as reconstructed, there’s an actual fallacy here. The Chinese Room is meant to show that syntax is not sufficient for semantics - but no one is in a position to verify that assertion directly, since doing so would require knowing what it is (or is not) like to be a purely syntactic system.

1. Searle: "purely syntactic systems lack subjective experiences."
2. Searle: "I have subjective experiences."
3. So Searle: "I am not a purely syntactic system." (modus tollens, 1&2)

4. The only system whose subjective experiences Searle has knowledge of is himself.

5. So if Searle is not a purely syntactic system, he has no knowledge of what it is like to be a purely syntactic system.
6. So if Searle is not a purely syntactic system, he cannot assert premise (1). (5, plus the knowledge account of assertion)
7. But if Searle is a purely syntactic system, premise (1) is false. (by 2)

8. Searle is either a purely syntactic system or he is not.
9. Therefore premise (1) is either unwarranted or false. (by 6, 7 & 8)

10. Either way, the Chinese Room argument fails.2
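Steps 6-9 have the shape of a constructive dilemma, which can be checked mechanically. A minimal Lean 4 sketch (my encoding: PS abbreviates ‘Searle is a purely syntactic system’, Warranted1 and True1 stand for ‘premise (1) is warranted/true’, and excluded middle appears as an explicit hypothesis, mirroring step 8):

```lean
-- Steps 6-9 as a constructive dilemma (my encoding; the proposition
-- names are placeholders for the claims in the numbered argument).
theorem premise_one_unwarranted_or_false
    (PS Warranted1 True1 : Prop)
    (step6 : ¬ PS → ¬ Warranted1)  -- not syntactic → (1) unwarranted
    (step7 : PS → ¬ True1)         -- syntactic → (1) false
    (step8 : PS ∨ ¬ PS) :          -- excluded middle
    ¬ Warranted1 ∨ ¬ True1 :=
  step8.elim (fun h => Or.inr (step7 h)) (fun h => Or.inl (step6 h))
```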

Despite Turing’s inspiring attempt to disambiguate and sideline it, the metaphysics of mind is a live concern; Searle’s objection - that the kind of minds we know about seem to depend on, or arise out of, intentionality - is fine as far as it goes. But we are too ignorant to generalise about minds from our solitary example of the kind: we haven’t seen enough (as Sloman puts it, enough of the “space of possible minds”) to say that particular human correlates are necessary for intelligence.



Bibliography

  • Block, Ned (1995); ‘The Mind as the Software of the Brain’, in E. Smith & D. Osherson (eds.), An Invitation to Cognitive Science, Vol. 3 (Cambridge, MA: MIT Press).
  • Cole, David (2004); ‘The Chinese Room Argument’; Stanford Encyclopaedia of Philosophy.
  • Hofstadter, Douglas (1981); ‘A Coffeehouse Conversation’, in D. Hofstadter & D. Dennett (eds.), The Mind's I (London: Penguin), pp. 69-92.
  • Hofstadter, Douglas (1995); Fluid Concepts & Creative Analogies (New York: Basic Books).
  • Levin, Janet (2009); ‘Functionalism’; Stanford Encyclopaedia of Philosophy; http://plato.stanford.edu/entries/functionalism/#ThiMacTurTes
  • Nagel, Thomas (1974); ‘What Is It Like to Be a Bat?’; The Philosophical Review, Vol. LXXXIII, No. 4, pp. 435-450.
  • Oppy, Graham & Dowe, David (2008); ‘The Turing Test’; Stanford Encyclopaedia of Philosophy.
  • Searle, John R. (1989); ‘Reply to Jacquette’; Philosophy and Phenomenological Research, Vol. 49, No. 4, pp. 701-708.
  • Turing, Alan (1950); ‘Computing Machinery and Intelligence’; Mind, Vol. LIX, No. 236, pp. 433-460.


  1. Turing:
    The [test] may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.
  2. These days I wouldn't use infallibilism as the baseball bat I did ("Searle isn't certain so Searle doesn't know."); I'd go at Searle with probabilism instead. That is, I think I now deny my premise (4).

    And I'd also say more about Searle's odd dichotomy between representational machines that are 'pure' syntax and those that are fully semantic. But I've mostly left it as it was.