DOUGLAS F. STALKER
WHY MACHINES CAN’T THINK: A REPLY TO JAMES MOOR
(Received 27 September, 1977)
In “An Analysis of the Turing Test”, James Moor claims that all too many have simply misunderstood how Turing’s test figures in arguments about the mentality of machines. The test is a familiar one: an interrogator enters Turing’s question/answer setup with the aim of finding out which respondent is another person, which a computer. According to Moor, this sort of test can be “interpreted inductively” (p. 266). That is, one can perfectly well view Turing’s test as providing “behavioral evidence” (p. 253). Indeed, Moor thinks it can provide enough evidence to secure the point at issue. He takes a passing performance on the test as evidence of an ability to think.
Passing, in Moor’s sense, comes to this: many different interrogators are allowed as many chances to question as they like, and yet in the end an average interrogator can spot the machine only about 50% of the time (pp. 249–50). Moor thinks this would be “very adequate grounds for inductively inferring that the computer could think” (p. 251). Would it be?
Though Moor calls his interpretation an inductive one, it is really more accurate to call it an explanatory one. This becomes clear when he discusses why we should take a computer’s behavior as telling evidence of cognition. Moor first turns to one’s own situation with respect to other people. He wants us to pay attention to something most believe: that other people can and do think. Why do we believe this? Moor finds his answer by appealing to a theory:
I believe that another human being thinks because his ability to think is part of a theory I have to explain his actions. The theory postulates a number of inner information processes, but the evidence for the theory comes from the outward behavior of the person. On the basis of his behavior I can confirm, disconfirm, and modify my theory. (p. 251)
On this approach, one’s beliefs about the mentality of others are part of an explanatory theory. In order to explain the behavior of others, we invoke a theory that involves the notion of thinking. But this is not, so far, the full story. It doesn’t tell us why we should take a person’s behavior as telling evidence of a certain mental life. On this explanatory approach, the behavior counts as evidence because it is connected with a going theory. How does it count as telling evidence? To be that, it needs to be connected with the best of the going theories.
When it comes to everydayish efforts at explaining the behavior of other people, one is hard put to find anything better than the current mentalistic scheme. That scheme involves, of course, one’s common notion of thinking. And it is presumably the scheme that Moor relies on here.
When Moor turns to computers, he urges a parity. He claims that our situation with respect to other people is the same as ours with respect to computers. We need to explain the computer’s behavior, and so we invoke a theory. As Moor puts it:
Furthermore, there is no reason why knowledge of computer thinking can not arise in the same way. I can use the computer’s behavior as evidence in assessing my theory about its information processing. (p. 251)
Moreover, the computer’s behavior is the same kind of behavior we take as evidence that other people can think: “the Turing test permits direct or indirect testing of virtually all of the activities one would count as evidence for thinking” (p. 251). For example, it provides a direct way to check on a computer’s verbal behavior. With its question/answer format, Turing’s test “permits (even demands) evaluation of linguistic behavior which is central to our inductive inferences about how others think” (p. 251). It also provides an indirect way to check on nonverbal behavior. An interrogator can ask for descriptions of how the respondent would do something that takes some thinking (p. 252). As Moor sees it, what counts as telling evidence for people also counts as that for computers. Thus he invokes a theory that involves the notion of thinking; and this supposedly explains the behavior of a computer that can pass Turing’s test.
Let’s grant that an explanatory approach is a viable one for questions of computer cogitation. Even so, Moor arrives at his conclusion all too quickly. He glosses over a step that one simply can’t pass by. To put the point another way, Moor leaves an essential step unargued and assumed: viz., that his theory for explaining a computer’s behavior is better than others about. Without that step, Moor’s argument is really no argument at all.
For example, Moor takes a computer’s linguistic behavior as evidence in the way that a person’s linguistic behavior is evidence. He counts both bits of behavior as decided evidence of some thinking on the part of each. It is, Moor claims, just what “one would count as evidence for thinking” (p. 251). But just who, and why? In the case of people, such behavior counts because it figures in a theory that serves to explain that and other behavior. And not just any old theory. Evidential weight, as noted above, comes from connections with a theory that is better than alternative ones.
To cite a simple example, the verbal behavior of either a person or a computer could connect up with any number of incompatible theories purporting to explain it. To the winning theory goes the word on what’s sensibly evidence of what. Moor just doesn’t make this step of theory competition explicit. In fact, he never mentions it. He needs to. It is the vital step in any defense of Turing’s test along explanatory lines. How, then, does it go?
Moor has picked a theory to explain a computer’s behavior here—its winning ways at the imitation game. That prowess is certainly something that needs to be explained. Moor aims to explain it, of course, with a theory that makes use of the notion of thinking. But is that the best theory around? I don’t think so. There is what looks to be a clearly preferable alternative.
In dealing with such a computer, I take it that we’re dealing with one similar in structure, composition, and size to ones about nowadays. We’re dealing, that is, with a machine, a mechanism, not an organism of any sort. With that fixed, there are three factors we can appeal to in order to fashion an explanation of why a computer is doing what it’s doing. They are: the computer’s physical structure, its program, and physical features of its environment. In short, an explanation can be framed solely in such mechanical terms.
To escalate this to a full theory, one need only note that these factors fall under the principles of contemporary mechanics. That theory readily covers this case. With the mechanical information and this theoretic framework available, one can give a perfectly fine description of the behavior of a computer. Of course such a description won’t mention a single thought, let alone appeal to any mental notions. That, in short compass, indicates an alternative explanation.
And is it preferable to Moor’s theory? I think so, and moreover think there’s no real competition between the two. The usual theoretic virtues (coherence, completeness, simplicity, precision, and so forth) are prominent in an explanation that is couched in contemporary mechanics. A theory making reference to thinking pales in comparison—seems a homespun alternative when applied to computers.
To be sure, it serves well enough for us ordinary types in our dealings with other people. One doesn’t have to pause over picking a better theory for us. We don’t have, for example, anything like a program for a person available. With ready access to a computer’s program, we have access to a partial explanation of why the machine is doing what it is. Neurophysiology, psychophysics, and the various brands of psychology haven’t supplied us with anything like that yet.
Indeed, if one adopts an explanatory approach, the interesting question becomes whether we might find someday that we don’t need a notion of thinking for other people, and do need it for some type of machine. This comes as no surprise to those who have adopted such an approach. It merely reflects how responsive such an approach needs to be to shifting and supplanting explanations.
In fact, this question will most likely resolve into one about the character and change of explanations themselves. But at present, Turing’s test, properly understood along explanatory lines, poses no problem. If a computer could pass Turing’s test, one wouldn’t need to explain this feat by resorting to the notion of thinking. The currently better theory doesn’t involve that sort of explanatory device.