If the long-term goal of the field of artificial intelligence is to build a system that exhibits general intelligence comparable to or greater than that of a human, Turing's test basically says: we're not going to worry about the fundamental qualities of the thing we're trying to emulate; instead, we'll designate an arbitrary, very difficult task that everyone can agree requires a lot of the thing we're not defining. He might as well have posed the CEO Test: if a computing machine can become the head of a major corporation, then it can be said to be intelligent. Or the Professor Test: if a computing machine can be hired and carry out all the required duties of a university professor, then it can be said to be intelligent. How are any of these useful in framing the problem at hand and charting a course for development of an intelligent system? Well, they're not.
And that's one reason why artificial intelligence has produced far more hype than results. The early pioneers of the field basically shied away from answering the difficult but necessary theoretical questions. If you want to build an intelligent system, you can't just brush aside the whole notion of what it means to be intelligent, set some extremely difficult and arbitrary target, and flounder towards it. And yet that's how much of AI research has been done.
Here's what I'd like to see: a book that methodically lays out a system-neutral approach to intelligence measurement. What do I mean by that? Consider the following related questions:
- In what ways is a particular human more or less intelligent than another human?
- In what ways is a human more or less intelligent than a chimpanzee? A dog than a mouse? An octopus than an ant?
- How do we measure the progress of the field of artificial intelligence? If a computing machine were intelligent, how would we know it?
- If a portal opened up on Earth, and a group of aliens walked through, how would we evaluate their cognitive capabilities?
We should have a reasonably well-defined theoretical framework that provides answers to these sorts of questions in a coherent manner. If we hold that humans are the only systems capable of exhibiting intelligence, then the task is easy. But if we take a functionalist approach and hold that intelligence is a complex, multi-dimensional feature of brains, but that other brains built out of different stuff can also exhibit it, then we need to develop answers to the questions above.
I'd like to see a theory that tries to define the features of intelligence in a modular, hierarchical way, attempting to determine their dependencies. For example, melody recognition depends on faculties for sensing sound and for discriminating pitch and intervals. Solving a task such as opening a locked box with a key in order to get at a goal inside requires a whole set of interdependent faculties.
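To make the idea concrete, here's a minimal sketch of how such a feature hierarchy might be represented as a dependency graph. The feature names and the interface are my own illustrative placeholders, not an established taxonomy:

```python
from collections import defaultdict

class FeatureGraph:
    """Directed dependency graph: each feature maps to the faculties it depends on."""

    def __init__(self):
        self.deps = defaultdict(set)  # feature name -> set of prerequisite features

    def add_dependency(self, feature, prerequisite):
        self.deps[feature].add(prerequisite)

    def prerequisites(self, feature):
        """All direct and transitive faculties that `feature` depends on."""
        seen, stack = set(), list(self.deps[feature])
        while stack:
            f = stack.pop()
            if f not in seen:
                seen.add(f)
                stack.extend(self.deps[f])
        return seen

# Hypothetical features, following the melody example above.
g = FeatureGraph()
g.add_dependency("melody_recognition", "pitch_discrimination")
g.add_dependency("melody_recognition", "interval_discrimination")
g.add_dependency("pitch_discrimination", "auditory_sensing")
g.add_dependency("interval_discrimination", "auditory_sensing")

print(g.prerequisites("melody_recognition"))
# -> {'pitch_discrimination', 'interval_discrimination', 'auditory_sensing'}
```

The point of the structure isn't the particular features chosen, but that once dependencies are explicit, you can ask which lower-level faculties must be in place before a higher-level one can even be tested.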
Once a hypothesized set of features and their dependencies is formulated, ways of evaluating those features could be devised. For humans, tests measuring various features of intelligence have been fairly well worked out, and their results correlate with one another reasonably well. But because they are human-centric, such tools do not apply well in a system-neutral way. I'd envision a battery of both passive and active measurements for the various features of intelligence. By passive, I mean measurements based purely on observation of the target system in a particular environment. Active measurements would require interaction between the observer and the target system, such as subjecting the target system to particular tests. The line between these would not necessarily be a hard one.
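As an illustration only (none of these names or interfaces come from the post, and the `respond` method on the target system is an assumed hypothetical hook), a battery along these lines might be organized something like this:

```python
from abc import ABC, abstractmethod

class Measurement(ABC):
    """One probe of a single hypothesized feature of intelligence."""

    def __init__(self, feature):
        self.feature = feature

    @abstractmethod
    def score(self, system, environment):
        """Return a numeric score for this feature on the target system."""

class PassiveMeasurement(Measurement):
    """Scores the system purely by observing its behaviour in an environment."""

    def __init__(self, feature, observe):
        super().__init__(feature)
        self.observe = observe  # callable(system, environment) -> float

    def score(self, system, environment):
        return self.observe(system, environment)  # observation only, no intervention

class ActiveMeasurement(Measurement):
    """Scores the system by administering a test and grading its response."""

    def __init__(self, feature, make_test, grade):
        super().__init__(feature)
        self.make_test = make_test  # callable(environment) -> stimulus
        self.grade = grade          # callable(response) -> float

    def score(self, system, environment):
        stimulus = self.make_test(environment)
        response = system.respond(stimulus)  # assumes the system can be probed directly
        return self.grade(response)

def run_battery(system, environment, battery):
    """Return a per-feature profile rather than a single scalar score."""
    return {m.feature: m.score(system, environment) for m in battery}
```

The output of `run_battery` is deliberately a profile over features rather than one number, which is what a system-neutral, multi-dimensional view of intelligence seems to require.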
Such a roadmap could be debated, discussed, and revised, but it would provide a theoretical trajectory for artificial intelligence, and an incremental way of assessing the breadth and depth of any particular system.
It's been 58 years since Turing published the paper posing his famous test, but in many ways we're still stuck in the 20th century, due to a lack of strong theoretical grounding. The Turing Test avoids the difficult but necessary task of trying to define the target features of the type of system we want to understand and build. The imitation game trivializes the complexities and difficulties of the problem.
I've read some pretty good books attempting to define intelligence. I liked Baum's What is Thought? and Hawkins' On Intelligence. There are many others, but none that I know of provides a way of assessing intelligence across different kinds of systems. And without that, how do we know to what extent we are progressing?
3 comments:
While I agree that the Turing test surely isn't worth much with regard to how to create artificial intelligence, on the other hand I don't think that measuring individual differences in "intelligence" would be a worthwhile endeavour for AI scientists either. In particular, your question:
"In what is a particular human more or less intelligent than another human?"
I wager that the kind of "intelligence" you measure with IQ tests does not help at all in constructing artificial intelligence. IQ tests do not measure how you communicate, how you learn anything new, etc. Supposedly any human being has the relevant abilities (in an all-or-nothing way), but some are better. In AI, you don't need to construct a machine that's better than the others; you need one that can do useful things at all. So basically what psychologists call intelligence is a different thing from what is called intelligence in AI.
I'd agree that human IQ tests are relatively crude tools, but surely they capture some of the features of intelligence that an AI researcher would ultimately want to instill in their creation, e.g., pattern recognition, pattern completion, capacity for analogy, etc.
You mention communication and learning. To what extent are these features integral to intelligence? I think that's exactly the kind of question that hasn't been well-considered and well-articulated in AI research.
You seem to be arguing that the notion of intelligence studied by psychologists is qualitatively different from the one pursued by AI researchers. My argument is that there has to be significant overlap, along with what animal cognition researchers are studying, and that we are sorely lacking a coherent theory of the shared and distinct features.
my statement "what psychologists are calling intelligence is a different thing from what is called intelligence in AI" was maybe too extreme, but I would still argue that an intelligence test does not help you too much in constructing an AI. An intelligence test (in the psychologist´s way) does not really test if you are able to make an analogy (to take your example) but really how fast you are and what kinds of unusual words you know; if you don't know how to analogize at all, you want be able to use the test.
Ok, that said, I totally back up your main argument: There has not been enough done on integrating intelligence research in humans, animal cognition, and AI.