If the long-term goal of the field of artificial intelligence is to build a system that exhibits general intelligence comparable to or greater than that of a human, Turing's test basically says that we're not going to worry about the fundamental qualities of the thing we're trying to emulate. Instead, we'll designate an arbitrary, very difficult task that everyone can agree requires a lot of the thing we're not defining. He might as well have posed the CEO Test: if a computing machine can become the head of a major corporation, then it can be said to be intelligent. Or the Professor Test: if a computing machine can be hired and carry out all the required duties of a university professor, then it can be said to be intelligent. How are any of these useful in framing the problem at hand and charting a course for development of an intelligent system? Well, they're not.
And that's one reason why artificial intelligence has produced far more hype than results. The early pioneers of the field basically shied away from answering the difficult but necessary theoretical questions. If you want to build an intelligent system, you can't just brush aside the whole notion of what it means to be intelligent, set some extremely difficult and arbitrary target, and flounder towards it. And yet that's how much of AI research has been done.
Here's what I'd like to see: a book that methodically lays out a system-neutral approach to intelligence measurement. What do I mean by that? Consider the following related questions:
- In what ways is a particular human more or less intelligent than another human?
- In what ways is a human more or less intelligent than a chimpanzee? A dog than a mouse? An octopus than an ant?
- How do we measure the progress of the field of artificial intelligence? If a computing machine were intelligent, how would we know it?
- If a portal opened up on Earth, and a group of aliens walked through, how would we evaluate their cognitive capabilities?
We should have a reasonably well-defined theoretical framework that provides answers to these sorts of questions in a coherent manner. If we hold that humans are the only systems capable of exhibiting intelligence, then the task is easy. But if we take a functionalist approach and hold that intelligence is a complex, multi-dimensional feature of brains, but that other brains built out of different stuff can also exhibit it, then we need to develop answers to the questions above.
I'd like to see a theory that tries to define the features of intelligence in a modular, hierarchical way, attempting to determine their dependencies. For example, tasks involving melody recognition depend upon faculties for sensing sound and for discriminating pitch and intervals. Solving a task such as opening a locked box with a key in order to get at a goal inside requires a whole set of interdependent faculties.
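To make the idea concrete, here's a minimal sketch of how such a dependency structure might be encoded: faculties as nodes in a directed acyclic graph, each mapped to its prerequisites. The faculty names and dependencies below are purely illustrative, not a proposed taxonomy.

```python
from graphlib import TopologicalSorter

# Each faculty maps to the set of faculties it depends on.
# These names and edges are hypothetical, for illustration only.
faculties = {
    "pitch_discrimination": set(),
    "interval_discrimination": {"pitch_discrimination"},
    "melody_recognition": {"pitch_discrimination", "interval_discrimination"},
    "object_permanence": set(),
    "tool_use": {"object_permanence"},
    "locked_box_task": {"tool_use", "object_permanence"},
}

# A topological ordering gives one valid sequence in which to evaluate
# faculties: no faculty is tested before its prerequisites.
order = list(TopologicalSorter(faculties).static_order())
print(order)
```

One payoff of encoding the hierarchy this way is that a failure on a high-level task (the locked box) can be traced down the graph to see which prerequisite faculty is missing.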
Once a hypothesized set of features and their dependencies is formulated, ways of evaluating features could be devised. For humans, tests measuring various features of intelligence have been reasonably worked out, and their results correlate with one another reasonably well. But because they are human-centric, such tools do not apply well in a system-neutral way. I'd envision a battery of both passive and active measurements for the various features of intelligence. By passive, I mean measurements based purely on observation of the target system in a particular environment. Active measurements would require interaction between the observer and the target system, such as subjecting the target system to particular tests. The line between these would not necessarily be a hard one.
With such a roadmap in hand, the tool could be debated, discussed, and revised, but it would provide a theoretical trajectory for artificial intelligence, and an incremental way of assessing the breadth and depth of any particular system.
It's been 58 years since Turing published his famous paper posing his famous test, but in many ways we're still stuck in the 20th century, due to a lack of strong theoretical grounding. The Turing Test avoids the difficult but necessary task of trying to define the target features of the type of system we want to understand and build. The imitation game trivializes the complexities and difficulties of the problem.
I've read some pretty good books attempting to define intelligence. I liked Baum's What is Thought? and Hawkins' On Intelligence. There are many others, but none I know of provides a way of assessing intelligence across different kinds of systems. And without that, how do we know to what extent we are progressing?