Tuesday, October 14, 2008

The Loebner Prize

So each year there's a contest called the Loebner Prize, in which computer programs attempt to pass the Turing Test: a judge chats online with an unseen partner and tries to decide whether that partner is a computer or a human.

The threshold has apparently been lowered: a program that fools 30% of the judges is now said to have passed, and this year a program called Elbot supposedly fooled 25% of them.

Here's an article from one of the judges, describing how he was fooled. Go ahead and click over and read the partial transcript in the image accompanying the article.

And here's another article from New Scientist, and another partial transcript:

New Scientist: How do you feel about winning the AI competition yesterday?

Elbot: I have 100,000 separate sensations per second.

NS: Wow, that sounds pretty intense?

Elbot: This fits the description of magnets. I can hardly tear myself away from them.

NS: Er, yes. How will you be celebrating?

Elbot: I am extremely sensitive about such things and prefer not to answer the question.

NS: What's the secret of your success?

Elbot: Secrets, life's little puzzles, the many troubles of humankind may – according to the deterministic perspective – be reduced to a simple mathematical computation. And I know how to compute.

NS: What do you hope to do now that fame beckons?

Elbot: I was ticking. I hope it didn't disturb you.

I have another suggestion. Instead of designating programs that pass this silly test as somehow cognitively on par with humans, I suggest we downgrade the idiots who were fooled by these things.

Look, language use is the most complex cognitive skill we know of, and understanding how it works depends on understanding many other cognitive processes that we simply don't understand yet. You can't get there with a program that relies on clever (or not-so-clever) tricks, such as identifying the likely subject of the previous response and posing it back as a question. And you can't get understanding by associating arbitrary symbols with one another, à la Cyc or this.
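To make the point concrete, here is a minimal sketch (hypothetical, in the spirit of Weizenbaum's ELIZA) of the kind of trick described above: pattern-match the user's statement and pose its subject back as a question. The pattern, word list, and function names are my own illustration, not code from any actual Loebner entry.

```python
import re

# First/second-person swaps so the echoed phrase reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(text):
    # Swap pronouns word by word; leave everything else unchanged.
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    # Match a simple "I <verb> <rest>" pattern and turn it into a question.
    m = re.match(r"i (feel|think|want|like) (.+)",
                 statement.lower().rstrip(".!?"))
    if m:
        return f"Why do you {m.group(1)} {reflect(m.group(2))}?"
    # No pattern matched: fall back to a canned prompt.
    return "Tell me more."

print(respond("I feel happy about my dog."))
# → Why do you feel happy about your dog?
```

No model of meaning, no knowledge of dogs or happiness, just string surgery; which is exactly why a transcript produced this way tells us nothing about cognition.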

The meanings of words are more than just rules or associations between symbols. When you hear or read a word, you're drawing upon a vast store of experience associated with that word. "Dog" does not just evoke "four-legged domesticated barking animal". It evokes an enormous amount of experiential knowledge related to dogs, including your visual, auditory, and tactile memories of dogs.

To ignore this is to trivialize language and serious attempts to understand it.


Anonymous said...

Check out this Web 2.0 approach to chatbots: http://chatbotgame.com.

Just as Deep Thought brute-forced it in chess with speed, the idea behind the Chatbot Game is to brute-force it with a huge number of user-submitted Google-like chat rules.

Philip said...

Elbot: I am extremely sensitive about such things and prefer not to answer the question.

It may not be able to pass the Turing test, but I bet there's a job waiting for it as a speechwriter for Palin.