Tuesday, September 1, 2009

Noel Sharkey on AI

I just came across this interview with Noel Sharkey (whom I'd never heard of before). Some of it is valid, but he says some pretty silly things.

Case in point: I thought this answer was the silliest:

Are we close to building a machine that can meaningfully be described as sentient?

I'm an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to "believers" in the computational theory of mind, some of their arguments are almost religious. They say, "What else could there be? Do you think mind is supernatural?" But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.

Okay, the computational theory of mind is not "merely an assumption". It is built on evidence, like any good theory. And it's not "religious" to ask for an alternative theory if someone says a particular theory is crap. If this guy doesn't think that the brain receives input from the environment and performs information processing on that input, then what is his alternative hypothesis?

And I'm not sure what he's talking about in that last sentence, either. Any physical system can be simulated computationally. The fidelity of the simulation is limited by the complexity of the model system and the computational resources available. If what we're interested in is the algorithm executed by the simulated hardware, we should be able to recreate the algorithms processed by the brain. In other words, no, a simulated rainstorm can't make you wet, but a simulated abacus can perform calculations just like a physical one, and a simulated chess player can kick your ass at chess. I don't know of a reasonable theoretical argument for why the function of the brain can't be emulated with a computer.
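The abacus point can be made concrete. Here's a minimal sketch in Python (the class and its methods are illustrative, not any standard library): a "simulated" abacus is still a genuine calculator, because what matters is the algorithm being executed, not the physical substrate executing it.

```python
class Abacus:
    """Simulates a decimal abacus as a list of bead counts, one per rod (column)."""

    def __init__(self, rods=8):
        self.rods = [0] * rods  # rods[0] is the ones column

    def add(self, n):
        """Add n by pushing beads onto the ones rod, then carrying
        whenever a rod holds ten or more beads."""
        self.rods[0] += n
        for i in range(len(self.rods) - 1):
            carry, self.rods[i] = divmod(self.rods[i], 10)
            self.rods[i + 1] += carry
        return self

    def value(self):
        """Read off the number represented by the bead positions."""
        return sum(d * 10 ** i for i, d in enumerate(self.rods))

a = Abacus()
a.add(347).add(255)
print(a.value())  # 602 -- the same sum a physical abacus would show
```

The simulation doesn't merely resemble arithmetic; it performs it. The same sum comes out whether the carries are propagated by sliding beads or by updating a list.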

A reasonable answer to the question would have been: "Probably not, although there are no theoretical roadblocks to prevent it as an eventuality."


Anonymous said...

This sort of argument assumes that the particular organic composition of the human brain produces a particular quality of effects (which we term "mind") that cannot be reproduced by artificial means. Artificial means might simulate some of these effects, but cannot (so the story goes) manifest them in exactly the same way. He is, in philosophy of AI terms, an anti-functionalist or pro-physicalist. See here:


BTW, his comment about functionalism being akin to religion is an obvious attempt to unsettle functionalist colleagues by suggesting they are relying on a form of faith that substitutes doctrine for proof.

Anonymous said...

Thanks for posting - interesting stuff.

However, I must say, what an odd interpretation of the paragraph you quoted from the interview. I do not see any rejection of the computational theory of mind here. I only see some sharp, sceptical questioning of an overblown theory.

I get what he means by saying it is an assumption. All theories have assumptions - basic philosophy of science. There has never been an empirical test of the assumption that mind is computational (and that is different from saying that it is computable).

I also had to smile when I realised your argument is almost exactly like the ones the interviewee referred to in the quote. You ask for the alternative as well. Isn't it a bit wonky to think that people can only criticise a theory if they have an alternative?

Finally, you say any physical system can be simulated but the guy in the article said 'recreated'. Can you see the difference? You can simulate gravity or a rainstorm on a computer but you cannot recreate them.

Sorry to pounce on your blog but I just stumbled into it and felt that it needed a balancing comment.