Right now there's a human vs. computer poker competition taking place:
Developed by an artificial intelligence group at the University of Alberta in Canada, Polaris will be pitted against several professionals at the Rio Hotel between July 3rd and 6th. Its human opponents will include Stoxpoker.com coaches Nick Grundzien and Ijay Palansky along with Matt Hawrilenko, all of whom have well over $1 million in lifetime winnings from playing poker.
They tried this last year, and the program did reasonably well but didn't win. The reason poker might be a better benchmark for AI is that it involves making decisions with incomplete information (e.g., your opponents' hands). It also requires probabilistic reasoning (unlike chess, which is completely deterministic). And it has been shown that any optimal poker strategy requires some bluffing, which makes sense: if you can gain an advantage by misrepresenting your strength (either under- or over-representing it), you can lead opponents into giving you more of their chips. But knowing when and how to bluff is difficult.
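To see why equilibrium play has to include bluffs, here's a toy version of the standard river-betting exercise from game theory (this is a textbook model, not anything specific to Polaris): a bettor bets into a pot, and to keep the caller from profitably always calling or always folding, the betting range must mix value bets and bluffs in a particular ratio.

```python
def equilibrium_bluff_fraction(pot, bet):
    """Fraction of the betting range that should be bluffs.

    The caller risks `bet` to win `pot + bet`, so the caller is
    indifferent between calling and folding exactly when
    P(bluff) * (pot + bet) == P(value) * bet,
    which gives a bluff fraction of bet / (pot + 2 * bet).
    """
    return bet / (pot + 2 * bet)

# With a pot-sized bet, one bet in three should be a bluff.
print(equilibrium_bluff_fraction(pot=1, bet=1))  # → 0.3333...
```

Bluff any more often than that and your opponent profits by always calling; any less, and they profit by always folding.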
So it's an interesting story, but I thought this quotation near the end was pretty dumb:
"It's possible, given enough computing power, for computers to play 'perfectly,' where over a long enough match, the program cannot lose money," said associate professor Michael Bowling. "Humans will always make some mistakes, meaning the program will have an advantage."
I ran into this same fallacy when I read a paper about Tic-Tac-Toe several years back, in which they argued that a program that never loses at Tic-Tac-Toe is "playing optimally". Well, no. If you want to define it that way, good for you, but it's a very poor definition.
To play "optimally" or "perfectly" doesn't just mean that you avoid losing; it means you also maximize your winnings against weaker opponents. I don't think we'd call a poker player who never lost money but barely made any a "perfect" player.
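The distinction shows up even in rock-paper-scissors: playing uniformly at random is the equilibrium strategy, so you can't lose on average, but it also wins nothing from an opponent's mistakes. A quick sketch (the biased opponent here is made up for illustration):

```python
# Rock-paper-scissors expected value: +1 for a win, 0 for a tie, -1 for a loss.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def expected_value(my_strategy, opp_strategy):
    """Expected payoff of one mixed strategy against another."""
    ev = 0.0
    for mine, p in my_strategy.items():
        for theirs, q in opp_strategy.items():
            if BEATS[mine] == theirs:
                ev += p * q   # I win this matchup
            elif BEATS[theirs] == mine:
                ev -= p * q   # I lose this matchup
    return ev

uniform = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
# A hypothetical weak opponent who over-plays rock.
biased = {"rock": 0.5, "paper": 0.25, "scissors": 0.25}

print(expected_value(uniform, biased))         # ~0: never loses, never profits
print(expected_value({"paper": 1.0}, biased))  # 0.25: exploits the bias
```

The "can't lose" strategy and the "maximally profitable" strategy are simply different objects, and Bowling's quotation only describes the first one.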