Thursday, December 31, 2009

Multiplayer Coop Games

The new Super Mario Bros. game keeps making a bunch of Top 10 lists for 2009 (#3 on Wired's list and #2 on Gamasutra's).

I just have to reiterate: NO.

As a game that is billed as cooperatively multiplayer, this is one of the very worst-designed co-op games I've ever played. Characters can't occupy the same space, and most levels contain surfaces or tunnels that only allow a single character to stand on or squeeze through. Even if you are trying to help each other out, on any given level you are more likely to accidentally push a friendly character off a ledge to their death, steal their power-ups, or otherwise screw them up. I actually got a chance to play with 4 people over Christmas, and as I suspected, the result was even more hideous. Any time one character dies, the action pauses for half a second or so, just as in single-player mode. If you are about to execute a jump that needs good timing (and virtually every jump in this game requires good timing), that little time jag will throw you off and, you guessed it, cause you to die.

Another fun-killer for co-op is the fact that the characters all look amazingly alike, so it is extremely easy to accidentally follow the wrong character for a second or two, thinking you are controlling them, while you are actually running your character into a pit of lava. Fun!

If inadvertent death and frustration are what you look for in a co-op game, then the new Super Mario Bros. is your cup of tea. Again, single-player or competitive play most likely works much better, but some of us actually want to play alongside our friends.

If you want to play a game that actually implements co-op multiplayer in a good way, this year that game is Trine. It's a beautiful side-scrolling puzzle game with wonderful attention to detail. The art design is amazing, but the gameplay and puzzles are great too. In single-player mode you can toggle between three characters: a wizard, a thief, and a knight, each with their own strengths and weaknesses. The wizard isn't just a magic version of an archer; he can create metal boxes and platforms and move physical objects. To complete any given level, you have to use all three characters at various times.

And multiplayer is great. If you're playing with two people, the two of you can be any combination of two of the three characters (but not the same character). So if your friend is the wizard, you can toggle between the thief and the knight. If you are currently the thief, they can transform into either the wizard or the knight.

Characters have distinct looks, so they are not easily confused. And they do not occupy the same physical space, so they can stand on the same narrow platform without knocking each other off. Gameplay is designed so that adding players enhances the experience. One of you can hold off attacking skeletons while the other builds a bridge to get across a chasm. You actually feel like you are accomplishing goals together, not getting in each other's way. This is the fundamental principle that the designers of the new Super Mario Bros. forgot (or just ignored).

The only real drawback to Trine is the controller setup. It probably works great on a PlayStation, but on the PC you need Xbox-compatible controllers for additional players. A networked version would have been great, too. But purely in terms of game design, Trine is a perfect example of how to do co-op multiplayer right. And the new Super Mario Bros. is exactly how to screw it up.

Sunday, December 27, 2009

Avatar

Finally got around to seeing Avatar on Christmas. My impressions were much in line with the reviews I'd read...pretty to look at with a silly story.

First, the visuals: It's easy these days to get jaded when it comes to special effects. They've just gotten so good. I don't think Avatar revolutionizes filmmaking (as some of the more hyperbolic reviewers have proclaimed), but it does take things up a notch. This was my first modern 3D movie, and the 3D was done extremely well. No gimmicky scenes to remind you that you are in a 3D movie, just an extra element added to the already great visuals. The effects were good enough to help me suspend disbelief and make me feel like I really was looking at an alien world.

The culture and language of the Na'vi, the biology of the various species on the planet, and the technology of the humans were all very well thought out. I liked the way the Na'vi had co-evolved with the various species on Pandora in a way that lets them interface directly with those species (I had used a similar idea in a novel that never really got off the ground). The creatures themselves were inventive and fun to watch. The human tech wasn't as creative, but it looked believable and created an immersive experience.

Unfortunately, the story was not nearly as original or well thought out. It was also moralizing and preachy. Humans have traveled all the way to Pandora to rape and exploit its natural resources, in particular a ridiculously named mineral called "unobtainium". We never learn why humans want this stuff, only that it is insanely valuable. Might it have made the military-industrial cardboard cutouts seem a bit more human, and the story a bit more interesting, if this mineral were vital for continued human survival? For all we know, rich people back on Earth just like to make earrings out of the stuff.

As it is, the corporate stooge is a generic bad guy. The military commander is an over-the-top bad guy with claw marks on his head. And the only humans with an ounce of decency are the protagonist, one military defector, and a few scientists. I don't mind simple stories with clearly-marked good guys and bad guys. If you're going to do this, I'd suggest divorcing it from any sort of thinly-veiled, half-baked political preachiness. Just make the bad guy wear black and look ominous and want something simple like ruling the entire galaxy. Cameron, unfortunately, didn't do this...he so obviously wanted to make some sort of commentary on the modern state of things (the loony military dude even gives a paranoid rant where he says something like "We gotta protect ourselves by launching a preemptive strike"). The phrase "shock and awe" was also used. The corporate honcho says things like "We've built them schools, hospitals, and roads...what more do they want?" This is not subtlety. This is stupidity. Pandora is not the Middle East...if only it were.

The politics of the film frankly made me want to barf. If you're going to make a film that says something interesting about colonization and exploitation, then make that film. If you want to make a fun, awesome sci-fi adventure, don't beat me over the fucking skull with your infantile geo-political caricatures of reality.

Final verdict:

Direction, special effects, design: A
Story: F

Still, it's worth seeing.

Monday, December 14, 2009

Game Review: New Super Mario Bros. Wii

The new Super Mario Brothers game for the Wii apparently just won Best Wii Game at the annual video game awards. This ought to be an indication of how crappy the selection of games is for the Wii.

I've played quite a few levels now with my girlfriend. We're always on the lookout for good co-op games, because they just don't make that many, and the ones that do get made generally aren't that good. We were excited when we heard the new Super Mario game would be cooperative, with up to 4 players. Man, were we set up for some disappointment.

First of all, if you're playing with Mario and Luigi, they can't occupy the same space, i.e., they bounce off each other and can push one another off ledges. This is not good. More often than not, you interfere with one another rather than helping each other. Often there are limited places to jump to survive, and one player almost inevitably prevents the other from making the jump or gets pushed off into oblivion themselves. Players can jump on top of each other to reach places they otherwise wouldn't be able to, but this doesn't really help because once one player gets up there, the other one is stuck below. So one player might get a bunch of goodies, like power-ups and coins, while the other just gets his face stepped on.

The game works much better if you're playing with multiple players but treating it as a competition. It does NOT function well as a cooperative game. It is much easier to fug up your co-player than it is to assist them in just about any aspect of the game.

A couple of simple but significant changes I would have made to the game design to make it, you know, actually cooperative:

1) Overlapping players: Instead of bouncing off each other, you overlap one another. This might make it difficult to see your character if they're behind another one, but that would be a much more desirable tradeoff than trying to jump on a 1-inch ledge that's already occupied by your partner, only to either shove them off to death or bounce off their head to death yourself.

2) Tethers: This would make the game truly co-op and a hell of a lot of fun. Have the characters tethered to one another with a flexible bungee cord. If both players are on the same screen, there is no effect. If one player lags behind, horizontally or vertically, the other player can pull them up to their current location. This should extend to 3 and 4 players as well. On vertically-scrolling levels, there is endless frustration when one player falls too far and dies, rather than simply falling to the last stable platform as they would in single-player. A simple tether system would alleviate this aggravation and make the game feel a lot more like people working as a team. (I've sketched the mechanic in code below.)
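Here's a rough sketch of how the tether might work, in Java since that's what I'm coding Android stuff in these days. This is just me noodling, obviously, not Nintendo's code; the class names, tether length, and reel speed are all made up:

```java
// A rough sketch of the tether mechanic. All names and numbers here
// are placeholders, purely for illustration.
public class Tether {
    static final float TETHER_LENGTH = 300f; // max slack, in pixels (arbitrary)
    static final float REEL_SPEED = 8f;      // max pull per frame (arbitrary)

    // Minimal stand-in for a player's position.
    static class Player {
        float x, y;
        Player(float x, float y) { this.x = x; this.y = y; }
    }

    // Call once per frame: if the trailing player has drifted beyond the
    // tether length, pull them toward the leader like a bungee cord.
    static void applyTether(Player leader, Player trailer) {
        float dx = leader.x - trailer.x;
        float dy = leader.y - trailer.y;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        if (dist > TETHER_LENGTH) {
            // Reel in along the line between the players, but never
            // faster than REEL_SPEED and never past the slack limit.
            float pull = Math.min(dist - TETHER_LENGTH, REEL_SPEED);
            trailer.x += (dx / dist) * pull;
            trailer.y += (dy / dist) * pull;
        }
    }
}
```

Run that every frame for each trailing player and nobody falls off the bottom of a vertically-scrolling level to a cheap death; the worst that happens is you get yanked around a bit.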

There are some neat little additions to the game. The ability to shoot snowballs and freeze opponents is a cool power-up. But the game feels mostly like an early-'90s retro side-scroller. This is fine...but what's unforgivable is billing this game as co-op and then implementing it in such a horrible, thoughtless way. As the game is currently designed, it would be much more fun to take turns with a partner in single-player mode rather than playing with multiple players, which is, frankly, dumb-fuck stupid.

Nintendo gets a B+ for single-player mode and a big-ass F for co-op.

One good thing might come out of it, though: the implementation of a tether system in one of my own games, either a side-scroller or perhaps a co-op mountain-climbing game.

Tuesday, October 27, 2009

Not the Greatest Evolution Book on Earth

So I've been reading Richard Dawkins' new book The Greatest Show on Earth: The Evidence for Evolution, and I have to say I'm underwhelmed.

I've got one main beef with the book, namely that Dawkins clearly states the purpose of the book in the Preface, and then does a poor job of following through. He starts out by mentioning a slew of his previous books before saying:
Looking back on those books, I realized that the evidence for evolution itself was nowhere explicitly set out, and that this was a serious gap that I needed to close.
Sounds good, right? Books like The Blind Watchmaker and Climbing Mount Improbable are very good, but they explain the mechanisms of evolution and the conceptual issues of understanding how evolution works, such as seeing natural history through the lens of deep time and incremental change.

So what does Dawkins do, then? Well, he starts in on the conceptual problems of understanding evolution. Chapter 1 is devoted to the semantics of the word "theory" and how scientists use it as opposed to its everyday use. I thought, "Okay, fine...now in Chapter 2 he'll start hammering on about the evidence from the fossil record and molecular genetics". Nope.

Guess what he does in Chapter 2? He basically retreads the line of argumentation from On the Origin of Species. Namely: "Look at the powerful change and diversity brought about by artificial selection (i.e. selective breeding among domesticated plants and animals). Look at all the different breeds of dogs that originated from a single species, the wolf." It's part of a strategy he calls "softening up" the reader: once you grasp how powerful artificial selection is, it's a much shorter step to buying into evolution by means of natural selection. It worked pretty well for Darwin, but Darwin didn't have many alternative strategies for convincing his readers. Genetics hadn't even been formalized or understood. There was very little of a fossil record, especially with regard to human ancestry.

But from a modern perspective, why start out by retreading a line of argumentation from 150 years ago, especially when you have giant mountains of hard evidence with which to convince the reader? It's very weak. If I were either a creationist or sitting on the fence, I would be utterly frustrated with the book by this point.

There's no need to "soften up" your readers. Hit them square between the damn eyes with the indisputable, incontrovertible evidence that all life on this planet shares a common ancestry. You can either fill in the conceptual arguments later, or better yet, refer them to your previous books.

Dawkins compares the historical sciences, like evolutionary biology and geology, to the work of a detective arriving at the scene of a crime. We have powerful evidence in the form of effects, from which we can solidly determine the causes, even though we weren't around when the actual event happened. We can determine very accurately how the Grand Canyon formed, and how long it took, based on an understanding of erosion and other physical processes, just as we can convict a murderer with a clear conscience based on overwhelming physical evidence (DNA at the scene, the bullet matching the suspect's gun, gunpowder patterns on the suspect's hands and arms, blood in their car and their house, and on and on). If the evidence is overwhelming, we have no problem confidently making the correct inference, even if we don't have an eyewitness or a confession.

What Dawkins needed to do, right out of the gate, is present the damn evidence. Attempting to overcome the reader's conceptual hurdles to understanding the mechanisms of evolution first makes Dawkins seem like he doesn't have a case and is stalling.

Chapter 1 needed to be a summary of the enormous amount of physical evidence we have from many branches of science that converge irrefutably on the fact that all life on this planet shares a common ancestry in a giant family tree that took billions of years to unfold. Talk about the overwhelming fossil evidence and the evidence from molecular genetics. And then work back from there. He probably gets to this later, but I'm afraid he probably loses a lot of the people he wants to convince very early on.

Tuesday, October 6, 2009

Staged Muslim Discrimination at the Czech Stop

Here's a video clip from an ABC News report on discrimination against Muslims in America:



They say the majority of customers either supported or did nothing regarding the discrimination.

6 supported the discriminatory behavior
13 spoke out against the behavior
22 said and did nothing

I'm not sure how damning this is. First of all, it was staged at the Czech Stop in West, Texas, which is between Waco and Austin. I used to stop there all the time when I attended Baylor as an undergrad and would take trips home.

It's a gas station/bakery, and a lot of people just want to come in, pay for their gas, maybe buy a kolache, and get the hell out...not get embroiled in a fight for social justice.

I think the more telling figure is that over twice as many people complained as supported the behavior, and this is smack dab in the middle of Texas. The story could have spun the results a number of different ways, but I'm pretty damned encouraged by their little experiment.

Oh, and if you're ever driving along I-35 between Austin and Waco, you really should stop there. I recommend the Spicy Hot Chubbies. No, I'm not making that up.

Sunday, September 20, 2009

Grocery Paradoxes

The other day I was in a local Asian market here in Lafayette, and I came across this jar of seeds:


Are they really pumpkin seeds, and the picture is wrong? Or are they watermelon seeds, and the text is wrong? Or do they come from a strange land where watermelons are called "pumpkins"?

Dunno. I would have had to buy them to find out. They were copiously coated with some kind of red gunk. No thanks.

And then I noticed this in my local Albertson's:

The label for the aisle is "catsup":


But you know how many of the actual bottles of the stuff were named "catsup"? Absolutely zero.

Friday, September 18, 2009

Literature as a Source of Knowledge

Jason Rosenhouse has an interesting post about whether or not fiction is a valid way of knowing something about the world.

Ultimately I agree with him (except for his ranking of Star Trek captains). Yes, literature contains truths about the human condition and about the world in general. Otherwise it would have a lot less value. But it also often contains falsehoods, or overgeneralizations.

Literature (and narrative media in general) can be extremely useful to help elucidate, proselytize, or reinforce existing beliefs. But I don't think it functions as a primary source of knowledge. A metaphor can help reinforce some aspects of how the world works. For example, one could tell a story about how white blood cells are the knights of the realm, ever vigilant in capturing and slaying unwanted intruders. Many things about the metaphor may ring true, and align well with the actual state of affairs. But we can't know how the immune system works from such stories. That takes painstaking investigation of the phenomenon itself.

Something in a work of fiction might "ring true", but there's no way to validate it within the framework of the story itself. You'd be surprised how many people overseas think that every American owns a gun from watching our movies. If I gleaned universal truths from Judd Apatow films, I'd live in a world where fat, unemployed stoner shlubs hooked up with super-hot TV personalities and lived happily ever after. How do I know the world does not work this way? By comparing the vision of the story with the actual state of affairs.

So I think it makes the most sense to view literature, and really all art, as a way of reframing truths to make them more interesting, accessible, etc., but ultimately not as a source of truth.

Tuesday, September 1, 2009

Noel Sharkey on AI

I just came across this interview with Noel Sharkey (whom I'd never heard of before). Some of it is valid, but he says some pretty silly things.

Case in point, I thought this particular answer was the silliest:

Are we close to building a machine that can meaningfully be described as sentient?

I'm an empirical kind of guy, and there is just no evidence of an artificial toehold in sentience. It is often forgotten that the idea of mind or brain as computational is merely an assumption, not a truth. When I point this out to "believers" in the computational theory of mind, some of their arguments are almost religious. They say, "What else could there be? Do you think mind is supernatural?" But accepting mind as a physical entity does not tell us what kind of physical entity it is. It could be a physical system that cannot be recreated by a computer.

Okay, the computational theory of mind is not "merely an assumption". It is built on evidence, like any good theory. And it's not "religious" to ask for an alternative theory if someone says a particular theory is crap. If this guy doesn't think that the brain receives input from the environment and performs information processing on that input, then what is his alternative hypothesis?

And I'm not sure what he's talking about in that last sentence, either. Any physical system can be simulated computationally. The fidelity of the simulation is limited by the complexity of the model system and the computational resources available. If what we're interested in is the algorithm executed by the simulated hardware, we should be able to recreate the algorithms processed by the brain. In other words, no, a simulated rainstorm can't make you wet, but a simulated abacus can perform calculations just like a physical one, and a simulated chess player can kick your ass at chess. I don't know of a reasonable theoretical argument for why the function of the brain can't be emulated with a computer.
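To make the abacus point concrete, here's a trivial simulated abacus (my own toy example, nothing from the interview). There's no wood and there are no beads, just bits, but the sums it produces are every bit as real as the ones you'd get from pushing physical beads around:

```java
// A simulated abacus: not the physical object, but the same computation.
public class Abacus {
    // Each rod holds 0-9 beads; rods[0] is the ones place.
    private final int[] rods;

    public Abacus(int numRods) {
        rods = new int[numRods];
    }

    // Add a value to the ones rod, carrying leftward exactly as
    // you would push beads on a physical abacus.
    public void add(int value) {
        rods[0] += value;
        for (int i = 0; i < rods.length - 1; i++) {
            rods[i + 1] += rods[i] / 10; // carry to the next rod
            rods[i] %= 10;
        }
    }

    // Read the number off the rods, most significant rod first.
    public int readValue() {
        int value = 0;
        for (int i = rods.length - 1; i >= 0; i--) {
            value = value * 10 + rods[i];
        }
        return value;
    }

    public static void main(String[] args) {
        Abacus a = new Abacus(4);
        a.add(274);
        a.add(158);
        System.out.println(a.readValue()); // 432, same answer as wooden beads
    }
}
```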

A reasonable answer to the question would have been: "Probably not, although there are no theoretical roadblocks to prevent it as an eventuality."

Monday, August 24, 2009

Attack of the Nutria...Plus, a Rainbow

Laurie and I were out for a walk in a local park here in Lafayette this evening, and we brought some bread to feed the ducks. I'd seen nutria at this park before, but they'd never approached very close. They must have been hungry, because we had a whole pack of them come up and beg for bread along with the ducks.

In case you don't know what a nutria is, here's the Wikipedia entry. They're basically big rats that live in the water.

So, I snapped a few action shots with my G1. Enjoy.






Oh, and I found this picture on my phone that I'd forgotten about, when we had a rainbow a while back. Actually, there was a double rainbow, but it didn't show up in this pic:

Saturday, August 22, 2009

District 9

I went to see District 9 yesterday. The movie has enjoyed critical and financial success so far, and I guess my expectations were fairly high.

I can't exactly say that I enjoyed the movie, though. It was definitely an original mix of elements, though it borrowed heavily from a lot of SF source material. The plot, as many have pointed out, was similar to the movie/TV series Alien Nation. It also had influences from Cronenberg's version of The Fly, RoboCop, and others, especially the recent technique of making the unreal seem more real by employing a pseudo-documentary style, e.g. Blair Witch and Cloverfield.

The film definitely kept you engaged. It was alternately grotesque and action-packed. But the acting was relatively poor; the plot was fragmentary and incomplete; and the characterization was pretty much 2D. The bad guys were a favorite villain of modern cinema, the multinational corporation, and they were portrayed without even a hint of conscience. The middle-level bureaucrat at the center of the film seems to have a change of heart, literally and figuratively, but it was pretty heavy-handed, and when all was said and done, the film seemed more like a set-up for a sequel than a self-contained film.


The production values were great, though, and the aliens and their tech were interesting and well-developed. Strangely, though, a major scene from the trailer, where an alien is being interrogated about how his weapons work, was not in the film. Weird. The movie had a number of striking images, and was actually cringe-worthy in a lot of scenes. But great SF is about ideas, and even though there were parallels between the 'prawns', as the aliens are called, and other refugee populations, I didn't see their plight used as much more than a set-up for gross-outs and action. I'm not sure what the point was, and part of this was because the movie didn't really resolve anything.

I've heard people comparing it to the other smaller-budget SF movie that came out at the end of the summer, Moon. There are definitely parallels, especially the use of corporations as the bad guys. But Moon was a much more thoughtful picture, and ultimately a much better film. Still, I'd marginally recommend District 9, if only because there are scenes in it that you simply won't see in any other movie.

Friday, August 14, 2009

Michael Ruse on the New Atheists

Michael Ruse is a philosopher of biology at Florida State University, a self-avowed atheist, and one of those people who think that religion and science work just fine nestled up against each other and that vocal atheism is bad for everybody.

Here's his latest, and it's pretty bad, through and through.

He starts out by trying to establish his cred, rather than actually getting on with his point. When he finally does start in with his actual case, three paragraphs in, here's how he starts:


Which brings me to the point of what I want to say. I find myself in a peculiar position. In the past few years, we have seen the rise and growth of a group that the public sphere has labeled the "new atheists" - people who are aggressively pro-science, especially pro-Darwinism, and violently anti-religion of all kinds, especially Christianity but happy to include Islam and the rest.


Lovely. Notice the word "violently". He already lost me right there. Dawkins, Hitchens, Harris, and Dennett are all peaceful, thoughtful individuals. It's not a good start to use such a word, even metaphorically. Just say "strongly" and avoid the loaded bullshit terminology.

Then he quickly notes the recent campaign by Sam Harris and others against Francis Collins being appointed to head the NIH:


Recently, it has been the newly appointed director of the NIH, Francis Collins, who has been incurring their hatred. Given the man's scientific and managerial credentials - completing the HGP under budget and under time for a start - this is deplorable, if understandable since Collins is a devout Christian.


Oh dear. Look, the case against Collins doesn't begin and end with the fact that he's a devout Christian. Here's Harris' thorough statement on Collins and the case for why he isn't a good choice for director of the NIH.

He then goes on to say that the level of philosophical argument and theological understanding that Richard Dawkins demonstrates in The God Delusion "would fail any introductory philosophy or religion course." But of course, he doesn't provide arguments against anything Dawkins says, or provide links to reviews or essays that do so.

Let's take a quick look at one of Dawkins' central arguments in his book. The Ultimate Boeing 747 Gambit is summarized as follows:


1. One of the greatest challenges to the human intellect, over the centuries, has been to explain how the complex, improbable appearance of design in the universe arises.

2. The natural temptation is to attribute the appearance of design to actual design itself. In the case of a man-made artefact such as a watch, the designer really was an intelligent engineer. It is tempting to apply the same logic to an eye or a wing, a spider or a person.

3. The temptation is a false one, because the designer hypothesis immediately raises the larger problem of who designed the designer. The whole problem we started out with was the problem of explaining statistical improbability. It is obviously no solution to postulate something even more improbable. We need a "crane" not a "skyhook," for only a crane can do the business of working up gradually and plausibly from simplicity to otherwise improbable complexity.

4. The most ingenious and powerful crane so far discovered is Darwinian evolution by natural selection. Darwin and his successors have shown how living creatures, with their spectacular statistical improbability and appearance of design, have evolved by slow, gradual degrees from simple beginnings. We can now safely say that the illusion of design in living creatures is just that – an illusion.

5. We don't yet have an equivalent crane for physics. Some kind of multiverse theory could in principle do for physics the same explanatory work as Darwinism does for biology. This kind of explanation is superficially less satisfying than the biological version of Darwinism, because it makes heavier demands on luck. But the anthropic principle entitles us to postulate far more luck than our limited human intuition is comfortable with.

6. We should not give up hope of a better crane arising in physics, something as powerful as Darwinism is for biology. But even in the absence of a strongly satisfying crane to match the biological one, the relatively weak cranes we have at present are, when abetted by the anthropic principle, self-evidently better than the self-defeating skyhook hypothesis of an intelligent designer.


What are the responses to this basic argument?


Dawkins writes about his attendance at a conference in Cambridge sponsored by the Templeton Foundation, where he challenged the theologians present to respond to the argument that a creator of a universe with such complexity would have to be complex and improbable. According to Dawkins, the strongest response was the objection that he was imposing a scientific epistemology on a question that lies beyond the realm of science. When theologians hold God to be simple, who is a scientist like Dawkins "to dictate to theologians that their God had to be complex?" Dawkins writes that he didn't get the impression that those employing this "evasive" defence were being "wilfully dishonest," but were "defining themselves into an epistemological Safe Zone where rational argument could not reach them because they had declared by fiat that it could not."


This is supposed to be a serious response? And Dawkins would fail a philosophy course?

Well how about from professional philosophers?


Both Alvin Plantinga and Richard Swinburne raise the objection that God is not complex. Swinburne gives two reasons why a God that controls every particle can be simple. First, he writes that a person is not the same as his brain, and he points to split-brain experiments that he has discussed in his previous work, thus he argues that a simple entity like our self can control our brain, which is a very complex thing. Second, he argues that simplicity is a quality that is intrinsic to a hypothesis, and not related to its empirical consequences.

Plantinga writes "So first, according to classical theology, God is simple, not complex. More remarkable, perhaps, is that according to Dawkins' own definition of complexity, God is not complex. According to his definition (set out in The Blind Watchmaker), something is complex if it has parts that are "arranged in a way that is unlikely to have arisen by chance alone." But of course God is a spirit, not a material object at all, and hence has no parts. A fortiori (as philosophers like to say) God doesn't have parts arranged in ways unlikely to have arisen by chance. Therefore, given the definition of complexity Dawkins himself proposes, God is not complex."


You've got to be shitting me, right? A person is not the same as their brain. Okay. Then Swinburne argues that a simple thing like our self can control a complex thing like our brain? He sounds like a standard dualist, and dualism hasn't been taken seriously for several hundred years. And that stuff about simplicity being a quality intrinsic to a hypothesis, unrelated to its empirical consequences, sounds like gobbledygook.

I like Plantinga's response, though. At least it's funny. God is simple because only material things can be complex (I guess an algorithm can't be complex, right?), and god isn't made out of material parts. That's just sweet. Sure, you can claim whatever the hell you want about an imaginary entity. You can claim it's complex or simple, whatever the situation calls for...because you have absolutely zero evidence regarding its nature. This reminds me of how people make all sorts of claims about what god knows and what god feels and what god wants, and then simultaneously claim that he works in mysterious ways and that any aspect of his nature ultimately falls outside the realm of scientific knowledge. Good stuff.

Anyway, there's more, but that's enough. You can go read it yourself if you're feeling masochistic.

Just one more thing about Ruse's article which is a particular nitpick. If you're writing on the internet, and you're talking about other stuff that is readily available on the internet, for fuck's sake, use hyperlinks. That's what they're there for.

Atheist for a Day

Last week a group of over 300 atheists visited the Creation Museum in Kentucky. I had been following the story on PZ Myers' blog (here's his initial report).

But I hadn't seen this, which my sister pointed me to. A Christian went incognito with the group, to see what it was like to be perceived as an atheist.

He says the group he was with was the target of "hateful glances" and exaggerated amens. That doesn't sound that bad, actually. Still, he says he was ashamed of his fellow Christians. There's an ongoing discussion at his blog, if anybody's interested.

Friday, August 7, 2009

Roger Ebert, Knowing, Determinism, and Randomness

I saw Knowing a while back and actually liked it, despite being pretty sure going in that it was going to suck.

Well, Roger Ebert thought the film was brilliant, which isn't that surprising, given that he gushed over Dark City, another SF film directed by Alex Proyas (I thought that one was kind of interesting, but also a bit silly).

Anyway, Ebert blogged about Knowing, and right off the bat he brings up one of the sillier moments of the film:

Is the universe deterministic, or random? Not the first question you'd expect to hear in a thriller, even a great one. But to hear this question posed soon after the opening sequence of "Knowing" gave me a particular thrill. Nicolas Cage plays Koestler, a professor of astrophysics at MIT, and as he toys with a model of the solar system, he asks that question of his students. Deterministic means that if you have a complete understanding of the laws of physics, you can predict with certainty everything that will happen after (for example) the universe is created in the Big Bang. Random means you can't predict anything. "What do you think?" a student asks Koestler, who says, "I think...shit just happens."

No, no, no, no, no.

Both the film and apparently Ebert are making a huge mistake here, confusing what we can know (epistemology) with the way things really are (ontology).

The limits of our ability to find out something about the world are directly relevant to us, and obviously important, but they do not necessarily reflect the way things really are. Take a simple example: a machine holds a pair of fair dice in a shaker. Every thirty seconds, the machine shakes the dice, rolls them onto a felt surface, records the results, and scoops them back up again.

Are the actions of this system random or deterministic? That is, if before a particular roll of the dice we knew their exact position, all the physical properties of the shaker, the algorithm the machine used, the air pressure in the room, etc., would we be able to predict the outcome of the roll (e.g. 4-3)? Sure we would. But that information is extremely difficult to come by, even in a small, controlled situation. Our knowledge about the outcome is limited by variables that are difficult to measure, so we use probability to describe what we can know about the outcome (e.g. there's a 1 in 6 chance of rolling a particular number on each die, and we can figure out distributions of outcomes when we roll both dice, and for successive trials).

So, we can't predict the outcome of this system, at least not to the precision of saying exactly what the outcome will be. We can approximate the outcome by saying which events are likelier than others. Does our knowledge about the system reflect the way the system actually is? Are the dice actually random? Of course not. They and the machine are behaving according to physical laws.

To recap: determinism does not equal predictability, and randomness does not equal unpredictability. Determinism means that what happens could not have happened any other way, whether we are able to predict it or not. Randomness means that there are things in the universe that behave probabilistically rather than behaving lawfully, e.g., a rock might sometimes drift up rather than fall in the presence of earth's gravity. If there are elements of the universe that are truly random, then it is truly impossible for us or anyone else to predict the outcomes of those systems, even in principle. But the definition of each term rests on the way things are, not our ability to measure them.
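A programming analogy (mine, not Ebert's or the film's): a seeded pseudorandom number generator. Its output is completely determined by the seed, yet to anyone who doesn't know the seed it is, for all practical purposes, unpredictable:

```java
import java.util.Random;

// A toy illustration: a seeded PRNG is fully deterministic, yet its
// output is unpredictable to anyone who doesn't know the seed.
// Predictability is about what we know, not about how the system behaves.
public class DiceMachine {
    public static void main(String[] args) {
        long seed = 42L; // the "exact physical state" of the machine

        // Two observers with complete knowledge (the same seed)
        // predict every roll perfectly.
        Random machine = new Random(seed);
        Random laplacesDemon = new Random(seed);

        for (int roll = 0; roll < 5; roll++) {
            int die1 = machine.nextInt(6) + 1;
            int die2 = machine.nextInt(6) + 1;
            int predicted1 = laplacesDemon.nextInt(6) + 1;
            int predicted2 = laplacesDemon.nextInt(6) + 1;
            System.out.printf("rolled %d-%d, predicted %d-%d%n",
                    die1, die2, predicted1, predicted2);
        }
        // Without the seed, the best an observer can do is statistics:
        // each face has probability 1/6. The dice haven't become random;
        // the observer is just ignorant of the state.
    }
}
```

The dice machine is in exactly the same boat: lawful all the way down, and unpredictable only because we can't measure its state.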

Thursday, July 30, 2009

Demo of Relativia

Here's a short video of my progress on my entry to the Android Developer Challenge II, a casual mobile puzzle role-playing game called Relativia:



The hard deadline is the end of August. I'm not sure how many more features I'm going to be able to get into the game by then, but we'll see. I just got most of the map stuff working this week using Philip's open-sourced code.

Monday, July 27, 2009

Is the World Hierarchical?

I was chatting with a friend the other day about how we decide what things are and how we parse the world. I made the statement that the world is hierarchically-arranged, and my friend said that the world might not really be that way, but that I could be imposing that type of organization on it. It's certainly a strong possibility that my biases determine how I organize how I think about the world around me.

But I'll be damned if I can step outside of my frame of reference and conceive of another hypothetical way of looking at the world. We normally form concepts of things that exhibit spatial and temporal continuity. We treat a dog as a unified thing because there's stuff that makes up the dog that is close to itself in space and moves together across time. Same for chairs and glasses and pumpkins and even more abstract things. But it also seems obvious that however you slice and dice the world, whatever you call "things" are always going to be composed of constituent things. And those things are likewise composed of constituent things. A car is made of things like mufflers and wheels and axles, and those are made of smaller parts, and those are eventually made of atoms, and those are made of subatomic particles, and those are maybe made out of even smaller things.

I suppose you could have some sort of Buddhist view where there are no parts or subparts...that everything is just one big unified, interconnected thing. But I don't think you'll get very far in understanding how the world works if you don't partition it in some way and try to figure out how the parts work together to produce the phenomenon you want to understand. Hence the usefulness of reductionism.

And my guess is that if there is intelligent life elsewhere in the universe, to make headway in understanding how the world works, they view it hierarchically, try to determine the best way to segment it into parts, and try to figure out how those parts work together. Maybe I'm just myopic, but I can't even conceive of another way in which they might go about understanding the world.

So I don't think I'm imposing some kind of structure on the world. I think the world really does have that structure, and that our brains have evolved to learn to exploit that inherent structure in order to survive better.

Sunday, July 26, 2009

Harry Potter and the Half-Comprehensible Script

Went to see the newest Harry Potter film yesterday. Yikes, it was bad. I remember sort of liking the last one, even though I can't remember much about it. But this one was long, boring, goofy, and incomprehensible.

NOTE TO MOVIE-MAKERS: A film should not be a supplement to the book upon which it is based. It should be a stand-alone story.

A good percentage of the movie-goers will have read all the books, but for the rest of us, the experience is, how shall we say...less than pleasant.

Case in point: The opening scene of the movie. Three black smoke trails fly over London. Muggles look up in amazed confusion. The three trails fly into a back alley and into the magical world, nab somebody whose face isn't seen (there's a bag over his/her head), then fly back out, destroy a bridge (killing lots of Muggles), and fly off.

Boy, I can't wait to find out what the hell that was all about, I thought. Seemed like a pretty good opening. Only...it was never explained what the hell it was all about. After the movie was over, I asked my fellow movie-goers, all of whom had read the books. "Oh, those were minions of Voldemort kidnapping the wand-maker. There's some difference between Harry Potter's wand and Voldemort's wand that Voldemort can't figure out, so he wants to interrogate the wand-maker. That doesn't get explained until the last book." WTF? Now, there is a scene where Harry, Ron, and Hermione walk by the wand-maker's shop and note that he's out of business, but so are 80% of the other businesses, so there's no reason for someone who hasn't read the books to make the connection.

How about this...for brevity and continuity's sake, put that scene in the next Harry Potter movie.

That's just an example from the first scene. It doesn't get much better from there. The movie is filled with lots and lots of silly teen romance stuff...a little of this goes a looooong way.

One of the appeals of Harry Potter is supposedly getting a sense of wonder at seeing things we've never seen. At this point, we've seen Quidditch. We've seen floating candles in the cafeteria. We've seen nearly all the tropes there are to see, so the world just seems boring now. The only mythical beast we see this time is a giant spider, and it's dead.

So the movie is basically incomprehensible, full of silly teen romance stuff, and flat and boring.

Spoilers after the gap...



















And the part of the plot that did seem to fit together didn't make any damned sense.

If I were to summarize the main plot of the movie...Draco Malfoy is recruited by the bad guys to assassinate Dumbledore. Why? It's not said. Sure, they hate Dumbledore because he's good and they're bad, but why now? Do they think he's getting too close to figuring out how to finally put away Voldemort? If so, that's pretty damned subtle, and this is supposed to be accessible to kids, isn't it?

Anyway, here's their plan, I guess: Get Draco to fix a broken vanishing cabinet in some storeroom of the school. Whether he brought the cabinet or it was already there is unclear. We see him putting in an apple, taking it out with a bite out of it, messing with birds, etc., but it's never clear that he's "fixing" it. Whatever. When it does finally work, it's supposed to be a path from another cabinet outside the school that lets in three of the bad guys. Why? Malfoy is supposed to kill Dumbledore, and if he fails, Snape has taken some super badass oath that he will do it himself. Why do we need all this bullshit with the cabinet? Are the three baddies just there for moral support?

Meanwhile, Harry and Dumbledore figure out that the reason Voldemort is so damned hard to get rid of is that he's divided his soul into 7 parts and hidden them in 7 objects, thereby making him invincible unless they're all destroyed. Okay. Dumbledore waves around a burned diary, which is supposedly one of the 7, and takes Harry to find another one, which they get, but which turns out to be a fake, swapped by some other mysterious figure. So the movie ends with Harry and friends dropping out of school to go look for the rest of these things, though how many are left is never said.

At this point, you could say, "Oh, you're not supposed to analyze the story that much, just enjoy all the cool fantasy stuff." Only, there isn't any.

So I'd tell you to avoid it for the stupid mess it is, but if you're a fan you're going to see it anyway, and the damned thing will still rake in truckloads of money. Sigh.

Friday, July 24, 2009

Narratives and Timelines

On the way back from Texas, I started listening to The Hour I First Believed by Wally Lamb. I was a little put off by the cheesy title, but I gave it a chance, and it did a decent job of hooking me. I hadn't read any of Lamb's stuff before, but he's a good writer.

There's only one problem, and it's making me lose interest in the book, even though I'm now into the third disk on audio. And that's how he handles time.

I think flashbacks are fine if used sparingly, or if the bulk of a story is a flashback and there's really not much to the narrative in the "present" of the story (e.g. an old man recounting his life story to a journalist). But I think there are real problems with a narrative structure in which the reader is interested in the forward progression of the story in the "present", but keeps getting flung back into repeated, extended flashbacks.

That's the way this book is. The story is ostensibly about a high school teacher who taught at Columbine High School when the massacre took place. The story starts on the Friday a few days before the massacre, but so far the bulk of the narrative has taken place in the past, relating the main character's marital problems, his attempt to befriend and rehabilitate a screwed-up female student, and, in the section I'm currently on, stories from the main character's childhood about his family's corn maze.

I remember reading Stephen King's Dark Tower series and liking Wizard and Glass the least, mostly because the book was one giant flashback. I was interested in seeing forward progression in the present-day quest, not getting a bunch of back story. So I read it very impatiently. Several years later, when I read the series again, I enjoyed W&G a lot more, mostly because there wasn't the urgency of seeing how the main storyline played out.

In general I think this kind of structure is a mistake. There are clever ways to fill in backstory, which is important for any story. But the bulk of the narrative should take place in the time frame in which your primary story is set. Otherwise the reader feels like they're taking one step forward and three steps back. I can't think of a work where this kind of structure worked very well. If any of you can, please share.

Thursday, July 23, 2009

Moon

I just got back from visiting family in Texas, and while we were there I got to see the new movie Moon, which isn't showing here in Lafayette.

It's hard to talk about the film without spoiling plot elements, so I'll be nebulous above the poster pic and discuss the film with spoilers below it. Overall the movie is a very good sci-fi pic, which apparently is difficult to do since most of them end up sucking pretty hard.

From the preview, I thought the film would be a retread of a lot of previous films which explore the common themes of solitude, loneliness, and insanity in the isolation of space. The film did explore some of those ideas, and borrowed heavily from some venerable sci-fi source material, but it managed to make the mix original. Good art generally either makes you think or evokes some strong emotion. Moon does both of those, and does them well. My one complaint is that the ending feels rushed and slapped together, and not in proportion to the quality of the rest of the film. But all in all I give it a strong recommendation.

If you have already seen it, or don't care about spoilers, there's more below the poster.




As I said above, the film borrows pretty heavily from previous films, most notably Blade Runner, 2001, and the Alien series. There are clones with implanted memories who don't know they're clones with implanted memories. But Moon actually manages to make us care about the character(s), which is all too uncommon in SF with strong ideas.

The biggest plus in my book is the fact that the movie featured a near-human-level AI that not only didn't turn out to be completely malfunctioning or evil, but was developed into a full-blown character whose motivations were never entirely made clear (like most good characters). Did he help the various Sams because that was a priority in his programming? Did he understand the ethical horror that the clones were being put through and actually feel compelled to help put an end to it? We don't know...and that's great. Of course, the bad guy was a big, evil energy corporation, but it was at least refreshing to see an AI treated with some level of complexity and actually developed as a character.

Now, the ending...why did the cleaners leave the dead Sam in the rover? What good was it going to do to knock out the jammer? Did that happen after the cleaners left? If not, wouldn't they just fix it? If so, wouldn't the corporation still be aware of it? And what exactly was the point...it seemed like the whole situation was exposed once the Sam clone made it back to Earth.

Another big issue was the life spans of the clones. It was strongly insinuated in the film that, as a failsafe, the clones only had a lifespan of about three years. The Sam we begin the film with starts to fall apart and get extremely sick: coughing and spewing up blood, losing teeth, etc. He sees video of previous Sams getting sick near the end of their contracts, losing hair, coughing, etc. It was never said directly in the film, but if that's the case, why didn't the older Sam warn the younger one that he only had three years to live?

Other questions...were the hallucinations at the beginning of his daughter Eve? If so, what brought them on? Was this some sort of signal from a previous Sam, or just a coincidence that he happened to be hallucinating about his daughter being grown up, even though he thought she was still an infant?

Anyway, a good film doesn't answer all its questions...it takes some thinking about and ultimately has multiple interpretations. I just wish the thought and care that seemed to go into the rest of the film had been put into the ending, which really felt tacked on.

Tuesday, July 21, 2009

The Daily Show on the Anniversary of the Moon Landings

Last night's The Daily Show opened with a segment about the 40th anniversary of the moon landings. As usual, the humor all centered around belittling the accomplishment. Jon Stewart said we spent billions of dollars and astronauts' lives to "hit a golf ball on the moon", ride around in a buggy, and leave it covered with junk like the guy in your neighborhood whose crap is all over his front yard.

Nice.

The implication is that the moon landings were a trivial waste of lives and money. Presumably the writers would be fine with us sitting here on earth in the year 2009 never having set foot on any other place in our solar system. How forward thinking of them.

To devalue the accomplishment of putting a living human being on the moon, safely returning them, and repeating the act, and not only to devalue it, but to sneer at it...well, frankly I think it's repulsive.

The moon landings were a highlight of human civilization, a testament to our curiosity and ingenuity.

So on behalf of everyone who worked so hard to make it happen, I'd just like to give a hearty "fuck you" to the writers of The Daily Show.

Friday, July 17, 2009

Why Should We Care About What Other People Believe?

There's been a big dustup between atheist blogger PZ Myers and the authors of a new book called Unscientific America, Chris Mooney and Sheril Kirshenbaum, which started around the time Myers posted his first comments about the book. At the core of the dispute is a philosophical difference about how scientists who are also atheists should speak and behave with regard to religious believers.

Mooney and Kirshenbaum apparently think that blogs like Pharyngula and books like The God Delusion harm the public perception of science, and thus scientific literacy, by strongly linking science to atheism and disparagement of religion, thereby alienating the general public. Myers obviously disagrees.

And check out some of the comments to this Daniel Dennett editorial in The Guardian. Commenters quickly label Dennett a "militant atheist", calling him "irritating" and "intolerant". The implication is that he should shut the hell up about his atheism and leave people to their own beliefs, whatever those might be.

But here's where I'll go ahead and agree with those who compare the proselytizing of religious folk with that of the atheists. It's perfectly understandable to try to change someone else's mind, as long as it's with words.

Time to assume each side's point of view for a little thought experiment...

If you were a devout religious adherent who believed fervently in a heaven and hell, and also believed that those who didn't believe as you do would suffer an eternity of torment, what would be your most humane course of action? It would be negligent of you not to try to sway others to believe the same as you. So, if you care at all about the suffering of others, and you believe others will suffer forever if they don't accept your beliefs, the perfectly sensible course of action is to attempt to convert them to your belief system.

On the other side, let's say you're a skeptical unbeliever. Let's say you live in a society where the majority of people believe that three magical dragons created and control the universe. There are sacred books that detail the history of the dragons and their teachings, some of which seem a little outdated and have been used to justify pretty horrible acts, while others convey some nice messages about how to treat people. Now, one could argue that even though you think the dragon worship is unsupported by any reasonable standard of evidence or common sense, you should simply go about your business believing what you believe, while leaving others to their beliefs. But what if your society was a democratic republic, and every public official was a dragon worshiper? And public policy was decided on the basis of dragon worship? And what was taught in schools, where your tax money was allocated, and decisions of foreign policy were all guided and influenced by belief in magic dragons?

The point here is that no one is an island. What I believe affects my neighbors and what they believe affects me. If you lived in a community that strongly believed in witchcraft and you happened to be an older woman who lived alone, and witches were being accused left and right and being burned alive in the town square, then of course you would care what your neighbors believed. This is an extreme example, but illustrates the basic concept. What others in your society believe affects you.

In this light, doesn't it make sense that an atheist would try to convince those around them to be more skeptical and discerning regarding their beliefs?

And if the proponents of a particular religion do have a monopoly on the truth, then what do they have to fear from a book here or there that's critical of their beliefs? As for the stances particular scientists take on religion, they should be able to say whatever they want. I understand strategic PR, but I value it much less than I value the truth. And I believe the best way to get at the truth is to have an open marketplace of voices and ideas, all free to say what they will, and let people think and sort out what the best ideas might be.

Thursday, July 16, 2009

More Relativia Screenshots

Still plugging away at my game entry for the Android Developer Challenge II. Here are a few more screenshots to show how things are proceeding.

This is the character creation screen. You enter a name and select one of four species and one of four classes (so there are 16 possible combinations). The stats vary depending on the species/class combination. I've hired an artist named Pat Marconett for the characters and backgrounds, and I'm really happy with how it's coming out.
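For the curious, the stat generation is dead simple. This isn't the actual game code, and the species/class names and numbers below are placeholders rather than the real ones, but it shows the basic idea: the species provides base stats, the class provides modifiers, and the 4x4 combinations give 16 distinct starting builds:

```java
// Placeholder names and numbers, not Relativia's real ones.
enum Species {
    // base health, attack, mana
    BRUTE(35, 6, 5), SPRITE(20, 3, 20), WANDERER(30, 5, 10), SCHOLAR(25, 4, 15);

    final int health, attack, mana;
    Species(int health, int attack, int mana) {
        this.health = health; this.attack = attack; this.mana = mana;
    }
}

enum PlayerClass {
    // modifiers applied on top of the species base stats
    WARRIOR(10, 3, -5), MAGE(-5, -1, 10), ROGUE(0, 2, 0), CLERIC(5, 0, 5);

    final int healthMod, attackMod, manaMod;
    PlayerClass(int h, int a, int m) {
        this.healthMod = h; this.attackMod = a; this.manaMod = m;
    }
}

class Hero {
    final String name;
    final int health, attack, mana;

    // Combine species base stats with class modifiers at creation time.
    Hero(String name, Species s, PlayerClass c) {
        this.name = name;
        this.health = s.health + c.healthMod;
        this.attack = s.attack + c.attackMod;
        this.mana = s.mana + c.manaMod;
    }
}
```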




This is the screen that prompts you when you're near a dungeon:



And here's what it looks like when you enter the dungeon. The little purple icon is you. You just tap on an adjacent chamber to enter and engage in combat with whatever lurks there.



I've run into a little setback that I'll discuss more fully after the competition is over. I'm still optimistic that the game will be ready by the end of August, which is the hard submission deadline.

But I'm also planning on entering a second app that's related but distinct from one of my previous apps. That's allowed by the guidelines, which Google just posted in full this week here.

Friday, July 3, 2009

Relativia Gameplay

I'm working on a game for the Android Developer Challenge II. I've hired an artist to work on character, background, and item art. I've got the basics of the combat system worked out, and I thought I'd share a short video demonstrating how gameplay will work.

The player is on the left side, the enemy on the right. Each turn, a player can use one action (an attack or spell, if they have enough of the right kind of energy) and they may drop one token into the playing grid. The game is very much like Connect 4. If a player matches 3 or more in a row of a given token type: gems (square), mana (round), or skulls, those tokens are removed from the grid. If gems are matched, the player gets gem dust, which is used to purchase items in markets. If mana is matched, the player gets energy corresponding with that mana type (blue, orange, green, or purple). If skulls are matched, damage is done directly to one's opponent. Actions can either cause damage to one's opponent, heal the player, or have some other effect (like gaining an extra turn). A given battle ends when one player reaches zero health points.
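For the curious, here's a simplified sketch of the matching logic in Java. It's not the actual game code (I've collapsed the four mana colors into a single token type and left out token placement and combat), but the match-and-clear rule is the same idea:

```java
// A stripped-down version of the 3-in-a-row matching rule. Token types
// are simplified; the real game distinguishes four mana colors.
enum Token { GEM, MANA, SKULL, EMPTY }

public class MatchFinder {
    // Returns true if the cell at (row, col) is part of a horizontal or
    // vertical run of 3 or more identical tokens.
    static boolean inMatch(Token[][] grid, int row, int col) {
        if (grid[row][col] == Token.EMPTY) return false;
        return runLength(grid, row, col, 0, 1) >= 3   // horizontal
            || runLength(grid, row, col, 1, 0) >= 3;  // vertical
    }

    // Counts the contiguous run of grid[row][col]'s token type along the
    // direction (dr, dc), scanning both ways out from the cell.
    static int runLength(Token[][] grid, int row, int col, int dr, int dc) {
        Token t = grid[row][col];
        int count = 1;
        for (int r = row + dr, c = col + dc;
             r >= 0 && r < grid.length && c >= 0 && c < grid[0].length
                 && grid[r][c] == t;
             r += dr, c += dc) count++;
        for (int r = row - dr, c = col - dc;
             r >= 0 && r < grid.length && c >= 0 && c < grid[0].length
                 && grid[r][c] == t;
             r -= dr, c -= dc) count++;
        return count;
    }

    // Mark all matched cells first, then clear them in one sweep, so that
    // overlapping runs are handled consistently.
    static void clearMatches(Token[][] grid) {
        boolean[][] matched = new boolean[grid.length][grid[0].length];
        for (int r = 0; r < grid.length; r++)
            for (int c = 0; c < grid[0].length; c++)
                matched[r][c] = inMatch(grid, r, c);
        for (int r = 0; r < grid.length; r++)
            for (int c = 0; c < grid[0].length; c++)
                if (matched[r][c]) grid[r][c] = Token.EMPTY;
    }
}
```

The caller then converts whatever clearMatches() removed into gem dust, energy, or direct damage, per the rules above.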



I'd like to add more polish (e.g., feedback effects for matches, smoother animations), but the deadline is about six weeks away and I'm rushing just to get the basics implemented. I'm optimistic about the progress, but a bit worried about getting it into good shape for the contest. We'll see how it goes.

Reductionism

Yesterday I picked up Melanie Mitchell's new book Complexity: A Guided Tour. I had previously read her excellent primer on genetic algorithms, and this new book looked very interesting.

Though she's an excellent writer, I'm already a little disappointed in the book. For example, her first chapter is entitled What is Complexity?, but she then goes on to ignore the question and instead give lots of examples of complex systems. Chapter 7 is called Defining and Measuring Complexity and would probably have made a better start to the book, since it actually attempts to lay out what the concept means and how difficult it is to find a consensus definition among the people who study it.

But what made me even more disgruntled right off the bat is her assertion in the preface that reductionism is passé, or worse, dead:

But twentieth-century science was also marked by the demise of the reductionist dream. In spite of its great successes explaining the very large and very small, fundamental physics, and more generally, scientific reductionism, have been notably mute in explaining the complex phenomena closest to our human-scale concerns.

Now look...I'm a reductionist, and as far as I'm concerned, so is every other working scientist. That's why I get a bit peeved when I see reductionism mischaracterized as an outmoded approach that was good for studying classical problems, but a miserable failure for, you know, really complicated stuff.

Here's all reductionism is: Trying to understand a system by understanding its parts and how they work together. That's it. And guess what? That's a wholly sensible approach that works amazingly well.

Reductionism often gets propped up as a straw man and ridiculed for trying to understand a system at one scale in terms of parts at a much lower scale. For example, someone might say "It's ridiculous to try to understand an opera in terms of acoustical dynamics!" or "It's silly to try to explain the migratory patterns of birds in terms of subatomic particles!"

Hey, I agree! Such approaches are stupid. And that's not reductionism. And it doesn't work. The way reductionism bears fruit is by trying to understand a system in terms of its parts at the appropriate lower level of description. Richard Dawkins calls this hierarchical reductionism.

For example, if you want to explain how a car works, describing its function in terms of pistons and axles is going to yield far better results than describing its function at the level of atoms. If you skip too many levels of description between the parts and the whole, your explanation is simply going to suck.

Now, for a working scientist it's often difficult to determine what the appropriate level of description for the parts should be. But what, exactly, is the alternative to such an approach? I've heard plenty of people knock their own caricature of reductionism. But I have yet to hear a proposal for how you go about trying to understand a system without understanding how its elements interact. How do you "holistically" study or explain how a system works? Some of the early examples Mitchell gives of complex systems are ant colonies, human brains, and economic systems. She's correct that such systems of interacting elements can give rise to amazingly complex behavior. But I honestly don't see how we can go about trying to understand that behavior without examining the behavior of the constituent elements...which is reductionism.

I'm interested to read the rest of the book and see where it goes, but as far as I'm concerned she's already gotten off on the wrong foot.

Wednesday, July 1, 2009

Transformers 2 FAQ

I haven't seen Transformers 2, and most likely won't. I enjoyed this FAQ of the film probably far more than I would enjoy watching the movie. I liked this bonus question in particular:
So it's not as bad as shitting your pants?
Marginally. I honestly had to make a pro and con list to figure it out.

Saturday, June 27, 2009

The Hangover

Went to see the movie The Hangover today. The movie-going experience was marred by a sold-out theater. The movie's been out for at least a couple of weeks, and we went at 2:30 in the afternoon, and I don't remember ever going to a movie that sold out in Lafayette...what the hell? Anyway, there were a couple of particularly annoying audience members. One woman to my left howled and squealed in exaggerated laughter at everything that happened on-screen. I'm glad she was having a good time, but screeching at every phrase and gesture in the movie is a bit much. I think the woman was either drunk or had a chemical imbalance.

The second big annoyance was sitting right in front of me: one of those people who feels the need to say whatever happens to be going through her head at any given moment, which happens to be not a whole lot. Mostly it was just stating what was on the screen. When the characters in the movie wake up and we see a chicken in their hotel room, the genius in front of me said "It's a chicken." Guess what she said when the tiger was on-screen? This went on pretty much through the whole movie.

Oh yeah, how was the movie? It was all right, but definitely not worth packing the cineplex in the middle of the afternoon. Mostly the humor went for the lowest common denominator and ended up hitting it. We got copious helpings of full-frontal male nudity, and ass, and pedophilia jokes, and vomiting. And you know, there's nothing funnier than a baby getting hit with a car door. That's not to say there weren't a few clever bits, but for the most part the humor was pitched at the level of your average 7th-grader. If you find an old man getting a physical check-up inherently funny, this is the movie for you. Apparently it was also the movie for a lot of other people, because like I said, the theater was packed and the howler monkey to my left wasn't the only one enjoying the show.

That's one thing I really miss about Japan: the audience members in movies were blissfully silent. Here, everyone treats the theater like their living room. I hope that if there is a hell, there's a special place in it for the chick sitting in front of me today. And when she gets there, she'll probably be placed front and center so she can contribute to the suffering by saying stuff like "It's hot in here."

Sunday, June 21, 2009

Windows 7 Sleep Nightmare

So I went through the horrible ordeal of trying to build my own PC and having the motherboard fry out on me, so I ordered a pre-built system.

In my continuing hubris, I decided to install the Windows 7 release candidate, mostly because I'd heard it was very good, and that Vista sucks. So I got my new machine on Friday night and spent most of yesterday installing new software and configuring the machine to my liking. Until today, I'd been very pleased with Windows 7.

However, there was a small problem that turned into a very large one. My computer is in the same room where I sleep, so I like to have the monitor either power down or go to a blank screen saver when I'm not using it. Sounds easy enough, but no matter what settings I used, the monitor would never power off or go to a blank screen saver. I read some posts in various forums saying this was a problem with Vista not filtering input from optical mice (basically, the system thinks you're still using the mouse, so it never shuts the monitor off). There's a patch for Vista, but so far nothing that seems to work for Windows 7. Still, not a big deal.

I did notice the "Sleep" function in the Start menu and thought that might be a good thing to use. I could put the PC to sleep and it would quickly reboot each morning. So I put it to sleep. And guess what, friends and neighbors? The motherfucker wouldn't wake up. I pushed the power button, and the keyboard would light up, but it acted like it was still sleeping. I powered it completely off and then back on. Same deal. I unplugged the machine and tried again...nada.

This was about 5 hours ago. I was pretty upset, because I didn't want to have to return any more hardware to NewEgg and get a new machine. I tried Gateway's customer service. That was a huge freaking mistake. Both their chat and phone reps told me that I had to register my machine before they could assist me. Sounds easy, right? After all, I've got my warranty, the serial number, the SNID, and shitloads of paperwork on the thing. I've even got a piece of paper in the box that says "Register your computer online at www.gateway.com/register. It's quick and easy." Yeah, okay. But when I tried to register online, it tells me that since I don't have a 20-digit serial number, I'll have to register either by phone or chat. Guess what the tech support reps told me? That I'd have to fax or mail a proof of purchase to Gateway and wait 48 hours for processing, then call them back. WTF?

I asked the rep on the phone exactly why I couldn't register right then with him...I had all the information. He said it was because the computer was manufactured in June of 2008, and because of the time gap between shipping to the retailer and the purchase, Gateway had a policy of requiring a proof of purchase. Huh? Does that make any sense whatsoever? It shouldn't matter what the gap between manufacture and purchase is. All that should matter is that I have evidence that I purchased the machine, evidence they handed me at purchase precisely so I could quickly and easily verify it. This isn't a fucking box of cereal, people. So I'm not happy with them.

I was ready to call NewEgg and just replace the stupid machine, but their customer support isn't open on Sundays, so I decided to wait until tomorrow morning. In the meantime, I figured I'd research the problem a bit more.

Thankfully, I came across this Gizmodo post.


Win 7 Tip: Sleep/Hibernate Mode Is Buggy, May Incapacitate Your Machine

When I came home last night, I thought my previously healthy Windows 7 machine was dead. It was making a horrendous squeal and refused to reboot multiple times. Turns out it was asleep.

I'm not sure what kind of sleep it was in (I was only gone for 6 hours and I've left it alone for half a day before and it was fine), but a regular reboot refused to restart it. So I did that ten times in a row, before giving up. I had to pull out the power cable (it's a desktop) and let the motherboard's lights go off and battery drain out. After this, it was able to correctly boot up again to a "Resuming Windows" screen, which then didn't respond to any keyboard/mouse inputs, so I had to reset again.

It's not like the sleep mode in previous Windows versions worked perfectly, but the manufacturer usually tests it once or twice to make sure that it's compatible enough that you don't have to jump through crazy hoops to re-enable your system. So our hint is to disable sleep/hibernate/power save mode on your system, in case it's incompatible, for now to save yourself headaches later.

And yes, it's a beta, so we're hoping compatibility gets fixed by release time.


Now, I had unplugged the machine, but only for a few seconds. This time I unplugged it for about half an hour, then tried again. It did exactly what the post said: it attempted to resume Windows on the first boot, stalled again, then properly booted on the second attempt.

Holy shit, people. What a noxious bug. It locks you utterly and completely out of your system. You can't boot into the BIOS. You can't boot from CD. You can't do shit because the system never properly shuts down and so stays forever in sleep mode.

That was scary, let me tell you. I would have been irritated by having to reinstall the operating system, but when you can't even do that, things are looking really bad. So now at least I know. I disabled all sleep functions in the power management settings, and I'll never manually use it again.

And this was just when I was planning on writing a post about the coolness of Windows 7. I was very pleased with it until then. It's fast and slick. I really like the way it handles the layout of screens, the toolbar, and the desktop view. But now I'm a little afraid of it. Mostly, I'm just glad I was able to boot back into my machine.

Friday, June 19, 2009

Back From Atlanta

I'm back from the IJCNN in Atlanta, where I presented my paper "Sequential Hierarchical Recruitment Learning in a Network of Spiking Neurons". Sounds like a barrel of monkeys, don't it?

Even though I presented on the last day, attendance at the session on spiking neural networks was good. Eugene Izhikevich was in the audience, but didn't say or react much to the talks. Incidentally, his talk on large-scale brain models was very nice. I've been increasingly skeptical of the approach of building enormous models when we have such a tenuous grasp of how small, local circuits in the brain work, but he made a very good case. I now see the usefulness of large-scale models for studying global phenomena, and of simply having a model of that magnitude available to tweak and study. Hopefully the large-scale and small-scale models will one day tie all the theory together in one nice, coherent bundle.

John Hopfield's talk was also a highlight. The theme was basically that you want hardware that fits the type of algorithm you need to run, and that evolution leads to exactly this kind of efficient coupling. Thus, if we want to understand the algorithms of the brain, we need to pay close attention to the kinds of operations that neurons carry out very well. His conclusion was that understanding the synchronous operations of populations of neurons is key to understanding how they learn and process information. I wholeheartedly agree. :)

Another highlight was a 3-hour tour of some neuroscience labs at Emory. I got to see live recordings from the network that controls the involuntary "swallowing" in crabs and lobsters. I got to see how brain slices are made from rats and mice (first you drug and decapitate the animal, then you slice the brain with a razor blade fixed in a rapidly oscillating machine). Another group was studying a group of neurons in leeches that control their heartbeat. Another was monitoring cells in awake, alert mice, studying how cells in the auditory cortex respond differently to the sounds of mouse pups depending on whether or not the mother gave birth to them. And yet another group was monitoring the activity of cells in a rat's hippocampus as it explored novel objects.

Just as with the Boston conference, I'm exhausted, though. Lots of information to assimilate, so time to fall into bed and sleep.

Tuesday, June 16, 2009

The IJCNN in Atlanta

I'm currently attending the International Joint Conference on Neural Networks in Atlanta. It's my first time at this particular conference. For any given conference, I typically expect 20-30% of the content to be relatively engaging and relevant to what I'm studying. In this case, that number is a bit lower. The plenary talks have been decent, but the sessions and posters haven't offered me much of interest. And since there's a serious engineering contingent here, some talks are simply slide after slide of equations, which I don't get much out of.

There's a talk this afternoon on large-scale brain simulations...hopefully that will be interesting. And then, I give a talk on Thursday morning. And Thursday evening there's a tour of "wet" neural labs at Emory, i.e. we're gonna tour labs where people work with real brains.

So more later...

Saturday, June 13, 2009

Polyclef Blog

A lot of indie software developers use their blogs to promote their products and keep in touch with other devs and sometimes customers. I thought it would be a good idea to bifurcate this blog into my personal/school-related stuff, which I'll keep here, and the Android/game development stuff, which is at a new blog here:

http://polyclefsoftware.blogspot.com/

Building Your Own PC Adventures

So I put an end to the little adventure of trying to build my own PC. I don't think you're really going to save much (if any) money. You can boost your geek cred, but pre-built computers with great specs are cheap these days, and my experience was pretty much a nightmare.

A friend who said he would help me put it together was unable to get it to boot into the BIOS, so we started swapping individual components (e.g., PSU, RAM, video card) to try to isolate the problem. After swapping in my friend's PSU, the motherboard started to smoke. Nice, huh?

Having had enough, I called NewEgg customer service and told them the whole sordid story. They let me return all the parts and they paid for the shipping, as long as I ordered a pre-built system from them, which I did. Hopefully they'll issue a full refund on the returned parts upon receiving them and my new machine will work when I plug it in.

Anyway, I learned a valuable lesson. To wit: I am not a hardware dude. One of the advantages of human culture is the whole division-of-labor thing, so I'll let someone in a Taiwanese sweatshop who does it for 16 hours a day put my PC together, while I work on the software side of things.

Tuesday, June 9, 2009

Why You Should Not Try to Build Your Own PC: Part II

Because when you finally get all the right parts and you put it together and turn it on, it just clicks, like the teeth of a dead man's skull clicking together in laughter at you.

My Favorite Games: Robot Odyssey

A while back I picked up the book Game Design Workshop, which is quite good. One of the most interesting features of the book is the set of interviews with professionals in the industry. Most of the time the interviewers ask what the subjects' favorite games are, and it's interesting to hear their answers.

I thought I'd answer the question myself. Of course I love to play games...doesn't everyone? So it's no surprise that I've gotten sucked into developing them as a sideline.

Originally I planned to write a single post covering all of my favorite games, especially ones that have stuck in my mind over the years, ones that I've played for decades, and ones that I don't play anymore but which had a big impact on my life. But that turned out to be just too damned long. So this is the first in a series of posts about my favorite games.

Here's the list:

Card/Board Games:
Bridge
Poker
Spades
Hearts
Crazy Eights
Dominoes
Life
Monopoly
Scrabble
Yahtzee
Chess
Go
Pente
Backgammon
Settlers of Catan
Citadels
Risk

Collectible Card Games:
Magic: The Gathering

Video Games:
Pong
Adventure
Space Invaders
Robot Odyssey
World of Warcraft
Myst
Portal
Half-Life
Warcraft/Starcraft
Diablo
Peggle and Peggle Nights
Bookworm Adventures
Puzzle Quest
Zelda: Twilight Princess

Text-based computer games:
Zork I, II, and III
Enchanter, Sorcerer, and Spellbreaker
Hitchhiker's Guide to the Galaxy

Arcade Games:
Galaga
Xevious
Dig Dug
Asteroids
Tempest
Gyruss

Again, this isn't meant to be some comprehensive list of games. These are games that influenced me in some way. I'm not sure I'll get to all of them, but we'll see.

The particular video game I'm going to inaugurate this series with is one that's stuck in my head for decades: Robot Odyssey. The graphics were pretty bad, even for the time period (the early '80s), and I didn't even own the game. I ended up pulling all-nighters with a friend of mine who owned the Apple II that ran it.

What makes this game so memorable? Well, the basic setup is that you're a person in an underground city trying to get home. To do so, you have to solve a series of puzzles. The catch is that most of these puzzles have to be solved by programming robots to do the task for you. You did this by actually entering the robots and wiring up their various sensors and thrusters with logic circuits (you had a little toolbox of these). For example, you might wire the robot's right bumper to its bottom thruster, so that if it hits the right wall, it goes up. Puzzles usually involved having robots navigate simple maze configurations and fetch items for you. I may be making it sound dry, but it was amazingly fun and addictive.
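In modern terms, the wiring amounted to building little boolean circuits from sensor inputs to actuator outputs. Here's a toy sketch of the idea in Java; this is my own reconstruction for illustration, with invented names, and obviously nothing like the game's actual internals:

    // Toy version of Robot Odyssey-style wiring: each tick, boolean
    // sensor readings flow through "wires" and logic gates into
    // thruster outputs. All names are invented for illustration.
    class WiredRobot {
        // sensors
        boolean bumperLeft, bumperRight, bumperTop, bumperBottom;
        // thrusters, named by where they sit on the robot's body;
        // firing the bottom thruster pushes the robot up, and so on
        boolean thrusterBottom, thrusterTop, thrusterLeft, thrusterRight;

        void tick() {
            thrusterBottom = bumperRight;                // the example above: hit the right wall, go up
            thrusterLeft   = bumperLeft;                 // hit the left wall, get pushed right
            thrusterTop    = bumperTop && !bumperLeft;   // an AND/NOT gate spliced into the wire
            thrusterRight  = bumperBottom || bumperTop;  // an OR gate spliced into the wire
        }
    }

And the circuits could get arbitrarily deep, since the toolbox also included flip-flops, so you could build clocks and whole little state machines inside a robot.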

The key element of the game is the ability to program robots, something very few games allow you to do. Games like Lemmings have a very crude form of this, where you can assign simple roles to agents, but it's much more limited. Games that are fun as hell but still make you think are extremely rare, and this was one of the best. There have been a few ports and similar games since, though I haven't checked them out.

At some point I will definitely do my own take on the general concept. I think the idea of gently introducing programming and logic problems to kids is enormously important, and I found the experience of wiring up agents and then watching them act out my programs great fun, even when they didn't work (which was most of the time), and especially when they did unexpected things.

In my idea folder are plans for a game that abstracts away the computer/hardware/robot elements but leaves the core gameplay intact. My design uses cards as programming elements. The player assembles a sequence of cards that summon an agent (such as a magical bird, fish, or tiger) and determine its decision policies based on what it encounters in the environment (e.g., a given card might compel the animal to climb a tree if it comes near one). The effects of some cards might depend on adjacent cards in the summoning deck, while others might be independent of order. But the basic idea is that players will solve puzzles by building little programs that execute in simple environments to accomplish goals. Hopefully players will be programming without even knowing that's what they're doing. :)
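As a very rough sketch of the kind of data structure I have in mind (purely speculative at this point; none of this exists yet, and all the names are placeholders):

    // Speculative sketch: a card couples a condition the summoned
    // agent can sense to an action, and a deck is an ordered list
    // of cards consulted each turn.
    import java.util.List;

    class CardProgramSketch {
        enum Condition { NEAR_TREE, NEAR_WATER, SEES_ENEMY, ALWAYS }
        enum Action { CLIMB_TREE, SWIM, FLEE, MOVE_FORWARD }

        static class Card {
            final Condition when;
            final Action then;
            Card(Condition when, Action then) { this.when = when; this.then = then; }
        }

        // Each turn the agent plays the first card whose condition
        // holds, so card order in the deck matters.
        static Action decide(List<Card> deck, Condition sensed) {
            for (Card card : deck)
                if (card.when == sensed || card.when == Condition.ALWAYS)
                    return card.then;
            return Action.MOVE_FORWARD; // default when nothing fires
        }
    }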

Anyway, Robot Odyssey has stuck in my head for 25 years, even though I played it for less than a month on somebody else's machine. It's a great game, and a model for what a designer can achieve by refusing to dumb material down and by trying to create an innovative experience.

Monday, June 8, 2009

Recap of My Android Market Experiences

So I've been selling paid apps and releasing ad-supported ones on the Android Market for nearly 3 months now. It's difficult to gauge success. I thought I wasn't doing all that well, but it sounds like I'm doing pretty well relative to all but the outliers in the iPhone market. This iPhone developer is complaining about having a couple of apps in the top 100 of their respective categories and still only pulling in about $20/day.

So here are some summary stats for reference.
  • I released my first paid app, ConcretePal, 83 days ago.
  • My average daily net income for that span is: $19.41
  • My average daily income from ads for that span is: $8.74
  • I've released 22 total apps: 14 paid and 8 free (6 of those are ad-supported demo versions of paid apps)
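(For the curious: $19.41/day over 83 days works out to roughly $1,611 total net, with about $725 of that coming from ads.)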
Here's a visual breakdown:

I thought I was going to do very well in the market when I released Spades for $2.99: it was the only Spades game on the Market, and it sold very well at first. But as you can see, that didn't last long. Sales for a given app settle down to a pretty low baseline, so you constantly need to release new apps if you want to keep the revenue stream coming.

So, I'm not getting rich, but it's nice supplemental income. And it sounds like it's comparable to iPhone apps that are doing reasonably well.

I hope sales hold up all right over the next couple of months. I'm not going to be able to release any new apps, since I'm working full-time on my game for the Android Developer Challenge II. It's coming along pretty well. I just hope I can get a decent working version ready for submission by August.

Soon I'll post some screenshots and concept art to give you an idea of where it's headed.

Thursday, June 4, 2009

Building My Own PC

So my old computer is just that...old. It's around 5 years old, which is friggin' ancient for a PC. The video card seems to be going out, so I decided to invest in a new machine, and after debating the pros and cons, I decided to order the parts and build my own.

So far it has not gone extremely well. One friend sent me this guide to assembling a mid-range (~$700) gaming rig, and another friend sent me this guide for assembling a rig in the $800 range. When I went to order the parts, some on the first list were sold out at that particular retailer. Instead of ordering from two different sites, I thought I'd just piece a system together by combining parts from both guides.

Now, I'm not a hardware guy, though I've mucked around inside PCs a little bit. Still, my knowledge base is pretty scant. I've had compatibility issues with parts before, but those usually arose from the age difference between them. I assumed that PC parts from the same generation would generally be compatible.

Well, this may seem obvious to anyone who knows anything about PC hardware, but motherboards are specific to certain processors. One of those guide systems uses an AMD processor and the other an Intel. I ordered a motherboard that's only compatible with certain AMD processors, and I ordered an Intel processor. Guess what? That doesn't work!

So now I get to order a new processor and wait for my new shipment. In the meantime, NewEgg's return policy includes a 15% "restocking fee", so I'd be out about $30 plus shipping, which blows. So I listed the processor on Amazon, which, if it sells, would actually let me break even.

Also, there was fun with the case. I ordered the Cooler Master RC-534 from the ExtremeTech guide, which says it comes "complete with 460W power supply unit". I found the very same model on NewEgg and ordered it. Guess what? No power supply! Yay! Apparently you can order this model with or without a power supply. That would have been good information to know. So I went to Best Buy last night and got gouged on a crappy power supply because I wanted to get my system up and running. This was before I found out about the processor snafu. Since I had to order a new processor anyway, I went ahead and also ordered a much cheaper and much better power supply online. Now I get to return the crappy Best Buy PSU.

I thought I had done a decent amount of homework, but there were just some glaringly obvious things that someone with no hardware experience might completely overlook unless explicitly told. I was expecting problems like parts not fitting in the case or, looking much farther ahead, getting the whole thing put together, turning it on, and having nothing happen. Fun and joy...I haven't even gotten that far yet.

I'm already wishing I had just bought a pre-built system, though I am gaining valuable, if stunningly obvious, experience.

More updates if I ever get the damn thing put together.