Friday, November 28, 2008

Pecanless Butter Pecan Ice Cream

I once ordered a blueberry muffin at Corner Bakery in Dallas, got to work, and found a blueberry muffin with no blueberries in it. I once ordered chile relleno in a restaurant in Lafayette, Louisiana that didn't have a chile in it.

But last week I got some butter pecan ice cream at Maggie Moo's. When I got it home, I opened it to find butter pecan flavored ice cream, with absolutely no pecans in it. I think Maggie Moo's is one of those places where if you want anything other than smooth ice cream you have to order the nuts/chocolate chips/whatever kneaded into the ice cream. That includes pecans in your butter pecan ice cream. I guess if they did Rocky Road, it would be nut and marshmallow flavored. Bah.

Anyway, I hope everyone had a good Thanksgiving. I'm in Texas for the holidays, and I'll be back in the cockpit on Sunday.

Tuesday, November 25, 2008

What are Car Horns For?

There's a weird article in Slate today about car horns, and I'm not sure what the point is. The tagline says "the car horn is bleeping useless," but the author starts out the article talking about his mother and wishing she'd use her horn more.

My mom is not much of a honker. You know what I mean when I say that: If a driver in front of her fails to hit the gas when the light turns, she simply waits. Time passes, and the green light glows. Eventually, the driver notices the signal change, or the cars behind begin to lay on their horns. Traffic proceeds. But no thanks to my mom; she's just not much of a honker.

For years I've been telling my mom that she ought to learn to honk a little more.

Then he goes on and on about how the horn isn't really a safety device. He talks about studies that show that honking is linked to aggression and gender, and that in most accidents, when it is used, it's too late.

Well, no crap. I've never viewed the horn primarily as a safety device. It's a crude communication device. Sometimes it's useful in exactly the situation the author talks about with his mother. If someone is not aware that a light has changed, beeping at them is fine. Other times it's useful to express dissatisfaction, e.g. when someone does something dangerous or rude on the roadway.

The only time it really works as a safety device is at speeds less than 10 mph, such as in a parking lot. Someone is pulling out of a spot and they don't see you. A beep lets them know you're there, and can avoid a fender bender. But when you're going much faster than that, of course the horn isn't going to help. If you're about to hit someone in an intersection, your attention and motor control are best spent with your hands on the wheel, trying to actually avoid the collision or minimize the damage, rather than moving one of your hands off the wheel in order to honk the horn.

The car horn is the pedestrian equivalent to "Hey!", which is used for a variety of situations, not all of which involve safety.

I sometimes wonder whether easy car-to-car communication, making car travel more analogous to walking on the street, would in general make driving a safer and more humane enterprise. I think there is an impersonal effect to sitting in a hunk of metal and glass without the ability to talk directly to those around us, and that it makes people inherently view those in other cars as competitors or obstacles, rather than as people. Until then we're stuck with the horn. And the wave and the finger.

Saturday, November 22, 2008

I Am a Nielsen Family

I got randomly chosen to be a Nielsen family, which means the Nielsen TV Ratings people mailed me a diary to keep track of my viewing habits. Filling out the diary should be fairly simple, since I don't actually watch any TV on my television. I use it for watching DVDs. What little TV I do watch is on my computer.

They also mailed me five shiny new dollar bills. It was like getting a birthday card from my grandpa. These people need to get with the times.

Sexual Harassment Training

Via Instapundit, here's an editorial from a professor at UC Irvine who refuses to take state-mandated sexual harassment training and is losing funding and resources because of it.

Four years ago, the governor signed Assembly Bill 1825 into law, requiring all California employers with more than 50 people to provide sexual harassment training for each of their employees. The University of California raised no objection and submitted to its authority.

But I didn't. I am a professor of molecular biology and biochemistry at UC Irvine, and I have consistently refused, on principle, to participate in the sexual harassment training that the state and my employers seem to think is so important.

He then recounts that his continued refusal to take the training resulted in the university turning his lab and assistants over to someone else. Then he says that he told the university he would take the course if they provided a signed statement absolving him of any implication that he had harassed students in the past, which they refused to provide.

So what are his big problems with the training?

First of all, I believe the training is a disgraceful sham. As far as I can tell from my colleagues, it is worthless, a childish piece of theater, an insult to anyone with a respectable IQ, primarily designed to relieve the university of liability in the case of lawsuits. I have not been shown any evidence that this training will discourage a harasser or aid in alerting the faculty to the presence of harassment.

So his first big gripe is that he thinks the training is silly and hasn't been demonstrated to be effective.

Now before I returned to graduate school I actually worked in a department that administered sexual harassment training. As part of my job, I reviewed some existing programs, most of which consisted of videotapes and questionnaires. Most were relatively short (you could finish within half an hour). As you might expect, some had poor production values, while others were more polished. And the better ones actually relayed good information regarding existing laws, while acting out scenarios that could be considered harassment.

Now I haven't seen studies regarding the effectiveness of such training, but I would be surprised if a simple informational video drawing attention to the details of existing laws would not have some positive effect in terms of reducing incidents of sexual harassment. And even if it didn't, what actual harm would it do?

Again, from Dr. McPherson:

What's more, the state, acting through the university, is trying to coerce and bully me into doing something I find repugnant and offensive. I find it offensive not only because of the insinuations it carries and the potential stigma it implies, but also because I am being required to do it for political reasons. The fact is that there is a vocal political/cultural interest group promoting this silliness as part of a politically correct agenda that I don't particularly agree with.

Uh, this guy thinks he's part of some kind of witch hunt. Could it possibly be that sexual harassment is an actual continuing problem, and that the state has enacted the law in order to help with the problem? Nah...gotta be a witch hunt of some kind. And how exactly does training carry a stigma if everyone is taking it? That's moronic.

The imposition of training that has a political cast violates my academic freedom and my rights as a tenured professor. The university has already nullified my right to supervise my laboratory and the students I teach. It has threatened my livelihood and, ultimately, my position at the university. This for failing to submit to mock training in sexual harassment, a requirement that was never a condition of my employment at the University of California 30 years ago, nor when I came to UCI 11 years ago.

Uh, dude...I'm afraid it's you who are threatening your livelihood by not agreeing to take an innocuous, probably helpful little piece of training. This paragraph smacks of a sense of entitlement. He acts like he shouldn't have to take the course because he's tenured, because it wasn't a requirement when he joined the university. Guess what? Laws change, man. Thirty years ago your university probably turned a blind eye to incidents of professors screwing their students for grades. Tenure doesn't make you immune to the laws of your state. If you don't like it, write your representatives and get out and vote. If a law passes that you don't like, that's just tough shit, man.

I could possibly understand a principled stance against a training program, but this guy just seems like a whiny ass.

Friday, November 21, 2008

The Tyranny of the Discontinuous Mind

I finally finished Richard Dawkins' The Ancestor's Tale. I'd give it a B.

One bit I thought was especially interesting was a section where he commented negatively on what he calls "the tyranny of the discontinuous mind," a subject that he's also written about elsewhere. What he's talking about is people who are only able to think of things in discrete, black-and-white terms. The issue where the rubber most often hits the road is the concept of personhood. You can view the concept of personhood as an all-or-nothing affair, and determine the criteria by which something meets that definition (e.g. when an egg becomes fertilized by a sperm, that single cell is a person, and before that it's not). Or you can view personhood as a gradient (e.g. a 5 year-old is more of a person than a fetus, which is more of a person than a fertilized egg), with a sliding scale based on various criteria such as level of development, brain activity, etc.

I'd provisionally agree with Dawkins that discontinuous thinking, in the absence of continuous thinking, is a bane. But I'd make two points to qualify that.

One, discontinuous thinking is often desirable and necessary, especially when enacting laws. When is a person considered legally drunk? Currently, as far as I know, we have a threshold for blood alcohol levels. Above it, you're considered legally inebriated; below it, you're not. We could enact a spectrum of various offenses depending on the exact blood alcohol level, but how finely do we slice it? Do we have 10 categories of inebriation? If we're doing any slicing at all, we're dividing a continuous variable into discrete chunks, so we really can't avoid it. And here's the thing...if you zoom in far enough, the continuous becomes the discontinuous, and if you pull far enough back, the reverse occurs: the discontinuous blurs into a continuum. The fact is, we simply cannot avoid legal categories if we want a system that isn't bewilderingly complex to the point where it cannot function. So we have to define cut-off points.

This is what Roe v. Wade did with the abortion issue, by creating categories by trimester. Some abortion rights advocates favor allowing abortion at any time up until birth, while many pro-lifers want protection from abortion upon conception. Both are exhibiting discontinuous thinking, defining personhood in binary terms around a strictly-defined point in time: birth in one case, and conception in the other. Roe v. Wade affords incrementally more rights as the individual develops in the womb.

Two, Dawkins is ignoring the reverse phenomenon, what I'd call the ineptitude of the continuous mind. It's important to be able to view the world from either perspective, as the situation calls for it. If you viewed everything as continuous, you would essentially be unable to ever make category distinctions or decisions. Many times we need to define hard boundaries and thresholds. Many times we need to sharpen distinctions between groups. In his own field of biology, Dawkins notes the slippery, continuous notion of many concepts, such as species and life. Populations may not always fall neatly into categories, but at the end of the day biologists must use category labels if they're going to be able to study, communicate, and attempt to make sense of it all. To apply purely continuous thinking would be to do away with boundary distinctions altogether, to say that life is just a collection of forms that vary continuously along a spectrum. And that kind of fuzzy thinking is just as problematic as strictly discontinuous thought.

The obvious answer is that we need both, and we need to develop the judgment to determine in which cases each type of thinking most appropriately applies.

Wednesday, November 19, 2008

I Passed

So...I passed my proposal defense. I'm officially ABD now.

Woot.

The Mentalist Jumps the Shark

Actually, during last night's episode, The Mentalist didn't just jump the shark. No, it strapped on some rocket boots and a jetpack and motherfucking blasted its way over the shark, scorching everything and everyone below. The episode was appropriately named "Seeing Red," because anyone who actually admired the show for taking a skeptical perspective on astrologers, TV psychics, and other flim-flam hucksters would actually be seeing red after the end of the show.

Normally people call giving plot points away "spoilers," but that implies that the show or movie might actually be ruined by knowing the story ahead of time. Trust me, there was no way to make it any worse.

Here's what happened: There was a lame story about a rich woman who was run over by a car in the opening segment. She was leaving an appointment with her psychic, Kristina, who warned her that she was in danger. The cops show up to investigate, and we learn the major characters: the dead woman's son and daughter and her lover, a flaky, womanizing photographer.

Throughout the show, Kristina the psychic spars with Patrick Jane, and she also continually provides information that she shouldn't otherwise know, like that the car used in the murder was ditched in the town's reservoir. Now Jane thinks this implicates her as a suspect...and rightly so. But it turns out that she had nothing to do with the murder. It was the woman's daughter, who was mad at her for threatening to cut her brother out of the will and snapped when her mom didn't answer her phone call, so she ran her down in the street with the car. Okay.

Anyway, the show repeatedly insinuates that Kristina the psychic knows things that she shouldn't; she is never implicated in the murder, and she is never exposed as a fraud by the skeptical Jane. And then we come to the stomach-turning final scene.

Kristina asks to speak privately with Jane. He grudgingly accepts. When alone, Kristina says that she spoke with Jane's dead wife, and that there is a question that's been haunting him since his wife and daughter were killed by a serial killer. "Your daughter never woke up," Kristina tells him, and then leaves the room.

Jane starts to cry.

And then Derek starts to vomit.


Either Jane is allowing himself to be emotionally manipulated by Kristina, or, more likely, the show is suggesting that her powers are real, which is a repulsive kick in the nuts to anyone who started watching the show because it was a rare little beacon of skeptical perspective in a media landscape filled with sympathetic treatments of supernatural hogwash.

Thanks for completely destroying a promising show before it even really got started, you monkey-humping asshats.

Supercomputer Hyperbole

Wired has an article about the latest iteration of supercomputers breaking the petaflop barrier. That means they can carry out just over a quadrillion (1,000,000,000,000,000) floating-point calculations per second. That's a lot. And it's a significant milestone. But this seems like a bit much:

"The scientific method has changed for the first time since Galileo invented the telescope (in 1509)," said computer scientist Mark Seager of Lawrence Livermore National Laboratory.

Look, this is a quantitative, not a qualitative, change. Some are making the claim that the ability to model and simulate at greater and greater levels of detail will allow for qualitatively different ways of doing science. But I don't think so.

Breaking the petaflop barrier is more like building a bigger and better telescope that allows us to see farther and clearer, and to see things we've never seen before, but it's not like the invention of the telescope in the first place.

Tuesday, November 18, 2008

Defending My Proposal

So tomorrow I'm defending my proposal, which is basically a review of the area I'm interested in and a road map for my dissertation work. If I pass, then I'll be ABD (All But Dissertation), and then I'll actually have to do all the stuff I've said I'm going to be doing.

The basic theory driving my work isn't really all that complicated, though I can easily get bogged down in the details, an urge I'll have to try to resist tomorrow.

In a nutshell, I'm proposing a model of how we learn and process sequences. Virtually everything you do is a result of either learning new sequences, recognizing ones you've already learned, or generating them. So it's crucial that we try to understand how this works. I was inspired a great deal by Jeff Hawkins' ideas, but I saw a large gap between his theory and how it might actually be implemented based on what we know about the brain. My model is an attempt to try to fill in some of that gap.

First of all, the brain is hierarchical, which just means that some areas are "above" others, while others are "below". So broadly speaking there are three kinds of connectivity:

  • Feedforward: from lower to higher areas
  • Lateral: between neighbors in the same area
  • Feedback: from higher to lower areas

I'm proposing that each of these types of connectivity plays a distinct role in learning and processing sequences.

Feedforward connectivity allows you to "chunk" sequential information. For example, when you learn a phone number, you typically learn it as a chunk of 3 numbers combined with a chunk of 4 numbers. If the whole system is arranged hierarchically, then we can group smaller chunks into larger chunks, up and up the hierarchy, so that we can efficiently store very long sequences.

Lateral connectivity allows you to learn pairwise sequences, for example, what comes after "g" in the alphabet. I'm hypothesizing that the type of learning that occurs between neighbors allows for a kind of domino effect. You hear "a" and it's like knocking over the domino for "b" and then "c" and so on, in a cascading effect. This type of representation is directional (i.e. it's difficult to say the alphabet backwards) and content-addressable (which means that you can reproduce the sequence simply by being given some small part of it, like the first few notes of a song).
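To make the domino idea concrete, here's a toy sketch in Python. It's purely illustrative, my own simplification rather than the actual model (which uses spiking neurons, not lookup tables):

# Toy sketch of lateral, pairwise sequence learning: each item "points at"
# its successor, so a cue anywhere in the sequence knocks over the rest
# of the dominoes.

def learn_pairs(sequence):
    """Learn first-order transitions between neighboring items."""
    successor = {}
    for current, following in zip(sequence, sequence[1:]):
        successor[current] = following
    return successor

def replay(successor, cue):
    """Cascade forward from any cue item (content-addressable)."""
    out = [cue]
    while out[-1] in successor:
        out.append(successor[out[-1]])
    return out

table = learn_pairs("abcdefghijklmnopqrstuvwxyz")
print("".join(replay(table, "g")))  # prints "ghijklmnopqrstuvwxyz"

Notice that you can only cascade forward from a cue, never backward, which is exactly the directionality point above.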

You can see how these two types of representations might complement one another. The first is efficient and scalable, but isn't content-addressable. The second is content-addressable, but doesn't scale well.

Finally, feedback connectivity has been hypothesized to push predictions back down the hierarchy, which helps when we're confronted with input that is noisy (e.g. a cell phone conversation). If there is noise or gaps in what we're sensing, the higher level nodes are transmitting via the feedback connectivity what they expect to experience next, and that helps fill in the missing pieces, making the whole system very robust and reliable.
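As a cartoon of that top-down filling-in (again, my own illustrative snippet, not the model itself):

# Toy sketch of feedback: a higher level that has recognized "cat" pushes
# its expectation down to fill gaps (None) in a noisy input stream.

def fill_gaps(noisy_input, expected):
    """Replace missing items with top-down predictions."""
    return [exp if obs is None else obs
            for obs, exp in zip(noisy_input, expected)]

heard = ["c", None, "t"]              # a gap, as on a bad cell connection
print(fill_gaps(heard, list("cat")))  # prints ['c', 'a', 't']

The interesting part, of course, is how the higher level settles on "cat" as its expectation in the first place, which is what the recognition machinery is for.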

My plan is to implement the model incrementally, and I've already got a preliminary model using feedforward connectivity. It learns by associating the immediate past with the present. Let's say you're learning the alphabet for the first time. You hear "a", then "b". The way the model works is, it stores "a" in a kind of short-term memory, and when "b" is presented, the system binds them together, the delayed "a" and the current "b". It chunks together "ab" and "c" in the same way, storing progressively larger chunks. And it does so using spiking neuron models and learning mechanisms that have been experimentally confirmed in animals and humans.
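In code-cartoon terms, the core loop of that preliminary model looks something like this. This is a deliberately crude sketch of the binding idea; the actual implementation uses spiking neuron models and biologically grounded learning rules, not string concatenation:

# Toy sketch of feedforward chunking: bind the contents of short-term
# memory (the delayed past) to each new input, storing progressively
# larger chunks ("a"+"b" -> "ab", "ab"+"c" -> "abc", ...).

def learn_chunks(stream):
    memory = stream[0]            # short-term memory holds the recent past
    chunks = []
    for item in stream[1:]:
        memory = memory + item    # bind delayed context to current input
        chunks.append(memory)     # store the progressively larger chunk
    return chunks

print(learn_chunks("abcd"))  # prints ['ab', 'abc', 'abcd']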

So that's it in a nutshell. There are many more details I'm leaving out, but that's the gist. Hopefully everything will go smoothly, and by tomorrow evening I'll only have one more hurdle to jump before getting my PhD.

Monday, November 17, 2008

The Ancestor's Tale

I'm just finishing up The Ancestor's Tale by Richard Dawkins, which is basically a description of the evolution of life on earth treated as a journey back through time to meet up with common ancestors.

It's been all right, filled with lots of interesting tidbits about various animals, but many times it just feels like Dawkins is riffing about whatever he wants to talk about at the moment. I blogged previously about his lengthy digression about race. In that section, he distinguished the concept of race from that of species by claiming that there is an objective scientific criterion for species. Namely, if individuals mate and reproduce under natural conditions, they're considered to be in the same species.

"Wait...what about asexual species?" I thought. Later in the book Dawkins brings up species that reproduce asexually, and admits that their designation to species is based on similarity judgments by scientists. In other words, the concept of species doesn't really have any firmer ground to stand on than any other classification in biology.

Another thing that bothered me was when he talked about why wheels haven't evolved, another topic I blogged about not too long ago. He pointed out that wheels aren't all that useful without roads. But then he goes on to point out what he says is probably the only example of wheels having evolved: the bacterial flagellum, which is a wheel-axle mechanism that freely rotates, whipping around a tail-like appendage that propels the bacterium forward. Well, if this is considered a true wheel, then we don't necessarily need to think about wheels purely as a means of locomotion on land, do we? Why haven't they evolved in aquatic animals? In other words, why don't fish have rotors?

Anyway, it's been a pretty good read, but I'm glad it's nearly over.

Which is Smarter: Cats or Dogs?

The Straight Dope question for today is a classic one, asking which is smarter between cats and dogs.

Cecil takes the political route and gives the following answer:

Judging the relative intelligence of cats and dogs is like deciding which is better looking — there's just not much basis for comparison. Psychologists have a tough enough time coming up with a culture-blind IQ test for humans, who all belong to the same species; designing a species-blind test for dogs and cats is just about impossible. What people take to be signs of intelligence in their pets usually are just specialized survival skills that say nothing about innate brainpower. A cat, for instance, is much more dexterous with its paws than a dog. This dexterity fascinates cat lovers, who also cite the cat's legendary standoffishness as proof of its mental superiority. The dog, on the other hand, is much more of a social animal; dog advocates claim this proves the dog is more civilized, ergo, more intelligent.

This reminds me of Howard Gardner's theory of multiple intelligences. Nobody's really smarter than anybody else. Some people are good at some things, like music and swimming, and others are good at writing poems and solving geometric proofs.

That is to say, there are some people who consider intelligence a wholly subjective concept, and others, such as myself, who consider it an objective concept, amenable to defining and measuring without regard to value judgment.

Instead of talking about two individuals or species that are relatively close in their cognitive capacities, let's take an extreme example to illustrate the point and ask:

Which is more intelligent, an earthworm or a chimpanzee?

There are two broad ways to answer this question. The first is to do so as Cecil did, under the idea that intelligence is subjective and relative to the individual or species. In that case, we would say that the question is nonsensical. An earthworm is good at burrowing and detecting various nutrients in soil. A chimpanzee is good at living a semi-arboreal lifestyle and all that that entails. I personally think this is a silly response. That particular answer negates the concept of intelligence as relating to cognitive capacities, and instead more closely associates it with the concept of evolutionary fitness.

The more sensible answer is the one that I think most scientists and laypeople would give if pressed for an answer, the obvious one: the chimp. And why? Well, those same people might be hard pressed to justify their answer, and as I've blogged about before, there's a paucity of good theory relating to intelligence, especially across species and systems.

Personally, I'd say that intelligence is the collective capacity of a system to recognize patterns (temporal and spatiotemporal), generate patterns, and in some cases learn patterns. An individual's ability to do so is dependent upon the number and quality of their sensory modalities (vision, hearing, touch, etc.) and the range and complexity of their behavioral repertoire. We can measure an individual's ability to recognize, generate, and learn patterns, in both naturalistic and controlled settings, by observing four main metrics (a toy sketch follows the list):


  • breadth: how many different kinds of patterns can they recognize/generate/learn?
  • depth: how many patterns within a given class can they recognize/generate/learn?
  • speed: how fast can they recognize/generate/learn patterns?
  • accuracy: how well can they recognize/generate/learn patterns?
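Just to make that concrete, here's a hypothetical little scoring harness in Python. Everything in it is my own invention for illustration (the learner interface, the task format, the mapping to the four metrics), not an actual assessment tool:

# Hypothetical harness for the four metrics. Assumes some "learner" object
# with learn(seq) and recall(cue) methods; all names here are invented.
import time

def score(learner, tasks):
    """tasks: a list of (pattern_class, sequence) pairs."""
    start = time.time()
    correct = 0
    classes_mastered = set()
    for pattern_class, seq in tasks:
        learner.learn(seq)
        if learner.recall(seq[0]) == list(seq):
            correct += 1
            classes_mastered.add(pattern_class)
    elapsed = time.time() - start
    n = max(len(tasks), 1)
    return {"breadth": len(classes_mastered),  # kinds of patterns handled
            "depth": correct,                  # patterns mastered in total
            "speed": elapsed / n,              # average time per pattern
            "accuracy": correct / n}           # fraction recalled correctly

Real cross-species comparisons would obviously require tasks matched to each species' sensory and motor capabilities, which is exactly where the hard theoretical work lies.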


In terms of its ability to perceive and process its environment, a chimpanzee has far more powerful and numerous sensory systems than an earthworm. Also, the range of behaviors that a chimpanzee can perform vastly outnumbers those of an earthworm. Even without any quantitative assessment, it's very easy to see that by these criteria a chimpanzee is more intelligent than an earthworm.

I think an answer exists to questions like "Is person A more intelligent than person B?" and "Which is more intelligent: cats or dogs?"

In the case of species with numerous breeds, the answer may be complex. But I believe there is a practical way to determine whether or not the mean intelligence of a given species is greater or less than that of another. What is needed first is a strong theoretical framework, followed by the development of rigorous tools and methods to make those determinations, and right now we are sorely lacking in both.

Saturday, November 15, 2008

Atheist Advertising

A couple of atheist ad campaigns have made it into the news recently. The most famous is a campaign in Washington DC with ads appearing on buses saying "Why believe in a god? Just be good for goodness' sake".

And my sister just sent me a link to this story about a similar campaign in Denver, featuring billboards that read "Don't believe in God? You're not alone."

Now, both of these seem like decent campaigns to me. The spokesperson for the group that sponsored the first ad says the motivation is to let non-religious people in the community know that they are not alone, and secondarily to help dispel the mistaken notion that morality is necessarily tied to supernatural beliefs. The second campaign obviously shares that first motivation: to indicate to the non-religious that even though their views may be in the minority, they are not alone.

Obviously, not everyone thinks it's a great idea. From the Fox story:
The humanists' entry into the marketplace of ideas did not impress AFA president Tim Wildmon.

"It's a stupid ad," he said. "How do we define 'good' if we don't believe in God? God in his word, the Bible, tells us what's good and bad and right and wrong. If we are each ourselves defining what's good, it's going to be a crazy world."
Well, Tim, we don't individually decide what's good. We can still collectively reach consensus on how we want to treat each other. That's called a society, with laws and mores. As I've pointed out before, many of the rights put forth in the US Constitution are not dictated by god, but are secular ideals. The very first amendment ensures freedom of religion, the personal right to worship whatever you want. In contrast, the first commandment of the Old Testament strictly forbids worshipping anything but the Old Testament deity. One is a secular value, the other is a religious one, handed down from on high. How did we get the idea that religious pluralism is good? The founding fathers used their brains.

There are some gems in the Denver story as well:
Pastor Willard Johnson of Denver's Macedonia Baptist Church called the billboards a desperate effort to discredit Christianity.

"The Bible is being fulfilled. It says that in latter days, you have all these kinds of things coming up, trying to disrupt the validity of Christianity," Johnson said. "If they don't believe in God, how do they believe they came about? We denounce what they are doing. But we do it with love, with gentleness, with decency and with compassion."
Glad the atheists could help fulfill prophecy. Now think for a moment about inventing your own religion. What would be a good feature to implement that would be self-reinforcing against any kind of criticism? How about adding something about how there will be people who will try to tear down your beliefs and discredit them? That might be a good psychological mechanism for keeping your followers from ever actually listening to criticisms of their religious beliefs.

And I very much like the idea that you can denounce something gently.

Also from the same story:
Bob Enyart, a Christian radio host and spokesman for American Right to Life, said it's hard to ignore the evidence.

"The Bible says that faith is the evidence of things not seen. Evidence. If we ignore the evidence for gravity or the Creator, that's really dangerous," said Enyart. "Income tax doesn't not exist because somebody doesn't believe in it. And the same is true with our Creator."
Holy crap...I can't argue with that logic, mostly because there is no logic there. Whew.

Anyway, I applaud the efforts of the non-religious to increase awareness of our presence in society and to communicate to like-minded others that they are not alone. It's a small step toward community building, although as I've argued in the past, ultimately such a community will need to be built on a foundation of positive principles, things we do believe in such as human rights and scientific inquiry, rather than simply the absence of belief in the supernatural.

A World With No Nukes?

Wired has a feature called Danger Room, with advice from experts to President-elect Obama on issues related to national security. The latest installment is about nuclear weaponry by Joseph Cirincione. His credentials?
He's the president of the Ploughshares Fund, and the author of Bomb Scare: The History and Future of Nuclear Weapons. During the election, Cirincione was an informal advisor to the Obama campaign. Previously, he served as director for nonproliferation at the Carnegie Endowment for International Peace.
Okay, what's his advice?
On nuclear weaponry, the United States must lead by example. Expect President-elect Barack Obama to call for a nuclear summit of leading nations on practical steps that all can take towards a world without nuclear weapons. Their ultimate elimination should be a core principle of his national security strategy. Look for early talks with Russia on mutual reductions to show we are serious. There are dozens of other steps to take, but cleaning out our own nuclear house would be an important early move.

Stopping new nuclear states and preventing nuclear terrorism will also be at the core of Obama's new, more effective nuclear security policy. Fortunately, Obama developed during the campaign the most comprehensive nuclear policy program any candidate has ever detailed. He now must implement it, beginning with a multi-level effort to prevent nuclear terrorism, then quickly pivoting to preventing new nuclear states and eliminating the 26,000 weapons in global arsenals.

The key is to stop terrorists from getting the stuff for the bomb core—highly enriched uranium or plutonium. No material, no bomb, no nuclear terrorism. Obama pledged to lead a global effort to secure all the weapons materials at vulnerable sites within four years, destroying as much as possible. Look for him to appoint a deputy national security advisor to coordinate the work.

The more countries with weapons, the greater the risk, so expect also a quick start on tough, direct diplomacy to roll back the North Korean nuclear program and preventing a nuclear Iran. He will gain leverage by dealing with our weapons. If we cling to our thousands of hydrogen bombs, how can we convince others that they cannot have one?
There's a key problem with this line of reasoning. It assumes that if current nuclear powers eliminate their nuclear weaponry, this will "lead by example" and no other states will have motivation to develop or acquire nuclear weapons. That would be nice if it had some basis in reality. In fact, a moment's reflection shows how foolish it is.

Imagine a world in the near future where the US, UK, Russia, France, Israel, China, Pakistan, and India (and whoever I might be leaving out) all agree to a pact completely eliminating all of their stores of nuclear weapons. What does the world look like now? If anything, the US has actually increased its military standing, because it still retains the most powerful conventional force by far, but there are no nuclear trump cards to keep it in check.

The equalizing power of nukes is patently clear; they enable a military with weaker conventional forces to provide a clear deterrent to a military with far stronger conventional forces.

This is the simple strategic argument for why it would probably be impossible to get countries like Russia and China to ever give up nuclear weapons. On the practical side, how could we ever enforce such an agreement? We have seen how difficult verification is even in relatively small countries such as North Korea and Iraq. A case in point: in 1972, 162 states signed the Biological Weapons Convention (BWC), including the Soviet Union. The treaty banned the development and production of biological weapons. However, we now know from former researchers in the USSR's biological research program that they continued developing biological weapons despite the treaty, including attempts to weaponize smallpox, a scourge that humanity had declared eradicated.

If a given country does not want to cooperate with a given arms agreement, especially if that country has sufficient resources, there is simply no practical way to ensure cooperation. So even if all the current nuclear powers all agreed to get rid of their weapons, there would be no way to make sure that everyone was abiding by the agreement, and there would be little motivation to cooperate, especially for those with weaker conventional forces.

Finally, in a fantasy world where all the current nuclear powers gave up their stockpiles and we were actually able to verify complete cooperation, why would small states necessarily give up on developing nuclear programs of their own? The naive perspective of Cirincione is that if no one has nukes, no one will want nukes. The implication is that nukes are only ever strategically defensive. But imagine a world where no other power has nukes, but North Korea has secretly developed them. A dictator who was the sole possessor of nuclear weaponry would have carte blanche: he could annex his neighbors without firing a shot.

No, I'm afraid you can't put the toothpaste back in the tube. For both strategic and practical reasons, nuclear weapons simply will not be expunged from the world stage. I would like to live in such a world, but then, I also entertain a number of other utopian fantasies. I do agree that we need to try to reduce the stores to the lowest possible levels, and implement better controls and tracking for existing nukes. But getting rid of them altogether? That's a silly pipe dream.

Friday, November 14, 2008

Religulous

I finally saw Bill Maher's Religulous yesterday.

Overall I enjoyed it. It was very funny in parts. My favorite bit was when they visited a place where Jews invent contraptions to basically try to circumvent the edict against resting on the sabbath. Really hardcore Jews won't even turn a light switch on or off on the sabbath, but this place comes up with all sorts of gadgets, such as a Shabbat-friendly phone:

Making or receiving phone calls is also a no-no on Shabbat, but technology has come to the rescue yet again. The Institute for Science and Halacha has come up with a button-free telephone. All the electrical circuits are perfectly closed, which means that in practice - the only thing stopping it from dialing all the time is an electrical disruption caused by an infra-red ray. If an emergency arises on Shabbat, you just wedge a stick into a designated hole on the telephone. A circuit is made and dialing is enabled.

Maher pointed out that what this amounts to is trying to trick god, and isn't that more than a little silly?

Mostly Maher chose this kind of low-hanging fruit. He interviews truckers, an amusement-park Jesus, and a guy who claims to be the second coming of Christ. On the one hand, you could say that he's just picking on the wackos, but he does interact with a fair number of "normal folks" who make up the rank and file of believers. I don't think the movie would have been quite as funny if Maher had stuck to interviewing university theologians, and Maher probably didn't think so, either. If you want a more serious version of Maher's approach, there's always Richard Dawkins' Root of All Evil, a documentary he put together for the BBC.

Religulous was bookended by some very serious editorializing by Maher about the deadly mix of religion and modern weaponry (a point Sam Harris makes in The End of Faith), but it felt like a jarring mismatch in tone from the light-hearted stuff that had just come before. Maybe if he'd mixed it in throughout, it would have been more effective.

Anyway, I doubt that the film would have much appeal to moderately religious people, though I don't know. Maher's basic tone is, "Come on, people, look how silly this shit is!" which I don't think will resonate very well with people leaning one way or the other, and I doubt they'd go to see this film anyway.

Still, it was very funny in parts, and I would like to see feedback from a group of people across the religious spectrum, rather than just other atheists and movie reviewers.

Thursday, November 13, 2008

The Ten Commandments and the Seven Aphorisms

Slate has an article describing a current case before the Supreme Court regarding the rights of a group called Summum, a religious group founded in 1975 by a guy who had an encounter with aliens who conveyed a bunch of wisdom to him. Oh, and they mummify people and pets upon request.

And no, I'm not making any of this up.

Anyway, some Summum followers in Utah want to put up a monument in a local park which lists their "Seven Aphorisms". Incidentally, the same park already has a monument listing the Ten Commandments. Here are the Seven Aphorisms:

1. SUMMUM is MIND, thought; the universe is a mental creation.
2. As above, so below; as below, so above.
3. Nothing rests; everything moves; everything vibrates.
4. Everything is dual; everything has an opposing point; everything has its pair of opposites; like and unlike are the same; opposites are identical in nature, but different in degree; extremes bond; all truths are but partial truths; all paradoxes may be reconciled.
5. Everything flows out and in; everything has its season; all things rise and fall; the pendulum swing expresses itself in everything; the measure of the swing to the right is the measure of the swing to the left; rhythm compensates.
6. Every cause has its effect; every effect has its cause; everything happens according to Law; Chance is just a name for Law not recognized; there are many fields of causation, but nothing escapes the Law of Destiny.
7. Gender is in everything; everything has its masculine and feminine principles; Gender manifests on all levels.

Here are the Ten Commandments (though they may actually be divided in slightly different ways, depending on denomination):

1. I am the Lord your God
You shall have no other gods before me
You shall not make for yourself an idol
2. You shall not make wrongful use of the name of your God
3. Remember the Sabbath and keep it holy
4. Honor your father and mother
5. You shall not murder
6. You shall not commit adultery
7. You shall not steal
8. You shall not bear false witness against your neighbor
9. You shall not covet your neighbor's wife
10. You shall not covet anything that belongs to your neighbor

The case before the Supreme Court is framed primarily as a speech issue, and not a religious one, but the distinction is pretty thin. In 2005, the Supreme Court upheld the display of the Ten Commandments in Texas.

What I think this case makes transparently clear is that government and religion should stay out of each others' business as much as is practically possible. The catch-22 that the justices are in is that any reasoning they use to rule against the Seven Aphorisms can also be used against the Ten Commandments. If we want to uphold pluralism in this country, and one group can post their religious doctrine on government property, then all religious groups should be able to. To ban the Summums would be discriminatory.

So how about the simplest, fairest option that accords the most with the intent of the First Amendment...the government shouldn't be in the business of promoting religious speech, no matter how well-intentioned or innocuous. If people want to erect monuments with religious teachings on them on their front lawn, or their own private property, or billboards, or businesses, or whatever, bully for them. If they want to put it in a courthouse or a public park or a Federal building, no way. Let the government do its job, which doesn't include preaching or proselytizing. And let Americans pray and worship and observe their religions to the fullest extent they want to.

That sounds fair, doesn't it?

Monday, November 10, 2008

American Attitudes Toward Atheists

One of the most recent surveys about American attitudes toward atheists is one carried out by the University of Minnesota. Some results:

When asked to identify the group that "does not at all agree with my vision of American society":

Atheists (39.6%)
Muslims (26.3%)
Homosexuals (22.6%)
Conservative Christians (13.5%)
Recent Immigrants (12.5%)
Jews (7.6%)

Responses to the statement "I would disapprove if my child wanted to marry a member of this group":

Atheists (47.6%)
Muslims (33.5%)
African Americans (27.2%)
Asian Americans (18.5%)
Hispanics (18.5%)
Jews (11.8%)
Conservative Christians (6.9%)
Whites (2.3%)

Gallup polls also consistently indicate that about half of Americans would not vote for a qualified candidate for President if they knew he/she was an atheist.

Here are the percentages of people from the 1999 Gallup poll saying they would refuse to vote for "a generally well-qualified person for president" on the basis of some characteristic:

Atheist (48%)
Muslim (38%)
Gay (37%)
Mormon (17%)
Woman (8%)
Jewish (6%)
Baptist (6%)
Black (5%)
Catholic (4%)

Would you be upset, angry, and/or worried if you lived in a country where nearly half the people said that a member of a group with your religious views didn't share their vision of the country, didn't want you to marry their children, and wouldn't vote for a member of your group for President?

Why Do You Want to Build a Superintelligent Artifact?

One thing I'd really like to see is a survey of researchers working in artificial intelligence or closely related disciplines asking what their motivations are. Nearly everyone I've talked to who works in AI dreams of ultimately building a human-level or higher intelligence. But why?

Most of the super-intelligent machines from fiction and the movies (The Matrix, 2001: A Space Odyssey, Terminator, and on and on) don't tend to have humans' interest at heart.

Or do they want to create something more like Data?

My guess is that most AI researchers are technological optimists who assume that whatever they happen to engineer will be benevolent. However, history teaches us that how a given technology is used depends on the character of the culture that wields it. I also wonder how honest researchers would be, even in an anonymous survey. Another probable result would be that many researchers simply aren't conscious of their motivations. I tend to wonder to what extent technological innovation advances simply because people want to make cool stuff. Money is of course another motivation, but while the rewards of manufacturing androids would be obvious, such long-term goals are highly speculative, and most bright people could earn a lot more money by focusing on more conservative approaches.

Personally, I'm optimistic that human-level AI will eventually happen, but I'm doubtful that it will happen any time soon. I think the path will involve machines that, like humans, learn most of what they know, rather than having it innately programmed. This will mean a very long training period, and also entails that the character of the machines will be closely related to the type of training they receive. Anyway, I'm much more interested in building a Data than a Skynet. But ultimately our relationship to whatever we create is contingent upon how much we think and plan about the consequences of what we're working on.

Religion, Atheism, and Niceness

Slate has an article about the question of whether or not religion makes you nice and atheism makes you mean. They start out by noting the appallingly negative perception of atheists that most Americans have.

Then they point to research that indicates that when people are primed with "spiritual" words, they tend to be more altruistic in a money-giving task. However, a similar effect is found when people are subjected to posters with big eyes on them. The implication? That words related to god remind people that god is watching.

Then the article talks about a book by Phil Zuckerman:

It's about the Danes and Swedes, two societies that are extremely non-religious. Guess what? They don't murder and rape each other nearly as often as Americans do. They also have fewer suicides, abortions, and teen pregnancies.

What's going on here? Well, we've got a lot of complex social variables swirling around here, so causality is very difficult to pin down. But the situation is certainly not a case of atheism destroying a society and making everyone into amoral psychopaths. The key component here is probably freedom. When people are given the ability to choose their belief system, rather than have it imposed upon them, atheists are not the evil minions of a Stalinist regime, but innocuous, hardworking people like the Danes and Swedes.

Still, I doubt many Americans will read a book like this or be swayed anytime soon by such arguments. Race and gender relations may be taking strides forward in this country, and even the rights of homosexuals, despite recent setbacks. But atheists are still overwhelmingly reviled in this country.

We've got a long way to go.

Sunday, November 9, 2008

Eagle Eye

We were bored and hadn't seen a theater movie in a while, so we went to see Eagle Eye yesterday. I knew it would be bad, but hoped it would be so bad that it was kind of good, and it didn't disappoint in that area.

[spoilers ahead...but honestly, does it matter with this movie?]















I'm going to summarize the plot, which is so ridiculously goofy it's kind of entertaining in and of itself.

An artificial intelligence created by the Federal government named ARIA (Advanced Response something or other) gets miffed when the President doesn't follow its advice to abort a mission to blow up a suspected terrorist. So the computer decides to kill the President, along with everyone in the line of succession above the one person who did agree with her, the Secretary of Defense. She initiates "Project Guillotine," a labyrinthine plot to execute the top 12 people in the line of succession for the Presidency.

This involves rerouting a new experimental type of crystal explosive to a jeweler who can cut it to make it look like a diamond so it can be embedded in a necklace. The explosive is set off with a sonic trigger. ARIA gets the trigger shipped to a music store and manipulates the owner to embed the trigger in a kid's trumpet. The kid is playing in a band at the Capitol for the heads of state, where the President and the others will be present. ARIA sets it up so that the mother of the kid will be wearing the explosive necklace at the event, and when the kid plays F sharp on his trumpet it will trigger the explosive and blow up the entire Capitol. This will presumably leave ARIA and the Secretary of Defense in charge of the country.

Also, one of the members of the ARIA team, a guy named Ethan Shaw, realized what she was doing and put a biometric (face and voice) lock on her to stop her. ARIA kills him after he leaves work by manipulating street lights, and then manipulates his twin brother (Jerry Shaw, played by Shia LaBeouf) into coming to the basement of the Pentagon and releasing the biometric lock so she can kill the President.

Of course the humans thwart the plans of the evil computer, culminating with Rosario Dawson thrusting a metal beam into ARIA's "eye". Billy Bob Thornton also plays an FBI agent who thinks Jerry Shaw is a terrorist (although it's never made clear why ARIA set him up to look like one). There's an awful lot of running around and car chases and ARIA's calm robotic voice ordering people around while she hacks into cell phones, security cameras, and junkyard cranes with god-like ease.

One of the goofiest instances of this is when ARIA somehow causes power lines to snap and swing into an uncooperative human to fry him on the spot. How do you snap power lines remotely?

I thought the movie was delightfully stupid. The omniscient, omnipotent ARIA was the perfectly silly villain for Shia LaBeouf's slacker doofus hero. We even got a scene at the end with the Secretary of Defense talking to a Congressional committee about how the tools we put in place to protect us can actually do more harm than good. Preachy and stupid...awesome!

Anyway, it's at the end of its run in the theater, but it might make for a good rental to heckle from the sofa while you have a few beers with your friends.

Saturday, November 8, 2008

Psychics, the Economy, and Stupidity

Here's a Wired article entitled In Troubling Economic Times, Consumers Flock to Online Psychics:
While it doesn't take a psychic to see that tough times lay ahead for the economy, online practitioners of the divination arts say they're seeing a marked shift in the questions posed by their clientele, with anxious consumers increasingly asking what's in store for them financially in the months ahead. Believers who normally seek psychics for advice on a cheating spouse are now asking whether a pink slip is in their future, and internet psychics across the board saw a spike in traffic in the days following the initial market crash.
Let me get this straight...people want to pay money to people who didn't predict the crash in the first place for advice on what's going to happen next?
Hourly rates for online psychics typically range from $100 to $1,000 per hour, but those steep rates haven't seemed to deter the monetarily anxious from reaching out.
Must...resist...temptation...to...prey...on...stupidity.

Friday, November 7, 2008

Blackness and Whiteness

I'm currently listening to Richard Dawkins' The Ancestor's Tale on audiobook, in which species are treated as pilgrims on a journey back to the origin of life, à la Chaucer's Canterbury Tales. It's pretty interesting, though I'm not finding it as compelling as most of his other books.

Interestingly, the day Barack Obama won the Presidency, I listened to a section about race in the book. Dawkins uses Colin Powell as an example, but Obama would work just as well. According to Wikipedia:

Barack Obama was born at the Kapi'olani Medical Center for Women & Children in Honolulu, Hawaii, to Barack Hussein Obama, Sr., a Luo from Nyang’oma Kogelo, Nyanza Province, Kenya, and Ann Dunham, a white American from Wichita, Kansas of mainly English, Irish and smaller amounts of German descent.

Obama is therefore just as much white as he is black. So why is he, by default, considered black? Dawkins points out that genetic characteristics such as skin color are not the product of dominant and recessive genes, as eye color is. A child's skin color is more often the result of a blending effect of the genes that control it. Blacks more often have brown eyes, which are dominant, so perhaps people perceive certain racial traits as contributing more to appearance than others.

Dawkins points out that there is no objective biological criterion for designating race, and that it really is a subjective category. In reality, it makes as much sense to call Obama "white" as it does "black".

I admire the way Obama has transcended race by mitigating its importance. Hopefully we'll get to a point where we don't make judgments, legal or personal, based on somewhat arbitrary racial categories, and hopefully Obama's Presidency will be a step toward that point.

Methods for Studying Minds

Cognitive Science suffers from the peculiar problem that we are using our cognition to try to understand cognition. Science is about reducing bias and subjectivity through peer review and reproducibility, but the mind is the source of subjectivity, so how in the heck do we study it?

As I see it, there are four broad methods for studying cognition:

1) Introspection

This is probably the least reliable because it is the most subjective, but I don't think it should be discounted out of hand for that reason. A researcher might gain insights into their own cognitive functions by reflecting on how they might work and paying close attention to how they think, the ways in which their memory works, etc. Obviously, the main drawback is that an individual has privileged access to this kind of information, so it is not subject to unbiased reproducibility.

2) Verbal Communication

This method basically entails asking people what their states of mind are. If I ask you whether you dream, and you say "yes," this is increased evidence that humans experience dreams. The more specific the information gets, the less reliable it becomes, because it relies on relaying information derived from introspection to another party. Still, it is a necessary inclusion in the toolbox.

3) Observation of Overt Behavior

This is the most popular method of studying cognition, and for a time it was the only one. It involves measuring aspects of the outward behavior of an individual, such as reaction time or accuracy while performing a verbal or mathematical task. For a non-human subject, measurements might be made regarding successes or failures at solving a particular problem, such as pressing a series of buttons to receive food. The main problem with relying solely on this method is that the cognitive system remains a black box. We only look at the inputs and outputs and try to make inferences about what processing might be going on inside, which is kind of like trying to figure out how a radio works just by listening to it, without ever looking inside. Which brings us to the last type of methodology...

4) Observation of Cognitive Mechanisms

In the case of biological organisms, this primarily involves neuroimaging, such as EEG, PET, fMRI, etc. Rather than measuring aspects of overt behavior, we look at the mechanisms underlying the processing of information, and how they are behaving. If a monkey is given a task to discriminate between a cube and a sphere, we can directly measure the activity of the cells in their nervous system, or blood flow to a particular area of their brain, which might give some insights into how the task is processed. If we're studying how an artificial intelligence solves a particular task, we have direct access to the algorithm that accomplished the task, and should be able to determine how it did what it did.


When it comes to studying humans, we have all four methods at our disposal, to greater or lesser degrees. Ethical considerations place limitations on the type of data we can gather using these methodologies. For example, lesion studies, in which part of the brain is surgically removed, are quite common with monkeys; they simply aren't ethical with humans. But we do have cases where people suffer damage to brain areas through injury or disease, which let us carry out these kinds of studies in an indirect way.

With non-human subjects, such as non-human animals or AIs, we can only use #3 and #4 (at least until those subjects can communicate sufficiently in a natural language). Sign language taught to chimpanzees and gorillas isn't sufficiently rich to communicate meaningful information about their states of mind, and no current AI is close to being able to communicate in a natural language.

Data from all these sources continues to pile up, and neuroimaging especially has advanced as a technique for measuring the behavior of cognitive machinery, but we're still at a loss for strong theories with which to integrate all the information.

Thursday, November 6, 2008

What If...?

Marvel Comics had a series called "What If...?" that explored all sorts of fascinating hypotheticals.

So how about a little "What If...?" with the McCain campaign. What if...Kay Bailey Hutchison had been chosen as McCain's running mate, instead of Sarah Palin?

She's a popular female Senator from a Southern state. She's intelligent, articulate, and unlike Sarah Palin, probably knows that Africa is a continent, not a country. She would have appealed to many more moderates and female voters. But alas, she's pro-choice. Would this have alienated the Republican base, or would they still have come out to vote?

We'll never know. I'm still not sure he would have won, but I think it probably would have made the race much closer.

Obama Nation

I'm obviously happy that the candidate I voted for won the election (even if he lost my state).

However, unlike his hardcore supporters, I don't see an Obama Presidency as a panacea. Our economy is in the crapper. Our education and health systems could be a lot better. A lot of Muslims would be happy to see our nation enveloped in flames, and we still need to capture or kill the leader of al Qaeda. Russia is threatening a new Cold War. China is a burgeoning power that remains a possible threat. North Korea and Iran are still hell-bent on getting nuclear weaponry. And on and on...

I'm interested to see how Obama fares as President. I don't have massively high expectations. He's not in a position domestically to implement a lot of his programs because the money's not there, even if he went back on his promise and raised everybody's taxes. I'd just like him to project calm and resolve, to be Presidential, and not screw things up. That's all I really want out of my new President.

Tuesday, November 4, 2008

Back From Voting

I voted at the Louisiana Technical College. I got in line at 9:45 and stepped into the booth at 10:45. One elderly man came in, checked which line he was supposed to be in, and then left saying the wait was too long. I wonder how many people nationwide leave polling stations because they don't want to wait.

The ballot was electronic. There was a huge board on which you touched the space next to the option you wanted, and it lit up with a green X. It wasn't a computer touchscreen...more like a giant touchpad that recorded your choices.

I really wish there were a party that didn't necessarily bundle social liberalism and a respect for science with big government and Marxist ideology. But oh well. I voted straight Democrat this time. I'll be interested to see how tonight, and the next four years, turn out.

If the Other Party Wins

My sister sent me a link to this video portraying the nightmare vision of each party's members if the opposing party wins.

My favorite bit:

Lesbian Mom #1: "How was your abortion today, honey?"
Little Girl: "Fine. Like all the rest, I guess."

Anyway, I'm off to vote now...

Monday, November 3, 2008

Obama Fails at Both Black and Geek Pop Culture

During two recent speeches, Barack Obama mixed up his pop culture references. In one speech he was talking about the economy, I think, and he said it was like "I'm coming, Weezie!"

Now, Fred Sanford was the one who used to claim he was having a heart attack and cry "I'm coming, Elizabeth!" on the show Sanford and Son.

Weezie, meanwhile, was the wife of George Jefferson on The Jeffersons.

Then just the other day in a separate speech he said that John McCain wasn't a maverick, he was a sidekick (Bush's). This is a pretty good line, but he kind of ruined it by going on to say that McCain was like Kato to Bush's Green Lantern.
Kato was the sidekick of the Green Hornet.

Whereas the Green Lantern was a dude who wore tights and had a magic ring.

So Obama is fluent in neither black nor geek pop culture. Maybe I'm having second thoughts about how I'm going to vote tomorrow...

Nah.

Is Consciousness Special?

I've had discussions with people before who claim that consciousness is somehow going to be forever beyond the realm of science. Here's a Bloggingheads.tv discussion between Eliezer Yudkowsky and Jaron Lanier. About 25 minutes in, Lanier makes exactly this kind of point.



If I'm understanding his point, he's saying that he doesn't understand consciousness, but that an understanding of consciousness will not be possible under the assumption that it is a product of physical things (like neurons) carrying out their function in the context of a physical system (like the human brain).

Yudkowsky calls him on this by asking, basically, how you can make claims about whether something can or can't be understood when you don't yet understand it. It's a great question, but Lanier does what he does throughout the discussion, which is to either laugh and move on to another point or make some kind of ad hominem slur against Yudkowsky by calling his adherence to science a kind of religion.

Lanier ironically states that making such groundless, pessimistic claims somehow makes him a better scientist. Huh?

Look, everything in the world was mysterious before it was explained. Lightning and thunder, for example, were very mysterious. Ancient people came up with initial explanations having to do with the actions of supernatural beings. Those explanations turned out not to be very good. We actually started to get somewhere when someone went, "Hey, wait a minute...maybe there's a reasonable explanation for what's going on here that we can understand."

That assumption, and not a hypothesis or theory or experiment, is the beginning of science and the first step on the road to knowledge. If you don't take that step, and automatically assume that either the explanation is supernatural and/or that you will never be able to understand it, then what you have done is guarantee that you will never understand it because you've given up before you even tried.

The history of science is one in which we have made progress in our understanding of the world by making the default assumption that things are the result of natural processes that work in orderly ways according to principles that we can figure out if we work hard enough.

It may very well be the case that there are hard limits to human understanding, and that there are things we are not going to be able to figure out. The nature of the universe may be one such question: whether there is a single frame of reference or some kind of multiverse, why the universe as we know it is expanding, and whether that expansion is a one-time event or one of many cycles of expansion and collapse. A grand unified theory of physics that reconciles quantum mechanics and relativity may be beyond our understanding. And so might consciousness. But the fact is, we have barely even begun to try to systematically understand these things. A few rare people have pondered such questions for millennia, but organized science as an institution has only really picked up speed in the last 150 years or so, which is a very small amount of time.

So isn't it just a tad early to throw in the towel?

Are Brains Digital or Analog?

Last year Chris Chatham wrote up a great post entitled 10 Important Differences Between Brains and Computers. There's a wealth of topics to discuss in reference to the post, but I want to focus on what he lists as the #1 difference:
Brains are analogue; computers are digital
It's easy to think that neurons are essentially binary, given that they fire an action potential if they reach a certain threshold, and otherwise do not fire. This superficial similarity to digital "1's and 0's" belies a wide variety of continuous and non-linear processes that directly influence neuronal processing.

For example, one of the primary mechanisms of information transmission appears to be the rate at which neurons fire - an essentially continuous variable.
There's more, but this is the essential part. What's interesting is that Chris points out that one way information is conveyed in the brain is by the rate at which neurons fire. But he then ignores the fact that information is also carried by the timing of individual spikes, and he categorically labels the brain "analogue."

A nice metaphor for a neuron is a leaky bucket. When it is receiving incoming activity from other neurons, you can think of that as water trickling into the bucket. This is analogous to a charge building up on the cell membrane. But the membrane has resistance, so it is "leaky," which means, in the bucket analogy, that there is also a tiny hole in the bottom of the bucket. If the water you're pouring in doesn't exceed the leakage, the water level will never rise. Once the water reaches a particular level, a threshold, you can think of the bucket tipping over and sending all its water to the other buckets it connects to. This is analogous to the firing of a neuron. The bucket then resets and fills back up toward its threshold level.

So before it reaches threshold, a neuron functions in an analog fashion. When it reaches threshold, it generates an action potential, or spike, which is a binary signal. But then, as Chris points out, if we count the number of spikes within a given time frame, that rate of fire can be measured in an analog fashion.
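
To make the bucket metaphor concrete, here's a minimal sketch of a leaky integrate-and-fire neuron in Python. The parameter values and names are illustrative choices of mine, not taken from any particular published model:

# A minimal leaky integrate-and-fire neuron (the "leaky bucket").
# All parameter values are illustrative, not fit to biological data.
def simulate_lif(inputs, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return a list of 0s and 1s: 1 wherever the neuron spiked."""
    v = v_reset
    spikes = []
    for i_t in inputs:
        # The leak pulls v back toward rest (the hole in the bucket);
        # the input current trickles in (water pouring into the bucket).
        v += dt * (i_t - v) / tau
        if v >= v_thresh:        # the bucket tips over...
            spikes.append(1)     # ...producing an all-or-nothing spike,
            v = v_reset          # and empties to start refilling.
        else:
            spikes.append(0)
    return spikes

print(sum(simulate_lif([1.5] * 200)))  # suprathreshold drive: regular spiking
print(sum(simulate_lif([0.5] * 200)))  # subthreshold drive: level plateaus, no spikes

With a steady drive that settles above threshold, the model spikes at a regular rate; with a weaker drive, the "water level" plateaus below threshold and nothing happens, just as the metaphor predicts.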

In my own work, I used to use artificial neurons that modeled only the average rate of fire of neurons. These are known as rate-coding neuron models, and a very common function that approximates the firing rate is the sigmoid:
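
f(x) = 1 / (1 + e^(-x))

where x is the unit's summed, weighted input and f(x) is its average firing rate, squashed smoothly between 0 and 1.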

But if you use such a model, you're assuming that no information is being carried by the timing of individual spikes, because you're averaging that information away.

But we know of particular examples in which information is conveyed by individual spikes. A famous and interesting case is the auditory system of the barn owl. See "Hebbian learning of pulse timing in the barn owl auditory system" by Wulfram Gerstner, Richard Kempter, J. Leo van Hemmen, and Hermann Wagner for a great overview.


Basically, when a mouse makes a sound, the sound waves reach each ear of the barn owl at different times, because the ears are spaced apart. The owl's auditory system is able to determine where the sound came from by comparing the relative timing of the sound reaching each ear, and this information is learned and conveyed via the timing of individual spikes. By the way, the image isn't of a barn owl shooting a mouse with laser vision (though that would be cool). It's meant to show how the sound of the mouse squeak reaches each of the barn owl's ears at slightly different times.
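
To get a feel for the timescales involved, here's a rough back-of-the-envelope calculation in Python (the ear separation and geometry are simplified assumptions of mine, not numbers from the paper):

import math

# Interaural time difference (ITD): the extra path length to the far ear,
# divided by the speed of sound. All numbers are illustrative assumptions.
def itd_seconds(azimuth_deg, ear_separation_m=0.05, speed_of_sound_mps=343.0):
    extra_path = ear_separation_m * math.sin(math.radians(azimuth_deg))
    return extra_path / speed_of_sound_mps

# A squeak 30 degrees off-center, with ears about 5 cm apart:
print(itd_seconds(30.0) * 1e6, "microseconds")  # roughly 73 microseconds

A few tens of microseconds is far shorter than an action potential, which lasts on the order of a millisecond, so an average firing rate couldn't carry this information; the precise timing of individual spikes has to.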

We also know that an important aspect of learning throughout the brain involves the relative timing of individual spikes. If a neuron (A) fires just before a downstream neuron (B) it is connected to, then the synapse between them will be "strengthened", or modified in such a way that the next time neuron A fires, it will be more likely to cause neuron B to fire.

However, if the order of firing is reversed, then the synapse is "weakened".

The strengthening and weakening of synapses in this way is known as spike-timing dependent plasticity, or STDP. While there are a number of other ways in which synapses are modified in the brain, these particular mechanisms are thought to underlie many important aspects of learning, and they should not be ignored.
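
Here's a minimal sketch of the exponential STDP window commonly used in models of this process (the amplitudes and time constants below are illustrative; measured values vary across synapse types):

import math

# Spike-timing dependent plasticity (STDP), exponential-window form.
# Constants are illustrative, not measurements from any particular synapse.
def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of dt = t_post - t_pre, in milliseconds."""
    if dt_ms > 0:
        # Pre fired before post (A before B above): strengthen the synapse.
        return a_plus * math.exp(-dt_ms / tau_plus)
    else:
        # Post fired before pre: weaken the synapse.
        return -a_minus * math.exp(dt_ms / tau_minus)

print(stdp_delta_w(+5.0))   # A leads B by 5 ms: positive weight change
print(stdp_delta_w(-5.0))   # B leads A by 5 ms: negative weight change

The closer together the two spikes are, the larger the change, which is what ties this form of learning so directly to spike timing.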

So, the answer to the question "Are Brains Digital or Analog?" is a perhaps unsatisfying "both". Some of the ways in which neurons communicate and undergo modification via learning are based purely on all-or-nothing signals in a digital way. In other cases, information is conveyed by the rate of fire of neurons in an analog manner.

But then, computers emulate analog functions as well, so they are neither distinctly digital nor analog. In fact, the dichotomy turns out not to be all that sharp in many domains. What's important is to know in what ways the brain is analog and in what ways it is digital. Both will likely figure into any coherent explanation of how the brain works.

For my own part, I've begun working with spiking neuron models, specifically what are known as leaky integrate-and-fire models. I've become increasingly convinced of the importance of time in understanding cognitive processes, and spiking models allow information to be communicated both by the timing of individual spikes and by their rate of fire, while rate-coding models only allow for communication via average firing rates. That's not to say that rate-coding models don't have a lot to teach us about certain aspects of cognition, just that they are limited in their ability to do so.