One thing I'd really like to see is a survey of researchers working in artificial intelligence or closely related disciplines asking what their motivations are. Nearly everyone I've talked to who works in AI dreams of ultimately building a human-level or higher intelligence. But why?
Most of the super-intelligent machines from fiction and the movies (The Matrix, 2001: A Space Odyssey, Terminator, and on and on) don't tend to have humans' interests at heart. Is that what researchers picture themselves building? Or do they want to create something more like Data, the benevolent android from Star Trek?
My guess is that most AI researchers are technological optimists who assume that whatever they happen to engineer will be benevolent. History, however, teaches us that how a given technology is used depends on the character of the culture wielding it. I also wonder how honest researchers would be, even in an anonymous survey; another likely result is that many researchers simply aren't conscious of their motivations. I tend to wonder to what extent technological innovation advances simply because people want to make cool stuff. Money is of course another motivation, but while the rewards of manufacturing androids would be obvious, such long-term payoffs are highly speculative, and most bright people could earn a lot more money by focusing on more conservative work.
Personally, I'm optimistic that human-level AI will eventually happen, but I'm doubtful that it will happen any time soon. I think the path will involve machines that, like humans, learn most of what they know rather than having it innately programmed. That means a very long training period, and it also means that the character of the machines will be closely tied to the kind of training they receive. Anyway, I'm much more interested in building a Data than a Skynet. But ultimately our relationship to whatever we create depends on how much we think and plan about the consequences of what we're working on.
Monday, November 10, 2008