But what about all the actual knowledge that we as humans have accumulated?
A lot of it is now on the web—in billions of pages of text. And with search engines, we can very efficiently search for specific terms and phrases in that text.
But we can’t compute from that. And in effect, we can only answer questions that have been literally asked before. We can look things up, but we can’t figure anything new out.
So how can we deal with that? Well, some people have thought the way forward must be to somehow automatically understand the natural language that exists on the web—perhaps by getting the web semantically tagged to make that easier.
But armed with Mathematica and NKS I realized there’s another way: explicitly implement methods and models, as algorithms, and explicitly curate all data so that it is immediately computable.
Got that? Neither do I.
I am quite interested to see what sorts of answers this thing spits out. Like playing with AI chatbots, I'm sure it will be entertaining as a novelty, but I'm not holding my breath that it will figure out anything new and nontrivial. If I want to know how to make a risotto, I Google "risotto" and find plenty of recipes.

For this thing to come up with a new recipe, or a new method for doing something (building a better mousetrap, say, or folding proteins), it's going to need real-world context. This sounds like the old, old claims of AI dressed up in new clothing. I stand by the claim that a disembodied computer can't meaningfully grasp what a dog is purely by analyzing words and their relations to one another. To handle semantics, a system can't deal only in symbols; it has to have access to representations of the referents of those symbols: the sights, sounds, and other sensory input that make up what a word actually refers to.
Still, it will be fun to ask this thing how to cure cancer and see what it comes up with.