I’ve been receiving a lot of messages about the Turing test having been passed, so I thought I’d better write a quick response to address the “Eugene Goostman” situation. I won’t go into as much detail as Ben Goertzel or Ray Kurzweil, two of the best critical thinkers alive on the planet, but I agree with them wholeheartedly and think I can sum things up for people relatively simply.
Though the researchers at Reading are technically right that the bot passed the Turing test in the strictest sense, the Turing test is deliberately vague, and “Eugene Goostman” really didn’t pass it in a spirit that means anything significant. Readers of my novels know that when an A.I. can fool judges far more consistently than 30% of the time, in conversations much longer than five minutes, we’ll be living in an extraordinary time in which we have to treat artificial entities as conscious. I think we’re no more than fifteen years away from that threshold, but I don’t think we just passed it. Eugene’s achievement was a pretty neat trick, but I’ve read some of Eugene’s conversations, and I’m actually amazed anyone judged him as human. I’m much more impressed with IBM’s Watson, but even Watson couldn’t fool a human observer over, say, an hour-long conversation.

While I disagree with people who claim that a computer can’t “think” or inherently understand things, I do think a computer would have to converse at a level where we’re left wondering whether it inherently understands. That very moment of uncertainty will be special, because, when you really think about it, we can’t even prove our own consciousness. When Descartes wrote, “Cogito ergo sum” (“I think, therefore I am”), he went as far as he could with what we can “know.” We know we exist, but beyond that we can’t prove anything else for sure. We treat our family and loved ones as though they’re conscious because they behave that way, but we can’t prove it. If a robot were to behave exactly the same way, we’d be in exactly the same position. There would still be people who doubt that it is conscious or that it really “understands,” but we’d have to treat it as though it’s conscious because, just like with our friends and family, we don’t really know; we assume.
Here's a link to Ray Kurzweil's response: http://www.kurzweilai.net/response-by-ray-kurzweil-to-the-announcement-of-chatbot-eugene-goostman-passing-the-turing-test
Ray is still the greatest critical and epistemological thinker I’ve ever seen, and I don’t think I’ve ever disagreed with any of his views in a very serious way. (I do think money will disappear sometime before 2050 and he doesn’t seem to think so, but that’s the only disagreement I can think of. He’s... ahem, usually right on the money. Pun intended.)
I hope this was helpful and that people don’t think I’m becoming one of those Internet trolls who throws cold water on anything cool just for the sake of being a contrarian! Haha! When an A.I. passes a Turing test so strict that we’re left unable to differentiate between human and machine intelligence, I’ll be really, really stoked. There will be a few more false pretenders in the coming decades; people will start to say it’ll never be achieved and that machines are inherently incapable of passing the Turing test, and then, very soon afterward, it’ll happen. And it likely won’t be a particular day or a particular test, as this one was reported to be by the media, but more likely an era in which, over a period of several months or even a couple of years, several A.I.s pass strict Turing tests and a consensus is reached that machine intelligence has matched that of non-enhanced human intelligence.
And it’ll likely be in less than fifteen years…
How freaky is that?