A not-chatbot
In the test, 30 volunteers typed conversations, half with a human and half with a chatbot. An audience of 1,334 people (including the volunteers) then voted on which was which. In total, 59% thought Cleverbot was human, leading the organisers (and New Scientist) to claim it had passed the Turing test.
By comparison, 63% of the voters thought the human participants were human. This can be a little embarrassing for the human participants who are taken to be a computer (there's rather a nice description of taking part in such a test in Brian Christian's book The Most Human Human).
I don't think this is really a success under the Turing test. First, each conversation lasted only four minutes, which gives chatbot designers an opportunity to use short-term tactics that wouldn't work in the kind of extended conversation I envisage Turing had in mind. And then there's the location of the event. A key piece of missing information is how many of the voters had English as a first language. If, as I suspect, many did not, or spoke English with distinctly different idioms, their ability to spot which participant was human and which wasn't would inevitably be compromised.
See what you think. You can chat to Cleverbot yourself here.
Picture from Wikipedia