A piece in Technology Review suggests that humans are easily influenced by artificial intelligence 'agents' - the likes of Siri and Alexa, and in this case agents dedicated to specific tasks (HT to Andrew Rosenbloom for drawing it to my attention).
The article's author suggests that many of our connections with other human beings are shallow and unreliable, so we appreciate the fact that a suitably programmed agent will always be there for us - sympathetic and responsive. However, I do wonder if too much is being read into the clients' responses, in part because they were probably predominantly American, and it's risky to generalise from a single society, especially one that is atypical in many ways. The other thing that struck me was whether any allowance was being made for fun and system probing. If I say anything non-practical to Siri or Alexa, it is almost always either because I'm being ironic or because I'm curious to see how the system will respond. (A typical example would be the number of times I've asked Siri to open the pod bay doors.)
More worryingly, perhaps, we read: 'Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.' This is something else I have to be sceptical about. How was this tested? I can't imagine any circumstance in which I would want to talk to an AI agent for 'an hour a day'. I might do it once, to test out its abilities - but I would no more talk to an AI agent for an hour a day than I would to a telephone salesman. Equally, I don't doubt agents can increase sales by the way they interact - so can a well-designed web page. And human psychology being what it is, I'd expect a small percentage of this improvement to be down to being flattered by the attention of the agent, even when the client is aware that it is a program speaking - because we are hard-wired for a lot of these things.
Overall, there's no doubt that as AI becomes more sophisticated we will be increasingly at risk of being manipulated by it - particularly if 'we' are the kind of people who fall for scam emails. (Again, who were the people in these tests? How big were the sample sizes? How were they selected? Where are the statistics? It's all a bit vague.) We need to make sure that this manipulation does not go too far, just as we do with strict controls on advertising (again, this may be a US difference - advertising there seems to have far fewer restrictions on what it can say than is the case in Europe). But I don't think it's time to panic yet.