Having just watched the sometimes excellent movie Ex Machina, I am tempted to wonder if a lot of writers who use artificial intelligence (AI) themes are unconsciously playing out Cartesian dualism, assuming that we have souls and that, lacking a soul, an AI would act purely selfishly.
Of course, this assumption is not true of all such fiction. In the curious Kubrick/Spielberg handover film AI, the Pinocchio-like AI's search for the chance to be a real boy has as much soul as it does schmaltz. But in Ex Machina, the Ava character (I'll try to avoid too many spoilers) demonstrates an 'inhuman' lack of concern when there is an opportunity to save someone who has cared for and helped her, at no cost to herself.
Would a true AI with human-like intelligence really do this? There are good evolutionary reasons for the existence of altruism and mutualism, and to totally ignore them would not really be logical. Of course it could be that Ava was not an AI at all - one of the big themes of the movie - but merely a machine that was very good at simulating being conscious, in which case such an outcome was perhaps more likely.
I'm certainly not knocking the film (though the ending seemed to forget the tiny problem of the need for power sources). It was great at portraying the kind of culture that pervades organizations like Google, Microsoft and Apple - including the degree to which a search engine company in particular can abuse its position - and there was some genuinely thoughtful conversation about the nature of artificial intelligence, plus good performances from all the leads. But my suspicion is that Ava was intended to be a genuine AI, one which, despite all the superb CGI, the writer still saw as a tin woman without a soul.