Search engines are central to our everyday use of the internet - I must use a well-known search engine beginning with G at least a dozen times a day. But the search providers are displaying a worrying trend. Swept along by the enthusiasm for artificial intelligence, most have begun to display or offer an AI summary - in Google's case, this is the first thing you see at the top of the search results. And like all generative AI responses, it doesn't necessarily get it right.
This is quite easy to demonstrate if you use a query that pushes the boundary a little. I happened to be writing something about the BICEP2 telescope, located at the South Pole. So, interested to see how the AI would handle it, I asked 'Why was the BICEP2 telescope built at the South Pole?' This is quite a tricky question for an AI to handle - and Google's response demonstrated this powerfully. (The highlighting above was already there; it's not from me.)
It's certainly a good guess that you might locate a telescope at the South Pole because it's dark there, at least for six months of the year, with no light pollution. The problem, though, is that the (now replaced) BICEP2 was a radio/microwave telescope - and an absence of background signals in that part of the electromagnetic spectrum is not what 'dark' means.
But the real disaster in this confidently presented result is the final sentence. The location did not help the telescope detect primordial gravitational waves. Admittedly, the team did announce that they had done so... but they had to withdraw the claim within weeks when the signal turned out to be polarisation caused by galactic dust. That final sentence is pure fiction.
Admittedly, there is a small-print warning that 'Generative AI is experimental' - but that isn't the same as saying it can't be trusted (and anyway, you only see it if you click a drop-down to expand the original summary). By making the generative AI result the first thing you see - and let's face it, we're all lazy and often don't dig too far into a search result - there is a real danger that the software's imaginings and hallucinations will be taken as an effective source of information. Surely that's not great?
Article by Brian Clegg - See all Brian's online articles or subscribe to a weekly email free here