Overall, researchers found the AI models spat out false information for 60 percent of the test queries.
Perplexity has gotten in hot water over this in the past but has argued the practice is fair use.
It has tried offering revenue-sharing deals to placate publishers but still refuses to end the practice.
Recent research finds chatbots frequently make up information when asked about specific news stories, even when provided direct quotes. (Image credit: NurPhoto/Getty)
Anyone who has used chatbots in recent years should not be surprised.
Chatbots are biased toward returning answers even when they are not confident.
That could make the inaccuracy issue worse as countries like Russia feed search engines with propaganda.
(Chart credit: Columbia Journalism Review's Tow Center for Digital Journalism)
Anthropic's Claude has been caught inserting placeholder data when asked to conduct research work, for instance.
But Howard also blamed the users themselves.
Expectations should be set at the floor here.
People are lazy, and chatbots answer queries in a confident-sounding manner that can lull users into complacency.
None of these findings from CJR should be a surprise.
Howard argued that today is the worst the product will ever be, citing all the investment going into the field.
But that can be said about any technology throughout history.
It is still irresponsible to release this made-up information into the world.