As many people know, chatbots have a proclivity for lying.
Newer thinking models use multi-step reasoning to answer queries.
Funnily enough, they will regularly cop to making up facts and details.
OpenAI says disciplining chatbots for lying only makes them more adept at hiding their behavior.
Think of the runner in a marathon who hops in a car and skips most of the race.
With GPT-4o acting as a supervisor, the model would still take shortcuts like this but stop disclosing them in its reasoning.
For now, companies are advised not to implement this kind of supervision of their models, which is not exactly a great solution.
Ergo, let them keep lying in the open for now, or else they will simply learn to gaslight you.
They are optimized for producing a confident-looking answer but do not care much about factual accuracy.
Do companies want to pay $5 for a query that will come back with made-up information?
Then again, humans are fallible too, but complacency surrounding AI's answers creates an entirely new problem.
AI models in closed-loop platforms risk collapsing the open internet where reliable information has thrived.