For chatbots, math is the final frontier.
AI language models generate responses using statistics, spitting out an answer that's most likely to be satisfying.
The study, published on arXiv, didn't set out with Star Trek as its prime directive.
Instead, the researchers tested whether "positive thinking" prompts, such as "Take a deep breath and think carefully," would improve the AI's answers.
The researchers then used GSM8K, a standard set of grade-school math problems, and tested the results.
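That kind of evaluation is simple to sketch: prepend a phrase to each question, ask the model, and score the answers. Below is a minimal, self-contained illustration; the `ask_model` function is a purely hypothetical toy stand-in (the real study queried live language models), and the one-question "dataset" is an invented example, not an actual GSM8K item.

```python
import re

def ask_model(prompt: str) -> str:
    # Toy stand-in for a chatbot API call -- NOT a real model.
    # For illustration, it only answers correctly when "encouraged."
    answer = "8" if "deep breath" in prompt else "7"
    return f"The answer is {answer}."

def extract_number(text: str) -> str:
    # GSM8K-style scoring typically compares the final number in the response.
    matches = re.findall(r"-?\d+(?:\.\d+)?", text)
    return matches[-1] if matches else ""

def accuracy(prefix: str, problems: list) -> float:
    # Score a prompt prefix over (question, gold_answer) pairs.
    correct = sum(
        extract_number(ask_model(f"{prefix}\n{q}")) == gold
        for q, gold in problems
    )
    return correct / len(problems)

problems = [
    ("Tom has 3 apples and buys 5 more. How many apples does he have?", "8"),
]

print(accuracy("Take a deep breath and think carefully.", problems))  # 1.0
print(accuracy("", problems))  # 0.0
```

The interesting part of the study is the loop around this scoring function: try many prefixes, keep the ones that raise the accuracy number.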
In the first phase, the results were mixed.
Some prompts improved answers, others had insignificant effects, and there was no consistent pattern across the board.
However, the researchers then asked the AI to help them help the AI, having the model generate and refine prompts automatically.
There, the results got more interesting.
Unsurprisingly, this automated process was more effective than the researchers' hand-written attempts to frame questions with positive thinking.
But the most effective prompts exhibited a degree of peculiarity far beyond expectations. For one model, a Star Trek-themed prompt yielded the most accurate answers.
The authors wrote that they have no idea why Star Trek references improved the AI's performance.
There's some logic to the fact that positive thinking or a threat leads to better answers.
These chatbots are trained on billions of lines of text gathered from the real world, and in that text, people tend to respond better to encouragement. The same goes for bribes: people are more likely to follow instructions when there's money on the line.
But the researchers didn't even have a theory about why Star Trek references got better results.