A survey of leading AI scientists says there's a 5% chance that AI will become uncontrollable and wipe out humanity.
So at least our AI overlords will entertain us before they kill us.
The survey asked 2,778 scientists who had published peer-reviewed AI studies about their predictions for the future.
The study highlights how the world's leading researchers perceive the dangers of creating a powerful artificial intelligence.
More than 80% of AI researchers expressed extreme or substantial concern about AI enabling the spread of misinformation.
The survey appears to be the largest of its kind to date.
Some researchers gave at least a 10% chance that humans will be unable to control AI systems, causing human extinction.
Image: Ben Weinstein-Raun, Jan Brauner
America's leading AI builders reference safety in their mission statements.
In December, OpenAI released its first paper on aligning superhuman AI with human values.
Anthropic has a constitution laid over its AI systems to ensure they act in alignment with our society's rules.
A majority of researchers found seven AI scenarios to be "substantially" or "extremely" concerning.
Image: Ben Weinstein-Raun, Jan Brauner
But will these methods truly work with AI models smarter than humans?
That remains to be seen, as current AI is nowhere near that smart.
Researchers gave their best estimates of when society will achieve 39 major AI advancements.
By 2063, researchers estimate AI could do the job of a surgeon or even an AI researcher.