The report states that the group's work will be absorbed into OpenAI's other research efforts.
Sutskever and Leike were two of OpenAI's top scientists focused on AI risks.
Leike posted a long thread on X Friday explaining, in vague terms, why he left OpenAI.
He argues that OpenAI needs to focus more on security, safety, and alignment. "I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote.
In response, OpenAI CEO Sam Altman acknowledged the criticism on X: "he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days."
The Superalignment team's premise was that new scientific and technical breakthroughs are needed to steer and control AI systems smarter than humans.
It's now unclear whether the same attention will be paid to those technical breakthroughs.
Other teams at OpenAI are undoubtedly still focused on safety.
Schulman's team, which is reportedly absorbing Superalignment's responsibilities, currently handles fine-tuning AI models after training.
However, Superalignment focused specifically on the most severe outcomes of a rogue AI.
It's unclear who will take the next steps on these projects at OpenAI.