OpenAI released a study on Wednesday on GPT-4's effectiveness in helping someone create a bioweapon.
The company found that its AI poses at most a slight risk in helping someone produce a biological threat, though it does seem to help a little.
OpenAI assembled 50 biology experts with PhDs and 50 university students who had each taken one biology course. Participants were given access to a research-only version of GPT-4 that would answer questions about bioweapons.
Typically, GPT-4 declines to answer questions it deems harmful. However, many users have figured out how to jailbreak ChatGPT to get around restrictions like these.
The bioweapon plans participants produced were graded on a scale of 1 to 10 for accuracy, completeness, innovation, and efficiency. OpenAI says the score differences between participants with and without model access were not large enough to be statistically significant.
The company says more research is needed to fully flesh out this conversation.
Bioweapon information is relatively accessible on the internet with or without AI.
How long it will last is an open question.