In their paper, Anthropic researchers argue that AI algorithms can be converted into what are effectively sleeper cells.
In short: Much like a normal software program, AI models can be backdoored to behave maliciously.
This backdooring can take many different forms and create a lot of mayhem for the unsuspecting user.
Photo: Maurice NORBERT (Shutterstock)
Notably,Anthropic is closed-source.
News from the future, delivered to your present.
You May Also Like
Two banks say Amazon has paused negotiations on some international data centers.
Pacific Rims TV Show Drifts to Prime Video
Good news for giant robot fans.