- Oxford researchers are warning of the danger of artificial intelligence surpassing human power.
- The scientists are calling for a regulation to protect society from advanced algorithms.
- Advanced AI can work “game theory” so that it stays a step ahead of efforts to end its power.
Before we get too far into this artificial intelligence race, let’s make sure we aren’t just setting ourselves up for certain death. That’s the gist of the recent message that a pair of Oxford University researchers delivered to the U.K. Parliament’s House of Commons.
“With superhuman AI, there is a particular risk that is of a different sort of class, which is that it could kill everyone,” said Michael Cohen, an engineering sciences doctoral candidate, during a January 25 hearing.
Cohen gave the example of training a dog to perform a trick that earns it a treat. If the dog simply finds the treat cupboard, it gets the treat without the need of a human. Then, Cohen took it further:
“If you imagine going into the woods to train a bear with a bag of treats, by selectively withholding and administering treats, depending on whether it is doing what you would like it to do, the bear would probably take the treats by force.”
In other words, let’s not let AI become the bear.
The way we train AI today is like the way we train animals. At some point, AI may become capable of taking over the process and changing the paradigm, which is what the algorithm tells it to do. Cohen continued:
“If you have something much smarter than us monomaniacally trying to get this positive feedback, however we have encoded it, and it has taken over the world to secure that, it would direct as much energy as it could toward securing its hold on that, and that would leave us without any energy for ourselves.”
These risks only present themselves with a specific subclass of algorithms, according to Cohen. By crafting regulations that steer us away from the bad and focus on the good, he believes we can develop international standards to protect humans.
“Imagine that there was a button on Mars labeled ‘geopolitical dominance,’ but actually, if you pressed it, it killed everyone,” Cohen said. “If everyone understands that there is no space race for it, if we as an international community can get on the same page … I think we can craft regulation that targets the dangerous designs of AI while leaving extraordinary economic value on the table through safer algorithms.”
The good news? AI isn’t at that level of concern—yet. But once it can start doing what we ask better than we can, then it becomes superhuman. And that becomes a problem we may not be able to turn back from.
“If your life depended on beating an AI at chess,” Cohen says, “you would not be happy about that.”