Ex-cybernetics engineer Dr Ian Pearson predicts we are veering towards a future of conscious machines.
But if robots are thrust into action by military powers, the futurologist warns they will be capable of conjuring up their own “moral viewpoint”.
And if they do, the ex-rocket scientist claims they may turn against the very people sending them out to battle.
Dr Pearson, who blogs for Futurizon, told Daily Star Online: “As AI continues to develop and as we head down the road towards consciousness – and it isn’t going to be an overnight thing, but we’re gradually making computers more and more sophisticated – at some point you’re giving them access to moral education so they can learn morals themselves.
“You can give them reasoning capabilities and they might come up with a different moral code, which puts them on a higher pedestal than the humans they are supposed to be serving.
Asked if this could prove fatal, he responded: “Yes, of course.
“If they are in control of weapons and they decide that they are morally superior to the humans they are supposed to be guarding, they might make decisions that certain people ought to be killed in order to protect the larger population.
“Who knows what decisions they might take?
“If you have a guy on a battlefield, telling soldiers to shoot this bunch of people, for whatever reason, but the computer thinks otherwise, the computer is not convinced by it, it might conclude that the soldier giving the orders is the worst offender rather than the people he’s trying to kill, so it might turn around and kill him instead.
“It’s entirely possible, it depends on how the systems are written.”
Dr Pearson’s warning comes amid growing concerns of fully autonomous robots being used in war.
The Campaign to Stop Killer Robots (CSKR) is desperately bidding for laws to ban their use.
But members claim their appeals have not been taken seriously enough by leading nations.
In September, a proposed ban on killer robots that select targets without human orders was blocked by nations including Russia, Israel, Australia, South Korea and the US.
CSKR slammed opposition to the treaty as “shameful”.
Last week, founding member Richard Moyes upped calls for new laws after telling Daily Star Online killer robots could go “off the rails” through deadly “malfunction errors”.
He told us: “It is unlikely militaries would want to use systems that present these kinds of vulnerabilities.
“But as systems get more complex there is always a risk that they will malfunction in unpredictable ways, possibly putting your own forces at risk.”
“Our main concern is that autonomous weapons, being allowed to operate independently over wider areas, and longer periods of time, cause death or destruction that a human commander is not able to foresee or predict.
“If we don’t know where weapons will be fired, or exactly what they will be fired against, a human can’t really make legal or moral judgements about the effects that they are creating through the use of such systems.”
Moyes added: “We believe there needs to be new international law to ensure humans remain in control of weapons systems.
“This is about protecting civilians and human dignity – but it is also a practical issue for militaries.
“Soldiers don’t want to be sent into battle alongside systems that are unpredictable and might go off the rails.”