It’s likely that most people locked in our jails believe that with a better lawyer, a more lenient judge or a more understanding jury things might have been very different for them.
Human error, they will say, is to blame for them being banged up.
As computer researchers edge closer to creating true Artificial Intelligence, it is predicted that AI will eliminate most paralegal and legal research positions within the next decade.
The next step inevitably involves artificial intelligences aiding, or even completely replacing lawyers. And if we have robot lawyers, why not automated judges and juries too? Why not a fully solid-state legal system?
Lawyer Tom Girardi, who was one of the real-life inspirations for the movie Erin Brockovich, told Forbes: “It may even be considered legal malpractice not to use AI one day.
“It would be analogous to a lawyer in the late twentieth century still doing everything by hand when this person could use a computer.”
Writer Rossalyn Warren points out that people are, by their nature, flawed. Flesh-and-blood jurors and judges will always bring their own prejudices into the courtroom.
But a robot juror, she says, “could be crammed with a far broader range of facts and figures about the nature of crime, cases on record and the law, making it much more worthwhile than a juror who has little awareness on such matters.”
She goes on: “Expecting randomly selected members of the public to decide the fate of a person in a jury system is outdated because the notion of a fair and impartial jury doesn’t exist.”
Even cross-examinations could be outsourced to an automated system. A thought-provoking experiment shows that people are more likely to be completely honest with an unemotional machine than with a potentially judgemental human.
When researchers led by Jonathan Gratch at the Institute for Creative Technologies created an artificially intelligent robot psychologist named Ellie, they tested it on two groups of people.
Half were told Ellie was just a machine, able to ask probing questions and read respondents’ emotions with 3D cameras. Those people gave more honest responses to “her”, while the experimental subjects who were told that Ellie was being operated by a human “puppeteer” gave less direct answers.
Apart from the possibility of getting a fairer result, raw economics come into play too.
“If a lawyer can use AI to win a case and do it for less than someone without AI,” says Tom Girardi, “who do you think the client will choose to work with next time?”
But a solid-state legal system with no humans involved isn’t necessarily less error-prone than our existing system. Former Prime Minister Theresa May, in an address last year to the Davos Forum, pointed out that we need to develop a set of laws governing Artificial Intelligence so we can “make the most of AI in a responsible way, such as by ensuring that algorithms don’t perpetuate the human biases of their developers.”
After all, as Ms Warren points out: “AI, computers and legal robots are made by humans. Technology, like humans, can make mistakes and hold the same discriminatory factors.”
“For example,” she goes on, “people of colour are more likely to trigger a ‘false positive’ match than white people on facial recognition software, which means they are more likely to be subjected to a wrongful police stop and search.”
And the real danger there is that a machine’s bias is less likely to be questioned than a human’s.
If you end up jailed on a robot judge’s decision, there’s even less chance that your pleas of innocence will be believed.