Why the next ‘human rights’ issue will be inhuman… Oxford mathematician believes Artificial Intelligence should have rights

Tuesday, July 26, 2016

Advances in artificial intelligence (AI) will produce sentient robots that ought to be granted human rights, argues Marcus du Sautoy, Oxford University mathematician and Simonyi Professor for the Public Understanding of Science.

“It’s getting to a point where we might be able to say this thing has a sense of itself and maybe there is a threshold moment where suddenly this consciousness emerges. One of the things I address in my new book is how can you tell whether my smartphone will ever be conscious,” du Sautoy said at the Hay Literary Festival.

Until recently, the most common method scientists used to gauge the intelligence of a computer system was the Turing test. The late mathematician Alan Turing devised a thought experiment that could, in principle, determine whether a machine could think. Specifically, Turing argued that any machine capable of convincing a human interrogator that it was human, by responding to a series of questions via teletype, should for all practical purposes be regarded as thinking.

The nature of thinking

It should be emphasized that Turing was not arguing that the nature of thinking is universal. The way a bird “flies” is different from the way an airplane “flies,” just as a human’s thinking may differ from the way a robot thinks. Instead, Turing’s general point was that any entity capable of passing a Turing test would be capable of thinking in one form or another.
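The imitation-game setup described above can be sketched as a simple question-and-answer loop. The sketch below is a deliberately toy illustration (the canned replies, the `machine_reply` and `judge` functions, and the pass criterion are all invented here): a real Turing test would involve a far more capable conversational system and an adaptive human interrogator.

```python
def machine_reply(question: str) -> str:
    """A toy 'machine' contestant with a few canned, human-sounding
    replies (invented for illustration only)."""
    canned = {
        "Are you conscious?": "I like to think so, but who can really say?",
        "What is 2 + 2?": "Four, unless you're trying to trick me.",
    }
    return canned.get(question, "That's an interesting question.")

def judge(transcript: list[tuple[str, str]]) -> bool:
    """A toy judge: flags the contestant as a machine only when it
    falls back on a generic stock answer. A real judge would probe
    adaptively over an open-ended conversation."""
    return all(reply != "That's an interesting question."
               for _, reply in transcript)

# Run a two-question session and judge the transcript.
questions = ["Are you conscious?", "What is 2 + 2?"]
transcript = [(q, machine_reply(q)) for q in questions]
print("Judged human-like:", judge(transcript))
```

The point of the sketch is the structure, not the outcome: the judge never inspects the machine’s internals, only its answers, which is exactly the behavioral stance Turing proposed.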

Moving beyond the Turing test, scientists now gauge consciousness by measuring neural activity in the brain during sleep. In addition, du Sautoy noted, children are regarded as having a sense of self once they can recognize themselves in a mirror.

Du Sautoy thinks that advances in AI and neuroscience have expanded the purview of moral consideration. During the festival, he said these new techniques have provided a better understanding of the physical machinery that underpins consciousness.

“The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn’t know how to measure it. But we’re in a golden age. It’s a bit like Galileo with a telescope. We now have a telescope into the brain and it’s given us an opportunity to see things that we’ve never been able to see before. And if we understand these things are having a level of consciousness we might well have to introduce rights. It’s an exciting time,” he said.

These advances mean we should respect consciousness in all its forms, according to du Sautoy, whether it is grounded in the living tissue of a brain or in an artificial construct embedded in cold silicon.

The tipping point

Although the prospect of granting machines human rights sounds far-fetched, it is a possibility that experts must take seriously as advances in AI march forward. The million-dollar question is: at what point will computer systems become so advanced that their consciousness ought to be respected?

Some experts suggest computers could embody conscious experience within the next 50 years. However, these time frames are wildly speculative; no one knows for certain when machines will be capable of thinking.

“I think there is something in the brain development which might be like a boiling point. It may be a threshold moment,” du Sautoy said. “Philosophers will say that doesn’t guarantee that that thing is really feeling anything and really has a sense of self. It might be just saying all the things that make us think it’s alive. But then even in humans we can’t know that what a person is saying is real,” he added.

Sources include:

ScienceAlert.com

ITPro.co.uk

Glitch.news

Science.NaturalNews.com
