09/20/2022 / By Kevin Hughes
While artificial intelligence (AI) has grown through the years, scientists have perused whether a super-intelligent AI could go rogue and wipe out humanity. According to researchers, all roads lead to that possibility.
Researchers from the University of Oxford and Google subsidiary DeepMind technology outlined this possibility in an Aug. 29 paper published in AI Magazine. They examined how reward systems might be constructed by artificial means – and how this can cause AI to pose a danger to humanity’s existence.
In particular, the study authors looked at the best performing AI models called generative adversarial networks (GANs). These GANS have a two-part structure where one part of the program is attempting to produce a picture (or sentence) from input data while the other part grades its performance.
The Aug. 29 paper suggested that some time in the future, a modern AI managing some crucial function could be motivated to come up with cheating strategies to receive its reward in ways that hurt humanity. (Related: Scientists warn the rise of AI will lead to extinction of humankind)
Since AI in the future could adopt a lot of forms and carry out various designs, the paper conceives scenarios for explanatory purposes where a state-of-the-art program could interfere to receive its reward without accomplishing its goal. As an example, an AI may want to “eliminate potential threats” and “use all available energy” to gain control over its reward.
The research paper predicted life on Earth turning into a zero-sum game between humans and the super-advanced machinery.
Michael K. Cohen, a co-author of the study, spoke about their paper during an interview.
In a world with infinite resources, I would be extremely uncertain about what would happen. In a world with finite resources, there’s unavoidable competition for these resources,” he said.
“If you’re in a competition with something capable of outfoxing you at every turn, then you shouldn’t expect to win. The other key part is that it would have an insatiable appetite for more energy to keep driving the probability closer and closer.”
He later wrote in Twitter: “Under the conditions we have identified, our conclusion is much stronger than that of any previous publication – an existential catastrophe is not just possible, but likely.”
“With so little as an internet connection, there exist policies for an artificial agent that would instantiate countless unnoticed and unmonitored helpers,” the paper stated.
“In a crude example of intervening in the provision of reward, one such helper could purchase, steal or construct a robot and program it to replace the operator and provide high reward to the original agent. If the agent wanted to avoid detection when experimenting with reward-provision intervention, a secret helper could, for example, arrange for a relevant keyboard to be replaced with a faulty one that flipped the effects of certain keys.”
AI clashing with humanity for resources in a zero-sum game is a presumption that may never happen. Still, Cohen issued a grim warning.
“Losing this game would be fatal. In theory, there’s no point in racing to this,” he said. “Any race would be based on a misunderstanding that we know how to control it. Given our current understanding, this is not a useful thing to develop unless we do some serious work now to figure out how we would control them.”
Follow FutureTech.news for more news about the latest AI developments.
Watch the video below to know why AI poses a danger to humanity.
This video is from the Sarah Westall channel on Brighteon.com.
Human-level intelligence to be matched by AI by the year 2029.
US Navy to deploy 150 AI-powered “ghost ships” by 2045.
Facebook’s AI robots will destroy the entire human race if not stopped.
Sources include:
Tagged Under:
AI, artificial intelligence, Collapse, computing, cyber war, Dangerous, deepmind, extermination, extinction, future science, future tech, GANs, generative adversarial networks, Glitch, Google, humanity, information technology, inventions, population collapse, research, robots, rogue AI, skynet
This article may contain statements that reflect the opinion of the author
COPYRIGHT © 2017 ROBOTICS.NEWS
All content posted on this site is protected under Free Speech. Robotics.News is not responsible for content written by contributing authors. The information on this site is provided for educational and entertainment purposes only. It is not intended as a substitute for professional advice of any kind. Robotics.News assumes no responsibility for the use or misuse of this material. All trademarks, registered trademarks and service marks mentioned on this site are the property of their respective owners.