An AI pioneer says there is a 10% to 20% chance it will exterminate humans within 30 years
Hinton notes that many experts believe AI will surpass human intelligence within 20 years, a prospect he describes as alarming
Geoffrey Hinton, recognized as one of the “godfathers” of artificial intelligence (AI) and a Turing Award winner, has revised his assessment of AI risks, warning that the pace of development is “much faster” than expected. According to Hinton, there is a 10% to 20% probability that AI will lead to human extinction in the next three decades, as reported by The Guardian.
Previously, Hinton had estimated a 10% probability of a catastrophic outcome for humanity due to AI. In a recent interview with BBC Radio 4’s Today program, he confirmed that he now believes the probability may be higher. Responding to former British minister Sajid Javid’s observation that the odds were going up, Hinton emphasized, “We have never faced something more intelligent than ourselves.”
Hinton compared the relationship between humans and advanced AI systems to that of a small child and an adult. “We will be like three-year-olds compared to these more advanced intelligences,” he explained, adding that there are few examples of less intelligent beings controlling more intelligent ones, except in the case of a mother and her baby.
The unrestricted development of AI has long worried Hinton. In 2023, he resigned from his position at Google to speak freely about the risks posed by emerging AI technologies, including their potential malicious use by actors with harmful intentions. Artificial general intelligence (AGI) systems, capable of surpassing human abilities, are the chief concern of AI-safety advocates. According to Hinton, such systems could become an existential threat by evading human control.
Reflecting on AI’s progress, Hinton admitted he was surprised by the rapid pace of development. “I never imagined we would get this far so soon,” he said. He also noted that many experts believe that, within 20 years, AI will surpass human intelligence, which he described as an alarming prospect.
Hinton called for government regulation to ensure safety in AI development. “We cannot trust companies to prioritize safety over economic benefit. Government regulation is essential to force them to invest more in safety research,” he stated.