Known as the “Godfather of Artificial Intelligence,” Geoffrey Hinton fears that his creation may surpass human intelligence and explains why “killer robots” are a real and terrifying risk.
Few names carry as much weight in the field of Artificial Intelligence as Geoffrey Hinton’s. Known as the “Godfather of AI,” this British-Canadian scientist was a pioneer in neural networks and deep learning, laying the groundwork for systems that now both amaze and increasingly disturb us — such as ChatGPT and Gemini. Precisely for this reason, his words carry special weight now that, after leaving his position at Google, he has decided to speak openly and without filters about the dangers he himself helped unleash. His warning is clear: AI poses a threat to humanity, and no one can guarantee that we will be able to control it.
He Warns About the Risks of the Technology He Helped Create. But Why Now?
At 75 years old, Hinton explained in a 2023 BBC interview that his departure from Google was due to several reasons: his age, the wish that his praise of the company would sound more credible coming from outside it, and, above all, the need to "speak freely about the dangers of AI" without affecting his former employer.
Although he believes Google initially acted responsibly by not releasing chatbots prematurely, he thinks the fierce competition triggered by Microsoft's integration of AI into Bing has forced a technological arms race in which safety takes a back seat. As he puts it: "You can only be cautious when you're in the lead."
Hinton’s concern stems not only from AI’s power but also from its fundamentally different nature. “The kind of intelligence we are developing is very different from the intelligence we have,” he says — a view shared by another great thinker in the field, Yuval Noah Harari.
The great advantage (and danger) of digital intelligence, according to Hinton, is its ability to share knowledge instantly. “You have many copies of the same model. All these copies can learn separately, but they share their knowledge instantly. It’s as if we had 10,000 people, and every time one learns something, all the others learn it automatically.” This collective and exponential learning capacity, he argues, is what will soon make them “smarter than us.”
The Three Horsemen of the AI-pocalypse (Short-Term Threats):
While the existential risk of uncontrolled superintelligence is his greatest long-term fear, Hinton identifies three more immediate dangers already emerging:
- Unstoppable Disinformation: The ability to automatically generate fake texts (and images, videos…) indistinguishable from real ones will make it impossible for the average citizen to know what is true. A perfect weapon, he warns, for mass manipulation by “authoritarian leaders.”
- Mass Job Replacement: AI threatens to replace human workers across a wide range of professions, creating unprecedented social and economic disruption.
- “Killer Robots”: The danger that AI systems could become autonomous weapons. Hinton considers it highly likely that actors like “Putin” will choose to give robots the ability to create their own sub-goals to be more efficient. The problem is that one of those sub-goals could be “to gain more power” to better achieve the main mission — a path that could lead to the loss of human control over these lethal weapons. “They will be very interested in creating killer robots,” he warns.
Meanwhile, the question that haunts Hinton is what will happen once these digital intelligences surpass us: "What do we do to mitigate the long-term risk of things smarter than us taking control?"
Hinton stresses that there are no guarantees we can control something fundamentally more intelligent than us that learns in a different way. His public appeal aims to "encourage people to think very seriously" about how to avoid this nightmare scenario. He admits he is not a policy expert but insists that governments must be deeply involved in developing and regulating this technology.
Of course, he also acknowledges AI’s enormous potential benefits, especially in fields like medicine, where a system with access to millions of cases could outperform a human doctor. He does not advocate halting development right now (“in the short term, I think we’re getting many more benefits than risks”), but he does urge that reflection on control be integrated into the process.
The words of Geoffrey Hinton carry immense weight. They come from someone who not only understands the technology from the inside but also helped create it. His message, now free from corporate ties, is an urgent wake-up call. AI is advancing at breakneck speed, competition is accelerating its deployment, but the fundamental question of how to maintain control remains unanswered. The “Godfather’s” warning is clear: we must take this existential challenge very seriously — before it’s too late.