Published on 24/11/2025
When the Royal Swedish Academy of Sciences awarded the Nobel Prize in Physics to Geoffrey Hinton and John J. Hopfield in October 2024 for their foundational work on artificial neural networks, it did more than reward a scientific discovery. It formally recognized the genesis of the Artificial Intelligence that is redefining the world today.
Yet, unlike many laureates who simply celebrate scientific progress, Hinton, often called the "Godfather of AI", used this enormous new platform to issue a warning that sounds like a paradox: humanity may have created the means of its own extinction.
Hinton's concern did not begin with the Nobel Prize. As early as May 2023, the then Vice President of Google resigned from the tech giant, stating: "I left so that I could talk about the dangers of AI without considering how this impacts Google."
The paradox is striking: the man who, more than anyone else, made modern artificial intelligence possible is also the one who fears its consequences the most. His concerns focus not just on job losses, but on three systemic and existential risks:
"We have never had to deal with entities that are smarter than us. I don't know how humans will stay in charge." – Geoffrey Hinton, Nobel Prize in Physics 2024.
Hinton's appeal has not gone unanswered. In September and October 2025, the international community responded with a concrete push for regulation: the "Global Call for AI Red Lines".
This initiative, launched during the United Nations General Assembly and backed by hundreds of prominent figures (including other Nobel laureates, former heads of state, and Nobel Peace Prize winner Maria Ressa), asks governments to act immediately.
At the heart of the Call is the demand for a binding international agreement, to be reached by the end of 2026, defining what AI must NEVER be authorized to do.
These "Red Lines" do not concern standard cybersecurity, but systemic risks that are universally unacceptable:
The appeal has sparked a heated debate: can innovation really be halted in the name of safety? Many critics argue that imposing global bans is futile and would simply shift development toward less regulated jurisdictions.
However, the message from Hinton and the signatories of the "Red Lines" is clear: the stakes are too high to rely solely on Big Tech's self-regulation.
As programmers and IT professionals, our role is not just to build technology, but to be the first to understand its potential impact. We must ask ourselves not only what AI can do, but what it should do. The reflections of Hinton, the father of neural networks, are not a brake on progress but an invitation to build the AI of the future with an ethical awareness we have never been required to exercise before.