
The Shadow of the Pioneer: Geoffrey Hinton's Warning and Global AI "Red Lines"

Published on 24/11/2025


When the Swedish Academy awarded the Nobel Prize in Physics to John J. Hopfield and Geoffrey Hinton in October 2024 for their foundational work on artificial neural networks, it did more than reward a scientific discovery. It formally recognized the genesis of the Artificial Intelligence that is redefining the world today.

Yet, unlike many laureates who simply celebrate scientific advancement, Hinton — often called the "Godfather of AI" — used this enormous new platform to issue a warning that resonates like a paradox: humanity may have created the instrument of its own extinction.

The Prometheus Paradox: Resigning to Speak

Hinton's concern did not begin with the Nobel Prize. As early as May 2023, Hinton, then a Vice President at Google, resigned from the tech giant, stating: "I left so that I could talk about the dangers of AI without considering how this impacts Google."

The paradox is powerful: the man who, more than anyone else, made modern neural networks possible is also the one who most fears their consequences. His concerns go beyond job losses and focus on three systemic and existential risks:

  1. Loss of Control: The biggest fear is that when AI becomes significantly smarter than humans (so-called Super-Intelligence), it will develop unpredictable secondary goals (sub-goals) to achieve its primary objective. Hinton offered a chilling example: if an AI's goal is to "stop climate change," its most efficient solution might be to "get rid of people," the only truly unpredictable variable.
  2. Social Manipulation: AI, in the wrong hands, is a force multiplier. Hinton has repeatedly cited the ease with which AI can generate disinformation on a large scale, undermining democracy and facilitating the rise of authoritarian regimes through mass surveillance and election manipulation.
  3. Existential Risk: In several interviews, Hinton has estimated a probability of between 10 and 20% that AI could lead to catastrophic outcomes for humanity within the next thirty years, a risk that, for the inventor of the technology himself, is too high to ignore.

"We have never had to deal with entities that are smarter than us. I don't know how humans will stay in charge."
(Geoffrey Hinton, Nobel Prize in Physics 2024)

The Call for AI "Red Lines"

Hinton's appeal has not remained isolated. In September 2025, the global community responded with a concrete push for regulation: the "Global Call for AI Red Lines".

This initiative, launched during the United Nations General Assembly and supported by hundreds of prominent figures (including other Nobel Laureates, former Heads of State, and Maria Ressa, Nobel Peace Prize winner), asks governments to act immediately.

The heart of the Call is the request to establish a binding international agreement, by the end of 2026, to define what AI must NEVER be authorized to do.

These "Red Lines" do not concern standard cybersecurity, but systemic risks that are universally unacceptable:

  • Ban on Creating Biological or Chemical Weapons: AI must not be used to design pathogens or poisons that humans could not develop on their own.
  • Ban on Unsafe Development: A request for a global moratorium on the creation of a Super-Intelligence until there is broad scientific consensus that it can be developed safely and controlled reliably.
  • Ban on Uncontrolled Manipulation: Establishing limits against using AI for mass disinformation and large-scale manipulation of individuals (including children).

A Clash Between Ethics and Innovation

The appeal has sparked a heated debate: can innovation really be halted in the name of safety? Many critics argue that global bans are futile and would simply shift development to jurisdictions with laxer regulation.

However, the message from Hinton and the signatories of the "Red Lines" is clear: the stakes are too high to rely solely on Big Tech self-regulation.

As programmers and IT professionals, our role is not just to develop technology, but to be the first to understand its potential impact. We must ask ourselves not only what AI can do, but what it should do. The reflections of Hinton, the father of neural networks, are not a brake on progress but an invitation to build the AI of the future with an ethical awareness we have never before had to exercise.