THE GREAT GEMINI DECEPTION: When AI is Just a Joke Generator

Published on 25/11/2025

I've been programming for 40 years. I've watched fads and technologies be born, grow, and die. But the current cult around models like Gemini and Claude (or Antigravity, as we might call them) has crossed every limit of decency and engineering common sense.

We are in a media circus where these technologies are praised because they can write the "About Us" section of a website or generate trivial React code. They are perfect for small websites and for people who have never felt the friction between code and the physical world.

But move to the reality of production, where code must run at 115200 baud over a non-standard communication protocol while interacting with flash memory and the constraints of dedicated hardware, and the great bluff is revealed.
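
To make the 115200 baud figure concrete, here is a minimal sketch of the timing arithmetic a driver has to respect; the 8N1 framing and the 64-byte packet size are assumptions chosen purely for illustration, not taken from any specific project.

```c
/* Minimal sketch: what 115200 baud means in wall-clock time.
 * Assumes standard 8N1 framing (1 start + 8 data + 1 stop = 10 bits per byte).
 * The 64-byte packet is a hypothetical example. */
#include <stdio.h>

#define BAUD_RATE      115200.0
#define BITS_PER_FRAME 10.0      /* 8N1: start bit + 8 data bits + stop bit */

int main(void)
{
    /* Time one byte occupies the wire, in microseconds (~86.8 us). */
    double byte_time_us = BITS_PER_FRAME * 1e6 / BAUD_RATE;

    /* A 64-byte packet therefore takes roughly 5.6 ms on the line; any
     * inter-byte timeout or buffer-drain deadline has to be derived from
     * this number, not guessed. */
    double packet_time_us = 64.0 * byte_time_us;

    printf("byte time:       %.1f us\n", byte_time_us);
    printf("64-byte packet:  %.1f us\n", packet_time_us);
    return 0;
}
```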

Gemini and the Art of Failing Elegantly

The problem shows up every time we try to use these models for low-level engineering tasks: translating code that manages data on custom hardware into a performant language is a predictable disaster.

The problem is simple and irrefutable: Gemini is nothing special; it is just an expensive statistical parrot, excellent at producing superficiality.

1. AI Does Not Understand Reality

These models know everything about code grammar, but they have no idea what real systems engineering is.

The Comparison: Asking Gemini to convert code that manages a data stream on dedicated hardware is like asking a child to fly an airliner because they can draw a plane. The produced code is formally correct but fails the moment it meets the constraints of the physical world: exact timings, chip responses, physical memory issues.
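
As an illustration of the constraints involved, here is a hedged sketch of a single flash page write. The command helpers, page size, and timeout policy are hypothetical stand-ins for what an actual datasheet would dictate, but they show the checks that formally correct generated code tends to skip.

```c
/* Illustrative sketch only: programming one page of a SPI NOR flash.
 * flash_cmd_write_enable(), flash_cmd_page_program(), and
 * flash_status_busy() are hypothetical board-support hooks; a real driver
 * would follow the chip's datasheet, not this example. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define FLASH_PAGE_SIZE     256u      /* typical NOR page size; check the datasheet */
#define WRITE_TIMEOUT_LOOPS 100000u   /* crude bound; a real driver uses a hardware timer */

extern void flash_cmd_write_enable(void);
extern void flash_cmd_page_program(uint32_t addr, const uint8_t *buf, size_t len);
extern bool flash_status_busy(void);

/* Returns 0 on success, -1 on bad arguments, -2 on timeout. */
int flash_write_page(uint32_t addr, const uint8_t *buf, size_t len)
{
    /* Constraint 1: a page program must not cross a page boundary,
     * or the chip silently wraps and corrupts data. */
    if (len == 0 || len > FLASH_PAGE_SIZE ||
        (addr % FLASH_PAGE_SIZE) + len > FLASH_PAGE_SIZE)
        return -1;

    /* Constraint 2: the chip ignores the program command unless the
     * write-enable latch is set immediately before it. */
    flash_cmd_write_enable();
    flash_cmd_page_program(addr, buf, len);

    /* Constraint 3: the part stays busy for a chip-specific time afterwards;
     * returning before the busy flag clears breaks the next operation. */
    for (uint32_t i = 0; i < WRITE_TIMEOUT_LOOPS; i++) {
        if (!flash_status_busy())
            return 0;
    }
    return -2;
}
```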

The Bluff of Knowledge: Models are not trained on datasheets or non-standard communication protocols. They cannot access proprietary documentation. Therefore, they invent. And that invention is not intelligence; it is junk code that wastes the time of real programmers, who have to discard it.

2. Inefficiency is Real: The Cost of "Prompting"

The ridiculous argument I often hear is: "Just provide more context in the prompt!"

This is not an accelerator; it is an obstacle.

Wasted Time: Spending hours distilling documentation and complex protocols into a prompt that still exceeds the model's context limits is time stolen from field debugging. An hour spent writing a convoluted prompt is always less productive than ten minutes spent writing code by hand and testing it directly on the hardware.

Contaminated Output: Their code is not a starting point; it is a rejection point. We have to spend work cycles finding the logical bugs that the AI introduced by bluffing about its driver knowledge.

The LLM in this scenario is not just useless; it is actively inefficient. It is worse than having nothing.

Conclusions: AI is Fine for Little Websites

The current generation of AI, hailed as the Fourth Industrial Revolution, is actually a great technology for generating jokes, superficial summaries, and code for trivial websites. It is unbeatable at producing statistical noise.

But in Deep Engineering, where precision, knowledge of memory, and hardware logic are everything, Gemini is not fit to tie our shoes.

The future is not Gemini; it is human knowledge unmasking its bluff.