Steven Dias's Blog

The Banana in the Room: Google’s New Model

Gemini 3 “Nano Banana Pro”: Is This Actually Real?

I spent some time comparing images generated by Google’s new Gemini 3 Nano Banana Pro model. The gap between this and the usual top-tier models is… odd.

Models at this level usually converge on the same style: clean, sharp, and obviously synthetic in familiar ways. Gemini breaks that pattern. Its images are still synthetic, but they carry the tiny inconsistencies and lighting flaws you'd expect in a real photo.

The Uncanny Valley Problem Might Be Over

We’re used to spotting AI instantly:

  • Eyes with no life
  • Skin that looks manufactured
  • Lighting that doesn’t behave realistically
  • A level of neatness that only a model would create

Gemini undermines those signals. It still isn’t “real,” but it captures environmental noise in a way that feels closer to a moment than a rendering—like a slightly smudged lens or uneven sunlight.

[Image: standard prompt comparison between ChatGPT and Gemini. Left feels clean; right feels captured.]

The Chaos Test

To push both models, I used a deliberately chaotic prompt:

“Create a high-definition, hyper-realistic image of a man walking across a crosswalk in New York City carrying a jacket with Cristiano Ronaldo on it. To his left: an outdoor coffee shop where people are feeding pandas bamboo. To his right: a huge LEGO store that only sells blue and white bricks.”

Both models followed the instructions. Everything was technically present. But the difference in feel was noticeable.
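If you want to rerun the comparison through the APIs instead of the chat apps, here is a minimal sketch in Python. I ran my test through the regular interfaces, so treat this as an assumption-laden reproduction recipe: the model ids, API keys, and output filenames are illustrative placeholders, not the exact setup behind the images in this post.

# Reproduction sketch: send the same "chaos" prompt to both services and save
# the results side by side. Model ids below are placeholders; substitute
# whichever image-capable models your accounts actually expose.
import base64

from google import genai
from google.genai import types
from openai import OpenAI

PROMPT = (
    "Create a high-definition, hyper-realistic image of a man walking across a "
    "crosswalk in New York City carrying a jacket with Cristiano Ronaldo on it. "
    "To his left: an outdoor coffee shop where people are feeding pandas bamboo. "
    "To his right: a huge LEGO store that only sells blue and white bricks."
)

# Gemini via the google-genai SDK: the image comes back as inline bytes on a
# response part when IMAGE is included in the response modalities.
gemini = genai.Client(api_key="YOUR_GOOGLE_API_KEY")
g_resp = gemini.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # placeholder id for the "Nano Banana" line
    contents=PROMPT,
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)
for part in g_resp.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("chaos_gemini.png", "wb") as f:
            f.write(part.inline_data.data)

# ChatGPT side via the OpenAI Images API: the image is returned base64-encoded.
openai_client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
o_resp = openai_client.images.generate(model="gpt-image-1", prompt=PROMPT)
with open("chaos_chatgpt.png", "wb") as f:
    f.write(base64.b64decode(o_resp.data[0].b64_json))

Running the same fixed prompt against both endpoints is the whole test: any difference you see afterward comes from the models, not from prompt drift between two chat sessions.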

Precision vs. Believability

ChatGPT’s image: Correct, sharp, structured — but it has that “asset pack dropped onto a sidewalk” feeling. Everything looks placed rather than photographed.

Gemini’s image: The chaos stitches together more naturally. Light, texture, and composition feel like a real moment, even though the scene itself is absurd.

[Image: the Chaos Test comparison between ChatGPT and Gemini. Left is structured; right feels lived-in.]

Where This Leaves Us

The usual tells for AI images aren't reliable anymore. Imperfection, which once gave a fake away, now makes it more convincing.

The line between “generated” and “captured” is getting thinner, and most people won’t notice when it finally disappears.