The Oracle in the Icebox: How a Fantasy Novel, a Math Proof, and a Talking Refrigerator Made You Think AI Was Smarter Than You
Let’s begin not in Silicon Valley, but in Oxford, in the early 1950s—where a professor named J.R.R. Tolkien is scribbling away at a sprawling fantasy about elves, rings, and a Palantír—a magical crystal ball that shows you anything, anywhere, anytime.
But there’s a twist: the Palantír doesn’t tell you what’s true. It tells you what it wants you to see. And if you believe it too much, you become easy to control.
Now, flash to 1956, Dartmouth College. A group of mathematicians and dreamers—John McCarthy, Marvin Minsky, Claude Shannon—gather in a summer workshop and declare, with a straight face, that they can simulate human intelligence. They call it Artificial Intelligence. They expect it’ll take a few months. Maybe a year.
They were off by… let’s call it 70 years.
Meanwhile, back in 1971, a quiet man named Stephen Cook publishes a dense proof about something called NP-completeness. It’s math-speak for “some problems are just way, way harder than others.” No matter how smart your system is, it’s not going to solve certain things efficiently—like routing a school bus through every side street in Detroit and finding the cheapest pizza joint along the way.
In short: computers have limits. Even really, really clever ones.
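If you want to see that limit with your own eyes, here’s a minimal sketch (the four-stop distance table is invented purely for illustration): brute-forcing the cheapest round trip means trying every ordering of stops, and the number of orderings grows factorially.

```python
# Why brute force breaks down: visiting n stops in every possible order
# takes (n-1)! route checks. Fine for 4 stops; hopeless for 20.
from itertools import permutations
from math import factorial

def brute_force_route(dist):
    """Try every ordering of stops and return the cheapest round-trip cost."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):  # fix stop 0 as the start
        route = (0, *perm, 0)
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if best is None or cost < best:
            best = cost
    return best

# Four fictional locations with symmetric distances between them.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(brute_force_route(dist))  # → 18, after checking all 6 orderings
print(factorial(20))            # orderings for a mere 20 stops: ~2.4 quintillion
```

Six routes for four stops; 2,432,902,008,176,640,000 for twenty. That cliff is what Cook’s proof formalizes—and no amount of cleverness in the hardware flattens it.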
But here’s where it all goes sideways.
Enter the talking fridge.
By the 2010s, tech companies are slapping AI onto everything—search engines, stock markets, toasters, toothbrushes. Suddenly, your fridge can tell when you’re out of oat milk. Great! But also… creepy. It knows what you eat. It suggests what you should eat. It reports your habits to the cloud. You’ve gone from “master of the kitchen” to “data source for a kitchen-based surveillance node.”
Now pile on ChatGPT, Midjourney, deepfakes, robo-lawyers, AI therapists, synthetic influencers, and a thousand TED Talks with titles like “This Algorithm Will Change Everything.”
People start saying weird things like:
“AI is our new overlord.”
“AI is God.”
“AI will fix democracy.”
“AI is democracy.”
But here’s the rub: AI isn’t magic. It’s math, plus data, plus incentives.
It’s only as smart as the people who built it. And those people? Mostly trying to sell you things. Or harvest your attention. Or replace your job with a more polite version of Clippy.
What we’ve built isn’t an oracle. It’s a Palantír.
It shows us what we already want to see, optimized for engagement, filtered through our past behavior, and piped through 5 billion personalized prediction engines. And if we believe it too much—if we trust it to drive our cars, teach our kids, choose our leaders—we risk becoming exactly what it was trained on: passive, predictable, pliable.
AI didn’t make itself smart.
We made ourselves small.