You’re Not Watching the News. You’re Watching a Machine Teach You to Be Afraid.
In the 19th century, newspapers learned a simple rule: if it bleeds, it leads.
A dramatic headline. A bloody crime. Preferably with a photograph. Circulation goes up. Advertisers are pleased. And society gets just a little more afraid of itself.
Now fast-forward.
Today, we no longer wait for the newspaper. We scroll. And we don’t just consume headlines—we train them. With every pause, every click, every angry reply, we teach the machine what makes us twitch. What we can’t look away from.
And the machine learns fast.
Not what’s true. Not what’s good.
But what keeps us engaged. Or more accurately, what keeps us emotionally aroused—outraged, afraid, suspicious.
And here’s where it gets ugly.
Because for centuries, Western societies have been conditioned to view Blackness through the lens of fear and violence. From slave-era propaganda to Jim Crow lynching postcards to the 1990s “superpredator” myth, it’s an old story. The visual grammar was set long ago.
Now? The algorithm doesn’t know any of that history.
It just knows: when it shows you a video of a Black person hitting a white person, you stop scrolling. Maybe you gasp. Maybe you read the comments. Maybe you share it. But the pattern is there.
So the algorithm says:
“Ah. You liked that. Would you like… some more?”
And suddenly your feed looks like an open-air racial panic attack.
Over and over.
Black violence. White victimhood.
No context. No cause. Just the spectacle. Just the fear.
But remember: this isn’t a person choosing what to show you.
This is an automated system, trained on us—our behavior, our patterns, our historical baggage. And what it’s learned is simple: racism is viral. Dehumanization gets clicks. Fear monetizes.
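To make that mechanism concrete, here is a deliberately tiny sketch of the feedback loop in Python. Everything in it is hypothetical: the video list, the dwell-time signal, and the one-line ranking rule are stand-ins for systems that are vastly more complicated. The loop structure is the point: the ranker optimizes a single behavioral signal and never consults truth, context, or harm.

```python
# A deliberately naive sketch of an engagement-maximizing feed ranker.
# Every name here is illustrative, not a real system's API.

videos = [
    {"id": "cooking_tip", "topic": "food"},
    {"id": "street_fight", "topic": "violence"},
    {"id": "puppy_clip", "topic": "animals"},
]

# The only signal the ranker ever sees: cumulative dwell time per topic.
dwell_time = {"food": 1.0, "violence": 1.0, "animals": 1.0}

def rank(feed):
    # Sort purely by past engagement. Nothing about truth,
    # context, or harm ever enters the score.
    return sorted(feed, key=lambda v: dwell_time[v["topic"]], reverse=True)

for session in range(5):
    for video in rank(videos):
        # Simulate a viewer who pauses a little longer on violent clips,
        # even involuntarily. That pause is all the system records.
        watched_seconds = 2.0 if video["topic"] == "violence" else 0.5
        dwell_time[video["topic"]] += watched_seconds

# The pause got amplified into a ranking; the ranking invites more pausing.
print(rank(videos)[0]["id"])  # -> "street_fight"
```

Note what the score is missing: any term for accuracy, context, or representativeness. The loop converges on whatever held attention, which is exactly the claim above.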
And the end result?
A society convinced—not through evidence, but through repetition—that it is under siege by the very people it has already marginalized.
And here’s the real kicker:
If anyone designed this on purpose, we’d call it a conspiracy.
But no one had to.
We built a machine to show us what we already believe.
And then we let it show us those beliefs until they felt like truth.
“The Algorithm Isn’t Racist—But It Learned From Us.”
Back in the 19th century, newspapers discovered that if it bleeds, it leads.
Violence sold papers.
Especially if it confirmed the fears readers already had.
Now? We don’t need an editor.
We have an algorithm.
And it’s learned to do something even the editors never could:
It watches you.
What you click.
What you pause on.
What makes your heart rate spike.
And here’s the problem.
If you stop scrolling on a video of a Black person attacking a white person—just for a second—
the algorithm says, “Got it. You like that.”
And it shows you more.
And more.
And more.
No context.
No stats.
Just images.
Violence.
Fear.
Until the feed starts to look like a warning.
But it’s not reality.
It’s pattern recognition.
And the pattern? Comes from history.
Centuries of portraying Blackness as violent.
Centuries of associating whiteness with innocence.
That’s the training data.
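As a toy illustration of that last point, here is a hedged sketch of what “learning from skewed training data” means. The counts below are invented placeholders standing in for decades of selective coverage; the “model” is nothing but a frequency table, yet its output reproduces the skew exactly.

```python
from collections import Counter

# Invented, illustrative data: an archive that over-reported one pairing.
# "group_a" and "group_b" are deliberately abstract placeholders.
historical_coverage = (
    [("group_a", "violent")] * 80 + [("group_a", "peaceful")] * 20 +
    [("group_b", "violent")] * 20 + [("group_b", "peaceful")] * 80
)

counts = Counter(historical_coverage)

def learned_association(group):
    # The "model" has no concept of history or prejudice. It just counts.
    violent = counts[(group, "violent")]
    peaceful = counts[(group, "peaceful")]
    return violent / (violent + peaceful)

print(learned_association("group_a"))  # 0.8
print(learned_association("group_b"))  # 0.2
```

The 80/20 skew in the output is the 80/20 skew in the archive, passed through untouched. Swap in real media archives and real engagement data, and the same arithmetic scales up.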
So no—the algorithm isn’t racist.
But it learned from us.
From what we built.
From what we watched.
And now, it’s showing us our own reflection.
On loop.