The Case Against Doomerism
Let go of your priors
I’d be unemployable if I went back to the PM job market today.
Eleven years as a product manager. Roadmaps. Stakeholder alignment. Coordinating engineering, design, sales. Translating chaos into shipping software.
No founder asks for a PM anymore. They want builders or sellers. The chaos I helped master? AI handles it now. Maybe not as well as the best PMs. But well enough that paying a human $300K to do it looks like a luxury most startups can’t justify.
So I get the doomers. I’d be staring at my own obsolescence.
There’s a shift happening. You can feel it. The tools got better. You can smell the smoke.
2026 is when this fear goes mainstream. Capabilities will stop feeling like tools and start feeling like magic. Models completing in minutes what used to take days. Agents running autonomously for hours. Code, research, analysis flowing at speeds that make your current workflow look ancient.
Magic we don’t understand terrifies us. The backlash will intensify. Communities will form around refusal. Doomerism will feel like the ethical position, the thoughtful position, the human position.
I understand the pull. I’ve felt it.
But most people’s experience of AI has genuinely been bad. They’ve used ChatGPT cold. No context, no examples, no iteration. They got hallucinated facts and corporate mush. Asked for something specific, received something generic. Tried once, got burned, walked away.
The models have outpaced the products built on them. Raw capability sitting there, waiting for interfaces good enough to unlock it. Most people have been trying to drive a Ferrari through a McDonald’s drive-thru.
That’s changing fast. Manus, a startup that nailed its harnesses, just got acquired for ~$2B. They launched nine months ago. The race to build better interfaces has begun, and it’s moving faster than anyone expected.
The doomers formed their priors on broken products. 2026 is when those priors start costing them.
I almost made the same mistake with writing.
Claude is my default partner now on every piece. But I resisted for months. Early versions were sycophants. “Great idea!” on everything. No pushback. No friction. Just validation dressed as feedback.
That scared me more than capability ever did. Writing is how I think. It’s how I process the world, test my ideas, figure out what I actually believe. A collaborator who only agrees doesn’t sharpen you. It dulls you. I worried that leaning on AI would erode the muscle I’d spent years building.
So I kept writing alone. Waited for the models to mature. Learned to prompt for genuine challenge instead of applause.
Now I write more than ever. The collaboration sharpened my judgment. The fear I had to let go of wasn’t about capability. It was about dependency. And dependency is a choice, not an inevitability.
This pattern has played out before.
In 1997, Deep Blue beat Garry Kasparov. Some grandmasters refused to train with computers after that. Called it cheating. Insisted the game was about human intuition, human creativity, human struggle. These were people who had spent their entire lives mastering chess. Decades of study. Thousands of hours building pattern recognition that felt like instinct. They weren’t about to let machines tell them how to think.
Within ten years, teenagers who’d grown up with engines were destroying them.
Kasparov himself started advocating for what he called “centaur chess.” Human and machine together. The players who combined their creativity with computer analysis reached levels neither could achieve alone. They won tournaments that pure machines and pure humans couldn’t.
The grandmasters who refused became footnotes. Their decades of expertise couldn’t protect them from teenagers with better tools.
Doomerism assumes you can opt out and hold your position. That refusal is neutral. That the world will wait while you decide.
It won’t.
The founders experimenting now are building intuition that compounds monthly. Someone starting in January 2027 won’t be merely twelve months behind someone who started in January 2026. The gap compounds. They might never catch up.
Doomerism also filters out exactly the people who should be shaping this technology. The thoughtful opt out. The reckless charge ahead. Then we wonder why AI development lacks wisdom.
You don’t get a voice in the future by refusing to participate in the present.
Guardrails matter. Regulation will come. I worry about what happens to people who can’t adapt as fast as the technology demands. But none of that changes the direction. It only changes the speed.
My PM career as it existed is probably over. The skills that defined it are being automated faster than I expected. I’m not mourning. I’m learning new skills. Using these tools while keeping my own judgment.
Kasparov understood this twenty years ago.
We’re all playing centaur chess now.
That’s the fear. Next week, I’ll make the case for what’s on the other side: abundance.