Jiro's apprentice massaged octopus for the thousandth time. Ten years before he would touch fish that customers would eat. Another 200 attempts at tamago before approval. Every unnecessary step inching closer to mastery.
I finished the movie and remembered I owed a founder a rejection email.
Fifteen seconds later, ChatGPT had drafted it: perfect output. Professional, empathetic even. The founder would never realize a machine rejected their life's work.
This efficiency felt like betrayal. We're trading the friction that builds judgment for the speed that erodes it.
Writing founder rejections should hurt. That sick feeling when you type "we've decided to pass"? That's not inefficiency. The weight forces you to be careful about who you reject and why. Makes you double-check your reasoning. Sometimes changes your mind.
An AI feels nothing. It pattern-matches kindness while destroying dreams.
And yet I've become fully bionic. Meeting someone new? I dump their URL into Deep Research. Fifteen minutes later it's mapped their entire market. Books? ChatGPT curates my reading list. Music, Netflix, Instagram. My entire consumption stack runs on recommendation engines.
This isn't like the old days when AI helped format documents. These reasoning models actually think. They decide which questions to ask. o3 takes my incoherent rambling and returns insights sharper than my own.
What I'm losing: Those hours of manual research built pattern recognition. I could spot market timing issues from similar failed startups. Intuition about founder-market fit from seeing hundreds of pitches. The kind of knowledge that comes from struggle, not search.
Now I prompt and receive. The muscles atrophy.
Last month I was psyched about a founder because Deep Research's market analysis looked compelling. Clean calculations, competitive landscape mapped perfectly. Then I met the founder in person. They couldn't articulate their differentiation beyond what was in their deck. The AI had done such thorough research that I never developed intuition about what was actually missing.
I passed. Not because I knew it was bad, but because I realized I didn't actually know if it was great.
Maybe that's the human job now. Not thinking (AI's got that covered) but choosing when not to optimize. Being the only part that chooses the hard way.
Jiro can make perfect sushi in minutes. Every unnecessary step he takes? It's about staying human.
I deleted ChatGPT's draft. Spent an hour writing that rejection. Made typos. Started over twice. Felt sick hitting send.
That discomfort isn't inefficiency. It's the friction that makes our decisions human.