Tab Tab Tab
How humans became training data with paychecks
Last week I was watching the founder of an AI sales startup build his product using Cursor at a hackathon. The IDE suggested an entire authentication function. He glanced at it for two seconds. Pressed tab. The code appeared. He moved to the next function.
Tab. Tab. Tab.
Twenty minutes later, I asked how much code he’d actually written. Maybe three lines. The rest was accepting suggestions.
He was “vibe coding,” as the cool kids call it.
That afternoon I checked my own work. Twelve hours with Claude that week writing memos and doing market research. Maybe twenty original thoughts. The rest was accepting or rejecting Claude’s suggestions.
Mercor reached a $10 billion valuation on exactly this model, hitting $500M in annual revenue by paying humans to validate AI responses. Surge crossed $1.2 billion in revenue in 2024 doing the same. Scale AI sold a 49% stake to Meta for roughly $14 billion.
These aren’t software companies anymore. They’re human validation farms. And business is booming.
“Really the only way models are now learning is through net new human data,” says Adam Bain about Micro1. Models can generate infinite content. They need humans to tell them which infinity matters.
The tech world thinks this is about developers and Cursor. They’re missing the bigger picture.
Radiologists increasingly review AI-detected anomalies. Investment bankers validate AI-generated financial models. Partners at law firms review AI-drafted contract language. Accept, reject, revise.
The work hasn’t disappeared. It’s become binary. Yes or no. Good or bad. Tab or delete.
Today the literal tab key lives in IDEs. Tomorrow it spreads everywhere. Your email client will draft responses in your voice. Your spreadsheet will propose entire analyses. Your design tools will generate complete layouts.
You’ll open your laptop to find your work already done. The cursor blinking. Waiting for approval.
The next decade of AI improvements comes from three things: GPUs, algorithms, and expert human data. Nvidia sells GPUs to everyone. Algorithms get published in papers. But expert human data? That stays proprietary. That’s the moat.
Every company scrambles to capture it. Your competitor logs every employee decision as training data. Meanwhile most companies leak this expertise through ChatGPT in personal accounts, Claude in browser tabs. Every refinement making someone else’s model stronger.
The smart ones build closed loops. Internal tools that capture every micro-decision. They’re not racing to build better products. They’re racing to build better training data.
I met a founder last month who saw this clearly. His company logs every employee interaction with AI. Every edit tracked. Every preference captured. Not for surveillance, he explained. For compound intelligence. His thesis was simple: companies that can’t capture their employees’ expertise will lose to companies that can.
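A closed loop like that can start out almost trivially simple: log each accept, reject, or edit as a preference record, the same shape used for preference-based fine-tuning. Here is a minimal sketch of the idea (all names and the file format are hypothetical, not any particular company's system):

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PreferenceEvent:
    """One micro-decision: what the model proposed, and what the human did with it."""
    prompt: str        # context the model saw
    suggestion: str    # what the model proposed
    accepted: bool     # tab (True) or edit/delete (False)
    final_text: str    # what actually shipped, after any human changes
    timestamp: float

def log_decision(prompt: str, suggestion: str, final_text: str,
                 path: str = "decisions.jsonl") -> PreferenceEvent:
    """Append one accept/reject decision to a JSONL log of training data."""
    event = PreferenceEvent(
        prompt=prompt,
        suggestion=suggestion,
        accepted=(final_text == suggestion),  # an untouched suggestion counts as an accept
        final_text=final_text,
        timestamp=time.time(),
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")
    return event

# Example: the human edited the suggestion, so this logs a rejection
e = log_decision("draft a greeting", "Hi there!", "Hello,")
print(e.accepted)  # False
```

Every tab and every edit becomes one more line in the file, and the file is the moat.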
We used to mock offshore outsourcing. But at least those were humans training humans. Knowledge transferred between people. Skills stayed in heads. Now we upload our judgment directly into models. Once trained, they won’t need us.
Last month a Stanford researcher asked if AI augments my thinking or replaces it.
I wanted to say augments.
But in my heart, I know it will replace my thinking.
Your entire professional life distilled to a single keystroke.
We won’t lose our jobs to AI. We’ll give them away willingly. Teaching the system exactly how to replace us. Getting paid for the privilege.
I still spend twelve hours weekly with Claude. But differently now. Pushing into territories where suggestions fail. Finding thoughts that haven’t been generated yet.
Curiosity can’t be tabbed through. Taste can’t be validated into existence. Problems that don’t exist yet aren’t in any training set.
Everything else is just keystrokes teaching machines not to need us.
Tab. Tab. Tab.
Until there’s no need to press them.