Revealed Preferences
Every keystroke is a vote
TikTok ignored what you said you liked. It watched what made you stop scrolling.
Economists call this revealed preferences. The gap between what you say and what you do. Every platform before TikTok asked users to build their own feed. Follow accounts, like posts, tell us what you like. TikTok skipped the asking. Dwell time over likes. A subtle but important difference.
I’ve been thinking about how modern LLMs learn. There’s this technique called RLHF (reinforcement learning from human feedback). Human raters look at two outputs and pick which one’s better. Thumbs up, thumbs down, repeat a few million times. That’s how ChatGPT, Claude, Gemini all got less weird.
The problem is that RLHF typically trains one model for everyone. Millions of preferences blended into one reward function. Your thumbs up and mine averaged together. The model converges toward what most people like most of the time.
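To make the blending concrete: here's a rough sketch of that pairwise training step, a Bradley-Terry style loss pooling every rater's comparisons into one set of weights. The names and shapes are mine, not any lab's actual code.

```python
import torch
import torch.nn as nn

# One reward model for everyone: every rater's comparisons
# land in the same pool and nudge the same weights.
class RewardModel(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # stand-in for a full transformer head

    def forward(self, output_embedding):
        return self.score(output_embedding)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

def preference_loss(chosen_emb, rejected_emb):
    # Bradley-Terry pairwise loss: push the chosen output's score
    # above the rejected one's, no matter whose preference it was.
    margin = reward_model(chosen_emb) - reward_model(rejected_emb)
    return -torch.nn.functional.logsigmoid(margin).mean()

# Fake batch standing in for millions of raters' comparisons.
chosen, rejected = torch.randn(32, 768), torch.randn(32, 768)
loss = preference_loss(chosen, rejected)
loss.backward()
optimizer.step()
```

Whoever you are, your comparison becomes one more gradient step on the same shared model. That's the averaging.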
What passes for personalization today doesn’t change this. Memory stores what you told the system. Your name, your job, your tone. Stated preference bolted onto the same model everyone else uses.
Tab, tab, tab. Every software tool throws off micro-decisions. Accept, reject, edit, regenerate, abandon. Code completions you take versus skip. Email drafts you send versus rewrite. Each one is a vote.
Unconscious, mostly. After a while you stop noticing. 2 AM, accepting code suggestions because you have token anxiety. Third rewrite on that email because your boss might read it wrong. Training data is the last thing on your mind.
Feeds wish they had data this clean. Scrolling is semi-conscious. You know you’re being fed content. But autocomplete? You don’t perform for autocomplete.
Right now, this data improves models for everyone. Cursor uses your accepts and rejects to make Cursor better for all users. Convergence.
It doesn’t have to work that way.
Imagine a version where the data makes your model diverge from mine. A model that becomes irreversibly yours. Your “AI” and my “AI” turn into different products over time because we used them differently, and the system noticed.
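Mechanically, one way this could happen: freeze the shared base model and give each user a tiny adapter trained only on their own behavior. A sketch under that assumption; the adapter design and names are illustrative, not how any shipping product works today.

```python
import torch
import torch.nn as nn

class SharedBase(nn.Module):
    # The model everyone gets, frozen after its RLHF pass.
    def __init__(self, dim=768):
        super().__init__()
        self.body = nn.Linear(dim, dim)

    def forward(self, x):
        return self.body(x)

class PerUserAdapter(nn.Module):
    # A tiny low-rank correction owned by exactly one user.
    def __init__(self, dim=768, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # starts as a no-op: everyone begins identical

    def forward(self, x):
        return x + self.up(self.down(x))

base = SharedBase()
for p in base.parameters():
    p.requires_grad = False  # the shared model never moves

adapters = {"you": PerUserAdapter(), "me": PerUserAdapter()}

def personalized_output(user_id, x):
    # Same base, different correction. Usage history is what pulls them apart.
    return adapters[user_id](base(x))

x = torch.randn(1, 768)
yours, mine = personalized_output("you", x), personalized_output("me", x)
# Identical on day one; training each adapter only on its owner's accepts
# and rejects is what makes the two outputs diverge over time.
```

Two users, one set of base weights, and every micro-decision only ever updates its owner's adapter. Divergence by construction.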
Last year I wrote about forking your tools. The power to consciously remix an interface. You decide to fork. Here the product forks itself. You didn’t configure anything. You worked, and your patterns became the product’s patterns.
The moat is the divergence, not the model.
Switching costs transform. Files travel fine between tools. Export, import, done. But the understanding doesn’t transfer. Thousands of micro-preferences the system learned that you never articulated. Switching means teaching a new tool from scratch.
I probably wouldn’t bother.
The infrastructure for per-user divergence isn’t here yet. But it’s coming. Capture the behavioral data now. Log the accepts, rejects, edits, regenerations per user. The companies sitting on that history will have a head start that takes years to replicate.
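What that logging could look like, minimally. A hypothetical schema, one row per micro-decision:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical per-user interaction event. Field names are illustrative.
@dataclass
class InteractionEvent:
    user_id: str
    suggestion_id: str
    action: str        # "accept" | "reject" | "edit" | "regenerate" | "abandon"
    context_hash: str  # what the model saw, hashed rather than stored raw
    latency_ms: int    # how long the user hesitated before deciding
    timestamp: float

def log_event(event: InteractionEvent, path="events.jsonl"):
    # Append-only JSONL: cheap to write now, raw material for per-user training later.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_event(InteractionEvent(
    user_id="u_123",
    suggestion_id="s_456",
    action="reject",
    context_hash="a9f3c0",
    latency_ms=850,
    timestamp=time.time(),
))
```

Nothing clever. The value isn't in the schema, it's in the years of history it accumulates.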
Users will feel it the moment they try to switch.


