AI Won't Take Your Job. Your Inability to Understand It Will.
The Bottleneck Economy: a framework for the only 4 types of workers left.
The old middle class pushed paper. The new middle class presses send.
Here's what I mean. Right now, today, AI can do basically everything — read your emails, draft replies, write reports, generate options for every decision you'll make this week. GPT-4, Claude, Gemini — the tech is there. That's obvious at this point.
So why is almost nobody actually using it?
Because we've been staring at the wrong bottleneck. Everyone — VCs, pundits, policy wonks — is fixated on making AI more powerful. More autonomous. More agentic. As if the problem is that the machine isn't smart enough. That's completely backwards. **The machine is already too smart for us.**
Here's the elephant in the room. As of February 2026:

• **800 million people** use ChatGPT weekly, sending 2.5 billion prompts per day.
• **88% of organizations** now use AI in at least one function (McKinsey).
• And yet — **62% of companies are still stuck in the experimentation phase.** Only 7% have scaled AI across their enterprise.
• The most common uses? Creative writing and roleplay (21%), homework help (18%), search queries (17%). Work tasks? Just 15%.
• OpenAI themselves published a report at Davos in January 2026 called **"Ending the Capability Overhang"** — their own data shows a massive and growing gap between what their models can do and how people actually use them.
• **Only 25% of workers have received any training** on how to apply AI at work.
• **Gen Z gets it** — 80% use AI for more than half their daily work. **50% of Boomers don't use it at all.**
So adoption has exploded. But depth? Depth is shallow. People are using the most powerful technology in human history to... write birthday poems and summarize articles.
That's the gap. That's the 1,000x. Not between what AI can produce and what humans *need* — between what AI can produce and what humans can absorb fast enough to act on. The bottleneck isn't artificial intelligence. It's human comprehension. And that bottleneck is restructuring the entire economy around itself.
> "There is an ever-expanding gap between the hyper-exponential acceleration of AI technology and its drastically underutilized adoption by humans — like the lag between the Industrial Revolution and its absorption, only 1,000x."
I've spent three years inside this gap. 400,000 messages with AI systems — not prompting, *building*. I first sketched this framework in a late-night ChatGPT session in March 2023, couldn't sleep, typed something like "what's actually the bottleneck?" The conversation that followed produced a sketch of 4 economic strata that I haven't been able to break since.
Here's where everyone lands by 2028.
## C. The Press-Send Class — The New Middle

**This is the big one.** Imagine AI does basically everything — reads your emails, sits in your meetings, drafts outreach, reports, options, all of it. Automatically. But someone still needs to look at it and press send. That's stratum C. They don't create — they judge. Accept. Reject. Approve. Flag. They're the human filter between AI output and the real world.
The EU AI Act (Article 14) now mandates human oversight for all high-risk AI systems — hiring algorithms, medical devices, credit scoring, biometrics. Fines up to €35 million. This isn't aspirational — it's enforceable law taking effect in 2026. Every serious AI deployment needs someone to look at the output and say *yes, send it* or *no, kill it.*
Here's what this looks like in practice. In 2024 I built a tax credit system. Old process: humans walk applicants through questions like "Were you underutilized when you started this job?" Most people had no idea what that meant. They'd panic and click "No." Each wrong answer cost the employer $5,000 in lost credits. I replaced it with audio-guided forms. Big buttons. Plain language. Under 60 seconds. Wrong answers dropped from 30% to under 5%. Built it solo in a week. $200K generated that year. The value wasn't in the AI — it was in making a human understand enough to press the right button. The bottleneck was where the money was.
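The economics of that fix come down to simple arithmetic. In this back-of-envelope sketch, the applicant count is a hypothetical round number chosen for illustration; the error rates and per-error cost come from the story above:

```python
# Back-of-envelope: value of cutting the wrong-answer rate on the credit form.
# The applicant count is hypothetical; rates and per-error cost are from the text.
applicants = 160                  # assumed yearly volume (illustrative)
cost_per_wrong_answer = 5_000     # dollars of lost credit per mistake
old_error_rate = 0.30
new_error_rate = 0.05

recovered = applicants * (old_error_rate - new_error_rate) * cost_per_wrong_answer
print(f"${recovered:,.0f}")       # $200,000
```

At that assumed volume, a 25-point drop in the error rate is worth six figures a year, which is why the value lives at the comprehension step, not in the model.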
And when companies try to skip this step? They learn fast. Klarna's AI assistant took over the work of 700 customer service agents in early 2024. Within about a year, the company was hiring humans back — quality had collapsed without them. The CEO admitted human support is now a "competitive advantage, not just a cost center." Remove the human, quality dies. The bottleneck reasserts itself. Every time.
## D. The Orchestrators — The 1%

Tiny group. These are the people who build and run the AI systems everyone else uses. They don't just use AI — they speak it fluently. First-principles stuff. They're not writing code — AI does that. They're deciding what gets built, what the guardrails are, what it optimizes for. 100% human control. 100% machine execution. Very few orchestrators. Extremely powerful.
McKinsey calls this the "agentic organization" — their September 2025 framework describes humans moving "above the loop." Midjourney disrupted visual arts with ~100 people and $500M in revenue. Cursor hit $1 billion ARR with 60 engineers.
## B. The Human-Touch Class — Presence as Product

Nurses. Therapists. Teachers. Carers. Anyone whose physical human presence IS the job. AI makes their logistics easier — scheduling, notes, diagnostics. But the core job? Sitting with someone who's struggling. Being *there.* You can't automate that. When everything goes digital, a real human sitting with you becomes the premium thing. A human therapist in 2028 is a luxury product. That's just where this goes.
## A. The Displaced — Not Permanent

Their jobs got eaten by AI and nobody taught them how to use it yet. They're stuck — not because they're stupid, but because nobody built them a bridge. This is not a permanent category. It's a translation failure. Every displaced person could be productive tomorrow if someone builds that bridge. That's not charity — that's the biggest opportunity nobody's building for.
## The Amplifier

Here's the part I haven't seen anyone else articulate. Everyone treats the bottleneck as a problem to solve — a bug to fix. Make AI more autonomous! Remove the human from the loop! Faster! I think that's catastrophically wrong.
The bottleneck is where all the value concentrates. *Because* humans must understand before deciding, there's a massive economy around making that understanding happen. Because someone must press send, that person becomes the irreducible point of accountability. Humans absorb spoken language at roughly 150 words per minute, and that rate hasn't budged in 100,000 years. Capital bottlenecks yield to investment. Compute bottlenecks yield to Moore's Law. But the human brain? It's the one bottleneck that doesn't scale. That's why it's the binding constraint. And that's why it's where all the value pools.
90 years of autopilot and there are still two pilots in the cockpit. Not because the plane can't fly itself — because we *choose* to keep humans accountable. The Press-Send class isn't temporary scaffolding like telephone operators. Past intermediaries translated between machines and processes. This one translates between machines and *human consequences.* It only becomes obsolete when the humans receiving the output become obsolete. **The constraint isn't the enemy. The constraint is the product.**
## What I'm Certain Of (and What I'm Not)

**For sure true — I'd bet on it:**

• Entry-level white-collar work is being gutted. Amodei says 50% of those jobs in five years.
• AI fluency is the new literacy. Universities are at record enrollment, yet most graduates can't steer an LLM.
• Regulation mandates human-in-the-loop. EU AI Act Article 14 is law.
• For planning purposes, sentient AI is irrelevant. Regulatory and liability structures will enforce human oversight for decades.

**Might be true — the signals say yes:**

• The Press-Send class becomes the largest employment category in the West.
• Human presence becomes a luxury good.
• 3–5 person teams start replacing 50-person departments.
• Human identity verification becomes a critical commodity.

**Should be true — worth building for:**

• AI only goes as fast as it can explain itself. Speed without comprehension is recklessness.
• Displacement is a fluency problem, not a fate.
• Compression, not automation. The winning protocol is "AI compresses infinity into auditable choices, you pick, it executes with proof."
• If the user can't verify the output, the feature isn't finished.
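That "compress, pick, execute with proof" protocol fits in a few lines. A minimal sketch under assumed names: `compress`, `execute`, and the option list are illustrative stand-ins, not a real pipeline:

```python
# Sketch of the "compress -> pick -> execute with proof" loop.
# All names and the option universe here are illustrative.

def compress(options: list[str], k: int = 3) -> list[str]:
    """Stand-in for the AI step: boil a huge option space down to k auditable choices."""
    return options[:k]

def execute(choice: str) -> dict:
    """Stand-in for execution that returns a verifiable record, not just a result."""
    return {"action": choice, "proof": f"audit-log entry for {choice!r}"}

universe = [f"option-{i}" for i in range(1_000)]  # "infinity", more or less
choices = compress(universe)   # AI narrows the field to something a human can read
picked = choices[0]            # the human presses send
receipt = execute(picked)      # execution leaves proof behind

print(picked)                  # option-0
print(receipt["proof"])        # audit-log entry for 'option-0'
```

The shape matters more than the code: the AI owns the narrowing, the human owns the choice, and the execution step is only done when it leaves something the human can verify.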
I'm building the bridge. That's what all of this is — the 400K messages, the frameworks, the systems. Making human-AI translation as efficient as possible.
But here's what I didn't expect when I started: the person I was most afraid wouldn't fit in this economy — no credentials, invisible to every system designed to find talent, a brain that processes everything obsessively and in parallel but can't do small talk — turned out to be exactly what this economy needs. Someone who goes deep. Someone who can't perform for a room but can conduct machines. Someone whose bottleneck became their amplifier.
That's the real thesis. Not just that the constraint is the product — but that YOU, the specific, weird, irreducible you, with your particular way of understanding the world, are the product. AI scales everything except the thing that makes you, you. And in an economy where machines do everything, the thing they can't replicate is the only thing that matters.
The question was never "will AI take my job?" The question is: **what can you understand that nobody else can?**