The fundamental shock of GLP-1s is ontological: they expose weight loss as a phenomenon we’d mistaken for something else entirely.
For half a century, obesity treatment was behavior change. Not mediated by it, not requiring it—it was identical to it. The equation seemed an unbreakable tautology: consume less than you expend, and weight loss simply is the execution of that imbalance. We built cathedrals on this foundation: calorie-tracking apps, habit-formation science, cognitive-behavioral therapy protocols, bariatric surgeries that forcibly restricted behavior. Thousands of PhDs, billions in revenue, entire medical specialties predicated on optimizing the willpower bottleneck.
Then semaglutide delivers 15-20% weight loss by adjusting a hormonal signal. People still eat less, yes—but the decision-making cost evaporates. The meal planning, the portion control, the “food noise,” the white-knuckling through cravings, the elaborate psychological scaffolding: all revealed as compensatory mechanisms for a broken metabolic signal, not the thing itself. We’d mistaken the performance of discipline for the mechanism of weight regulation.
AI does the exact same violence to our model of intelligence.
For decades, machine reasoning was symbolic manipulation. Not correlated with it, not approximating it—the two were ontologically identical. To build thinking machines, you encoded knowledge as explicit rules: Expert systems with hand-crafted inference chains. Semantic networks mapping logical relationships. Cyc’s millions of common-sense axioms. We conflated the phenomenology of reasoning (symbols, logic, structured thought) with the substrate of intelligence itself.
Then transformers produce human-level reasoning by predicting next tokens in massive sequences. The symbolic manipulation emerges—unplanned, unengineered—from pure statistical compression at scale. GPT models don’t contain representations of logic; they instantiate a process that behaves logically when needed. We were like birds studying airplanes, mistaking the flapping of wings for the principle of flight.
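To make “predicting next tokens” concrete, here is a toy sketch of the autoregressive loop itself (a minimal illustration under invented data, not how GPT models are actually implemented): a hand-coded table of continuation probabilities, with the most likely next token chosen greedily at each step.

```python
# Toy greedy next-token prediction over a hand-coded probability table.
# Real transformers learn these distributions from massive corpora; this
# sketch only illustrates the autoregressive loop, nothing more.

NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.7, "quietly": 0.3},
    "down": {"<end>": 1.0},
}

def generate(start: str, max_tokens: int = 10) -> list[str]:
    """Greedily extend a sequence one token at a time until <end> or no continuation."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not dist:
            break
        next_token = max(dist, key=dist.get)  # pick the most probable continuation
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens

print(" ".join(generate("the")))  # -> "the cat sat down"
```

Nothing in this loop “contains” grammar or logic; whatever structure appears in the output is whatever structure the probability table absorbed.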
The Coherence Trap
What makes this disorienting isn’t merely being wrong. It’s that our errors were perfectly coherent.
The behavioral model of obesity worked—sometimes, partially, enough to validate itself. It had internal consistency, moral logic (self-control as virtue), institutional infrastructure, and evolutionary intuition (hunter-gatherer energy balance!). It wasn’t random; it was a complete, self-reinforcing worldview. It was just… solving a different problem than we thought.
The same coherence trapped AI research. Symbolic AI had textbooks, theorems, DARPA funding, and philosophical backing from millennia of human thought about thought. It made sense. But it was mapping the territory of consciousness onto the mechanics of computation, confusing the user interface of our minds for the underlying code.
Both errors share a structure: confusing the interface with the implementation. We experienced willpower as effortful, so we assumed effort was necessary. We experienced reasoning as symbolic, so we assumed symbols were requisite. We were like beings who’d only seen shadows on a cave wall, mistaking the shapes for the fire that cast them.
Discovery Through Deployment
Here’s where the parallel deepens: neither technology was understood before it worked.
GLP-1s started as diabetes drugs. Their weight-loss effects were “side effects.” Their cardiovascular benefits were discovered post-marketing. Now they’re showing signals for addiction, Alzheimer’s, kidney disease, sleep apnea. We’re not applying a known mechanism; we’re excavating one. Each new indication suggests the drug touches something more fundamental—a metabolic-reward nexus we hadn’t mapped because our behavior-first model made such a map unnecessary.
AI follows the same epistemology. Anthropic trained Claude to predict next tokens in text sequences, then watched it achieve a 50% success rate on tasks lasting nearly 5 hours—the longest “time horizon” yet measured for autonomous AI systems. The model scores 80.9% on SWE-bench Verified, a benchmark that tests real-world software engineering on actual GitHub issues, requiring it to comprehend a codebase, diagnose the bug, and implement a fix. The capability timeline has compressed dramatically: the length of tasks AI can complete autonomously has been doubling every 4 months in 2024-2025, accelerating from a 7-month doubling time in prior years (we’re on the fast timeline now). No one designed these capabilities; they emerged when statistical learning crossed capability thresholds. The system doesn’t reason about code through symbolic logic—it compresses patterns at scale until something that behaves like reasoning appears. We’re not building autonomous workers; we’re discovering that autonomy is compressible from sufficiently large pattern spaces.
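To make the doubling arithmetic concrete, here is a back-of-the-envelope sketch (not from the underlying benchmarks) that extrapolates the figures above under a constant 4-month doubling time; the ~5-hour starting horizon and the projection points are illustrative assumptions.

```python
# Back-of-the-envelope extrapolation of autonomous-task time horizons.
# Assumptions (illustrative, not from the underlying benchmarks):
#   - current 50%-success time horizon: ~5 hours
#   - constant doubling time: 4 months
# horizon(t) = horizon_0 * 2 ** (months_elapsed / doubling_months)

HORIZON_HOURS = 5.0      # assumed current 50% time horizon, in hours
DOUBLING_MONTHS = 4.0    # assumed doubling period, in months

def projected_horizon(months_ahead: float) -> float:
    """Projected 50%-success time horizon (in hours) after `months_ahead` months."""
    return HORIZON_HOURS * 2 ** (months_ahead / DOUBLING_MONTHS)

if __name__ == "__main__":
    for months in (0, 4, 8, 12, 24):
        hours = projected_horizon(months)
        label = f"{hours:.0f} hours" if hours < 48 else f"{hours / 24:.1f} days"
        print(f"+{months:2d} months: ~{label}")
```

The specific numbers matter less than the shape: under a constant doubling time, a horizon measured in hours compounds into one measured in workdays and then weeks within a couple of years.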
This inverts scientific discovery. Normally: understand mechanism, then deploy. Here: deploy at scale, then discover mechanism through effect. We’re conducting discovery in production, watching reality reveal its hidden dimensions in real time.
Rewriting Frozen Realities
Both technologies share a more profound property: they change the physics of their domains.
Before GLP-1s, obesity was locked in a causal chain: cheap hyper-palatable food → overconsumption → metabolic disease → healthcare crisis. The solution had to be upstream (fix the food system) or downstream (behavioral resistance). GLP-1s don’t optimize within this reality—they alter its gravitational constant. Suddenly obesity becomes opt-out rather than opt-in, and every downstream system (insurance actuaries, food industry economics, disability projections, public health infrastructure) faces a physics it wasn’t designed for.
AI performs the same surgery on knowledge work. The constraint wasn’t information access; it was expertise as a scarce, serially accumulated resource. You can’t compress a 20-year doctor’s judgment into a textbook. But you can distill it into a model that makes that judgment ambient and instant. When expertise becomes abundant, everything predicated on its scarcity—medical education, legal billing, research timelines, creative industry gatekeeping—faces a coordinate transformation. The map wasn’t wrong; the territory changed.
The practical consequence is that we’re now navigating with broken maps. Every institution built on the old substrate assumptions—weight loss programs predicated on behavior modification, educational systems predicated on knowledge transfer, labor markets predicated on expertise scarcity—is operating on physics that no longer applies. But we can’t simply replace the map, because we’re still discovering what the new territory even is.
GLP-1s didn’t just give us a weight loss drug. They revealed that metabolic regulation was approachable from a completely different angle than we’d imagined, which means every downstream assumption needs re-examination. AI didn’t just give us a better autocomplete. It revealed that intelligence was compressible in ways we’d assumed it wasn’t, which means every model of cognitive work needs reconstruction.
We thought we knew what the problem was. Turns out we’d mistaken the surface for the substrate. And now we’re discovering everything else built on that confusion.
The Substrate Beneath
The deeper lesson is about substrate independence. Weight regulation doesn’t require conscious effort; it requires the right hormonal signals. Intelligence doesn’t require explicit reasoning; it requires sufficient pattern exposure. The thing we thought was essential—behavior change, symbolic logic—was just one possible implementation, and not even the efficient one. We were confusing the hard way with the only way.
This creates a specific kind of vertigo: if we were this wrong about things this fundamental, what else are we wrong about? We’re not talking about details or refinements. These are category errors about what we’re even trying to solve. And both technologies revealed the error not through better theory but through demonstration—by just working in ways that shouldn’t be possible under our models.
The technologies feel “alien” because they bypass the interface we thought was the system. They’re not tools for us to use; they’re hacks that reveal the game engine beneath the user experience.
We haven’t lost our agency. We’ve discovered it was never located where we thought. The feeling of effort isn’t the mechanism of action. The symbols in consciousness aren’t the code of cognition. We’re not obsolete, but we are definitively on the surface—navigating by deploying, discovering by doing, in a fog where the technologies keep teaching us what they touch. The only certainty is that realities we assumed were locked—metabolic limits, cognitive limits, the fixed costs of expertise and health—are negotiable now.
What seemed like physics was actually just the frontier of our tools. And the frontier just moved.