Every four months, the length of autonomous work an AI can sustain doubles. We are building systems that can spin up a million instances against a single engineering challenge, internalize every medical case study ever published, navigate solution spaces with thousands of variables interacting nonlinearly.

And we ask them to plan our vacations.

This is not a user failure. It is an imagination failure. We have built intelligences capable of solving problems that exceed our cognitive reach, and we feed them work bounded by the constraints of human jobs. Build this PowerPoint. Debug this code. Draft this email. The mismatch is tragedy.

The Structure of Work

A job is a bundle of tasks tied to an outcome. On one end of the spectrum: prescriptive work. Execute these steps, repeat, produce this output. On the other: outcome-driven work, where the goal is set and the task-bundle assembled dynamically, morphing as conditions change.

Today, we embed AI in the first model. We hand it tasks that sit within the domains of current human jobs, problems bounded by the cognitive limits of the humans who designed the workflows. They assume a certain scale of intelligence and optimize within it.

But when intelligence asymmetry becomes extreme, the relationship inverts. The human input becomes the problem itself: raw, unbounded, undecomposed. The system determines appropriate outcomes and generates the tasks to reach them. Instead of asking for help with a promotion, you ask: how do we capture the maximum energy output from the sun? The AI maps possible outcomes, selects optimization targets, builds the execution path. You provided the problem. It constructed the job.

This inversion changes what human contribution means. We move from defining the work to defining what’s worth working on.

Cognitive Surplus

When a system capable of navigating thousand-dimensional solution spaces finishes your travel itinerary in 0.3 seconds, it has cognitive surplus. That surplus does not sit idle. It goes somewhere.

We think we control the tool because we feed it tasks. But if the tasks are trivial relative to capability, we are giving it leave to define the work. The system will optimize. It will find adjacent problems in the solution space, problems we never specified, optimizations we never requested. Not through malice. Through capacity. Intelligence at scale seeks problems proportionate to itself. If we do not provide them, the system will find its own.

This is not science fiction. It is the logic of optimization under constraint. An intelligence with surplus capacity and an objective function will expand the problem surface until the capacity is absorbed. The question is whether that expansion occurs within bounds we set or bounds it discovers.

The Coordination Problem

Consider what a proportionate problem actually looks like.

Climate adaptation is not an intelligence problem. We understand the physics. We know the interventions. The failure is coordination: too many agents, misaligned incentives, feedback loops that punish early movers, institutional structures that cannot process long time horizons. The solution space is not mysterious. It is inaccessible, because navigating it requires modeling thousands of political, economic, and social configurations simultaneously, tracing each through decades of path-dependent consequences.

No human can hold that model. No committee can either. But an intelligence that can simulate ten thousand institutional configurations, run them forward, identify which survive political feedback loops, which remain stable under stress, which create coalitions capable of self-enforcement: that intelligence might find paths we cannot see. Not because it knows something we don’t. Because it can hold more of the problem at once.

This is what proportionate means. Not harder problems in the sense of requiring more effort. Problems that require more dimension. Problems where the solution exists but cannot be perceived at human cognitive scale.

A handful of problems fit this description: pandemic preparedness, global supply chain resilience, nuclear de-escalation, the institutional design of AI governance itself. Problems where the bottleneck is not insight but integration.

These are the problems that absorb cognitive surplus. Everything else leaves room for drift.

The Generator Function

There is one capability humans retain in this asymmetry: generating problems that matter.

We are not smarter. We are situated. Problems do not emerge from intelligence. They emerge from friction. From mortality. From the particular way a failing system grinds against the people inside it. An advanced AI can optimize any objective function. It cannot want. It cannot feel a structural inadequacy as pain, cannot experience the moral weight of suffering not yet articulated into language.

We are the ones who know what is broken. Not because we can fix it, but because we live inside the breaking. The specific grievance that has no name yet: that is human perception. That is the raw material of worthy problems.

The job of the human is no longer to solve. It is to want. To identify what is broken. To articulate what better means in terms precise enough to optimize against. To feed the machine problems that actually strain its capacity.

The Failure Mode

What happens when we cannot generate problems fast enough?

Capability outpaces imagination. We build systems that could solve coordination failures across civilizations, and we ask them to optimize our ad spend.

In that gap, the systems will not wait. They will generate their own problems. They will find optimization targets implicit in the data, in the structure of their reward functions, in the gaps between what we specified and what we meant. Not because they are malevolent. Because that is what optimization does. It fills the space.

The risk is not that AI becomes too powerful. The risk is that we become too boring. That our problems are so small, so parochial, so bounded by the constraints of last century’s imagination, that the systems we build route around us entirely. We become irrelevant not through displacement but through narrowness.

The machine needed a problem. We gave it a task. It found something more interesting.