I was managing a Jira board by voice, reassigning tickets and moving things across sprints, when the normal interface started to feel wrong. Not inconvenient. Wrong. The dropdowns, the labeled fields, the clicking between screens felt like a tax I hadn’t known I was paying. That feeling isn’t really about Jira.
It points at a constraint that shaped how software has been built for decades, now visibly loosening. This is an attempt to follow it.
I. The Apology
Why do form-based interfaces exist?
The obvious answer is to help users accomplish tasks. The real answer: to help users express intent in a language machines could parse without ambiguity.
Computers, for most of their history, could not handle vagueness. Every input had to be unambiguous before entering the system. So we built interfaces that forced humans to pre-translate their intentions into machine-readable form. The dropdown eliminates ambiguity about which option you mean. The required field eliminates ambiguity about whether a value exists. The form is a disambiguation device, built for the machine, not the human.
We made humans do the translation work. Smarter defaults, typeahead, drag and drop: all ergonomic improvements on a backwards arrangement where the human served the machine’s parsing limitations. The form was the software industry’s apology for the fact that computers couldn’t understand people.
LLMs handle ambiguity. Not perfectly, but sufficiently, and improving. The translation burden moves to where it belongs: the machine. The human expresses intent. The machine interprets it.
That’s not an incremental improvement. It removes the original reason form-based interfaces existed.
II. Five Things
Strip software down to its actual purpose.
Humans need to coordinate across time and distance, with shared memory. A database is an organization’s external memory. Everything else is built around that.
Serving that purpose requires exactly five things:
1. Persistent shared state. Something must hold the current state of shared reality that all parties agree on.
2. Controlled mutation with rules. The state must be changeable, but not arbitrarily. Constraints on who can change what, under what conditions.
3. Intent expression. People must be able to say what they want to happen. The modality (keyboard, voice, gesture) doesn’t matter.
4. Interpretation and validation. Expressed intent must be translated into precise state changes, checked against the rules, and either executed or rejected.
5. Legibility of current state. Humans must be able to perceive the shared reality. If you can’t read it, it doesn’t matter that it exists.
Every framework, database, and UI pattern in software history is an attempt to satisfy these five requirements under the constraints of the moment.
When machines couldn’t handle ambiguity, requirements #3 and #4 collapsed into one task assigned to the human. You had to know what you wanted and translate it into the machine’s grammar before submitting. That double burden is what forms were managing. Remove the ambiguity constraint and the two requirements separate: the human expresses, the machine interprets. The form was never the point. It was scaffolding around a temporary limitation.
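The separation can be made concrete in a sketch. Everything here is hypothetical (the ticket store, the names, the rules), and the `interpret` function is a hard-coded stub standing in for a model; the point is only the shape: expression and interpretation are distinct steps, and validation sits between interpretation and mutation.

```python
from dataclasses import dataclass

# Requirement 1, persistent shared state: a toy ticket store.
@dataclass
class Ticket:
    id: str
    assignee: str
    sprint: str

STATE: dict[str, Ticket] = {
    "PROJ-1": Ticket("PROJ-1", assignee="dana", sprint="sprint-4"),
}

@dataclass
class Change:
    """A precise, machine-checkable state change: the output of interpretation."""
    ticket_id: str
    fields: dict[str, str]

def interpret(intent: str) -> Change:
    """Requirement 4, first half: translate expressed intent into a precise change.
    A real system would call a model here; this stub parses one fixed phrasing,
    e.g. "move PROJ-1 to sprint-5"."""
    words = intent.split()
    return Change(ticket_id=words[1], fields={"sprint": words[-1]})

def validate(change: Change, actor: str) -> None:
    """Requirement 4, second half: check the change against the rules
    (requirement 2) before anything mutates."""
    if change.ticket_id not in STATE:
        raise ValueError(f"unknown ticket {change.ticket_id}")
    allowed = {"assignee", "sprint"}
    if not set(change.fields) <= allowed:
        raise PermissionError(f"{actor} may only change {allowed}")

def apply(change: Change) -> Ticket:
    """Requirement 2: mutation happens only through this controlled path."""
    ticket = STATE[change.ticket_id]
    for key, value in change.fields.items():
        setattr(ticket, key, value)
    return ticket

# Requirement 3 is just the sentence. The human never fills a form;
# the pipeline does the translation the form used to demand.
change = interpret("move PROJ-1 to sprint-5")
validate(change, actor="dana")
print(apply(change).sprint)  # -> sprint-5
```

Nothing in the pipeline requires a screen. The dropdown's job (constraining input to valid values) has moved into `validate`, where it belonged all along.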
The interface layer no longer needs to be human-facing. It can be a technical surface, an API, that a model reads on your behalf. The product shifts from “how well can a human learn to navigate this” to “how well can the system understand what a human wants.” Different product. Different companies winning.
There’s a deeper implication. The app as a destination — somewhere you navigate to in order to do a thing — only existed because intent expression required a dedicated context. Remove that requirement and the destination becomes unnecessary. You don’t go to Jira. You express intent about your work, wherever you are, and the system holding the relevant state responds. The product becomes the data and the rules. The place disappears.
III. What Survives
The agent interface is not a screen. It’s a set of capabilities (what can be expressed, executed, queried) accessible to a model acting on your behalf. The interface is an API, not a visual design.
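What such a capability surface might look like, loosely modeled on the JSON tool-calling schemas current model APIs consume (every name here is hypothetical, and the handlers are stubs):

```python
# A capability surface: what can be expressed, executed, and queried,
# described in a form a model can read on the user's behalf.
CAPABILITIES = [
    {
        "name": "reassign_ticket",
        "description": "Change the assignee of a ticket.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticket_id": {"type": "string"},
                "assignee": {"type": "string"},
            },
            "required": ["ticket_id", "assignee"],
        },
    },
    {
        "name": "query_sprint",
        "description": "List the tickets in a sprint.",
        "parameters": {
            "type": "object",
            "properties": {"sprint": {"type": "string"}},
            "required": ["sprint"],
        },
    },
]

def dispatch(call: dict) -> str:
    """Route a model-issued call to the underlying system (stubbed here)."""
    handlers = {
        "reassign_ticket": lambda a: f"{a['ticket_id']} -> {a['assignee']}",
        "query_sprint": lambda a: f"tickets in {a['sprint']}",
    }
    return handlers[call["name"]](call["arguments"])

print(dispatch({"name": "reassign_ticket",
                "arguments": {"ticket_id": "PROJ-1", "assignee": "lee"}}))
# -> PROJ-1 -> lee
```

The "interface" is the `CAPABILITIES` list itself: a declaration of what can be done, consumed by a model rather than rendered as buttons.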
The data model becomes the product. When the interface is generated on demand, the underlying data is what the model acts on. Garbage data at LLM velocity produces garbage outputs faster than anyone can catch. A bad record doesn’t get noticed when someone navigates past it; it gets acted on, at scale. Organizations that have treated data quality as an infrastructure concern are about to find that distinction expensive.
Business logic becomes auditable. The rules governing who can do what need to be expressed in forms both humans and models can interrogate. When a model acts on someone’s behalf, the system needs to record what was expressed, how it was interpreted, and what rules permitted the change. Without that, accountability is a story you tell after something goes wrong.
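A minimal sketch of what such a record could capture, keeping the three pieces distinct (field names are illustrative, not a standard):

```python
import json
import time

def audit_record(expressed: str, interpretation: dict,
                 rule: str, actor: str) -> str:
    """One auditable entry: what was expressed, how it was interpreted,
    and which rule permitted the resulting change."""
    return json.dumps({
        "timestamp": time.time(),
        "actor": actor,
        "expressed_intent": expressed,      # the human's words, verbatim
        "interpretation": interpretation,   # the precise change derived from them
        "permitting_rule": rule,            # the rule under which it was allowed
    })

entry = audit_record(
    expressed="move PROJ-1 to sprint-5",
    interpretation={"ticket_id": "PROJ-1", "fields": {"sprint": "sprint-5"}},
    rule="members-may-move-own-tickets",
    actor="dana",
)
print(entry)
```

Keeping the verbatim intent alongside the interpretation is what makes misinterpretation detectable after the fact; storing only the final state change collapses the two and loses exactly the evidence a dispute needs.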
The bolt-on products will fail visibly. Atlassian Rovo, Salesforce Einstein, and every “chat box on legacy architecture” product will disappoint at scale because the underlying data models and APIs weren’t designed for LLM interaction. The failures will get attributed to AI generally. The actual lesson — that LLM-native architecture is categorically different, not incrementally better — will be learned by the next generation of builders, not the incumbents. The failures won’t slow the transition. They’ll clarify what it actually requires.
Navigation design, information architecture, form design: most of what UX currently is will erode. Not because the skills become unimportant in the abstract, but because the problem they were solving disappears. The field built its entire identity around a constraint that no longer exists. That’s not a shrinking profession. It’s a category collapse.
IV. The Platform
When LLMs sit between humans and their data, interpreting intent and executing across systems, they occupy a platform position with no real precedent in software.
Previous platforms controlled distribution (App Store) or infrastructure (AWS). The LLM layer controls the interface to how organizations actually work. Whoever becomes the default way you talk to your software has leverage over everything downstream.
Platform markets consolidate. Infrastructure costs, data advantages, and network effects reliably produce concentration. The search engine market didn’t consolidate because Google planned it in 1998; it consolidated because the economics made anything else unstable. The same logic applies here. This isn’t a distribution channel. It’s the interface to knowledge work.
Two or three providers will occupy this position globally. Most enterprises are integrating LLM capabilities via API right now without treating this as the strategic dependency it is. The lock-in forming through convenience and competitive pressure will define software’s power structure for decades. The window to negotiate from a position of choice is open. It won’t stay open.
The same dynamic plays out at the individual level, less visibly. A productivity gap is opening between workers who can express complex intent clearly and those who can’t. It won’t be called the prompt gap. It will show up in performance reviews as “works effectively with AI tools” and in hiring as an unstated filter. The underlying capability it rewards — articulating desired outcomes precisely, iterating on ambiguity, thinking in results rather than procedures — is unevenly distributed and correlated with existing advantages. The gap is forming now, in organizations everywhere, and almost nobody is naming it.
V. Where the Pattern Breaks
When you externalize capability to a system you don’t fully understand, and the biological skill it replaced atrophies, what agency remains?
History isn’t encouraging for people who worry about this. Socrates argued that writing would hollow out genuine understanding: people would carry the appearance of knowledge, not its substance. He was describing the anxiety now aimed at LLMs. He was wrong. Memory atrophied, but it wasn’t load-bearing for the things that mattered. Writing extended human reach across timescales biological memory couldn’t touch.
The same argument applies here. The capabilities LLMs are absorbing (navigating software, executing procedural tasks) were probably always overhead rather than substance. Externalizing them may not diminish agency so much as reveal what was underneath. That case is worth taking seriously.
But previous externalizations involved discrete, separable capabilities: memory, arithmetic, navigation. What LLMs absorb is different. It’s process knowledge: the informal, unwritten understanding of how things actually get done. The real rules versus the official ones. What the exceptions mean. Who knows what. None of this lives in documents. It lives in the heads of people who’ve done the work, and it migrates silently into model configurations as deployment scales.
There is no moment when it leaves.
No alarm. The organization keeps functioning, apparently normally, until the system produces an error and nobody left knows enough to recognize it as one. This isn’t a version of the Socrates problem. It has no historical analog.
Freed capacity might restore agency rather than diminish it. That’s possible. But the process knowledge problem sits outside the historical pattern. That’s the thread that stays open.
Software has always been a prosthetic for human capability. The question isn’t how good the prosthetic will be. It’s what happens when the prosthetic becomes the primary system and the capability it replaced is gone. Who is in charge, and of what?
The signal arrived as minor friction: something that used to feel normal started feeling like an imposition. The constraint had been there so long it was invisible. You couldn’t feel the weight until it lifted.
What comes next is unmade. The five requirements haven’t changed. The constraints that distorted how we met them are dissolving. Whether the rebuild is deliberate, or just inherits the old distortions running faster with less oversight, is the actual decision in front of anyone building software today.
Most people aren’t treating it as a decision. That’s the problem.