Context Will Replace Your Design
For fifty years, we designed interfaces to help humans talk to computers. Now AI agents can talk to computers for us. So what happens to design?
Jan 27, 2026

There's a particular irony in typing a Slack DM to an AI assistant that will help me write about why interfaces don't matter anymore... There are just words in a box (or in this case, a series of manic voice notes), and a system that understands what I need.
I've been running Clawdbot, a persistent AI assistant, for a few days (yeah, only days) now. It reads my email, manages my calendar, browses the web on my behalf, and remembers conversations across sessions. It does everything my current VA does. It's changed how I think about software design. Clawdbot has no interface, no web page for interaction, nothing... and the absence of a traditional interface hasn't diminished the experience. If anything, it's clarified something we've been dancing around since agents started to take shape: most interface design exists to compensate for computers not understanding what users want.
The Translation Layer We Built
For fifty years, interface design has solved a specific problem: we humans think in fuzzy concepts and intentions, while computers operate on instructions and data. Going from "I want to book a flight to Chicago next Tuesday" to the seventeen form fields, dropdowns, and confirmation screens required to book a flight is what interface designers do.
There are disciplines around this translation work. Information architecture. Visual hierarchy. Affordances and signifiers. Progressive disclosure. All this design exists to help people navigate the rigid structures that computers require. Click here, not there. Fill this field before that one. Read the error message, understand what went wrong, try again.
Design makes speaking “computer” effortless, drawing on the premise that humans adapt to the system.
When the System Adapts to You
Over the weekend, while skiing, I sent my assistant a voice memo that said, "draft a 2,000-word essay about the future of design in a post-agentic world, use Clawdbot as a case study, make it sound like my writing style," and... it did a pretty mediocre job on the essay… but the interaction was perfect. I used WhatsApp and Slack, sent voice notes while driving and skiing, and it read my other articles on focused.io and austinbv.com. It blew me away. It's a new paradigm for design.
The agent, Clawdbot, did all the input work for me. It didn't need meticulously designed input fields, because the LLM can parse my natural language. It doesn't need visual hierarchy to guide attention because it just asks clarifying questions. There's no more progressive disclosure because it can handle complexity directly.
So, what does design become when the translation layer is the system itself?
The Clawdbot Case Study
My current setup involves an AI assistant (clawdbot) that:
- Lives in WhatsApp and Slack (messaging interfaces I already use)
- Maintains persistent memory across conversations
- Has access to my email, calendar, and files
- Can browse the web and interact with services
- Runs scheduled tasks and proactive check-ins
The "interface" is the chat apps, but the beauty is the tools. Clawdbot connects to other APIs and handles tasks that would otherwise require dedicated applications, with logins and passwords and 2FA and carefully designed websites: email clients, calendar apps, travel booking sites, task managers, note-taking tools.
All of that feels like noise now that I think about it... Designers in an agentic world should treat the agent as the user, not a person. You can then frame the work as:
- Context structures - What information does the AI need to understand my preferences, constraints, and goals?
- Memory systems - How does the assistant maintain continuity across sessions without drowning in irrelevant history?
- Tool interfaces - How do we expose capabilities (email, calendar, web browsing) in ways that AI can reliably invoke?
- Failure modes - What happens when the AI misunderstands? How do we make correction natural?
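The "tool interfaces" point above can be made concrete. Here's a minimal sketch of what exposing a capability to an agent might look like, in the JSON-schema style that many LLM tool-calling APIs use. The `check_calendar` tool and its fields are hypothetical, not Clawdbot's actual API:

```python
# A hypothetical tool definition an agent could invoke. The name, fields,
# and enum values are illustrative; the shape loosely follows the
# JSON-schema convention common to LLM function-calling APIs.
check_calendar_tool = {
    "name": "check_calendar",
    "description": "List events on the user's calendar for a given date range.",
    "parameters": {
        "type": "object",
        "properties": {
            "start_date": {"type": "string", "description": "ISO 8601 date, e.g. 2026-01-27"},
            "end_date": {"type": "string", "description": "ISO 8601 date, inclusive"},
            "calendar": {"type": "string", "enum": ["work", "personal"]},
        },
        "required": ["start_date", "end_date"],
    },
}
```

Notice how much of this is design work in the classic sense: what to name things, which fields are required, what the description needs to say for the agent to invoke it reliably.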
Design becomes decoupled from the interface: it's about "how it works" rather than "how it looks." Isn't that what designers wanted all along?
Context Engineering: The New Discipline
There’s a new term for what's emerging: context engineering. It's the discipline of designing the information environments in which AI agents operate.
Traditional interface design asks: "How do we present information so humans can find and act on it?" Context engineering asks: "How do we structure information so AI can understand and act on it appropriately?"
Sound similar? They are, but they lead to radically different practices. Visual hierarchy doesn't matter to an AI; it can process information in any order. Color and iconography don't matter. Animation and micro-interactions don't matter.
What matters:
- Information completeness - Does the AI have everything it needs to make good decisions?
- Semantic clarity - Is the information structured in ways the AI can reliably parse?
- Constraint specification - Are the boundaries of acceptable action clearly defined?
- Context efficiency - Can we convey what's needed without wasting tokens (and money, and latency)?
That last point deserves emphasis. In the current paradigm, loading a webpage or JSON or whatever costs the user attention and costs the provider bandwidth, both relatively cheap. In an agentic paradigm, loading context costs tokens, which cost real money at scale, add latency that compounds, and confuse agents across multi-step tasks.
The entire economics of information presentation flips. Dense, structured, machine-readable information beats sparse, visual, human-readable information.
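A rough illustration of that flip: the same flight fact expressed as a human-oriented HTML fragment versus a dense machine-readable record. The flight details are made up, and splitting on whitespace is a crude stand-in for a real tokenizer, but the ratio is the point:

```python
# The same information, twice. Whitespace-splitting approximates token count.
html_version = """
<div class="flight-card">
  <span class="airline">United Airlines</span>
  <span class="route">Denver (DEN) to Chicago (ORD)</span>
  <span class="time">Departs Tuesday at 9:15 AM</span>
  <span class="price">From $189 one way</span>
  <button class="cta">Select this flight</button>
</div>
"""
dense_version = "UA DEN-ORD 2026-02-03T09:15 $189"

html_tokens = len(html_version.split())
dense_tokens = len(dense_version.split())
print(html_tokens, dense_tokens)  # the dense record is several times smaller
```

And real pages carry far more than this: navigation, scripts, tracking, branding, all of it billed to the agent in tokens and latency.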
What Agents Need
When I watch Clawdbot browse the web on my behalf, it’s painful. It loads pages designed for humans, rich with images, navigation, whitespace, branding, and extracts the tiny fraction of information it needs. A flight search that a human might complete in five minutes of visual scanning takes the AI multiple page loads, screenshots, and navigation of UIs designed to be intuitive for people who look rather than read.
This is obviously transitional. As agents become more prevalent, services will increasingly expose information in agent-friendly formats. I don’t think this is traditional APIs (they are too noisy), but something more flexible, semantic interfaces that can adapt to varied requests while remaining token-efficient.
And a new design challenge emerges, going from "how do we make this look good to humans" to "how do we make this legible to agents while remaining interpretable by humans for oversight?" Yup, with oversight.
It’s harder than it sounds. Pure API endpoints are good for agents but opaque to human oversight. Visual interfaces are easy for humans to monitor but wasteful for agents. The emerging discipline will be the middle ground: interfaces that are simultaneously agent-efficient and human-auditable. Until the agent just gets what it needs and builds the interface on the fly.
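One way that middle ground might look, as a sketch: a service response that carries a compact machine record for the agent alongside a one-line summary a human can skim during oversight. The field names and the `flight_result` helper are hypothetical, not any real service's API:

```python
import json

def flight_result(record: dict) -> str:
    """Return a compact machine record plus a one-line human-auditable summary.

    Hypothetical format: "data" is for the agent, "audit_summary" is for
    the human reviewing what the agent saw.
    """
    summary = (f"{record['carrier']} {record['origin']}->{record['dest']} "
               f"dep {record['depart']} for ${record['price']}")
    # Compact separators keep the agent-facing payload token-efficient.
    return json.dumps({"data": record, "audit_summary": summary},
                      separators=(",", ":"))

result = flight_result({
    "carrier": "UA", "origin": "DEN", "dest": "ORD",
    "depart": "2026-02-03T09:15", "price": 189,
})
```

The agent parses `data`; the human reads `audit_summary`. Neither side pays for the other's needs twice.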
The Skills That Transfer
If you're a designer reading this with anxiety about obsolescence, I think your core skills transfer. APIs always had users: the systems were users, the developers were users. Now the agents are users.
Understanding user intent remains crucial. You're solving for what people want, even when the interface between the user and the system is an agent.
Information architecture evolves. Structuring information so an agent can navigate it requires the same thinking about relationships, hierarchies, and access patterns.
Systems thinking becomes more important. Agent-based systems involve more moving pieces, more failure modes, more need for coherent components.
What won’t transfer is the visual craft, the micro-interactions, the animation choreography.
Design Becomes Invisible
There's a Marshall McLuhan quality to what's happening. The interfaces that are everywhere in our visual landscape, the screens, buttons, forms, and menus, they aren’t the point. They are necessary infrastructure for the pre-AI era when computers couldn't understand intent and humans had to translate, or the interface had to force it.
As that scaffolding comes down, design becomes the structure of context and the shape of information. It’s the logic of systems that we interact with but no longer see.
So, the best interface is no interface, as the cliché goes. And we're about to find out if we meant it.
My AI assistant doesn't have a color scheme. It doesn't have a logo animation or a distinctive illustrative style. It lives in generic chat windows with access to multiple platforms. And it's the most thoughtfully designed software I’ve interacted with, because the design is in what it knows, how it reasons, and how it adapts to what I need.
That's the future of design in an agentic world.
