Why Generative AI Requires Us to Rethink the Foundations of User-Centered Design
User-centered design has long been our north star—grounded in research, journey mapping, and interfaces built around stable, observable tasks. It has been methodical, human-centered, and incredibly effective—until now.
LLM-based generative AI and agentic experiences have upended this entire paradigm. These technologies don’t follow predefined scripts. Their interfaces aren’t fixed, their user journeys can’t be mapped, and their purpose unfolds as interaction happens. The experience doesn’t precede the user; it emerges from the user’s interaction with the LLM.
This shift demands a new design framework—one that embraces unpredictability and builds adaptive systems capable of responding to fluid goals. One that doesn’t deliver rigid interfaces, but scaffolds flexible environments for creativity, productivity, and collaboration. At Robots & Pencils, we call this approach Emergent Experience Design.
The Limits of Task-Based UX
Traditional UX design starts with research that discovers jobs to be done. We uncover user goals, design supporting interfaces, and optimize them for clarity and speed. When the job is known and stable, this approach excels.
But LLM-based systems like ChatGPT aren’t built for one job. They serve any purpose that can be expressed in language at runtime. The interface isn’t static. It adapts in real time. And the “job” often isn’t clear until the user acts.
If the experience is emergent, our designs need to be as well.
Emergent Experience Design: A UX Framework for Generative AI
Emergent Experience Design is a conceptual design framework for building systems that stay flexible without losing focus. These systems don’t follow scripts. Instead, they:
- Adapt to user goals in real time
- Respond intelligently to unpredictable behavior
- Stay aligned to intended outcomes without relying on rigid structures
To do that, they’re built on three types of components:
1. Open Worlds
Open worlds are digital environments intentionally designed to invite exploration, expression, and improvisation. Unlike traditional interfaces that guide users down linear paths, open worlds provide open-ended sandboxes for users to work freely—adapting to user behavior, not constraining it. They empower users to bring their own goals, define their own workflows, and even invent new use cases that a designer could never anticipate.
To define these worlds, we begin by choosing the physical or virtual space: a watch, a phone, a desktop computer, or even smart glasses. Then we choose one or more interaction design metaphors for that space, such as a 3D world, a spreadsheet grid, or a voice interface. A design vocabulary then defines what elements can exist within that world, from atomic design elements like buttons, widgets, cells, images, or custom inputs, to more expressive functionality like drag-and-drop layouts, formula editors, or dialogue systems.
Finally, open worlds are governed by a set of rules that control how objects interact. These can be strict (like physics constraints or permission layers) or soft (like design affordances and layout behaviors), but they give the world its internal logic. The more elemental and expressive the vocabulary and rules are, the more varied and creative the user behavior becomes.
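To make this concrete, here is a minimal sketch of how a world’s space, metaphor, vocabulary, and rules could be captured declaratively. All of the type and field names below are illustrative assumptions, not part of any existing library or product.

```typescript
// Illustrative sketch of an open world described declaratively.
// Every name here is hypothetical; nothing maps to a real SDK.

type ElementKind = "button" | "textInput" | "cell" | "image" | "formulaEditor" | "dialogueTurn";

interface ElementSpec {
  kind: ElementKind;
  label: string;
  // Which user actions the element affords.
  affordances: Array<"tap" | "drag" | "edit" | "speak">;
}

interface WorldRule {
  description: string;            // e.g. "widgets snap to an 8pt grid"
  enforcement: "strict" | "soft"; // strict = physics/permissions, soft = affordances/layout behavior
}

interface OpenWorld {
  space: "watch" | "phone" | "desktop" | "glasses";
  metaphor: "canvas" | "grid" | "voice" | "3d";
  vocabulary: ElementSpec[]; // what can exist in the world
  rules: WorldRule[];        // how those things are allowed to interact
}

// A spreadsheet-like world on desktop, expressed in this vocabulary.
const spreadsheetWorld: OpenWorld = {
  space: "desktop",
  metaphor: "grid",
  vocabulary: [
    { kind: "cell", label: "Cell", affordances: ["tap", "edit"] },
    { kind: "formulaEditor", label: "Formula bar", affordances: ["edit"] },
  ],
  rules: [
    { description: "formulas may only reference cells in the same sheet", enforcement: "strict" },
    { description: "columns auto-resize to fit content", enforcement: "soft" },
  ],
};
```

The richer and more composable this vocabulary is, the more room users and agents have to improvise within it.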
Different environments will necessitate different component vocabularies: which elements can be placed, modified, or triggered within the world. By exposing this vocabulary via a structured interface protocol (similar to the Model Context Protocol, or MCP), LLM agents can purpose-build new interfaces in the world on the fly, tailored to the medium. A smartwatch might expose a limited set of compact controls, a desktop app might expose modal overlays, windows, or toolbars, and a terminal interface might offer only text-based interactions. Yet from the agent’s perspective, these are just different dialects of the same design language, enabling the same user goal to be rendered differently across modalities.
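As a hedged sketch of what that capability exposure could look like, consider the query below. The surfaces, component names, and functions are assumptions chosen for illustration, loosely inspired by MCP-style discovery rather than taken from any real protocol server.

```typescript
// Hypothetical capability query. In a real system, describeSurface would
// call the surface's protocol server rather than a local lookup table.

type Surface = "watch" | "desktop" | "terminal";

interface SurfaceCapabilities {
  surface: Surface;
  components: string[]; // the dialect of the design language this surface speaks
}

async function describeSurface(surface: Surface): Promise<SurfaceCapabilities> {
  const dialects: Record<Surface, string[]> = {
    watch: ["compactButton", "shortText", "hapticAlert"],
    desktop: ["modal", "window", "toolbar", "richTextInput"],
    terminal: ["textPrompt", "textOutput"],
  };
  return { surface, components: dialects[surface] };
}

// The agent plans the same goal ("confirm an action") against whatever the surface offers.
async function planConfirmation(surface: Surface): Promise<string> {
  const caps = await describeSurface(surface);
  if (caps.components.includes("modal")) return "modal";
  if (caps.components.includes("compactButton")) return "compactButton";
  return "textPrompt";
}
```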
Open worlds don’t prescribe a journey—they provide a landscape. And when these environments are paired with agents, they evolve into living systems that scaffold emergent experiences rather than dictate static ones.
2. Assistive Agents
Assistive agents are the visible, intelligent entities that inhabit open worlds and respond to user behavior in real time. Powered by large language models or other generative systems, these agents act as collaborators—interpreting context, responding to inputs, and acting inside (and sometimes outside) the digital environment. Rather than relying on hardcoded flows or fixed logic, assistive agents adapt dynamically, crafting interactions based on historical patterns and real-time cues.
Each assistive agent can be shaped by two key ingredients:
- Instinct: The training and architecture of the underlying LLM, which provide its foundational capabilities. This could include the ability to understand text or image inputs, the language in which it responds, and its underlying reasoning patterns.
- Identity: The purpose and personality assigned through prompt instructions and contextual inputs that shape the agent’s perspective—what it knows, how it prioritizes information, and how it speaks or acts.
These two ingredients work together to shape agent behavior: instinct governs what the model can do, while identity defines what it should do in a given context. Instinct is durable—coded in the model’s training and architecture—while identity is flexible, applied at runtime through prompts and context. This separation allows us to reuse the same foundation across wildly different roles and experiences, simply by redefining the agent’s identity.
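A minimal sketch of that split, assuming a hypothetical agent configuration (the model id, field names, and prompts below are placeholders, not a real API):

```typescript
// Instinct = the fixed model; identity = purpose and context applied at runtime.

interface Identity {
  role: string;          // who the agent is in this experience
  systemPrompt: string;  // purpose, tone, and priorities
  context: string[];     // documents, app state, history supplied at runtime
}

interface AssistiveAgent {
  instinct: { model: string }; // durable: baked into training and architecture
  identity: Identity;          // flexible: swapped per experience, no retraining
}

const baseInstinct = { model: "some-foundation-model" };

// Same instinct, two very different agents.
const onboardingGuide: AssistiveAgent = {
  instinct: baseInstinct,
  identity: {
    role: "onboarding guide",
    systemPrompt: "Help new users set up their workspace. Be brief and encouraging.",
    context: ["workspace settings", "signup answers"],
  },
};

const dataAnalyst: AssistiveAgent = {
  instinct: baseInstinct,
  identity: {
    role: "data analyst",
    systemPrompt: "Answer questions about the attached dataset. Cite the columns you used.",
    context: ["dataset schema", "recent queries"],
  },
};
```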
Agents can perceive a wide variety of inputs, from typed prompts and voice commands to UI events, changes in application state, and even external signals from APIs and sensors. Increasingly, these agents are also gaining access to formalized interfaces: structured protocols that define what actions can be taken in a system, and what components are available for composition. One emerging standard, the Model Context Protocol (MCP) introduced by Anthropic, provides a glimpse of this future: an AI agent can query a system to discover its capabilities, understand the input schema for a given tool or interface, and generate the appropriate response. In the context of UI, this approach should also open the door to agents that can dynamically compose interfaces based on user intent and a declarative understanding of the available design language.
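The discover-then-act loop that pattern enables can be sketched roughly as follows. The interfaces here are simplified stand-ins rather than the actual MCP specification, which is JSON-RPC based and considerably richer.

```typescript
// Simplified stand-in for MCP-style tool discovery; not the real protocol surface.

interface ToolDescriptor {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema describing the arguments the tool accepts
}

interface CapabilityServer {
  listTools(): Promise<ToolDescriptor[]>;
  callTool(name: string, args: unknown): Promise<unknown>;
}

// 1. Discover what the system can do.
// 2. Let the model choose a tool and produce arguments that satisfy its schema.
// 3. Execute the call and feed the result back into the agent's context.
async function actOnIntent(
  server: CapabilityServer,
  intent: string,
  chooseTool: (intent: string, tools: ToolDescriptor[]) => { name: string; args: unknown },
): Promise<unknown> {
  const tools = await server.listTools();
  const choice = chooseTool(intent, tools); // in practice, an LLM call constrained by inputSchema
  return server.callTool(choice.name, choice.args);
}
```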
Importantly, while designers shape an agent’s perception and capabilities, they don’t script exact outcomes. This keeps the agent flexible and resilient, able to improvise intelligently in response to emergent user behavior. In this way, assistive agents move beyond simple automation and become adaptive collaborators inside the experience.
The designer’s job is not to control every move the agent makes, but to equip it with the right inputs, mental models, and capabilities to succeed.
3. Moderating Agents
Moderating agents are the invisible orchestration layer of an emergent system. While assistive agents respond in real time to user input, moderating agents maintain focus on long-term goals. They ensure that the emergent experience remains aligned with desired outcomes like user satisfaction, data completeness, business objectives, and safety constraints.
These agents function by constantly evaluating the state of the world: the current conversation, the user’s actions, the trajectory of the interaction, and any external signals or thresholds. They compare that state to a defined ideal or target condition, and when gaps appear, they nudge the system toward correction. This could take the form of suggesting a follow-up question to an assistant, prompting clarification, or halting actions that risk ethical violations or user dissatisfaction.
Moderating agents are not rule-based validators. They are adaptive, context-aware entities that operate with soft influence rather than hard enforcement. They may use scoring systems, natural language evaluations, or AI-generated reasoning to assess how well a system is performing against its goals. These agents often manifest through lightweight interventions—such as adjusting the context window of an assistive agent, inserting clarifying background information, reframing a prompt, or suggesting a next step. In some cases, they may even take subtle, direct actions in the environment—but always in ways that feel like a nudge rather than a command. This balance allows moderating agents to shape behavior without disrupting the open-ended, user-driven nature of the experience.
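One way to picture this loop is the sketch below, with illustrative names and a placeholder scorer standing in for what would, in practice, be an LLM-based evaluation.

```typescript
// Sketch of a moderating loop: observe the world, score it against a goal,
// and respond with a nudge rather than a command. All names are illustrative.

interface WorldState {
  transcript: string[];             // the current conversation
  userActions: string[];            // recent actions in the environment
  signals: Record<string, number>;  // external metrics, e.g. data completeness
}

type Intervention =
  | { kind: "none" }
  | { kind: "injectContext"; text: string }        // add background to the assistant's context
  | { kind: "suggestFollowUp"; question: string }  // nudge the assistant toward a clarification
  | { kind: "halt"; reason: string };              // stop actions that cross a hard constraint

interface ModeratingAgent {
  goal: string;
  threshold: number;                 // below this score, the agent intervenes
  score(state: WorldState): number;  // 0..1; in practice an LLM-based evaluation
  intervene(state: WorldState): Intervention;
}

function moderate(agent: ModeratingAgent, state: WorldState): Intervention {
  return agent.score(state) >= agent.threshold ? { kind: "none" } : agent.intervene(state);
}
```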
Designers configure moderating agents through clear articulation of intent. This can include writing prompts that define goals, thresholds for action, and strategies for response. These prompts serve as the conscience of the experience—guiding assistants subtly and meaningfully, especially in open-ended contexts where ambiguity is the norm.
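In practice, such a prompt might read something like the following. The scenario and wording are hypothetical, but they show goals, thresholds, and response strategies expressed together in plain language.

```typescript
// A hypothetical moderating prompt for an intake assistant, stored as configuration.
const intakeModeratorPrompt = `
You oversee an intake assistant.
Goal: by the end of the session, the user's name, budget, and timeline are captured.
Threshold: if two consecutive user turns add none of these, act.
Strategy: suggest one gentle clarifying question to the assistant; never address the user directly.
Hard limit: if the user asks to stop, end data collection immediately.
`;
```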
Moderating agents are how we bring intentionality into systems that we don’t fully control. They make emergent experiences accountable, responsible, and productive without sacrificing their openness or creativity.
From Intent to Interface: The Role of Protocols
The promise of Emergent Experience Design doesn’t stop at agent behavior; it extends to how the experience itself is constructed. If we treat user goals as structured intent and our UI vocabulary as a queryable language, then the interface becomes the result of a real-time negotiation between those two forces.
This is where the Model Context Protocol becomes especially relevant. Originally defined as a mechanism for AI agents to discover and interact with external tools, MCP also offers a compelling lens for interface design. Imagine every environment, from mobile phones to smartwatches to voice UIs, offering a structured “design language” via an MCP server. Agents could then query that server to discover what UI components are supported, how they behave, and how they can be composed.
A single requirement—say, “allow user to log in”—could be expressed through entirely different interfaces across devices, yet generated from the same underlying intent. The system adapts not by guessing what to show, but by asking what’s possible, and then composing the interface from the capabilities exposed. This transforms the role of design systems from static libraries to living protocols, and makes real-time, device-aware interface generation not just feasible, but scalable.
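A sketch of that negotiation for the log-in example follows; the component names and fallback order are assumptions chosen for illustration, not a prescribed design system.

```typescript
// Same intent ("allow user to log in"), composed from whatever components
// each surface's design-language server reports. Names are illustrative.

interface DesignLanguage {
  surface: string;
  components: string[];
}

interface IntentPlan {
  surface: string;
  composition: string[];
}

function composeLogin(lang: DesignLanguage): IntentPlan {
  // Full-form surfaces get a classic form; constrained surfaces fall back
  // to whatever they expose, down to a voice confirmation.
  if (lang.components.includes("form") && lang.components.includes("passwordField")) {
    return { surface: lang.surface, composition: ["form", "emailField", "passwordField", "submitButton"] };
  }
  if (lang.components.includes("compactButton")) {
    return { surface: lang.surface, composition: ["compactButton: continue on phone"] };
  }
  return { surface: lang.surface, composition: ["voicePrompt: confirm identity"] };
}

// Three surfaces, three interfaces, one intent.
composeLogin({ surface: "desktop", components: ["form", "emailField", "passwordField", "submitButton"] });
composeLogin({ surface: "watch", components: ["compactButton", "shortText"] });
composeLogin({ surface: "voice", components: ["voicePrompt"] });
```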
A Mindset Shift for Designers
In this new paradigm, interfaces are no longer fixed blueprints. They are assembled at runtime based on emerging needs. Outcomes are not guaranteed—they are negotiated through interaction. And user journeys are not mapped—they are discovered as they unfold. This dynamic, improvisational structure demands a design framework that embraces fluidity without abandoning intention.
As designers, we have to move from being architects of static interfaces to becoming cultivators of digital ecosystems. Emergent Experience Design is the framework that lets us shape the tools and environments where humans co-create with intelligent assistants. Instead of predicting behavior, we guide it. Instead of controlling the path, we shape the world.
Why It Matters
Traditional UX assumes we can observe and anticipate user goals, define the right interface, and guide people efficiently from point A to B. That worked—until GenAI changed the rules.
In agentic systems, intent is fluid. Interfaces are built on the fly. Outcomes aren’t hard-coded—they unfold in the moment. That makes our current design models brittle. They break under uncertainty.
Emergent Experience Design gives us a new toolkit. It helps us move from building interfaces for predefined jobs to crafting systems that automate discovery, collaboration, and adaptation in real time.
With this framework, we can:
- Meet users where they are—not where we expect them to be
- Guide them through complex systems with responsive, context-aware support
- Preserve creativity, flexibility, and human agency at every step
In short: it lets us design with the user, not just for them. And in doing so, it unlocks entirely new categories of experience—ones too dynamic to script, and too valuable to ignore.