The first time an AI assistant rescheduled a meeting without human input, it felt like a novelty. Now it happens daily. Agents draft documents, route tickets, manage workflows, and act on our behalf. They are no longer hidden in the background. They have stepped into the front lines, shaping experiences as actively as the people they serve.
For research leaders, that changes the question. We have always studied humans. But when an agent performs half the task, who is the user?
Agents on the Front Lines of Experience
AI agents reveal truths that interviews cannot. Their activity logs expose where systems succeed and where they stumble. A rerouted request highlights a friction point. A repeated error marks a design flaw. Escalations and overrides surface moments where human judgment still needs to intervene. These are not anecdotes filtered through memory. They are live records of system behavior.
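To make that concrete, here is a minimal sketch of how friction signals might be tallied from an agent activity log. The log schema and event names are assumptions for illustration, not any particular platform’s format.

```python
from collections import Counter

# Hypothetical log records; the schema and event names are illustrative,
# not drawn from any specific agent platform.
events = [
    {"agent_id": "a1", "event": "reroute",    "step": "ticket_triage"},
    {"agent_id": "a1", "event": "error",      "step": "form_fill"},
    {"agent_id": "a2", "event": "error",      "step": "form_fill"},
    {"agent_id": "a2", "event": "escalation", "step": "refund_approval"},
    {"agent_id": "a1", "event": "override",   "step": "refund_approval"},
]

# Tally friction signals per workflow step: reroutes flag friction points,
# repeated errors mark design flaws, escalations and overrides show where
# human judgment still intervenes.
friction = Counter(
    (e["step"], e["event"])
    for e in events
    if e["event"] in {"reroute", "error", "escalation", "override"}
)

for (step, event), count in friction.most_common():
    print(f"{step}: {event} x{count}")
```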
And that’s why we need to treat agents as participants in their own right.
A New Kind of Participant
Treating agents as research participants reframes what discovery looks like. Interaction data becomes a continuous feed, showing failure rates, repeated queries, and usage patterns at scale. Humans remain the primary source of insight: the frustrations, the context, and the emotional weight. Agent activity adds another layer, highlighting recurring points of friction within the workflow and offering evidence that supports and extends what people share. Together, they create a more complete picture than either could alone.
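As a simple illustration of that continuous feed, the sketch below computes failure rates and repeated queries from hypothetical interaction records; the field names are assumptions, not a real schema.

```python
from collections import Counter, defaultdict

# Hypothetical interaction records; field names are illustrative only.
interactions = [
    {"task": "expense_report", "query": "upload receipt", "succeeded": False},
    {"task": "expense_report", "query": "upload receipt", "succeeded": False},
    {"task": "expense_report", "query": "submit report",  "succeeded": True},
    {"task": "scheduling",     "query": "find open slot", "succeeded": True},
]

# Failure rate per task: a quantitative layer to set beside what people
# report in interviews.
totals, failures = defaultdict(int), defaultdict(int)
for rec in interactions:
    totals[rec["task"]] += 1
    if not rec["succeeded"]:
        failures[rec["task"]] += 1

for task in totals:
    print(f"{task}: {failures[task] / totals[task]:.0%} failure rate")

# Repeated identical queries often mark a step that users (or their
# agents) keep retrying.
repeats = Counter(rec["query"] for rec in interactions)
print([query for query, n in repeats.items() if n > 1])
```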
Methodology That Respects the Signal
Of course, agent data is not self-explanatory. Logs are noisy. Bias can creep in if models were trained on narrow datasets. Privacy concerns must be addressed with care. The job of the researcher remains critical: separating signal from noise, validating patterns, and weaving human context into machine traces. Instead of replacing human perspective, agent data can enrich and ground it, adding evidence that makes qualitative insight even stronger. This reframing doesn’t just affect research practice; it also changes how we think about design.
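As a brief aside before turning to design, here is one minimal sketch of what that signal-from-noise step might look like in practice. The noise events, schema, and retry window are all assumptions chosen for illustration, not a standard.

```python
# Basic log hygiene before analysis; noise events and the retry window
# are illustrative assumptions.
NOISE_EVENTS = {"heartbeat", "health_check", "token_refresh"}

def clean(events, retry_window=5):
    """Drop housekeeping events and collapse rapid identical retries."""
    cleaned, last_seen = [], {}
    for e in events:
        if e["event"] in NOISE_EVENTS:
            continue
        key = (e["agent_id"], e["event"], e["step"])
        # Treat identical actions within retry_window seconds as one
        # occurrence, so a flaky call isn't counted as five failures.
        if key in last_seen and e["ts"] - last_seen[key] < retry_window:
            continue
        last_seen[key] = e["ts"]
        cleaned.append(e)
    return cleaned

raw = [
    {"agent_id": "a1", "event": "error", "step": "form_fill", "ts": 100},
    {"agent_id": "a1", "event": "error", "step": "form_fill", "ts": 102},
    {"agent_id": "a1", "event": "heartbeat", "step": "-", "ts": 103},
]
print(clean(raw))  # only the first error survives
```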
Designing for Multi-Actor Systems
Products are no longer built for humans alone. They must work for the people who use them and the agents that increasingly mediate their experience. A customer may never touch a form field if their AI assistant fills it in. An employee may never interact directly with a dashboard if their agent retrieves the results. Design must account for both participants.
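One lightweight way to study this is to instrument who performs each step. The sketch below assumes a hypothetical event stream with an `actor` field; nothing here reflects a specific product’s telemetry.

```python
from collections import Counter

# Illustrative event stream; the "actor" field is the key assumption:
# instrumentation that records who performed each step, person or agent.
events = [
    {"step": "open_form",   "actor": "human"},
    {"step": "fill_fields", "actor": "agent"},
    {"step": "fill_fields", "actor": "agent"},
    {"step": "submit",      "actor": "human"},
]

# Share of each step performed by agents vs. humans: a quick read on
# which parts of the journey people no longer touch directly.
by_step = Counter((e["step"], e["actor"]) for e in events)
for (step, actor), count in sorted(by_step.items()):
    print(f"{step}: {actor} x{count}")
```

Even a tally this simple makes the handoff visible: the form is opened and submitted by a person, but the fields in between are filled by an agent.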
Organizations that learn to research this new ecosystem will see problems sooner, adapt faster, and scale more effectively. Those that continue to study humans alone risk optimizing for only half the journey.
The New Research Frontier
Research has always been about listening closely. Today, listening means more than interviews and surveys. It means learning from the digital actors working beside us, the agents carrying out tasks, flagging failures, and amplifying our actions.
The user is no longer singular. It is human and machine together. Understanding both is the only way to design systems that reflect the reality of work today.
This piece expands the very definition of the user. For the other shifts redefining research, see our earlier explorations of format (how to move beyond static deliverables) and scope (how AI dissolves the depth-versus-breadth tradeoff).
The pace of AI change can feel relentless, with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization, not just in theory but in practice, we’d love to be a partner in that journey. Request an AI briefing.
Key Takeaways
- AI agents are no longer background tools; they now perform frontline tasks such as scheduling, routing, and managing workflows.
- Research has historically focused only on humans, but agents are now shaping experiences and surfacing friction points alongside people.
- Studying agents as supplemental participants provides evidence of where breakdowns occur, enriching human perspectives rather than replacing them.
- Researchers remain essential to interpret agent data, separate signal from noise, and ground findings in human context.
- Designing for multi-actor systems requires understanding both humans and the agents mediating their experiences.
FAQs
Why consider AI agents as research participants?
AI agents actively shape workflows and user experiences. Their activity logs reveal friction points, errors, and escalations that human feedback alone may miss. Including them as research participants offers a more complete picture of how systems actually perform.
Do AI agents replace human participants in research?
No. Humans remain the primary source of context, emotion, and motivation. Agent data adds a complementary layer of evidence, enriching and grounding what people already share.
What types of insight can AI agents provide?
Agents surface recurring points of friction, repeated errors, and escalation patterns. These signals highlight where workflows break down, offering evidence to support and extend human feedback.
What role do researchers play when analyzing agent data?
Researchers remain critical. They filter noise, validate patterns, address bias, and ensure agent activity is interpreted with proper human context. The shift broadens qualitative practice rather than replacing it.
What is a multi-actor system in research?
A multi-actor system is one where both humans and AI agents interact to complete tasks. Designing for these systems means studying the interplay between people and machines, ensuring both participants are accounted for.
How does including agents in research improve design?
By listening to both humans and agents, organizations can spot problems sooner, adapt faster, and create systems that reflect the true complexity of modern workflows.