Your Team’s AI Productivity Is About to Break Your Management Structure 

The Hidden Bottleneck 

It starts small. A manager logs in to find ten versions of the same deck, each slightly more polished than the last. By afternoon, three more have landed, each refined by AI and dropped into the same crowded folder. Multiply that across a dozen employees, and what looked like momentum now feels like quicksand. 

AI has made people more prolific than ever. The crisis is no longer creation. It’s curation. 

Work used to move at a human pace: weekly deliverables, quarterly reviews, annual plans. AI just changed the speed limit, and that old cadence can’t keep up. MIT reports that 90% of employees now use unsanctioned AI tools for productivity gains. One person using AI can now outpace a full team. But as output rises, managers are left sorting through a flood of material. One study found teams spend 2.4 hours per day—nearly 30% of the week—just searching for the right information. When everyone is creating but few are curating, alignment and quality slip through the cracks.  

The Middle Manager’s Breaking Point 

This tension shows up most sharply in the middle. Managers used to be the connective tissue of organizations; now they’re underwater. Their traditional role of reviewing work, ensuring coherence, and maintaining quality no longer scales. It’s a common refrain from today’s leaders: teams are so prolific, managers simply can’t keep up. And when first-line managers can’t keep up, directors and executives above them lose sight of what’s really happening. The pyramid bends under the weight of its own output. 

The Productivity Paradox 

On paper, AI promises $4.4 trillion in productivity potential. In practice, many companies see dips before they see gains. The technology works. The real challenge lies in the structure around it. 

Creation has never been cheaper. We still have limits, especially around attention and judgment, the things that give creation its value. So, teams start cutting corners by spot-checking work, letting AI police itself, or just letting things through without a real review. 

The deeper problem? We’re using yesterday’s management playbook to navigate today’s nonstop output. Widening the highway without adding exits moves traffic faster… for a while. But the jams just reappear farther downstream in even bigger knots. Capability is no longer the constraint. Management capacity is. 

New Levers for Leaders 

What can organizations actually do about it? 

The first shift is in how progress gets measured. Volume no longer tells the story; what matters is whether the work actually moves strategic goals forward. Counting docs and drafts misses the point. 

The second shift is treating curation as real work. Tagging and organizing might not be glamorous, but they’re what keep AI-generated abundance usable instead of overwhelming. 

The third shift is elevating judgment. The real value comes not from creating yet another draft, but from deciding which draft matters and why. 

Finally, quality has to be a shared responsibility. Peer review and team-owned standards often beat the old model, where every piece of work climbs a slow chain of approvals before it ships. AI can point to anomalies, but people still define what “good” looks like. 

This isn’t just a productivity challenge; it’s a purpose problem. When roles shrink to prompting and passing along outputs, people lose connection to the work. Middle managers, once anchors of coordination and context, risk becoming bottlenecks. The real value lies in interpretation: guiding teams to make sense of abundance and channel it toward impact. 

The Path Forward 

The organizations that succeed are not the ones producing the most AI content. They are the ones curating with clarity, aligning work to strategy, and building structures strong enough to absorb exponential output without breaking. 

In an age of infinite creation, we’re no longer short on drafts or ideas. What’s scarce now is attention, judgment, and trust. 

AI has made abundance the easy part. The real leadership test is building systems that can turn that abundance into progress. 

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing. 




FAQs 

Why is AI productivity breaking traditional management structures? 
AI enables individuals to produce the output of ten people, overwhelming management systems built for human-speed workflows. Structures designed for weekly deliverables and quarterly reviews cannot scale to AI’s pace.

What is the real bottleneck: creation or curation? 
The challenge is curation. AI makes content generation cheap and fast, but deciding what matters, aligning it to strategy, and maintaining quality now consume more time and energy than creation. 

Why are middle managers most affected? 
Middle managers traditionally ensure coherence, review work, and maintain quality. With AI-driven output multiplying, this role no longer scales, leaving managers swamped and executives disconnected from what is happening on the ground. 

What is the productivity paradox of AI? 
AI has the potential to unlock trillions in value, yet many companies initially see dips in productivity. More output does not automatically mean more progress. Without curation, abundance creates confusion and slows decision-making. 

How can leaders adapt management for the AI era? 
By shifting from counting deliverables to measuring outcomes, investing in structured curation, redesigning roles around judgment, and embracing peer-driven models of quality. These approaches align AI productivity with organizational purpose. 

Robots & Pencils to Sponsor and Exhibit at Agentic AI and the Student Experience Conference 

AI-first consultancy joins higher ed leaders to explore how agentic AI is reshaping the student journey 

Robots & Pencils, an AI-first, global digital innovation firm specializing in cloud-native web, mobile, and app modernization, today announced its sponsorship and participation in Agentic AI and the Student Experience, hosted by Arizona State University (ASU). As a Silver Sponsor, exhibitor, and active participant, Robots & Pencils will engage with education leaders from around the world October 22–24, 2025, at the Omni Tempe Hotel at ASU. 

See how Robots & Pencils blends AI, cloud, and design to shape the future of education. 

The three-day event convenes higher education professionals and technology innovators to explore how agentic AI (systems that not only respond but proactively decide and solve problems) is revolutionizing the student experience. 

“Higher education is at a turning point, and agentic AI represents a breakthrough opportunity to enhance every stage of the student journey, from admissions to graduation and beyond,” said Leonard Pagon, CEO of Robots & Pencils. “We’re proud to join ASU, AWS, and other higher-education leaders to showcase what’s possible when cloud-native design, intelligent systems, and human-centered experiences come together. This is about accelerating AI readiness and charting the future of the student experience.” 

Robots & Pencils brings deep expertise to the higher education sector, having partnered with ASU on a multi-year transformation to unify academic data, streamline credential management, and expand student engagement through secure, scalable platforms. As an AWS Partner, the firm builds AI-ready, cloud-native systems that deliver speed, security, and scale across higher-ed institutions. 

“Education stands at the edge of a new frontier with agentic AI, where AI systems are proactive, adaptive and deeply personalized to enhance the student experience,” said Lev Gonick, Chief Information Officer at ASU and executive sponsor for the event. “What began as a call to convene has grown into a global gathering of more than 500 education and industry leaders who will chart the next chapter of AI in education,” Gonick continued.   

Robots & Pencils will host conversations at its exhibit table in the conference lobby, where attendees can explore use cases, see demonstrations, and connect with experts on campus modernization and AI readiness. Higher education leaders attending the event are encouraged to reach out in advance to request one-on-one meetings at robotsandpencils.com/asu2025.


AI Agents Are Users Too: Rethinking Research for Multi-Actor Systems 

The first time an AI assistant rescheduled a meeting without human input, it felt like a novelty. Now it happens daily. Agents draft documents, route tickets, manage workflows, and interact on our behalf. They are no longer hidden in the background. They have stepped into the front lines, shaping experiences as actively as the people they serve. 

For research leaders, that changes the question. We have always studied humans. But when an agent performs half the task, who is the user? 

Agents on the Front Lines of Experience 

AI agents reveal truths that interviews cannot. Their activity logs expose where systems succeed and where they stumble. A rerouted request highlights a friction point. A repeated error marks a design flaw. Escalations and overrides surface moments where human judgment still needs to intervene. These are not anecdotes filtered through memory. They are live records of system behavior. 

And that’s why we need to treat agents as participants in their own right. 

A New Kind of Participant 

Treating agents as research participants reframes what discovery looks like. Interaction data becomes a continuous feed, showing failure rates, repeated queries, and usage patterns at scale. Humans remain the primary source of insight: the frustrations, the context, and the emotional weight. Agent activity adds another layer, highlighting recurring points of friction within the workflow and offering evidence that supports and extends what people share. Together, they create a more complete picture than either could alone. 
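Taken literally, treating agent activity as a research feed can begin with simple log aggregation: counting where agents fail or escalate. Here is a minimal sketch; the log entries and field names are invented for illustration, not drawn from any specific product.

```python
from collections import Counter

# Hypothetical agent interaction log; every entry and field name is
# illustrative only.
logs = [
    {"agent": "scheduler", "action": "reschedule", "outcome": "success"},
    {"agent": "scheduler", "action": "reschedule", "outcome": "escalated"},
    {"agent": "router", "action": "route_ticket", "outcome": "error"},
    {"agent": "router", "action": "route_ticket", "outcome": "error"},
    {"agent": "drafter", "action": "draft_doc", "outcome": "success"},
]

def friction_report(entries):
    """Count non-success outcomes per (agent, action, outcome) triple.

    Repeated errors mark candidate design flaws; escalations mark the
    moments where human judgment still intervenes.
    """
    friction = Counter(
        (e["agent"], e["action"], e["outcome"])
        for e in entries
        if e["outcome"] != "success"
    )
    return friction.most_common()  # most frequent friction points first

for (agent, action, outcome), count in friction_report(logs):
    print(f"{agent}/{action}: {outcome} x{count}")
```

Even this toy report surfaces the pattern a human reviewer would want to investigate first: the action that fails repeatedly rather than once.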

Methodology That Respects the Signal 

Of course, agent data is not self-explanatory. Logs are noisy. Bias can creep in if models were trained on narrow datasets. Privacy concerns must be addressed with care. The job of the researcher remains critical: separating signal from noise, validating patterns, and weaving human context into machine traces. Instead of replacing human perspective, agent data can enrich and ground it, adding evidence that makes qualitative insight even stronger. This reframing doesn’t just affect research practice; it also changes how we think about design. 

Designing for Multi-Actor Systems 

Products are no longer built for humans alone. They must work for the people who use them and the agents that increasingly mediate their experience. A customer may never touch a form field if their AI assistant fills it in. An employee may never interact directly with a dashboard if their agent retrieves the results. Design must account for both participants. 

Organizations that learn to research this new ecosystem will see problems sooner, adapt faster, and scale more effectively. Those that continue to study humans alone risk optimizing for only half the journey. 

The New Research Frontier 

Research has always been about listening closely. Today, listening means more than interviews and surveys. It means learning from the digital actors working beside us, the agents carrying out tasks, flagging failures, and amplifying our actions. 

The user is no longer singular. It is human and machine together. Understanding both is the only way to design systems that reflect the reality of work today. 

This piece expands the very definition of the user. For the other shifts redefining research, see our earlier explorations of format (how to move beyond static deliverables) and scope (how AI dissolves the depth vs. breadth tradeoff).





FAQs

Why consider AI agents as research participants?
AI agents actively shape workflows and user experiences. Their activity logs reveal friction points, errors, and escalations that human feedback alone may miss. Including them as research participants offers a more complete picture of how systems actually perform.

Do AI agents replace human participants in research?
No. Humans remain the primary source of context, emotion, and motivation. Agent data adds a complementary layer of evidence, enriching and grounding what people already share.

What types of insight can AI agents provide?
Agents surface recurring points of friction, repeated errors, and escalation patterns. These signals highlight where workflows break down, offering evidence to support and extend human feedback.

What role do researchers play when analyzing agent data?
Researchers remain critical. They filter noise, validate patterns, address bias, and ensure agent activity is interpreted with proper human context. The shift broadens qualitative practice rather than replacing it.

What is a multi-actor system in research?
A multi-actor system is one where both humans and AI agents interact to complete tasks. Designing for these systems means studying the interplay between people and machines, ensuring both participants are accounted for.

How does including agents in research improve design?
By listening to both humans and agents, organizations can spot problems sooner, adapt faster, and create systems that reflect the true complexity of modern workflows.

How AI Ends the Depth vs. Breadth Research Tradeoff 

The transcripts pile up fast. Ten conversations yield sticky notes that cover the wall, each quote circled, each theme debated. By twenty, the clusters blur. At thirty, the team is saturated, sifting through repetition in search of clarity. The insights are still valuable, but the effort to make sense of them begins to outweigh the return. 

This has always been the tradeoff: go deep with a few voices, or broaden the scope and risk losing nuance. Leaders accepted that limitation as the cost of qualitative research. 

That ceiling is gone. 

The Ceiling Was Human Labor 

Generative research has always promised what numbers cannot capture: the story beneath the metric. But human synthesis is slow. Each new transcript multiplies complexity, until the process itself becomes the limiter. Teams stopped at 20 or 30 conversations not because curiosity ended, but because the hours to make sense of them did. Nuance gave way to saturation. 

Executives signed off on smaller studies and called it pragmatism. In truth, it was constraint. 

AI Opens the Door to Scale 

Large language models change the equation. Instead of weeks of sticky notes and clustering, AI can surface themes in hours. It highlights recurring ideas, connects outliers, and organizes insights without exhausting the team. The researcher’s role remains. Judgment still matters, but the ceiling imposed by human-only synthesis disappears. 

Instead of losing clarity as the number grows, each additional conversation now sharpens the signal, strengthening patterns, surfacing weak signals earlier, and giving leaders the confidence to act with richer evidence. 
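The aggregation step behind this is straightforward once conversations are tagged. In practice an LLM would assign the theme tags; in this sketch they are hand-labeled (all participant IDs and themes are invented) to show how each added conversation strengthens rather than buries a pattern.

```python
from collections import Counter

# Illustrative only: theme tags per conversation, as an LLM might assign them.
transcripts = {
    "p01": {"pricing confusion", "slow onboarding"},
    "p02": {"slow onboarding", "missing integrations"},
    "p03": {"pricing confusion", "slow onboarding"},
    "p04": {"missing integrations"},
}

def surface_themes(tagged, min_mentions=2):
    """Rank themes by how many conversations mention them.

    More conversations sharpen the signal: strong patterns climb,
    and weak signals cross the threshold earlier.
    """
    counts = Counter(t for themes in tagged.values() for t in themes)
    return [(theme, n) for theme, n in counts.most_common() if n >= min_mentions]

print(surface_themes(transcripts))
```

The strongest pattern ("slow onboarding", present in three of four conversations) surfaces first, without anyone re-reading a wall of sticky notes.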

Discovery Becomes Active 

The real breakthrough is not only scale, but also timing. With AI-enabled synthesis, insights emerge as the study unfolds. After the first dozen conversations, early themes are visible. Gaps in demographics or use cases show up while there is still time to adjust. By week two, the research is already feeding product decisions. 

Instead of waiting for a final report, teams get a living stream of discovery. Research shifts from retrospective artifact to active driver of strategy. 

Nuance at Speed 

For organizations, this ends the false binary. Depth and breadth no longer compete. A bank exploring new digital features can capture voices across demographics in weeks, not months. A health-tech team can fold dozens of patient experiences into the design cycle in real time. A software platform can test adoption signals across continents without sacrificing cultural nuance. 

The payoff is more than efficiency. It is confidence. When executives see both scale and nuance in the evidence, they act faster and with greater conviction. 

The New Standard 

The era of choosing between depth or breadth is behind us. AI frees research leaders from the constraints of small samples or limited perspectives. With AI as a synthesis partner, the standard shifts: hundreds of voices, interpreted with clarity, delivered at speed. 

For teams still focused on fixing the format problem, our previous piece, The $150K PDF That Nobody Reads, explores how static reports constrain research. Our next article examines an even bigger shift: what happens when your users are no longer only people.





FAQs

What is the depth vs. breadth tradeoff in qualitative research?
The depth vs. breadth tradeoff refers to the long-standing belief that teams must choose between conducting a small number of interviews with rich nuance (depth) or a larger sample with less detail (breadth). Human synthesis struggles to handle both simultaneously, forcing this choice.

How does AI change the depth vs. breadth tradeoff?
AI dissolves the tradeoff by enabling researchers to process hundreds of conversations quickly while still preserving nuance. Instead of diluting insight, scale strengthens pattern recognition and surfaces weak signals earlier.

Why has qualitative research been constrained to small sample sizes?
Human synthesis is time-consuming. After 20–30 interviews, transcripts become overwhelming, and important signals get lost in the noise. This labor bottleneck led leaders to view small samples as “pragmatic,” even though it was really a constraint of capacity.

Does AI replace the role of the researcher?
No. AI accelerates synthesis, but the researcher remains critical for judgment, interpretation, and ensuring context and nuance are applied correctly. AI acts as a partner that expands capacity rather than a replacement.

What is the impact of AI-enabled synthesis on decision-making?
With faster synthesis and preserved nuance, research insights emerge in real time rather than only in final reports. Leaders gain richer evidence earlier, which supports faster, more confident decisions.

What does this mean for the future of qualitative research?
The old tradeoff between depth and breadth is over. AI makes it possible to achieve both simultaneously, shifting the standard for research to hundreds of voices interpreted with clarity and delivered at speed.

Jeff Kirk Named Executive Vice President of Applied AI at Robots & Pencils 

From Alexa to Emma, Kirk brings two decades of AI breakthroughs that have reshaped industries. Now he’s powering Robots & Pencils’ rise in the intelligence age. 

Robots & Pencils, an AI-first, global digital innovation firm specializing in cloud-native web, mobile, and app modernization, today announced the executive appointment of Jeff Kirk as Executive Vice President of Applied AI. A seasoned technology leader with a career spanning global agencies, startups, and Fortune 100 enterprises, Kirk steps into this newly created role to accelerate the firm’s AI-first vision and unlock transformative outcomes for clients. As EVP of Applied AI, Kirk will lead the firm’s strategy and delivery of AI-powered and enterprise AI solutions across industries. 

Explore how Robots & Pencils blends science and design to build market leaders. 

Kirk’s track record speaks for itself, with AI breakthroughs that fueled customer engagement and business growth. He founded and scaled Moonshot, an intelligent digital products company later acquired by Pactera, where he spearheaded next-generation experiences in voice, augmented reality, and enterprise digitalization. At Amazon, he served as International Product & Technology Lead for Alexa, driving AI-powered personal assistant expansion to millions of households and users worldwide. Most recently, at bswift, Kirk led AI & Data as VP, delivering conversational AI breakthroughs with the award-winning Emma assistant and GenAI-powered EnrollPro decision support system. 

Across each of these roles runs a common thread. Kirk builds and scales innovations that transform how industries work, creating technologies that move from experimental to essential at breathtaking speed. 

“Jeff has been at the frontier of every major shift in digital innovation,” said Len Pagon, CEO of Robots & Pencils. “From shaping the future of eCommerce and mobile platforms at Brulant and Rosetta, to pioneering global voice AI at Amazon, to launching AI-driven customer experiences at bswift, Jeff has consistently delivered what’s next. He doesn’t just talk about AI. He builds products that millions use every day. With Jeff at the helm of Applied AI, Robots & Pencils is sharpening its challenger edge, helping clients leap ahead while legacy consultancies struggle to catch up. I’m energized by what this means for our clients and inspired by what it means for our people.” 

Across two decades, Kirk has built a reputation for translating complex business requirements into enterprise-grade AI and technology solutions that scale, stick, and generate measurable results. His entrepreneurial mindset and hands-on leadership style uniquely position him to help clients experiment, activate, and operate AI across their businesses. 

“Organizations and their workers are under pressure to innovate on behalf of customers while simultaneously learning to work with a new type of co-worker: artificial intelligence,” said Kirk. “The steps we take together to learn to work differently will lead to the most outsized innovation in our industries. I’m thrilled to join Robots & Pencils to push the boundaries of what’s possible with AI, to deliver outcomes that matter for our clients and their customers, and to create opportunities for our teams to do the most meaningful work of their careers.” 

Kirk began his career at Brulant and Rosetta, where he worked alongside Pagon and other Robots & Pencils’ executive team members, leading engineering and solutions architecture across content, commerce, mobile, and social platforms. His return to the fold marks both a reunion and a reinvention, positioning Robots & Pencils as a leader in applied AI at scale. 


The $150K PDF That Nobody Reads: From Research Deliverables to Living Systems 

A product executive slides open her desk drawer. Tucked between old cables and outdated business cards is a thick, glossy report. The binding is pristine, the typography immaculate, the insights meticulously crafted. Six figures well spent, at least according to the invoice. Dust motes catch the light as she lifts it out: a monument to research that shaped… nothing, influenced… no one, and expired the day it was delivered. 

It’s every researcher’s quiet fear. The initiative they poured months of work, a chunk of their sanity, and about a thousand sticky notes into becomes shelf-ware. Just another artifact joining strategy decks and persona posters that never found their way into real decisions. 

This is the way research has been delivered for decades, by global consultancies, boutique agencies, and yes, even by me. At $150K a report, it sounds extravagant. But when you consider the sheer effort, the rarity of the talent involved, and the stakes of anchoring business decisions in real customer insight, it’s not hard to see why leaders sign the check. 

The issue isn’t the value of the research. It’s the belief that insights should live in documents at all. 

Research as a Living System 

Now picture a different moment. The same executive doesn’t reach for a drawer. She opens her laptop and types: “What causes the most friction when ordering internationally?” 

Within seconds she’s reviewing tagged quotes from dozens of interviews, seeing patterns of friction emerge, even testing new messaging against synthesized persona responses. The research isn’t locked in a PDF. It’s alive, queryable, and in motion. 

This isn’t a fantasy. It’s the natural evolution of how research should work: not as one-time deliverables, but as a living system. 
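At its core, a living research system is a store of tagged evidence that answers questions on demand. The sketch below is a deliberately tiny version of that idea; the quotes, tags, and function names are invented for illustration, and a real system would layer embeddings and an LLM on top.

```python
# A toy, queryable insight store: interview quotes tagged by theme.
# All data and names are hypothetical.
insights = [
    {"quote": "Customs forms stopped me from ordering internationally.",
     "tags": {"international", "checkout", "friction"}},
    {"quote": "Shipping estimates abroad felt like a guess.",
     "tags": {"international", "shipping"}},
    {"quote": "The domestic flow was painless.",
     "tags": {"domestic", "checkout"}},
]

def query(store, *tags):
    """Return quotes matching all requested tags, richest-tagged first."""
    wanted = set(tags)
    hits = [i for i in store if wanted <= i["tags"]]
    return [i["quote"] for i in sorted(hits, key=lambda i: -len(i["tags"]))]

print(query(insights, "international"))
```

The point is not the mechanics but the shift: the executive's question ("what causes friction when ordering internationally?") becomes a lookup against evidence that already exists, rather than a trigger for a new study.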

The numbers show why change is overdue. Eighty percent of Research Ops & UX professionals use some form of research repository, but over half reported fair or poor adoption. The tools are frustrating, time-consuming to maintain, and lack ownership. Instead of mining the insights they already have, teams commission new studies, resulting in an expensive cycle of creating artifacts that sit idle, while decisions move on without them. 

It’s a Usability Problem 

Research hasn’t failed because of weak insights. It’s been constrained by the static format of reports. Once findings are bound in a PDF or slide deck, the deliverable has to serve multiple audiences at once, and it starts to bend under its own weight. 

For executives, the executive summary provides a clean snapshot of findings. But when the time comes to make a concrete decision, the summary isn’t enough. They have to dive into the hundred-page appendix to trace back the evidence, which slows down the moment of action. 

On the other hand, product teams don’t need summaries; they need detailed insights for the feature they’re building right now. In long static reports, those details are often buried or disconnected from their workflow. Sometimes they don’t even realize the answer exists at all, so the research goes unused, or even gets repeated. An insight that can’t be surfaced when it’s needed might as well not exist. 

The constraint isn’t the quality of the research. It’s the format. Static deliverables fracture usability across audiences and leave each group working harder than they should to put insights into play. 

Research as a Product 

While we usually view research as an input into products, research itself is a product too. And with a product mindset, there is no “final deliverable,” only an evolving body of user knowledge that grows in value over time. 

In this model, the researcher acts as a knowledge steward of the user insight “product,” curating, refining, and continuously delivering customer insights to their users: the executives, product managers, designers, and engineers who need insights in different forms and at different moments. 

Like any product, research needs a roadmap. It has gaps to fill, like user groups not yet heard from, or behaviors not yet explored. It has features to maintain, like transcripts, coded data, and tagged insights. And it has adoption goals, because insights only create value when people use them. 

This approach transforms reports too. A static deck becomes just a temporary framing of the knowledge that already exists in the system. With AI, you can auto-generate the right “version” of research for the right audience, such as an executive summary for the C-suite, annotations on backlog items for product teams, or a user-centered evaluation for design reviews. 

Treating research as a product also opens the door to continuous improvement. A research backlog can track unanswered questions, emerging themes, and opportunities for deeper exploration. Researchers can measure not just delivery (“did we produce quality insights?”) but usage (“did the insights influence a decision?”). Over time, the research “product” compounds in value, becoming a living, evolving system rather than a series of static outputs. 

This new model requires a new generation of tools. AI can now cluster themes, surface patterns, simulate persona responses, and expose insights through natural Q&A. AI makes the recomposition of insights into deliverables cheap. That allows us to focus on how our users get the insights they need in the way they need them. 

From Deliverable to Product 

Treating research as a product changes the central question. It’s no longer, “What should this report contain?” but “What questions might stakeholders need to answer, and how do we make those answers immediately accessible?” 

When research is built for inquiry, every transcript, survey, and usability session becomes part of a living knowledge base that compounds in value over time. Success shifts too: not in the number of reports delivered, but in how often insights are pulled into decisions. A six-figure investment should inform hundreds of critical choices, not one presentation that fades into archives. 

And here’s the irony: the product mindset actually produces better reports as well. When purpose-built reports focus as much on their usage as the information they contain, they become invaluable components of the software production machine. 

Research itself isn’t broken. It just needs a product mindset and AI-based qualitative analysis tools that turn insights into a living system, not a slide deck. 

Next in the series, we look at two more shifts: AI removing the depth vs. breadth constraint, and the rise of agents as research participants.





FAQs

What is the problem with traditional research reports?
Traditional reports often serve as static artifacts. Once published, they struggle to meet the needs of multiple audiences and quickly become outdated, limiting their impact on real decisions.

Why is research often underutilized in organizations?
Research is underutilized because its insights are locked in formats like PDFs or decks. Executives, product teams, and designers often cannot access the right detail at the right time, so findings go unused or studies are repeated.

What does it mean to treat research as a product?
Treating research as a product means building a continuously evolving knowledge base rather than one-time deliverables. Insights are curated, updated, and delivered in forms that align with the needs of different stakeholders.

How does AI support this new model?
AI makes it possible to cluster themes, surface weak signals, and generate audience-specific deliverables on demand. This reduces maintenance overhead and ensures insights are always accessible when needed.

What role do researchers play in this model?
Researchers become knowledge stewards, ensuring the insight “product” is accurate, relevant, and continuously improved. Their work shifts from producing final reports to curating and delivering insights that compound in value over time.

How does this benefit organizations?
Organizations gain faster, more confident decision-making. A six-figure research investment can inform hundreds of decisions, rather than fading after a single presentation.

How Agentic AI Is Rewiring Higher Education 

A University Without a Nervous System 

Walk through the back offices of most universities, and you will see the challenge. Admissions runs on one platform, advising on another, learning management on a third, and academic affairs on a fourth. Each system functions, yet little connects them. Students feel the gaps when financial aid processing is delayed, academic records are incomplete, and support processes remain confusing and slow. Leaders feel it in the cost of complexity and the weight of compliance. 

Higher education institutions typically manage dozens of disconnected systems. IT leaders face persistent integration challenges that consume substantial staff time and budget, creating operational bottlenecks that slow both student services and institutional agility. 

For decades, CIOs and CTOs have been tasked with stitching these systems together. Progress came in patches, with integrations here and dashboards there. What emerged looked more like scar tissue than connective tissue. Patchwork technology blocks digital transformation in higher education, and leaders now seek infrastructure that can unify rather than just connect. 

The Rise of Agentic AI as Connective Tissue 

Agentic AI wires the university together. Acting like a nervous system, it routes information and triggers actions throughout the institution, coordinating workflows through intelligent routing and contextual decision-making. Unlike traditional automation that follows rigid rules, agentic AI systems can make contextual decisions, learn from outcomes, and coordinate across multiple platforms without constant human oversight. 

In practice, this means a transfer request automatically verifies transcripts through the National Student Clearinghouse, cross-references degree requirements in the SIS, flags discrepancies for staff to review, and updates student records, typically reducing processing time from 5-7 days to under 24 hours while maintaining accuracy. It means an advising system can recognize a retention risk, trigger outreach, and log the interaction without human staff piecing the puzzle together by hand. 
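As a rough sketch of the routing logic described above, with hypothetical callables standing in for the Clearinghouse, SIS, and records integrations (none of these interfaces come from a real product):

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    student_id: str
    status: str = "received"
    discrepancies: list = field(default_factory=list)

def process_transfer(req, verify_transcripts, audit_degree, update_records):
    """Route a transfer request end to end, escalating only when something is off."""
    if not verify_transcripts(req.student_id):        # e.g. a Clearinghouse lookup
        req.status = "verification_failed"
        return req
    req.discrepancies = audit_degree(req.student_id)  # compare against SIS degree requirements
    if req.discrepancies:
        req.status = "needs_review"                   # human in the loop for exceptions
    else:
        update_records(req)                           # the agent completes routine cases
        req.status = "completed"
    return req
```

The design choice that matters is the branch in the middle: routine cases flow straight through, while anything ambiguous is flagged for staff rather than decided by the agent.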

Agentic AI needs a strong foundation. That foundation is cloud-native infrastructure for universities that’s built to scale during peak demand, enforce compliance, and keep every action visible. With this base in place, universities move from pilot projects to production systems. The result is infrastructure that holds under pressure and adapts when conditions change. 

The Brain Still Decides 

A nervous system does not think on its own. It carries signals to the brain, where decisions are made. In the university context, the brain is still human, made up of faculty, advisors, administrators, and executives. 

This is where the design philosophy matters. Agentic AI should amplify human capacity, not replace it. Advisors can spend more time in meaningful conversations with students because degree audits and schedule planning run on their own. CIOs can focus on strategic alignment because monitoring and audit logs are captured automatically. The architecture creates space for judgment, and it also creates space for human connection that strengthens the student experience. 

However, this transition requires careful change management. Faculty often express concerns about AI decision-making transparency, while staff worry about job displacement. Successful implementations address these concerns through clear governance frameworks, explainable AI requirements, and retraining programs that position staff as AI supervisors rather than replacements. 

What Happens When Signals Flow Freely 

When agentic systems begin to carry the load, universities see a different rhythm. Transcript processing moves with speed. Advising interactions trigger at the right time. Students find support without friction. Leaders gain resilience as workflows carry themselves from start to finish. What emerges is more than efficiency. It is an institution that thinks and acts as one, with every part working in concert to support the student journey. 

Designing for Resilience and Trust 

CIOs and CTOs recognize that orchestration brings new responsibility. Data must be structured and governed, with student information requiring FERPA-compliant handling throughout all automated processes. Agents must be observable and auditable. Compliance cannot live as a separate checklist; it must be a property of the system itself. AWS-native controls, from encryption to identity management, provide the levers to design with security as a default rather than a bolt-on. 

At the same time, leaders must design for operational trust. A nervous system functions only when signals are reliable. This requires real-time monitoring dashboards, clear escalation protocols when agents encounter exceptions, and audit trails that document every automated decision. 
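One way to make "every automated decision leaves a trail" concrete is a thin audit wrapper around agent actions. This is an illustrative sketch, not a compliance implementation:

```python
import time

def audited(action, log):
    """Wrap an agent action so every call, success or failure, leaves a record."""
    def wrapper(*args, **kwargs):
        record = {"action": action.__name__, "ts": time.time()}
        try:
            result = action(*args, **kwargs)
            record["outcome"] = "ok"
            return result
        except Exception as exc:
            record["outcome"] = f"error: {exc}"
            raise                        # exceptions escalate to humans
        finally:
            log.append(record)           # the trail survives either path
    return wrapper
```

Because the record is written in a `finally` block, a failed action is logged before it escalates, which is exactly the property an auditor will ask about.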

The Next Chapter of Higher Education Infrastructure 

What is happening now is less about another wave of apps and more about a shift in the foundation of the institution. Agentic AI is beginning to operate as infrastructure. It connects the university’s digital systems into something coordinated and adaptive. 

The role of leadership is to decide how that nervous system will function, and what kind of human judgment it will amplify. Presidents, provosts, CIOs, and CTOs who recognize this shift will shape not only the student experience but the operational resilience of their institutions for years to come. 

For leaders evaluating agentic AI initiatives, three factors determine readiness.  

Institutions strong in all three areas see faster implementation and higher adoption rates. 

The institutions that succeed will be those that view agentic AI not as a technology project, but as an organizational transformation requiring new governance models, staff capabilities, and student engagement strategies. 

When the nervous system works, the signals move freely, and people do their best work. Students find support when they need it. Advisors focus on real conversations. Leaders see further ahead. That is the promise of agentic AI in higher education, not machines in charge, but machines carrying the load so people can do what only people can do. 

Join Us

Join us at ASU’s Agentic AI and the Student Experience conference. Contact us to book time with our leaders and explore how agentic AI can strengthen your institution. 

Request an AI Briefing.  

Learn more about Robots & Pencils AI Solutions for Education. 

Beyond Wrappers: What Protocols Leave Unsolved in AI Systems 

I recently built a Model Context Protocol (MCP) integration for my Oura Ring. Not because I needed MCP, but because I wanted to test the hype: Could an AI agent make sense of my sleep and recovery data? 

It worked. But halfway through I realized something. I could have just used the Oura REST API directly with a simple wrapper. What I ended up building was basically the same thing, just with extra ceremony. 
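For comparison, the "simple wrapper" version really is only a few lines. This sketch assumes Oura's v2 REST API with a personal access token; the endpoint and parameter names reflect my reading of the public docs, so verify them against the current reference before relying on this:

```python
import json
import urllib.parse
import urllib.request

OURA_BASE = "https://api.ouraring.com/v2/usercollection"

def build_request(resource, token, **params):
    """Construct an authenticated GET request for an Oura collection endpoint."""
    query = f"?{urllib.parse.urlencode(params)}" if params else ""
    return urllib.request.Request(
        f"{OURA_BASE}/{resource}{query}",
        headers={"Authorization": f"Bearer {token}"},
    )

def fetch(resource, token, **params):
    """Fetch and decode one collection, e.g. fetch('daily_sleep', token, start_date=...)."""
    with urllib.request.urlopen(build_request(resource, token, **params)) as resp:
        return json.load(resp)
```

That is the whole integration: one authenticated GET and a JSON decode. Everything MCP added on top was transport ceremony around the same call.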

As someone who has architected enterprise AI systems, I understand the appeal. Reliability isn’t optional, and protocols like MCP promise standardization. To be clear, MCP wasn’t designed to fix hallucinations or context drift. It’s a coordination protocol. But the experiment left me wondering: Are we solving the real problems or just adding layers? 

The Wrapper Pattern That Won’t Go Away 

MCP joins a long list of frameworks like LangChain, LangGraph, SmolAgents, and LlamaIndex, each offering a slightly different spin on coordination. But at heart, they’re all wrappers around the same issue: getting LLMs to use tools consistently. 
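Stripped of framework branding, that common core is small: a registry of callable tools plus a dispatcher that validates model-emitted JSON. A minimal sketch, with an illustrative tool name rather than any real framework's API:

```python
import json

TOOLS = {}

def tool(fn):
    """Register a plain function as a model-callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_sleep_score(date):
    # hypothetical stand-in for a real data source
    return {"date": date, "score": 82}

def dispatch(raw_call):
    """Validate and execute a tool call shaped like '{"name": ..., "arguments": {...}}'."""
    try:
        call = json.loads(raw_call)
    except json.JSONDecodeError:
        return {"error": "invalid JSON, ask the model to retry"}
    fn = TOOLS.get(call.get("name"))
    if fn is None:
        return {"error": f"unknown tool {call.get('name')!r}"}
    return fn(**call.get("arguments", {}))
```

Note that the validation branch exists because models do emit malformed calls; a framework that hides this branch hasn't removed the failure mode, only your visibility into it.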

Take CrewAI. On paper, it looked elegant with agents organized into “crews,” each with roles and tools. The demos showed frictionless orchestration. In practice? The agents ignored instructions, produced invalid JSON even after careful prompting, and burned days in debugging loops. When I dropped down to a lower-level tool like LangGraph, the problems vanished. CrewAI’s middleware hadn’t added resilience; it had hidden the bugs. 

This isn’t an isolated frustration. Billions of dollars are flowing into frameworks while fundamentals like building reliable agentic systems remain unsettled. MCP risks following the same path. Standardizing communication may sound mature, but without solving hallucinations and context loss, it’s just more scaffolding on shaky foundations. 

What We’re Not Solving 

The industry has been busy launching integration frameworks, yet the harder challenges remain stubbornly in place: 

As CData notes, these aren’t just implementation gaps. They’re fundamental challenges. 

What the Experiments Actually Reveal 

Working with MCP brought a sharper lesson. The difficulty isn’t about APIs or data formats. It’s about reliability and security. 

When I connected my Oura data, I was effectively giving an AI agent access to intimate health information. MCP’s “standardization” amounted to JSON-RPC endpoints. That doesn’t address the deeper issue: How do you enforce “don’t share my health data” in a system that reasons probabilistically? 
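To make "JSON-RPC endpoints" concrete: on the wire, an MCP tool invocation is an ordinary JSON-RPC 2.0 envelope. The method and parameter names below follow my reading of the MCP specification, so treat the exact field names as illustrative:

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build the JSON-RPC 2.0 envelope MCP uses for a tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })
```

Nothing in that envelope knows whether "don’t share my health data" was honored. The policy question lives entirely outside the protocol.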

To be fair, there’s progress. Auth0 has rolled out authentication updates, and Anthropic has improved Claude’s function-calling reliability. But these are incremental fixes. They don’t resolve the architectural gap that protocols alone can’t bridge. 

The Evidence Is Piling Up 

The risks aren’t theoretical anymore. Security researchers keep uncovering cracks. 

Meanwhile, fragmentation accelerates. Merge.dev lists half a dozen MCP alternatives. Zilliz documents the “Great AI Agent Protocol Race.” Every new protocol claims to patch what the last one missed. 

Why This Goes Deeper Than Protocol Wars 

The adoption curve is steep. Academic analysis shows MCP servers grew from around 1,000 early this year to over 14,000 by mid-2025. With $50B+ in AI funding at stake, we’re not just tinkering with middleware; we’re building infrastructure on unsettled ground. 

Protocols like MCP can be valuable scaffolding. Enterprises with many tools and models do need coordination layers. But the real breakthroughs come from facing harder questions head-on: 

These problems exist no matter the protocol. And until they’re addressed, standardization risks becoming a distraction. 

The question isn’t whether MCP is useful; it’s whether the focus on protocol standardization is proportional to the underlying challenges. 

So Where Does That Leave Us? 

There’s nothing wrong with building integration frameworks. They smooth edges and create shared patterns. But we should be honest about what they don’t solve. 

For many use cases, native function calling or simple REST wrappers get the job done with less overhead. MCP helps in larger enterprise contexts. Yet the core challenges, reliability and security, remain active research problems. 

That’s where the true opportunity lies. Not in racing to the next protocol, but in tackling the questions that sit at the heart of agentic systems. 

Protocols are scaffolding. They’re not the main event. 

Learn more about Agentic AI. 

Request a strategy session.