
The New AI Architecture of Higher Education 

Part 1 of our series Rewired: The New AI Architecture of Higher Education 

Part 2: How Higher Education Proves Value in the Skills Economy | Part 3: The Invisible Infrastructure That Determines Higher Education Success

The State of Higher Education 2025 report confirms what institutions have been tracking for years: the enrollment cliff is here. Peak high school enrollment arrived with the Class of 2025, and from now through 2041, the number of graduates will decline by 13%.

Institutions knew this was coming. The story they aren’t ready to hear is what it requires: not better retention strategies or more aggressive recruiting, but fundamental reinvention of who they serve and how they serve them. Most institutions see the enrollment cliff as a crisis to be managed. I see it as the catalyst for higher education’s most exciting transformation in decades. 

The report captures a sector at an inflection point. Demographic shifts, AI advancement, and evolving student expectations are converging to create the conditions for fundamental reinvention. The barrier isn’t awareness or willingness; it’s execution. Institutions move slowly. Their systems are disconnected. Their infrastructure is rigid, designed for a traditional student population that no longer represents their future. 

The transformation requires work most institutions have barely started: reimagining who their students are, modernizing how systems serve them, and redefining what counts as proof of learning. 

The Student You’re Not Designing For 

I’ve sat in countless conversations with enrollment and student success teams. The pattern is always the same: everyone is focused on meeting this term’s targets, fixing immediate friction points, optimizing for the students already enrolled. There’s barely time to think about next month, let alone reimagine who you could serve five years from now. 

When leaders do push for serving non-traditional populations, such as adult learners, part-time students, and those with significant transfer credits, the instinct is often to squeeze these students into existing systems. Use the same registration workflows. Same advising model. Same assumptions about what ‘student success’ means. The result? You’ve diversified your enrollment numbers but not your infrastructure. 

This is the trap that keeps institutions focused on a shrinking market. As the traditional undergraduate population declines, a massive population of learners remains underserved. 

These learners represent the future majority of higher education, and they bring fundamentally different expectations. They need to learn while working full-time, while managing families, while living far from campus. They require flexibility as a condition of participation. And they expect university systems to work like every other digital experience in their lives: responsive, intelligent, and adaptive. 

Online-only enrollment has already surpassed 5 million students, and enrollment in online master’s programs now exceeds that of in-person programs. The pandemic validated what these learners already knew: flexible learning is the only viable path for students juggling multiple commitments. What institutions treated as emergency response in 2020 has become permanent expectation in 2025. 

Being “student-centric” requires building systems with institutional memory, platforms that recognize a returning student, pre-populate forms with known information, and give advisors visibility into a student’s full academic journey. The technology to do this exists in every other sector. Higher education’s challenge is the complexity of dismantling deeply embedded silos while keeping operations running. 

The institutions that will thrive aren’t the ones fighting to preserve systems designed for traditional learners. They’re the ones willing to do the hard work of building platforms that serve a 19-year-old college freshman and a 45-year-old professional returning for a certification with equal intelligence, systems that recognize both learners, understand their different needs, and adapt accordingly. 

The Platform Play Higher Ed Hasn’t Made 

Online education has proven its viability. The next frontier is integration. Online and on-campus work best as different modes within a unified learning platform that follows students wherever they are in life. 

Right now, most universities treat online programs as separate business units with distinct registration systems, student services, and cultures. I’ve seen this friction play out in painful ways. A junior takes a summer internship out of state and wants to stay on track by taking one online course. Suddenly they’re navigating a completely different registration portal, calling a separate help desk, and dealing with advisors who can’t see their on-campus transcript.  

Or consider the undergraduate alum applying to an online master’s program at the same institution. They’re re-entering all the information the university already has, speaking with advisors who have no visibility into their four years of history. Same institution, but the student experiences it as if starting from zero. 

The friction is real, and it’s expensive. Every moment of confusion, every duplicated form, every advisor who doesn’t have complete context is a moment where the student considers whether continuing is worth the hassle. 

The opportunity sits in building modular, always-on learning environments where micro-credentials, degrees, and continuous upskilling integrate seamlessly. Picture this: A student completes a graduate certificate in data analytics. Three years later, they return for an MBA. The certificate credits automatically apply, their prior work is visible to new faculty, and the advising team can build on previous conversations rather than starting fresh. The student doesn’t have to re-explain themselves. They’re simply continuing a relationship the institution remembers. 

This isn’t hypothetical. Some institutions are building this now, and it’s becoming their competitive advantage. 

This vision requires treating education as a lifelong relationship rather than a four-year transaction. It means building systems that remember students, adapt to their changing needs, and make re-entry feel seamless rather than starting from scratch. The institutions that crack this will turn alumni into lifelong learners and turn education into something that compounds in value over time. 

This fundamentally shifts how institutions think about their role. Instead of a four-year engagement, you’re building relationships that span careers. Alumni who return for stackable credentials every few years represent the best kind of growth: learners you’ve already served well, who understand how your programs work, and who are advocating for your institution with their employers. This is how institutions build enrollment resilience in a shifting demographic landscape. 

What This Looks Like in Practice 

Transformation at this scale relies on strategic planning and attention to detail. It happens when your data architecture can track a learner across programs, modalities, and decades. When your student information system doesn’t silo traditional and non-traditional students into separate workflows and data structures. When your advising model scales to support someone taking one course just as effectively as someone enrolled full-time. 

The institutions getting this right are treating it as a technology transformation, not just a strategy refresh. They’re building unified data layers, modernizing APIs, and creating seamless user experiences. They’re measuring success by how little friction a learner experiences, not just by enrollment and retention numbers. 
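As one illustrative sketch of what a unified data layer implies (the field names and structures here are hypothetical, not a reference to any particular student information system), a learner record that spans programs, modalities, and decades might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Enrollment:
    program: str          # e.g. "Data Analytics Certificate", "MBA"
    modality: str         # "on-campus", "online", or "hybrid"
    year: int
    credits_earned: int

@dataclass
class LearnerRecord:
    learner_id: str
    name: str
    enrollments: list[Enrollment] = field(default_factory=list)

    def total_credits(self) -> int:
        # Credits accumulate across every program and modality,
        # so a returning learner never starts from zero.
        return sum(e.credits_earned for e in self.enrollments)

    def is_returning(self) -> bool:
        # A second enrollment means the institution already "remembers" this learner.
        return len(self.enrollments) > 1

# A learner who completed a certificate online and returns years later for an MBA
record = LearnerRecord("L-1001", "Jordan")
record.enrollments.append(Enrollment("Data Analytics Certificate", "online", 2022, 12))
record.enrollments.append(Enrollment("MBA", "hybrid", 2025, 6))
print(record.total_credits())   # 18
print(record.is_returning())    # True
```

The point of the sketch is the design choice: one record keyed to the learner, not one record per program or modality, so advising and registration systems can query a single source of history.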

Building the Foundation for What’s Next 

The universities that thrive over the next decade will be the ones that expand their definition of students to include learners at every career stage. They’ll create unified platforms where online and on-campus blend seamlessly, building experiences that serve diverse populations with equal care. 

Transformation happens in the essential work of modernizing systems, integrating data, and building platforms for lifelong learning. It happens when institutions shift their focus from what they’ve always done to designing for who they could serve. 

The institutions leading this work will be the ones that respond to the enrollment cliff by expanding who they serve. The ones that understand serving lifelong learners requires purpose-built infrastructure. The ones ready to measure success by skills activated rather than degrees awarded. 

The opportunity is clear: institutions that expand their definition of ‘student’ and build unified platforms for lifelong learning will own the next decade. But expanding who you serve only matters if learners believe your programs are worth their investment. In the next article, we’ll explore how institutions prove value in a skills economy—how they make learning outcomes transparent, credentials employer-legible, and career pathways visible from day one. 

Read part 2 of our Rewired series, How Higher Education Proves Value in the Skills Economy.

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing. 




FAQs 

How can universities grow enrollment during the demographic cliff? 

Growth comes from expanding who you define as a student. First-time adult learners, students with transfer credits, professionals seeking micro-credentials, and alumni returning for reskilling represent massive underserved populations. Institutions that build systems serving these learners as well as traditional undergraduates will find new revenue streams throughout the demographic transition. 

How do institutions serve traditional students and lifelong learners simultaneously? 

By building unified platforms where different learner types access personalized experiences through the same underlying systems. An 18-year-old residential student and a 40-year-old professional seeking a certificate have different needs, but both benefit from intelligent advising, clear pathways, and responsive operations. The technology should adapt to the learner, not force the learner to adapt to rigid categories. 

What does a unified learning platform actually include? 

A unified platform integrates registration, advising, credential tracking, and student services across all learning modes. It remembers student history regardless of how long they’ve been away, allows seamless transitions between degree programs and micro-credentials, and personalizes communication and support based on individual circumstances. The goal is making re-entry as natural as initial enrollment. 

Why is lifelong learning more valuable than traditional four-year models? 

Lifelong learning creates recurring revenue streams and deeper alumni relationships. Students who return multiple times throughout their careers generate sustained tuition revenue while building stronger institutional loyalty. Education becomes a compounding relationship rather than a single transaction, increasing lifetime value per student. 

How I Learned to Stop Worrying and Love AI Code: A Designer’s Journey 

How Designers Are Using AI Code Tools: From Figma to Functional Prototypes 


The team Zoom call felt like an intervention. “Just try it,” they said. “Everyone open Claude Code.” My palms were sweating. Twenty years of visual design, and I’d only clumsily played around with code. I was the furthest thing from a developer you could imagine. 

Two hours later, I couldn’t stop. I’d built three working prototypes. My ideas, the ones that lived and died in Figma for years, were suddenly real. Interactive. Alive. 

This is the story of how I went from code-phobic to code-addicted in a single afternoon. And why every designer reading this is about to follow the same path. 

The Designer–Developer Divide 

For decades, we’ve accepted a fundamental lie: Designers design, developers develop. The gap between these worlds felt like a chasm requiring years of computer science education to cross. HTML, CSS, and JavaScript were foreign languages spoken in basement servers and terminal windows. 

I believed this myth completely. My job was making things beautiful. Someone else’s job was making them work. This division of labor felt natural, inevitable, and even efficient. Why would I learn to code when developers already did it so well? 

That myth cost me every idea I couldn’t prototype myself, every interaction I couldn’t test, and every vision that got lost in translation. Twenty years of creative constipation, waiting for someone else to birth my ideas. 

Five Minutes to AI-Powered Prototyping 

“Open your terminal,” they said. Haha. I’d only ever really seen it used in The Matrix. The black window appeared. The cursor blinked in judgment. “Type ‘claude’ and tell it what you want to build.” 

My first prompt was embarrassingly simple: “Make me a color palette generator.” I expected nothing. Error messages, maybe. Definitely not working code. 

But there it was. A functioning app. My app. Built with my words. 

The next prompt came faster: “Add a feature that saves palettes locally.” Done. “Make the colors animate when they change.” Done. Each success made me braver. Each response made me hungrier. 

By the end of that call, I wasn’t just using AI to code. I was thinking in code. The barrier I’d spent two decades accepting had evaporated in minutes. 

The New Addiction: Vibe Coding 

They call it “vibe coding,” this conversational dance with AI. You describe what you want. The AI builds it. You refine. It rebuilds. No syntax to memorize. No documentation to parse. Just pure creative expression flowing directly into functional reality. 

I became obsessed. That first night, I built seven prototypes. Not because anyone asked. Not because I needed them. Because I could. Every design idea I’d shelved, every interaction I’d dreamed about was suddenly possible. 

The feeling was intoxicating. After years of creating static mockups, watching my designs move and respond felt like gaining a superpower. Click this button, trigger that animation. Hover here, reveal that detail. My aesthetic decisions instantly became experiential. 

When Designers Start Coding 

Something profound happens when the person with design taste controls implementation. The endless back-and-forth disappears. The “that’s not quite what I meant” conversations vanish. The design is the product is the code. 

UXPin’s research shows designers can now “generate fully functional components with just a few inputs.” But that clinical description misses the emotional reality. It’s not about generating components. It’s about giving creative vision direct access to digital reality. 

I started noticing details I’d never considered before. The precise timing of transitions. The subtle response to user actions. The difference between functional and delightful. When you control every aspect of implementation, you start designing differently. You start designing more ambitiously, more precisely, and with more courage.  

AI Code Tools That Make It Possible 

The technology enabling this transformation is staggering. Visual Copilot converts Figma designs directly to React code. Codia processes designs 100x faster than manual coding. These aren’t incremental improvements. They’re paradigm shifts disguised as product features. 

But the tools are just enablers. The real revolution happens in your mind. That moment when you realize the prison was self-imposed. The guards were imaginary. The key was always in your pocket. 

Natural language is the new programming language. If you can describe what you want, you can build it. If you can envision it, you can ship it. The only barrier left is imagination. 

The Future of Designer-Coders 

Organizations clinging to traditional designer-developer divisions are about to face a reckoning. While they coordinate handoffs and manage miscommunications, designers who code are shipping. Iterating. Learning. Building. 

This shift amplifies designers. Developers can focus on complex systems and architecture. Designers can implement their vision directly. Everyone works at a higher level of abstraction and impact. 

The competitive advantage is obvious. Teams with designer-coders ship better products faster. Not because they’re more efficient, but because they’re more effective. Vision and execution unified in a single mind. 

Your First Steps with AI Coding 

I know what you’re thinking. “But I’m not technical.” Neither was I. “But I don’t understand programming.” You don’t need to. “But I’m just a designer.” That’s exactly why you’re perfect for this. 

The same skills that make you a great designer (understanding users, crafting experiences, obsessing over details) make you a natural at AI-powered development. You already think in systems and interactions. Now you can build them. 

Start small. Open a terminal and type a prompt. Build something stupid. Then build something slightly less stupid. Within hours, you’ll be building things that matter. Within days, you’ll wonder how you ever worked without this power. 

The Designer You’ll Become with AI 

Six months later, I barely recognize my old workflow. Static mockups feel like cave paintings. Design documentation seems like elaborate fiction. The idea of handing off my vision for someone else to interpret? Unthinkable. 

My role hasn’t changed. I’m still a visual designer. But my capability has transformed. I don’t just imagine experiences; I create them. I don’t just propose ideas; I prove them. I don’t just design products; I ship them. 

The code anxiety is gone. Every limitation that once constrained me now seems artificial. The only question left is what to build next. 

Your journey starts with a single prompt. What will yours be? 





FAQs 

Q: What are AI code tools? 
AI code tools (like Claude Code, GitHub Copilot, or Visual Copilot) let you describe what you want in natural language, then generate working code automatically. 

Q: How can designers use AI code tools? 
Designers can turn Figma mockups or written prompts into functional prototypes, animations, and interactions—without learning traditional programming. 

Q: Does this replace developers? 
No. Developers focus on complex architecture, scaling, and systems. AI coding empowers designers to own interaction and experience details, speeding collaboration. 

Q: Why does this matter for organizations? 
Teams that adopt AI prototyping iterate faster, align design and development more tightly, and ship higher-quality products with fewer miscommunications. 

Q: What skills do designers need to start? 
Curiosity and creativity. If you can describe an idea clearly, you can build it with AI code tools. 

The Death of the Design Handoff: How AI Turns Tastemakers into Makers 

Every designer knows the ritual. You pour weeks into pixel-perfect mockups. You document every interaction, annotate every state, and build out comprehensive design systems. Then you hand it all to development and pray. 

Three sprints later, what comes back looks… different. Not wrong exactly, but not right either. The spacing feels off. The animations lack finesse. That subtle gradient you agonized over? Gone. The developer followed your specs perfectly, yet somehow the soul got lost in translation. 

Designers have always accepted this degradation as the cost of building digital products. We tried creating processes to minimize it, like design tokens, component libraries, and endless documentation, but we never stopped to question the handoff itself. 

Until now. AI just made the entire ritual obsolete. 

AI Ends the Design Handoff 

The design-to-development pipeline has always been messy, more like a game of telephone in a storm than a straight line. A designer’s vision turns into static mockups, those mockups get turned into specs, and then the specs are coded by someone who wasn’t there when the creative calls were made. 

Every step adds noise. Every handoff blurs the details. By the time the design reaches a user, the intent has been watered down through too many layers of translation. To manage the loss, we added layers. Product managers translate between teams, QA engineers catch mistakes, and design systems impose order. But taste cannot be standardized. 

AI design-to-code tools eliminate this process entirely. When a designer can move directly from Figma to functional code, the telephone line disappears. One vision, one implementation, and zero interpretation. 

Developers Spend Half Their Time on UI 

Here’s a truth we rarely say out loud. Developers spend 30–50% of their time on UI implementation. They’re not solving tough algorithms or designing big system architectures. They’re taking what’s already laid out in Figma and turning it into code. It takes skill and attention, but it’s work that repeats more than it invents. 

I’m not criticizing developers. I’m criticizing this process. We’ve asked our most technical team members to spend up to half their time as human transpilers, converting one formal language (design) into another (code). The real tragedy? They’re good at it. So good that we never stopped to ask if they should be doing it at all. 

When Airbnb started generating production code from hand-drawn sketches, they weren’t just saving time. They were liberating their engineers to focus on problems that actually require engineering. 

The Rise of the Tastemaker-Maker 

Something big shifts when designers can bring their own vision to life. The feedback loop shrinks from weeks to minutes. When something doesn’t look right, you can fix it immediately. If inspiration strikes, you can send it to staging and get real reactions in hours instead of weeks. What used to take whole sprints now fits inside a single coffee break. 

It’s tempting to frame this as designers turning into developers, but that misses the point. What’s really happening is that taste itself can now be put into action. The person who knows why a button feels right at 48 pixels, or why an animation needs a certain ease, or why an error state demands a particular shade of red, can actually make those choices real. 

That shift is giving rise to a new kind of role: the tastemaker-maker. They’re not confined to design or development but move fluidly between both. They hold the vision and the skills to bring it to life. They think in experiences and build in code. 

What Happens When Handoffs Disappear 

The implications ripple outward. When handoffs disappear, so do the roles built around managing them. The product manager who translates between design and development. The QA engineer who catches implementation mismatches. The technical lead who estimates UI development time. 

Teams start reorganizing around vision rather than function. Instead of design teams and development teams, you get product teams led by tastemaker-makers who can move from concept to code without translation. Supporting them are engineers focused on what AI can’t do: solving novel technical challenges, building robust architectures, optimizing performance at scale. 

This is job elevation. Developers stop being expensive markup translators and become true engineers. Designers stop being documentation machines and become product builders. Everyone moves up the value chain. 

AI Design to Code Speeds Shipping 

Companies using AI design-to-code tools report shipping features 3x faster with pixel-perfect accuracy. That’s a step function change in capability. While your team is still playing telephone, your competitors are shipping experiences that feel inevitable because they were never compromised by translation. 

The gap compounds daily. Each handoff you eliminate saves time on that project and builds institutional knowledge about what becomes possible when vision and execution converge. Your competitors are shipping faster and learning faster. 

How to Reorganize Without Handoffs 

Adopting AI design-to-code tools is the easy part. The hard part is reimagining your organization without handoffs. Start here: 

Identify your tastemaker-makers. They already exist in your organization. These are the designers who code on the side with strong aesthetic sense. Give them AI tools and watch them soar. 

Reorganize around products, not functions. Small teams with end-to-end ownership beat large teams with perfect handoffs every time. 

Measure differently. Stop counting tickets closed and start counting experiences shipped. Quality and velocity aren’t tradeoffs when the same person owns both. 

The End of the Design Handoff Era 

The design handoff was a bug in digital product development. A workaround for the technological limitation that the person who could envision the experience couldn’t build it. That limitation just died, and with it, an entire way of working that we tolerated for so long we forgot it was broken. 

The future belongs to those who can both dream and deliver. The handoff is dead. Long live the makers. 





FAQs 

What is a design handoff? 
The process where designers deliver mockups and specifications to developers, who then translate them into code. 

Why is the handoff inefficient? 
Each translation from design to documentation to implementation introduces information loss, slowing delivery and compromising quality. 

How do AI design-to-code tools change the process? 
They allow direct conversion from design tools like Figma into functional code, eliminating the translation step. 

What is a tastemaker-maker? 
A hybrid role that combines a designer’s vision with the ability to implement in code, collapsing feedback loops and accelerating iteration. 

Does this replace developers? 
No. It elevates developers to focus on complex engineering challenges, while routine UI translation is handled by AI. 

What’s the business impact? 
Companies using these tools report shipping 3x faster with higher fidelity—creating both a speed and learning advantage. 

AI Agents Are Users Too: Rethinking Research for Multi-Actor Systems 

The first time an AI assistant rescheduled a meeting without human input, it felt like a novelty. Now it happens daily. Agents draft documents, route tickets, manage workflows, and interact on our behalf. They are no longer hidden in the background. They have stepped into the front lines, shaping experiences as actively as the people they serve. 

For research leaders, that changes the question. We have always studied humans. But when an agent performs half the task, who is the user? 

Agents on the Front Lines of Experience 

AI agents reveal truths that interviews cannot. Their activity logs expose where systems succeed and where they stumble. A rerouted request highlights a friction point. A repeated error marks a design flaw. Escalations and overrides surface moments where human judgment still needs to intervene. These are not anecdotes filtered through memory. They are live records of system behavior. 

And that’s why we need to treat agents as participants in their own right. 

A New Kind of Participant 

Treating agents as research participants reframes what discovery looks like. Interaction data becomes a continuous feed, showing failure rates, repeated queries, and usage patterns at scale. Humans remain the primary source of insight: the frustrations, the context, and the emotional weight. Agent activity adds another layer, highlighting recurring points of friction within the workflow and offering evidence that supports and extends what people share. Together, they create a more complete picture than either could alone. 
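As a minimal sketch of what treating agent activity as a research feed can look like (the log format, field names, and thresholds here are hypothetical), interaction data can be aggregated to surface repeated errors and escalation rates:

```python
from collections import Counter

# Hypothetical agent activity log: each entry records the task an agent
# attempted and the outcome ("success", "error", or "escalated").
log = [
    {"task": "reschedule_meeting", "outcome": "success"},
    {"task": "fill_form",          "outcome": "error"},
    {"task": "fill_form",          "outcome": "error"},
    {"task": "route_ticket",       "outcome": "escalated"},
    {"task": "fill_form",          "outcome": "error"},
    {"task": "route_ticket",       "outcome": "success"},
]

def friction_points(entries, min_errors=2):
    """Tasks whose repeated errors suggest a design flaw worth a closer look."""
    errors = Counter(e["task"] for e in entries if e["outcome"] == "error")
    return {task: n for task, n in errors.items() if n >= min_errors}

def escalation_rate(entries):
    """Share of agent actions where human judgment had to intervene."""
    escalated = sum(1 for e in entries if e["outcome"] == "escalated")
    return escalated / len(entries)

print(friction_points(log))            # {'fill_form': 3}
print(round(escalation_rate(log), 2))  # 0.17
```

Counts like these are the starting point, not the finding: the repeated `fill_form` errors tell a researcher where to point the next round of human interviews.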

Methodology That Respects the Signal 

Of course, agent data is not self-explanatory. Logs are noisy. Bias can creep in if models were trained on narrow datasets. Privacy concerns must be addressed with care. The job of the researcher remains critical: separating signal from noise, validating patterns, and weaving human context into machine traces. Instead of replacing human perspective, agent data can enrich and ground it, adding evidence that makes qualitative insight even stronger. This reframing doesn’t just affect research practice, it also changes how we think about design. 

Designing for Multi-Actor Systems 

Products are no longer built for humans alone. They must work for the people who use them and the agents that increasingly mediate their experience. A customer may never touch a form field if their AI assistant fills it in. An employee may never interact directly with a dashboard if their agent retrieves the results. Design must account for both participants. 

Organizations that learn to research this new ecosystem will see problems sooner, adapt faster, and scale more effectively. Those that continue to study humans alone risk optimizing for only half the journey. 

The New Research Frontier 

Research has always been about listening closely. Today, listening means more than interviews and surveys. It means learning from the digital actors working beside us, the agents carrying out tasks, flagging failures, and amplifying our actions. 

The user is no longer singular. It is human and machine together. Understanding both is the only way to design systems that reflect the reality of work today. 

This piece expands the very definition of the user. For the other shifts redefining research, see our earlier explorations of format (how to move beyond static deliverables) and scope (how AI dissolves the depth vs. breadth tradeoff). 





FAQs

Why consider AI agents as research participants?
AI agents actively shape workflows and user experiences. Their activity logs reveal friction points, errors, and escalations that human feedback alone may miss. Including them as research participants offers a more complete picture of how systems actually perform.

Do AI agents replace human participants in research?
No. Humans remain the primary source of context, emotion, and motivation. Agent data adds a complementary layer of evidence, enriching and grounding what people already share.

What types of insight can AI agents provide?
Agents surface recurring points of friction, repeated errors, and escalation patterns. These signals highlight where workflows break down, offering evidence to support and extend human feedback.

What role do researchers play when analyzing agent data?
Researchers remain critical. They filter noise, validate patterns, address bias, and ensure agent activity is interpreted with proper human context. The shift broadens qualitative practice rather than replacing it.

What is a multi-actor system in research?
A multi-actor system is one where both humans and AI agents interact to complete tasks. Designing for these systems means studying the interplay between people and machines, ensuring both participants are accounted for.

How does including agents in research improve design?
By listening to both humans and agents, organizations can spot problems sooner, adapt faster, and create systems that reflect the true complexity of modern workflows.

How AI Ends the Depth vs. Breadth Research Tradeoff 

The transcripts pile up fast. Ten conversations yield sticky notes that cover the wall, each quote circled, each theme debated. By twenty, the clusters blur. At thirty, the team is saturated, sifting through repetition in search of clarity. The insights are still valuable, but the effort to make sense of them begins to outweigh the return. 

This has always been the tradeoff: go deep with a few voices, or broaden the scope and risk losing nuance. Leaders accepted that limitation as the cost of qualitative research. 

That ceiling is gone. 

The Ceiling Was Human Labor 

Generative research has always promised what numbers cannot capture: the story beneath the metric. But human synthesis is slow. Each new transcript multiplies complexity until the process itself becomes the limiter. Teams stopped at 20 or 30 conversations not because curiosity ended, but because the hours to make sense of them did. Nuance gave way to saturation. 

Executives signed off on smaller studies and called it pragmatism. In truth, it was constraint. 

AI Opens the Door to Scale 

Large language models change the equation. Instead of weeks of sticky notes and clustering, AI can surface themes in hours. It highlights recurring ideas, connects outliers, and organizes insights without exhausting the team. The researcher’s role remains. Judgment still matters, but the ceiling imposed by human-only synthesis disappears. 
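To make the idea concrete, here is a deliberately naive sketch of theme surfacing: terms that recur across several transcripts become candidate themes. A real pipeline would use an LLM or embeddings; the stopword list and sample quotes below are illustrative.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "to", "i", "it", "and", "was", "is", "my", "of", "in"}

def surface_themes(transcripts, min_mentions=2):
    """Naive theme surfacing: terms appearing in several conversations
    stand in for LLM-based synthesis at sketch scale."""
    doc_terms = [
        set(re.findall(r"[a-z']+", t.lower())) - STOPWORDS for t in transcripts
    ]
    counts = Counter(term for terms in doc_terms for term in terms)
    return [term for term, n in counts.most_common() if n >= min_mentions]

transcripts = [
    "The checkout froze and I lost my cart.",
    "Checkout kept timing out on my phone.",
    "Search works, but checkout is painful.",
]
themes = surface_themes(transcripts)
# "checkout" recurs across all three conversations and surfaces as a theme.
```

Note the scaling property: each added transcript refines the counts rather than adding to a wall of sticky notes.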

Instead of diluting clarity as the volume grows, each additional conversation now sharpens the signal: patterns strengthen, weak signals surface earlier, and leaders gain the confidence to act on richer evidence. 

Discovery Becomes Active 

The real breakthrough is not only scale, but also timing. With AI-enabled synthesis, insights emerge as the study unfolds. After the first dozen conversations, early themes are visible. Gaps in demographics or use cases show up while there is still time to adjust. By week two, the research is already feeding product decisions. 

Instead of waiting for a final report, teams get a living stream of discovery. Research shifts from retrospective artifact to active driver of strategy. 
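A minimal sketch of that living stream, using a hypothetical `LiveSynthesis` helper that updates themes and flags sample gaps as each conversation is added:

```python
from collections import Counter

class LiveSynthesis:
    """Sketch of rolling synthesis: themes and sample gaps update after
    every conversation instead of waiting for a final report."""

    def __init__(self, target_segments):
        self.target_segments = set(target_segments)
        self.seen_segments = Counter()
        self.theme_counts = Counter()

    def add_conversation(self, segment, themes):
        self.seen_segments[segment] += 1
        self.theme_counts.update(themes)

    def emerging_themes(self, min_mentions=3):
        return [t for t, n in self.theme_counts.most_common() if n >= min_mentions]

    def sample_gaps(self):
        # Segments planned for the study but not yet interviewed.
        return sorted(self.target_segments - set(self.seen_segments))

study = LiveSynthesis(["gen_z", "millennial", "boomer"])
study.add_conversation("gen_z", ["pricing", "onboarding"])
study.add_conversation("millennial", ["pricing"])
study.add_conversation("gen_z", ["pricing", "trust"])
# Three conversations in, "pricing" is already an emerging theme,
# and the missing "boomer" segment is visible while there is still
# time to recruit.
```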

Nuance at Speed 

For organizations, this ends the false binary. Depth and breadth no longer compete. A bank exploring new digital features can capture voices across demographics in weeks, not months. A health-tech team can fold dozens of patient experiences into the design cycle in real time. A software platform can test adoption signals across continents without sacrificing cultural nuance. 

The payoff is more than efficiency. It is confidence. When executives see both scale and nuance in the evidence, they act faster and with greater conviction. 

The New Standard 

The era of choosing between depth or breadth is behind us. AI frees research leaders from the constraints of small samples or limited perspectives. With AI as a synthesis partner, the standard shifts: hundreds of voices, interpreted with clarity, delivered at speed. 

For teams still focused on fixing the format problem, our previous piece, The $150K PDF That Nobody Reads, explores how static reports constrain research. Our next article examines an even bigger shift: what happens when your users are no longer only people.




FAQs

What is the depth vs. breadth tradeoff in qualitative research?
The depth vs. breadth tradeoff refers to the long-standing belief that teams must choose between conducting a small number of interviews with rich nuance (depth) or a larger sample with less detail (breadth). Human synthesis struggles to handle both simultaneously, forcing this choice.

How does AI change the depth vs. breadth tradeoff?
AI dissolves the tradeoff by enabling researchers to process hundreds of conversations quickly while still preserving nuance. Instead of diluting insight, scale strengthens pattern recognition and surfaces weak signals earlier.

Why has qualitative research been constrained to small sample sizes?
Human synthesis is time-consuming. After 20–30 interviews, transcripts become overwhelming, and important signals get lost in the noise. This labor bottleneck led leaders to view small samples as “pragmatic,” even though it was really a constraint of capacity.

Does AI replace the role of the researcher?
No. AI accelerates synthesis, but the researcher remains critical for judgment, interpretation, and ensuring context and nuance are applied correctly. AI acts as a partner that expands capacity rather than a replacement.

What is the impact of AI-enabled synthesis on decision-making?
With faster synthesis and preserved nuance, research insights emerge in real time rather than only in final reports. Leaders gain richer evidence earlier, which supports faster, more confident decisions.

What does this mean for the future of qualitative research?
The old tradeoff between depth and breadth is over. AI makes it possible to achieve both simultaneously, shifting the standard for research to hundreds of voices interpreted with clarity and delivered at speed.

Jeff Kirk Named Executive Vice President of Applied AI at Robots & Pencils 

From Alexa to Emma, Kirk brings two decades of AI breakthroughs that have reshaped industries. Now he’s powering Robots & Pencils’ rise in the intelligence age. 

Robots & Pencils, an AI-first, global digital innovation firm specializing in cloud-native web, mobile, and app modernization, today announced the executive appointment of Jeff Kirk as Executive Vice President of Applied AI. A seasoned technology leader with a career spanning global agencies, startups, and Fortune 100 enterprises, Kirk steps into this newly created role to accelerate the firm’s AI-first vision and unlock transformative outcomes for clients. As EVP of Applied AI, Kirk will lead the firm’s strategy and delivery of AI-powered and enterprise AI solutions across industries. 

Explore how Robots & Pencils blends science and design to build market leaders. 

Kirk’s track record speaks for itself, with AI breakthroughs that fueled customer engagement and business growth. He founded and scaled Moonshot, an intelligent digital products company later acquired by Pactera, where he spearheaded next-generation experiences in voice, augmented reality, and enterprise digitalization. At Amazon, he served as International Product & Technology Lead for Alexa, driving AI-powered personal assistant expansion to millions of households and users worldwide. Most recently, at bswift, Kirk led AI & Data as VP, delivering conversational AI breakthroughs with the award-winning Emma assistant and GenAI-powered EnrollPro decision support system. 

Across each of these roles runs a common thread. Kirk builds and scales innovations that transform how industries work, creating technologies that move from experimental to essential at breathtaking speed. 

“Jeff has been at the frontier of every major shift in digital innovation,” said Len Pagon, CEO of Robots & Pencils. “From shaping the future of eCommerce and mobile platforms at Brulant and Rosetta, to pioneering global voice AI at Amazon, to launching AI-driven customer experiences at bswift, Jeff has consistently delivered what’s next. He doesn’t just talk about AI. He builds products that millions use every day. With Jeff at the helm of Applied AI, Robots & Pencils is sharpening its challenger edge, helping clients leap ahead while legacy consultancies struggle to catch up. I’m energized by what this means for our clients and inspired by what it means for our people.” 

Across two decades, Kirk has built a reputation for translating complex business requirements into enterprise-grade AI and technology solutions that scale, stick, and generate measurable results. His entrepreneurial mindset and hands-on leadership style uniquely position him to help clients experiment, activate, and operate AI across their businesses. 

“Organizations and their workers are under pressure to innovate on behalf of customers while simultaneously learning to work with a new type of co-worker: artificial intelligence,” said Kirk. “The steps we take together to learn to work differently will lead to the most outsized innovation in our industries. I’m thrilled to join Robots & Pencils to push the boundaries of what’s possible with AI, to deliver outcomes that matter for our clients and their customers, and to create opportunities for our teams to do the most meaningful work of their careers.” 

Kirk began his career at Brulant and Rosetta, where he worked alongside Pagon and other Robots & Pencils’ executive team members, leading engineering and solutions architecture across content, commerce, mobile, and social platforms. His return to the fold marks both a reunion and a reinvention, positioning Robots & Pencils as a leader in applied AI at scale. 


The $150K PDF That Nobody Reads: From Research Deliverables to Living Systems 

A product executive slides open her desk drawer. Tucked between old cables and outdated business cards is a thick, glossy report. The binding is pristine, the typography immaculate, the insights meticulously crafted. Six figures well spent, at least according to the invoice. Dust motes catch the light as she lifts it out: a monument to research that shaped… nothing, influenced… no one, and expired the day it was delivered. 

It’s every researcher’s quiet fear. The initiative they poured months of work, a chunk of their sanity, and about a thousand sticky notes into becomes shelf-ware. Just another artifact joining strategy decks and persona posters that never found their way into real decisions. 

This is the way research has been delivered for decades: by global consultancies, boutique agencies, and, yes, even by me. At $150K a report, it sounds extravagant. But when you consider the sheer effort, the rarity of the talent involved, and the stakes of anchoring business decisions in real customer insight, it’s not hard to see why leaders sign the check. 

The issue isn’t the value of the research. It’s the belief that insights should live in documents at all. 

Research as a Living System 

Now picture a different moment. The same executive doesn’t reach for a drawer. She opens her laptop and types: “What causes the most friction when ordering internationally?” 

Within seconds she’s reviewing tagged quotes from dozens of interviews, seeing patterns of friction emerge, even testing new messaging against synthesized persona responses. The research isn’t locked in a PDF. It’s alive, queryable, and in motion. 

This isn’t a fantasy. It’s the natural evolution of how research should work: not as one-time deliverables, but as a living system. 

The numbers show why change is overdue. Eighty percent of Research Ops & UX professionals use some form of research repository, but over half report fair or poor adoption. The tools are frustrating and time-consuming to maintain, and they lack clear ownership. Instead of mining the insights they already have, teams commission new studies, an expensive cycle of creating artifacts that sit idle while decisions move on without them. 
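The queryable alternative is simple to sketch. Assuming quotes have been tagged at analysis time (the store and tags below are illustrative), the executive's question becomes a tag query:

```python
# Minimal sketch of a queryable insight store: interview quotes tagged
# during analysis, retrievable at the moment of decision.
INSIGHTS = [
    {"quote": "Customs fees surprised me at checkout.",
     "tags": {"international", "pricing", "friction"}},
    {"quote": "The address form rejected my postal code.",
     "tags": {"international", "forms", "friction"}},
    {"quote": "Loved the one-click reorder.",
     "tags": {"retention", "delight"}},
]

def query(tags):
    """Return quotes matching all requested tags, e.g. the executive's
    'friction when ordering internationally' question."""
    wanted = set(tags)
    return [i["quote"] for i in INSIGHTS if wanted <= i["tags"]]

hits = query(["international", "friction"])
# Two matching quotes surface instantly; the delight quote stays out
# of the way.
```

In production the matching would be semantic rather than exact-tag, but the shift is the same: insights live in a system you ask, not a document you reread.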

It’s a Usability Problem 

Research hasn’t failed because of weak insights. It’s been constrained by the static format of reports. Once findings are bound in a PDF or slide deck, the deliverable has to serve multiple audiences at once, and it starts to bend under its own weight. 

For executives, the executive summary provides a clean snapshot of findings. But when the time comes to make a concrete decision, the summary isn’t enough. They have to dive into the hundred-page appendix to trace back the evidence, which slows down the moment of action. 

On the other hand, product teams don’t need summaries; they need detailed insights for the feature they’re building right now. In long static reports, those details are often buried or disconnected from their workflow. Sometimes they don’t even realize the answer exists, so the research goes unused, or even gets repeated. An insight that can’t be surfaced when it’s needed might as well not exist. 

The constraint isn’t the quality of the research. It’s the format. Static deliverables fracture usability across audiences and leave each group working harder than they should to put insights into play. 

Research as a Product 

While we usually view research as an input into products, research itself is a product too. And with a product mindset, there is no “final deliverable,” only an evolving body of user knowledge that grows in value over time. 

In this model, the researcher acts as a knowledge steward of the user insight “product,” curating, refining, and continuously delivering customer insights to their users: the executives, product managers, designers, and engineers who need insights in different forms and at different moments. 

Like any product, research needs a roadmap. It has gaps to fill, like user groups not yet heard from or behaviors not yet explored. It has features to maintain, like transcripts, coded data, and tagged insights. And it has adoption goals, because insights only create value when people use them. 

This approach transforms reports too. A static deck becomes just a temporary framing of the knowledge that already exists in the system. With AI, you can auto-generate the right “version” of research for the right audience, such as an executive summary for the C-suite, annotations on backlog items for product teams, or a user-centered evaluation for design reviews. 
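A toy illustration of that recomposition, using a hypothetical `render_view` helper that produces different framings of the same insight store (in practice an LLM would generate the prose):

```python
def render_view(insights, audience):
    """Recompose the same insight store into audience-specific views:
    a terse summary for executives, tagged detail for product teams."""
    if audience == "executive":
        return f"{len(insights)} validated insights; top theme: {insights[0]['theme']}"
    if audience == "product":
        return [f"[{i['theme']}] {i['finding']}" for i in insights]
    raise ValueError(f"unknown audience: {audience}")

insights = [
    {"theme": "checkout friction", "finding": "Users abandon at customs-fee step"},
    {"theme": "onboarding", "finding": "Tutorial skipped by 70% of new users"},
]
exec_view = render_view(insights, "executive")
product_view = render_view(insights, "product")
```

The deck stops being the source of truth; it becomes one rendering among many of the knowledge that already exists in the system.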

Treating research as a product also opens the door to continuous improvement. A research backlog can track unanswered questions, emerging themes, and opportunities for deeper exploration. Researchers can measure not just delivery (“did we produce quality insights?”) but usage (“did the insights influence a decision?”). Over time, the research “product” compounds in value, becoming a living, evolving system rather than a series of static outputs. 

This new model requires a new generation of tools. AI can now cluster themes, surface patterns, simulate persona responses, and expose insights through natural Q&A. AI makes the recomposition of insights into deliverables cheap. That allows us to focus on how our users get the insights they need in the way they need them. 

From Deliverable to Product 

Treating research as a product changes the central question. It’s no longer, “What should this report contain?” but “What questions might stakeholders need to answer, and how do we make those answers immediately accessible?” 

When research is built for inquiry, every transcript, survey, and usability session becomes part of a living knowledge base that compounds in value over time. Success shifts too: not in the number of reports delivered, but in how often insights are pulled into decisions. A six-figure investment should inform hundreds of critical choices, not one presentation that fades into archives. 

And here’s the irony: the product mindset actually produces better reports as well. When purpose-built reports focus as much on their usage as the information they contain, they become invaluable components of the software production machine. 

Research itself isn’t broken. It just needs a product mindset and AI-based qualitative analysis tools that turn insights into a living system, not a slide deck. 

Next in the series, we look at two more shifts: AI removing the depth vs. breadth constraint, and the rise of agents as research participants.

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request a strategy session.  



FAQs

What is the problem with traditional research reports?
Traditional reports often serve as static artifacts. Once published, they struggle to meet the needs of multiple audiences and quickly become outdated, limiting their impact on real decisions.

Why is research often underutilized in organizations?
Research is underutilized because its insights are locked in formats like PDFs or decks. Executives, product teams, and designers often cannot access the right detail at the right time, so findings go unused or studies are repeated.

What does it mean to treat research as a product?
Treating research as a product means building a continuously evolving knowledge base rather than one-time deliverables. Insights are curated, updated, and delivered in forms that align with the needs of different stakeholders.

How does AI support this new model?
AI makes it possible to cluster themes, surface weak signals, and generate audience-specific deliverables on demand. This reduces maintenance overhead and ensures insights are always accessible when needed.

What role do researchers play in this model?
Researchers become knowledge stewards, ensuring the insight “product” is accurate, relevant, and continuously improved. Their work shifts from producing final reports to curating and delivering insights that compound in value over time.

How does this benefit organizations?
Organizations gain faster, more confident decision-making. A six-figure research investment can inform hundreds of decisions, rather than fading after a single presentation.

How Agentic AI Is Rewiring Higher Education 

A University Without a Nervous System 

Walk through the back offices of most universities, and you will see the challenge. Admissions runs on one platform, advising on another, learning management on a third, and academic affairs on a fourth. Each system functions, yet little connects them. Students feel the gaps when financial aid processing is delayed, academic records are incomplete, and support processes remain confusing and slow. Leaders feel it in the cost of complexity and the weight of compliance. 

Higher education institutions typically manage dozens of disconnected systems. IT leaders face persistent integration challenges that consume substantial staff time and budget while creating operational bottlenecks that affect both student services and institutional agility. 

For decades, CIOs and CTOs have been tasked with stitching these systems together. Progress came in patches, with integrations here and dashboards there. What emerged looked more like scar tissue than connective tissue. Patchwork technology blocks digital transformation in higher education, and leaders now seek infrastructure that can unify rather than just connect. 

The Rise of Agentic AI as Connective Tissue 

Agentic AI wires the university together. Acting like a nervous system, it routes information and triggers actions throughout the institution, coordinating workflows through intelligent routing and contextual decision-making. Unlike traditional automation that follows rigid rules, agentic AI systems can make contextual decisions, learn from outcomes, and coordinate across multiple platforms without constant human oversight. 

In practice, this means a transfer request automatically verifies transcripts through the National Student Clearinghouse, cross-references degree requirements in the SIS, flags discrepancies for staff to review, and updates student records, typically reducing processing time from 5-7 days to under 24 hours while maintaining accuracy. It means an advising system can recognize a retention risk, trigger outreach, and log the interaction without human staff piecing the puzzle together by hand. 
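The shape of such a workflow can be sketched in a few lines. The service calls below are stubs standing in for Clearinghouse and SIS integrations; everything the agent cannot match cleanly is routed to staff for review:

```python
# Sketch of the transfer-credit workflow as an agentic pipeline.
# verify_transcript and degree_requirements are stubs; a real system
# would call the National Student Clearinghouse and the SIS.
def verify_transcript(student_id):
    return {"student_id": student_id, "courses": ["MATH101", "ENG150"]}

def degree_requirements(program):
    return {"MATH101", "ENG150", "HIST200"}

def process_transfer(student_id, program):
    transcript = verify_transcript(student_id)
    required = degree_requirements(program)
    satisfied = set(transcript["courses"]) & required
    missing = required - satisfied
    # The agent acts on clean matches; discrepancies go to staff.
    return {
        "auto_applied": sorted(satisfied),
        "needs_human_review": sorted(missing),
    }

result = process_transfer("S123", "history_ba")
# Two courses apply automatically; one gap is flagged for an advisor.
```

The design point is the split itself: automation handles the verifiable majority, and human judgment is reserved for the exceptions.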

Agentic AI needs a strong foundation. That foundation is cloud-native infrastructure for universities that’s built to scale during peak demand, enforce compliance, and keep every action visible. With this base in place, universities move from pilot projects to production systems. The result is infrastructure that holds under pressure and adapts when conditions change. 

The Brain Still Decides 

A nervous system does not think on its own. It carries signals to the brain, where decisions are made. In the university context the brain is still human, made up of faculty, advisors, administrators, and executives. 

This is where the design philosophy matters. Agentic AI should amplify human capacity, not replace it. Advisors can spend more time in meaningful conversations with students because degree audits and schedule planning run on their own. CIOs can focus on strategic alignment because monitoring and audit logs are captured automatically. The architecture creates space for judgment, and it also creates space for human connection that strengthens the student experience. 

However, this transition requires careful change management. Faculty often express concerns about AI decision-making transparency, while staff worry about job displacement. Successful implementations address these concerns through clear governance frameworks, explainable AI requirements, and retraining programs that position staff as AI supervisors rather than replacements. 

What Happens When Signals Flow Freely 

When agentic systems begin to carry the load, universities see a different rhythm. Transcript processing moves with speed. Advising interactions trigger at the right time. Students find support without friction. Leaders gain resilience as workflows carry themselves from start to finish. What emerges is more than efficiency. It is an institution that thinks and acts as one, with every part working in concert to support the student journey. 

Designing for Resilience and Trust 

CIOs and CTOs recognize that orchestration brings new responsibility. Data must be structured and governed, with student information requiring FERPA-compliant handling throughout all automated processes. Agents must be observable and auditable. Compliance cannot live as a separate checklist; it must be a property of the system itself. AWS-native controls, from encryption to identity management, provide the levers to design with security as a default rather than a bolt-on. 

At the same time, leaders must design for operational trust. A nervous system functions only when signals are reliable. This requires real-time monitoring dashboards, clear escalation protocols when agents encounter exceptions, and audit trails that document every automated decision. 
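One way to make auditability a property of the system rather than an afterthought is to wrap every agent action in logging and escalation. A minimal sketch, with a hypothetical `update_record` action:

```python
import datetime

AUDIT_LOG = []

def audited(agent_action):
    """Wrap an agent action so every decision is logged and every
    exception escalates to a human instead of failing silently."""
    def wrapper(*args, **kwargs):
        entry = {
            "action": agent_action.__name__,
            "args": args,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        try:
            entry["result"] = agent_action(*args, **kwargs)
            entry["status"] = "ok"
        except Exception as exc:
            entry["status"] = "escalated"
            entry["error"] = str(exc)
        AUDIT_LOG.append(entry)
        return entry
    return wrapper

@audited
def update_record(student_id, field, value):
    # Agents may only write to an allow-listed set of fields.
    if field not in {"advisor", "status"}:
        raise ValueError(f"field {field!r} not writable by agents")
    return {student_id: {field: value}}

ok = update_record("S123", "advisor", "Dr. Lee")
blocked = update_record("S123", "gpa", 4.0)
# Both actions land in AUDIT_LOG; the disallowed write is escalated,
# not silently dropped.
```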

The Next Chapter of Higher Education Infrastructure 

What is happening now is less about another wave of apps and more about a shift in the foundation of the institution. Agentic AI is beginning to operate as infrastructure. It connects the university’s digital systems into something coordinated and adaptive. 

The role of leadership is to decide how that nervous system will function, and what kind of human judgment it will amplify. Presidents, provosts, CIOs, and CTOs who recognize this shift will shape not only the student experience but the operational resilience of their institutions for years to come. 

For leaders evaluating agentic AI initiatives, three factors determine readiness.  

Institutions strong in all three areas see faster implementation and higher adoption rates. 

The institutions that succeed will be those that view agentic AI not as a technology project, but as an organizational transformation requiring new governance models, staff capabilities, and student engagement strategies. 

When the nervous system works, the signals move freely, and people do their best work. Students find support when they need it. Advisors focus on real conversations. Leaders see further ahead. That is the promise of agentic AI in higher education, not machines in charge, but machines carrying the load so people can do what only people can do. 

Join Us

Join us at ASU’s Agentic AI and the Student Experience conference. Contact us to book time with our leaders and explore how agentic AI can strengthen your institution. 

Request an AI Briefing.  

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Learn more about Robots & Pencils AI Solutions for Education. 

Beyond Story Points: Rethinking Software Engineering Productivity in the Age of AI 

Why traditional metrics fall short, and how modern frameworks like DORA and SPACE can guide better outcomes 

For years, engineering leaders have relied on familiar metrics to gauge developer performance: story points, bug counts, and lines of code. These measures offered a shared baseline, especially in Agile environments where estimation and output needed a common language. 

But in today’s AI-assisted world, those numbers no longer tell the full story. Performance isn’t just about volume or velocity. It’s about outcomes. Did the developer deliver the expected functionality, with the right quality, on time? That’s how we compensate today, and that’s still what matters. But how we measure those things must evolve.  

With tools like GitHub Copilot, Claude Code, and Cursor generating entire functions, tests, and documentation quickly, output is becoming less about what a developer types and more about what they model, validate, and evolve. 

The challenge for CIOs, CTOs, and SVPs of Engineering isn’t just adopting new tools. It’s rethinking how to measure effectiveness in a world where productivity is amplified by AI and complexity often hides behind automation. 

Why Traditional Metrics Break Down 

The future of measurement hinges on three categories: productivity, quality, and functionality. These have always been essential to evaluating engineering work. But in the AI era, we must measure them differently. That shift doesn’t mean abandoning objectivity; it means updating our tools. 

The problem isn’t that legacy metrics are useless. It’s that they’re easily gamed, misinterpreted, or disconnected from business value. 

At best, these metrics create noise. At worst, they drive harmful incentives, like rewarding speed over safety, or activity over alignment. 

Today’s AI-assisted workflows lack mature solutions for tracking whether functionality requirements, like epics and user stories, have been fully met. But new approaches, like multi-domain linking (MDL), are emerging to close that gap. Measurement is getting smarter, and more connected, because it has to. 

The Rise of Directional Metrics 

Modern frameworks like DORA and SPACE were built to address these gaps. 

DORA (DevOps Research and Assessment) focuses on four key metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service. 

These measure delivery health, not just effort. They’re useful for understanding how efficiently and safely value reaches users. 
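Three of the four DORA metrics can be computed from deployment records alone, as this sketch shows. The data and field names are illustrative; time to restore service would require incident timestamps, omitted here.

```python
from datetime import datetime

# Toy deployment records; real data would come from CI/CD and
# incident-tracking systems.
deploys = [
    {"at": datetime(2025, 1, 6), "lead_time_hours": 20, "failed": False},
    {"at": datetime(2025, 1, 8), "lead_time_hours": 44, "failed": True},
    {"at": datetime(2025, 1, 10), "lead_time_hours": 12, "failed": False},
    {"at": datetime(2025, 1, 13), "lead_time_hours": 30, "failed": False},
]

def dora_snapshot(deploys):
    """Deployment frequency, median lead time, and change failure rate
    from deploy records."""
    span_days = (max(d["at"] for d in deploys)
                 - min(d["at"] for d in deploys)).days or 1
    lead_times = sorted(d["lead_time_hours"] for d in deploys)
    return {
        "deploys_per_week": round(len(deploys) * 7 / span_days, 2),
        "median_lead_time_hours": lead_times[len(lead_times) // 2],
        "change_failure_rate": sum(d["failed"] for d in deploys) / len(deploys),
    }

snapshot = dora_snapshot(deploys)
```

Even this toy version measures flow of value rather than volume of effort, which is the point of the framework.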

SPACE (developed by Microsoft Research) considers five dimensions: satisfaction and well-being, performance, activity, communication and collaboration, and efficiency and flow. 

SPACE offers a more holistic view, especially in cross-functional and AI-assisted teams. It acknowledges that psychological safety, cross-team communication, and real flow states often impact long-term output more than individual commits. 

AI Complicates the Picture 

AI tools don’t eliminate the need for metrics; they demand smarter ones. When an LLM can write 80% of the code for a feature, how do we credit the developer? By the number of keystrokes? Or by their judgment in prompting, curating, and validating what the tool produced? 

But here’s the deeper challenge: What if that feature doesn’t do what it was supposed to? 

In AI-assisted workflows, productivity isn’t just about output; it’s about fitness to purpose. Without strong traceability between code, tests, user stories, and epics, it’s easy for teams to ship fast but fall short of the business goal. 

Many organizations today struggle to answer a basic question: Did this delivery actually fulfill the intended functionality? 

This is where multi-domain linking (MDL) and AI-powered traceability show promise. By connecting user stories, requirements, test cases, design artifacts, and even user feedback within a unified graph, teams can use LLMs to assess whether the output truly matches the input. 
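At sketch scale, such a graph is just a set of typed links, and gaps become simple queries. The link names below are illustrative:

```python
# Sketch of multi-domain linking as a small traceability graph: edges
# connect stories to tests and code, so coverage gaps are queryable.
LINKS = [
    ("story:checkout", "test:checkout_happy_path"),
    ("story:checkout", "code:cart_service"),
    ("story:refunds", "code:refund_service"),
]

def untested_stories(links):
    """Stories with no linked test are shipped on faith, which is
    exactly the gap traceability exposes."""
    stories = {a for a, _ in links if a.startswith("story:")}
    tested = {a for a, b in links if b.startswith("test:")}
    return sorted(stories - tested)

gaps = untested_stories(LINKS)
# The refunds story has code but no test linked to it.
```

A production MDL graph would add requirements, design artifacts, and user feedback as node types, with an LLM judging whether linked outputs actually satisfy the inputs.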

And this capability unlocks more than better alignment; it opens the door to innovation. AI-assisted development enables organizations to build more complex, interconnected, and adaptive systems than ever before. As those capabilities expand, so too must our ability to measure their economic value. What applications can we now build that we couldn’t before? And what is that worth to the business? 

That’s not a theoretical exercise. It’s the next frontier in engineering measurement. 

Productivity as a System, Not a Score 

The best engineering organizations treat productivity like instrumentation. No single number can tell you what’s working, but the right mix of signals can guide better decisions. That system must account for both delivery efficiency and functional alignment. High velocity is meaningless if the outcome doesn’t meet the requirements it was designed to fulfill. 

Most importantly, it means aligning measurement to what matters: Did the product deliver value? Did it meet its intended function? Was the effort worth the outcome? Those are the questions that still define success, and the ones our measurement frameworks must help answer. 

How to Start Rethinking Measurement 

If your metrics haven’t evolved alongside your tooling, here’s how to get started: 

AI is reshaping how software gets built. That doesn’t mean productivity can’t be measured. It means it must be measured differently. The leaders who shift from tracking motion to monitoring momentum will build faster, healthier, and more resilient engineering teams. 

Robots & Pencils: Measuring What Matters in an AI-Driven World 

At Robots & Pencils, we believe productivity isn’t a score; it’s a system. A system that must measure not just speed, but alignment. Did the output meet the requirements? Did it fulfill the epic? Was the intended functionality delivered? 

We help clients extend traditional measurement approaches to fit an AI-first world. That means combining DORA and SPACE metrics with functional traceability, such as linking code to requirements, outcomes to epics, and user stories to business results. 
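The simplest form of that traceability can be bolted onto existing workflows. The sketch below assumes a commit-message convention (requirement IDs like `REQ-123`); the convention and the sample messages are hypothetical, but the mechanism, separating traced work from orphaned work, is the general one.

```python
import re

# Assumed convention: commits reference requirement IDs like "REQ-123".
REQ_PATTERN = re.compile(r"\bREQ-\d+\b")

commits = [
    "REQ-101: add password reset endpoint",
    "fix flaky test in CI",
    "REQ-102 REQ-103: enforce link expiry and verified email",
]

def trace(messages):
    """Split commit messages into requirement-linked and orphaned work."""
    linked, orphaned = {}, []
    for msg in messages:
        ids = REQ_PATTERN.findall(msg)
        if ids:
            for rid in ids:
                linked.setdefault(rid, []).append(msg)
        else:
            orphaned.append(msg)
    return linked, orphaned

linked, orphaned = trace(commits)
print(sorted(linked))   # requirements with traced work
print(orphaned)         # work with no functional link
```

Orphaned work isn't necessarily waste (the flaky-test fix above is legitimate), but a rising orphan rate is a prompt to ask what the team is actually optimizing for.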

Our secure, AWS-native platforms are already instrumented for this kind of visibility. And our teams are actively designing multi-domain models that give leaders better answers to the questions they care about most. 

As AI opens the door to applications we never thought were possible, our job is to help you measure what matters, including what’s newly possible. We don’t just help teams move faster. We help them build with confidence and prove it. 

Pilot, Protect, Produce: A CIO’s Guide to Adopting AI Code Tools 

How to responsibly explore tools like GitHub Copilot, Claude Code, and Cursor—without compromising privacy, security, or developer trust 

AI-assisted development isn’t a future state. It’s already here. Tools like GitHub Copilot, Claude Code, and Cursor are transforming how software gets built, accelerating boilerplate, surfacing better patterns, and enabling developers to focus on architecture and logic over syntax and scaffolding. 

The productivity upside is real. But so are the risks. 

For CIOs, CTOs, and senior engineering leaders, the challenge isn’t whether to adopt these tools—it’s how. Because without the right strategy, what starts as a quick productivity gain can turn into a long-term governance problem. 

Here’s how to think about piloting, protecting, and operationalizing AI code tools so you move fast, without breaking what matters. 

Why This Matters Now 

In a recent survey of more than 1,000 developers, 81% of engineers reported using AI assistance in some form, and 49% reported using AI-powered coding assistants daily. Adoption is happening organically, often before leadership even signs off. The longer organizations wait to establish usage policies, the more likely they are to lose visibility and control. 

On the other hand, overly restrictive mandates risk boxing teams into tools that may not deliver the best results and limit experimentation that could surface new ways of working. 

This isn’t just a tooling decision. It’s a cultural inflection point. 

Understand the Risk Landscape 

Before you scale any AI-assisted development program, it’s essential to map the risks: 

These aren’t reasons to avoid adoption. But they are reasons to move intentionally with the right boundaries in place. 

Protect First: Establish Clear Guardrails 

A successful AI coding tool rollout begins with protection, not just productivity. As developers begin experimenting with tools like Copilot, Claude, and Cursor, organizations must ensure that underlying architectures and usage policies are built for scale, compliance, and security. 

Consider: 

For teams ready to push further, Bedrock AgentCore offers a secure, modular foundation for building scalable agents with memory, identity, sandboxed execution, and full observability, all inside AWS. Combined with S3 Vector Storage, which brings native embedding storage and cost-effective context management, these tools unlock a secure pathway to more advanced agentic systems. 

Most importantly, create an internal AI use policy tailored to software development. It should define tool approval workflows, prompt hygiene best practices, acceptable use policies, and escalation procedures when unexpected behavior occurs. 
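Such a policy is most useful when it's machine-checkable rather than a PDF. The schema below is a hypothetical illustration (the field names, tool list, and contact address are invented for the example), showing how tool approval can become a simple lookup that CI or an IDE plugin could enforce.

```python
# Illustrative policy schema; field names and values are hypothetical.
ai_use_policy = {
    "approved_tools": {"github-copilot", "claude-code", "cursor"},
    "prompt_hygiene": {
        "no_secrets": True,        # never paste credentials or keys
        "no_customer_data": True,  # strip PII before prompting
    },
    "escalation_contact": "platform-security@example.com",
}

def tool_allowed(policy, tool: str) -> bool:
    """Check a tool name against the approved list (case-insensitive)."""
    return tool.lower() in policy["approved_tools"]

print(tool_allowed(ai_use_policy, "Cursor"))         # True
print(tool_allowed(ai_use_policy, "random-plugin"))  # False
```

Encoding the policy this way also gives the escalation procedure a natural home: a rejected tool can point developers to the approval workflow instead of a flat "no".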

These aren’t just technical recommendations; they’re prerequisites for building trust and control into your AI adoption journey. 

Pilot Intentionally 

Start with champion teams who can balance experimentation with critical evaluation. Identify low-risk use cases that reflect a variety of workflows: bug fixes, test generation, internal tooling, and documentation. 

Track results across three dimensions: 

Encourage developers to contribute usage insights and prompt examples. This creates the foundation for internal education and tooling norms. 

Don’t Just Test—Teach 

AI coding tools don’t replace development skills; they shift where those skills are applied. Prompt engineering, semantic intent, and architectural awareness become more valuable than line-by-line syntax. 

That means education can’t stop with the pilot. To operationalize safely: 

When used well, these tools amplify good developers. When used poorly, they obscure problems and create a false sense of productivity. Training is what makes the difference. 

Produce with Confidence 

Once you’ve piloted responsibly and educated your teams, you’re ready to operationalize with confidence. That means: 

Organizations that do this well won’t just accelerate development; they’ll build more resilient software teams, teams that understand both what to build and how to orchestrate the right tools to do it. The best engineering leaders won’t mandate one AI tool or ban them altogether. They’ll create systems that empower teams to explore safely, evaluate critically, and build smarter together. 

Robots & Pencils: Secure by Design, Built to Scale 

At Robots & Pencils, we help enterprise engineering teams pilot AI-assisted development with the right mix of speed, structure, and security. Our preferred LLM provider, Anthropic, was chosen precisely because we prioritize data privacy, source integrity, and ethical model design: values we know matter to our clients as much as productivity gains. 

We’ve been building secure, AWS-native solutions for over a decade, earning recognition as an AWS Partner with a Qualified Software distinction. That means we meet AWS’s highest standards for reliability, security, and operational excellence while helping clients adopt tools like Copilot, Claude Code, and Cursor safely and strategically. 

We don’t just plug in AI; we help you govern it, contain it, and make it work in your world. From guardrails to guidance, we bring the technical and organizational design to ensure your AI tooling journey delivers impact without compromise.