
Accelerating Innovation with AWS: Robots & Pencils Selected as an AWS Pattern Partner 

Today, Robots & Pencils joins AWS as a launch partner in the AWS Pattern Partners program, an invite-only initiative that works with a select cohort of consulting partners to define how enterprises adopt next-generation AI and emerging technologies on AWS. 

As a Pattern Partner, Robots & Pencils brings proven success with emerging technologies on AWS, including AI/ML, Generative/Agentic AI, Robotics, Space Technology, and Quantum. The program focuses on accelerating enterprise adoption through repeatable, scalable patterns that encode tested ways to solve specific business problems, with architecture, controls, and delivery practices that have already been validated with customers. 

For customers, Robots & Pencils’ selection into this program signals that AWS has reviewed and endorsed both the outcomes and the operating model behind the work delivered in these domains. Enterprises under pressure to modernize critical processes, adopt AI safely, and respond to new regulatory and security requirements gain access to patterns that have already delivered measurable results. 

Pattern Partners also sets a clear horizon view for emerging technology. In the near term, it concentrates on AI/ML and Generative & Agentic AI patterns, including subdomains such as Process to Agent (P2A), Agent to Agent (A2A), Responsible AI, and RegAI. Over the mid-term, the program extends these capabilities into connected environments that use Robotics, IoT and Edge, and Space Technology on AWS. For the long term, it explores Quantum and next-generation enterprise innovations, aligning new capabilities with existing AWS investments in data, AI, and security as they mature into reliable patterns. 

Our Pattern: Enterprise Document Intelligence Platform 

At the heart of Robots & Pencils’ participation in Pattern Partners is a flagship pattern that the company is co-developing and scaling with AWS. 

The Customer Problem 

Organizations in Energy, Manufacturing, and Health & Wellness face a common set of challenges. Data and workflows sit in disconnected systems, which slows AI adoption and creates duplicated effort. Teams find it difficult to govern AI models and agents at enterprise scale, especially when regulations and internal standards move quickly. Talent and process gaps make it hard to adopt new technology in a way that satisfies risk, compliance, and operational leaders. 

Our Joint Approach with AWS 

Together with AWS, Robots & Pencils has designed the Enterprise Document Intelligence Platform. The pattern combines three elements: an architecture built natively on AWS using Amazon Bedrock, Amazon SageMaker, and Amazon Bedrock AgentCore; an operating model with clear roles, runbooks, and guardrails for IT, data, security, and business teams; and accelerators such as pre-built integrations, automations, policies, templates, dashboards, and agents. The pattern is being refined through a time-boxed incubation with a set of lighthouse customers. As it matures, it will be packaged as a Pattern Package so that more joint customers can adopt it rapidly with consistent results. 

Early Results 

Early adopters are already reporting tangible outcomes from the Robots & Pencils Enterprise Document Intelligence Platform. With 2 million interactions across 100,000+ users, customers reported a 90% satisfaction score and a 40% improvement in confidence in the pattern’s responses. 

As these results are validated across additional lighthouse customers, the Pattern Package becomes available to AWS field teams globally. This enables customers in new regions and sectors to benefit from the same proven approach without restarting design from the beginning. 

How the Pattern Partners Program Works with Customers 

When a customer engages Robots & Pencils through the Pattern Partners program, the engagement starts from a proven blueprint, not from scratch. The Pattern Package already encodes successful implementations, including architectures, guardrails, and playbooks. Customers receive coordinated support from AWS specialists, the AWS Consulting COE Pattern Partner team, and Robots & Pencils experts across consulting, engineering, and product. 

The program design supports fast yet responsible experimentation. Customers can move from idea to live pilot while maintaining enterprise-grade security, compliance, and governance. The pattern also includes a clear path from pilot to scale, so organizations can extend from initial deployments to cross-region and multi-business-unit rollouts with ongoing optimization. 

Being part of the AWS Pattern Partners program allows Robots & Pencils to bring emerging AWS capabilities such as Generative AI and Agentic applications to customers earlier. Guardrails and controls stay clear and well defined. The company can turn its strongest customer successes into repeatable assets that benefit a wider set of organizations. Collaboration with AWS field teams, solution architects and service teams keeps the pattern aligned with the latest platform innovation. Robots & Pencils also contributes back to the broader AWS partner ecosystem by sharing learnings and raising the standard for how emerging technology is adopted. For customers, this approach reduces risk, increases predictability, and accelerates business impact from AWS investments. 

Partner Perspective 

“Joining AWS Pattern Partners is a strategic milestone for Robots & Pencils,” said Jeff Kirk, Executive Vice President of Applied AI, Robots & Pencils. “With our Enterprise Document Intelligence Platform, we turn our strongest customer wins into a clear, repeatable path to reduced onboarding time for customers in need of intelligent search and increased confidence in the accuracy of results, so customers can move from pilots to production with greater speed, control, and confidence.” 

AWS Perspective 

“AWS created Pattern Partners to work with a select cohort of builders who can set the standard for how enterprises adopt emerging technology on AWS. Robots & Pencils brings deep expertise in KnowledgeOps, including RAG and compound systems, and a proven pattern in the Enterprise Document Intelligence Platform that is already delivering measurable outcomes for customers,” said Brian Bohan, Managing Director of Consulting COE, AWS. “We look forward to scaling this work together and bringing these benefits to more joint customers across industries.”  

Next Steps 

Customers interested in these patterns can speak with Robots & Pencils through Robotsandpencils.com/contact to review current challenges and identify which patterns are most relevant. 

Those who want to explore the Enterprise Document Intelligence Platform in depth or learn how the AWS Pattern Partners program could support their own roadmap can request a focused discovery session. In that conversation, AWS and Robots & Pencils work with stakeholders to map business challenges to the pattern, estimate potential impact, and define a practical path to adoption. 

Together, AWS and Robots & Pencils look forward to turning critical business challenges into repeatable, scalable patterns for growth. 

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing. 



FAQs

What is the AWS Pattern Partners program?

It is an invite-only AWS initiative that works with a select group of consulting partners to define how enterprises adopt next-generation AI and emerging technologies through validated, repeatable patterns.

Why was Robots & Pencils selected as a Pattern Partner?

AWS recognized the company’s proven outcomes across AI and emerging technologies, as well as its track record delivering measurable results with scalable architectures and operating models.

What is the Enterprise Document Intelligence Platform?

It is a jointly designed pattern that uses AWS native services and accelerators to help organizations unify data, streamline governance, and deploy Generative and Agentic AI across complex environments.

Which AWS technologies power the pattern?

Key services include Amazon Bedrock, Amazon SageMaker, and Amazon Bedrock AgentCore, along with AWS controls, security practices, and operational frameworks.

Who benefits most from this pattern?

Enterprises in sectors like Energy, Manufacturing, and Health and Wellness that face challenges with disconnected data, evolving regulations, and the need for responsible AI adoption at scale.

What results have early adopters seen?

Customers reported 2 million interactions across more than 100,000 users, a 90 percent satisfaction score, and a 40 percent improvement in confidence in response accuracy.

How does the program support faster innovation?

Organizations begin with a proven blueprint rather than a blank page. This accelerates pilots while maintaining enterprise-grade governance and provides a clear pathway to large-scale deployment.

How do customers engage?

Teams can connect through Robotsandpencils.com/contact to discuss current challenges or request a focused discovery session to understand fit, impact potential, and next steps.

What does this mean for long term innovation?

The program continually extends into new domains, guiding enterprises through emerging capabilities such as Robotics, IoT, Space Technology, and Quantum as they mature into reliable patterns.

Robots & Pencils Plans Seattle-area Expansion with Studio for Generative & Agentic AI 

The Bellevue, Washington investment opens pathways for forward-deployed engineers and builders seeking career-defining work in applied AI. 

Robots & Pencils, an applied AI engineering partner known for high-velocity delivery and measurable business outcomes, today announced plans to open a Seattle-area Studio for Generative & Agentic AI in downtown Bellevue in early January 2026. The expansion fuels the next phase of growth for the company’s AI-native Studio and strengthens North American delivery as demand for AI-enabled product engineering accelerates across the United States. Robots & Pencils is an Amazon Web Services (AWS) Partner, and the Bellevue location, with its proximity to Amazon headquarters, is a natural site to accelerate client AI solutions on Amazon Bedrock, Amazon SageMaker, Amazon Bedrock AgentCore, and more. 

Candidates seeking high-impact engineering roles can learn more at robotsandpencils.com/careers. 

The new Studio reflects a growing U.S. footprint supported by existing global operations in Cleveland, Calgary, Toronto, Bogotá, and Lviv. The Studio organizes cross-functional product, engineering, data, and design talent into vertical industry-focused pods that support sectors such as Education, Energy, Financial Services, Healthcare, Manufacturing, Transportation, and Retail/CPG. The presence in the Seattle area adds meaningful engineering capacity and enhances support for clients pursuing ambitious AI programs and large-scale modernization work. 

“The investment in Bellevue and access to deep talent in the Pacific Northwest gives our teams and our clients a powerful new chapter,” said Len Pagon Jr., CEO of Robots & Pencils. “The engineering expertise in this region aligns perfectly with our Studio strategy. We see tremendous opportunities to grow our talent base, strengthen delivery, and help organizations reach AI outcomes that advance their businesses. Our teams are energized by this expansion and ready for the momentum ahead.” 

Jeff Kirk, Executive Vice President of Applied AI at Robots & Pencils, will lead the Bellevue Studio. “The Studio in Bellevue is a pivotal investment in our client and talent strategy,” said Kirk. “Engineers and builders in this region bring the experience and ambition that shape industry-defining solutions. Speed matters, and our Studio structure is designed for launching AI products to market every 30 to 45 days. The Seattle area strengthens the engineering capacity required to deliver that velocity at scale. We look forward to building a team that thrives on complex challenges and produces work that matters.” 

Robots & Pencils continues to invest in environments where elite talent can perform at the highest level. The company is known for its talent density, with teams averaging fifteen years of experience and contributing patents, published research, and category-shaping products across industries. The Studio creates space for engineers, applied AI specialists, product leaders, and user experience innovators to influence major client engagements and shape a new hub from the ground up. It anchors work in AI systems, agents and agentic workflows, digital modernization, intelligent automation, and data-driven product innovation. 

Interested applicants can explore open roles at robotsandpencils.com/careers. The Studio is ready for builders who want to shape the next era of AI solutions with momentum and purpose. 


The Agentic Trap: Why 40% of AI Automation Projects Lose Momentum

Gartner’s latest forecast is striking: more than 40% of agentic AI projects will be canceled by 2027. At first glance, this looks like a technology growing faster than it can mature. But a closer look across the industry shows a different pattern. Many initiatives stall for the same reason micromanaged teams do. The work is described at the level of steps rather than outcomes. When expectations aren’t clear, people wait for instructions. When expectations aren’t clear for agents, they either improvise poorly or fail to act. 

This is the same shift I described in my previous article, “Software’s Biggest Breakthrough Was Making It Cheap Enough to Waste.” When software becomes inexpensive enough to test freely, the organizations that pull ahead are the ones that work toward clear outcomes and validate their decisions quickly. 

Agentic AI is the next stage of that evolution. Autonomy becomes meaningful only when the organization already understands the outcome it’s trying to achieve, how good decisions support that outcome, and when judgment should shift back to a human. 

The Shift to Outcome-Oriented Programming 

Agentic AI brings a model that feels intuitive but represents a quiet transformation. Traditional automation has always been procedural in that teams document the steps, configure the workflow, and optimize the sequence. Like a highly scripted form of people management, this model is effective when the work is predictable, but limited when decisions are open-ended or require problem solving. 

Agentic systems operate more like empowered teams. They begin with a desired outcome and use planning, reasoning, and available tools to move toward it. As system designers, our role shifts from specifying every step to defining the outcome, the boundaries, and the signals that guide good judgment. 

Instead of detailing each action, teams clarify: 

- The outcome the system is working toward 
- The boundaries it must operate within 
- The signals that guide good judgment 

This shift places new demands on organizational clarity. To support outcome-oriented systems, teams need a shared understanding of how decisions are made. They need to determine what good judgment looks like, what tradeoffs are acceptable, and how to recognize situations that require human involvement. 

Industry research points to the same conclusion. Harvard Business Review notes that teams struggle when they choose agentic use cases without first defining how those decisions should be evaluated. XMPRO shows that many failures stem from treating agentic systems as extensions of existing automation rather than as tools that require a different architectural foundation. RAND’s analysis adds that projects built on assumptions instead of validated decision patterns rarely make it into stable production. 

Together, these findings underscore a simple theme. Agents thrive when the organization already understands how good decisions are made. 

Decision Intelligence Shapes Agentic Performance  

Agentic systems perform well when the outcome is clear, the signals are reliable, and proper judgment is well understood. When goals or success criteria are fuzzy, or tasks overly complex, performance mirrors that ambiguity. 

In a Carnegie Mellon evaluation, advanced models completed only one-third of multi-step tasks without intervention. Meanwhile, First Page Sage’s 2025 survey showed much higher completion rates in more structured domains, with performance dropping as tasks became more ambiguous or context-heavy. 

This reflects another truth about autonomy. Some problems are simply too broad or too abstract for an agent to manage directly. In such cases, the outcome must be broken into sub-outcomes, and those into smaller decisions, until the individual pieces fall within the system’s ability to reason effectively. 

In many ways, this mirrors effective leadership. Good leaders don’t hand individual team members a giant, unstructured mandate. They cascade outcomes into stratified responsibilities that people can act on. Agentic systems operate the same way. They thrive when the goal has been decomposed into solvable parts with well-defined judgment and guardrails. 

This is why organizational clarity becomes a core predictor of success. 

How Teams Fall Into the Agentic Trap 

Many organizations feel the pull of agentic AI because it promises systems that plan, act, and adapt without waiting for human intervention. But the projects that stall often fall into a predictable trap. 

Teams begin by automating process instead of automating the judgment behind the decisions the agent is expected to make. Teams define what a system should do instead of defining how to evaluate the output or what “good” should look like. Vague quality metrics, progress signals, and escalation criteria lead to technically valid, strategically mediocre decisions that erode confidence in the system. 

The research behind this pattern is remarkably consistent. HBR notes that teams often choose agentic use cases before they understand the criteria needed to evaluate them. XMPRO describes the architectural breakdowns that occur when agentic systems are treated like upgrades to procedural automation. RAND’s analysis shows that assumption-driven decision-making is one of the strongest predictors of AI project failure, while projects built on clear evaluation criteria and validated decision patterns are far more likely to reach stable production. 

This is the agentic trap: trying to automate judgment without first understanding how good judgment is made. Agentic AI is more than the automation of steps; it is the automation of evaluation, prioritization, and tradeoff decisions. Without clear outcomes, criteria, signals, and boundaries to inform decision-making, the system has nothing stable to scale, and its behavior reflects that uncertainty. 

A Practical Way Forward: The Automation Readiness Assessment 

Decisions that succeed under autonomy share five characteristics; when one or more are missing, agents need more support. 

Have all five? Build with confidence. 
Only three or four? Pilot with human review in order to build up a live data set. 
Only one or two? Go strengthen your decision clarity before automating. 

This approach keeps teams grounded. It turns autonomy from an aspirational leap into a disciplined extension of what already works. 
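The scoring tiers above can be sketched as a minimal rubric. The five criterion names below are illustrative placeholders (the exact characteristics are whatever your organization defines), and `readiness` is a hypothetical helper, not an existing tool:

```python
# Automation Readiness Assessment: a minimal scoring sketch.
# The criterion names are illustrative placeholders, not an official checklist.

CRITERIA = [
    "clear_outcome",        # the desired result is explicit and measurable
    "reliable_signals",     # quality and progress signals exist and are trusted
    "known_tradeoffs",      # acceptable tradeoffs are documented
    "escalation_path",      # it is clear when a human must take over
    "validated_pattern",    # the decision has been observed working in practice
]

def readiness(decision: dict) -> str:
    """Map how many criteria a candidate decision satisfies to a recommendation."""
    score = sum(1 for c in CRITERIA if decision.get(c, False))
    if score == 5:
        return "build"                      # automate with confidence
    if score >= 3:
        return "pilot_with_human_review"    # build up a live data set first
    return "strengthen_decision_clarity"    # not ready for autonomy yet

# Hypothetical candidate decision with three of five criteria satisfied.
invoice_triage = {
    "clear_outcome": True,
    "reliable_signals": True,
    "known_tradeoffs": True,
    "escalation_path": False,
    "validated_pattern": False,
}
print(readiness(invoice_triage))  # → pilot_with_human_review
```

The point of encoding the rubric, even roughly, is that it forces the team to write down what each criterion means before any agent is built.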

The Path to Agentic Maturity 

Agentic AI expands an organization’s capacity for coordinated action, but only when the decisions behind the work are already well understood. The projects that avoid the 40% failure curve do so because they encode judgment into agents, not just process. They clarify the outcome, validate the decision pattern, define the boundaries, and then let the system scale what works. 

Clarity of judgment produces resilience, resilience enables autonomy, and autonomy creates leverage. The path to agentic maturity begins with well-defined decisions. Everything else grows from there. 



FAQs 

What is the “agentic trap”? 
The agentic trap describes what happens when organizations rush to deploy agents that plan and act, before they have defined the outcomes, decision criteria, and guardrails those agents require. The technology looks powerful, yet projects stall because the underlying decisions were never made explicit. 

How is agentic AI different from traditional automation? 
Traditional automation follows a procedural model. Teams document a sequence of steps and the system executes those steps in predictable conditions. Agentic AI starts from an outcome, uses planning and reasoning to choose actions, and navigates toward that outcome using tools, data, and judgment signals. The organization moves from “here are the steps” to “here is the result, the boundaries, and the signals that matter.” 

Why do so many agentic AI projects lose momentum? 
Momentum fades when teams try to automate decisions that have not been documented, validated, or measured. Costs rise, risk concerns surface, and it becomes harder to show progress against business outcomes. Research from Gartner, Harvard Business Review, XMPRO, and RAND all point to the same pattern: projects thrive when the decision environment is explicit and validated, and they struggle when it is based on assumptions. 

What makes a decision “ready” for autonomy? 
Decisions are ready for agentic automation when they meet all five readiness criteria. 

The more of these elements are present, the more confidently teams can extend autonomy. 

How can we use the Automation Readiness Assessment in practice? 
Use the five criteria as a simple scoring lens for each candidate decision: build with confidence when all five are present, pilot with human review when three or four are present, and strengthen decision clarity before automating when only one or two are present. 

This keeps investment aligned with decision maturity and creates a clear path from experimentation to durable production. 

Where should leaders focus first to reach agentic maturity? 
Leaders gain the most leverage by focusing on judgment clarity within critical workflows. That means aligning on desired outcomes, success metrics, escalation thresholds, and the signals that inform good decisions. With that foundation, agentic AI becomes a force multiplier for well-understood work rather than a risky experiment in ambiguous territory. 

Software’s Biggest Breakthrough Was Making It Cheap Enough to Waste 

AI and automation are making development quick and affordable. Now, the future belongs to teams that learn as fast as they build. 

Building software takes patience and persistence. Projects run long, budgets stretch thin, and crossing the finish line often feels like survival. If we launch something that works, we call it a win. 

That rhythm has defined the industry for decades. But now, the tempo is changing. Kevin Kelly, the founding executive editor of Wired Magazine, once said, “Great technological innovations happen when something that used to be expensive becomes cheap enough to waste.” 

AI-assisted coding and automation are eliminating the bottlenecks of software development.  What once took months or years can now be delivered in days or weeks. Building is no longer the hard part. It’s faster, cheaper, and more accessible than ever.  

Now, as more organizations can build at scale, custom software becomes easier to replicate, and its ROI as a competitive advantage grows less predictable. As product differentiation becomes more difficult to maintain, a new source of value emerges: applied learning, or how effectively teams can build, test, adapt, and prove what works. 

This new ROI is not predicted; it is earned through the ability to build, test, adapt, and prove what works. 

The organizations that succeed will learn faster from what they build and build faster from what they learn. 

From Features to Outcomes, Speculation to Evidence 

Agile transformed how teams build software. It replaced long project plans with rapid sprints, continuous delivery, and an obsession with velocity. For years, we measured progress by how many features we shipped and how fast we shipped them. 

But shipping features doesn’t equal creating value. A feature only matters if it changes behavior or improves an outcome, and many don’t. As building gets easier, the hard part shifts to understanding which ideas truly create impact and why. 

AI-assisted and automated development now make that learning practical. Teams can generate several variations of an idea, test them quickly, and keep only what works best. The work of software development starts to look more like controlled experimentation. 

This changes how we measure success. The old ROI models relied on speculative forecasts and business cases built on assumptions about value, timelines, and adoption. We planned, built, and launched, but when the product finally reached users, both the market and the problem had already evolved. 

Now, ROI becomes something we earn through proof. We begin with a measurable hypothesis and build just enough to test it:  

If onboarding time falls by 30 percent, retention will rise by 10 percent,  
creating two million dollars in annual value.  

Each iteration provides evidence. Every proof point increases confidence and directs the next investment. In this way, value creation and validation merge, and the more effectively we learn, the faster our return compounds. 
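The onboarding hypothesis above can be written down as a small, testable object. This is a sketch only; the `Hypothesis` class and its field names are assumptions about how a team might encode a measurable hypothesis, not an existing API:

```python
# A measurable hypothesis as a testable object. The numbers mirror the
# example in the text: if onboarding time falls by 30 percent, retention
# will rise by 10 percent, creating two million dollars in annual value.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    onboarding_drop_target: float   # e.g. 0.30 = onboarding time falls 30%
    retention_lift_target: float    # e.g. 0.10 = retention rises 10%
    projected_annual_value: float   # dollars, if both targets are met

    def validated_by(self, observed_onboarding_drop: float,
                     observed_retention_lift: float) -> bool:
        """The ROI claim is earned only when both observed leading
        indicators meet or beat their targets."""
        return (observed_onboarding_drop >= self.onboarding_drop_target
                and observed_retention_lift >= self.retention_lift_target)

h = Hypothesis(0.30, 0.10, 2_000_000)
print(h.validated_by(0.34, 0.12))  # → True: evidence supports the $2M claim
print(h.validated_by(0.34, 0.06))  # → False: retention lift fell short
```

Framing the hypothesis this way makes the shift concrete: the projected value is never booked until the observed metrics clear the stated thresholds.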

ROI That Compounds 

ROI used to appear only after launch, when the project was declared “done.” It was calculated as an academic validation of past assumptions and decisions. The investment itself remained a sunk cost, viewed as money spent months ago. 

In an outcome-driven model, value begins earlier and grows with every iteration. Each experiment creates two returns: the immediate impact of what works and the insight gained from what doesn’t. Both make the next round more effective. 

Say you launched a small pilot with ten users. Within weeks, they’re saving time, finding shortcuts, and surfacing friction you couldn’t predict on paper. That feedback shapes the next version and builds the confidence to expand to a hundred users. Now, you can measure quantitative impact, like faster response times, fewer manual steps, and higher satisfaction. Payoff scales rapidly as the value curve steepens with each round of improvement. 

Moreover, you are measuring return continuously, using each cycle’s results as evidence to justify the next. In this way, return becomes the trigger for further investment, and the faster the team learns, the faster the return accelerates. 

Each step also leaves behind a growing library of reusable assets: validated designs, cleaner data, modular components, and refined decision logic. Together, these assets make the organization smarter and more efficient with each cycle. 

When learning and value grow together, ROI becomes a flywheel. Each iteration delivers a product that’s smarter, a team that’s sharper, and an organization more confident in where to invest next. To harness that momentum, we need reliable ways to measure progress and prove that value is growing with every step. 

Measuring Progress in an Outcome-Driven Model 

When ROI shifts from prediction to evidence, the way we measure progress has to change. Traditional business cases rely on financial projections meant to prove that an investment would pay off. In an outcome-driven model, those forecasts give way to leading indicators collected in real-time.  

Instead of measuring progress by deliverables and deadlines, we use signals that show we’re moving in the right direction. Each iteration increases confidence that we are solving the right problem, delivering the right outcome, and generating measurable value. 

That evidence evolves naturally with the product’s maturity. Early on, we look for behavioral signals, or proof that users see the problem and are willing to change. As traction builds, we measure whether those new behaviors produce the desired outcomes. Once adoption scales, we track how effectively the system converts those outcomes into sustained business value. 

You can think of it as a chain of evidence that progresses from leading to lagging indicators: 

Behavioral Change → Outcome Effect → Monetary Impact 

The challenge, then, is to create a methodology that exposes these signals quickly and enables teams to move through this progression with confidence, learning as they go. This process conceptually follows agile, but changes as the product evolves through four stages of maturity: 

Explore & Prototype → Pilot & Validate → Scale & Optimize → Operate & Monitor 

At each stage, teams iteratively build, test, and learn, advancing only when success is proven. What gets built, how it’s measured, and what “success” means evolve as the product matures. Early stages emphasize exploration and learning; later stages focus on optimizing outcomes and capturing value. Each transition strengthens both evidence that the product works and confidence in where to invest next. 
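The stage-gate logic described above can be sketched as a small model. The stage names mirror the text, while the evidence labels and the `advance` helper are illustrative assumptions rather than an established framework:

```python
# The four maturity stages as a simple gate model: a product advances only
# when the evidence required at its current stage is in hand.

STAGES = ["explore_prototype", "pilot_validate", "scale_optimize", "operate_monitor"]

# What must be proven to leave each stage (hypothetical evidence labels).
GATES = {
    "explore_prototype": "behavioral_evidence",  # users show willingness to change
    "pilot_validate":    "outcome_evidence",     # measurable progress toward the outcome
    "scale_optimize":    "sustained_impact",     # reliable impact across widespread usage
}

def advance(stage: str, evidence: set) -> str:
    """Return the next stage if the current gate's evidence exists, else stay put."""
    gate = GATES.get(stage)
    if gate is None or gate not in evidence:
        return stage                          # final stage, or gate not yet met
    return STAGES[STAGES.index(stage) + 1]

print(advance("explore_prototype", {"behavioral_evidence"}))  # → pilot_validate
print(advance("pilot_validate", set()))                       # → pilot_validate
```

The useful property of the gate model is that it makes "advance only when success is proven" mechanical: without the named evidence, the product simply stays where it is.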

1. Explore & Prototype:  

In the earliest stage, the goal is to prove potential. Teams explore the problem space, test assumptions, and build quick prototypes to expose what’s worth solving. The success measures are behavioral: evidence of user willingness and intent. Do users engage with early concepts, sign up for pilots, or express frustration with the current process? These signals de-risk demand and validate that the problem matters. 

The product moves to the next stage only with a clear, quantified problem statement supported by credible behavioral evidence. When users demonstrate they’re ready for change, the concept is ready for validation. 

2. Pilot & Validate:  

Here’s where a prototype turns into a pilot to test whether the proposed solution actually works. Real users perform real tasks in limited settings. The indicators are outcome-based. Can people complete tasks faster, make fewer errors, or reach better results? Each of these metrics ties directly to the intended outcome that the product aims to achieve. 

To advance from this stage, the pilot must show measurable progress towards the outcome. When that evidence appears, it’s time to expand. 

3. Scale & Optimize:  

As adoption grows, the focus shifts from proving the concept to demonstrating outcomes and refining performance. Every new user interaction generates evidence that helps teams understand how the product creates impact and where it can improve. 

Learning opportunities emerge from volume. Broader usage reveals edge cases, hidden friction points, and variations that allow teams to refine the experience, calibrate models, automate repetitive tasks, and strengthen outcome efficacy. 

At this stage, value indicators connect usage to business KPIs like faster response times, higher throughput, improved satisfaction, and lower support costs. This is where value capture compounds. As more users adopt the product, the value they generate accumulates, proving that the system delivers significant business impact. 

The product reaches the next level of maturity when it shows sustained, reliable impact on outcome measures across widespread usage. 

4. Operate & Monitor:  

In the final stage, the emphasis shifts from optimization to observation. The system is stable, but the environment and user needs continue to evolve, eroding effectiveness over time. The goal is twofold: ensure that value continues to be realized and detect the earliest signals of change. 

The indicators now focus on sustained ROI and performance integrity. Teams track metrics that show ongoing return (cost savings, revenue contribution, efficiency gains) while monitoring usage patterns, engagement levels, and model accuracy. 

When anomalies appear (drift in outcomes, declining engagement, or new behaviors), they become the warning signs of changing user needs. Each anomaly hints at a new opportunity and loops the team back into exploration. This begins the next cycle of innovation and validation. 
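As a minimal sketch of what this kind of monitoring might look like, the check below compares a recent window of an outcome metric against its historical baseline and flags drift when the deviation is large. The function name, sample data, and z-score threshold are illustrative assumptions, not part of any system described above.

```python
from statistics import mean, stdev

def detect_drift(baseline, recent, z_threshold=2.0):
    """Flag drift when the recent mean deviates from the baseline mean
    by more than z_threshold baseline standard deviations.
    (Illustrative threshold; a real pipeline would tune this.)"""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_threshold

# Weekly task-completion rates: a stable baseline, then a decline.
baseline = [0.91, 0.93, 0.90, 0.92, 0.91, 0.92, 0.90, 0.93]
recent   = [0.84, 0.82, 0.85, 0.83]

if detect_drift(baseline, recent):
    print("Anomaly detected: route the team back into exploration")
```

A check like this would run per metric (engagement, accuracy, throughput), with each flagged anomaly feeding the next cycle of exploration described above.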

From Lifecycle to Flywheel: How ROI Becomes Continuous 

Across these stages, ROI becomes a continuous cycle of evidence that matures alongside the product itself. Each phase builds on the one before it.  

Together, these stages form a closed feedback loop—or flywheel—where evidence guides investment. Every dollar spent produces both impact and insight, and those insights direct the next wave of value creation. The ROI conversation shifts from “Do you believe it will pay off?” to “What proof have we gathered, and what will we test next?” 

From ROI to Investment Upon Return 

AI and automation have made building easier than ever before. The effort that once defined software development is no longer the bottleneck. What matters now is how quickly we can learn, adapt, and prove that what we build truly works. 

In this new environment, ROI becomes a feedback mechanism. Returns are created early, validated often, and reinvested continuously. Each cycle of discovery, testing, and improvement compounds both value and understanding, and creates a lasting continuous advantage. 

This requires a mindset shift as much as a process shift: from funding projects based on speculative confidence in a solution to funding them based on their ability to generate proof. When return on investment becomes investment upon return, the economics of software change completely. Value and insight grow together. Risk declines with every iteration. 

When building becomes easy, learning fast creates the competitive advantage. 

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing. 




FAQs  

What does “software cheap enough to waste” mean? 
It describes a new phase in software development where AI and automation have made building fast, low-cost, and low-risk, allowing teams to experiment more freely and learn faster. 

Why does cheaper software matter for innovation? 
When building is inexpensive, experimentation becomes affordable. Teams can test more ideas, learn from data, and refine products that actually work for people. 

How does this change ROI in software development? 
Traditional ROI measured delivery and cost efficiency. Evidential ROI measures learning, outcomes, and validated impact: value that grows with each iteration. 

What are Return on Learning and Return on Ecosystem? 
Return on Learning measures how quickly teams adapt and improve through cycles of experimentation. Return on Ecosystem measures how insights spread and create shared success across teams. 

What’s the main takeaway for leaders? 
AI and automation have changed the rules. The winners will be those who learn the fastest, not those who build the most. 

Robots & Pencils Brings Its Applied AI Engineering Expertise to AWS re:Invent 2025 

As AI reshapes every industry, Robots & Pencils leads with applied intelligence that drives measurable business advantages. 

Robots & Pencils, an applied AI engineering partner, will attend AWS re:Invent 2025, taking place December 1–5 in Las Vegas, joining global builders and business leaders shaping the future of cloud, data, and AI. 

Schedule time to connect with the Robots & Pencils team at AWS re:Invent. 

Robots & Pencils enables ambitious teams to move faster, build smarter, and deliver measurable results. With proven systems and elite engineering talent, the company modernizes, activates AI, and scales intelligent products across leading cloud platforms. 

“Leaders of organizations are seeking methods to speed up time-to-market and modernize work,” said Jeff Kirk, Executive Vice President of Applied AI at Robots & Pencils. “AI is a strategic advantage that increases the velocity of how organizations deliver on customer needs. That’s where we live, turning data and design into intelligence that moves the business forward.” 

Where traditional systems integrators scale with headcount, Robots & Pencils scales with small, nimble teams and compounding systems that learn, adapt, and accelerate impact. Through a continuous cycle of piloting, scaling, calibration, and operationalization, the company helps clients move from idea to implementation with speed and confidence. By combining automation with human-in-the-loop intelligence, Robots & Pencils compresses months of research, design, and development into weeks, driving faster outcomes and sharper market alignment. 

Across industries such as Financial Services, Education, Healthcare, Energy, Transportation, Industrial Manufacturing, and CPG/Retail, Robots & Pencils helps organizations modernize systems, activate intelligent automation, and deliver products that evolve with the business. 

The team will be in Las Vegas throughout the week. Schedule a meeting with Robots & Pencils at AWS re:Invent. 


Robots & Pencils Launches “Rewired: The New AI Architecture of Higher Education” 

As the world’s top education innovators gather at ASU’s Agentic AI Summit and EDUCAUSE, Robots & Pencils unveils a bold blueprint for the intelligent university. 

Robots & Pencils, an applied AI engineering partner that helps universities and enterprises modernize applications and accelerate productivity, today announced the launch of Rewired: The New AI Architecture of Higher Education. This three-part thought leadership series challenges universities to reinvent how they define, deliver, and prove learning in the age of AI. 

As AI reshapes every dimension of learning, from admissions to advising and research to retention, Robots & Pencils offers a vision for what intelligent universities can become. 

Start reading Rewired: The New AI Architecture of Higher Education.  

Arriving as higher education leaders converge for the Agentic AI and the Student Experience Summit at Arizona State University and the EDUCAUSE Annual Conference, Rewired explores how institutions can move from digital transformation to institutional intelligence, building systems that learn, adapt, and evolve alongside their students. 

“The next era of higher education will be defined by who learns fastest,” said Kristina Gralak, Client Strategy Analyst at Robots & Pencils and author of the series. “Agentic AI is transforming what it means to be student-centered. The universities that win will rewire their infrastructure for intelligence, creating systems that personalize experiences, validate skills, and connect learning to lifelong opportunity.” 

The three essays within Rewired trace higher education’s most urgent frontiers. 

“Kristina’s series captures the intersection of vision and engineering,” said Jeff Kirk, Executive Vice President of Applied AI at Robots & Pencils. “Every institution seeks to enhance the student experience, yet few realize that progress begins with the invisible systems: the data, cloud, and AI engines that make intelligence possible. Rewired shows what it takes to connect strategy with reality.” 

From intelligent tutoring systems to AI-powered credential networks, Rewired outlines how forward-thinking universities can turn experimentation into institutional evolution. It is a call to action for higher education leaders to design for the lifelong learners of tomorrow and to embrace an AI-driven future where universities think, adapt, and evolve as intelligently as the students they serve.  


The Invisible Infrastructure That Determines Higher Education Success 

Part 3 of our series Rewired: The New AI Architecture of Higher Education

Part 1: The New AI Architecture of Higher Education | Part 2: How Higher Education Proves Value in the Skills Economy

You can have the perfect enrollment strategy. You can deliver credentials that employers both trust and understand. But none of it matters if your systems frustrate students at every turn. 

The State of Higher Education 2025 highlights how AI is already transforming administrative operations. Institutions are cutting admissions decision times from weeks to days. That efficiency gain matters, but it’s pointing at something bigger. The most transformative applications of AI in higher education will happen in the invisible systems that touch students every day and determine whether institutions can actually deliver on their promises of personalized pathways, skills verification, and career outcomes. 

The Invisible Systems that Determine Everything 

Think about what student-facing infrastructure should look like: registration that anticipates scheduling conflicts before they derail a semester, financial aid that explains packages in plain language and flags missing steps in real time, advising that surfaces degree progress at midnight without requiring an appointment, and career services that connect learning to opportunity throughout the educational journey rather than just senior year. 

Now consider what most students actually experience. Most universities operate on infrastructure built before students expected real-time information, before mobile-first design, and before APIs enabled systems to communicate seamlessly. Advising platforms can’t access degree audit tools. Financial aid offices require documentation already submitted during admissions because systems don’t share data. Registration workflows assume students know course prerequisites that aren’t clearly mapped anywhere accessible. 

This friction is the difference between serving traditional students adequately and serving diverse learners well. A 19-year-old living on campus might tolerate process-heavy systems because they have time to navigate them. A 35-year-old parent working full-time while taking evening classes cannot. 

When Systems Don’t Talk  

Here’s what disconnected systems look like in practice: A student registers for next semester’s courses. The registration system confirms enrollment, but the degree audit tool doesn’t update for 48 hours. The student panics, thinking they’ve registered wrong, and emails their advisor, who also can’t see the registration because their advising platform pulls data overnight. By the time systems sync, the student has already spent hours searching for answers that should have been instantly available. 

Or consider the transfer student navigating data silos. Transcript evaluation sits in one system. The academic advisor works in another. The degree audit reflects only current-institution courses. Financial aid can’t see transfer credits until manually entered elsewhere. Each office operates with partial information, and the student becomes the integration layer, having to shuttle information between departments, resubmit documentation, and try to piece together what no system can provide. 

These challenges define daily operations for institutions managing disconnected systems, and they’re a key reason students choose to leave. Academic quality and affordability still matter, but experience now defines whether education feels achievable or exhausting.  

Building Systems that Create Advantage 

Better experiences lead to stronger retention, which enables sustained enrollment, which funds continued improvement, which attracts students who see a responsive institution. This cycle creates compounding advantages. 

As the State of Higher Education 2025 report notes, students want “an integrated and seamless experience on campus like they have with Amazon 1-Click, Netflix preferences, and Instagram likes.” The goal is not consumerization, but rather alignment with the baseline expectations of how digital systems should function in 2025. 

The institutions that invest in operational intelligence now will differentiate themselves in ways competitors can’t quickly replicate. Competitors can replicate program offerings, but integrated systems that learn from student behavior and adapt over time create advantages that take years to build. 

From Disconnected Systems to Institutional Data Intelligence  

The challenge institutions face goes beyond isolated student-facing systems. It’s a fundamental question about how data flows across the entire institution and whether that data can inform better decision-making at every level. 

The EDUCAUSE 2025 Horizon Report: Data and Analytics Edition identifies the shift “toward unified data models and integrated data ecosystems” as critical for institutional effectiveness. The report notes significant barriers remain: “slow adoption of common data standards, lack of in-house expertise, tight budgets, and concerns about privacy and security when connecting different data sources.” 

But institutions that overcome these barriers will build systems that “respond more quickly, spot and support at-risk students earlier, and evaluate programs more effectively as a whole.” This is what infrastructure modernization actually means: not just upgrading individual systems, but creating the connective tissue that enables institutional learning. 

Imagine infrastructure that functions like a learning organism. Student outcomes from last semester inform course scheduling for next semester. Advising patterns surface which interventions work for specific populations. Registration data reveals course conflicts before hundreds encounter them. Each cycle generates insights that make the next more effective. 

The EDUCAUSE report warns that “rapid AI adoption is introducing new risks” but is equally clear about the path forward: institutions must “develop clear policies and build cross-functional governance structures that include voices from IT, academic affairs, compliance, and student services.” This is the work of infrastructure modernization: integrating intelligence across systems while maintaining human oversight, transparency, and accountability. 

The Infrastructure Challenge for Lifelong Learners  

Traditional systems assume continuous enrollment: students who enter as freshmen and graduate four years later. These assumptions are embedded in everything from registration workflows to student information systems to advising models. 

Serving lifelong learners requires fundamentally different infrastructure. Systems need to remember students across years of non-enrollment. Credential systems must stack learning experiences accumulated across time and institutions. Registration workflows need to accommodate students taking one course while working full-time. 

The platform approach outlined in the first article in this series now defines the path forward for institutions ready to scale lifelong learning. Without unified infrastructure, institutions will continue to relegate adult learners to separate systems that feel like second-class experiences. The institutions that build infrastructure for lifelong learning will turn the enrollment cliff and broader demographic changes into drivers of innovation and competitive advantage.  

The Infrastructure Behind Skills-Based Credentials 

The second article of our series outlined the opportunity in skills-based credentials. But credential transformation depends entirely on infrastructure most institutions don’t yet have. Making educational outcomes relevant to employers requires systems that track competency development across courses and verify skill demonstration through assessed work. These systems must translate learning outcomes into employer language and enable dynamic credential pathways as employment demands evolve. 

Right now, course outcomes exist in syllabi. Assessment data sits in learning management systems. Career outcomes are tracked separately. None of these systems talk to each other, and none can generate the comprehensive, verifiable credentials students need. Building this infrastructure requires more than technical expertise. It depends on registrars, academic affairs, career services, IT, and institutional research working from unified data models. 

Where to Start  

Transformation gains traction through precise, coordinated initiatives that evolve into integrated systems over time. 

Start with a data integration pilot in one high-friction area, such as transfer credit evaluation, financial aid processing, or advising workflows. Build the connections that eliminate manual handoffs. Use that pilot to establish governance patterns and technical standards that can scale. 

Map the student journey to identify friction points. Follow students through registration, financial aid, advising, and enrollment. Document every place they encounter disconnected information or redundant data entry. These pain points become your integration roadmap. 

Most importantly, build with student-facing impact in mind. Every integration should make something tangibly better, such as faster information, clearer guidance, reduced manual work, or more responsive service. Infrastructure projects that deliver only backend efficiencies will struggle to sustain commitment. Projects that demonstrably improve student experiences will build momentum for continued transformation. 

The Infrastructure Imperative 

This series has outlined a clear progression: who to serve (lifelong learners at all career stages), how to prove value (skills-based credentials and AI-powered career connection), and what makes it possible (operational infrastructure that executes strategy at scale). 

The institutions that lead will approach transformation as an interconnected system. Success with diverse learners comes from modern infrastructure, and lasting credential innovation emerges from systems built to verify skills throughout learners’ lives. 

Infrastructure serves as a core differentiator, converting strategic vision into operational strength. It’s the difference between institutions that adapt to demographic change and those that watch enrollment decline while running on systems built for students who no longer represent their future. 

The work is demanding. It requires sustained commitment, cross-functional collaboration, and investment in capabilities that many institutions have historically under-resourced. Continuing to operate on disconnected systems while competitors advance with unified platforms limits growth and long-term resilience. 

Transformation begins with the essential work of modernizing systems, integrating data, and building platforms that serve lifelong learners. That’s where real differentiation happens, and that’s what determines institutional success in the decade ahead. 





FAQs 

Why does infrastructure modernization matter for student success? 
Modern systems remove friction in core experiences such as registration, advising, and financial aid. When data flows seamlessly, students receive faster responses, clearer guidance, and more personalized support. 

What does operational intelligence mean for higher education? 
Operational intelligence describes systems that automate processes and learn from them. When institutions integrate data across departments, they gain the ability to anticipate student needs, identify risks earlier, and continuously improve operations. 

How does infrastructure connect to skills-based credentials? 
Skills-based learning depends on interoperable data. Institutions need infrastructure that connects course outcomes, assessments, and verified competencies, creating credentials that employers understand and trust. 

Where should institutions start with modernization? 
Start with a pilot that addresses a visible student challenge such as transfer credit evaluation or financial aid delays. Use that project to establish governance patterns, integration standards, and measurable improvements that demonstrate value across the institution. 

What defines a future-ready institution? 
A future-ready institution treats infrastructure as a living system that learns and adapts. It measures success by student outcomes, institutional agility, and the ability to serve learners continuously throughout their careers.  

How Higher Education Proves Value in the Skills Economy 

Part 2 of our series Rewired: The New AI Architecture of Higher Education

Part 1: The New AI Architecture of Higher Education | Part 3: The Invisible Infrastructure That Determines Higher Education Success

Higher education faces a trust problem. College-going rates have dropped from 70% to 62% since 2016. When you ask students why, two themes dominate: affordability concerns and uncertainty about return on investment. 

Universities have responded by defending the value of degrees with more vigor and better marketing, but this strategy misunderstands what’s shifting. Students still want to learn, but they also want to know whether what they are learning matters to employers and how it connects to real employment opportunities. 

Degrees used to provide that assurance implicitly. Employers valued degrees, so students trusted their worth. But as employers shift toward skills-based hiring, that implicit value is eroding. Students now need explicit proof that their education translates into capabilities employers actually want. 

Meanwhile, employers are adopting skills-based hiring at accelerating rates. They care less about where you went to school and more about what you can do. This creates an opportunity for institutions willing to reimagine credentials entirely and use AI to connect learning to career outcomes in real time. 

The Credential Revolution  

The degree is evolving to become modular, transparent, and aligned to real-world capabilities. Today’s students demand degree programs where industry-aligned certifications are embedded throughout, not tacked on at the end. They want digital credentials that verify specific competencies in formats employers can instantly understand. They need evidence of skills activated, not just courses completed. 

This requires solving a problem most institutions are only beginning to articulate: making educational outcomes relevant and legible to employers. Right now, a degree signals institutional affiliation and field of study, but nothing more. Hiring managers need a clear view into whether a graduate can analyze datasets, lead cross-functional teams, or communicate complex ideas to non-technical audiences. 

Institutions know these things. Course learning outcomes exist. Assessment data sits in learning management systems. Capstone projects demonstrate applied competencies. But this evidence is trapped in internal systems, inaccessible to anyone outside the institution. Students leave with a diploma that says what they studied, not what they can do. 

Consider what this looks like from a student’s perspective. A sociology major graduates knowing they can conduct qualitative research, analyze social patterns, manage community-based projects, and synthesize complex information for diverse audiences. But their diploma says “Bachelor of Arts in Sociology.” Their transcript lists course titles and grades. They spend months after graduation trying to articulate their actual capabilities in resumes and interviews because their institution never made those skills visible or verifiable to employers. 

Institutions that build interoperable credential systems with digital credentials that verify specific competencies, stackable certifications embedded throughout degree programs, and verified skill demonstrations will define a new model for learning. They will become the trusted translators between education and employment. They will award degrees and validate capabilities that matter, serving students throughout their careers as they return for new credentials and competencies. 

Some institutions are already moving in this direction. Computer science programs embed AWS or Google Cloud certifications alongside degree requirements. Business schools offer IBM badges and Six Sigma certifications as integrated components of coursework. Universities partner with platforms like Credly and Canvas Credentials to issue competency-based digital badges that students can share directly with employers. 

Arizona State University is taking this even further with its Trusted Learner Network (TLN), building infrastructure for distributed ledger-based, verifiable credentials that can follow students throughout their lifelong learning journey—not just credentials from ASU, but a vision of interoperable credential exchange across institutions, employers, and learning providers. This is what credential infrastructure looks like when institutions think beyond single transactions to lifelong relationships. 

But most institutions are still treating credentials as isolated experiments rather than core infrastructure. A certificate program here, a digital badge pilot there, maybe some industry partnerships in high-demand fields. What’s missing is the institutional commitment to make skills verification foundational to how students progress through their education and how alumni demonstrate their capabilities throughout their careers. 

This transforms the institutional relationship from a four-year transaction to a lifelong partnership. Alumni leave with more than a degree; they maintain a credential relationship with the institution, returning for micro-credentials, professional certifications, and competency validations as their careers evolve. This is the infrastructure that makes lifelong learning operationally viable: a unified system where a 22-year-old recent graduate and a 45-year-old mid-career professional engage with the same credential ecosystem. 

Where AI Readiness Becomes Competitive Advantage 

Recent research surfaces a critical gap. Students are already using AI tools extensively in their academic work for research, writing, and problem-solving. Meanwhile, fewer than 20% of faculty feel confident teaching with or about AI. Most institutions are treating this as a training problem: a few workshops on prompt engineering, some guidance on academic integrity, maybe a pilot program or two. 

That response entirely misses the opportunity. The institutions that will differentiate themselves are doing more than training faculty on AI tools. They’re integrating AI into how students learn, how advisors guide, and how the institution operates. The difference is between treating AI as a tool to learn about versus treating it as the intelligence layer that makes every system more responsive. 

Consider what this looks like operationally. Right now, when a student struggles in a course, they might get flagged for early intervention. For example, they may receive an automated email suggesting the tutoring center, or maybe an advisor reaches out to recommend better study habits or office hours. That’s reactive and generic. 

An AI-informed institution operates differently. The system recognizes the struggle in real-time and surfaces personalized tutoring resources at the moment intervention is needed. These are not generic study tips, but alternative approaches to the material aligned with how that student learns best. When the student registers for next semester, the system adjusts course recommendations to sequence their learning more effectively while still maintaining progress toward their degree. The advisor still has the conversation, but now they’re working with intelligence about what approaches are actually effective for this student. 

The difference is more than better outcomes. It’s operational efficiency at scale. An advisor managing 400 students can’t manually track how each student learns best, which interventions are working, and what course sequences will set them up for success. But an AI-informed system can surface exactly which students need proactive outreach, what specific guidance would be most relevant, and how to sequence their learning path most effectively. The advisor’s time shifts from administrative triage to high-value relationship building. 
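To make the triage idea concrete, here is a minimal sketch of how a system might rank students for proactive outreach. The signals, weights, and thresholds are hypothetical illustrations chosen for this example, not a description of any real advising platform.

```python
from dataclasses import dataclass

@dataclass
class StudentSignals:
    name: str
    missed_logins: int   # days since last LMS activity
    grade_drop: float    # drop in rolling average grade (0-1 scale)
    alerts_open: int     # unresolved early-alert flags

def risk_score(s: StudentSignals) -> float:
    # Illustrative weights; a real system would calibrate them
    # against historical intervention outcomes.
    return (0.4 * min(s.missed_logins / 14, 1.0)
          + 0.4 * min(s.grade_drop / 0.2, 1.0)
          + 0.2 * min(s.alerts_open / 3, 1.0))

def outreach_queue(students, top_n=5):
    """Return the top_n students an advisor should contact first."""
    return sorted(students, key=risk_score, reverse=True)[:top_n]

students = [
    StudentSignals("A", missed_logins=1,  grade_drop=0.02, alerts_open=0),
    StudentSignals("B", missed_logins=10, grade_drop=0.15, alerts_open=2),
    StudentSignals("C", missed_logins=4,  grade_drop=0.25, alerts_open=1),
]
for s in outreach_queue(students, top_n=2):
    print(s.name, round(risk_score(s), 2))
```

The point of the sketch is the shift it enables: instead of scanning 400 records, the advisor starts each day with a short, ranked queue and spends the saved time on the conversations themselves.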

The challenge is organizational. It requires integrating intelligence across disconnected systems like advising platforms, learning management systems, career services tools, and student information systems. It requires training staff to use AI-informed insights without replacing their professional judgment. And it necessitates building workflows where AI augments human interaction rather than creating another dashboard no one checks. 

I’ve watched institutions pilot AI capabilities that never scale beyond the pilot. A chatbot answers basic questions but cannot access student records. An early alert system generates so many flags that advisors cannot possibly respond to them all, leading them to ignore the alerts entirely. An AI-powered degree planning tool recommends optimal course sequences but operates in a separate system, disconnected from the advising and registration workflows students actually use. 

The competitive advantage comes from embedding AI into how every system serves students. That requires treating AI integration as an operational transformation, not a technology deployment. And it requires infrastructure built to make intelligence actionable, not just theoretical. 

Proving Value Through Skills and Intelligence 

The institutions that solve the ROI crisis will be the ones that make learning outcomes transparent and connected to employment. They’ll build credential systems that translate education into employer-legible skills and use AI to connect students with career pathways from day one, not just senior year. Industry certifications will be embedded throughout their degree programs rather than treating them as add-ons. 

This transformation requires institutions to fundamentally rethink how they measure success, from degrees awarded to skills activated, from course completion to demonstrated capability, and from graduation metrics to career readiness at every stage. It requires building credential systems that prove competency, not just attendance, and treating career preparation as foundational to education, not a separate service bolted on at the end. 
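To make “employer-legible” concrete, here is a rough sketch of what a skills-first credential could look like as data. The shape is a hypothetical illustration, loosely inspired by open badge formats rather than any specific standard, and every name and value in it is made up:

```javascript
// Hypothetical skills-first credential: verified competencies, not just a
// course title and a grade. The shape is illustrative, loosely inspired by
// open badge formats; it does not follow any specific standard.
const credential = {
  issuer: "Example State University",
  earner: "jordan.rivera@example.edu",
  name: "Applied Data Analysis",
  issuedOn: "2025-05-15",
  competencies: [
    "Cleaning and joining real-world datasets",
    "Communicating findings to non-technical stakeholders",
  ],
  evidence: "https://example.edu/portfolio/jordan-rivera/capstone",
};

// An employer-facing summary: what the earner can actually do.
function summarize(c) {
  return `${c.name} (${c.issuer}): ${c.competencies.join("; ")}`;
}
```

The point of a structure like this is the `competencies` and `evidence` fields: they turn a line on a transcript into something an employer can read and verify.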

The institutions leading this work will be the ones that understand proving value is no longer a marketing problem, but an infrastructure problem. You can’t demonstrate skills if you don’t have systems to verify and credential them. You can’t connect learning to careers if your academic systems don’t talk to your career services platforms. You can’t serve students throughout their lifelong learning journey if your infrastructure is designed exclusively for traditional four-year degree seekers. 

The next article in this series examines the operational infrastructure that makes all of this possible. The invisible systems that determine whether students persist or leave, whether institutions can deliver on these promises at scale, and whether the transformation from traditional education to intelligent learning ecosystems actually works in practice. 

Read part 3 of our Rewired series, The Invisible Infrastructure That Determines Higher Education Success.  If you missed our first article in this series, check out The New AI Architecture of Higher Education.  

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing. 


FAQs 

Why do credentials need to change when degrees still matter to employers? 

Employers increasingly hire based on demonstrated skills rather than degree prestige. They need to understand what a graduate can actually do, not just where they studied. Verifiable digital credentials that translate coursework into specific competencies help employers make better decisions and help graduates prove their capabilities clearly. 

What makes AI fluency different from AI adoption in higher education? 

AI adoption means using tools like ChatGPT or administrative automation. AI fluency means weaving intelligent systems into how students learn, advisors guide, career services operate, and institutions run. It’s the difference between adding technology and reimagining how education works when intelligence can personalize, predict, and adapt at scale. 

How do institutions make educational data legible to employers? 

Through interoperable credential systems that translate courses into demonstrated competencies. Instead of transcripts showing only course titles and grades, modern credentials verify specific skills like data analysis, cross-functional leadership, or technical communication. Digital badges and stackable certifications create a common language between education and employment. 

What does AI-powered career services look like in practice? 

AI-powered career services track labor market trends in real time, connect coursework to emerging job opportunities, help students build competency portfolios throughout their education, surface relevant alumni mentors based on career interests, and personalize guidance based on individual strengths and market demand. The technology enables career planning from freshman year instead of a senior-year scramble. 

The New AI Architecture of Higher Education 

Part 1 of our series Rewired: The New AI Architecture of Higher Education 

Part 2: How Higher Education Proves Value in the Skills Economy | Part 3: The Invisible Infrastructure That Determines Higher Education Success

The State of Higher Education 2025 report confirms what institutions have been tracking for years: the enrollment cliff is here. Peak high school enrollment arrived with the Class of 2025, and from now through 2041, the number of graduates will decline by 13%. 

Institutions knew this was coming. What they aren’t ready to hear is what it requires: not better retention strategies or more aggressive recruiting, but fundamental reinvention of who they serve and how they serve them. Most institutions see the enrollment cliff as a crisis to be managed. I see it as the catalyst for higher education’s most exciting transformation in decades. 

The report captures a sector at an inflection point. Demographic shifts, AI advancement, and evolving student expectations are converging to create the conditions for fundamental reinvention. The barrier isn’t awareness or willingness, it’s execution. Institutions move slowly. Their systems are disconnected. Their infrastructure is rigid, designed for a traditional student population that no longer represents their future. 

The transformation requires work most institutions have barely started: reimagining who their students are, modernizing how systems serve them, and redefining what counts as proof of learning. 

The Student You’re Not Designing For 

I’ve sat in countless conversations with enrollment and student success teams. The pattern is always the same: everyone is focused on meeting this term’s targets, fixing immediate friction points, optimizing for the students already enrolled. There’s barely time to think about next month, let alone reimagine who you could serve five years from now. 

When leaders do push for serving non-traditional populations, such as adult learners, part-time students, and those with significant transfer credits, the instinct is often to squeeze these students into existing systems. Use the same registration workflows. Same advising model. Same assumptions about what ‘student success’ means. The result? You’ve diversified your enrollment numbers but not your infrastructure. 

This is the trap that keeps institutions focused on a shrinking market. As the traditional undergraduate population declines, a massive population of learners remains underserved: first-time adult learners, part-time students, professionals seeking micro-credentials, students with significant transfer credits, and alumni returning for reskilling. 

These learners represent the future majority of higher education, and they bring fundamentally different expectations. They need to learn while working full-time, while managing families, while living far from campus. They require flexibility as a condition of participation. And they expect university systems to work like every other digital experience in their lives: responsive, intelligent, and adaptive. 

Online-only enrollment has already surpassed 5 million students, and enrollment in online master’s programs now exceeds enrollment in their in-person counterparts. The pandemic validated what these learners already knew: flexible learning is the only viable path for students juggling multiple commitments. What institutions treated as an emergency response in 2020 has become a permanent expectation in 2025. 

Being “student-centric” requires building systems with institutional memory, platforms that recognize a returning student, pre-populate forms with known information, and give advisors visibility into a student’s full academic journey. The technology to do this exists in every other sector. Higher education’s challenge is the complexity of dismantling deeply embedded silos while keeping operations running. 

The institutions that will thrive aren’t the ones fighting to preserve systems designed for traditional learners. They’re the ones willing to do the hard work of building platforms that serve a 19-year-old college freshman and a 45-year-old professional returning for a certification with equal intelligence, systems that recognize both learners, understand their different needs, and adapt accordingly. 

The Platform Play Higher Ed Hasn’t Made 

Online education has proven its viability. The next frontier is integration. Online and on-campus work best as different modes within a unified learning platform that follows students wherever they are in life. 

Right now, most universities treat online programs as separate business units with distinct registration systems, student services, and cultures. I’ve seen this friction play out in painful ways. A junior takes a summer internship out of state and wants to stay on track by taking one online course. Suddenly they’re navigating a completely different registration portal, calling a separate help desk, and dealing with advisors who can’t see their on-campus transcript.  

Or consider the undergraduate alum applying to an online master’s program at the same institution. They’re re-entering all the information the university already has, speaking with advisors who have no visibility into their four years of history. Same institution, but the student experiences it as if starting from zero. 

The friction is real, and it’s expensive. Every moment of confusion, every duplicated form, every advisor who doesn’t have complete context is a moment where the student considers whether continuing is worth the hassle. 

The opportunity sits in building modular, always-on learning environments where micro-credentials, degrees, and continuous upskilling integrate seamlessly. Picture this: A student completes a graduate certificate in data analytics. Three years later, they return for an MBA. The certificate credits automatically apply, their prior work is visible to new faculty, and the advising team can build on previous conversations rather than starting fresh. The student doesn’t have to re-explain themselves. They’re simply continuing a relationship the institution remembers. 

This isn’t hypothetical. Some institutions are building this now, and it’s becoming their competitive advantage. 

This vision requires treating education as a lifelong relationship rather than a four-year transaction. It means building systems that remember students, adapt to their changing needs, and make re-entry feel seamless rather than starting from scratch. The institutions that crack this will turn alumni into lifelong learners and turn education into something that compounds in value over time. 

This fundamentally shifts how institutions think about their role. Instead of a four-year engagement, you’re building relationships that span careers. Alumni who return for stackable credentials every few years represent the best kind of growth: learners you’ve already served well, who understand how your programs work, and who are advocating for your institution with their employers. This is how institutions build enrollment resilience in a shifting demographic landscape. 

What This Looks Like in Practice 

Transformation at this scale relies on strategic planning and attention to detail. It happens when your data architecture can track a learner across programs, modalities, and decades. When your student information system doesn’t silo traditional and non-traditional students into separate workflows and data structures. When your advising model scales to support someone taking one course just as effectively as someone enrolled full-time. 
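As a rough illustration of what that data architecture implies, here is a hypothetical sketch of a unified learner record: one identity that accumulates programs and modalities over time, so a returning applicant is recognized and pre-populated rather than re-onboarded. Every field and name below is an illustrative assumption, not a reference to any real student information system:

```javascript
// Hypothetical unified learner record: one identity, many engagements
// across programs, modalities, and decades. Field names are illustrative.
const learnerRecord = {
  learnerId: "L-2041-0087",
  identity: { name: "Jordan Rivera", firstEnrolled: 2016 },
  engagements: [
    { program: "BS Economics", modality: "on-campus", status: "completed", years: [2016, 2020] },
    { program: "Data Analytics Certificate", modality: "online", status: "completed", years: [2023, 2023] },
  ],
};

// Recognize a returning learner and pre-populate a new application
// from what the institution already knows, instead of starting from zero.
function prefillApplication(record, newProgram) {
  return {
    applicant: record.identity.name,
    program: newProgram,
    priorCredentials: record.engagements
      .filter((e) => e.status === "completed")
      .map((e) => e.program),
    isReturningLearner: record.engagements.length > 0,
  };
}

const application = prefillApplication(learnerRecord, "MBA");
```

With a record like this, the alum applying to the online MBA is a continuation of a known relationship: their certificate credits, history, and advising context travel with them.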

The institutions getting this right are treating it as a technology transformation, not just a strategy refresh. They’re building unified data layers, modernizing APIs, and creating seamless user experiences. They’re measuring success by how little friction a learner experiences, not just by enrollment and retention numbers. 

Building the Foundation for What’s Next 

The universities that thrive over the next decade will be the ones that expand their definition of students to include learners at every career stage. They’ll create unified platforms where online and on-campus blend seamlessly, building experiences that serve diverse populations with equal care. 

Transformation happens in the essential work of modernizing systems, integrating data, and building platforms for lifelong learning. It happens when institutions shift their focus from what they’ve always done to designing for who they could serve. 

The institutions leading this work will be the ones that respond to the enrollment cliff by expanding who they serve. The ones that understand serving lifelong learners requires purpose-built infrastructure. The ones ready to measure success by skills activated rather than degrees awarded. 

The opportunity is clear: institutions that expand their definition of ‘student’ and build unified platforms for lifelong learning will own the next decade. But expanding who you serve only matters if learners believe your programs are worth their investment. In the next article, we’ll explore how institutions prove value in a skills economy—how they make learning outcomes transparent, credentials employer-legible, and career pathways visible from day one. 

Read part 2 of our Rewired series, How Higher Education Proves Value in the Skills Economy.

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization, not just in theory but in practice, we’d love to be a partner in that journey. Request an AI briefing. 


FAQs 

How can universities grow enrollment during the demographic cliff? 

Growth comes from expanding who you define as a student. First-time adult learners, students with transfer credits, professionals seeking micro-credentials, and alumni returning for reskilling represent massive underserved populations. Institutions that build systems serving these learners as well as traditional undergraduates will find new revenue streams throughout the demographic transition. 

How do institutions serve traditional students and lifelong learners simultaneously? 

By building unified platforms where different learner types access personalized experiences through the same underlying systems. An 18-year-old residential student and a 40-year-old professional seeking a certificate have different needs, but both benefit from intelligent advising, clear pathways, and responsive operations. The technology should adapt to the learner, not force the learner to adapt to rigid categories. 

What does a unified learning platform actually include? 

A unified platform integrates registration, advising, credential tracking, and student services across all learning modes. It remembers student history regardless of how long they’ve been away, allows seamless transitions between degree programs and micro-credentials, and personalizes communication and support based on individual circumstances. The goal is making re-entry as natural as initial enrollment. 

Why is lifelong learning more valuable than traditional four-year models? 

Lifelong learning creates recurring revenue streams and deeper alumni relationships. Students who return multiple times throughout their careers generate sustained tuition revenue while building stronger institutional loyalty. Education becomes a compounding relationship rather than a single transaction, increasing lifetime value per student. 

How I Learned to Stop Worrying and Love AI Code: A Designer’s Journey 

How Designers Are Using AI Code Tools: From Figma to Functional Prototypes 


The team Zoom call felt like an intervention. “Just try it,” they said. “Everyone open Claude Code.” My palms were sweating. Twenty years of visual design, and I’d only clumsily played around with code. I was the furthest thing from a developer you could imagine. 

Two hours later, I couldn’t stop. I’d built three working prototypes. My ideas, the ones that lived and died in Figma for years, were suddenly real. Interactive. Alive. 

This is the story of how I went from code-phobic to code-addicted in a single afternoon. And why every designer reading this is about to follow the same path. 

The Designer–Developer Divide 

For decades, we’ve accepted a fundamental lie: designers design, developers develop. The gap between these worlds felt like a chasm requiring years of computer science education to cross. HTML, CSS, and JavaScript were foreign languages spoken in basement servers and terminal windows. 

I believed this myth completely. My job was making things beautiful. Someone else’s job was making them work. This division of labor felt natural, inevitable, and even efficient. Why would I learn to code when developers already did it so well? 

That myth cost me every idea I couldn’t prototype myself, every interaction I couldn’t test, and every vision that got lost in translation. Twenty years of creative constipation, waiting for someone else to birth my ideas. 

Five Minutes to AI-Powered Prototyping 

“Open your terminal,” they said. Haha. I’d only ever really seen it used in The Matrix. The black window appeared. The cursor blinked in judgment. “Type ‘claude’ and tell it what you want to build.” 

My first prompt was embarrassingly simple: “Make me a color palette generator.” I expected nothing. Error messages, maybe. Definitely not working code. 

But there it was. A functioning app. My app. Built with my words. 

The next prompt came faster: “Add a feature that saves palettes locally.” Done. “Make the colors animate when they change.” Done. Each success made me braver. Each response made me hungrier. 
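I didn’t write a line of this by hand, and I’m reconstructing from memory, but the kind of code those prompts might produce looks roughly like the sketch below. Function names and details are my own illustrative assumptions, not the actual generated output:

```javascript
// Sketch of what prompts like "make me a color palette generator" and
// "add a feature that saves palettes locally" might produce.
// All names here are illustrative assumptions.

// Generate n random hex colors, e.g. "#a3f01c".
function generatePalette(n = 5) {
  return Array.from({ length: n }, () =>
    "#" + Math.floor(Math.random() * 0xffffff).toString(16).padStart(6, "0")
  );
}

// "Save palettes locally": in a browser this would use localStorage.
// The storage backend is passed in so the logic stays testable.
function savePalette(palette, storage) {
  const saved = JSON.parse(storage.getItem("palettes") || "[]");
  saved.push(palette);
  storage.setItem("palettes", JSON.stringify(saved));
  return saved.length; // how many palettes are now saved
}

const palette = generatePalette();
// In the browser: savePalette(palette, window.localStorage);
```

In the browser version, the storage argument is simply `window.localStorage`; the point is how little ceremony stands between a plain-English sentence and a working feature.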

By the end of that call, I wasn’t just using AI to code. I was thinking in code. The barrier I’d spent two decades accepting had evaporated in minutes. 

The New Addiction: Vibe Coding 

They call it “vibe coding,” this conversational dance with AI. You describe what you want. The AI builds it. You refine. It rebuilds. No syntax to memorize. No documentation to parse. Just pure creative expression flowing directly into functional reality. 

I became obsessed. That first night, I built seven prototypes. Not because anyone asked. Not because I needed them. Because I could. Every design idea I’d shelved, every interaction I’d dreamed about was suddenly possible. 

The feeling was intoxicating. After years of creating static mockups, watching my designs move and respond felt like gaining a superpower. Click this button, trigger that animation. Hover here, reveal that detail. My aesthetic decisions instantly became experiential. 

When Designers Start Coding 

Something profound happens when the person with design taste controls implementation. The endless back-and-forth disappears. The “that’s not quite what I meant” conversations vanish. The design is the product is the code. 

UXPin’s research shows designers can now “generate fully functional components with just a few inputs.” But that clinical description misses the emotional reality. It’s not about generating components. It’s about giving creative vision direct access to digital reality. 

I started noticing details I’d never considered before. The precise timing of transitions. The subtle response to user actions. The difference between functional and delightful. When you control every aspect of implementation, you start designing differently. You start designing more ambitiously, more precisely, and with more courage.  

AI Code Tools That Make It Possible 

The technology enabling this transformation is staggering. Visual Copilot converts Figma designs directly to React code. Codia processes designs 100x faster than manual coding. These aren’t incremental improvements. They’re paradigm shifts disguised as product features. 

But the tools are just enablers. The real revolution happens in your mind. That moment when you realize the prison was self-imposed. The guards were imaginary. The key was always in your pocket. 

Natural language is the new programming language. If you can describe what you want, you can build it. If you can envision it, you can ship it. The only barrier left is imagination. 

The Future of Designer-Coders 

Organizations clinging to traditional designer-developer divisions are about to face a reckoning. While they coordinate handoffs and manage miscommunications, designers who code are shipping. Iterating. Learning. Building. 

This shift amplifies designers. Developers can focus on complex systems and architecture. Designers can implement their vision directly. Everyone works at a higher level of abstraction and impact. 

The competitive advantage is obvious. Teams with designer-coders ship better products faster. Not because they’re more efficient, but because they’re more effective. Vision and execution unified in a single mind. 

Your First Steps with AI Coding 

I know what you’re thinking. “But I’m not technical.” Neither was I. “But I don’t understand programming.” You don’t need to. “But I’m just a designer.” That’s exactly why you’re perfect for this. 

The same skills that make you a great designer (understanding users, crafting experiences, and obsessing over details) make you a natural at AI-powered development. You already think in systems and interactions. Now you can build them. 

Start small. Open a terminal and type a prompt. Build something stupid. Then build something slightly less stupid. Within hours, you’ll be building things that matter. Within days, you’ll wonder how you ever worked without this power. 

The Designer You’ll Become with AI 

Six months later, I barely recognize my old workflow. Static mockups feel like cave paintings. Design documentation seems like elaborate fiction. The idea of handing off my vision for someone else to interpret? Unthinkable. 

My role hasn’t changed. I’m still a visual designer. But my capability has transformed. I create experiences instead of just imagining them. I don’t just propose ideas; I prove them. I don’t just design products; I ship them. 

The code anxiety is gone. Every limitation that once constrained me now seems artificial. The only question left is what to build next. 

Your journey starts with a single prompt. What will yours be? 

The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing. 


FAQs 

Q: What are AI code tools? 
AI code tools (like Claude Code, GitHub Copilot, or Visual Copilot) let you describe what you want in natural language, then generate working code automatically. 

Q: How can designers use AI code tools? 
Designers can turn Figma mockups or written prompts into functional prototypes, animations, and interactions—without learning traditional programming. 

Q: Does this replace developers? 
No. Developers focus on complex architecture, scaling, and systems. AI coding empowers designers to own interaction and experience details, speeding collaboration. 

Q: Why does this matter for organizations? 
Teams that adopt AI prototyping iterate faster, align design and development more tightly, and ship higher-quality products with fewer miscommunications. 

Q: What skills do designers need to start? 
Curiosity and creativity. If you can describe an idea clearly, you can build it with AI code tools.