Robots & Pencils Expands Retail and Consumer Goods Leadership with Appointment of Saul Delage as Client Partner
Robots & Pencils
As AI reshapes how Retail and Consumer Goods businesses compete, Robots & Pencils plants its flag in the vertical and brings in a 30-year industry veteran to lead the charge.
Robots & Pencils, an applied AI engineering partner known for high-velocity delivery and measurable business outcomes, today announced the appointment of Saul Delage as SVP, Client Partner, Retail and Consumer Goods (RCG). Based in Chicago, he brings 30 years of experience building executive partnerships and driving growth across some of the most recognized names in digital transformation. Delage joins a proven leadership team at Robots & Pencils with extensive experience delivering for more than 100 of the world’s most recognized consumer brands, including dozens of Fortune 500 companies.
The appointment is a deliberate move. Robots & Pencils is investing with intention in Retail and Consumer Goods, including CPG, eCommerce, Restaurants & Everyday Essentials, and Retail, an industry under mounting pressure to move from AI experimentation into generative and agentic AI that performs in production and delivers measurable business outcomes. Delage will lead client relationships across the vertical, helping enterprise leaders get more from their AI investments and more from their investments in AWS, while strengthening the company’s presence in key markets and building closer, more embedded partnerships with clients.
The Right Leader for the Moment
“Saul is the kind of leader clients trust before the contract is signed and can’t imagine working without after,” said Len Pagon, CEO of Robots & Pencils. “He has worked alongside some of our team before. He knows how we operate, and he knows this vertical inside out. Retail and Consumer Goods is a large, growing industry vertical for us, and we went out and got the right leader.”
A Career Built on Trust and Delivery
Delage arrives with a career forged across Cognizant, Isobar, Havas, Razorfish, and Fry, where he built high-performing growth teams, secured long-term relationships with Fortune 500 companies, and earned a reputation as one of the most technically fluent executives in the industry, equally effective in the boardroom and in the details of delivery.
His hire reflects the company tenet that winning in Retail and Consumer Goods requires leadership with deep business experience, the technical fluency to speak the language of AI, and the delivery discipline to back it up.
“Retail and Consumer Goods businesses have made significant investments in AI, and too many have little to show for it in production,” said Delage. “Robots & Pencils builds enterprise AI systems — generative, agentic, and production-ready — that move fast and tie directly to revenue and customer experience. That is exactly what this industry needs right now, and Robots & Pencils is built to deliver it.”
Built on AWS. Driven by Outcomes.
Robots & Pencils is unabashedly aligned with AWS and is building its Retail and Consumer Goods vertical around that conviction. The goal is helping enterprise leaders drive measurable business outcomes on AWS, from first deployment to full-scale production. For AWS co-sell teams and enterprise leaders who need a partner that moves fast and delivers, that commitment is the differentiator.
Robots & Pencils Goes All in on AWS with Appointment of Adrian Bird as Vice President of AWS Partnership
Robots & Pencils
Bird brings two decades of alliance leadership, including five years inside AWS, to accelerate co-sell engagement and expand enterprise AI delivery with AWS
Robots & Pencils, an applied AI engineering partner known for high-velocity delivery and measurable business outcomes, today announced the appointment of Adrian Bird as Vice President of AWS Partnership.
The timing is deliberate. Bird joins as the company deepens its investment in the AWS ecosystem. He will lead the company’s AWS Partner strategy and execution, expanding joint customer engagement and strengthening alignment with AWS teams.
Two Decades of AWS and IBM Partnership Leadership
Bird brings direct experience from AWS, where he was Partner Sales Leader from 2020 to 2026. In that role, he managed comprehensive channel strategy and partner program initiatives across ISVs, global systems integrators, and technology partners, collaborating with AWS field sellers across regions and industries to support joint go-to-market initiatives. He built partner success frameworks that drove exceptional year-over-year growth in partner revenue, created revenue operations tools adopted across multiple AWS business units, and led initiatives that significantly expanded the security partner ecosystem. He earned AWS’s Innovation All-Star recognition in 2024.
Prior to AWS, Bird spent fifteen years at IBM in progressively senior partnership and alliance roles. He grew the Watson Media worldwide partner channel from inception to more than a third of business unit revenue within two years, scaled the IBM Commerce partner business several-fold over four years, and led the integration of Sterling Commerce’s partner ecosystem following its acquisition, retaining the vast majority of partners while substantially growing combined revenue. He is a recipient of IBM’s Industry Solutions Successful Partnering Award and the IBM 100% Club.
Accelerating AWS Partnership Strategy
“Adrian has spent his career building partner ecosystems that generate real, compounding results,” said Scott Young, EVP of Growth and Strategic Partnerships. “Having led partner strategy inside AWS, he knows exactly how AWS field teams operate and what it takes to be a partner they actively bring into deals. That perspective, combined with his track record of execution, is precisely what we need right now. Clients who need enterprise AI at speed will benefit directly from what Adrian builds.”
“We are unabashedly all in on AWS,” said Len Pagon, CEO of Robots & Pencils. “We have tremendous traction and momentum. Adrian is another key investment in taking our AWS partnership further, faster.”
Driving Enterprise AI Adoption and AWS Consumption at Scale
Bird’s appointment builds on recent company milestones, including earning AWS Advanced Tier Services Partner status, selection as one of 11 inaugural AWS Pattern Partners globally, and the launch of its Studio for Generative and Agentic AI in Bellevue near AWS headquarters. The company is also actively engaged with AWS through its collaboration with the AWS Generative AI Innovation Center to support joint enterprise initiatives. Bird’s mandate is clear: align partnership strategy with Robots & Pencils’ ability to deploy AI into production at speed and drive measurable AWS consumption and customer value at scale.
“Robots & Pencils has built what most companies only claim to have. They have the engineering depth, a proven record of deploying AI into production at speed, and the scale to serve enterprise clients globally,” said Bird. “The AWS partnership is the force multiplier that connects those capabilities to the clients who need them most. My job is to make sure that potential becomes performance, for Robots & Pencils, for AWS, and for the clients we serve together.”
Request an AI briefing to evaluate how applied AI can deliver velocity and impact within your organization.
Robots & Pencils Brings Enterprise AI Platform Expertise to ASU-GSV Summit Panel
Robots & Pencils
CEO Leonard Pagon joins education industry discussion exploring how leading universities move from AI pilots to enterprise-scale platforms with governance and speed
Robots & Pencils, an applied AI engineering partner known for high-velocity delivery and measurable outcomes in complex institutional environments, today announced that its CEO, Leonard Pagon, will participate in a panel discussion at the ASU-GSV Summit, a premier global event focused on the intersection of education, technology, and workforce innovation.
As a seasoned operator who has built and scaled technology-driven companies, Pagon will share practical insights on how universities are operationalizing AI across institutional systems while maintaining governance, security, and accessibility.
Pagon will be joined by Kyle Bowen, Deputy CIO, Arizona State University; Matthew Gee, Director, U.S. Program Data, Gates Foundation; Stephanie Khurana, CEO, Axim Collaborative; and Elizabeth Reilley, Chief AI Officer, University of North Carolina. The panel will be moderated by Lev Gonick, Enterprise CIO, Arizona State University.
The session centers on the real-world mechanics of enterprise AI in higher education, including:
How institutions connect AI across SIS, LMS, CRM, and operational systems
Governance models that allow rapid experimentation without losing institutional control
Real examples that enable non-technical users to design and deploy AI solutions
The growing role of agentic AI in strengthening institutional resilience and operational efficiency
“A lot of AI activity in higher education is happening at the use case level, which creates fragmentation across the institution,” said Pagon. “The real shift is building platforms that bring those efforts together and make AI work inside everyday institutional constraints. The impact shows up when AI moves from pilots into production systems that perform, with the right guardrails in place.”
Scaling AI Across the University with Governance and Control
Robots & Pencils has worked with Arizona State University since 2019 on cloud-native architecture, platform modernization, and applied AI initiatives that improve the student experience and strengthen institutional operations. This collaboration has produced practical insight into how universities move from small pilots to enterprise AI programs through platforms that support rapid innovation while maintaining strong governance, security, and accessibility standards.
From Experimentation to Enterprise AI
Institutions are increasingly looking to AI to improve student support, reduce manual processes, and extend staff capacity without increasing headcount. The ASU-GSV Summit brings together entrepreneurs, investors, and education leaders who are shaping the future of learning and workforce development through technology.
Higher education leaders are moving quickly to modernize operations, improve student experience, and prepare graduates for an AI-driven economy. Enterprise AI platforms give universities a structured way to design, deploy, and govern AI solutions across departments while enabling faculty, staff, and researchers to use AI in their day-to-day work with security, compliance, and accessibility built in. This approach enables institutions to scale AI in higher education with clarity, control, and speed.
Why higher education AI governance frameworks fail after approval and who is responsible for closing the gap.
Across higher education, AI is no longer theoretical. It shows up in advising offices, finance teams, registrar systems, and IT backlogs every day. Not long ago, the conversations felt divisive. Leaders debated risk, approved tools, and moved forward with cautious optimism.
Today, many of those same leaders are sitting with a different feeling. The systems technically work. Progress feels uneven. Accountability feels scattered. And no one can say with certainty whether the institution is truly advancing or simply carrying new technology without a clear owner of the outcome.
That uncertainty now lives with presidents, provosts, and CIOs expected to defend AI investment, manage institutional risk, and show results inside universities designed to move carefully, by consensus, and without urgency. The technology is working. The institution is not.
The gap between those two facts is structural.
Today, Robots & Pencils, an applied AI engineering partner known for high-velocity delivery and measurable outcomes in complex institutional environments, announces the release of The Institutional Intelligence Crisis, a three-part research series examining why AI adoption fails at the departmental level and what senior leadership must address to change that trajectory.
Drawing on research and operational experience across universities and complex organizations where AI adoption is already underway, the series identifies a set of recurring patterns that appear once AI moves beyond experimentation and into daily operations.
The series is authored by Jess Martin, Principal Delivery Manager at Robots & Pencils, and is written for university presidents, provosts, CIOs, and boards of trustees. It treats AI adoption as an institutional design challenge, not a technology procurement problem, and focuses on the post-pilot phase: the period where accountability structures and human dynamics determine whether AI becomes a reliable capability or quietly rots.
“AI doesn’t create accountability problems,” says Martin. “It exposes the ones you already have.”
Why AI Governance Fails in Higher Education: Three Failures That Compound
The series is built around three failures that compound in sequence:
The Intelligence Leak (Part 1): When institutions fail to provide operational pathways for AI, high-performing staff build their own, exporting institutional problem-solving logic to personal accounts at third-party vendors. The sector now calls this Shadow AI. The university does not get smarter. The vendor does. When institutions leave a gap between policy and practical access to AI tools, staff close that gap themselves, often outside the visibility of supervisors or institutional governance.
The Redistribution of Expertise (Part 2): AI makes institutional expertise portable. The specialized knowledge that senior staff in advising centers, registrar offices, and financial aid departments have spent decades accumulating and making indispensable can now be replicated by a junior colleague and a well-prompted AI. What leadership often experiences as operational friction is frequently a rational response from professionals whose expertise has defined their role inside the institution.
The Brittle System (Part 3): When no one is accountable for output quality, performance degrades without announcement. Errors become plausible enough that staff quietly work around them rather than report them. The system continues running while confidence in the results quietly erodes. In many institutions, leaders lack a clear view into whether AI systems are improving outcomes or introducing new operational risk.
Higher education leaders are encouraged to read the full series and engage with a data-driven perspective grounded in accountability, execution, and institutional readiness.
Part 3 – The Institutional Intelligence Crisis: The Brittle System
Jessica Martin
This article is part of a three-part series examining why AI adoption stalls in higher education and what senior leaders must address to restore momentum. Each article stands alone. Reading the full series is recommended.
Execution, Quality Drift, and the Cost of Looking Away
For four months, Donna’s enrollment verification tool looked flawless.
As Registrar, she oversaw the deployment, ran the tests, and watched it process thousands of student records without a single flag.
Then an IT team upstream changed how transfer credits were coded as part of a routine update, and the change never surfaced in any channel that reached Donna’s office or the AI tool.
The tool did not fail loudly. It started producing plausible errors, correctly verifying about 90% of students while quietly mishandling a subset of transfer students. With no performance owner assigned to audit for drift, the errors went unnoticed for weeks.
When a student finally flagged the discrepancy, Donna’s staff investigated, found the issue quickly, and stopped trusting the tool.
They left it running, but they also rechecked every single verification by hand. The institution now pays for both the AI license and the full manual workload it was meant to reduce.
This is the AI ROI problem in higher education: a tool that looked like it was working, until someone finally checked.
Why AI Systems Fail After Launch: The Day-Two Problem
Across higher education deployments, Robots & Pencils has consistently observed that the most dangerous phase of AI adoption arrives about six months after launch. Initial energy fades, the project team moves on, and the tool is left in day-to-day operations without a named owner, a monitoring protocol, or a working feedback loop.
Without clear accountability, quality drifts as vendors ship updates, prompts that worked in September fail in February, and upstream data formats change. If nobody owns day-two oversight, those issues accumulate quietly until trust collapses and staff begin working around the tool.
Most institutions are not measuring whether AI is actually paying off. Kiteworks and EDUCAUSE report that only 13% are tracking ROI for AI investments, which leaves the rest funding tools that look like progress on a dashboard without delivering sustained value. The EDT Partners AI Impact Study (2026) found that only 2% of institutions have secured new funding specifically for AI projects, with 30% having no cost accommodation plan at all. When AI is funded by redirecting existing budgets rather than new investment, accountability disappears along with the original budget line.
Algorithmic Bureaucracy vs. Human Bureaucracy
Higher education runs on human bureaucracy. It is slow and imperfect, but it can flex around messy reality: a registrar notices an unusual student situation, applies context, and makes an accountable exception.
Algorithmic bureaucracy trades that flexibility for speed. It is brittle, and when it breaks, it often does so quietly, producing outputs that look compliant until someone checks the edge cases.
THE CRITICAL DISTINCTION
We used to trust “The System” because we trusted the people running it. Now we are being asked to trust the math. That shift requires a different kind of institutional accountability than higher education has built before.
When an AI hallucinates compliance, it delivers a wrong answer with the confidence of a policy manual, with no hedging, no uncertainty, and no indication that something may have gone wrong. A slow human bureaucracy fails loudly and individually. An algorithmic one fails quietly and at scale. Without someone specifically tasked with auditing for brittleness, the system will eventually fail in ways a slow human bureaucracy never would.
How Unchecked AI Trust Becomes Institutional Liability
The Donna incident is not an edge case. It reflects a documented pattern of how AI trust degrades in operational environments.
Policy Awareness Without Confidence: many higher education staff are aware of their institution’s AI policies, yet of those, only half feel confident using AI tools for work. Having a policy on paper is not the same as having a workforce that trusts it.
Zombie Pilots: only 13% of institutions are measuring the ROI of their AI investments. The rest are operating on assumption.
In higher education, the 66% non-validation rate reported by KPMG (2025) matters because the consequences are real. A wrong degree audit recommendation can delay graduation, a miscoded financial aid calculation can trigger federal compliance issues, and an enrollment verification error can ripple into accreditation reporting. That pattern of unchecked trust, at that scale, creates genuine institutional liability. This is how adoption degrades in practice: the tool stays “Active” on a dashboard while staff quietly stop believing it and rebuild manual checks around it.
Defining Acceptable Variance
Sustainable AI impact requires honesty about what the technology is. AI will not be 100% accurate, so the institutions that get value out of it define acceptable variance up front and are explicit about which tasks can tolerate errors and which cannot.
MDPI (2026) found that AI achieves higher scoring consistency than humans in 66% of assessment cases, but 50% of those systems fail the Transparency Test and do not adequately disclose how the decision was reached. Consistency without transparency is hard to trust.
Defining acceptable variance before deployment is an ethics and accountability decision that belongs with academic leadership, not an IT implementation detail, and if that conversation hasn’t happened, the institution isn’t ready to deploy.
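As a concrete illustration, here is a minimal sketch of what a written variance tolerance might look like once it leaves the meeting room, assuming a simple Python record per AI-involved task. The task names, thresholds, and owners below are hypothetical examples rather than recommendations; the point is that every task has an explicit error budget and a named owner before the tool goes live.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ErrorBudget:
    """A written, per-task tolerance for AI mistakes, agreed before launch."""
    task: str                    # the AI-involved task this budget covers
    max_error_rate: float        # highest acceptable share of wrong outputs
    requires_human_review: bool  # True if every output still needs a person
    owner: str                   # the named person accountable for output quality

# Hypothetical examples of budgets a provost's office might sign off on.
ERROR_BUDGETS = [
    ErrorBudget("course_description_drafts", max_error_rate=0.05,
                requires_human_review=False, owner="Marketing Director"),
    ErrorBudget("transfer_credit_evaluation", max_error_rate=0.0,
                requires_human_review=True, owner="Registrar"),
    ErrorBudget("enrollment_verification", max_error_rate=0.01,
                requires_human_review=False, owner="Registrar"),
]

def within_budget(task: str, observed_error_rate: float) -> bool:
    """Check an observed error rate against the budget agreed before launch."""
    budget = next((b for b in ERROR_BUDGETS if b.task == task), None)
    if budget is None:
        raise ValueError(f"No error budget on record for task: {task}")
    return observed_error_rate <= budget.max_error_rate
```

A table in a governance document serves the same purpose; the format matters far less than the fact that the thresholds exist in writing, with an owner, before deployment.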
The Four Requirements for Durable Adoption
Many higher-ed institutions measure the wrong things, like licenses assigned, daily active users, or how much text was generated. Those are activity metrics, and they say nothing about trust, accuracy, or whether the work is actually improving. Across higher education deployments, Robots & Pencils has found that the difference between AI that compounds value and AI that quietly degrades is rarely the technology. It is whether someone is named, empowered, and evaluated on what happens after launch.
The institutions modeling this well are not the ones that moved fastest. Stanford, MIT, Harvard, UC Berkeley, and Arizona State have each implemented named governance structures – ethics boards, oversight committees, regular audits – that make accountability visible and operational. The technology at those institutions is not meaningfully different from what is available to everyone else. The governance surrounding it is.
Four conditions have to be present for AI to move from perpetual pilot to institutional infrastructure. Institutions that are missing any one of them will recognize the Accountability Vacuum opening again. Together they form the core of a durable AI governance framework for universities serious about moving from experimentation to operational accountability.
Named Accountability: Every AI implementation needs a single person accountable for output quality, with that responsibility reflected in their role expectations and evaluation.
Feedback Cycles: Staff need a direct way to flag bad output and see corrections made. If the reporting path requires navigating a ticketing system, it won’t get used, and errors will accumulate quietly until they surface somewhere you can’t ignore them.
Operational Integration: AI review has to be built into the normal rhythm of departmental operations, with a standing owner and outcome metrics that get reported alongside everything else the department is accountable for.
Radical Transparency: Leadership must be honest about what the AI can and cannot do. Pretending an AI is 100% accurate destroys trust the moment the first error appears, and there will be one. The institutions that survive it are the ones that already told their staff it was coming.
The Accountability Vacuum: A Final Word
Every institution in this series was present at launch and absent when consequences arrived. That gap is where institutional credibility is won or lost.
Marcus used a personal AI account because the sanctioned process could not meet the deadline he was given. Diane stepped in because the institution gave her a directive and none of the infrastructure to do it. Raymond configured rules that reflected his judgment; the institution never validated them against policy. Donna stopped trusting the tool because no one was responsible for watching it once it was in production.
Your registrar’s office, advising teams, and financial aid staff have already formed an opinion about whether AI is part of the institution’s operating model or simply a pilot being performed for leadership. Those judgments will settle based on what happens after deployment.
The technology is not the test. Leadership is.
Punch List: Dismantling the Brittle System
1. Establish a Drift Audit: Quarterly, test 50 random AI outputs against a human expert’s assessment. Make the results visible to the Output Owner and their department head. If the error rate is rising, that is your early warning system; a minimal scripted version of this audit is sketched after this list. (Owner / Timeline: AI Output Owner + QA – quarterly)
2. Define Variance Tolerance Before Launch: State the Error Budget for every AI-involved task in writing before deployment. If you cannot define what an acceptable error rate looks like, you are not ready to deploy. (Owner / Timeline: Provost Office + Deans – before launch)
3. Link IT Change Logs to AI Owners: A database schema change or upstream system update should trigger an immediate AI review cycle, not a help ticket six weeks later. Build this handoff into standing IT protocol. (Owner / Timeline: CIO – standing protocol)
4. Create a One-Click Error Channel: Staff need a frictionless way to flag wrong AI output. If it requires navigating a ticketing system, they will not use it. They will use their judgment and quietly work around the tool instead. (Owner / Timeline: IT + Department Heads – within 30 days)
5. Report on Outcomes, Not Activity: Replace license counts and login metrics with error rates, exception handling time, and decision consistency scores. These tell you whether AI is working or degrading quietly. (Owner / Timeline: Leadership – next reporting cycle)
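For teams that want to operationalize item 1 above, the sketch below shows one minimal way a quarterly drift audit could be scripted, assuming the institution can export AI outputs alongside a human expert’s assessment of the same cases. The field names, sample size, alert threshold, and the notify() call are illustrative assumptions, not references to any specific product.

```python
import random

def drift_audit(records, sample_size=50, alert_threshold=0.05, seed=None):
    """Compare a random sample of AI outputs against human expert assessments.

    `records` is a list of dicts with 'ai_output' and 'expert_output' keys,
    for example exported from an enrollment verification tool plus a manual review.
    Returns the observed error rate and whether it exceeds the alert threshold.
    """
    if not records:
        raise ValueError("No records to audit")
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    errors = sum(1 for r in sample if r["ai_output"] != r["expert_output"])
    error_rate = errors / len(sample)
    return {
        "sampled": len(sample),
        "errors": errors,
        "error_rate": error_rate,
        "exceeds_budget": error_rate > alert_threshold,
    }

# Quarterly run: surface the result to the Output Owner and department head.
# result = drift_audit(exported_records, sample_size=50, alert_threshold=0.02)
# if result["exceeds_budget"]:
#     notify(owner="AI Output Owner", report=result)  # notify() is hypothetical
```

Tracking this rate quarter over quarter, not just once, is what turns a spot check into an early warning system.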
The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing.
Key Takeaways
Most AI failures occur after deployment, not during launch. The greatest risk emerges months after implementation when oversight fades, ownership is unclear, and systems begin drifting as prompts, data sources, or upstream systems change.
Unchecked AI systems degrade quietly rather than failing visibly. Algorithmic systems often produce plausible but incorrect results that go unnoticed until someone manually verifies them, eroding trust and forcing staff to rebuild manual checks around the tool.
Many institutions measure AI activity instead of outcomes. Metrics such as logins or licenses assigned create the appearance of progress, yet few institutions measure ROI, accuracy, or operational impact, allowing ineffective tools to remain in place.
Unchecked trust in AI outputs creates institutional risk. When staff rely on AI responses without validation, incorrect outputs can propagate into compliance decisions, academic records, and student services at scale.
Durable AI adoption requires operational governance after launch. Sustainable impact depends on four conditions: named accountability, continuous feedback cycles, integration into normal operations, and transparent communication about AI’s limitations.
Frequently Asked Questions
1. Why do AI systems often fail months after deployment? Many institutions treat deployment as the finish line. Without ongoing monitoring, ownership, and feedback loops, changes in data sources, system updates, or prompts can quietly degrade output quality over time.
2. What is the “Day-Two Problem” in AI adoption? The Day-Two Problem describes what happens after the launch phase ends. When project teams move on and no operational owner is assigned, AI systems drift in quality and gradually lose staff trust.
3. Why is algorithmic bureaucracy more fragile than human bureaucracy? Human systems can adapt to unusual situations through judgment and context. Algorithmic systems prioritize consistency and speed, which makes them vulnerable to silent errors when conditions change.
4. How should institutions measure AI performance? Instead of focusing on activity metrics such as usage or logins, institutions should track outcomes such as accuracy rates, exception handling time, and decision consistency.
5. What governance practices help prevent AI quality drift? Organizations can reduce risk by assigning a clear output owner, defining acceptable error thresholds before deployment, creating easy error-reporting channels, and running regular audits of AI outputs.
Part 2 – The Institutional Intelligence Crisis: The Redistribution of Expertise
Jessica Martin
This article is part of a three-part series examining why AI adoption stalls in higher education and what senior leaders must address to restore momentum. Each article stands alone. Reading the full series is recommended.
Professional Identity, Resistance, and the Power Shift AI Creates
Diane didn’t wait for permission. She couldn’t afford to.
As Director of Advising, she had been told her department needed to find ways to absorb the impact of recent turnover, and leadership had suggested AI as a possible direction. But no tool was specified, and no governance ever reached her desk.
So her staff did what capable people do in a vacuum: they improvised. They started using a free LLM to draft student appointment summaries. It worked well until staff began uploading degree audits and academic plans. Diane recognized the security risk immediately. She also recognized that the university’s official AI policy, still in draft, was not going to arrive in time to help her.
Tired of waiting, Diane used her weekend to write her own policy. It was just one page and defined what could be uploaded, what required a human to double-check, and who to call if the tool produced an error.
Six months later, her document was the actual operating standard for her advising staff. The university’s 37-page policy was still sitting with the committee, in draft.
This is what AI adoption in higher education looks like when governance does not keep pace with operational reality.
It is easy to view Diane’s initiative as a simple win: a director filling a leadership vacuum to keep her department safe. However, while Diane was solving the governance problem, her senior staff were reacting to a different reality. These advisors began quietly slowing the AI pilot to a crawl, not because they wanted to be difficult, but because the tool threatened the value of the specialized expertise they had built over decades. Without an alternative role that turned them into ‘architects’ of the system, they protected their professional value by highlighting each and every edge case or exception the AI couldn’t handle, ensuring the tool remained too ‘risky’ to operate without them.
Why Adoption Fails at the Departmental Level
AI adoption in higher education rarely dies in a boardroom. It dies in the registrar’s office, the advising center, the financial aid office. Pilots are almost never formally rejected. They simply fade.
When a tool is introduced without a clear redesign of the workflow around it, usage becomes uneven. A few early adopters find value. The rest quietly route around the tool. By the time leadership reviews usage metrics, the adoption is a ghost. Logins may be high because of a mandate, but actual impact on daily work is negligible.
Leadership often describes this as change management friction or fear of technology. That is the wrong diagnosis. And the wrong diagnosis produces the wrong response.
Research from Frontiers in Education (2025) found that concern about AI ethics does not reliably predict whether faculty actually engage with AI tools, largely because most lack the means to critically evaluate AI-generated outputs. When people cannot assess whether the AI is right, avoidance is the rational response. They have not been given a reason to trust it.
That trust deficit plays out differently depending on where someone sits in the institution. For faculty, it is an epistemological problem. For senior administrative staff, it is an existential one.
When AI Expertise Becomes a Threat to Professional Identity
In many universities, power is held by those who know the rules: the exceptions, the workarounds, the edge cases that never made it into the policy manual because the only person who fully understood them was the one who created them.
For decades, this has been the primary currency of administrative authority in higher education. The gatekeeper holds informal power precisely because what they know is scarce, undocumented, and difficult to transfer.
When a junior advisor with a well-prompted AI can navigate a complex academic plan as accurately as a 20-year veteran, the social architecture of the advising center doesn't adapt gradually. It loses its foundation.
The social architecture of the advising center depends on that scarcity, and the AI eliminates it. The World Economic Forum (2025) identifies this emerging class of displaced knowledge workers as the AI Precariat: staff facing chronic insecurity and identity loss as their specialized roles are undercut by automation.
The numbers are moving faster than most senior administrators realize. Sixty-six percent of enterprises are already reducing entry-level hiring specifically because of AI, and 42% of employers believe most entry-level white-collar positions could disappear within five years. Higher education’s administrative workforce sits precisely in the crosshairs of that projection. These are specialized, knowledge-intensive, relationship-dependent roles. They are not safe from this.
Expansion Expected, Anxiety Rising: higher education faculty and administrators expect institutional AI use to increase over the next two years. Concern about AI-related role elimination has doubled year over year.
The Quiet Saboteur: What No One Will Tell You
Your most resistant senior administrators are not afraid of the technology. They are afraid of what the technology reveals.
Consider what it means to spend nearly two decades becoming indispensable in an institution that moves slowly. The winning strategy centered on becoming the person who understood how the system actually worked. The colleague people called when policy language became unclear. The person whose judgment translated the written rules into workable decisions. Someone the institution had, without ever realizing, built workflows around.
This was not laziness or territoriality. It was how the institution rewarded people. Longevity plus accumulated knowledge equaled authority. And authority, in higher education’s flattened salary structures, was often the only real compensation available. Salary bands in administrative higher education are often tied to the complexity and specialization of the role. The registrar who knows the exceptions is classified and paid differently than the one who processes straightforward cases. Their compensation and title rest on the same premise: the knowledge they hold is scarce and difficult to transfer, and the institution depends on it.
Year three: you discover the workaround for the transfer credit edge case no one else knows.
Year seven: you are the person they call when something breaks.
Year twelve: your institutional memory earns you a seat in rooms your title was never meant to enter.
Year sixteen: you are the policy, in every practical sense that matters.
Year nineteen: a junior staff member sits down with an AI and gets the same answer you would have given, in four seconds.
For someone whose professional identity is built on being the expert in the room, that kind of displacement doesn’t register as a career setback. It lands as something closer to erasure.
And here is the detail that makes it genuinely uncomfortable: some of that legacy knowledge, when the AI replicates it, turns out not to have been sophisticated governance wisdom. Some of the exceptions being gatekept for two decades were never actually correct. They were just unchallenged, because only one person fully understood them, and that person had every incentive to keep it that way.
AI does not just democratize the knowledge. In some cases, it audits it. And that audit can be brutal for someone who built a career on being the authority.
Your senior staff is not going to say they are afraid the AI will make their hard-earned expertise look common. But that anxiety is real. When a tool can perform in seconds what a veteran staffer spent decades mastering, it creates a crisis of professional identity.
Pilots often stall because the people expected to run them are protecting a lifetime of professional equity. They are using the tools they have left to remain indispensable, pointing out every tiny policy exception and procedural hurdle that AI isn’t yet “trusted” to handle.
When Resistance Hides Inside the AI Configuration
Diane improvised in good faith. Raymond did something different.
Raymond had nineteen years in the registrar’s office. He knew the exception credit process the way a watchmaker knows a movement. Not just what the parts did, but why they were arranged the way they were, and what happened when someone who did not understand that arrangement tried to change it.
When the AI degree-audit pilot launched, Raymond was the obvious choice to help configure the exception rules. He was cooperative. He attended every implementation meeting. He flagged edge cases the vendor’s team had not considered. Leadership took his involvement as confirmation that senior staff were bought in.
Raymond configured the exception logic to route any non-standard credit scenario to a human reviewer before the AI could resolve it. Transfer credits. AP overrides. Co-enrollment arrangements. Prior learning assessments. These cases were complex, he explained. The AI could not be trusted with them yet. His threshold flagged 40% of all degree audits for manual review.
The actual institutional policy, had anyone cross-referenced it, required human review on roughly 8%.
Leadership looked at the dashboard and saw what they expected: high AI utilization, appropriate human oversight, senior staff engaged with the process.
What they were actually looking at was Raymond, rebuilt in code. He had not resisted the AI. He had become its gatekeeper. His queue was full. His expertise was indispensable. And because the configuration lived in a system only he fully understood, no one thought to ask whether the threshold was right. Only whether Raymond had approved it.
He had.
This is the version of resistance that never shows up in adoption metrics. Raymond’s department showed 100% AI utilization. His pilot was considered a success. The Accountability Vacuum does not always look like failure. Sometimes it looks exactly like what leadership hoped to see.
From Gatekeeper to Architect
There is an alternative to both of these outcomes, but it requires leadership to move first. In engagements across higher education, Robots & Pencils has found that the institutions making the fastest progress on AI adoption are not the ones with the most sophisticated tools or the strictest policies. They are the ones that looked at what their staff were doing outside the sanctioned path, treated it as data about where that path was failing, and gave their most experienced people a meaningful role in redesigning it. Unauthorized AI use tells you exactly what the institution has not yet solved. Banning the tool addresses the symptom while leaving the underlying need completely intact. The question is not whether your staff are using AI. They are. The question is whether the institution is learning anything from how.
The person who spent decades learning every exception, every workaround, every edge case that the student information system cannot handle: that person is not your AI problem. That person is your answer to it. They are the only one in the building who knows where the institutional logic actually lives.
The person who knows where the bodies are buried is the only person qualified to tell the AI where not to dig.
The difference between a registrar with nineteen years in the office who quietly rebuilds their gatekeeping function inside your AI pilot and one who becomes its most rigorous auditor is not temperament. It is whether the institution made them an offer worth accepting.
This is a genuine repositioning of professional value: moving from a knowledge holder to a knowledge architect. Rather than maintaining individual indispensability through daily tasks, the institution is asking them to make their expertise permanent by building it directly into the institutional framework.
That is a different kind of legacy. And for the right person, it is a more compelling one.
But the timing is critical. If the institution waits until AI has already rendered a role redundant to propose a new path, the offer will likely be perceived as an afterthought. In higher education, where titles change slowly and salary bands are narrow, seniority is one of the few available signals of institutional standing. The transition needs to be presented as a proactive investment in expertise, not a reactive attempt to find someone a new place.
The challenge for leadership is to redesign the reward system that has favored individual gatekeeping.
Shadow AI as a Diagnostic
If 70% of a department is using an unauthorized tool, that is not a discipline problem. It is a map of where the sanctioned path failed them. Reading that map honestly is how institutions move past the Accountability Vacuum. But getting staff onto the sanctioned path is only half the problem. What happens after they get there is where most institutions stop paying attention.
Punch List: Navigating the Power Shift
1. Redesign the expert role before the pilot launches: Formally shift senior staff in the registrar’s office, advising center, and financial aid office from information gatekeepers to algorithmic auditors. Make the transition visible, titled, and compensated accordingly. Not a consolation prize. (Owner / Timeline: Deans + HR – before AI rollout)
2. Audit your AI configurations: If a senior staff member helped configure your AI tool, have someone independent verify that the exception thresholds match actual institutional policy. Not what they remember the policy to be. The policy. (Owner / Timeline: Provost Office + Registrar – within 60 days)
3. Draft the escalation protocol: Create a clear, published answer to the question every staff member actually has: if the AI gives a student the wrong information, who is responsible, and who has the authority to correct it? (Owner / Timeline: Provost Office – within 30 days)
4. Run friction interviews: Ask staff in each functional area directly: what part of your job does the approved AI tool make harder? That answer tells you where the resistance lives before it calcifies into something you cannot fix. (Owner / Timeline: Functional leaders – quarterly)
5. Formalize the workarounds: Identify the Diane-style departmental standards already operating across your colleges. Integrate them into central governance. They are solving problems your official policy has not addressed yet.
The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing.
Key Takeaways
AI adoption often fails at the departmental level, not the leadership level. Most AI initiatives do not fail through formal rejection. They gradually lose momentum when daily workflows are not redesigned around the new tools, leading staff to quietly route around them.
Resistance to AI is often about professional identity, not technology. Senior administrative staff may slow or resist AI initiatives because the tools threaten the specialized expertise and institutional authority they have built over decades.
Institutional power in higher education is often tied to undocumented expertise. Many administrative roles derive influence from knowing complex rules, exceptions, and workarounds. AI can rapidly replicate this knowledge, disrupting long-standing social and professional hierarchies.
AI resistance can hide inside the system itself. Staff involved in configuring AI tools may unintentionally or deliberately embed gatekeeping logic into the system, preserving their role while appearing to support adoption.
Successful AI adoption requires redefining expert roles. Institutions that move fastest reposition experienced staff from knowledge gatekeepers to system architects and algorithmic auditors, embedding their expertise directly into the institutional infrastructure.
Frequently Asked Questions
1. Why do AI pilots frequently stall within departments? Adoption often slows when the introduction of AI tools does not include a redesign of the underlying workflow. Without clear operational changes, only a few early adopters use the tool while others continue existing processes.
2. Is resistance to AI primarily driven by fear of the technology? Not usually. Resistance more often reflects concern about professional displacement or loss of authority, especially for staff whose roles are built on specialized institutional knowledge.
3. What is the “AI Precariat”? The term describes knowledge workers who face growing insecurity as AI systems replicate or automate expertise that once required years of specialized experience.
4. How can institutions prevent hidden resistance inside AI systems? Organizations should audit AI configurations independently to ensure system rules reflect official policy rather than individual interpretations or legacy workarounds.
5. What role should experienced staff play in an AI-enabled institution? Instead of guarding knowledge through manual processes, senior experts can act as architects and auditors who encode institutional expertise into AI systems and oversee their accuracy and governance.
Part 1 – The Institutional Intelligence Crisis: The Intelligence Leak
Jessica Martin
This article is part of a three-part series examining why AI adoption stalls in higher education and what senior leaders must address to restore momentum. Each article stands alone. Reading the full series is recommended.
Accountability Gaps and the Export of Institutional IP
Marcus was good at his job, and he was under pressure.
As a senior Financial Aid Officer, he had to reconcile a $2 million Work-Study discrepancy across three systems that were never designed to agree with each other: a 2012-era student information system, a departmental spreadsheet someone “owned” in name only, and a central payroll database with exports that did not reconcile cleanly at the best of times. Fiscal year-end was close, the deadline was immovable, and the institution had not provided a sanctioned tool that could pull the data together in one place.
So, Marcus did what high performers do when the process fails them. He exported 4,500 student records to a CSV, uploaded it to a personal Pro-tier AI account, and asked it to find the discrepancy. In under ten minutes, it pointed to the root cause: a coding error in the payroll export. Marcus hit the deadline and was lauded for his efficiency.
The file also contained student names, Social Security numbers, and income information, and the steps Marcus used to isolate the error now live in a private chat history the university does not control, cannot audit, and cannot reproduce when Marcus leaves.
This is an intellectual property leak, not a one-off judgment call. Sensitive data left the institution, and so did the logic that found a $2 million error.
Shadow AI does not look like sabotage. It looks like a high performer solving a real problem with the only tool that worked.
Why AI Pilots Stall Before Becoming Infrastructure
Most higher education leaders are currently managing a Pilot Paradox. Across the sector, institutions have authorized dozens of generative AI pilots. On paper, these initiatives are successes: they meet deployment milestones, they have been vetted by security, and they are accessible to staff.
However, a significant percentage of these pilots stall before they reach the level of institutional infrastructure. The root cause is rarely the technology. Higher education institutions are attempting to integrate 21st-century computational speed into 20th-century committee-based accountability structures.
When AI adoption slows, the cause is usually an institutional vacuum rather than a technology failure. A staff member who cannot identify who is accountable when the AI gets something wrong will, entirely reasonably, either underuse the tool or find a faster one elsewhere.
The Statistics of Structural Failure
Across campuses, AI use has become routine long before governance has become operational. Recent data from 2025 and 2026 shows a widening gap between day-to-day usage and the policies meant to control it.
Staff Using AI Daily: many higher education staff report using AI tools daily, yet only 54% can identify a specific institutional policy governing that use.
Governance That Reaches the Desk: a minority of institutions report having clear, actionable governance policies that reach the departmental level. The rest have PDFs.
These numbers reflect a mismatch between official tools and operational reality. When an institution provides a sanctioned AI tool that adds steps to a workflow, staff keep using AI but shift to personal accounts where the friction is lower.
The result is a Shadow AI ecosystem where the institution retains the liability but captures none of the institutional learning. Even when staff use sanctioned tools, many organizations still cannot enforce what the AI does with the data it receives.
Shadow AI and the Export of Institutional Intelligence
The Marcus incident is not primarily a data policy violation, though it is that too. Uploading student Social Security numbers and income data to a personal AI account is a FERPA violation and, depending on the institution’s state jurisdiction, potentially a breach notification event.
What leadership tends to miss is the operational loss underneath the compliance failure. By solving a complex institutional problem in a private account, Marcus moved a piece of the university’s problem-solving capability off-campus. The logic he used to isolate that error now lives in a chat history the institution cannot audit, cannot replicate, and will lose entirely when Marcus leaves. Every time a staff member takes this path, the university does not get smarter. The AI vendor does.
Governance decisions determine whether the institution learns from its own operations or pays a subscription fee to make someone else's model smarter.
This creates Intelligence Debt. By forcing high-performers into the shadows through inadequate tooling, leadership ensures that the university’s collective intelligence remains fragmented and invisible. Institutions that fail to provide operational pathways for AI aren’t managing risk so much as actively de-skilling themselves over time. The ISG State of Enterprise AI Adoption (2025) identifies this pattern as a form of institutional fragmentation: the university pays for the output but fails to capture the process, leaving internal systems stagnant while the vendor’s model accumulates the learning.
Government and educational sectors are, by recent measure, a generation behind on this problem. 71% of boards in these sectors are not engaged in AI governance at all. 29% of institutions cite cross-border AI data transfers as a major exposure, and only 36% have visibility into where their data is actually being processed or trained.
If your governance is so restrictive that people default to personal accounts, you are effectively exporting your institution’s intellectual property to a third-party vendor while your own systems accumulate none of the learning.
THE CONTAINMENT GAP
While 58% of organizations have AI monitoring in place, 60% lack a kill switch to terminate misbehaving AI, and 63% cannot enforce purpose limitations on what the AI does with institutional data. Monitoring without containment leaves you observing risk rather than controlling it.
Managing AI Like Personnel, Not Software
The foundational error in higher education AI strategy is categorical: institutions are treating AI like software, something to be installed, configured, and maintained by an IT team. AI requires onboarding, clear expectations, and feedback loops, much closer to how a new employee needs to be managed than how a system needs to be patched. Research from Harvard Business School (2025) found that when AI is framed as a collaborative teammate rather than a tool, teams produce higher-quality, more innovative work. Without that framing and the targeted training that goes with it, users treat AI like a search engine rather than a thought partner.
In a traditional administrative office, inconsistent work from a new hire would trigger coaching and corrective action. When AI produces the same kinds of inconsistency, such as hallucinations, logic gaps, and formatting errors, institutions tend to absorb it as a cost of experimentation rather than a signal that something in the deployment needs to change.
The quality of AI output improves when one named person is accountable for it and has the authority and responsibility to intervene. That accountability needs to be explicit in the role, with protected time and clear authority to act on what they find.
Robots & Pencils has observed this pattern consistently across higher education engagements: the institutions that close the accountability gap fastest are the ones that treat AI deployment as an organizational design problem, not a technology one.
Punch List: Reclaiming Institutional Intelligence
1. Map the logic leak: Identify the three most common Shadow AI use cases in your institution. Treat them as signals of where sanctioned tools are adding friction or failing to support real work. The CIO and Provost office should co-sponsor this so it lands as an operational priority, not a compliance exercise. (Owner / Timeline: CIO + Provost Office – within 60 days)
2. Assign an output owner at launch: Attach a single accountable name to every sanctioned AI tool’s output quality. The owner needs authority to pause the tool, request changes, and coordinate remediation when something goes wrong. Department heads can assign the owner, but the responsibility needs to be explicit in that person’s role, with protected time and clear authority to act on what they find. (Owner / Timeline: Department Heads – per tool, at launch)
3. Remove the speed penalty: If the approved path adds steps, staff will route around it. Focus on making the sanctioned workflow competitive on speed and convenience for the high-value use cases you uncovered in step one. The friction usually lives in process as much as in technology; this is joint work between IT and Academic Affairs. (Owner / Timeline: IT + Academic Affairs – within 90 days)
4. Define a containment protocol before agentic AI: Write down what happens when an AI tool produces a bad output at scale. Specify who shuts it down, who investigates, who communicates to affected parties, and what data gets reviewed. This has to exist before you deploy tools that can act on data without a human in the loop; a minimal kill-switch sketch follows this list.
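To make item 4 concrete, here is a minimal sketch of what a kill switch can look like in practice: a centrally controlled flag that every AI-backed workflow checks before acting, so the containment owner can halt a misbehaving tool without waiting on a vendor. The flag store, tool names, and function names are hypothetical assumptions for illustration; a production version would live in shared configuration with audit logging.

```python
# A deliberately simple containment flag: one place where an accountable
# owner can disable an AI tool, and one check every workflow must pass.

DISABLED_TOOLS: set[str] = set()   # in production, a shared config store

def disable_tool(tool_name: str, reason: str) -> None:
    """Invoked by the containment owner when bad output is detected at scale."""
    DISABLED_TOOLS.add(tool_name)
    print(f"{tool_name} disabled: {reason}")  # also page the investigation owner

def ai_is_enabled(tool_name: str) -> bool:
    """Every AI-backed workflow checks this before acting on data."""
    return tool_name not in DISABLED_TOOLS

# Hypothetical usage inside a degree-audit workflow:
if ai_is_enabled("degree_audit_assistant"):
    pass  # call the model and route its output to the human reviewer
else:
    pass  # fall back to the documented manual process
```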
The pace of AI change can feel relentless with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance, and turning AI’s potential into practical, measurable outcomes. If you’re looking to explore how AI can work inside your organization—not just in theory, but in practice—we’d love to be a partner in that journey. Request an AI briefing.
Key Takeaways
Shadow AI is quietly exporting institutional intelligence. When staff solve problems with personal AI tools, the institution loses both sensitive data and the operational logic used to solve those problems, leaving that intelligence stored in private accounts outside institutional control.
AI adoption stalls because governance has not reached operational workflows. Many institutions run AI pilots successfully, yet they fail to become infrastructure because staff cannot identify who is accountable when AI outputs are wrong.
Policy gaps are creating a Shadow AI ecosystem. AI usage is already routine across campuses, yet governance often remains theoretical. When sanctioned tools introduce friction, staff default to faster personal tools even when policies discourage it.
Institutions are treating AI like software rather than like a workforce capability. Effective AI adoption requires ownership, training, and accountability structures similar to those used for managing personnel, not just installing tools managed by IT.
Leadership must close the “containment gap.” Many organizations monitor AI activity but lack operational controls such as kill switches, purpose limitations, and defined incident protocols, leaving them observing risk rather than managing it.
Frequently Asked Questions
1. What is an “Intelligence Leak” in the context of AI? An Intelligence Leak occurs when staff use external or personal AI tools to solve institutional problems, causing both sensitive data and internal problem-solving logic to leave the organization and reside in systems the institution cannot audit or reproduce.
2. Why do AI pilots often fail to become institutional infrastructure? Pilots stall when governance and accountability structures lag behind adoption. Without clear ownership for AI outputs or operational policies that reach departments, staff either avoid the tools or use unsanctioned alternatives.
3. What is Shadow AI? Shadow AI refers to employees using unauthorized AI tools to complete work tasks. It usually emerges when official tools are slower, more restrictive, or poorly aligned with real operational needs.
4. Why is treating AI like traditional software a mistake? AI behaves more like a collaborator than a static system. It requires training, feedback loops, and clear accountability for outputs. Without those structures, teams often use AI like a search engine instead of a strategic partner.
5. What steps can institutions take to reduce Intelligence Leaks? Leaders can map common Shadow AI use cases, assign accountable owners for AI outputs, remove workflow friction from approved tools, and define containment protocols before deploying more advanced AI systems.
Success Story: ASU Streamlines Enrollment with Course Provisioning System Modernization
Robots & Pencils
Industry: Higher Education / Digital Learning & Educational Technology
Location: Tempe, Arizona (main campus), with additional campuses across Arizona and global online presence
Customer Profile: Leading public research university serving thousands of students and faculty members, focused on innovation in education technology, digital transformation, academic credentialing, learning management systems, student engagement platforms, and AI-powered support systems
Customer Challenge
In 2023, Arizona State University modernized its course provisioning system to streamline enrollment processes, reducing delays and minimizing the need for manual intervention at the start of each semester. Robots & Pencils partnered with ASU to build the Canvas Enrollment System (CES), a modern Canvas LMS integration that automates course creation and roster syncing. The new system cut wait times from more than three days to less than 30 minutes and decreased manual requests by 40%, saving hundreds of administrative hours each term.
Robots & Pencils’ Solution
By modernizing the course enrollment process with AWS managed services, including Amazon SQS, SNS, Step Functions, Lambda, OpenSearch, and DynamoDB, ASU cut processing time from three days to only 30 minutes. A simplified sketch of the event-driven pattern follows the service summary below.
Amazon SQS and SNS enable reliable, asynchronous communication across components, eliminating processing bottlenecks and ensuring seamless handling of large enrollment volumes.
AWS Step Functions coordinate and monitor the end-to-end workflow, improving transparency, error handling, and operational resilience.
AWS Lambda delivers compute power on demand, automatically scaling to meet workload spikes without provisioning or maintaining servers.
Amazon DynamoDB provides a low-latency, fully managed NoSQL database for high-speed data access, enabling near-instant retrieval and updates of student and course records.
Amazon OpenSearch supports fast, flexible search and analytics, allowing administrators to instantly view enrollment progress and gain actionable insights.
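To make the architecture above concrete, here is a minimal, illustrative sketch of one worker in an event-driven enrollment pipeline of this kind: an AWS Lambda function that consumes enrollment messages from an SQS queue and writes records to DynamoDB. The table name, payload fields, and status value are hypothetical and chosen for illustration, not ASU’s actual schema, and the Step Functions orchestration and OpenSearch indexing are omitted for brevity.

```python
# Minimal sketch of an SQS-triggered enrollment worker on AWS Lambda.
# All names (the "enrollments" table, its keys, the message fields) are
# illustrative assumptions, not the actual CES schema.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("enrollments")  # hypothetical table name


def handler(event, context):
    """Lambda entry point for SQS-delivered enrollment messages.

    SQS batches messages into event["Records"]; each record body is
    assumed to be a JSON document describing one student/course pairing.
    """
    records = event.get("Records", [])
    for record in records:
        enrollment = json.loads(record["body"])
        # Upsert keyed on student + course so redelivered messages are safe.
        table.put_item(
            Item={
                "student_id": enrollment["student_id"],
                "course_id": enrollment["course_id"],
                "status": "PROVISIONED",
            }
        )
    # Returning without error lets Lambda delete the processed messages.
    return {"processed": len(records)}
```

Keying the write on the student and course identifiers keeps retried SQS deliveries idempotent, which is part of what lets an event-driven pipeline like this absorb enrollment spikes without manual cleanup.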
Results & Benefits
The new event-driven, serverless architecture replaced a batch-based legacy system with a scalable, resilient, and highly automated solution.
Reduced processing time by more than 99%, moving from three days to 30 minutes and enabling near real-time student onboarding.
Increased scalability and reliability, allowing the system to handle thousands of concurrent enrollments during peak periods with no downtime.
Decreased operational costs through serverless compute and managed services that minimize infrastructure maintenance.
Enhanced data visibility and decision-making, empowering staff with real-time reporting and faster issue resolution.
By leveraging AWS’s cloud-native capabilities, ASU has transformed its legacy process into an intelligent, automated, and scalable system that advances institutional growth and optimizes operations in service of enhancing the student experience.
“The new Canvas Enrollment System brings real speed, clarity and reliability to a process that is central to student success. This transformation reflects our commitment to innovation on AWS and how we can collaborate with teams like Robots & Pencils to make improvements to core systems that empower our teams to deliver an exceptional student experience with efficiency and confidence.”
Kyle Bowen, Deputy CIO, Arizona State University
About Arizona State University
Arizona State University, ranked the No. 1 “Most Innovative School” in the nation by U.S. News & World Report for 11 years in succession, has forged the model for a New American University by operating on the principles that learning is a personal and original journey for each student, that students thrive on experience, and that the process of discovery cannot be bound by traditional academic disciplines. Through innovation and a commitment to accessibility, ASU has drawn pioneering researchers to its faculty even as it expands opportunities for qualified students.
About Robots & Pencils
Robots & Pencils is an Applied AI Engineering Partner that builds AI systems designed for enterprise velocity and measurable business impact. With delivery centers in Canada, the United States, Eastern Europe, and Latin America, the company combines world-class UX with elite engineering talent for rapid, enterprise-grade delivery. Founded in 2009, Robots & Pencils has earned the trust of leaders in Consumer Products and Retail, Education, Energy, Financial Services, Healthcare, and Manufacturing industries, gaining a reputation as a high-velocity alternative to traditional global systems integrators. Robots & Pencils is an AWS Advanced Tier Partner and one of the 11 inaugural AWS Pattern Partners, selected to help define how enterprise AI systems are productized, deployed, and scaled through AWS Marketplace.
Over the past several months I have been spending a lot of time building relationships with art and design colleges across North America and Latin America. In those conversations, I have had the chance to speak with professors, visit classrooms, and interact directly with students who are preparing to enter the design industry.
What surprised me most was not curiosity about artificial intelligence.
It was fear.
Almost everywhere I go, I hear the same concern from students. They worry that artificial intelligence will remove entry-level design jobs before they even have a chance to begin their careers. For many of them, AI represents something that replaces designers rather than something that enhances what designers can do.
Rethinking What Professional Experience Means in Design
The concern is understandable. When you are about to graduate and enter the workforce, the last thing you want to hear is that the profession itself may be changing faster than expected.
But after spending many years reviewing portfolios and hiring designers, I have started to see the situation a little differently.
Students may be worrying about the wrong thing.
For a long time, professional experience in design meant something very specific. It meant understanding the tools, the workflows, and the production processes that turn ideas into finished work. Designers spent years refining their craft while also learning how the industry operates. With enough time, that accumulated knowledge created a real advantage.
A designer with fifteen years of experience typically knew how to do things faster, more efficiently, and often with a higher level of quality than someone just entering the field.
How Artificial Intelligence Is Changing Design’s Professional Experience Curve
AI is not simply another tool added to the designer’s toolkit. In many cases, it is reshaping the workflow itself. Tasks that once required deep technical knowledge or years of production experience can now be explored, tested, and iterated much more quickly with modern AI tools for designers.
What this means is that the distance between a junior designer and a senior designer is starting to shift.
A young designer who grows up alongside AI-assisted design tools can move through experimentation, prototyping, and production at a pace that used to take many years of experience to achieve. Instead of slowly accumulating knowledge about how to produce work, they can focus earlier on exploring ideas and refining their quality.
In other words, the professional experience curve for designers is compressing.
A Pattern Creative Industries Have Seen Before
We have seen something similar happen before in other creative industries.
For decades, producing professional music required access to expensive studios, specialized equipment, and engineers who understood complex recording systems. Experience in the music industry meant knowing how to navigate that entire infrastructure.
Then digital production tools changed everything. Software like Logic and Pro Tools turned laptops into recording studios. Experimentation became cheaper, faster, and far more accessible.
One of the most well-known examples is Billie Eilish, who recorded her early music with her brother in a bedroom studio using digital production tools. They did not come up through traditional studio systems. They grew up inside the new tools.
The technology did not replace musical craft. If anything, it made the importance of taste, storytelling, and artistic identity even more obvious. But it did compress the experience curve. Young creators who understood the new tools could suddenly compete with artists who had spent decades working inside the old system.
Something very similar appears to be happening in design.
Why Craft and Taste Still Matter
This does not mean that craft disappears. In fact, craft may become even more important.
Artificial intelligence can accelerate exploration and production, but it does not replace taste, judgment, or the ability to understand why a design solution works. When I review portfolios, those qualities are still what stand out the most. The designers who succeed are the ones who demonstrate thoughtful decisions, strong visual communication, and a clear point of view.
But the role of experience is evolving.
The Rise of AI-Native Designers
For decades, experience meant knowing the established systems better than someone else. Now experience increasingly includes the ability to adapt quickly, integrate new tools into the creative process, and rethink how work gets done.
Students entering the field today are learning these tools, and at the same time, they are learning design itself. In many ways they are becoming AI-native designers, a generation of creatives who develop design craft alongside artificial intelligence and AI-assisted design workflows.
This may create something the design industry has rarely seen before.
For the first time, the youngest designers entering the profession may also be the most technologically fluent.
That changes the competitive landscape.
A New Advantage for the Next Generation of Designers
A talented young designer who understands both craft and emerging technologies may be able to produce work that rivals someone who has spent far longer in the field. The years of experience that once created a clear advantage are no longer the only factor in shaping capability.
This is why the conversation around AI in design needs to shift.
Instead of asking whether AI will eliminate entry-level roles, we may need to ask a different question. How is the definition of professional experience in design changing?
Because if experience is no longer measured only by time spent in the industry, but also by how effectively designers adapt to new tools and new workflows, then the playing field begins to level.
And when that happens, the advantage may not belong to the people who have simply been around the longest.
It may belong to the designers who are learning the fastest.
For students entering the profession today, that should be encouraging rather than frightening. They are learning design at the same moment that the profession’s tools and processes are evolving. They have the opportunity to build their craft while also developing fluency in technologies that many established professionals are still figuring out.
Moments like this do not happen often in a profession. But when they do, they tend to redefine who has the advantage.
Right now, the next generation of designers may be closer to that advantage than they realize.
If you are a designer who is curious about new tools, interested in how artificial intelligence is changing creative work, and excited about pushing the craft forward, we would love to hear from you. Robots & Pencils is always looking for designers who combine strong visual thinking with a willingness to explore new technologies and new workflows. View open design roles.
Key Takeaways
Artificial intelligence is changing how designers gain professional experience rather than replacing designers.
AI tools for designers allow faster experimentation, prototyping, and production.
The gap between junior and senior designers may shrink as AI compresses the professional experience curve.
Craft, taste, and visual judgment remain essential design skills.
Students entering the field today may become AI-native designers, combining design craft with emerging technologies.
FAQs
Will AI replace designers?
Artificial intelligence is unlikely to replace designers. AI tools can accelerate experimentation, ideation, and production, but they do not replace creative judgment, storytelling, or visual taste.
Should design students be worried about AI?
Many design students worry that artificial intelligence will remove entry-level jobs. In reality, AI is changing how designers develop experience. Students who learn AI-assisted design tools early may gain a competitive advantage.
What are AI tools for designers?
AI tools for designers are systems that help generate visual ideas, explore design variations, automate production tasks, and accelerate creative workflows using machine learning models.
What is an AI-native designer?
An AI-native designer is someone who learns design craft and artificial intelligence tools at the same time. Instead of adopting AI later in their career, they grow up designing alongside AI-assisted workflows.
How is AI changing design careers?
Artificial intelligence is compressing the experience curve in design. Designers can experiment, prototype, and refine ideas much faster than traditional workflows allowed.
Robots & Pencils Appoints Jason Lacy as Client Partner to Lead Education Vertical
Robots & Pencils
Veteran executive brings three decades of experience guiding institutions, edtech platforms, publishers, and workforce organizations through digital transformation and applied AI modernization.
Robots & Pencils, an applied AI engineering partner known for high-velocity delivery and measurable business outcomes, today announced the appointment of Jason Lacy as Client Partner, Education. Lacy will lead the company’s education vertical, expanding its work across the full education ecosystem.
Strengthening Leadership Across the Education Ecosystem
Education has been a core focus for Robots & Pencils since its earliest days. Lacy’s appointment strengthens that commitment with dedicated leadership grounded in deep sector knowledge, platform expertise, and enterprise execution.
“Jason has a deep passion for education and brings decades of experience delivering outstanding results and outcomes for education clients,” said Len Pagon, CEO of Robots & Pencils. “He brings a rare blend of market leadership, real education expertise, and technical depth. He understands how institutions help students and enable faculty, how platforms scale, and how to turn AI strategy into production systems that perform. Education leaders need partners who combine ambition with discipline. Jason brings that balance, and it elevates what we can deliver across the sector.”
Lacy’s 30 years of experience spans global partnerships, enterprise technology strategy, and revenue-aligned growth. Most recently, as Senior Vice President of Global Partnerships at Learnosity, he led a worldwide ecosystem representing a significant share of organizational revenue across assessment, learning technology, and workforce certification platforms. He built and scaled partner programs, advanced complex integrations, and aligned commercial strategy with product innovation to drive sustained growth.
Earlier in his career, Lacy expanded strategic partnership practices at Unicon and strengthened relationships across major publishers, platforms, and institutional stakeholders. With a foundation in software engineering and system architecture, he evaluates integration pathways with precision and translates complex technical capabilities into enterprise value. He has advised institutions across the public, private, and online sectors, as well as edtech platforms and publishing organizations, on digital transformation, ecosystem strategy, and outcome-based modernization initiatives.
Focused Leadership for AI and Cloud Modernization in Education
In his role, Lacy will guide education clients as they modernize legacy infrastructure, strengthen data foundations, and operationalize artificial intelligence within accountable, enterprise-grade environments.
“Education institutions carry both public trust and generational responsibility,” said Lacy. “Innovation must move forward, but it must do so responsibly. The right technology strengthens operational performance while keeping student success at the center.”
AI Patterns Accelerate Responsible AI Adoption in Education
Robots & Pencils’ AI Pattern framework makes this possible with velocity and impact. This structured, repeatable solution model combines proven architecture with use-case-specific configurations to compress delivery timelines from months to weeks.
“When I looked at Robots & Pencils’ AI Pattern approach, I immediately saw its relevance for education,” said Lacy. “Education leaders operate within rigorous governance and risk frameworks. They need progress they can trust. AI Patterns provide a disciplined, repeatable foundation that allows institutions to move quickly on targeted priorities while maintaining control.”
Robots & Pencils views the early traction institutions gain through AI Patterns as the catalyst for sustained partnership, enabling them to expand AI capabilities through phased modernization strategies that advance enrollment growth, student retention, student success analytics, academic operations, enterprise data strategy, and secure adoption.
As an AWS Advanced Tier Services Partner and AWS Pattern Partner, Robots & Pencils plays a guiding role in defining how enterprise AI systems are productized and scaled. That experience strengthens the company’s ability to bring structured, production-ready AI systems to complex institutional environments.
A Longstanding Commitment to Education Innovation
“Everything we build begins with the belief that the best AI systems emerge when engineering discipline meets human-centered design,” said Pagon. “Education sits at the intersection of mission and modernization. With Jason leading our education vertical, we are strengthening our ability to help institutions scale AI responsibly while staying true to the people they serve.”
Robots & Pencils has partnered with education institutions and platforms for well over a decade, modernizing legacy systems, launching cloud-native products, and building digital experiences used by millions of learners. Lacy’s appointment reinforces the company’s long-term investment in education and its commitment to helping leaders translate AI ambition into secure, scalable systems that perform in production.
Education continues to evolve. Robots & Pencils is building the AI and cloud foundations that enable that progress, with Jason Lacy helping guide the way.