This article is part of a three-part series examining why AI adoption stalls in higher education and what senior leaders must address to restore momentum. Each article stands alone. Reading the full series is recommended.
Part 1: The Intelligence Leak | Part 3: The Brittle System
Professional Identity, Resistance, and the Power Shift AI Creates
Diane didn’t wait for permission. She couldn’t afford to.
As Director of Advising, she had been told her department needed to find ways to absorb the impact of recent turnover, and leadership had suggested AI as a possible direction. But no tool was specified, and no governance ever reached her desk.
So her staff did what capable people do in a vacuum: they improvised. They started using a free LLM to draft student appointment summaries. It worked well until staff began uploading degree audits and academic plans. Diane recognized the security risk immediately. She also recognized that the university’s official AI policy, still in draft, was not going to arrive in time to help her.
Tired of waiting, Diane used her weekend to write her own policy. It was just one page and defined what could be uploaded, what required a human to double-check, and who to call if the tool produced an error.
Six months later, her document was the actual operating standard for her advising staff. The university's 37-page policy was still sitting in committee, in draft.
This is what AI adoption in higher education looks like when governance does not keep pace with operational reality.
It is easy to view Diane’s initiative as a simple win: a director filling a leadership vacuum to keep her department safe. However, while Diane was solving the governance problem, her senior staff were reacting to a different reality. These advisors began quietly slowing the AI pilot to a crawl, not because they wanted to be difficult, but because the tool threatened the value of the specialized expertise they had built over decades. Without an alternative role that turned them into ‘architects’ of the system, they protected their professional value by highlighting each and every edge case or exception the AI couldn’t handle, ensuring the tool remained too ‘risky’ to operate without them.
Why Adoption Fails at the Departmental Level
AI adoption in higher education rarely dies in a boardroom. It dies in the registrar’s office, the advising center, the financial aid office. Pilots are almost never formally rejected. They simply fade.
When a tool is introduced without a clear redesign of the workflow around it, usage becomes uneven. A few early adopters find value. The rest quietly route around the tool. By the time leadership reviews usage metrics, the adoption is a ghost. Logins may be high because of a mandate, but actual impact on daily work is negligible.
Leadership often describes this as change management friction or fear of technology. That is the wrong diagnosis. And the wrong diagnosis produces the wrong response.
Research from Frontiers in Education (2025) found that concern about AI ethics does not reliably predict whether faculty actually engage with AI tools, largely because most lack the means to critically evaluate AI-generated outputs. When people cannot assess whether the AI is right, avoidance is the rational response. They have not been given a reason to trust it.
That trust deficit plays out differently depending on where someone sits in the institution. For faculty, it is an epistemological problem. For senior administrative staff, it is an existential one.
When AI Expertise Becomes a Threat to Professional Identity
In many universities, power is held by those who know the rules: the exceptions, the workarounds, the edge cases that never made it into the policy manual because the only person who fully understood them was the one who created them.
For decades, this has been the primary currency of administrative authority in higher education. The gatekeeper holds informal power precisely because what they know is scarce, undocumented, and difficult to transfer.
When a junior advisor with a well-prompted AI can navigate a complex academic plan as accurately as a 20-year veteran, the social architecture of the advising center doesn't adapt gradually. It loses its foundation.
That architecture depends on scarcity, and AI eliminates the scarcity. The World Economic Forum (2025) identifies this emerging class of displaced knowledge workers as the AI Precariat: staff facing chronic insecurity and identity loss as their specialized roles are undercut by automation.
The numbers are moving faster than most senior administrators realize. Sixty-six percent of enterprises are already reducing entry-level hiring specifically because of AI, and 42% of employers believe most entry-level white-collar positions could disappear within five years. Higher education’s administrative workforce sits precisely in the crosshairs of that projection. These are specialized, knowledge-intensive, relationship-dependent roles. They are not safe from this.
| Stat | Finding | Detail |
| 66% | Entry-level hiring reduction | Share of enterprises reducing entry-level hiring specifically due to AI restructuring. |
| 42% | Positions projected to vanish | Share of employers who believe most entry-level white-collar positions could disappear within five years. |
| 88% | Expansion expected, anxiety rising | Share of higher education faculty and administrators who expect institutional AI use to increase over the next two years. Concern about AI-related role elimination has doubled year over year. |
The Quiet Saboteur: What No One Will Tell You
Your most resistant senior administrators are not afraid of the technology. They are afraid of what the technology reveals.
Consider what it means to spend nearly two decades becoming indispensable in an institution that moves slowly. The winning strategy was to be the person who understood how the system actually worked. The colleague people called when policy language became unclear. The person whose judgment translated written rules into workable decisions. Someone the institution had, without ever realizing it, built workflows around.
This was not laziness or territoriality. It was how the institution rewarded people. Longevity plus accumulated knowledge equaled authority. And authority, in higher education’s flattened salary structures, was often the only real compensation available. Salary bands in administrative higher education are often tied to the complexity and specialization of the role. The registrar who knows the exceptions is classified and paid differently than the one who processes straightforward cases. Their compensation and title rest on the same premise: the knowledge they hold is scarce and difficult to transfer, and the institution depends on it.
Year three: you discover the workaround for the transfer credit edge case no one else knows.
Year seven: you are the person they call when something breaks.
Year twelve: your institutional memory earns you a seat in rooms your title was never meant to enter.
Year sixteen: you are the policy, in every practical sense that matters.
Year nineteen: a junior staff member sits down with an AI and gets the same answer you would have given, in four seconds.
For someone whose professional identity is built on being the expert in the room, that kind of displacement doesn’t register as a career setback. It lands as something closer to erasure.
And here is the detail that makes it genuinely uncomfortable: some of that legacy knowledge, when the AI replicates it, turns out not to have been sophisticated governance wisdom. Some of the exceptions being gatekept for two decades were never actually correct. They were just unchallenged, because only one person fully understood them, and that person had every incentive to keep it that way.
AI does not just democratize the knowledge. In some cases, it audits it. And that audit can be brutal for someone who built a career on being the authority.
Your senior staff are not going to say they are afraid the AI will make their hard-earned expertise look common. But that anxiety is real. When a tool can perform in seconds what a veteran staffer spent decades mastering, it creates a crisis of professional identity.
Pilots often stall because the people expected to run them are protecting a lifetime of professional equity. They are using the tools they have left to remain indispensable, pointing out every tiny policy exception and procedural hurdle that AI isn’t yet “trusted” to handle.
When Resistance Hides Inside the AI Configuration
Diane improvised in good faith. Raymond did something different.
Raymond had nineteen years in the registrar’s office. He knew the exception credit process the way a watchmaker knows a movement. Not just what the parts did, but why they were arranged the way they were, and what happened when someone who did not understand that arrangement tried to change it.
When the AI degree-audit pilot launched, Raymond was the obvious choice to help configure the exception rules. He was cooperative. He attended every implementation meeting. He flagged edge cases the vendor’s team had not considered. Leadership took his involvement as confirmation that senior staff were bought in.
Raymond configured the exception logic to route any non-standard credit scenario to a human reviewer before the AI could resolve it. Transfer credits. AP overrides. Co-enrollment arrangements. Prior learning assessments. These cases were complex, he explained. The AI could not be trusted with them yet. His threshold flagged 40% of all degree audits for manual review.
The actual institutional policy, had anyone cross-referenced it, required human review on roughly 8%.
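To see how little code it takes to rebuild a gatekeeper, consider a minimal sketch of exception routing. Everything here is hypothetical: the category names, the `route_audit` function, and the split between the policy list and the configured list are illustrative, not taken from any real degree-audit product or from this institution's actual rules.

```python
# Illustrative sketch: how gatekeeping hides inside exception routing.
# Every name and category below is hypothetical.

# What the written policy requires: human review only for a narrow set of
# genuinely ambiguous cases (roughly 8% of audit volume).
POLICY_REVIEW_CATEGORIES = {
    "contested_transfer_equivalency",
    "prior_learning_without_rubric",
}

# What Raymond configured: any non-standard scenario escalates to a human,
# which quietly routes roughly 40% of all audits into his queue.
CONFIGURED_REVIEW_CATEGORIES = POLICY_REVIEW_CATEGORIES | {
    "transfer_credit",
    "ap_override",
    "co_enrollment",
    "prior_learning_assessment",
}

def route_audit(category: str, review_categories: set[str]) -> str:
    """Send a degree audit to a human reviewer or let the AI resolve it."""
    return "human_review" if category in review_categories else "ai_resolve"
```

Note that no single line of the rule looks wrong in isolation. The distortion lives entirely in how broad the configured category list is, which is exactly the kind of thing a utilization dashboard never surfaces.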
Leadership looked at the dashboard and saw what they expected: high AI utilization, appropriate human oversight, senior staff engaged with the process.
What they were actually looking at was Raymond, rebuilt in code. He had not resisted the AI. He had become its gatekeeper. His queue was full. His expertise was indispensable. And because the configuration lived in a system only he fully understood, no one thought to ask whether the threshold was right. Only whether Raymond had approved it.
He had.
This is the version of resistance that never shows up in adoption metrics. Raymond’s department showed 100% AI utilization. His pilot was considered a success. The Accountability Vacuum does not always look like failure. Sometimes it looks exactly like what leadership hoped to see.
From Gatekeeper to Architect
There is an alternative to both of these outcomes, but it requires leadership to move first. In engagements across higher education, Robots & Pencils has found that the institutions making the fastest progress on AI adoption are not the ones with the most sophisticated tools or the strictest policies. They are the ones that looked at what their staff were doing outside the sanctioned path, treated it as data about where that path was failing, and gave their most experienced people a meaningful role in redesigning it.

Unauthorized AI use tells you exactly what the institution has not yet solved. Banning the tool addresses the symptom while leaving the underlying need completely intact. The question is not whether your staff are using AI. They are. The question is whether the institution is learning anything from how.
The person who spent decades learning every exception, every workaround, every edge case that the student information system cannot handle: that person is not your AI problem. That person is your answer to it. They are the only one in the building who knows where the institutional logic actually lives.
The person who knows where the bodies are buried is the only person qualified to tell the AI where not to dig.
The difference between a registrar with nineteen years in the office who quietly rebuilds their gatekeeping function inside your AI pilot and one who becomes its most rigorous auditor is not temperament. It is whether the institution made them an offer worth accepting.
This is a genuine repositioning of professional value: moving from a knowledge holder to a knowledge architect. Rather than maintaining individual indispensability through daily tasks, the institution is asking them to make their expertise permanent by building it directly into the institutional framework.
That is a different kind of legacy. And for the right person, it is a more compelling one.
But the timing is critical. If the institution waits until AI has already rendered a role redundant to propose a new path, the offer will likely be perceived as an afterthought. In higher education, where titles change slowly and salary bands are narrow, seniority is one of the few available signals of institutional standing. The transition needs to be presented as a proactive investment in expertise, not a reactive attempt to find someone a new place.
The challenge for leadership is to redesign a reward system that has spent decades favoring individual gatekeeping, so that building expertise into the shared system confers the standing that holding it apart once did.
Shadow AI as a Diagnostic
If 70% of a department is using an unauthorized tool, that is not a discipline problem. It is a map of where the sanctioned path failed them. Reading that map honestly is how institutions move past the Accountability Vacuum. But getting staff onto the sanctioned path is only half the problem. What happens after they get there is where most institutions stop paying attention.
Punch List: Navigating the Power Shift
| # | Action | Owner / Timeline |
| 1 | Redesign the expert role before the pilot launches: Formally shift senior staff in the registrar’s office, advising center, and financial aid office from information gatekeepers to algorithmic auditors. Make the transition visible, titled, and compensated accordingly. Not a consolation prize. | Deans + HR – before AI rollout |
| 2 | Audit your AI configurations: If a senior staff member helped configure your AI tool, have someone independent verify that the exception thresholds match actual institutional policy. Not what they remember the policy to be. The policy. (A minimal audit sketch follows this list.) | Provost Office + Registrar – within 60 days |
| 3 | Draft the escalation protocol: Create a clear, published answer to the question every staff member actually has: if the AI gives a student the wrong information, who is responsible, and who has the authority to correct it? | Provost Office – within 30 days |
| 4 | Run friction interviews: Ask staff in each functional area directly: what part of your job does the approved AI tool make harder? That answer tells you where the resistance lives before it calcifies into something you cannot fix. | Functional leaders – quarterly |
| 5 | Formalize the workarounds: Identify the Diane-style departmental standards already operating across your colleges. Integrate them into central governance. They are solving problems your official policy has not addressed yet. | Academic Affairs – within 60 days |
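For punch-list item 2, the independent check can start as a one-afternoon script. A minimal sketch, assuming the AI tool can export a log of routing decisions; the file name `decisions.csv`, its column names, and the policy category list are all hypothetical and would need to match whatever export your vendor actually provides.

```python
# Configuration audit sketch: compare the deployed manual-review rate against
# the rate the written policy actually requires. All file and column names
# are hypothetical; adapt them to your tool's real export.
import csv

# Categories the written policy requires a human to review (hypothetical list).
POLICY_REVIEW_CATEGORIES = {
    "contested_transfer_equivalency",
    "prior_learning_without_rubric",
}

total = flagged = policy_required = 0
with open("decisions.csv", newline="") as f:
    for row in csv.DictReader(f):  # expects columns: audit_id, category, route
        total += 1
        if row["route"] == "human_review":
            flagged += 1
        if row["category"] in POLICY_REVIEW_CATEGORIES:
            policy_required += 1

print(f"Manual-review rate as configured:   {flagged / total:.0%}")
print(f"Manual-review rate policy requires: {policy_required / total:.0%}")
# A gap like 40% vs. 8% is not proof of bad faith, but it is precisely the
# discrepancy an independent reviewer should have to explain.
```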
Continue to Part Three: The Brittle System
The pace of AI change can feel relentless, with tools, processes, and practices evolving almost weekly. We help organizations navigate this landscape with clarity, balancing experimentation with governance and turning AI's potential into practical, measurable outcomes. If you're looking to explore how AI can work inside your organization, not just in theory but in practice, we'd love to be a partner in that journey. Request an AI briefing.
Key Takeaways
AI adoption often fails at the departmental level, not the leadership level.
Most AI initiatives do not fail through formal rejection. They gradually lose momentum when daily workflows are not redesigned around the new tools, leading staff to quietly route around them.
Resistance to AI is often about professional identity, not technology.
Senior administrative staff may slow or resist AI initiatives because the tools threaten the specialized expertise and institutional authority they have built over decades.
Institutional power in higher education is often tied to undocumented expertise.
Many administrative roles derive influence from knowing complex rules, exceptions, and workarounds. AI can rapidly replicate this knowledge, disrupting long-standing social and professional hierarchies.
AI resistance can hide inside the system itself.
Staff involved in configuring AI tools may unintentionally or deliberately embed gatekeeping logic into the system, preserving their role while appearing to support adoption.
Successful AI adoption requires redefining expert roles.
Institutions that move fastest reposition experienced staff from knowledge gatekeepers to system architects and algorithmic auditors, embedding their expertise directly into the institutional infrastructure.
Frequently Asked Questions
1. Why do AI pilots frequently stall within departments?
Adoption often slows when the introduction of AI tools does not include a redesign of the underlying workflow. Without clear operational changes, only a few early adopters use the tool while others continue existing processes.
2. Is resistance to AI primarily driven by fear of the technology?
Not usually. Resistance more often reflects concern about professional displacement or loss of authority, especially for staff whose roles are built on specialized institutional knowledge.
3. What is the “AI Precariat”?
The term describes knowledge workers who face growing insecurity as AI systems replicate or automate expertise that once required years of specialized experience.
4. How can institutions prevent hidden resistance inside AI systems?
Organizations should audit AI configurations independently to ensure system rules reflect official policy rather than individual interpretations or legacy workarounds.
5. What role should experienced staff play in an AI-enabled institution?
Instead of guarding knowledge through manual processes, senior experts can act as architects and auditors who encode institutional expertise into AI systems and oversee their accuracy and governance.
