What university risk frameworks reveal — and leave unsaid.
In a previous article, “The End of Strategy as Usual”, it was argued that many universities still treat AI as just another emerging technology, leaving flagship five-year strategies straining to keep up while the hard questions are smoothed over.
If culture eats strategy for breakfast, then without a proper risk register it devours governance for lunch and reputation for dinner. So instead of inspecting the architecture, this time attention turns to the plumbing: publicly available, risk-related artefacts dated 2024/25. A convenience sample covering 37 institutions across 12 countries was compiled. Unlike the earlier strategy study, which focused on five English-speaking nations, the lens has been widened beyond the Anglosphere to map approaches to generative-AI risk. As before, the interest is in patterns of behaviour rather than naming names, so findings are anonymised.
Nine AI-related risk domains one might expect an enterprise approach to cover were defined: integrity, data/cyber, curriculum, regulatory, reputational, operational procedures, business model, and two societal buckets (work transformation and wider civilisational disruption).
In examining the artefacts against each category, the following classification system was used:
- Explicit: Institutional documents either named generative AI as a risk or bound the university to specific controls, approvals, ownership, or audits (e.g., adding AI to the enterprise risk register, requiring privacy and security impact assessments for new AI systems, or including AI in formal risk taxonomies)
- Implicit: Official guidance acknowledged AI concerns and prescribed behaviours, but no evidence was found linking it to enterprise risk treatment (noting that Implicit is not another word for inaction, and that Explicit statements may sit behind the firewall)
- Not found: No evidence of explicit or implicit engagement across available documents
Terminology does vary across national systems, and some detail may have been lost in translation for non-English institutions, but the international standard for risk management (ISO 31000) provides a common reference point and gives some confidence in the approach.
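To make the classification concrete, here is a minimal sketch in Python of how such an X-ray might be tabulated. The domain labels follow the nine categories above; the institution codes, ratings, and tally helper are purely illustrative and are not the study’s actual method or data.

```python
from collections import Counter

# The nine risk domains used in the X-ray (as listed above).
DOMAINS = [
    "integrity", "data/cyber", "curriculum", "regulatory", "reputational",
    "operational procedures", "business model",
    "work transformation", "civilisational disruption",
]
# The three-level classification applied to each domain.
LEVELS = ("explicit", "implicit", "not found")

# One record per institution: domain -> classification (codes are invented).
sample = {
    "inst_01": {"integrity": "explicit", "business model": "not found"},
    "inst_02": {"integrity": "implicit", "business model": "implicit"},
}

def tally(records, domain):
    """Count explicit / implicit / not-found ratings for one domain.

    Domains with no recorded evidence default to "not found".
    """
    counts = Counter(ratings.get(domain, "not found") for ratings in records.values())
    return {level: counts.get(level, 0) for level in LEVELS}

for domain in DOMAINS:
    print(domain, tally(sample, domain))
# e.g. business model {'explicit': 0, 'implicit': 1, 'not found': 1}
```

The value of even a toy tabulation like this is that it forces the same three-level judgement across all nine domains for every institution, which is what makes the cross-institution pattern visible.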
The chart derived from this Risk X-Ray exercise sits alongside this paper. The headlines:
- Managing What’s Measurable, Missing What Matters
Recalling ISO 31000’s definition of risk as the effect of uncertainty on objectives, the X-ray suggests that AI-related uncertainty in global higher education is being managed where controls are obvious, not where objectives are most exposed. Only one Dutch university in the 37 offered a proactive risk narrative that clearly argued AI touches objectives as well as controls (though even there, the business-model threat was implicit, not explicit).
- Coalface Leads; C-Suite and Board Follow
Bottom-up practice is doing most of the heavy lifting. At the coalface, strong explicit activity around integrity, curriculum, data/cyber and operational procedures is evident. This was most notable in the Australian sample — likely reflecting early sector focus (including TEQSA requests in 2024) and the maturity of existing data/cyber mitigations that extend naturally into broader operational controls. US institutions were the outlier here — not because these risks were ignored, but because there was more evidence of top-down direction.
- Top-Down Blind Spots: Reputation, Business Model, Society
Where one might expect executive leadership and corporate governance to lean in — reputation, business model, and the two societal buckets — the record is thinner. On work transformation, nearly half of the institutions were silent; only 8/37 called it out explicitly. On wider civilisational risk, 9/37 were explicit, but only 5/37 were explicit in both societal buckets. Nine made no explicit or implicit engagement with either. Moreover, the civilisational lens was relatively narrow — mostly rights, equity, and environmental impacts — which, it could be argued, is only part of the disruption story.
- Strategic Unknowns Unrated
Extending the point above, big uncertainties that leadership and governance should be considering — labour substitution, content commoditisation, credential relevance — are not being explicitly captured as institutional risks with Key Risk Indicators (KRIs), appetite statements, and mitigation mapping.
- Reputation Assumed; Business Model Unpriced
The relationship between the two societal buckets and reputation/business model is revealing. Not a single institution explicitly named reputational risk from AI, yet the threat was implicit everywhere, perhaps because many see all risk as reputational by default and feel no need to spell it out. The disconnect is starker for business model: of the 37 institutions, only one (UK) explicitly called AI a business-model threat. It also flagged the wider civilisational risk, though, strikingly, no specific concern about work transformation was found in its public documents. In eight other cases an implicit concern could be discerned; that leaves 28 of 37 (roughly three-quarters) silent on an existential risk to how they currently operate.
From this initial analysis, it appears that many institutions exhibit systemic blockages, most obvious where bottom-up practice meets top-down governance. The anonymised sample suggests four choke points:
- Taxonomy & appetite gap. Few have AI-specific risk appetite statements tied to strategic objectives (student demand, research position, margin, rankings). Without an appetite anchor, AI remains a compliance topic, not an enterprise exposure.
- Ownership gap. AI risks often sit with operational leads (e.g., CIO) rather than a single enterprise owner (Provost/DVC, COO) empowered to trade off teaching, research, and finances, so risks stay local.
- Escalation gap. Corporate and academic governance travel on parallel tracks. Coalface items (plagiarism, privacy) are managed locally, but existential scenarios — business-model pressure, demand shocks, capability substitution — rarely reach the key risk register with a rating, owner, KRIs, and mitigations.
- Assurance bias. Governance and risk committees reward what can be audited (DPIAs, policy conformance) and neglect forward scenarios (e.g., employer expectations outpacing degree redesign). This biases the institution toward tactical certainty and away from the strategic uncertainty ISO 31000 asks them to confront.
Closing these gaps points to three practical moves; a sketch of how one might look as a risk-register entry follows the list.
- Appetite statements with teeth. Add AI-specific appetite thresholds tied to objectives (e.g., acceptable volatility in admissions mix from AI-enabled alternatives; tolerance for automation in student-facing services; thresholds for research-integrity incidents).
- Enterprise ownership & KRIs. Assign a single executive owner for work transformation and business-model risk, with KRIs (e.g., % assessments redesigned; cost-to-serve trends; employer-demand signals).
- Mandatory escalation of scenarios. Require at least two AI “existential” scenarios per year on the key risk register — board-rated, with mitigations and testable actions (reskilling programs, program-portfolio redesign, pricing experiments).
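By way of illustration, the sketch below shows how a single register entry might bind an enterprise owner, an appetite threshold, and KRIs together. Every field name, threshold, and value is hypothetical, not drawn from any institution’s documents.

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """A hypothetical enterprise risk-register record for an AI exposure."""
    risk_id: str
    title: str
    owner: str                    # single accountable executive
    appetite_statement: str       # threshold tied to a strategic objective
    board_rating: str             # e.g. "high" / "medium" / "low"
    kris: dict = field(default_factory=dict)      # KRI name -> current value
    mitigations: list = field(default_factory=list)

# Illustrative entry for AI business-model risk.
ai_business_model = RiskRegisterEntry(
    risk_id="ENT-AI-01",
    title="AI-driven erosion of the current business model",
    owner="Provost / DVC",
    appetite_statement=("No more than a set percentage of unplanned decline in "
                        "admissions mix attributable to AI-enabled alternatives per year"),
    board_rating="high",
    kris={
        "pct_assessments_redesigned": 0.32,
        "cost_to_serve_trend_pct_yoy": -2.0,
        "employer_demand_signal_index": 0.71,
    },
    mitigations=["reskilling program", "program-portfolio redesign", "pricing experiments"],
)

print(ai_business_model.owner, ai_business_model.kris["pct_assessments_redesigned"])
```

The design point is simply that the appetite threshold and the KRIs live in the same record as the accountable owner, which is exactly the linkage the choke points above suggest is missing.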
This short study shows substantial AI risk management activity, especially where tools intersect with processes. What’s missing is enterprise risk management of AI: connecting uncertainty to the objectives that drive universities. When this connection is made, culture, risk appetite, and strategy stop working at cross-purposes.
The data reveals a sector managing the measurable while missing what matters most — a pattern that leaves institutions vulnerable to the very disruptions they should be preparing for. The path forward requires bridging the gap between operational excellence and strategic foresight, ensuring that AI risk management serves institutional objectives rather than merely checking compliance boxes.